
Prepared by

Lockheed Martin

1140 Kildaire Farm Road

Cary, NC 27511

DoD Virtual World Framework

Virtual Worlds and Gaming

Current State, Future Vision, and Architecture

Prepared for

Office of the Under Secretary of Defense for Personnel and Readiness

“We need a giant leap forward in our simulated training environment for small units in ground combat …to replicate to the degree practical using modern simulation, combat scenarios that will test our small units…”

Gen J. M. Mattis, USMC

Commander, U.S. Joint Forces Command

Lockheed Martin Proprietary Information i

Contents

1.0 Executive Summary .......... 1
2.0 Why a Virtual World Framework .......... 3
2.1 Where We Are .......... 4
2.2 Collaborative Virtual World Platforms .......... 4
2.3 Critical Mass .......... 4
2.4 Requires Significant Investment .......... 4
2.5 Visual and Physical Fidelity .......... 4
2.6 Complex to Use .......... 5
2.7 Interoperability .......... 5
2.8 Distribution and Security .......... 5
2.9 Device Scalability .......... 5
2.10 Negative Training .......... 6
2.11 Flexible Business Models .......... 7
3.0 Technical Landscape .......... 8
3.1 Processing Power .......... 9
3.2 Software and Algorithms .......... 9
3.3 Power Storage and Management .......... 10
3.4 Display Technologies .......... 11
3.5 The Information Utility .......... 11
3.6 Sensors .......... 12
3.7 The Next Cycle .......... 13
4.0 What Is Required .......... 16
4.1 Open Source .......... 16
4.2 Huge Installed Base .......... 17
4.3 Platform Scalability .......... 18
4.4 Distribution .......... 18
4.5 Security .......... 18
4.6 Utilize and Define Standards .......... 19
4.7 Future Proof .......... 19
4.8 Business Models .......... 19
5.0 The Web Is the Platform .......... 20
5.1 JavaScript .......... 20
5.2 The Lively Kernel .......... 21
5.3 WebGL: Shader Focused Graphics .......... 22
5.4 XML .......... 23
5.5 COLLADA: A Real 3D Standard for Data Interchange .......... 24
5.6 XMPP (Jabber) .......... 24
5.7 HTML5: Next Generation of the Web .......... 25
5.8 Replicated Computation, Client/Server, and Streaming Data Types .......... 26
6.0 Proposal .......... 28
6.1 Standards .......... 28
6.2 Components .......... 28
6.3 Model/View .......... 28
6.4 Structure .......... 28
6.5 Synchronization .......... 28
6.6 Security .......... 29
6.7 Cloud Computing .......... 29
6.8 Scalability .......... 29
6.9 Mobile Access .......... 29
6.10 Applications .......... 29
6.11 After Action Review .......... 29
6.12 Adoption .......... 29
6.13 Future .......... 30
7.0 Architecture .......... 31
7.1 Core Concepts .......... 31
7.1.1 Participant .......... 31
7.1.2 Component .......... 31
7.1.3 Bobble .......... 31
7.1.4 Model/View/Controller .......... 32
7.1.5 Deterministic Computation .......... 32
7.1.6 Time Stream .......... 33
7.1.7 Reflector .......... 33
7.1.8 Replicated Computation Model or TeaTime .......... 33
7.1.9 Client/Server .......... 33
7.1.10 Streaming Object .......... 34
8.0 VWF System Overview .......... 35
8.1 Bobbles and Replicated Computation .......... 35
8.2 Replicated Bobbles .......... 39
8.3 VWF Messages .......... 40
8.4 Timing Is Everything .......... 40
8.5 Bobble Time .......... 41
8.6 The VWF Reflector/Sequencer .......... 42
8.7 The VWF Controller .......... 43
8.8 VWF Message Execution .......... 43
8.9 Replicated Message Execution .......... 44
8.10 Starting, Joining, and Participating .......... 45
8.11 Starting Up .......... 45
8.12 Joining .......... 46
8.13 Adding Users .......... 47
8.14 Participating .......... 49
8.15 Nice Side Effects .......... 50
8.16 Components .......... 50
8.17 The Future of VWF Components .......... 50
9.0 Sample Implementation .......... 52
9.1 index.html .......... 53
9.2 vwf.js .......... 57
9.3 vwf-model.js .......... 72
9.4 vwf-model-javascript.js .......... 75
9.5 vwf-model-scene.js .......... 78
9.6 vwf-view.js .......... 81
9.7 vwf-view-html.js .......... 84
10.0 Sample Application .......... 86
11.0 Data and Message Formats .......... 92
11.1 Bobble and Component .......... 93
11.2 Component API: API Specification for Intra-Model Interactions .......... 94
11.3 Model Stimulus API .......... 94
11.4 View Stimulus API .......... 95
11.5 Model and View Response API .......... 95
11.6 Reflection Server API .......... 96
12.0 Next Steps .......... 97
12.1 Development .......... 97
12.2 VWF Marketing .......... 98
13.0 Conclusion .......... 99
14.0 References .......... 100

Virtual World Framework

Virtual Worlds and Gaming - Current State, Future Vision, and Architecture

1.0 Executive Summary

The present document addresses the need for a Virtual World Framework (VWF), provides a list of requirements for such a system, and proposes an approach toward its creation in the form of a draft architecture design.

The overarching goal of this document is to clearly define an approach to creating a single, secure, and sustainable DoD virtual world training platform. This platform must allow the DoD services sufficient flexibility to design and build the specific pieces of a larger federated system, but must allow these pieces to dynamically interoperate as required. Further, this platform must be designed for delivery via existing computer platforms and the new generation of mobile devices, and there must be a clear path toward integration with live augmented reality training. Finally, this platform must be sufficiently open and extensible that it is not seen as yet another technological dead-end. Virtual world platforms have an extremely short shelf life, so it is essential to choose a path that ensures significant pressure toward the continued evolution of the system that is created.

Key elements of a successful platform are:

Training available 24/7, via the global information grid, across the spectrum of training audiences: from individual home station users and small units to large force Combatant Command users.

Possess a high level of realism. The platform must take advantage of modern graphics capabilities and offer a clear path to ensure that the system can integrate new advances in the area as they become available. Further, the platform must provide realistic object interactions, including physical models and artificially intelligent characters and societies.

Use common applications, references, and operational capabilities. The platform must allow integration of existing key applications as well as new applications that will be developed. This includes both 2D and 3D applications and support for both server-based and client-based execution. A common shared ontological/behavioral model will need to be explored.

Rapidly scalable and composable by the training user without the need for specialized skills.

The platform must be accessible to and extensible by a wide range of users and third-party developers, and there must be a clear model for developing and integrating easy-to-use tools.

Rapidly modifiable to replicate new operational capabilities or changes in the real operating environment, and to quickly support mission rehearsals. The platform must have a clear content pipeline that ensures a simple path for creating new content and importing existing content, and a clear path for integrating real-time data acquisition and deployment.

There must be a well understood path toward enabling a two-way interface between live and virtual training systems and their virtual representations in the federated virtual world.

Operations in the federated virtual world and live and virtual training systems will need to be synchronized in real time so as to securely enable stimulation of sensors, visual replications, and interactions between platforms operating within and outside of the federated virtual world, across the spectrum of training environments and systems. Support for future augmented reality systems must be considered.

Support information operations, cyberspace, nuclear or catastrophic warfare, space, civil affairs, language and culture, and other soft skills training requirements across the globe.

The platform must be open and extensible in virtually every dimension required for the vast complexity of training the next generations of soldiers.

Be interoperable with interagency partners and multinational capabilities in order to train to a comprehensive approach. The platform must enable a number of orthogonal missions while maintaining a common extensible framework. In short, if two groups can interoperate in the real world, they must be able to interoperate in the virtual domain.

There are a number of key technical capabilities that the platform architecture must address from the start. These are virtually impossible to add to an existing platform as they are deeply integral to how the system operates.

Support for detailed after action review. “Key-frame” world states and all significant state modifications and data interchange (user actions, audio, and video) must be archivable and streamable. This includes 3D world interactions, but should also include document management and document and world versioning (storage is basically free). A “rewind” capability would be optimal, but is likely computationally expensive, as most world modifications are “lossy” and cannot easily be revoked.

Support for replicated computation and simulation. This allows for very complex direct user and physical interactions that would otherwise place a significant load on a server.

Support for client/server interactions. These are typical “game” interactions where precise interactions are not required.

Support for bi-directional data streaming interactions (audio, video, VNC). This allows sharing of legacy applications, as well as enhanced user-to-user communication modalities.
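The replicated computation capability listed above is worth a brief illustration. As a minimal sketch (all names are hypothetical, not the actual VWF API), the essential division of labor is this: a reflector holds no world state and performs no simulation; it only stamps each incoming message with a sequence number and broadcasts it, while every replica executes the identical ordered message stream and therefore arrives at identical state without the server ever computing the simulation itself:

```javascript
// Illustrative sketch only: a reflector orders and echoes messages.
// It holds no world state and runs none of the simulation.
class Reflector {
  constructor() {
    this.clients = [];
    this.sequence = 0;
  }
  join(client) { this.clients.push(client); }
  // Stamp the message with a global order, then broadcast it so every
  // replica executes the same stream in the same order.
  dispatch(message) {
    const stamped = { ...message, seq: this.sequence++ };
    for (const client of this.clients) client.execute(stamped);
  }
}

// Each replica applies messages deterministically; identical input
// streams yield identical world state on every participant's machine.
class Replica {
  constructor() {
    this.state = { x: 0 };
    this.log = [];
  }
  execute(message) {
    this.log.push(message.seq);
    if (message.action === "move") this.state.x += message.dx;
  }
}
```

The design point is that the expensive computation happens on every client rather than on a central server, which is what allows very complex direct user and physical interactions without a corresponding server load.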


2.0 Why a Virtual World Framework

Captain Chesley Sullenberger may be an extraordinary pilot, but it wasn’t just his heroism that brought Flight 1549 down safely in the Hudson River. It was the rigorous simulation training that’s an essential aspect of the U.S. aviation system. Pilots have to fly for years before they can command an airliner, and even experienced pilots must routinely train in simulators and pass “check rides” at least once a year under the supervision of Federal Aviation Administration inspectors. This training focuses on extreme scenarios, such as the one the US Airways crew encountered. “Pilots don’t spend their training time flying straight and level,” says airline pilot Lynn Spencer, author of Touching History: The Untold Story of the Drama That Unfolded in the Skies over America on 9/11. “In simulator training, we’re doing nothing but flying in all sorts of emergencies. Even emergencies become just another set of procedures when repeatedly trained.”

Virtual world simulations have proven time and again their value in providing real-world training in almost every dimension. The impact they have had in the realm of flight training is undeniable and immense. There is simply no other way to train to successfully land an airplane in the water.

The same will undoubtedly hold true for training in other extreme scenarios, such as the battlefield. The challenge is that though flying and landing an airplane is complex, this complexity is quickly dwarfed by even modest scenarios on the battlefield, simply because the number of degrees of freedom in the situation is so much greater. The rules of physics dominate the simulation of an aircraft, but they do not necessarily apply to what your enemy, or even what your buddies, will do next. Further, extremely high-fidelity flight simulation can be accomplished by bringing the world to the pilot sitting in a high-tech chair with a dome over it. Recreating the equivalent for a moving soldier carrying a weapon will not be as easy.

Still, it is clear that virtual worlds have an essential role to play in training the next generation of soldiers and, once a number of technical and social hurdles are overcome, they will quickly become a dominant aspect of training even for the complexity of the battlefield. If we look even further ahead, virtual worlds will not just be an essential part of training; they will immeasurably enhance the capabilities of the soldier while he is actually in battle.

A number of challenges must be met to provide an interoperable platform that crosses the lines of the various services while ensuring that their special capabilities and requirements are respected. The services have begun experimenting with and developing training capabilities in virtual worlds, but, as is appropriate at this early stage, they have been pursuing their own service-unique drivers and objectives without considering the broader requirements and opportunities that a common interoperable framework would provide. Such an interoperable platform would allow for leveraging interoperable and scalable objects, systems, and training experiences across service lines and, perhaps even more important, a number of open, competitive business models. A common VWF should lead to a more relevant training experience that is significantly more cost effective.


2.1 Where We Are

Virtual worlds have not lived up to their promise. There are areas of interesting capabilities and successful point applications to be sure, but there does not yet exist a common framework that exhibits the necessary range of capabilities that will allow it to be applied across the diverse requirements that are imposed for training and collaboration within the DoD and beyond.

Further, none of the existing platforms have a viable business model that will properly incent the huge number of participants that are required to build out the various components, applications, and systems that are needed. In short, without a significant change in approach, virtual worlds will remain a niche market with minimal impact on day-to-day preparedness and capabilities.

There are a number of key factors that must be addressed to enable the creation and adoption of a successful VWF. No such list can be complete, but the following are necessary elements of a successful end result.

2.2 Collaborative Virtual World Platforms

In traditional multi-player gaming-based virtual environments, the focus of the platform is allowing the users to interact with the world and other users with an extremely limited but extremely optimized set of capabilities. Everything is focused on low-latency, high-speed interactions that ensure a very immersive experience. The trade-off is that these systems are so focused on these dimensions that they are simply unusable for almost any other use.

By contrast, collaborative virtual worlds are focused on a high degree of customizability of the environment and high bandwidth communications between users. These are environments that are focused on work and working together – even those that are intended primarily for entertainment.

2.3 Critical Mass

This is probably the most important issue. Platform developers and content providers must see a return on investment on their efforts for a platform to be viable. Obviously, this does not necessarily rule out commercial platforms, as the iPhone and Android demonstrate; however, this is a very high bar for a small commercial business to attain. Further, larger companies are extremely cautious about engaging in such a new area without some assurance of return and control over the results, especially today, when the major growth area is in mobile devices.

Developing a new large-scale platform on the desktop is not a compelling business opportunity today.

2.4 Requires Significant Investment

Virtual world platforms are complex to create and difficult to extend. The platforms themselves are significantly more difficult to create than traditional 2D applications and operating systems, requiring significantly more robust models of network collaboration, user interaction, data streaming and management, persistence, and synchronization. For developers on these platforms, most virtual world tools require significant expertise in areas such as 3D content development, coding, physics, and client/server architectures. Though many virtual world platforms provide very good development capabilities, these tools are captive to their platform and tend to be focused on high-end developers.

2.5 Visual and Physical Fidelity

Virtual world platforms have a relatively short lifespan because they have difficulty keeping up with the extreme pace of technological innovation required by changes in the capabilities of hardware, algorithms, and devices. This is particularly true of graphics and physics. Most designers of a platform will strongly link the user interactions and object behaviors to the graphics capabilities of the game engine they have chosen. This makes updates to the system tricky at best. This is somewhat avoidable using traditional controller/model/view architectures, where the model and its behaviors are independent of the view that is expressed to the end user, though this is typically avoided for “performance” reasons.
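The model/view separation argued for here can be shown with a minimal sketch (names hypothetical): the model owns object state and behavior and merely notifies observers of changes, so the rendering layer, whether a game engine scene or a text log, can be replaced without touching the simulation logic:

```javascript
// Minimal model/view split: the model knows nothing about rendering.
class EntityModel {
  constructor() {
    this.position = { x: 0, y: 0 };
    this.observers = [];
  }
  subscribe(view) { this.observers.push(view); }
  moveBy(dx, dy) {
    this.position.x += dx;
    this.position.y += dy;
    // Views are told *what* changed, never *how* to draw it.
    for (const view of this.observers) view.onMoved(this.position);
  }
}

// Any number of views -- a 3D scene, a 2D map, a text log -- can watch
// the same model; upgrading the graphics engine replaces only this layer.
class TextView {
  constructor() { this.lines = []; }
  onMoved(pos) { this.lines.push(`entity at (${pos.x}, ${pos.y})`); }
}
```

Because the graphics engine lives entirely behind the observer interface, swapping it for a newer one does not require reworking object behaviors, which is exactly the update problem described above.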

2.6 Complex to Use

Virtual worlds must provide far easier user interaction models than currently available. Virtual worlds are difficult to navigate, and it is too hard for the user to accomplish key tasks. This is due to the additional degrees of freedom that users have available to them in moving through a 3D space, the fact that the PC platforms that run the virtual worlds were designed primarily for 2D control, and the fact that a good model of interaction design in 3D has not yet been developed. In the past, once a new user mastered moving a cursor around with a mouse on the current PC interfaces, the rest was pretty straightforward. Moving around inside of 3D space is just the beginning of the complexity involved in accomplishing a task in 3D. Games are successful because the level of complexity required rarely goes beyond moving around in the space (including jumping and crouching) and shooting. More complex interactions often limit the success of the game.

2.7 Interoperability

Content and code developed for one virtual world is almost impossible to move to another. Sometimes this is on purpose, as it is for walled-garden platforms and their business models. Even so, whether intended or not, once an object or avatar exists inside one virtual world, the level of effort to move it to another is usually similar to the effort of developing the object in the first place. This is an extremely high cost and ensures that content remains stuck behind the wall within which it was created. Of course, moving the 2D and 3D content is a trivial task compared to moving the associated behaviors and interactions. The problem goes well beyond the nature of the scripts and APIs that are used to create the behavior. Knowing when and how a behavior needs to be activated and how it will affect surrounding objects is an extremely complex problem that will be difficult to address even with a common scripting and application programming interface (API) model.

2.8 Distribution and Security

Accessing a virtual world usually requires, at minimum, the installation of a new application on a computer. Installing it in an enterprise can require getting corporate permissions, setting up proxy connections, and may require data audits to ensure the streams to and from these clients are secure and send only the information the clients are authorized to access. Installing a virtual world server behind the firewall can dramatically increase the level of complexity. These are just a few of the first order problems that need to be addressed. Managing sensitive information in a virtual world will require additional tools and capabilities, such as mixed security (where users in the same virtual world may have different levels of access to information or capabilities) and information transit (where data can easily move between virtual worlds).
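To make the "mixed security" idea concrete, consider a minimal sketch in which users sharing the same virtual world receive different views of the world state depending on their clearance. Everything here – the entity names, the clearance levels, and the filtering function – is an illustrative assumption, not part of any existing platform:

```python
# Hypothetical sketch: per-user filtering of shared world state by clearance.
# The clearance levels and entity classifications below are illustrative only.

CLEARANCE_ORDER = ["public", "sensitive", "secret"]

def visible_entities(entities, user_clearance):
    """Return only the entities this user is cleared to see."""
    max_rank = CLEARANCE_ORDER.index(user_clearance)
    return [e for e in entities
            if CLEARANCE_ORDER.index(e["classification"]) <= max_rank]

world = [
    {"id": "terrain",       "classification": "public"},
    {"id": "friendly-unit", "classification": "sensitive"},
    {"id": "intel-overlay", "classification": "secret"},
]

print([e["id"] for e in visible_entities(world, "sensitive")])
# -> ['terrain', 'friendly-unit']
```

In a real deployment such filtering would have to happen server-side, before any data crosses the wire, since a client-side filter would still expose the unauthorized data in transit.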

2.9 Device Scalability

The boundaries between devices are rapidly eroding. A typical user today has almost as much capability in the mobile device he carries in his pocket as he does on his desktop system, and is likely more reliant on the mobile device. This trend will accelerate, and it is quite possible that for many users, most of the capabilities of the desktop will be absorbed by the mobile device. Certainly in the case of soldiers in the field, the mobile device is acknowledged to be the go-to platform. This means that any virtual world solution must provide a robust capability across all of these devices. Further, as the image below shows, mobile devices are poised to eclipse desktop Internet users in 2014.

Finally, an extremely important platform in the near future will be wearable head-mounted devices. These will undoubtedly become a cornerstone of training and beyond across the DoD, and so must be considered when designing a new framework that will maintain its value into the future.

2.10 Negative Training

Current 3D gaming technology continues to evolve at an extraordinary pace, and its visual and auditory quality is rapidly approaching an intense and believable experience. As this quality improves, we must ask whether the experience is translatable to the real world in the way that flight simulation is. Just because something looks and acts real does not necessarily mean that it is. In fact, we may be fooling ourselves, and worse, fooling the soldier into believing that his training experience is actually accomplishing the goal of preparing him for the real situation. The potential problems are many. We can place a pilot into a flight simulator where virtually every aspect of the experience is near-real and faithfully reproduces the aircraft's behaviors – even those behaviors that are counter-intuitive. Placing a soldier on a virtual battlefield is quite a different issue.

First, how accurate is the simulation that is being experienced? Recreating the complex conditions that exist on a battlefield and the opportunity for exponential interactions between the various participants and components requires significant compute resources.

Second, how accurately is this information relayed to the trainee, and how accurately are his responses relayed back? Viewing the battlefield through a flat 2D screen and controlling both the trainee's motions and weapon with a mouse is likely an unsatisfactory and probably misleading experience. In the actual situation, the soldier has an unhindered 180x180 degree field of view, while the screen might provide at best a 60x40 degree field of view – roughly 1/13th of the visual area – and this ignores visual resolution and stereo (in reality you cannot compare the two; they are simply different animals).
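The field-of-view comparison above can be checked with simple planar arithmetic. This is only a rough sketch: it treats square degrees as a proxy for visual area and ignores the spherical geometry of real vision, resolution, and stereo, exactly as the text cautions.

```python
# Rough planar comparison of visual area, in square degrees.
human_fov  = 180 * 180   # unhindered human field of view: 32,400 deg^2
screen_fov = 60 * 40     # typical flat screen at best:     2,400 deg^2

ratio = human_fov / screen_fov
print(ratio)  # -> 13.5, i.e. the screen covers roughly 1/13th
              #    of the soldier's visual area
```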

Obviously, we must develop better metrics for understanding the quality of this kind of training to ascertain its value. Further, we need a better approach to providing actual in-world and augmented reality experiences. Just as we create very realistic, fully simulated cockpits and virtual worlds for flight simulations, we must extend the reality of simulations for the soldier on the ground.

2.11 Flexible Business Models

There is simply no one-size-fits-all business model for virtual worlds. There are as many business opportunities as there are applications of these technologies. Walled-garden approaches such as Second Life and Teleplace impose a business model, and these quickly fail when the requirements of the application exceed the business model and capabilities of the platform. Stand-alone open source projects such as OpenSim or Wonderland cannot reach critical mass because of the huge effort required to productize these platforms, as had to be done with Croquet to create Teleplace. Even this level of effort is often not sufficient for success.

A successful VWF must address each of these issues head-on.


3.0 Technical Landscape

It is essential that we consider future technological trends and opportunities when designing a new architecture, especially now, with such an extraordinary acceleration of the rate of change.

If this architecture is to be relevant beyond the time of its release, we must have a clear technical roadmap forward. Though Moore’s law is a particularly powerful mechanism, it is just one dimension that must be examined. We must also consider the evolving relationship of humans to computers, the impact of advances in software and algorithms, power storage and management, massive hardware parallelism, wireless bandwidth, access to mass storage and remote computation in a networked world, and display technologies.

A recent Morgan Stanley report illustrated clearly how technology cycles tend to have about a 10-year lifespan before the next major change occurs. These changes absorb the capabilities of the previous cycles, but usually present a very different perspective on the capabilities delivered to users. As shown in the chart below, we are entering the mobile Internet decade, which is replacing the desktop Internet decade, which in turn replaced the personal computing decade. Clearly, the message here is that for any new architecture to be relevant today and over the next ten years, it must be extremely focused on mobile connected devices.

Perhaps just as important, or possibly even more so, is to look at the next cycle of systems and devices and what impact they will have on a VWF, or even what impact a successful VWF might have on the next cycle. What we can be sure of is that the technological infrastructure driving us toward this next computing iteration is powerful and must be understood and properly leveraged. Let's examine the building blocks that will certainly enable the next compute platform and attempt to forecast the nature of the system.


3.1 Processing Power

Moore’s law is not inevitable, but it has demonstrated a predictive power that is quite impressive. What Moore’s law actually says is: the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years. The actual doubling period has been between one and two years since Moore first articulated his “law”. (Doug Engelbart noticed this trend a number of years before Moore.) This has a direct effect on a number of devices, including processing speed, memory capacity, sensors, and even the number and size of pixels on displays and in digital cameras. Additional forces, such as advances in hardware and software architectures and algorithms, have ensured that actual compute performance has exceeded even this two-year doubling.
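As a quick illustrative calculation (a sketch, not a forecast), the cumulative capability factor implied by Moore's law over a decade depends strongly on whether the doubling period is two years or closer to eighteen months:

```python
# Cumulative growth factor under exponential doubling.
def moores_law_factor(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

# Over one decade:
print(round(moores_law_factor(10, 2)))    # -> 32  (doubling every 2 years)
print(round(moores_law_factor(10, 1.5)))  # -> 102 (doubling every 18 months)
```

The same arithmetic underlies the later observation that Moore's law yields "about five or six doublings" – a factor of 32 to 64 – in the next ten years.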

Graphics processing units have been doubling in capability at almost a 9-month pace. Graphics rendering is a highly parallelizable process, so adding more tiny processors scales up extremely well. We are already seeing similar kinds of approaches to general purpose processing with systems like Intel’s i7 8-core chips, but this itself is just the beginning. Intel is already experimenting with 100+ core devices, and we can easily see thousand-core systems deployed over the next decade. We will even see the first multi-core mobile devices in 2011 (LG Electronics has already announced a multi-core Android phone). It is extremely important to note that many small processors can be ganged up to outperform a large single processor while requiring significantly less electrical power. This points directly toward even higher numbers of cores on mobile devices in the future. Further, the traditional graphics-focused GPUs are already being repurposed for complex SIMD (single instruction multiple data) general processing. This has a direct effect on such areas as physics, signal processing, and algorithms.

3.2 Software and Algorithms

The recent report to the President – Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology, December 2010 – noted that developments in software and algorithms have in many cases out-paced even the steady exponential pace of Moore’s law. The report states:

“In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.”
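The arithmetic behind the quoted figures is simple to verify: the hardware and algorithm contributions multiply to give the overall improvement.

```python
# Decomposing the ~43-million-fold speedup cited by Grötschel (1988 -> 2003):
hardware_factor  = 1_000    # faster processors
algorithm_factor = 43_000   # better linear programming algorithms

total = hardware_factor * algorithm_factor
print(f"{total:,}")  # -> 43,000,000; algorithms contributed 43x more
                     #    than hardware over the same period
```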

Improvements in software and algorithm performance go hand-in-hand with the enhanced capabilities and requirements of new hardware. Certainly, we are just beginning to understand how to leverage the massive numbers of CPU cores that are becoming available to us. We are inventing new languages to deal with the problem of developing algorithms for high-performance GPUs. We are even seeing huge performance gains on more traditional platforms, such as JavaScript in a web browser. Some metrics show that JavaScript improved by as much as 100 fold between 1996 and 2006. We have added the benchmark performance of a 2010 Android cell phone for comparison with the 2006 desktop PC.


benchmark     1996     2006    speed up    2010 Android
primes        0.15     0.02        8X         0.004
pgap          3.13     0.06       52X         0.441
sieve         5.05     0.02      252X         0.102
fib(20)       2.15     0.03       72X         0.072
tak          10.44     0.08      131X         0.073
mb100         8.4      0.2        42X         1.385

Source: http://www.codinghorror.com/; the actual benchmark run was at http://www.nicholson.com/.

We are on a similar path with a veritable browser war going on between vendors right now, and the main battlefields are HTML5 and JavaScript speed. Moore’s law will yield about five or six doublings of performance in the next 10 years; this is a speed-up of a factor of 32 or 64. We can easily imagine a speed-up of JavaScript that could match or even exceed the gains in hardware.

This does not make the case that JavaScript will ultimately replace high-performance compiled languages like C++, but most game engines utilize similarly performing scripting engines to choreograph the events that occur within the world. Further, especially in the world of simulation and graphics, most of the hard-core computing will be done inside of the GPU anyway.

3.3 Power Storage and Management

Though multi-core devices will certainly yield better performance per watt, the demands for power will likely continue to grow faster than battery storage density. This is a potential Achilles heel for the mobile user and will require more than a few innovations to deliver appropriate solutions. As a baseline, we should see at least a doubling of battery energy densities over the next decade, as well as significantly better power management and lower power requirements from processors and GPUs, but the demand for performance and features of mobile devices will continue to outstrip the projected battery capabilities to service them. Battery power and management have improved enough, however, that none of Apple Inc.’s computers, pads, or phones have user-accessible batteries anymore. The iPad, in particular, has over a 10-hour battery life, and the 11.6-inch MacBook Air lasts over five hours with constant network use.


Alternative sources of power will still need to be explored – a particularly interesting one is using the movement of the human body itself to power these devices. Though this may not be a reasonable solution for the current generation of mobile devices, it is not hard to imagine a soldier in the field, requiring 24x7 access to his wearable supercomputer, wearing a lightweight recharger powered by his own body movement.

3.4 Display Technologies

The resolution of the iPhone 4 retina display is 960x640 pixels on a screen that is about 4.5”x2.3”. Micro-displays are currently available that are far better than this, at 1280x1024 pixels with a diameter of less than two inches. We will certainly see full 1080p HD (1920x1080) in a 2-inch diameter screen this year. We will also see higher resolution displays for mobile devices, but we are quickly reaching the limits of what the human eye can see in this form factor. What is required is a new approach to the optics of the problem; we need to become immersed in the image.

3.5 The Information Utility

Access to information is available virtually everywhere and anytime. It is just a matter of price.

Much of this document has been written while I was in a vehicle driving between Cary, NC and Orlando, FL, with full access to the best research library in history – the World Wide Web. I used the same device – my cell phone – to find a place to eat and get gas on the road. My wife was able to check my progress by tracking the location of my phone on her home computer. I was able to purchase and begin reading the new biography of George Washington without getting out of the car.


If anything, this access to information will become more pervasive, cheaper, and even invisible.

We don’t think about the availability of water, phone, or electricity in our homes today. Electricity was first introduced into the home to provide light at night. Today, the uses are uncountable, but at the same time, these products of our age have become invisible; we are quite surprised when they stop working. They are called utilities because they are useful (have utility) and have a similar model of distribution from the utility companies via the appropriate pipes and into our homes. In the same way, we are seeing the rise of the Information Utility companies, such as Google, which provide a product, and the phone and cable companies that provide the pipes through which it is delivered.

A big difference is that you don’t have to be at home to access these information utilities. We are dependent on them – they are available anytime, anywhere – and just as it did with electricity into the home, our expectations and dependence on them continues to grow.

Today, we make a conscious effort to access the Information Utilities. We turn on our phone, run our browser, and type in a web address. This utility, too, will soon become invisible; soon it will always be on and always ready to answer the questions you have not asked yet. It will be listening to your conversations, watching the events that occur directly around you and the events in the rest of the world that concern you, and offering real-time, relevant information and even advice as you move through your world.

3.6 Sensors

The urinals and toilets in many public restrooms are currently more aware of the presence of a user than a computer is. This is going to change dramatically over the next few years. Certainly, as noted above, a mobile phone “knows” where it is – or can at least be prompted to find out. Further, these devices have a number of sensors that can make them more responsive. They have a video camera – sometimes even two of them – and audio capture (they are phones, after all) with the ability to convert voice into actionable commands using extremely good voice recognition, plus a gyroscope, a GPS, and a compass. This is a good start.

The main problem with all of these sensors is they spend most of the time in the user’s pocket or purse.

Of course, there are other sensors that are available to the user of a mobile device than just the ones it is built with. We can access the video camera of our home computer to watch a burglar go through our drawers while we are out, and we can monitor the traffic on the road ahead and note that an accident has occurred and we should reroute ourselves. In the traffic example, we may be crowd-sourcing the GPS sensor data of the users that are stuck in the traffic jam to create a global picture of the situation.

The Air Force is exponentially increasing surveillance across Afghanistan. The monthly number of unmanned and manned aircraft surveillance sorties has more than doubled since January 2010 and quadrupled since the beginning of 2009. This is not a fire hose of information – this is an ocean, and a rapidly growing one. Unlike the traffic example above, where computer systems dynamically acquire, interpret, and forward the information generated from the data to the appropriate people, here we are creating more data, which directly causes us to generate LESS information. Computers are the things that scale exponentially; humans only scale linearly. If you double the amount of data that a human must search, his probability of finding the one important event has just been halved. Sensors without interpretation create problems – they don’t solve them.
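A toy model makes the halving argument concrete. Assume (purely for illustration; the numbers are invented) that an analyst can review a fixed number of items per day, so the chance of finding the one important event scales inversely with total data volume:

```python
# Toy model: a human can review a fixed number of items; the probability
# of catching the single important event scales inversely with volume.
def detection_probability(items_reviewable, total_items):
    return min(1.0, items_reviewable / total_items)

p1 = detection_probability(1_000, 10_000)  # 10,000 items in the pool
p2 = detection_probability(1_000, 20_000)  # pool doubled
print(p1, p2)  # -> 0.1 0.05: doubling the data halves the probability
```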

A next generation platform will certainly be instrumented with powerful sensors and will have access to many others, but it is essential that it also have access to powerful systems that can interpret this data and generate meaningful and actionable information. A user’s situation awareness is only as good as the interpretation of the data that is obtained from the world around him.

3.7 The Next Cycle

The next major computing cycle will be built around a wearable, human-aware, wireless netcentric supercomputer. This will be a transformative system in many dimensions.

The user will have a constant and dynamic information overlay on the real world with a high resolution, wide field of view head-mount display that looks like a pair of large sunglasses. The system will be aware of where the user is, what or who he is looking at and what he is listening to, and will be able to immediately respond to user requests or issues with minimal effort from the user. Though someone watching the user of the system will scarcely notice him doing any work at all, the user will have full access and control of vast information utilities and sensor infrastructure. He will control it all with a glance, or a slight twitch of his finger, or a mere utterance.

The table below illustrates the changing capabilities over the last computing cycles, and extrapolates the features of a new platform that might become available in 2020.

            1990                   2000                  2010                 2020
Platform    Macintosh Classic      iMac                  iPhone 4             Human Centric System
OS          Mac OS 6               Mac OS 9.0.4          iOS 4.0              Linux, Chrome, Android
CPU         8 MHz 68000            500 MHz PowerPC       1 GHz ARM            20 GHz - 100 core
RAM         8 MB                   128 MB                512 MB               20 GB
Graphics    25 K triangles/s       8 MM triangles/s      28 MM triangles/s    .5 B triangles/s
Display     178K pixels            786K pixels           614K pixels          8 MM pixels/eye
Network     LocalTalk 230.4 kbps   10 MB/s Ethernet      20 MB/s 3G-WiFi      100 MB/s 6G/mesh
Storage     40 MB Hard Drive       40 GB Hard Drive      32 GB Flash Drive    1 TB Flash Drive
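Using the RAM and storage figures quoted for each cycle (converted to MB), the implied per-decade growth factors can be computed directly. Note that storage actually shrank from 2000 to 2010 as laptops and phones moved from hard drives to flash:

```python
# Per-decade growth factors implied by the RAM and storage figures above.
ram_mb     = {1990: 8,  2000: 128,    2010: 512,    2020: 20_000}
storage_mb = {1990: 40, 2000: 40_000, 2010: 32_000, 2020: 1_000_000}

for name, row in [("RAM", ram_mb), ("storage", storage_mb)]:
    for start, end in [(1990, 2000), (2000, 2010), (2010, 2020)]:
        factor = row[end] / row[start]
        print(f"{name} {start}->{end}: {factor:g}x")
```

The growth is exponential but uneven: RAM grew 16x, then 4x, then (extrapolated) about 39x per decade, while storage jumped 1000x in the first decade and then dipped below 1x during the hard-drive-to-flash transition.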

Though existing fully proprietary operating systems will still exist, it is expected that open source Unix variants will likely dominate the landscape. This is likely to be fueled primarily by Google’s successful deployment of the Android platform. It is a bit early to see what impact Chrome may have, but it is important to note that it is actually a virtual OS that runs as a secure web browser on a number of platforms already. What is important to understand, however, is that the next platform won’t be a Unix machine so much as a web-based OS. The main aspect of this system is that the interface is a next generation web browser in which many of the technologies touched on in this document have matured to offer a fully functional, human-aware information management and control platform.

The key to the performance of this platform will be an extremely large number of relatively low-power cores that make up the CPU and an extremely powerful graphics engine. It is quite possible that the CPU and GPU will have merged by this point, providing an extremely flexible computation environment.

We will soon see the first high-resolution, wide field of view head-mount displays that enable both a fully-immersive training experience – one that should replace the expensive domes for flight simulation – and a fully-augmented reality head-mount that will overlay virtual data, including training simulations, on top of the real world in real time. The display of 2020 presumes four times the number of pixels that current HD provides, but wrapped around the user’s eyes, giving him a full 180-degree field of view into his augmented world.

This is truly a network-centric system, so extremely high bandwidth will be required. This will certainly be a mesh-type network where the users leverage the communication bandwidth of their local partners to increase throughput, in much the same way that BitTorrent networks increase the bandwidth of shared information to the participants.

Additional capabilities and interfaces that the system will certainly have are:

Six degrees of freedom location awareness. Using a combination of video, GPS, gyroscopes, and other sensors, the system will be able to track the user’s head position in the world and provide an update to the augmented reality at 60 Hz or better.


Pairs of video cameras in front and in back, which can be used to store events in real time and play them back as required, allowing the user to “TiVo” his life. The video streams can be sent directly to team members or archived as required. The video can be used to construct 3D models of the world in real time that can be used in training later, or be integrated in a global real-time situational database that can be accessed by the team; this is one way that a user’s partners will be able to see through walls.

The cameras will further be able to track the user’s motions to incorporate control by pointing and other gestures. Also, the video streams will be used for real-time face recognition and local threat analysis.

Voice recognition has already evolved sufficiently to provide an excellent primary interface to this platform. The quality of recognition is good enough to reliably replace a keyboard. No doubt additional technologies will be needed to handle voice recognition in more complex noise environments, such as combat.

Voice synthesis has also evolved to a point where it is extremely natural and comfortable as a primary relayer of information to the user. When we couple this with voice recognition, we are on the threshold of the conversational computer.

This wearable system will certainly be a full 3D augmented reality web environment, so we see a descendant of the VWF discussed here as a central component to the interface provided to the user.

The head-mount will be extremely light, not much more than a pair of sunglasses today.

The user will simply place this component on his head, pick up the matching supercomputer box – about the size of a cell phone today – and place it in his pocket. The system turns on as soon as it is worn and remains on until it is taken off.

Just as we depend on the light switch today to see in the dark, the next platform will allow us to see the next reality; it, too, will be like turning on a switch in the dark.


4.0 What is Required

Our primary challenge is to enable a new virtual world ecosystem. Though this may not be a sufficient solution for providing all of the aspects required for a next generation training platform, it is a necessary step. As has been shown repeatedly, reaching a critical mass of users and developers is far more important than the quality of the system, or its range of capabilities.

Windows beat out the Macintosh in market share in the early part of the transition to graphical user interfaces, though it was clearly an inferior platform, especially in the early days. However, Microsoft was able to leverage its huge installed base of MS-DOS systems as a huge accelerant toward a critical mass of users, especially in the enterprise, hence easily winning over the “superior” platform. Note also that Microsoft had a deep understanding of “good enough” – the product didn’t need to be the best.

In the same way, current technical trends point to the World Wide Web as an extremely viable candidate to support the major requirements of a next generation VWF. Not only is it the most successful business ecosystem of all time, but it is also rapidly evolving a number of key next generation technologies that are central to the VWF. Much of this advancement is due to the competitive pressures placed on the web and browsers as platforms. These come from a number of directions: competition between the new Internet-focused platforms such as Google and the established desktop platforms such as Microsoft Windows; competition between browser providers, which is fueling huge increases in performance on almost every front; the emergence of Internet-capable mobile devices with their unique requirements and limitations; the ever increasing demand for additional web-based services, media, and applications; and the natural and open development of the web as a platform. Simply put, the web is where the action is; hence, it is the ideal rocket to ride as we create and promote a new VWF.

The characteristics that made the web a success are many, but we can point to a number of the key elements. These are the same elements that we would need to replicate, or preferably leverage, if we were to create a successful new platform.

4.1 Open Source

The platform must avoid being a captive solution. It must be accessible to non-business users, especially education. Quite simply, the World Wide Web is the largest open source project in history. The source code for every web page is directly accessible and is a menu command away. The reason this has been such a critical accelerator for the success of the web is that it has ensured that every good idea that has been implemented is immediately available to everyone who has an interest in understanding and reimplementing it. From a developer’s perspective, the web itself is an extraordinary resource from which to draw ideas. These ideas are then further validated and codified into any number of important standards that further drive innovation by allowing the developers to focus on their unique value add. Finally, the web itself is a continuous experiment, accessible to universities, companies, and even the users.


Selecting the View>>Page Source above yields the full source code for creating this page below.

4.2 Huge Installed Base

The system must quickly reach a critical mass of users and developers. Clearly, there is already a critical mass of both developers and users around the web. The challenge will be to provide a compelling set of capabilities to attract these developers to enhance their current offerings and to provide new capabilities. This implies that we will need to closely follow and utilize the existing frameworks and standards that have evolved over the last few years, as well as provide compelling examples for them to follow and develop a suite of tools that enable them to quickly create useful systems.


4.3 Platform Scalability

The platform must work across OSs and devices (desktops/handhelds). Information is power, and access to information is empowering. More than ever, the web is accessible anywhere, anytime, and on any device. This is certainly no accident; it is a direct result of huge economic pressures to provide continuous and uninterrupted access to the most important information resource in history. Of critical import is how the web can easily be transformed to service the platform it is being delivered to. We are seeing an almost instantaneous evolution of web services and businesses to provide value to the next generation of mobile devices. Even more exciting is how new companies are continuously being started to service new opportunities that arise. These are the true engines of value and wealth. That is the power of an ecosystem.

4.4 Distribution

The platform must be easily deployed across an entire organization. It needs to work on both sides of a firewall. The web has established itself as the standard for virtually every aspect of an organization’s infrastructure – from information access to complex training applications. The web is a friction-free distribution model; any web application can be accessed from virtually any web-empowered device and from within virtually any organization. Once a user has a browser and a connection, the world is immediately and readily accessible. Currently, all of the major browser providers are enhancing their platforms to include the majority of HTML5 technologies. We believe that within the next two years, most major providers will support all of the key technologies, and all will provide them with additional plug-ins.

4.5 Security

The platform needs to work with existing IT capabilities and requirements.

Enterprise IT organizations have evolved extremely sophisticated capabilities around managing network security, especially for web-based services. Though not perfect, the current security infrastructure is extremely robust and well understood. Enterprises have full control over access to resources, as well as the ability to monitor complex transactions for undesired behaviors.

The additional capabilities that will be part of the HTML5 frameworks will certainly challenge the existing systems, but it will be an extremely broad front. All of the IT organizations and support companies will be working to safely integrate these new capabilities into their infrastructure to ensure their employees remain competitive in the information economy.

4.6 Utilize and Define Standards

This is required for interoperability and scalability, as well as for leveraging the huge existing infrastructure of application code that has been developed. As the web has evolved, so have the core standards that drive it. These include not just the document object models that make up the web browser or the HTML code, but complex and powerful libraries of JavaScript, Ruby, and Java code that allow developers extraordinary leverage to add value for their users. These are a key aspect of the success of the new class of web “applications” that have been emerging over the last few years. This trend will certainly accelerate with new and more powerful capabilities.

This is an ideal environment for the creation and evolution of the VWF.

4.7 Future Proof

The platform must scale dynamically with new requirements and new opportunities while protecting investments in content and infrastructure. The web has demonstrated its ability to reinvent itself in virtually every dimension. Even the idea of a final product is turned on its head.

Every service on the web is constantly undergoing updates and modifications, sometimes even on a daily basis, as the relationship with the users and customers is better understood and becomes more refined. In a real sense, every web application is in perpetual beta mode.

Further, new companies creating new applications for the web are being started virtually every day. There is no barrier to entry, and users are willing to experiment with new services; hence, it is an ideal Darwinian ecosystem. New, powerful ideas can create new business opportunities, displace existing businesses that are not maintaining an innovative edge, or be absorbed into larger systems. In the end, the user is the big winner.

4.8 Business Models

The platform must provide an attractive business ecosystem for both small and large organizations. It must lower the cost of content development while raising its quality.

The web is the basis of the largest continuous creation of opportunity and wealth in history. It will remain so. There is no barrier to entry for anyone or any business. The value of the businesses that are created is a direct result of the innovativeness of the company and the capability of the people.


5.0 The Web is the Platform

Our challenge is to construct a new platform that exhibits these characteristics to ensure that a robust ecosystem develops around it. The best and most obvious way for this to occur is to simply leverage the World Wide Web, as creating a new platform with the same capabilities and reach is virtually impossible.

However, until recently this has been impractical. No single technology for presenting 3D content has been consistently supported across all popular browsers. Any virtual world environment making use of web technologies would still require the installation of a browser plug-in or a custom player in order to render 3D images. While such an environment would enjoy the distribution benefits of the web, the client installation requirement more than negated those potential gains.

Several technologies nearing maturity promise to change this equation.

5.1 JavaScript

JavaScript (ECMAScript) 1 is easily the most popular programming language in the world today, with literally millions of web developers using it regularly. It is undergoing a number of critical enhancements that will be essential to the development of a web-based, next-generation VWF.

Perhaps the most important is the dramatic increase in performance that the various JavaScript engines have achieved over the last three years. Engine performance has become a key point of comparison among web browsers, an area that is fiercely competitive, and we can expect even more significant improvements going forward.

Another important development is the completion of the ECMAScript 5 (JavaScript) standard, which includes critical new capabilities essential for robust and secure systems.

These include:

 The Object Capabilities Model, which provides access control on an object-by-object basis.

 Strict Mode, which creates a far more robust application development and delivery model.

Two related browser capabilities complement the language itself:

 WebSockets, which provide full-duplex connections between clients and servers.

 Server-Sent Events, which allow servers to push data to clients quickly and efficiently without the overhead and delays of client polling.

Together, these provide the tools to create dynamic user interfaces that deliver a richer, faster, and truly connected user experience, which is vital for real-time, collaborative virtual world experiences.
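Two of the ES5 additions above can be sketched in a few lines of JavaScript. The names here (regionConfig, tryToTamper) are illustrative, not part of any VWF API.

```javascript
// ES5 Strict Mode and Object.freeze sketched together. Object.freeze is
// one building block of the object-capability style: a component can be
// handed an object it may read but never alter.
var regionConfig = Object.freeze({ name: "alpha", maxUsers: 32 });

// Inside strict code, writing to a frozen object's property throws a
// TypeError instead of failing silently, as it would in non-strict code.
function tryToTamper(obj) {
  "use strict";
  try {
    obj.maxUsers = 1024; // TypeError under strict mode
    return false;
  } catch (e) {
    return true;         // the violation was surfaced, not swallowed
  }
}

var violationCaught = tryToTamper(regionConfig);
console.log(violationCaught, regionConfig.maxUsers); // true 32
```

The frozen object is unchanged and the illegal write is reported, which is exactly the robustness the delivery model above depends on.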

1 JavaScript is also known as ECMAScript, the scripting language standardized by ECMA International in the ECMA-262 specification and ISO/IEC 16262. The language is widely used for client-side scripting on the web in the form of several well-known dialects, such as JavaScript, JScript, and ActionScript. ECMAScript 5 and JavaScript 5 are considered identical.


Note: A recent vulnerability has been found that directly affects the current WebSockets protocol as well as Flash and Java. This has caused WebSockets to be disabled in current browsers for now, but there will certainly be a solution for this within the timeframe of the proposed VWF project.

In addition, a number of excellent development tools have emerged, which are essential for building an application as complex as the VWF is certain to be. The Firebug plug-in for the Firefox browser is especially useful.

JSON (an acronym for JavaScript Object Notation, pronounced /ˈdʒeɪsən/) is a lightweight, text-based open standard designed for human-readable data interchange. It is derived from the JavaScript programming language's notation for simple data structures and associative arrays, called objects. Despite its relationship to JavaScript, it is language-independent, with parsers available for most programming languages.

The JSON format was originally specified by Douglas Crockford and is described in RFC 4627. The official Internet media type for JSON is application/json, and the filename extension is .json.

The JSON format is often used for serializing and transmitting structured data over a network connection. It is primarily used to transmit data between a server and web application, serving as an alternative to XML.
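As a concrete sketch (the field names are invented for this example), a component's state might be serialized to JSON text for transmission and reconstructed on the receiving side:

```javascript
// Serialize a simple state object to JSON text, as a client might before
// sending it over a network connection, then parse it back into an object.
var state = { id: "cube-7", position: [1, 2, 3], spinning: true };

var wire = JSON.stringify(state); // compact, language-independent text
var copy = JSON.parse(wire);      // a structurally identical object

console.log(wire);                // {"id":"cube-7","position":[1,2,3],"spinning":true}
console.log(copy.position[2], copy.spinning);
```

The same wire text can be parsed by any language with a JSON parser, which is why JSON serves as a lightweight alternative to XML for server-to-client data transfer.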

5.2 The Lively Kernel

The (formerly) Sun Labs Lively Kernel project is one of the best demonstrations of a full-fledged operating environment written in JavaScript and running entirely in a browser. This platform is worth noting here for a number of reasons: it demonstrates just how powerful the current browser and JavaScript are as a platform; it has a superb design that is well worth studying and emulating; and it was created by one of the key developers of Squeak, which is the foundation platform for the Croquet project as well as Teleplace. Hence it further demonstrates that the conceptual ideas posed in this document are not unreasonable.

Lively Kernel is a new approach to web programming. It provides a complete platform for web applications including dynamic graphics, network access, and development tools, and requires nothing more than available web browsers. The developers call the system lively for three reasons:

1. It comes live off a web page; there is no installation. The entire system is written in JavaScript, and it becomes active as soon as the page is loaded by a browser.

2. It can change itself and create new content. The Lively Kernel includes a basic graphics editor that allows it to alter and create new graphical content, as well as a simple IDE that allows it to alter and create new applications. It comes with a basic library of graphical and computational components, and these, as well as the kernel, can be altered and extended on the fly.


3. It can save new artifacts, and even clone itself, onto new web pages. The kernel includes WebDAV 2 support for browsing and extending remote file systems, and has the ability to save its objects and "worlds" (applications) as new active web pages.

The Lively Kernel uses only existing web standards. The implementation and user language is JavaScript, known by millions of developers and supported in every browser. The graphics APIs are built upon SVG (Scalable Vector Graphics), also available in major browsers. The network protocols used are asynchronous HTTP and WebDAV.

The Lively Kernel is being made available as Open Source software under a GPL license. While it is not ready for use as a product, we expect significant participation from adventurous developers and academia.

5.3 WebGL - Shader Focused Graphics

WebGL is a cross-platform, royalty-free web standard for a low-level 3D graphics API based on OpenGL ES 2.0, exposed through the HTML5 Canvas element as Document Object Model interfaces. Developers familiar with OpenGL ES 2.0 will recognize WebGL as a shader-based API using GLSL, with constructs that are semantically similar to those of the underlying OpenGL ES 2.0 API. It stays very close to the OpenGL ES 2.0 specification, with some concessions made for what developers expect of memory-managed languages such as JavaScript.

2 Web-based Distributed Authoring and Versioning (WebDAV) is a set of methods based on the Hypertext Transfer Protocol (HTTP) that facilitates collaboration between users in editing and managing documents and files stored on World Wide Web servers. WebDAV was defined in RFC 4918 by a working group of the Internet Engineering Task Force (IETF).

WebGL brings plug-in-free 3D to the web, implemented right in the browser and directly controllable from JavaScript. The major browser vendors Apple (Safari), Google (Chrome), Mozilla (Firefox), and Opera (Opera) are all members of the WebGL Working Group.

WebGL grants web browsers the same access to hardware-accelerated 3D graphics that desktop applications enjoy. Using WebGL, a web application may retrieve 3D content from any accessible web location and render it using the high-performance OpenGL ES 2.0 APIs. In short, the web browser will soon have the same access to the 3D rendering hardware that high-end games have today.

A WebGL interactive shader application.

5.4 XML

XML is short for Extensible Markup Language. It defines a set of rules for encoding documents in a machine-readable form and is defined in the XML 1.0 Specification produced by the W3C. XML's design goals emphasize simplicity, generality, and usability over the Internet. It is a textual data format with strong support, via Unicode, for the languages of the world. Although the design of XML focuses on documents, it is widely used to represent arbitrary data structures, for example in web services.

Many APIs have been developed that software developers use to process XML data, and several schema systems exist to aid in the definition of XML-based languages.

As of 2009, hundreds of XML-based languages had been developed, including RSS, COLLADA, and XHTML, though XML is not the basis of the JSON format described above.

5.5 COLLADA - a Real 3D Standard for Data Interchange

COLLADA (COLLAborative Design Activity) is an effort to establish an interchange file format for interactive 3D applications. COLLADA is managed by the not-for-profit technology consortium the Khronos Group, which also manages the development of OpenGL and WebGL.

COLLADA defines an open-standard XML schema for exchanging digital assets among graphics software applications that might otherwise store their assets in incompatible file formats. COLLADA documents describing digital assets are XML files, usually identified with a .dae (digital asset exchange) filename extension.
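A skeletal .dae document illustrates the overall shape of the schema; the geometry id here is a placeholder and the mesh data is elided:

```xml
<?xml version="1.0" encoding="utf-8"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <asset>
    <up_axis>Y_UP</up_axis>
  </asset>
  <library_geometries>
    <geometry id="box" name="box">
      <!-- mesh data (positions, normals, triangles) omitted -->
    </geometry>
  </library_geometries>
  <scene>
    <instance_visual_scene url="#DefaultScene"/>
  </scene>
</COLLADA>
```

Because the format is plain XML, any of the XML tooling described in the previous section can read, validate, and transform these assets.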

COLLADA is rapidly becoming the lingua franca of 3D data interchange on the web. It has demonstrated its value in many applications, including Google Earth and Google SketchUp, and is supported by most 3D design tools. COLLADA provides a high-quality representation of 3D data, animations, and metadata, with a clear versioning model to ensure that the metadata remains consistent.

By combining WebGL and COLLADA, web applications are able to build upon 3D models in much the same way that they have always been able to work with 2D images.

5.6 XMPP (Jabber)

Extensible Messaging and Presence Protocol (XMPP) is an open-standard communications protocol for message-oriented middleware based on XML. The protocol was originally named Jabber and was developed by the Jabber open-source community in 1999 for near-real-time, extensible instant messaging (IM), presence information, and contact list maintenance. Designed to be extensible, the protocol today also finds application in Voice over Internet Protocol and file transfer signaling.

Unlike most instant messaging protocols, XMPP uses an open systems approach of development and application by which anyone may implement an XMPP service and interoperate with other organizations' implementations. The software implementations and many client applications are distributed as free and open-source software.
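A single message stanza illustrates the protocol's XML wire format; the addresses below are placeholders:

```xml
<message from="alice@example.org/desk" to="bob@example.org" type="chat">
  <body>The review starts in five minutes.</body>
</message>
```

Because stanzas are just XML elements, XMPP extensions add new child elements in their own namespaces without breaking existing clients.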

The IETF formed an XMPP Working Group in 2002 to formalize the core protocols as an IETF instant messaging and presence technology. The XMPP WG produced four specifications (RFC 3920, RFC 3921, RFC 3922, RFC 3923), which were approved by the Internet Engineering Steering Group as Proposed Standards in 2004. The XMPP Standards Foundation (formerly the Jabber Software Foundation) is active in developing open XMPP extensions.

XMPP-based software is deployed widely across the Internet and by 2003 was used by over 10 million people worldwide, according to the XMPP Standards Foundation.

5.7 HTML5 - Next Generation of the Web

HTML5 (Hypertext Markup Language, version 5) is the new version of HTML and is already being deployed by most browser providers in some form. It will continue to roll out over the next year and will become the reliable standard for the web by the end of 2011. In addition to specifying markup, HTML5 specifies scripting APIs. Existing document object model (DOM) interfaces are extended and previously de facto features are documented. There are also new APIs, such as:

 The canvas element for immediate-mode 2D drawing (see the Canvas 2D API Specification 1.0).

 Timed media playback. This includes audio and video, as well as new media types that will certainly be developed.

 Offline storage database (offline web applications). Web Storage provides persistent local data storage, allowing remote data sets to be cached and offline applications to store intermediate data.

 Document editing, providing full rich-text editing capabilities inside the browser.

 Drag-and-drop, which brings this familiar user interface to web applications and provides a simple mechanism for moving data in and out of the virtual world.

 Cross-document messaging, a messaging system that allows documents to communicate with each other regardless of their source domain, in a way designed not to enable cross-site scripting attacks.

 Browser history management, general APIs that let an application analyze history for various intelligent decisions, such as URL completion or a user's preferred websites.

 MIME type and protocol handler registration.

 Microdata, the ability to annotate content with specific machine-readable labels, e.g., to allow generic scripts to provide services customized to the page, or to enable content from a variety of cooperating authors to be processed by a single script in a consistent manner.

 Offline web application support, which allows web applications to be available even when the network is not. Offline web applications behave in many ways like desktop applications but can be deployed over the web in the same manner as online applications.

 Web Workers, which assist applications in performing complex multiprocessing tasks. Virtual world clients can use this ability to increase the fidelity of the simulated environment.

Not all of the above technologies are included in the W3C HTML5 specification, though they are in the WHATWG HTML specification. Some related technologies, which are not part of either the W3C HTML5 or the WHATWG HTML specification, are:

 Geolocation allows applications to determine where the user is located in the world and helps augmented-reality worlds to map easily to the real world.


 Web SQL Database, a local SQL database.

 The Indexed Database API, an indexed hierarchical key-value store (formerly WebSimpleDB).

Together, these technologies turn the traditional web browser into an extremely powerful platform for the delivery of high performance collaborative virtual worlds.

As powerful as the next generation web browser will be, these technologies alone are not sufficient to enable a next generation interoperable VWF with the necessary capability and reach. Further requirements include a secure, collaborative 3D layer built on top of this that allows even reasonably novice developers to quickly construct valuable applications and create useful content. Our goal is to create a small but powerful framework that will establish a clear baseline and capability to begin the development of our ecosystem.

5.8 Replicated Computation, Client/Server, and Streaming Data Types

The methods used to synchronize the individual users' spaces and the events that occur within them are critical to the performance of a distributed simulation system. While most distributed simulation systems use a client/server architecture, there are alternatives, such as the peer-to-peer concurrency mechanism in Croquet/Teleplace™. Further, there is a clear requirement to enable existing streamed media types such as audio, video, and VNC'd applications and systems.

In a client/server model, the server is typically the actual owner of the world state and is responsible for most of the complex computations required to move the state of the world forward. There are a number of advantages to this approach, including a simpler model of new-user synchronization and a simpler concept of state. For performance reasons, the client will usually perform a number of computations, but this is generally to give the user proper real-time feedback on the actions being performed. The server is still the final authority and can and does override the results of the client computations. This can result in a very heavy update pipeline that must be maintained. As more simultaneous actions are performed, the resulting update information grows, putting a significant load on both the client and the server. This tends to greatly limit the number of simultaneous users in a client/server-based environment.

An alternative approach is a replicated peer-to-peer model, in which the computations are performed simultaneously on all of the systems and are guaranteed to compute deterministically. The benefit of this approach is that system-wide state updates are not required, as all users are guaranteed to be consistent without a central server maintaining and broadcasting that state. The only things that must be broadcast are the external events, typically triggered by users, which have significantly smaller bandwidth requirements.
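The replication idea can be sketched in a few lines of JavaScript (an illustration of the concept, not VWF code): each peer applies the same broadcast events through the same deterministic function, so every copy of the state stays identical without any server broadcasting state.

```javascript
// A deterministic transformation: given the same state and event, every
// peer computes exactly the same next state.
function apply(state, event) {
  if (event.type === "move") {
    return { x: state.x + event.dx, y: state.y + event.dy };
  }
  return state;
}

// Only the (small) events are broadcast; the full state never is.
var events = [
  { type: "move", dx: 3, dy: 0 },
  { type: "move", dx: -1, dy: 4 }
];

// Two peers start from the same initial state and replay the same stream.
var peerA = { x: 0, y: 0 };
var peerB = { x: 0, y: 0 };
events.forEach(function (e) { peerA = apply(peerA, e); });
events.forEach(function (e) { peerB = apply(peerB, e); });

console.log(peerA, peerB); // both { x: 2, y: 4 }
```

The bandwidth argument falls out directly: two small event records were transmitted, rather than repeated snapshots of the whole world state.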

Croquet is a computer software architecture built from the ground up with a focus on deep collaboration between teams of users. Croquet is a totally ad hoc multi-user network. It mirrors the current incarnation of the World Wide Web in many ways, in that any user with the appropriate permissions has the ability to create and modify a home world and create links to any other such world. But in addition, any user or group of users (assuming appropriate sharing privileges), can visit and work inside any other world on the Internet. Just as the web has links between web pages, Croquet allows fully dynamic connections between worlds via spatial portals. The important differences from the web are that Croquet is a fully dynamic environment, everything is a collaborative object, and Croquet is fully modifiable at all times.

Croquet implements synchronization with a technology called TeaTime. It differs from the often-used client/server approach, where the server maintains the "real" state of the shared environment and updates the clients as the state changes. That is a relatively simple approach and is the standard model for most shared virtual worlds, such as OpenSim and Second Life.

TeaTime has a much more robust model of interactions and time. It has the advantage that even very complex user interactions, such as interacting with a real document inside a shared virtual world, are guaranteed to act identically for every participant. Further, there is no central server state, as the role of the server in TeaTime is simply to time-stamp and forward events from the users. This means that each user maintains their own world state, which is guaranteed to be identical to that of all other users. It is a replicated computation model driven by external events; hence, even complex simulations will run identically.

Both the client/server and peer-to-peer models have been utilized in successful commercial systems, and both have strengths. For example, client/server is best for visualizing real-time data streams and managing certain kinds of applications, as well as for dealing with low-performance compute clients. However, for regular user-to-user interactions and normal direct manipulation of an environment, peer-to-peer is significantly more robust and scalable.

What is clear is that both approaches have strengths that compensate for weaknesses in the other. A truly robust approach must consider each of these as simply the building blocks out of which a final system will be composed.


6.0 Proposal

We propose “ Connected ” as a name to describe the VWF. An independent consortium should own the trademark for this name.

Connected is a fast, lightweight architecture for creating and distributing secure, scalable, component-based collaborative virtual spaces. It leverages existing standards and infrastructure with the intent of establishing a powerful yet simple-to-use equivalent of HTML for multi-user, massive collaboration.

Connected will be particularly focused on portable and mobile platforms, as well as scalable, ad hoc network infrastructure such as cloud computing. It is a zero-install platform, with additional components added dynamically as required. Connected spaces can be embedded in virtually any application, including web pages and emails. Further, Connected spaces can embed existing applications and browsers.

Connected will be deployed as an open standard architecture to ensure world-wide adoption.

6.1 Standards

A fundamental problem with the current state of 3DVEs is the lack of a common framework and protocols within which different vendors can interact, compete, and contribute. Connected aims to provide such a fabric, similar to the way HTML and HTTP(S) define the fabric that makes up the World Wide Web. Connected uses standards wherever possible: the browser as the platform, COLLADA for 3D data representation, JavaScript as the scripting language, HTTPS and WebSockets as the basis for communication, and WebGL for rendering.

6.2 Components

Connected spaces consist of a set of dynamically loaded components. Interaction with the components in a space is only possible if the component has been loaded; however, it may be possible to see previews of unloaded components. This allows third parties to provide demos or utilize custom licensing models for their components. Components are installed based on a plug-in model, allowing the system to utilize different implementations of the same component.

6.3 Model/View

The Connected architecture is implemented as a model/view system, where the model includes just the replicated state information, behavior computations, and URL-based references to the visual implementation of the data that is dynamically loaded into the view. The user and application can only interact with the model indirectly via the view, which forwards any event message to a reflection server, which then forwards a time-stamped event message to the model. This, along with a strict deterministic computation model, ensures that components perform bit-identical computations among multiple users.
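A toy version of this event path can be sketched in JavaScript (the names are illustrative; this is not VWF code): the reflector owns no world state at all. It only stamps each event submitted by a view with a monotonically increasing time and returns the ordered stream that every participant's model consumes.

```javascript
// A minimal "reflector" sketch: it never computes world state. It stamps
// each incoming view event with a sequence time so that every model
// executes the same events in the same order.
function makeReflector() {
  var clock = 0;
  var stamped = [];
  return {
    submit: function (event) {
      clock += 1;
      stamped.push({ t: clock, event: event });
    },
    stream: function () { return stamped.slice(); }
  };
}

var reflector = makeReflector();
reflector.submit({ user: "a", action: "grab" });
reflector.submit({ user: "b", action: "drop" });

var stream = reflector.stream();
console.log(stream[0].t, stream[1].t); // 1 2
```

Since every model receives this identically ordered stream and computes deterministically, the replicas remain bit-identical without the reflector ever holding the world state.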

6.4 Structure

Connected may consist of hexagonal "regions" that are linked together. A user present in a region can see and move into the six neighbors linked from the current region. When a bidirectional link is established between two regions, the resulting space is continuous and can span multiple servers. As users move through the environment, regions get loaded and unloaded as required.
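The six-neighbor linkage can be illustrated with axial hex coordinates, a common representation for hexagonal grids (Connected itself does not prescribe this coordinate system):

```javascript
// Axial coordinates (q, r) for a hexagonal grid: each region has exactly
// six neighbors, reached by adding one of six fixed offsets.
var HEX_DIRECTIONS = [
  [1, 0], [1, -1], [0, -1], [-1, 0], [-1, 1], [0, 1]
];

function neighbors(q, r) {
  return HEX_DIRECTIONS.map(function (d) {
    return [q + d[0], r + d[1]];
  });
}

var around = neighbors(0, 0);
console.log(around.length); // 6
```

As a user moves, the runtime would load the regions returned by neighbors() for the current region and unload those that fall out of range.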

6.5 Synchronization

Connected spaces consist of individual components, which can have different synchronization models. Components will usually utilize either a server-based approach (centralized computation and data storage), a replicated approach (decentralized computation and data storage), or a streaming model (e.g., video via an overlay network) for display, synchronization, and interaction.

6.6 Security

Connected provides a complete security model. Regions are owned by servers and may require authentication. Objects utilize access control lists, which are ultimately exercised by the server managing the particular region. This allows fine-grained control over individual components of the environment.
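A minimal sketch of how a region server might evaluate an object's access control list; the object, users, and rights are invented for this example and do not reflect a defined Connected API:

```javascript
// Per-object access control: each right maps to the list of users who
// hold it. The region server consults the list before applying an event.
var door = {
  acl: { open: ["alice", "bob"], delete: ["alice"] }
};

function allowed(object, user, right) {
  var list = object.acl[right];
  return !!list && list.indexOf(user) !== -1;
}

console.log(allowed(door, "bob", "open"));   // true
console.log(allowed(door, "bob", "delete")); // false
```

Because the check runs on the server that owns the region, a client cannot grant itself rights by tampering with its local copy of the object.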

6.7 Cloud-Computing

In order to provide additional computational resources, Connected will utilize cloud-computing techniques (such as Amazon's EC2 services) to scale to the number of participants in a space.

When a server starts to become crowded, it can replicate part (or all) of the computation to one or more machines in the cloud. The original owner of the region is then consulted only for special operations, such as logins and verifying access controls (delete operations, etc.).

6.8 Scalability

Connected spaces will be highly scalable based on the following principles: the user visualization component controls both the number of users that any participant needs to display (limiting rendering load) and the network traffic it is willing to accept (limiting network traffic and the computational load of updates).

In addition, Connected components will be unloaded if the system determines that they cause an undue computational load on the system. In an extreme case, Connected may unload all components and render only the users, but generally, the tradeoffs will be chosen as a combination of users, active components, and current focus of interest.

6.9 Mobile Access

Since Connected components are dynamically installed JavaScript-based plug-ins, it is possible to render Connected spaces using custom plug-ins on mobile platforms. These plug-ins may not even display the space in 3D; it is conceivable that they simply present a map view of the space with its current users.

6.10 Applications

The first application area for Connected will be training for the DoD and in higher education. Universities have both a large untapped potential for 3D social spaces and a need for improved distance learning. This application area will also exercise many aspects of the architecture's security, standards integration, and scalability.

6.11 After Action Review

A key capability of the Connected architecture is the ability to recreate a multi-user scenario for after action review. Since Connected utilizes a deterministic computation model with a well-ordered and clearly defined event queue, all that is required for a complete playback is a stored version of the original base state of a given world and the archived events. The events are simply "played back" into the world, which perfectly recreates the users' actions, even with multiple participants.
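The mechanism can be sketched in a few lines of JavaScript (the state and events are invented for this illustration): given the same base state and the archived, time-stamped events, a deterministic apply function reproduces the session exactly.

```javascript
// Replaying archived events into a copy of the base state recreates the
// session perfectly, because the computation is deterministic.
function apply(state, event) {
  return { count: state.count + event.delta };
}

var baseState = { count: 0 };
var archive = [{ t: 1, delta: 2 }, { t: 2, delta: 5 }];

// The live session...
var live = archive.reduce(apply, baseState);

// ...and a later after action review replay from the same stored inputs.
var replay = archive.reduce(apply, baseState);

console.log(live.count, replay.count); // 7 7
```

Nothing about the session itself needs to be recorded beyond the base state and the event archive; the world recomputes itself on demand.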

6.12 Adoption

Connected will be deployed on the server as an Apache and IIS add-on. This will make it easy for administrators to provide Connected services to their customers simply by installing the Connected add-on. On the client side, Connected will require HTML5, WebGL, and JavaScript, which will be available in almost all browsers by the time Connected is deployed. Additional components can be installed on demand in a platform-specific format. The goal is to engage users with a zero-install experience that progressively finds and installs any required components.

Connected spaces are embeddable as HTML frames, which makes it possible to add a collaborative application to virtually any web page and email. Further, Connected spaces can contain other applications and web pages.

6.13 Future

When coupled with a specially developed browser, head tracking, voice recognition and synthesis, and gesture recognition, the Connected virtual world platform will become the operating system for future immersive and augmented reality head-mounted display systems.


7.0 Architecture

The VWF employs emerging web technologies to provide a simulation and training platform that is powerful, extensible, and accessible. To encourage adoption, it relies heavily on open, established standards and presents a development model familiar to any web developer. The primary technologies include:

 WebGL for high-performance 3D rendering

 COLLADA for 3D model specification

 WebSockets for low-latency communication

 XML and JSON for world definition and data transfer

 ECMAScript (JavaScript) for world scripting

 XMPP (Jabber) for collaboration

 WebDAV for persistent worlds

Our approach to the design of the VWF architecture is to focus on a minimal common functional system that will act as a kernel for a much broader set of capabilities and enhancements. We must define those elements that are essential to create a working system that will interoperate with other systems that are composed out of this same minimal capability, but no more.

Though this document is based on a great deal of experience designing and implementing virtual world systems, it contains a number of new concepts that have not yet been implemented or tested. We believe, however, that it describes a robust approach to actually creating the system, and although changes to this architecture are certain to be required, the overall design is a solid basis for planning and executing a development effort.

7.1 Core Concepts

In reading the following architectural and functional description of the VWF, there are a number of terms and concepts with which the reader will need to become familiar. In particular, some of the terms used are familiar but are used in a different and specific way. We will introduce many of those terms here.

7.1.1 Participant

The VWF is implicitly a multi-user system. The users are referred to as participants in the world.

More precisely, any given VWF web page is treated as a distinct participant; hence, if a single person has the same world open in multiple pages, he is actually managing multiple participants. The opportunities for long-distance collaborative training could dramatically reduce training costs.

7.1.2 Component

A component is an independent object that includes both the visual definition of an object, typically via a URI reference, as well as the encoding of the behavior of that object. Components can include other components, creating compound objects that can exhibit a wide range of behaviors.

7.1.3 Bobble

A Bobble typically refers to a single virtual world that a participant may be able to see and interact with, and that may include other participants. Bobble is a word coined by Vernor Vinge in his books "The Peace War" and "Marooned in Realtime", where it refers to a dynamically generated sphere that encloses a given location such that no information can be shared between the existing world and the bobbled part of it, nor between Bobbles. This is a good representation of how the VWF will work: Bobbles, defined by unique time streams, can indeed include references to other Bobbles but cannot contain them directly, nor can they communicate with each other. The concept of the Bobble extends only to the logical construct, however. The VWF is quite flexible, and in this case a Bobble is a more general thing than a traditional world: it is a container of components and references to other Bobbles. These references are dynamically resolved and may "overlap" the current Bobble, in that both are available and accessible simultaneously by the user in the same 3D space, or the resolved references may appear as visible portals to other Bobbles. Visually, it will appear to a user that Bobbles can overlap, be linked, or contain one another. This actually occurs only via a reference inside the Bobble that is resolved by the view as the Bobble and its links are rendered. As an example, a web page does not actually include an image; instead, it includes a reference to the image that is resolved as the page is being loaded.

It is possible that a Bobble is simply a component that manages the event time stream for all of its components that is delivered from the reflection server.

7.1.4 Model/View/Controller

The Model/View/Controller architecture has been adopted by the W3C, the consortium that manages standards for the web, as the architecture for the Internet. The actual implementation of the VWF is based on a model/view architecture. In this approach, the “model” refers to that part of the system that maintains the actual state of the virtual world as well as performs the associated computations on that state. In the VWF, the model is the basis of sharing the state and interactions of the virtual world. There is no visual representation defined in the model. Instead, the “view” manages both the visual representation and the user interactions. For example, the model may include the properties of a cube including the size, references to any textures on its faces, its specific location and pose in its containing world, and compute the associated behavior that has it spinning on its axis.

The view actually maintains the visual properties of the cube, based on the definitions, references, and computations in the model. The view is a visual representation of the model and has significant flexibility in how it might portray the model. For example, one user may have a full 3D immersive view of the models that are contained in a world, where another may have a simple 2D map view of these models. Both are completely valid representations.
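The split can be sketched as follows; this is an illustrative shape, not the VWF’s actual classes. The model owns the state and computation, while any number of views portray it:

```javascript
// The model: shared state plus deterministic behavior, with no visuals.
const cubeModel = {
  size: 1,
  angle: 0,
  spin() { this.angle = (this.angle + 1) % 360; }, // behavior computed on the model
};

// Two equally valid views of the same model; neither holds authoritative state.
const immersiveView = (m) => `3D cube, size ${m.size}, rotated ${m.angle} deg`;
const mapView = (m) => `map marker at heading ${m.angle}`;
```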

The controller in this case is simply the view handling the user events and redirecting them to the reflection server or Reflector as required. Any messages meant for the contents of a Bobble are simply redirected to the Reflector described later. The other role of the controller is to receive the “now” time-stamped messages from the Reflector and insert them into the Bobble.

7.1.5 Deterministic Computation

The VWF is based on a deterministic computation model. Simply, this means that there is no real randomness associated with a computation state change. If we start from a given state S1 and execute a transformation T1, the result will always be S2 – no matter how many times this is done or on what machine.

S1 = S1’

Machine 1: S1 x T1 = S2

Machine 2: S1’ x T1 = S2’

S2 = S2’

Another way to put it is we have two identical components (their models are in an identical state) anywhere in the world – and they both receive the exact same event to trigger their (identical) behaviors. At the end of this computation, though they are both in a new state, they are both in the same identical new state. Applied to the VWF, this means that two independent participants interacting with the same virtual world will see the exact same behaviors occurring in their respective views.
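The property can be illustrated with a trivial transform (the state shape and function here are invented for illustration): as long as the transform reads no clock and uses no randomness, identical inputs produce identical outputs on any machine:

```javascript
// T1: a deterministic transformation -- no randomness, no wall-clock reads.
function applyT1(state) {
  return { angle: (state.angle + 1) % 360 };
}

const s1 = { angle: 41 };         // state S1 on machine 1
const s1prime = { angle: 41 };    // identical state S1' on machine 2

const s2 = applyT1(s1);           // S1 x T1 = S2
const s2prime = applyT1(s1prime); // S1' x T1 = S2'
// S2 = S2': both machines end in the same state.
```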

The current JavaScript interpreters are believed to support proper deterministic computations in that the results of a computed state change in one browser will be identical to those in another. It is known that certain transcendental functions may exhibit slight differences, which can easily multiply into non-deterministic state changes if not properly managed.
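One common way to manage this (a sketch of the general technique, not necessarily what the VWF does) is to round transcendental results to a fixed precision before they touch shared state, so last-bit differences between platforms cannot accumulate:

```javascript
// Round Math.sin to 12 decimal places so platform differences in the final
// bits of the result cannot multiply into divergent replicated state.
function deterministicSin(x) {
  return Math.round(Math.sin(x) * 1e12) / 1e12;
}
```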

7.1.6 Time Stream

A time stream is a well-ordered stream of events that are read into a model in the VWF. These events are typically targeted at specific components in a model and will stimulate a particular event handler in that component. By “well-ordered” we mean that earlier events are always received and executed before later events; they are never executed out of order. In the VWF, all external events have a well-defined ordering to them, whether they are replicated events or client/server control events. In particular, all time stream events will include a consecutive event number (1,2,3,4…), as well as a time stamp in milliseconds. The time stamp will always be a monotonically increasing value.
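Well-ordering can be checked mechanically; in this illustrative sketch each event carries a consecutive `seq` number and a strictly increasing `time` stamp:

```javascript
// Returns true when events form a well-ordered time stream: consecutive
// event numbers and strictly increasing time stamps.
function isWellOrdered(events) {
  for (let i = 1; i < events.length; i++) {
    if (events[i].seq !== events[i - 1].seq + 1) return false; // 1,2,3,4...
    if (events[i].time <= events[i - 1].time) return false;    // time must advance
  }
  return true;
}
```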

7.1.7 Reflector

A Reflector, or reflection server, is an extremely simple server. Its main role is to receive events generated by participants interacting with a virtual world, add an event number and a time stamp to each, and “reflect” the event back to the full list of participants in that virtual world. This ensures that all events are properly communicated to all users and further guarantees that there is no possibility of simultaneous events.
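A minimal reflector can be sketched in a few lines (a hypothetical shape, not the production server): stamp each event with a sequence number and a time, then deliver the identical stamped event to every participant, including the sender:

```javascript
// Sketch of a reflection server: every participant receives the same
// stamped event, and no two events ever share a time stamp.
function makeReflector() {
  const participants = [];
  let seq = 0;
  let time = 0;
  return {
    join(onEvent) { participants.push(onEvent); },
    reflect(event) {
      seq += 1;
      time += 1; // strictly increasing; a real server would use a clock
      const stamped = { ...event, seq, time };
      participants.forEach((deliver) => deliver(stamped)); // identical event to all
      return stamped;
    },
  };
}
```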

7.1.8 Replicated Computation Model or TeaTime

Given a deterministic computation framework and a specific time stream generated by a Reflector server, we can create a replicated computation model that basically ensures that multiple participants can view and interact with a complex virtual world that maintains identical evolving state, no matter which participant is viewing or how they are interacting. The replicated model evolves based on the event time stream and the deterministic computations that are triggered by those events. In the Replicated Computation model, all client configurations are identical. When one client receives an external event – such as a mouse click – it sends it to the world Reflector, which assigns a time and reflects it back to all clients. All clients then respond to the event, performing the same work. This replicated computation model is known as TeaTime, and was originally developed for the Croquet Project.

7.1.9 Client/Server

Our concept of client/server control of virtual worlds is similar to established models of multiuser computer games. The key benefit of a client/server is that users get immediate feedback to actions; for example, a direct shot from the local participant to a remote participant or non-player character results in an immediate action – the target falls. This event is also forwarded to the server, which maintains the “true” state of the world and determines whether the user action actually resulted in a change. The actual state is then forwarded back to the user via the same model described above to be reconciled with the view the user sees. Hence, though the user may see a direct hit, this is based on an estimate – perhaps via a dead-reckoning model, and the server may simply disagree so the target springs back to life. The model maintains proper state throughout; it simply disagrees with the local user’s view for a short time.


7.1.10 Streaming Object

Additionally, an extended client may stream data into the simulation. This model is similar to the client/server model except that the streamed data may be lossy and is delivered at a lower priority. Since streamed data cannot be assumed to be identical on all clients, it is considered tainted and will not be a source of events for the shared simulation. A client may perform calculations on tainted data, but those results will also be tainted until overwritten.
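The tainting rule can be sketched as follows (the names are illustrative): any value derived from streamed data carries the taint forward until it is overwritten by clean, replicated data:

```javascript
// Values entering from a lossy stream are marked tainted; replicated local
// state is clean. Taint propagates through any computation that touches it.
function streamedValue(value) { return { value, tainted: true }; }
function localValue(value) { return { value, tainted: false }; }

function combine(a, b, fn) {
  return { value: fn(a.value, b.value), tainted: a.tainted || b.tainted };
}
```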


8.0 VWF System Overview

The VWF is a new approach to developing and delivering collaborative interactive media and training applications via next generation web technologies. Every part of the system is designed around enabling real-time, identical interactions between groups of users. The architecture of the VWF is intended to make it quite easy to develop collaborative applications without requiring deep expertise in how replicated, multi-user applications work. There are a number of simple patterns and rules to remember, but otherwise, it is quite simple to quickly develop very powerful systems. The system allows for a piece-wise development of the applications in the system via easily constructed components which include information about their visual appearance and event managers that determine how the components will respond to user interactions and to other components.

TeaTime and components are the basis for the VWF’s replicated computation and synchronization, enabling applications that can be scaled to massive numbers of concurrently interacting users in a shared virtual space. The VWF’s treatment of distributed computation assumes a truly large-scale distributed computing platform, consisting of heterogeneous computing devices distributed throughout a planet-scale communications network. Applications are expected to span machines and involve many users. In contrast with the more traditional architectures we grew up with, the VWF incorporates replication of computation (both objects and activity) and the idea of active shared sub-components in its basic interpreter model. More traditional distributed systems replicate state data but try very hard not to replicate computation, as it has been extremely difficult to ensure even reasonable similarity between the evolution of client state. If a clear deterministic model is available, however, it is easier and more efficient to send the request for the computation to the data, rather than the other way around. The design and implementation of TeaTime in the Croquet Project demonstrated that a replicated computation model is both feasible and extremely efficient. Consequently, the VWF is defined so that replication of computations is just as easy as replication of data alone. Both approaches are fully supported, so that integration with legacy client/server models is quite feasible.

8.1 Bobbles and Replicated Computation

The basic unit of replication and collaboration in the VWF is a component called a “Bobble”. A Bobble is a secure container of two kinds of objects. The first are components that share the same time stream with the Bobble and the other components inside of the Bobble. The others are URI references to other objects, such as other Bobbles, textures, 3D objects, etc. Though these references exist inside the Bobble, the actual de-referenced objects do not. When the local view renders the contents of the Bobble, the references will first trigger a secondary load of their contents, and when complete, the view will render these objects. They can appear as if they are contained within the Bobble, but are actually totally independent objects.

Access to the contents of a Bobble is governed by strict rules that ensure proper bottlenecking.

Internally, objects have the same rights in accessing other objects as in any traditional system, with one significant restriction: explicit infinite loops are simply not allowed. They are replaced instead by a mechanism we refer to as “temporal tail recursion”, discussed later. In addition, an object inside a Bobble has full access to the external reference URI to another object, even adding and removing such references as required, but has no access to the de-referenced object.

The VWF is based on the concept of replicated computation, rather than replicated data. This uses a synchronized message passing model, where the messages themselves ensure that the replicated systems remain consistent between machines. Though it is necessary to synchronize the world state of a new user by transferring the current contents of the world, after that, the worlds stay consistent only through the creation and processing of time-based messages.

Bobbles are the units of replication. A single Bobble is actually made up of a collection of containers, all in identical states on different machines connected by a network. One of these containers on a particular machine could be thought of as a replica or projection of the Bobble, albeit a complete projection. The VWF guarantees that the evolution of the state of a particular Bobble replica is identical to any other replica of the same Bobble. This is the basis of the VWF collaboration architecture.

The term “Bobble” is used in several ways in this document. As just described, a Bobble consists of the set of all of its replicas. For clarity, we may refer to a copy as a “Bobble replica” (or simply “replica”), while the whole set is called the “Bobble”. Furthermore, Bobble is a JavaScript class that implements the base “Bobble” model; a replica is an instance of the Bobble class.

A single virtual world “Bobble” – each user has a replica of the Bobble.

VWF Bobbles are secure containers of other objects. They act as a kind of meta-object in that they have perhaps an even better model of encapsulation – certainly more secure – than traditional object models, and they enforce a rigorous content-hiding and message passing model. This is a necessary precursor to guarantee identical behavior and identical response to external events. Bobbles are referred to as “Islands” in Croquet, and a similar concept is the “Vat” in the E programming language.


Bobbles can be saved and transported to other users.

Bobbles are the basic unit of replication in the VWF. They are generic object containers that are simple to checkpoint and exchange. They can easily be saved to disk for use later or archiving, or they can be transported between users to initiate interaction with the contents of the Bobbles.

Contents of a Bobble can interact directly.

Objects internal to a Bobble have the same access privileges to each other that ordinary objects have. They can send messages directly to each other or themselves; can maintain direct access links to each other; and, in general, exhibit the same kinds of relationships that ordinary objects enjoy. They cannot, however, send messages outside the scope of the Bobble, nor can objects outside the Bobble send messages directly to the objects inside.

Access to the contents of a Bobble is always indirect.

That is not to say that there is no way for external messages to be sent to an object inside of a Bobble. A FarRef is an object that exists outside of the Bobble and can act as a proxy for an object that is actually inside the Bobble. A message sent to the FarRef is ultimately forwarded on to the actual object that is inside. This is actually a somewhat simplistic view of what actually happens; the actual process is a bit more interesting.

The Bobble maintains a Dictionary of all accessible objects inside of it.

The FarRefs are actually generated by having the Bobble register a particular object as being externally accessible. An external name is generated, and a FarRef is made available by the Bobble. The Bobble maintains a Dictionary that maps the FarRef back to the original object.


Messages are sent indirectly via a FarRef.

In the VWF, messages are sent to the original object inside of the Bobble indirectly via the FarRef. This ensures that we have a nice way of bottlenecking the message, as we will usually have to redirect it in such a way that ensures it is properly replicated.
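The registration and forwarding steps can be sketched like this (an assumed shape for illustration, not the actual VWF FarRef implementation):

```javascript
// The Bobble registers an internal object under a generated external name
// and keeps a Dictionary mapping FarRefs back to the originals. External
// code only ever holds the FarRef, never the object itself.
function makeBobble() {
  const dictionary = new Map();
  let nextId = 0;
  return {
    register(obj) {
      const name = `far:${nextId++}`; // generated external name
      dictionary.set(name, obj);
      return { name };                // the FarRef handed to the outside
    },
    deliver(farRef, message, args) {
      const target = dictionary.get(farRef.name); // FarRef -> original object
      return target[message](...args);            // single bottleneck for delivery
    },
  };
}
```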

Though there are ways to bypass this bottlenecking, it is extremely dangerous to do so, as it can easily lead to a violation of the replicated state of the system. This invalidates the guarantee the VWF has of ensuring perfectly replicated state inside of a Bobble.

8.2 Replicated Bobbles

Bobbles in the VWF are the units of replication in the system. For Bobbles to work properly, they must be deterministically equivalent. This means that given an identical initial state between two Bobble replicas – and given exactly the same inputs at the same time – the end states of these Bobble replicas must be identical. If for some reason there is even a slight divergence in state, this can easily be multiplied such that the end results are completely out of sync. Since the entire point of the VWF is to provide the users a perfectly replicated simulation environment that can be used as the basis of communication of training, information, and ideas, this kind of breakdown renders the system useless.


Of course, the entire point of the VWF architecture is to have any number of Bobble replicas exhibiting identical state anywhere on the network; hence, anywhere in the world or even beyond. A number of new concepts and objects need to be introduced to describe how the replicated Bobble architecture works.

The first is the VWF Message, which includes not just the token message name but the name of the target of the message, the message arguments, and when the message will be executed.

The second is the VWF Reflector, which is the object that manages messages that are generated externally to a Bobble but are sent to it. It both determines when this external message will be executed and replicates it to all of the Bobble replicas. The third is the VWF view/controller, which is the interface between the Bobble and the Reflector and manages external events by redirecting them to the Reflector. Together with Bobbles, these are the main elements of a robust time-based replicated architecture.

8.3 VWF Messages

A VWF message is made up of four components: the target, which is the object that will actually execute the message; the actual message; the arguments to the message, if any; and the time at which the message will be executed. The time value is also used to sort the unexecuted messages in the Bobble’s message queue. VWF messages can be generated either internally as the result of the execution of a previous message inside of a Bobble, or externally as the result of an external event usually generated by one of the users of the system.

A Bobble’s message queue and a message.

There is virtually no difference between internally and externally generated messages as far as the internal execution of the Bobble is concerned. The one major difference between the two is that the time stamps on externally generated messages are used by a Bobble to indicate an upper bound up to which the Bobble can compute its current message queue without danger of computing beyond any possible pending messages.
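Queue insertion can be sketched as follows (illustrative field names), using the execution time as the sort key and the message number to break ties deterministically:

```javascript
// Insert a message into a Bobble's sorted queue. A message carries a target,
// name, arguments, an execution time, and a sequence number for tie-breaking.
function insertMessage(queue, msg) {
  let i = queue.length;
  while (i > 0 && (queue[i - 1].time > msg.time ||
         (queue[i - 1].time === msg.time && queue[i - 1].seq > msg.seq))) {
    i -= 1;
  }
  queue.splice(i, 0, msg); // keep the queue ordered by (time, seq)
  return queue;
}
```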

8.4 Timing is Everything

The definition and manipulation of time play the central role in how we are able to create and maintain a replicated Bobble state. We must be able to guarantee that every internally generated message will be executed in exactly the proper order at exactly the proper time.


Externally generated messages must be properly interleaved with the internally generated messages at exactly the right time and order.

When a new message is generated, it is inserted in the sorted queue based on its execution time.

A new message inserted into the message queue.

8.5 Bobble Time

A Bobble’s view of time is defined only by the order of the messages it has in an internal queue.

Bobbles can only respond to external, atomic, time-stamped messages. These messages are literally the Bobble clock. Though Bobbles have internal time-based messages that can be queued up, these cannot be released for computation until an external time-based message has been received that indicates the outer temporal bound to which the Bobble can compute. This is a key point of the architecture. Though we may have a huge number of internal messages ready to be executed, they remain pending until an external time-stamped message is received indicating that these internal messages are free to be computed up to and including the newly received message. Each Bobble’s message queue is processed by a single thread, so issues with improperly interleaved messages do not arise.

When a message is executed, the time remains atomic in that it does not advance during the execution of this message. The “now” of the message stays the same. When we generate a future message during the current message, we always define its execution time in terms of the current “now” plus an offset value. This offset must always be greater than zero. (Though zero is an acceptable value in certain circumstances, it should almost always be avoided because if it is infinitely iterated, the VWF simulation cannot advance, and the system will appear to freeze.) If we generate multiple future messages, they will have an identical “now”, though they will have different offsets. If we generate two messages at the same “now” and with an identical temporal offset value, an additional message number is used to ensure deterministic ordering of the messages.

All of the messages in the Bobble queue are “future” messages. They are messages generated as the result of the execution of a previous internal message with a side effect of sending messages to another object at some predefined time in the future, or they are messages that are generated as the result of an external event – usually from a user – that is posted to the Bobble to execute at some point in the future, usually as soon as possible. All of these messages have time stamps associated with them. The internal messages have time stamps that are determined by the original time of the execution of the message that initially posted the message plus the programmer-defined offset. The external messages have a time that is determined by an external server called a Reflector and is set to a value that is usually closely aligned with an actual time, though it does not need to be.

Internal future messages are implicitly replicated; they involve messages generated and processed within each Bobble replica, so they involve no network traffic. This means that a Bobble’s computations are, and must be, deterministically equivalent on all replicas. As an example, any given external message received and executed inside of a group of replicated Bobbles must generate exactly the same internal future messages that are placed into the Bobbles’ message queues. The resulting states of the replicated Bobbles after receipt of the external message must be identical, including the contents of the message queues.

External future messages are explicitly replicated. Of course, external messages are generated outside of the scope of the Bobble, typically by one of the users of the system via the view/controller. The replication of external messages is handled by a server called a Reflector, which specifies when the message will be executed. The Reflector is more fully described below.

External non-replicated messages targeted to the contents of a Bobble are extremely dangerous and must be avoided. They play a role, but it is extremely rare that anyone will ever have a need to make use of this mechanism. The problem is obviously that if a non-replicated message is executed and happens to modify the state of a Bobble, it breaks the determinism the Bobble shares with the other replicated copies. We use such non-replicated messages when rendering the contents of a Bobble, but this is extremely well controlled and read-only.

Each Bobble has an independent view of time that has no relationship to any other Bobble’s time (Bobble used here as the complete collection of Bobble replicas). For example, a given Bobble could have a speed of time, relative to real time, that is a fraction of another’s. This is useful for collaborative debugging, where a Bobble can actually have a replicated single step followed by observation by the peers.

Since time is atomic and the external messages act as the actual clock, latency has no impact on ensuring that messages are properly replicated and global Bobble state is maintained. It does mean that higher latency users have a degraded feedback experience.

Bobbles enforce an internal “temporal tail recursion” with the use of the future message.

Basically, a message is arranged to execute some unit of time in the future from the atomic “now”. Hence, a message send like the following causes the angle of the cube to be increased by one degree every 100 milliseconds:

function turn(angle) {
    cube.rotateAroundY(angle);
    this.future(100).turn(angle + 1);
}

8.6 The VWF Reflector/Sequencer

The VWF Reflector plays two major roles. First, it acts as the clock for the replicated Bobbles in that it determines when an external event will be executed. These external events are the only information a Bobble has about the actual passage of time, so the Bobble simply cannot execute any pending messages in its message queue until it receives one of these time-stamped external messages. The second critical role played by the Reflector is to forward any messages it receives from a particular VWF user via the view/controller to all of the currently registered Bobbles.

Given that Bobbles cannot execute beyond these external messages, it is usually necessary to manufacture new messages simply for the sake of moving time forward. These messages are created by the Reflector and are called heartbeat messages. They are essentially empty, containing only a time stamp, which allows the Bobble to execute all of the currently queued events up to and including the new heartbeat message.
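The effect of a heartbeat can be sketched as a simple queue drain (illustrative, not the actual implementation): on receipt, execute every queued message whose time stamp does not exceed the heartbeat’s:

```javascript
// A heartbeat is an empty, time-stamped message; receiving one frees the
// Bobble to execute every queued message up to that time stamp.
function advanceTo(queue, heartbeatTime, execute) {
  while (queue.length > 0 && queue[0].time <= heartbeatTime) {
    execute(queue.shift()); // messages run in queue order, never beyond the heartbeat
  }
}
```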

Reflectors can be located almost anywhere on the network and need not be collocated with a particular Bobble. Typically, the Reflector will be an extension to an existing web server, such as Apache.

8.7 The VWF Controller

The VWF View Controller is the non-replicated part of the Model/View/Controller. The role of the VWF Controller is to act as the interface between the Bobble and the Reflector and between the user and the Bobble. Its main job is to ship messages around between the other parts of the system.

The Controller also manages the Bobble’s message queue by determining when messages will get executed.

Interestingly, a VWF Controller can exist without a Bobble, acting as a proto-Bobble until the real Bobble is either created or duplicated. In this case it is used to maintain the message queue until either a new Bobble is created or until an existing Bobble is replicated.

8.8 VWF Message Execution

The basic idea behind the VWF’s replicated message model is that the VWF Reflector acts as the clock for all of the Bobble replicas. This is a guarantee that they all share exactly the same model of time. The VWF Controller acts as the interface between the Reflector and the user and between the Reflector and the Bobble. Every replica of a Bobble has its own Controller, but there is only one Reflector for the set of replicas of a Bobble.

To track a message from an initial event to execution inside of a Bobble, we first consider a user interacting with a specific object. The user never has direct access to the objects inside of a Bobble, so he can only interact with a far reference to that object, a FarRef. A message is constructed using the following line of code:

farRef.future.aMessage(arguments)

What we are doing here is sending “aMessage” to the farRef to be executed as soon as possible in the future. In fact, another way to read the future object is “ASAP”. The farRef forwards this message to the VWF Controller of the Bobble that contains the actual object that the farRef refers to. The Controller simply forwards the message again to the VWF Reflector associated with the Bobble. The Reflector immediately places the current time stamp on the message – note that no two time stamps are equivalent – and forwards the message, now containing the execution time, back to the original Controller. The Controller then inserts the message into the Bobble’s message queue and begins to execute all of the messages that are already in the queue that have a time stamp less than the new message’s.


This may seem a bit roundabout just to get a message to the local Bobble, but this process makes much more sense when seen in the context of replicated messages as described in the next section.

An interesting thing to point out is that a given internal message can generate a new message at a delta from the time of the original internal message that is actually less than the time of the new external message that is driving the clock. This newly minted message is simply added to the queue and executed in its own proper order before we actually execute the external message. If its delta is small enough, it may even generate a number of additional messages that get executed before the external time-stamped message is.
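The full round trip can be sketched end to end (all names here are hypothetical, not the actual VWF API): the FarRef forwards to the Controller, the Controller to the Reflector, and the stamped message returns to be queued:

```javascript
// Sketch of the FarRef -> Controller -> Reflector -> Controller round trip.
function makeSession() {
  let time = 0;
  const queue = [];
  const reflector = {
    stamp(msg) {
      time += 1; // no two time stamps are equivalent
      controller.receive({ ...msg, time });
    },
  };
  const controller = {
    send(msg) { reflector.stamp(msg); },      // outbound, unstamped
    receive(stamped) {                         // inbound, now time-stamped
      queue.push(stamped);
      queue.sort((a, b) => a.time - b.time);   // keep the queue sorted by time
    },
  };
  // farRef.future.someMessage(args) forwards to the Controller.
  const makeFarRef = (target) => ({
    future: new Proxy({}, {
      get: (_, message) => (...args) => controller.send({ target, message, args }),
    }),
  });
  return { makeFarRef, queue };
}
```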

8.9 Replicated Message Execution

The main reason for the existence of the Bobbles, Reflectors, and Controllers is to enable the perfect replication of even complex interactions and simulations. The model for this is basically identical to the description in the previous section up to the point where the “now” time-stamped messages are sent out of the VWF Reflector. A replicated Bobble has multiple identical copies of itself in various locations around the world (or the office or school). There is still only one Reflector, but now, for every Bobble replica there is a Controller.


After the Reflector receives a message from one of the Controllers, it forwards the time-stamped message to all of the Controllers connected to it, including the original one. The Controllers insert the new message into the sorted message queues and execute the messages in each queue up to and including the new message. This means that every Bobble replica is completely up-to-date with every other one.

8.10 Starting, Joining, and Participating

The process of creating a new VWF session from scratch and then having new users join into the process is relatively simple. There are three parts to it: creating the Reflector, Controller, and Bobble; joining the Reflector; and participating in the VWF.

8.11 Starting Up

The first action required in creating a new VWF world is for us to create a new VWF Reflector. This Reflector can be on any accessible machine on the network – either remotely on a WAN, locally on the LAN, or on the same machine that will act as host to the original Bobble, which can be useful for testing. Reflectors are extremely lightweight objects, so they really don’t take up many resources, either in space or computation. The Reflector has a network address and port number, which is how we will find it later.

The new Reflector can be on any machine, typically running with the web server.


Once the Reflector exists, we need to create a new Controller on the user’s system. This needs to be located on the same machine where the new Bobble will be located, and is usually the original user’s own computer. Again, this is not essential. It is quite easy to create a Controller/Bobble pair on a remote server. Of course, this may take a few more resources than a simple Reflector requires. We give the Controller the address and port number for the original Reflector and it begins to connect.

The new Controller will be on the user’s machine running in the browser. It is given the Reflector address and port number. Since it will be used to construct the initial Bobble, we call it the master controller.

8.12 Joining

The first thing the Controller does is send a message to the Reflector asking to subscribe to its message stream. Given that we made both the Reflector and the Controller, we are guaranteed access, but it is important to note that this may not be true for other users as they attempt to join. You will have to grant them explicit permission, or leave the Reflector open to anyone, if they are to join the session. Once the Reflector authorizes the Controller, it will begin publishing its message stream to it.

The controller sends a message to the Reflector asking for messages. The Reflector, if the controller is authorized, begins publishing its message stream to the controller.

The only messages coming from the Reflector at this point are the heartbeat messages, assuming we set the Reflector to generate these. In any case, the Controller is designed to simply begin adding these messages to its message queue. This is actually important when we are joining an already existing replicated Bobble, because in that case, many of the messages that get sent and stored on the queue will be necessary to bring the Bobble replica up-to-date after it is replicated locally. Joining allows the new user “view only” access. At this point, even if there were a Bobble, the user is not allowed to send messages that might modify it in any way.

[Figure: Message stream. Once the join is accepted, the Reflector sends all replicated messages and heartbeats to the Controller, which saves them into a message queue.]

At this point, we can create a new Bobble from scratch using the Controller. We can also populate the Bobble from a JSON description file (see below) and have the objects inside of it begin sending their own internal messages into the message queue. Once the Bobble exists, we still need to be allowed to participate in it, which lets us send it external messages generated by user events. Like joining, this is simply a request to the Bobble to be allowed to participate. If granted, our Controller receives a list of facets, a kind of encrypted dictionary of the messages that we are allowed to send from the user through the Reflector to the Bobble. This is an additional security measure: it ensures that a user must be granted explicit capabilities, and it even allows the system to grant varying levels of access to different users. The facets are encrypted values that are tied to a particular user and cannot be constructed by the user.

[Figure: The Reflector streams messages to the Controller on Machine A.]

8.13 Adding Users

New users need to be able to join a VWF session while it is running, with minimal cost to the other users. If done properly, the other users might not even notice that another user has joined the session, apart from seeing a new avatar appear in the scene.

Since a Reflector already exists, we only need to create a new Controller on the new local machine. This is identical to creating the original Controller on the original user's machine.


[Figure: This is similar to constructing the initial Controller/Bobble pair. First, create the controller on Machine A.]

Just as before, the new Controller requests that it be allowed to join the ongoing VWF session.

[Figure: The new Controller on Machine A requests to join the Reflector.]

Once the Reflector grants permission to join, the new Controller begins receiving messages from it. These messages will likely include events generated by the current users of the Bobble, so they are extremely important.

[Figure: Once the join is granted, the Controller on Machine A adds the new messages into its message queue.]

Now, instead of creating a new local Bobble, the Controller requests a copy of the current Bobble from the other users. The Reflector forwards this request to the original Controller, and the Bobble is "checkpointed" by all of the users; that is, each makes a copy at the same instant in time. One of the current users then streams their checkpointed Bobble out to the new user via the Reflector.


[Figure: The controller requests a copy of the Bobble; the checkpointed Bobble is streamed to the new controller via the Reflector; the new controller then resurrects the Bobble locally.]

All users must create a JSON image of the Bobble and reload this image. This is because a JSON encoding is presumed to be a lossy copy of a Bobble; by having each user save and load, they will all restart with an identical Bobble that is lossy in the same way.

Once the Bobble has been copied over to the new user's machine, the message queue is truncated to the time of the most recent message executed by the new replica of the Bobble, and execution seamlessly picks up from that point. The two users are perfectly synchronized with identical Bobbles.
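The truncation step can be sketched as follows (a simplified illustration; the `time` field name is our assumption): any queued message that the checkpoint already reflects is discarded, and execution resumes with the remainder.

```javascript
// Sketch of resuming from a checkpoint: messages at or before the checkpoint
// time are already part of the copied Bobble, so only later ones are replayed.
// The "time" field name is an illustrative assumption.
function resumeFromCheckpoint(queue, checkpointTime) {
  return queue.filter(function (msg) { return msg.time > checkpointTime; });
}
```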

[Figure: The Reflector keeps the Bobbles on Machine A and Machine B synchronized.]

8.14 Participating

At this point, the new user has essentially read-only access to the Bobble and must also ask to be allowed to participate. Again, the Controller requests this from the Reflector and, if granted, the new user can begin to interact with their peers inside of the Bobble.


8.15 Nice Side Effects

Because no messages are ever lost and because the original message senders cannot specify when a message is to be executed, latency does not create timing or synchronization problems, only feedback problems. The system will feel sluggish to a user with higher latency, but the contents of the Bobble remain identical between all users regardless.

This also means that users are not punished for sharing a Bobble with a high-latency participant, though the high-latency participant may have a less than satisfactory experience.

Since Reflectors are independent of Bobble/Controller pairs, they can be positioned anywhere on the network. This means that they can be moved to a position of minimal group latency or onto centralized balanced latency servers. Reflectors can even be moved around if necessary to improve latency for specific users or groups for a certain time period.

8.16 Components

VWF Components are the basic building blocks of the system. A component defines the behavior and state of an object inside the Bobble, how the object's visual representation is constructed inside the view, and how a user may interact with it. The view representation may include additional code to manage non-replicated behavior, for example for controlling a local user's avatar or for managing a dead-reckoning response. Components are intended to be the atomic elements of the VWF and the basis of interoperability between virtual worlds: a component created for one virtual world should be easily transferable to another via its JSON definition.
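For illustration, a component definition along these lines could be carried between worlds as plain JSON. The property names below are modeled on the sample world in Section 9 and are not a normative schema:

```javascript
// Illustrative component definition. "node3" and the property names follow
// the sample world later in this document; this is a sketch, not a schema.
const cubeComponent = {
  extends: "node3",
  properties: {
    transform: [ 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0 ], // replicated state in the Bobble
    enabled: true,
    angle: 0
  }
};

// Serializing to JSON is what makes the component portable: another world can
// parse this string and reconstruct an equivalent component.
const definition = JSON.stringify(cubeComponent);
const copy = JSON.parse(definition);
```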

8.17 The Future of VWF Components

The real work of the VWF is actually performed by the components inside the Bobbles. These are the objects that know how to tell the view to display them, know how to respond to external user events, and can perform time-based simulations. They can be 3D objects rendered using WebGL, 2D objects that lie flat on the screen, or even zero-D objects that have no visual representation at all but can perform complex computations.

In fact, there are really no special VWF objects. The real distinction is that components inside of Bobbles can send and receive future messages. A future message is virtually any message that a component understands, but sent into the future to be executed at an explicit later time. The syntax is basically the same as sending a normal message to a component, except that we specify how far into the future the message will be executed.

As an example, if we want to rotate a cube in the 3D world by ten degrees around its y-axis, we would normally write:

cube.addRotationAroundY(10);

This would be executed immediately and the cube would be rotated 10 degrees. If instead we wanted to perform this operation sometime in the future – perhaps one second from now – we would write something like this:

cube.future(1000).addRotationAroundY(10);

The only real difference is the future(1000), which specifies that we want the next message – addRotationAroundY() – to be executed 1000 milliseconds, or one second, from now.
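To make the mechanism concrete, here is a toy model of a future send. This is not the real VWF implementation: `future(delay)` returns a wrapper whose method calls are placed on the message queue stamped `delay` milliseconds ahead of the current simulation time, instead of running immediately.

```javascript
// Toy sketch of the future-send mechanism. makeFutureSender() adds a future()
// method to a target object; future(delay) returns a wrapper whose method
// calls are queued for execution delay milliseconds ahead of now().
// The helper names here are assumptions for illustration only.
function makeFutureSender(target, queue, now) {
  target.future = function (delay) {
    const scheduled = {};
    for (const name of Object.keys(target)) {
      if (typeof target[name] === "function" && name !== "future") {
        scheduled[name] = function (...args) {
          // Record the call; the queue owner executes it when time catches up.
          queue.push({
            time: now() + delay,
            invoke: function () { return target[name](...args); }
          });
        };
      }
    }
    return scheduled;
  };
  return target;
}
```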

Virtually any object can be sent messages this way. Outside of a Bobble, we only have indirect references to the components and other objects inside of it. We can still send messages to these inside objects via the FarRef, but we cannot specify when those messages are actually executed. To send a message to the cube inside of the Bobble, we first need a FarRef to the cube – call it farCube – and to send the transform message we would do something like the following:

farCube.future.addRotationAroundY(10);

We cannot specify how far into the future this message will be executed. The only guarantee is that the Reflector will attempt to have it executed as soon as possible. If a delayed future send is necessary, you will need to write another method that performs a future send with the required delay. As an example, if you want the rotation triggered five seconds after the external future send, you could write the following method:

function MyCube() {
    this.myAddRotationAroundY = function (angle, time) {
        this.future(time).addRotationAroundY(angle);
    };
}

Then, you would execute the following from outside of the Bobble:

farCube.future.myAddRotationAroundY(10, 5000);

Note that the timing is still only loosely defined. All you know is that the actual add-rotation message will be executed exactly five seconds after this message is executed, and when that happens depends on the best efforts of the Reflector and the network.


9.0 Sample Implementation

This section provides a sample implementation of significant skeletal pieces of the VWF. Our approach is to write the code in a literate style that makes the functionality of the system clear by the context of the code and comments. The files demonstrate client code that runs in a browser and provides:

A dynamically loaded simulation assembled from components delivered over standard network protocols.

A connection to a conference server providing a shared environment.

A representative connection to a 3D scene manager.

The listing consists of the following files:

index.html – the main page that the user loads into the browser to launch the world. This page and all associated scripts can be delivered by a web server that hosts the simulation components and provides the shared conference space.

vwf.js – the main scripts implementing the VWF.

vwf-model.js – The base implementation for the VWF models.

vwf-model-javascript.js – A stand-in implementation for JavaScript control of simulation objects.

vwf-model-scenejs.js – A stand-in implementation for a WebGL scene manager.

vwf-view.js – The base implementation for the VWF views.

vwf-view-html.js – A stand-in implementation of an HTML-based simulation block diagram.


9.1 index.html

<!DOCTYPE html>
<html>

<!-- The Virtual World Framework client is a collection of scripts and a world specification -->
<!-- passed to an initialization call. In this sample, the world specification is provided -->
<!-- inline for clarity, but it is normally provided by the conference server or may be -->
<!-- specified as a URI and loaded from a network-visible location. -->

<head>

<title>Virtual World Framework</title>

<!-- The Virtual World Framework makes use of the jQuery library. -->

<script type="text/javascript" src="jquery-1.4.4.js"></script>

<!-- This is the main client library. vwf.js creates a framework manager and attaches it to -->
<!-- the global window object as window.vwf. All access to the framework is through that -->
<!-- reference, and no other objects are globally visible. -->

<script type="text/javascript" src="vwf.js"></script>

<!-- The core framework manages the simulation and synchronizes it across worlds shared by -->
<!-- multiple users. But, the manner in which the simulation is expressed is controlled by -->
<!-- extension modules. There are two flavors. Models directly control the simulation but -->
<!-- cannot accept external input. The model configuration is identical for all participants -->
<!-- in a shared world. Views may accept external input - such as pointer and key events or -->
<!-- directives from a connection to an outside engine that is not visible to all users - but -->
<!-- may only affect the simulation indirectly through the synchronization server. -->

<!-- This is the common model implementation and an example model that connects the -->
<!-- simulation to a WebGL scene manager. -->

<script type="text/javascript" src="vwf-model.js"></script>
<script type="text/javascript" src="vwf-model-scenejs.js"></script>

<!-- This is the common view implementation and an example view that summarizes the -->
<!-- simulation state in HTML on the main page. -->

<script type="text/javascript" src="vwf-view.js"></script>
<script type="text/javascript" src="vwf-view-html.js"></script>


<!-- With the scripts loaded, we must initialize the framework. vwf.initialize() accepts -->
<!-- three parameters: a world specification, model configuration parameters, and view -->
<!-- configuration parameters. -->

<script type="text/javascript">

    vwf.initialize(

        // This is the world specification. The world may be specified using a component literal
        // as shown here, or the specification may be placed in a network-visible location and
        // specified here as a URI or as a query parameter to this index page.

        // As a literal:
        //   { extends: "http://vwf.example.com/types/example-type", properties: { ... }, ... }

        // As a string:
        //   "http://vwf.example.com/types/example-type",

        {
            children:
            {
                scene:
                {
                    name: "scene",
                    children:
                    {
                        earth:
                        {
                            name: "earth",
                            extends: "node3",
                            properties:
                            {
                                transform: [ 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 2, 3 ],
                                enabled: true,
                                angle: 0,
                            }
                        },

                        mars:
                        {
                            name: "mars",


                            extends: "node3",
                            properties:
                            {
                                transform: [ 1, 0, 0, 0, 1, 0, 0, 0, 1, 4, 5, 6 ],
                                enabled: true,
                                angle: 0
                            }
                        },

                        venus:
                        {
                            name: "venus",
                            extends: "node3",
                            source: "http://assets.vwf.example.com/venus.js",
                            type: "x-model/scenejs"
                        },
                    }
                }
            }
        },

        // These are the model configurations. Each key within the configuration object is a
        // model name, and each value is an argument or an array of arguments to be passed to
        // the model's constructor.

        // With an array of arguments for the "example" model:
        //   { example: [ p1, p2, ... ] } // ==> new vwf.modules.example( vwf, p1, p2, ... )

        // As a single argument to the "html" view:
        //   { html: "#vwf-root" } // ==> new vwf.modules.html( vwf, "#vwf-root" )

        {
            javascript: [],         // invokes: new vwf.modules.javascript( vwf )
            scenejs: "#vwf-scene",  // invokes: new vwf.modules.scene( vwf, "#vwf-scene" )
        },

        // These are the view configurations. They use the same format as the model
        // configurations.

        {
            html: "#vwf-root"       // invokes: new vwf.modules.html( vwf, "#vwf-root" )
        }

    )

</script>

<link rel="stylesheet" href="index.css" type="text/css" />

</head>

<body>

<!-- Generic clients may have nearly empty pages, but pages for custom clients may be laid -->
<!-- out in any manner desired. Any views and models that render to the page should be -->
<!-- instructed where to attach their content. -->

<h1>World</h1>

<!-- For this sample, the "scenejs" model has been configured to render to "#vwf-scene", the -->
<!-- item with the id "vwf-scene". -->

<div id="vwf-scene"></div>

<!-- The "html" view has been configured to render to "#vwf-root". -->

<div id="vwf-root"></div>

</body>

</html>


9.2 vwf.js

( function ( window ) {

    console.info( "loading vwf" );

    // vwf.js is the main Virtual World Framework manager. It is constructed as a JavaScript module
    // (http://www.yuiblog.com/blog/2007/06/12/module-pattern) to isolate it from the rest of the
    // page's JavaScript environment. The vwf module self-creates its own instance when loaded and
    // attaches to the global window object as window.vwf. Nothing else should affect the global
    // environment.

    window.vwf = new function () {

        console.info( "creating vwf" );

        // == Public attributes ====================================================================

        // Each model and view module loaded by the main page registers itself here.

        this.modules = [];

        // vwf.initialize() creates an instance of each model and view module configured on the main
        // page and attaches them here.

        this.models = [];
        this.views = [];

        // This is the simulation clock, which contains the current time in milliseconds. Time is
        // controlled by the conference server and updates here as we receive control messages.

        this.time = 0;

        // == Private variables ====================================================================

        this.private = {}; // for debugging

        // Components describe the objects that make up the simulation. They may also serve as
        // prototype objects for further derived components. External components are identified by
        // URIs. Once loaded, we save a mapping here from its URI to the node ID of its prototype so
        // that we can find it if it is reused. Components specified internally as object literals
        // are anonymous and are not indexed here.

        var types = this.private.types = {}; // URI => ID


        // Control messages from the conference server are stored here in a priority queue, ordered
        // by execution time.

        var queue = this.private.queue = [];

        // This is the connection to the conference server. In this sample implementation, "socket"
        // is a socket.io client that communicates over a channel provided by the server hosting the
        // client documents.

        var socket = undefined;

        // The proto-prototype of all nodes is "node", identified by this URI. This type is
        // intrinsic to the system, and nothing is loaded from the URI.

        var nodeTypeURI = "http://vwf.example.com/types/node";

        // Each node is assigned an ID as it is created. This is the most recent ID assigned.
        // Communication between the manager and the models and views uses these IDs to refer to the
        // nodes. The manager doesn't maintain any particular state for the nodes and knows them
        // only as their IDs. The models work in federation to provide the meaning to each node.

        var lastID = 0;

        // Callback functions defined in this scope use this local "vwf" to locate the manager.

        var vwf = this;

        // == Public functions =====================================================================

        // -- initialize ---------------------------------------------------------------------------

        // The main page only needs to call vwf.initialize() to launch the world. initialize()
        // accepts three parameters.

        // A component specification identifies the world to be loaded. If a URI is provided, the
        // specification is loaded from there [1]. Alternately, a JavaScript object literal
        // containing the specification may be provided [2]. Since a component can extend and
        // specialize a prototype, using a simple object literal allows an existing component to be
        // configured for special uses [3].
        //
        //   [1] vwf.initialize( "http://vwf.example.com/worlds/sample12345", ... )
        //
        //   [2] vwf.initialize( { source: "model.dae", type: "model/x-collada",


        //         properties: { "p1": ... }, ... }, ... )
        //
        //   [3] vwf.initialize( { extends: "http://vwf.example.com/worlds/sample12345",
        //         source: "alternate-model.dae", type: "model/x-collada" }, ... )
        //
        // modelArguments and viewArguments identify the model and view modules that should be
        // attached to the simulation and provide their configuration parameters. Each argument set
        // is specified as an object (hash) in which each key is the name of a model or view to
        // construct, and the value is the set of arguments to pass to the constructor. The
        // arguments may be specified as an array of values [4], or as a single value if there is
        // only one [5].
        //
        //   [4] vwf.initialize( ..., { scenejs: "#scene" }, { ... } )
        //   [5] vwf.initialize( ..., { ... }, { html: [ "#world", "second param" ] } )

        this.initialize = function ( /* [ componentURI|componentObject ] [ modelArguments ]
            [ viewArguments ] */ ) {

            var args = Array.prototype.slice.call( arguments );

            // Get the world specification if one is provided in the query string. Parse it into a
            // world specification object if it's valid JSON, otherwise keep the query string and
            // assume it's a URI.

            var world = jQuery.getQueryString( "world" );
            try { world = jQuery.parseJSON( world ) || world || {}; } catch ( e ) { }

            // Parse the function parameters. If the first parameter is a string or contains
            // component properties, then treat it as the world specification. Otherwise, fall back
            // to the "world" parameter in the query string.

            if ( typeof args[0] == "string" || args[0] instanceof String ||
                    objectIsComponent( args[0] ) ) {
                world = args.shift();
            }

            // Shift off the parameter containing the model argument lists.

            var modelArgumentLists = args.shift() || {};

            if ( typeof modelArgumentLists != "object" && ! ( modelArgumentLists instanceof Object ) )
                modelArgumentLists = {};

            // Shift off the parameter containing the view argument lists.


            var viewArgumentLists = args.shift() || {};

            if ( typeof viewArgumentLists != "object" && ! ( viewArgumentLists instanceof Object ) )
                viewArgumentLists = {};

            // Register a callback with jQuery to be invoked when the HTML page has finished
            // loading.

            jQuery( window.document ).ready( function () {

                // Create and attach each configured model.

                jQuery.each( modelArgumentLists, function ( modelName, modelArguments ) {
                    var model = vwf.modules[modelName];
                    model && vwf.models.push( model.apply( new model(),
                        [ vwf ].concat( modelArguments || [] ) ) );
                } );

                // Create and attach each configured view.

                jQuery.each( viewArgumentLists, function ( viewName, viewArguments ) {
                    var view = vwf.modules[viewName];
                    view && vwf.views.push( view.apply( new view(),
                        [ vwf ].concat( viewArguments || [] ) ) );
                } );

                // Load the world.

                vwf.ready( world );

            } );

        };

        // -- ready --------------------------------------------------------------------------------

        this.ready = function ( component_uri_or_object ) {

            // Connect to the conference server. This implementation uses the socket.io library,
            // which communicates using a channel back to the server that provided the client
            // documents.

            try {


                socket = new io.Socket();

            } catch ( e ) {

                // If a connection to the conference server is not available, then run in single-
                // user mode. Messages intended for the conference server will loop directly back to
                // us in this case. Start a timer to monitor the incoming queue and dispatch the
                // messages as though they were received from the server.

                this.dispatch( 0 );

                setInterval( function () {
                    vwf.time += 10;
                    vwf.dispatch( vwf.time );
                }, 10 );

            }

            if ( socket ) {

                socket.on( "connect", function () { console.info( "vwf.socket connected" ) } );

                // Configure a handler to receive messages from the server. Note that this example
                // code doesn't implement a robust parser capable of handling arbitrary text and
                // that the messages should be placed in a dedicated priority queue for best
                // performance rather than resorting the queue as each message arrives.
                // Additionally, overlapping messages may cause actions to be performed out of
                // order in some cases if messages are not processed on a single thread.

                socket.on( "message", function ( message ) {

                    console.info( "vwf.socket message " + message );

                    var fields = message.split( " " );

                    // Add the message to the queue and keep it ordered by time.

                    queue.push( fields );
                    queue.sort( function ( a, b ) { return Number( a[0] ) - Number( b[0] ) } );

                    // Each message from the server allows us to move time forward. Parse the
                    // timestamp from the message and call dispatch() to execute all queued actions
                    // through that time, including the message just received.


                    // The simulation may perform immediate actions at the current time or it may
                    // post actions to the queue to be performed in the future. But we only move
                    // time forward for items arriving in the queue from the conference server.

                    var time = Number( fields[0] );
                    vwf.dispatch( time );

                } );

                socket.on( "disconnect", function () { console.log( "vwf.socket disconnected" ) } );

                // Start communication with the conference server.

                socket.connect();

            }

            // Load the world. The world is rooted in a single node constructed here as an
            // instance of the component passed to initialize(). That component, its prototype(s),
            // and its children, and their prototypes and children, flesh out the entire world.

            this.createNode( component_uri_or_object );

        };

        // -- send ---------------------------------------------------------------------------------

        // Send a message to the conference server. The message will be reflected back to all
        // participants in the conference.

        this.send = function ( /* nodeID, actionName, parameters ... */ ) {

            var args = Array.prototype.slice.call( arguments );

            // Attach the current simulation time and pack the message as an array of the arguments.

            var fields = [ this.time ].concat( args );

            if ( socket ) {

                // Send the message if the connection is available.

                var message = fields.join( " " );
                socket.send( message );


            } else {

                // Otherwise, for single-user mode, loop it immediately back to the incoming queue.

                queue.push( fields );
                queue.sort( function ( a, b ) { return Number( a[0] ) - Number( b[0] ) } );

            }

        };

        // -- receive ------------------------------------------------------------------------------

        // Handle receipt of a message. Unpack the arguments and call the appropriate handler.

        this.receive = function ( message ) {

            // Note that this example code doesn't implement a robust parser capable of handling
            // arbitrary text. Additionally, the message should be validated before looking up and
            // invoking an arbitrary handler.

            var fields = message.split( " " );

            // Shift off the now-unneeded time parameter (dispatch() has already advanced the time)
            // and locate the node ID and action name.

            var time = Number( fields.shift() );
            var nodeID = Number( fields.shift() );
            var actionName = fields.shift();

            // Look up the action handler and invoke it with the node ID and the remaining
            // parameters.

            this[actionName] && this[actionName].apply( this, [ nodeID ].concat( fields ) );

        };

        // -- dispatch -----------------------------------------------------------------------------

        // Dispatch incoming messages waiting in the queue. "currentTime" specifies the current
        // simulation time that we should advance to and was taken from the time stamp of the last
        // message received from the conference server.

        this.dispatch = function ( currentTime ) {


            // Handle messages until we empty the queue or reach the new current time.

            while ( queue.length > 0 && Number( queue[0][0] ) <= currentTime ) {

                // Set the simulation time to the message time, remove the message and perform the
                // action.

                this.time = Number( queue[0][0] );
                this.receive( queue.shift().join( " " ) );

            }

            // Set the simulation time to the new current time.

            this.time = currentTime;

        };

        // -- createNode ---------------------------------------------------------------------------

        // Create a node from a component specification. Construction may require loading data from
        // multiple remote documents. This function returns before construction is complete. A
        // callback is invoked once the node has fully loaded.
        //
        // A simple node consists of a set of properties, methods and events, but a node may
        // specialize a prototype component and may also contain multiple child nodes, any of which
        // may specialize a prototype component and contain child nodes, etc. So components cover a
        // vast range of complexity. The world definition for the overall simulation is a single
        // component instance.
        //
        // A node is a component instance--a single, anonymous specialization of its component.
        // Nodes specialize components in the same way that any component may specialize a prototype
        // component. The prototype component is made available as a base, then new or modified
        // properties, methods, events, child nodes and scripts are attached to modify the base
        // implementation.
        //
        // To create a node, we first make the prototype available by loading it (if it has not
        // already been loaded). This is a recursive call to createNode() with the prototype
        // specification. Then we add new, and modify existing, properties, methods, and events
        // according to the component specification. Then we load and add any children, again
        // recursively calling createNode() for each. Finally, we attach any new scripts and invoke
        // an initialization function.

        this.createNode = function ( /* [ parentID, ] */ component_uri_or_object, callback ) {


            console.info( "vwf.createNode " + component_uri_or_object );

            var name = undefined;

            // Any component specification may be provided as either a URI identifying a network
            // resource containing the specification or as an object literal that provides the data
            // directly.

            // We must resolve a URI to an object before we can create the component. If the
            // specification parameter is a string, treat it as a URI and load the document at that
            // location. Call construct() with the specification once it has loaded.

            if ( typeof component_uri_or_object == "string" ||
                    component_uri_or_object instanceof String ) {

                // nodeTypeURI is a special URI identifying the base "node" component that is the
                // ultimate prototype of all other components. Its specification is known
                // intrinsically and does not exist as a network resource. If the component URI
                // identifies "node", call construct() directly and pass a null prototype and an
                // empty specification.

                if ( component_uri_or_object == nodeTypeURI ) {

                    var prototypeID = undefined;
                    var component = {};

                    console.log( "vwf.createNode: creating " + nodeTypeURI + " prototype" );

                    construct.call( this, prototypeID, component );

                // For any other URI, load the document. Once it loads, call findType() to locate or
                // load the prototype node, then pass the prototype and the component specification
                // to construct().

                } else {

                    console.log( "vwf.createNode: creating node of type " + component_uri_or_object );

                    jQuery.ajax( {

                        url: component_uri_or_object,
                        dataType: "jsonp",
                        jsonpCallback: "cb",

                        success: function ( component ) {
                            this.findType( component[ "extends" ] || nodeTypeURI,
                                function ( prototypeID ) {
                                    construct.call( this, prototypeID, component );
                                } )


                        },

                        context: this

                    } );

                }

            // If a component literal was provided, call findType() to locate or load the prototype
            // node, then pass the prototype and the component specification to construct().

            } else {

                var component = component_uri_or_object;

                console.log( "vwf.createNode: creating node of literal subclass of " +
                    ( component[ "extends" ] || nodeTypeURI ) );

                this.findType( component[ "extends" ] || nodeTypeURI, function ( prototypeID ) {
                    construct.call( this, prototypeID, component );
                } );

            }

            // When we arrive here, we have a prototype node in hand (by way of its ID) and an
            // object containing a component specification. We now need to create and assemble the
            // new node.
            //
            // The VWF manager doesn't directly manipulate any node. The various models act in
            // federation to create the greater model. The manager simply routes messages within the
            // system to allow the models to maintain the necessary data. Additionally, the views
            // receive similar messages that allow them to keep their interfaces current.
            //
            // To create a node, we simply assign a new ID, then invoke a notification on each model
            // and a notification on each view.

            function construct( prototypeID, component ) {

                // Allocate an ID for the node. We just use an incrementing counter.

                var nodeID = ++lastID;

                console.info( "vwf.createNode " + nodeID + " " +
                    component.source + " " + component.type );

                // Call creatingNode() on each model. The node is considered to be constructed after
                // each model has run.


        jQuery.each( vwf.models, function ( index, model ) {
            model.creatingNode && model.creatingNode(
                nodeID, name, prototypeID, [], component.source, component.type );
        } );

        // Call createdNode() on each view. The view is being notified of a node that has
        // been constructed.

        jQuery.each( vwf.views, function ( index, view ) {
            view.createdNode && view.createdNode(
                nodeID, name, prototypeID, [], component.source, component.type );
        } );

        // Create the properties, methods, and events.

        // This is a placeholder for creating the properties, methods, and events. For each
        // item in each set, we invoke vwf.createProperty(), vwf.createMethod(), or
        // vwf.createEvent() to create the field. Each of those functions delegates to the
        // models and views as we just did above.

        // Create and attach the children.

        // This is a placeholder for creating and attaching the children. For each child, we
        // call vwf.createNode() with the child's component specification, then once loaded,
        // call vwf.addChild() to attach the new node as a child. addChild() delegates to
        // the models and views as before.

        // Attach the scripts.

        // This is a placeholder for attaching and evaluating scripts. For each script, we
        // load the network resource if the script is specified as a URI, then once loaded,
        // call vwf.execute() to direct any model that manages scripts of this item's type
        // to evaluate the script, perform any immediate actions, and retain any callbacks
        // as appropriate for the script type.

        // Invoke an initialization method.

        // This is a placeholder for a call into the object to invoke its initialize() method
        // if it has a script attached that provides one.

        // The node is complete. Invoke the callback method and pass the new node ID and the
        // ID of its prototype. If this was the root node for the world, the world is now
        // fully initialized.

        callback && callback.call( this, nodeID, prototypeID );

    }

};

// -- findType -----------------------------------------------------------------------------

// Find or load a node that will serve as the prototype for a component specification. If
// the component is identified using a URI, save a mapping from the URI to the prototype ID
// in the "types" database for reuse. If the component is not identified by a URI, don't
// save a reference in the database (since no other component can refer to it), and just
// create it as an anonymous type.

this.findType = function ( component_uri_or_object, callback ) {

    var typeURI = undefined, typeID = undefined;

    // If the component is identified using a URI, look in the database to see if it has
    // already been loaded.

    if ( typeof component_uri_or_object == "string" ||
            component_uri_or_object instanceof String ) {
        typeURI = component_uri_or_object;
        typeID = types[typeURI];
    }

    // If we found the URI in the database, invoke the callback with the ID of the
    // previously-loaded prototype node.

    if ( typeID ) {

        callback && callback.call( this, typeID );

    // If the type has not been loaded but is identified with a URI, call createNode() to
    // make the node that we will use as the prototype. When it loads, save the ID in the
    // types database and invoke the callback with the new prototype node's ID.

    } else if ( typeURI ) {

        this.createNode( component_uri_or_object, function ( typeID, prototypeID ) {
            types[typeURI] = typeID;
            callback && callback.call( this, typeID );
        } );

    // If the type is specified as a component literal, call createNode() then invoke the
    // callback, but don't save a reference in the types database.

    } else {

        this.createNode( component_uri_or_object, function ( typeID, prototypeID ) {
            callback && callback.call( this, typeID );
        } );

    }

};

// -- setProperty --------------------------------------------------------------------------

// Set a property value on a node.

this.setProperty = function ( nodeID, propertyName, propertyValue ) {

    console.info( "vwf.setProperty " + nodeID + " " + propertyName + " " + propertyValue );

    // Call settingProperty() on each model. The property is considered set after each model
    // has run.

    jQuery.each( vwf.models, function ( index, model ) {
        model.settingProperty && model.settingProperty( nodeID, propertyName, propertyValue );
    } );

    // Call satProperty() on each view. The view is being notified of a property that has
    // been set.

    jQuery.each( vwf.views, function ( index, view ) {
        view.satProperty && view.satProperty( nodeID, propertyName, propertyValue );
    } );

    return propertyValue;

};

// -- getProperty --------------------------------------------------------------------------

// Get a property value for a node.

this.getProperty = function ( nodeID, propertyName ) {

    console.info( "vwf.getProperty " + nodeID + " " + propertyName );

    // Call gettingProperty() on each model. The last model to return a non-undefined value
    // dictates the return value.

    var propertyValue = undefined;

    jQuery.each( vwf.models, function ( index, model ) {
        var value = model.gettingProperty && model.gettingProperty( nodeID, propertyName );
        propertyValue = value !== undefined ? value : propertyValue;
    } );

    // Call gotProperty() on each view.

    jQuery.each( vwf.views, function ( index, view ) {
        view.gotProperty && view.gotProperty( nodeID, propertyName, propertyValue );
    } );

    return propertyValue;

};

// == Private functions ====================================================================

// -- objectIsComponent --------------------------------------------------------------------

// Determine if a JavaScript object is a component specification by searching for component
// specification attributes in the candidate object.

var objectIsComponent = function ( candidate ) {

    var componentAttributes = [
        "extends",
        "implements",
        "source",
        "type",
        "properties",
        "methods",
        "events",
        "children",
        "scripts",
    ];

    var isComponent = false;

    if ( ( typeof candidate == "object" || candidate instanceof Object ) && candidate != null ) {
        jQuery.each( componentAttributes, function ( index, attributeName ) {
            isComponent = isComponent || Boolean( candidate[attributeName] );
        } );
    }

    return isComponent;

};

};

} ) ( window );
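Because createNode() accepts either a URI or a component literal, the objectIsComponent() test above is central to the API. It can be exercised in isolation with a jQuery-free restatement (using Array.prototype.some in place of jQuery.each; the sample candidates are hypothetical):

```javascript
// Standalone restatement of the objectIsComponent() test above. An object is
// treated as a component specification when it carries any component attribute.
var componentAttributes = [
    "extends", "implements", "source", "type",
    "properties", "methods", "events", "children", "scripts"
];

function objectIsComponent( candidate ) {
    if ( typeof candidate != "object" || candidate === null ) return false;
    return componentAttributes.some( function ( attributeName ) {
        return Boolean( candidate[attributeName] );
    } );
}

console.log( objectIsComponent( { "extends": "http://vwf.example.com/types/node3" } ) ); // true
console.log( objectIsComponent( { color: "red" } ) );                                    // false
console.log( objectIsComponent( "http://vwf.example.com/types/node3" ) );                // false
```

A string candidate fails the test, which is exactly how createNode() distinguishes a type URI from a component literal.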

// Extend jQuery to add a function to retrieve parameters from the page's query string.
// From http://stackoverflow.com/questions/901115/get-querystring-values-with-jquery/2880929#2880929
// and http://stackoverflow.com/questions/901115/get-querystring-values-with-jquery/3867610#3867610.

jQuery.extend( {

    getQueryString: function ( name ) {

        function parseParams() {
            var params = {},
                e,
                a = /\+/g,  // regex for replacing addition symbol with a space
                r = /([^&;=]+)=?([^&;]*)/g,
                d = function ( s ) { return decodeURIComponent( s.replace( a, " " ) ); },
                q = window.location.search.substring( 1 );
            while ( e = r.exec( q ) )
                params[ d( e[1] ) ] = d( e[2] );
            return params;
        }

        if ( ! this.queryStringParams )
            this.queryStringParams = parseParams();

        return this.queryStringParams[name];

    }  // getQueryString

} );
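The parsing logic inside getQueryString() can be sketched in isolation by applying it to a fixed query string instead of window.location (the parameter names below are hypothetical):

```javascript
// Standalone sketch of the query-string parsing used by getQueryString() above.
function parseParams( query ) {
    var params = {},
        e,
        a = /\+/g,                   // regex for replacing the addition symbol with a space
        r = /([^&;=]+)=?([^&;]*)/g,  // name=value pairs separated by & or ;
        d = function ( s ) { return decodeURIComponent( s.replace( a, " " ) ); };
    while ( e = r.exec( query ) )
        params[ d( e[1] ) ] = d( e[2] );
    return params;
}

var params = parseParams( "world=sample_device&user=alice+b" );
console.log( params.world );  // "sample_device"
console.log( params.user );   // "alice b"
```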


9.3 vwf-model.js

( function ( modules ) {

    console.info( "loading vwf.model" );

    // vwf-model.js is the common implementation of all Virtual World Framework models. Each model
    // is part of a federation with other models attached to the simulation that implements part of
    // the greater model. Taken together, the models create the entire model system for the
    // simulation.
    //
    // Models are inside of, and directly part of, the simulation. They may control the simulation
    // and cause immediate change, but they cannot accept external input. The model configuration is
    // identical for all participants in a shared world.
    //
    // A given model might be responsible for a certain subset of nodes in the simulation, such
    // as those representing Flash objects. Or it might implement part of the functionality of any
    // node, such as translating 3D transforms and material properties back and forth to a scene
    // manager. Or it might implement functionality that is only active for a short period, such as
    // importing a document.
    //
    // vwf-model, as well as all deriving models, is constructed as a JavaScript module
    // (http://www.yuiblog.com/blog/2007/06/12/module-pattern). It attaches to the vwf modules list
    // as vwf.modules.model.

    var module = modules.model = function ( vwf ) {

        if ( ! vwf ) return;

        console.info( "creating vwf.model" );

        this.vwf = vwf;

        return this;

    };

    // == Stimulus API =============================================================================

    // The base model stands between the VWF manager and the deriving model classes. API calls pass
    // through in two directions. Calls from a deriving model to the manager are commands, causing
    // change. These calls are the stimulus half of the API.
    //
    // For models, stimulus calls pass directly through to the manager. (Views make these calls
    // through the conference reflector.) Future development will move some functionality from the
    // deriving models to provide a common service for mapping between vwf and model object
    // identifiers.

    // -- createNode -------------------------------------------------------------------------------

    module.prototype.createNode = function ( component_uri_or_object, callback ) {
        console.info( "vwf.model.createNode " + component_uri_or_object );
        return this.vwf.createNode( component_uri_or_object, callback );
    };

    // deleteNode, addChild, removeChild

    // createProperty, deleteProperty

    // -- setProperty ------------------------------------------------------------------------------

    module.prototype.setProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.model.setProperty " + nodeID + " " + propertyName + " " + propertyValue );
        return this.vwf.setProperty( nodeID, propertyName, propertyValue );
    };

    // -- getProperty ------------------------------------------------------------------------------

    module.prototype.getProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.model.getProperty " + nodeID + " " + propertyName + " " + propertyValue );
        return this.vwf.getProperty( nodeID, propertyName, propertyValue );
    };

    // createMethod, deleteMethod, callMethod

    // createEvent, deleteEvent, addEventListener, removeEventListener, fireEvent

    // execute

    // time

    // == Response API =============================================================================

    // Calls from the manager to a deriving model are notifications, informing of change. These
    // calls are the response half of the API.

    // For models, responses are where work is actually performed, and response implementations may
    // generate additional stimulus calls. (In contrast, views generally transfer data outward, away
    // from the simulation when handling a response.)

    // Each of these implementations provides the default, null response. A deriving model only
    // needs to implement the response handlers that it needs for its work. These will handle the
    // rest.

    // -- creatingNode -----------------------------------------------------------------------------

    module.prototype.creatingNode = function ( nodeID, nodeName, nodeExtendsID, nodeImplementsIDs,
            nodeSource, nodeType ) {
        console.info( "vwf.model.creatingNode " + nodeID + " " + nodeName + " " +
            nodeExtendsID + " " + nodeImplementsIDs + " " + nodeSource + " " + nodeType );
    };

    // deletingNode, addingChild, removingChild

    // creatingProperty, deletingProperty

    // -- settingProperty --------------------------------------------------------------------------

    module.prototype.settingProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.model.settingProperty " + nodeID + " " + propertyName + " " + propertyValue );
    };

    // -- gettingProperty --------------------------------------------------------------------------

    module.prototype.gettingProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.model.gettingProperty " + nodeID + " " + propertyName + " " + propertyValue );
    };

    // creatingMethod, deletingMethod, callingMethod

    // creatingEvent, deletingEvent, firingEvent

    // executing

} ) ( window.vwf.modules );
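The pattern this base model enables, null default handlers with prototype delegation so that a deriving model overrides only what it needs, can be sketched in isolation (the "loggingModel" name and behavior here are hypothetical, not part of the framework):

```javascript
// Sketch of the delegation pattern used by deriving models: the base model
// supplies default (null) response handlers, and a deriving model overrides
// only the handlers it needs.
var baseModel = function () {};

baseModel.prototype.creatingNode = function ( nodeID, nodeName ) { /* default: do nothing */ };
baseModel.prototype.settingProperty = function ( nodeID, propertyName, propertyValue ) { /* default */ };

var loggingModel = function () {};
loggingModel.prototype = new baseModel();  // delegate unimplemented handlers to the base model

loggingModel.prototype.settingProperty = function ( nodeID, propertyName, propertyValue ) {
    return "set " + propertyName + " on " + nodeID;  // override just this one handler
};

var model = new loggingModel();
console.log( model.settingProperty( 1, "value", 1 ) );  // "set value on 1"
console.log( typeof model.creatingNode );               // "function" (inherited from the base)
```

Section 9.4 below applies this same pattern with `module.prototype = new modules.model()`.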


9.4 vwf-model-javascript.js

( function ( modules ) {

    console.info( "loading vwf.model.javascript" );

    // vwf-model-javascript.js is a placeholder for the JavaScript object interface to the
    // simulation.
    //
    // vwf-model-javascript is a JavaScript module
    // (http://www.yuiblog.com/blog/2007/06/12/module-pattern). It attaches to the vwf modules
    // list as vwf.modules.javascript.

    var module = modules.javascript = function ( vwf ) {

        if ( ! vwf ) return;

        console.info( "creating vwf.model.javascript" );

        modules.model.call( this, vwf );

        this.types = {};
        this.root = undefined;
        this.nodes = {};

        return this;

    };

    // Delegate any unimplemented functions to vwf-model.

    module.prototype = new modules.model();

    // == Response API =============================================================================

    // This is a placeholder for providing a natural integration between the simulation and the
    // browser's JavaScript environment.
    //
    // Within the JavaScript environment, component instances appear as JavaScript objects.
    //
    //   - Properties appear in the "properties" field. Each property contains a getter and setter
    //     callback to notify the object of property manipulation.
    //   - Methods appear in "methods".
    //   - Events appear in "events".
    //   - "parent" refers to the parent node and "children" is an array of the child nodes.
    //
    //   - Node prototypes use the JavaScript prototype chain.
    //   - Properties, methods, events, and children may be referenced directly on the node or
    //     within their respective collections by name when there is no conflict with another
    //     attribute.
    //   - Properties support getters and setters that invoke a handler that may influence the
    //     property access.

    // -- creatingNode -----------------------------------------------------------------------------

    module.prototype.creatingNode = function ( nodeID, nodeName, nodeExtendsID, nodeImplementsIDs,
            nodeSource, nodeType ) {

        console.info( "vwf.model.javascript.creatingNode " + nodeID + " " + nodeName + " " +
            nodeExtendsID + " " + nodeImplementsIDs + " " + nodeSource + " " + nodeType );

        var type = this.types[nodeExtendsID];

        if ( ! type ) {
            var prototype = this.nodes[nodeExtendsID];
            this.types[nodeExtendsID] = type = function () { node.apply( this, arguments ) };
            type.prototype = prototype;
            type.prototype.constructor = type;
        }

        this.nodes[nodeID] = new type( nodeName, nodeSource, nodeType );

    };

    // -- settingProperty --------------------------------------------------------------------------

    module.prototype.settingProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.model.javascript.settingProperty " + nodeID + " " +
            propertyName + " " + propertyValue );
    };

    // -- gettingProperty --------------------------------------------------------------------------

    module.prototype.gettingProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.model.javascript.gettingProperty " + nodeID + " " +
            propertyName + " " + propertyValue );
    };

    // == Node =====================================================================================

    var node = function ( nodeName, nodeSource, nodeType ) {

        this.parent = undefined;

        this.name = nodeName;
        this.source = nodeSource;
        this.type = nodeType;

        this.properties = {};
        this.methods = {};
        this.events = {};
        this.children = [];

    };

} ) ( window.vwf.modules );



9.6 vwf-view.js

( function ( modules ) {

    console.info( "loading vwf.view" );

    // vwf-view.js is the common implementation of all Virtual World Framework views. Views
    // interpret information from the simulation, present it to the user, and accept user input
    // influencing the simulation.
    //
    // Views are outside of the simulation. Unlike models, they may accept external input--such as
    // pointer and key events from a user--but may only affect the simulation indirectly through the
    // synchronization server.
    //
    // vwf-view, as well as all deriving views, is constructed as a JavaScript module
    // (http://www.yuiblog.com/blog/2007/06/12/module-pattern). It attaches to the vwf modules list
    // as vwf.modules.view.

    var module = modules.view = function ( vwf ) {

        if ( ! vwf ) return;

        console.info( "creating vwf.view" );

        this.vwf = vwf;

        return this;

    };

    // == Stimulus API =============================================================================

    // The base view stands between the VWF manager and the deriving view classes. API calls pass
    // through in two directions. Calls from a deriving view to the manager are commands, causing
    // change. These calls are the stimulus half of the API.
    //
    // Since views cannot directly manipulate the simulation, stimulus calls are sent via the manager
    // to the replication server. Future development will move some functionality from the deriving
    // views to provide a common service for mapping between vwf and view object identifiers.

    // -- createNode -------------------------------------------------------------------------------

    module.prototype.createNode = function ( component_uri_or_object ) {
        console.info( "vwf.view.createNode " + component_uri_or_object );
        this.vwf.send( undefined, "createNode", component_uri_or_object );
    };

    // deleteNode, addChild, removeChild

    // createProperty, deleteProperty

    // -- setProperty ------------------------------------------------------------------------------

    module.prototype.setProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.view.setProperty " + nodeID + " " + propertyName + " " + propertyValue );
        this.vwf.send( nodeID, "setProperty", propertyName, propertyValue );
    };

    // -- getProperty ------------------------------------------------------------------------------

    module.prototype.getProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.view.getProperty " + nodeID + " " + propertyName + " " + propertyValue );
        this.vwf.send( nodeID, "getProperty", propertyName, propertyValue );
    };

    // createMethod, deleteMethod, callMethod

    // createEvent, deleteEvent, addEventListener, removeEventListener, fireEvent

    // execute

    // time

    // == Response API =============================================================================

    // Calls from the manager to a deriving view are notifications, informing of change. These calls
    // are the response half of the API.

    // Views generally handle a response by updating a UI element to reflect the internal state
    // change in the simulation.

    // Each of these implementations provides the default, null response. A deriving view only needs
    // to implement the response handlers that it needs for its work. These will handle the rest.

    // -- createdNode ------------------------------------------------------------------------------

    module.prototype.createdNode = function ( nodeID, nodeName, nodeExtendsID, nodeImplementsIDs,
            nodeSource, nodeType ) {
        console.info( "vwf.view.createdNode " + nodeID + " " + nodeName + " " +
            nodeExtendsID + " " + nodeImplementsIDs + " " + nodeSource + " " + nodeType );
    };

    // deletedNode, addedChild, removedChild

    // createdProperty, deletedProperty

    // -- satProperty ------------------------------------------------------------------------------

    // Please excuse the horrible grammar. It needs to be a past tense verb distinct from the
    // present tense command that invokes the action.

    module.prototype.satProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.view.satProperty " + nodeID + " " + propertyName + " " + propertyValue );
    };

    // -- gotProperty ------------------------------------------------------------------------------

    module.prototype.gotProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.view.gotProperty " + nodeID + " " + propertyName + " " + propertyValue );
    };

    // createdMethod, deletedMethod, calledMethod

    // createdEvent, deletedEvent, firedEvent

    // executed

} ) ( window.vwf.modules );


9.7 vwf-view-html.js

( function ( modules ) {

    console.info( "loading vwf.view.html" );

    // vwf-view-html.js is a placeholder for an HTML view of the simulation state. It is a stand-in
    // for any number of potential UI elements, including WebGL renderings, traditional UI controls,
    // and connections to external services.
    //
    // vwf-view-html is a JavaScript module (http://www.yuiblog.com/blog/2007/06/12/module-pattern).
    // It attaches to the vwf modules list as vwf.modules.html.

    var module = modules.html = function ( vwf ) {

        if ( ! vwf ) return;

        console.info( "creating vwf.view.html" );

        modules.view.call( this, vwf );

        return this;

    };

    // Delegate any unimplemented functions to vwf-view.

    module.prototype = new modules.view();

    // == Response API =============================================================================

    // This is a placeholder for maintaining a view of the changing state of the simulation using
    // nested HTML block elements.

    // -- createdNode ------------------------------------------------------------------------------

    module.prototype.createdNode = function ( nodeID, nodeName, nodeExtendsID, nodeImplementsIDs,
            nodeSource, nodeType ) {
        console.info( "vwf.view.html.createdNode " + nodeID + " " + nodeName + " " +
            nodeExtendsID + " " + nodeImplementsIDs + " " + nodeSource + " " + nodeType );
    };

    // -- satProperty ------------------------------------------------------------------------------

    module.prototype.satProperty = function ( nodeID, propertyName, propertyValue ) {
        console.info( "vwf.view.html.satProperty " + nodeID + " " + propertyName + " " + propertyValue );
    };

} ) ( window.vwf.modules );


10.0 Sample Application

This is a sample world demonstrating how a device simulation may be created in JSON. In this example, a generic device consisting of a power switch and power indicator is defined. The switch and indicator are based on components provided outside of the device (allowing them to be reused in other devices). The switch and indicator extend a third control component that allows them to share common parts of their behavior.

The device component assembles the switch and indicator and configures them for its particular use. It specifies the 3D model that provides the visual representation, and it configures parameters to override the switch's default initial state and to increase the indicator's brightness.

It also provides the "wiring" that connects the switch to the indicator. As the switch changes state, such as after being clicked by the user, it invokes a function, provided by the device component, that transfers the switch state to the indicator.

The following diagram demonstrates the flow of data in this scenario.

The simulated device may be the entire world or may be used as a part of a larger world. In this example, the world definition is available at http://vwf.example.com/types/sample_device . The conference server is configured to launch using that URL.

A world, and the components that compose it, are described using a structured object format encoded as JSON. When a component is loaded into a world, a node representing the component is created in the simulated environment, and the properties and scripts defined for the component are loaded into the node. Additionally, any child nodes defined by the component are loaded and attached as child nodes.


http://vwf.example.com/types/sample_device:

01  {
02      extends: "http://vwf.example.com/types/node3",
03      source: "sample_device.dae", type: "model/collada+xml",
04
05      children: {
06
07          "power_switch": {
08
09              extends: "http://vwf.example.com/types/switch",
10
11              properties: {
12                  value: 1
13              }
14          },
15
16          "power_indicator": {
17
18              extends: "http://vwf.example.com/types/indicator",
19
20              properties: {
21                  brightness: 10
22              }
23          }
24      },
25
26      scripts: [
27          {
28              text: "
29
30                  this.children.power_switch.events.value_changed.addListener( function( value ) {
31                      this.children.power_indicator.value = value
32                  } )
33              ",
34              type: "application/javascript"
35          }
36      ]
37  }

01: Components are specified as JavaScript objects. They are encoded to JSON when stored in files and transferred across networks.

02: This simulated device is a 3D object. It specifies the standard "node3" component as a prototype. By inheritance, node3 grants the component the ability to manipulate 3D attributes such as location, orientation, and appearance.

03: The source and type fields may specify base data that the component should wrap. In this case, they refer to a 3D model in COLLADA format. The 3D scene manager will load the model, and the scripts defined in the component will control it.


07, 16: This component specifies two child nodes. If the child names specified here match child nodes in the "sample_device.dae" 3D model, the child components will attach to and control those nodes. If not, the child nodes will still be created as empty 3D nodes.

09, 18: The two child nodes are specializations of two components defined below. "power_switch" is a configured copy of the "http://vwf.example.com/types/switch" component. Similarly, "power_indicator" is a configured copy of "http://vwf.example.com/types/indicator".

12, 21: The ".../switch" component defines a "value" property and assigns a default value of 0. Here, we override that default with a new initial value of 1. For the switch component, this means the "on" position. Similarly, we override the "brightness" property of the ".../indicator" component.

26: The scripts field provides JavaScript code to control the world defined in the component. Simple scripts may be provided inline as shown here. Longer scripts, or scripts shared by multiple components, may be specified using a URI and loaded separately.

30: The "power_switch" child sends an event when the switch position changes. Here, we attach a short function to receive that event and to turn on or off the "power_indicator" in response.
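As annotation 01 notes, components are JavaScript objects that round-trip through JSON for storage and transfer. A minimal sketch of that round trip, using a hypothetical component:

```javascript
// A component is a JavaScript object, serialized to JSON for files and network
// transfer, and parsed back into an object when loaded into a world.
var component = {
    "extends": "http://vwf.example.com/types/node3",
    properties: { value: 1 }
};

var encoded = JSON.stringify( component );  // stored in a file or sent across the network
var decoded = JSON.parse( encoded );        // reconstituted as a JavaScript object

console.log( decoded["extends"] );          // "http://vwf.example.com/types/node3"
console.log( decoded.properties.value );    // 1
```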

This is a "switch" component, which implements a simple two-position switch. It assumes that it is attached to a 3D node where the "off" and "on" positions are at -45 and 45 degrees rotation, respectively. A more complete switch component would provide configuration options to control the rotation limits.


http://vwf.example.com/types/switch:

01  {
02      extends: "http://vwf.example.com/types/control",
03
04      properties: {
05          value: {
06              set: {
07                  text: "
08                      this.value = Number( value ) > 0.5 ? 1 : 0;
09                      this.transform.rotation.z = this.value ? 45 : -45;
10                  ",
11                  type: "application/javascript"
12              }
13          }
14      },
15
16      methods: {
17
18          toggle: {
19              text: "this.value = this.value ? 0 : 1",
20              type: "application/javascript"
21          },
22
23          pointer_click: {
24              text: "this.toggle()",
25              type: "application/javascript"
26          }
27      }
28  }

02: "switch" is a specialization of the "control" component, which provides functions common to input controls such as switches and output controls such as indicator lights.

05: The "control" component defines a "value" property. Here, we attach a setter function to that property so that we are notified when it changes. Our function restricts the property to contain only the values 0 or 1 (off and on, respectively) and updates the rotation property of the attached 3D node to match the selected position.

16: Methods are incoming actions for the component. In contrast, events are outgoing actions.

18: This method toggles the state of the switch when invoked. We simply set the value property to 1 if it was 0, or to 0 if it was 1. The setter function will trap the property change and update the 3D node's orientation appropriately.

23: "pointer_click" is a user event that the system delivers to the node when the user clicks on it.

This component interprets a pointer click as a request to toggle the switch's state. We do this by calling the "toggle" method.


This is an "indicator" component, which implements a light controlled by the "value" property. It assumes that the first material on the attached 3D node can be manipulated to control the light attributes.

http://vwf.example.com/types/indicator:

01 {
02   extends: "http://vwf.example.com/types/control",
03
04   properties: {
05
06     brightness: 1,
07
08     value: {
09       set: {
10         text: "
11           this.value = value;
12           this.materials[0].emissive = this.value ? this.brightness : 0;
13         ",
14         type: "application/javascript"
15       }
16     }
17   }
18 }

02: "indicator" is a specialization of the "control" component, which provides functions common to input controls such as switches and output controls such as indicator lights.

06: We add a "brightness" property to specify the brightness of the light when it is on. The default value is 1, but it may be overridden when this component is used.

08: The "control" component defines a "value" property. Here, we attach a setter function to that property so that we are notified when it changes. Our function updates the material of the attached 3D node to turn the brightness up or down to match the selected value.

This is a "control" component, which provides functions common to input controls such as switches and output controls such as indicator lights. It implements a "value" property with a default value of 0 and a "value_changed" event to signal when the property changes. This component doesn't assign any particular meaning to the "value" property. That is left to the specializations.


http://vwf.example.com/types/control:

01 {
02   extends: "http://vwf.example.com/types/node3",
03
04   properties: {
05     value: {
06       value: 0,
07       set: {
08         text: "
09           if ( value != this.value ) {
10             this.value = value;
11             this.events.value_changed.fire( this.value );
12           }
13         ",
14         type: "application/javascript"
15       }
16     }
17   },
18
19   events: {
20     value_changed: null
21   }
22 }

02: A "control" is a 3D object. It specifies the standard "node3" component as its prototype. node3 grants the component, by inheritance, the ability to manipulate 3D attributes such as location, orientation, and visual attributes.

05: This is the "value" property.

06: The property's default value is 0.

07: We specify a setter function for the property so that we are notified when it is set by any source. Our function sends the "value_changed" event if the property changes value.

20: This is the "value_changed" event. An event provides a "fire()" action that the component may use to fire the event, and allows other components to attach listener functions to be notified when the event fires.
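The fire-and-listen contract described here can be sketched in plain JavaScript. This is an illustrative stand-in, not the framework's implementation; the `makeEvent` helper and its field names are assumptions:

```javascript
// Minimal sketch of a VWF-style event: fire() invokes each attached
// listener with the fired arguments. Illustration only.
function makeEvent() {
  const listeners = [];
  return {
    listen: function( fn ) { listeners.push( fn ); },
    fire: function( ...args ) { listeners.forEach( fn => fn( ...args ) ); }
  };
}

// Usage: a "value_changed" event like the one on the control component.
const value_changed = makeEvent();
let seen = null;
value_changed.listen( function( value ) { seen = value; } );
value_changed.fire( 1 );
console.log( seen ); // 1
```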


11.0 Data and Message Formats

This design is taken from the viewpoint of the component developer, who is mainly interested in scripting local behavior without being burdened with the plumbing details of the underlying system.

The API should be expressive, so that simple actions may be performed simply, and should map naturally into the JavaScript environment that the components see. Additionally, the system needs to track state changes in the model so that it can keep the views informed.

A good way to enable components to interoperate is to encourage them to express their interfaces as a set of properties, methods, and events. Properties contain the public state, methods respond to incoming actions, and events generate outgoing actions. With the state exposed in the properties, components may be easily saved, restored, and replicated. With incoming and outgoing actions well-defined, components may be integrated using minimal glue code.

The properties, methods, and events are not simply manifestations within the components’ JavaScript objects, however. Some of a component’s parts are directly attached to the JavaScript object. Others – namely those controlled by a 3D scene manager – are not.

We need a way to interact with these fractional component pieces and to treat the aggregate as a whole. The get/set property, call method, and fire event interfaces are the plumbing that allows this to work. A portion of the model code reacts to changes in 3D properties, such as transforms, and marshals those to the scene manager. A different part of the model turns the crank on the component’s script, causing the right code to execute in response to actions arriving at the object. In essence, to use an OS metaphor, these fractional models act as device drivers for the components’ applications and provide the underlying functionality that the components may exploit. As separate parts, they may be combined in various combinations in different deployments – swapping out one scene manager for another, for example.

We are currently calling these fractional models shards, but this term can be confused with “shards” referring to multiple instances of a virtual world, so it will need to be changed.

(As an aside, the shard API is identical to the view API with the exception that a view must route through the Reflector. The only differentiator between views and shards is whether they are inside of the replicated simulation or not.)

Components do not directly see the get/set property, call method, and fire event interfaces, however. To create a natural interface within the component, we make use of the new ECMAScript 5 Object.defineProperty function to place accessor code behind each property, method, and event and make the appropriate calls to the corresponding model APIs.

For example, a simple component with two properties and a setter function on one may look something like this:

p1: "off"
p2: 0
p2.set = function( value ) { this.p1 = ( value ? "on" : "off" ) }

If the component sets p1 for any reason in the normal JavaScript fashion:

p1 = "on"

The plumbing code that we attached behind p1 using Object.defineProperty will send a setProperty notification to each shard with the details on the node, property, and value. If p1 refers to a material of a 3D object (as defined by the node’s type, and if a 3D shard is active), for example, this change may result in a change to the node’s material. If not, it may be a simple value change.

If a view, which is outside of the component’s JavaScript environment and outside of the model, were to invoke a setProperty on this node’s p2 property, the JavaScript shard would direct that update to the node’s p2 property, causing its setter to be invoked, changing p1, which may then update the node’s material in the scene manager as before.
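A minimal sketch of this plumbing, using Object.defineProperty to intercept ordinary property assignment and forward a notification to each shard. The `shards` list, `defineTrackedProperty` helper, and `onSetProperty` callback shape are assumptions for illustration, not the actual VWF API:

```javascript
// Hypothetical shard list; each shard receives a notification on every set.
const shards = [];

function defineTrackedProperty( node, name, initial ) {
  let value = initial;
  Object.defineProperty( node, name, {
    get: function() { return value; },
    set: function( newValue ) {
      value = newValue;
      // Forward the change to every shard, as the model plumbing would.
      shards.forEach( shard => shard.onSetProperty( node.id, name, newValue ) );
    }
  } );
}

// A logging shard standing in for a 3D scene-manager driver.
const log = [];
shards.push( { onSetProperty: ( id, name, value ) => log.push( [ id, name, value ] ) } );

const node = { id: 42 };
defineTrackedProperty( node, "p1", "off" );
node.p1 = "on"; // ordinary assignment triggers the notification
console.log( log.length ); // 1
```

Ordinary `node.p1 = "on"` assignment now reaches the shards, which is exactly what lets a scene-manager driver react to script-side changes without the script knowing about it.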

11.1 Bobble and Component

A Bobble is defined recursively as a “kind of” Component, where a Component consists of an optional base Component; any number of connected child Components; additional properties, methods, and events; an optional media asset base; and optional script code to manage the collection.

Summary:

{
  name: name,
  extends: component | uri,
  source: uri,
  type: mime-type,
  properties: { name: value, … },
  methods: { name, … },
  events: { name, … },
  children: [ component | uri, … ],
  scripts: [ { type: mime-type, source | text }, … ]
}

A Component is a JavaScript object containing certain fields. When stored in a file or transported over a network, a Component is expressed as the JSON (http://www.json.org) serialization of the object. In an executing JavaScript environment, a Component may be expressed as a JavaScript object literal or may be a JavaScript Object constructed some other way having the correct format.
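For example, a component in the summary format can be written as an object literal and round-tripped through JSON for storage or transport. The component here is hypothetical, invented for illustration:

```javascript
// A small component in the summary format, as a JavaScript object literal.
const component = {
  name: "lamp",
  extends: "http://vwf.example.com/types/node3",
  properties: { brightness: 1 },
  events: { switched_on: null },
  children: [],
  scripts: [ { type: "application/javascript", text: "/* control script */" } ]
};

// JSON.stringify produces the file/network form; JSON.parse recovers it.
const wire = JSON.stringify( component );
const restored = JSON.parse( wire );
console.log( restored.extends ); // "http://vwf.example.com/types/node3"
```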

- Bobble = component | uri

A Bobble is a Component or a URI identifying a Component.

- component = '{' name? extends? media? properties? methods? events? children? scripts? '}'

- name = json-string ("string" at http://www.json.org)

- extends = component | uri

The extends field specifies the Component's base type. A Component instance of that type will be installed as the prototype object for this Component's instances.

- media = source type

media specifies a base data blob that this Component wraps.

- source = 'source' ':' uri

- type = 'type' ':' mime-type

- properties = 'properties' ':' '{' property* '}'

- property = property-name ':' property-value

- property-value = json-value ("value" at http://www.json.org)

- methods = 'methods' ':' '{' method* '}'

- method = method-name ':' 'null'


- method-name = json-string ("string" at http://www.json.org)

- events = 'events' ':' '{' event* '}'

- event = event-name ':' 'null'

- event-name = json-string ("string" at http://www.json.org)

- children = 'children' ':' '[' ( component | uri )* ']'

- scripts = 'scripts' ':' '[' script* ']'

- script = '{' script-source | script-text script-type? '}'

- script-source = 'source' ':' uri

script-source is the URI identifying source code for the Component's control script.

- script-text = 'text' ':' ....

script-text is text consisting of source code for the Component's control script.

- script-type = 'type' ':' mime-type

script-type should specify the MIME type of the script referred to by script-source or contained in script-text. It is required for script-text, and for script-source if the host serving the text does not provide a correct type.

- uri = json-string ("string" at http://www.json.org, conforming to http://www.rfc-editor.org/rfc/rfc3986.txt)

When a URI refers to a Component, if the URI refers to a network resource, that resource should be the JSON encoding of the Component. Other Components known to the system may be identified by URIs that don't necessarily refer to network resources.

- mime-type = json-string

11.2 Component API: API specification for intra-model interactions

Within the JavaScript environment, Component instances appear as JavaScript objects.

- Properties appear in the "properties" field. Each property contains a getter and setter callback to notify the object of property manipulation.

- Methods appear in "methods"

- Events appear in "events"

- "parent" refers to the parent node and "children" is an array of the child nodes.

- prototype inheritance model

- properties, methods, events, and children collections and name mapping up to owning object

- event registration, property change events
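The bullets above can be sketched as a rough instance shape. The field layouts here are assumptions for illustration; the framework's internal representation may differ:

```javascript
// Sketch of a component instance as seen in the JavaScript environment:
// properties, methods, and events live in their own collections, with
// parent/children links forming the node tree. Illustrative only.
const parent = { name: "panel", children: [] };

const instance = {
  parent: parent,
  children: [],
  properties: { value: 0 },
  methods: {
    // A method is invoked against the instance, like the switch's "toggle".
    toggle: function() { this.properties.value = this.properties.value ? 0 : 1; }
  },
  events: { value_changed: { listeners: [] } }
};

parent.children.push( instance );

// Calling a method through the "methods" collection:
instance.methods.toggle.call( instance );
console.log( instance.properties.value ); // 1
```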

11.3 Model Stimulus API

These functions are available to the model shards as the means for changing the simulation state. They execute immediately.

- nodeID and childID are scalar values, either numbers or strings, as returned by createNode().

- nodeName, propertyName, methodName, and eventName are strings and parameters are JavaScript objects

- nodeID = createNode( nodeName, nodeExtends, nodeImplements, nodeSource | nodeText, nodeType )

- deleteNode( nodeID )

- addChild( nodeID, childID )


- removeChild( nodeID, childID )

- createProperty( nodeID, propertyName [ , propertyValue ] )

- deleteProperty( nodeID, propertyName )

- setProperty( nodeID, propertyName, propertyValue )

- value = getProperty( nodeID, propertyName )

- createMethod( nodeID, methodName )

- deleteMethod( nodeID, methodName )

- callMethod( nodeID, methodName [, parameters, ... ] )

- createEvent( nodeID, eventName )

- deleteEvent( nodeID, eventName )

- addEventListener( nodeID, eventName )

- removeEventListener( nodeID, eventName )

- fireEvent( nodeID, eventName [, parameters, ... ] )

- execute( nodeID, scriptSource | scriptText [, scriptType ] )

- time
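A toy in-memory model illustrating a few of these calls (createNode, setProperty, getProperty). The point is the immediate-execution behavior; the storage details are assumptions, not the framework's code:

```javascript
// Minimal in-memory model: nodes stored by ID, stimulus calls execute
// immediately and return their results synchronously.
const nodes = {};
let nextID = 1;

function createNode( nodeName ) {
  const nodeID = nextID++;
  nodes[ nodeID ] = { name: nodeName, properties: {} };
  return nodeID; // scalar ID, as the API specifies
}

function setProperty( nodeID, propertyName, propertyValue ) {
  nodes[ nodeID ].properties[ propertyName ] = propertyValue;
}

function getProperty( nodeID, propertyName ) {
  return nodes[ nodeID ].properties[ propertyName ];
}

// Usage: create a node and manipulate a property synchronously.
const id = createNode( "power_switch" );
setProperty( id, "value", 1 );
console.log( getProperty( id, "value" ) ); // 1
```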

11.4 View Stimulus API

These functions are available to the views as the means for queuing state changes in the simulation. They route through the reflection server and execute at the time assigned.

Results are available through a callback function invoked when the function executes.

- createNode( nodeName, nodeExtends, nodeImplements, nodeSource | nodeText, nodeType, callback )

- deleteNode( nodeID, callback )

- addChild( nodeID, childID, callback )

- removeChild( nodeID, childID, callback )

- createProperty( nodeID, propertyName [ , propertyValue ], callback )

- deleteProperty( nodeID, propertyName, callback )

- setProperty( nodeID, propertyName, propertyValue, callback )

- getProperty( nodeID, propertyName, callback )

- createMethod( nodeID, methodName, callback )

- deleteMethod( nodeID, methodName, callback )

- callMethod( nodeID, methodName [, parameters, ... ], callback )

- createEvent( nodeID, eventName, callback )

- deleteEvent( nodeID, eventName, callback )

- addEventListener( nodeID, eventName, callback )

- removeEventListener( nodeID, eventName, callback )

- fireEvent( nodeID, eventName [, parameters, ... ], callback )

- execute( nodeID, scriptSource | scriptText [, scriptType ], callback )

- time
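In contrast to the Model Stimulus API, view calls are queued and execute later at an assigned time. A toy sketch of that callback-based convention, with a stand-in for the reflector assigning the time; the queue and dispatch machinery are assumptions for illustration:

```javascript
// Toy view-side stub: each call is queued rather than executed, then a
// stand-in "reflector" assigns a time and dispatches the queued actions.
const queue = [];

function setProperty( nodeID, propertyName, propertyValue, callback ) {
  queue.push( { nodeID, propertyName, propertyValue, callback } );
}

// Reflector stand-in: executes queued actions at the assigned time and
// delivers results through each action's callback.
const state = {};
function dispatch( time ) {
  while ( queue.length ) {
    const action = queue.shift();
    state[ action.nodeID + "." + action.propertyName ] = action.propertyValue;
    action.callback( time );
  }
}

let executedAt = null;
setProperty( 17, "value", 1, function( time ) { executedAt = time; } );
dispatch( 3.5 ); // the reflector assigns time 3.5 and executes
console.log( state["17.value"], executedAt ); // 1 3.5
```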

11.5 Model and View Response API

These functions are invoked on the model shards and the views in response to a Stimulus and provide the opportunity for the model or view to react.

- onConstruct( nodeID, nodeName, nodeExtends, nodeImplements, nodeSource, nodeType )

- onDestruct( nodeID )

- onChildAdded( nodeID, childID )

- onChildRemoved( nodeID, childID )

- onCreateProperty( nodeID, propertyName [ , propertyValue ] )


- onDeleteProperty( nodeID, propertyName )

- onSetProperty( nodeID, propertyName, propertyValue )

- onGetProperty( nodeID, propertyName )

- onCreateMethod( nodeID, methodName )

- onDeleteMethod( nodeID, methodName )

- onCallMethod( nodeID, methodName [, parameters, ... ] )

- onCreateEvent( nodeID, eventName )

- onDeleteEvent( nodeID, eventName )

- onFireEvent( nodeID, eventName [, parameters, ... ] )

- onExecute( nodeID, scriptSource | scriptText [, scriptType ] )

11.6 Reflection Server API

Communication between a participant and the Reflection server consists of a serialized and time-stamped version of the Stimulus API.

- Each message is a string of characters.

- When delivered over a record-oriented transport, such as XMPP or socket.io, the string should not be terminated with line-ending characters.

- When delivered over a streaming transport with no delineation between messages, each message should be terminated with a CR LF sequence.

- message = time ( ' ' node ' ' action ( ' ' parameter )* )*

- The format is the same in both directions.

- For messages from a participant to the reflector, time indicates the time at the participant when the action was generated.

- For a message from the reflector to a participant, time indicates the current system time.

- time = json-number ("number" at http://www.json.org)

- node = node-id

- action = 'createNode' | 'deleteNode' | 'addChild' | 'removeChild' | 'createProperty' |

'deleteProperty' | 'setProperty' | 'getProperty' | 'createMethod' | 'deleteMethod' |

'callMethod' | 'createEvent' | 'deleteEvent' | 'addEventListener' | 'removeEventListener' |

'fireEvent' | 'execute'

- parameter = json-value ("value" at http://www.json.org)

Each parameter should be the JSON encoding of the equivalent parameter for the function in the Stimulus API.
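A sketch of encoding and decoding a message under this grammar, with one action per message for simplicity. The helper names are illustrative, and the space-split decoder assumes parameters whose JSON encodings contain no embedded spaces:

```javascript
// Encode a single timestamped action as "time node action parameter...".
function encodeMessage( time, nodeID, action, parameters ) {
  return [ time, nodeID, action ]
    .concat( parameters.map( p => JSON.stringify( p ) ) )
    .join( " " );
}

// Decode it back into its parts. Splitting on spaces is a simplification:
// it only works when each JSON-encoded parameter contains no spaces.
function decodeMessage( message ) {
  const fields = message.split( " " );
  return {
    time: Number( fields[0] ),
    node: fields[1],
    action: fields[2],
    parameters: fields.slice( 3 ).map( f => JSON.parse( f ) )
  };
}

const wire = encodeMessage( 12.5, "17", "setProperty", [ "value", 1 ] );
const decoded = decodeMessage( wire );
console.log( decoded.action ); // "setProperty"
```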


12.0 Next Steps

Though our intention is to define a minimal kernel architecture for the VWF, it is essential that we create enough of a working system to enable third parties to engage, embrace, and extend it. This document describes an overall approach to the design and implementation of a VWF system. The next steps in this process are the actual development of the working system and associated tools, documentation, and infrastructure, as well as the marketing and evangelism needed to ensure the project reaches a critical mass of developers and users and becomes a self-sustaining entity.

12.1 Development

- Prototype Systems. It is said that web-based applications are always in Beta release; that is, they are never completed but are constantly being modified and updated based on new requirements, capabilities, and fixes. The application becomes an ongoing dialog between a business and its users. The most important part of this process is to deliver a first working system that is “good enough to be criticized.”3 This means that it is essential that the VWF be prototyped quickly and often, sharing the system with potential users and developers as early as possible to begin the dialog that is critical to its proper development.

- 1.0 System Development.

Though the prototype phase is an ongoing exploration, and this process will likely remain in place even as it is transitioned into a formal development effort, it is essential that there be a focused effort around creating a 1.0 release of the VWF. This will include the system itself, additional sample frameworks, proper documentation, and sample components and applications that are focused on the key target markets.

- Tools.

Key elements in the success of a game engine or virtual world application are the tools that empower the developers and users to easily and quickly construct valuable content.

- Sample Frameworks.

Though the focus of the project is the kernel of the system, it is also essential that a number of additional optional elements be developed in support of the kernel. This includes such sub-systems as interface modules (including first person fixed and moving, third person, mouse look, ease in – ease out, mobile device controls, etc.) and server modules (pure reflector systems, and client/server models, including interfaces to OpenSim-like servers.)

- System, Component and Sample Documentation. Though the VWF is intended to be relatively easy to develop for and use, it will still require clear and substantial documentation, both at the system level (how does it work) and the application level (how do I develop for it). Documents that would need to be created include a System Developer Guide, an Application Developer Guide and Cookbook, and even a User Guide for common elements.

- Sample Components and Applications. The best way to learn how to create for a system is to look at actual working examples of that system. It is critical that a number of key example applications be developed that demonstrate the capabilities of the system, as well as providing starting points for new developers in working with the system.

3 Alan Kay describing the first Macintosh.


Further, this is best done as a user-supported repository of sample code, components, and applications that can be added to and modified by the community.

12.2 VWF Marketing

Creating a VWF is simply the start of the process of creating a successful system. It is essential to the success of the system that the broader development and user communities are engaged.

This requires a multi-dimensional approach including a web presence, evangelism, and partnership outreach including corporate, university, and DoD organizations.

- Web Presence. It is likely that the project will require a managed web presence to act as both a repository of the system code, documentation, and other associated content; and perhaps even more important, to act as a hub for the community that would develop around the system. This presence should be considered as much a part of the system as the code or the documentation.

- Evangelism and Marketing.

The system will need to be presented in a number of forums including conferences, invited talks, training workshops, and ongoing engagement with thought leaders in industry, universities, and government. Traditional guerrilla marketing techniques will also need to be exploited, including generating buzz in web-based venues and traditional media outlets.

- Partnership Outreach and Development.

The success of this system is totally dependent upon creating a community of support including application and tool developers, customers, and content providers. It is critical that we leverage an existing like-minded consortium or develop a new one that can act as the social center of the project.


13.0 Conclusion

The new technologies being built into the next generation of web browsers are poised to redefine the nature of the web. The browser is rapidly becoming a full-featured operating system with few practical limits on its capabilities or functionality. Google Chrome demonstrates this: it is in effect a full Internet-based OS for the next generation of netbooks. Given the extraordinary position that the web browser holds in the current world information ecosystem, the evolutionary pressure will continue, and we will see improvements in both technical capabilities and performance. In particular, we continue to see huge leaps in JavaScript performance, to the point where it is now capable of being used to create virtually any kind of application that will run on a PC. Further, WebGL is emerging as a very high performance graphics capability that will be built into the next generation of web browsers. The main challenge for developers is to understand how to leverage its powerful shader capabilities to get extreme performance. The trick is to move as much computation as possible from the client to the graphics chip using the shader capabilities of the GPU.

Figure: WebGL Shader-focused application

Given the absolute pervasive nature of the web – it literally is everywhere – and the extreme innovative pressure being applied by both browser vendors and the development community to push new capabilities and higher performance, it is clear that this is where the real action is for any significant new project, especially one that is focused on rich 2D and 3D media and high bandwidth interactions and communication such as the DoD VWF.

Figure: WebGL Virtual World


14.0 References

Smith, David A. The Colony. Mindscape. 1987.

Smith, David A. Virtus WalkThrough. Virtus Corporation. 1990.

Smith, David A. OpenSpace: A Small System Architecture. Virtus Corporation. Unpublished document. 1994.

Smith, David A., Andreas Raab, David P. Reed, Alan Kay. Croquet User Manual v.0.01. Web site: http://www.opencroquet.org.

Smith, David A., Alan Kay, Andreas Raab, David P. Reed. Croquet – A Collaboration System Architecture. C5: Conference on Creating, Connecting and Collaborating through Computing. 2003. p.2-.

Smith, David A. “You Will Be Superman.” Foreword to Working Through Synthetic Worlds, Smith, C.A.P., Kisiel, Kenneth, Morrison, Jeffrey. Ashgate. 2009.

Spencer, Lynn. Touching History: The Untold Story of the Drama That Unfolded in the Skies Over America on 9/11. Free Press. 2008.

Report to the President and Congress. Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology. Executive Office of the President, President’s Council of Advisors on Science and Technology. December 2010. http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf

Web Links

http://www.chromeexperiments.com/webgl – A number of WebGL examples can be found here. Requires a WebGL-enabled browser.

http://www.kk.org/thetechnium/archives/2009/07/was_moores_law.php – Kelly, Kevin. “Was Moore’s Law Inevitable?”

http://www.lively-kernel.org/ – The Lively Kernel Project is hosted here.

http://www.iquilezles.org/apps/shadertoy/ – Shader Toy demonstrates a number of WebGL shader-based scripts. This is an interactive application that allows the visitor to dynamically modify the script and try it out.

http://www.glge.org/ – GLGE is a JavaScript library intended to ease the use of WebGL.

http://www.morganstanley.com/institutional/techresearch/pdfs/Internet_Trends_041210.pdf – Morgan Stanley Mobile Internet Trends.

http://www.khronos.org/webgl/ – WebGL is a cross-platform, royalty-free web standard for a low-level 3D graphics API based on OpenGL ES 2.0, exposed through the HTML5 Canvas element as Document Object Model interfaces. Developers familiar with OpenGL ES 2.0 will recognize WebGL as a shader-based API using GLSL, with constructs that are semantically similar to those of the underlying OpenGL ES 2.0 API. It stays very close to the OpenGL ES 2.0 specification, with some concessions made for what developers expect out of memory-managed languages such as JavaScript.


WebGL brings plug-in-free 3D to the web, implemented right into the browser. Major browser vendors Apple (Safari), Google (Chrome), Mozilla (Firefox), and Opera (Opera) are members of the WebGL Working Group.

https://collada.org/mediawiki/index.php/COLLADA_-_Digital_Asset_and_FX_Exchange_Schema – COLLADA is a COLLAborative Design Activity for establishing an interchange file format for interactive 3D applications.

http://www.teleplace.com/ – A fully collaborative virtual world based on the Croquet project. It demonstrates replicated computation for simulations as well as virtually every sort of streaming media including voice, video, and full VNC support.

