
_______________

An energy dashboard for energy monitoring across UCD departments

Lei Gao

_____________

A thesis submitted in part fulfilment of the degree of

MSc in Ubiquitous and Sensor Systems

Supervisors: Dr. Antonio Ruzzelli and Dr. Gregory O’Hare

UCD School of Computer Science and Informatics

College of Engineering, Mathematical and Physical Sciences

University College Dublin

September 18, 2009


Table of Contents

Abstract
Declaration
Acknowledgements
Chapter 1: Introduction
  1.1 Theoretical Foundation of Information Visualization
  1.2 GIS and Google Earth
  1.3 Thesis Aim
  1.4 Thesis Layout
Chapter 2: Background and Motivation
  2.1 Why Visualization?
  2.2 Information Visualization Layout
    2.2.1 Visualization Pipeline
    2.2.2 Data Exploration
    2.2.3 Visual Mapping
    2.2.4 Data Visualization Techniques
    2.2.5 Interaction and Combination
  2.3 Visualization Applications
  2.4 Web Mapping
    2.4.1 Google Earth (API) and Google Maps (API)
    2.4.2 XHTML
    2.4.3 JavaScript
    2.4.4 KML
  2.5 Conclusion
Chapter 3: Design
  3.1 Overall Design
  3.2 Particular Design
    3.2.1 Data Exploration
    3.2.2 Visual Mapping and Data Transformation
    3.2.3 Interactions
  3.3 Conclusion
Chapter 4: Development and Implementation
  4.1 Data Exploration
    4.1.1 Data access module
      Download by a URL list (HttpGet.class)
      Unzip Files (TestZip.class)
      Excel Reader (ExcelReader.class)
      Concurrency and Timer
    4.1.2 Data manipulation module
      Doubly linked list and ranking method
  4.2 Visual Mapping
    4.2.1 Google Earth API
Chapter 5: Evaluation
Chapter 6: Conclusion and Further Work
References


Abstract

Visualization transforms data or information into graphical forms, so that data publishers generate rendered representations of data rather than publishing raw data. That is, instead of publishing abstract data, visualization deals with transformations and representations of the data that are computer-supported and interactive, and that amplify cognition. In this project I investigate some common visualization techniques for converting acquired data into 2D and 3D visual representations: points, lines, areas, images and models, together with interactive visualization and data processing techniques that work with the data structures used to handle and store the various outputs of each processing step. Moreover, when the amount of data to be displayed is large, the data needs to be summarized in useful ways so that users can easily extract the relevant information. As part of this work I design and implement a graphical user interface (GUI) that shows the energy consumption and expenditure of each department at UCD in real time by combining the departments' energy data available online [1] with the Google Earth API [2] and Google Maps API [2], which provide a slick, highly responsive visual interface built using AJAX technologies along with detailed street and aerial imagery data, and an open API allowing customization of the map output, including the ability to add application-specific data to the map. By transforming the large dataset, I develop an interactive, time-varying visualization map tool that shows energy peaks, patterns, and trends of energy usage, and that makes people aware of the energy cost at UCD by providing a time-varying energy gradient overlaid on the map of the Belfield campus.


Declaration

I declare that this thesis is my own work and has not been submitted in any form for another degree or diploma at this, or any other, university or institute of tertiary education.

Lei Gao

September 1, 2009


Acknowledgements

First, I would like to thank my supervisor Dr. Antonio Ruzzelli. He not only provided me with invaluable advice and guidance on my research, but he also shared his life experience with me.

I have learnt so much from him, from finding a research interest, to learning how to do research, to figuring out future research career plans. His support and belief in me gave me the courage to finish my MSc.

Thanks also to Dr. Gregory O’Hare who had patiently endured so much during my work on this project.

I thank my parents for their love and support throughout my upbringing. They provided as much as they could to ensure that I had a good education.

I would also like to express my sincere gratitude to my fiancé, Da Huo, who made many sacrifices so that I could finish my MSc studies abroad. He could always cheer me up when I was depressed and frustrated. Without him, I would never have completed my project.


Chapter 1: Introduction

Graphics is a very simple language. Its laws become self-evident when we recognize that the image is transformable, that it must be reordered, and that its transformations represent a visual form of information-processing. [3]

—Jacques Bertin 1

The information revolution is as big as the industrial revolution, and it has been changing everything in people’s lives since the second half of the 20th century began with the invention of the integrated circuit and the computer chip. Nowadays, information is a very important social component. Vast quantities and diverse types of information are being generated, stored, and disseminated every day. The need to understand and extract knowledge from stored information is becoming a ubiquitous task.

The main goal of visualization is to transform and represent large-scale non-numerical information, such as words or character strings, as well as abstract datasets, onto physical screen space as 2D or 3D graphical structures by means of data processing and visualization techniques. Further, the user interface (UI), an important part of computer systems, has become the most common end-product of visualization systems. Interaction and distortion techniques allow users to interact directly with the visualizations through the UI.

The processing steps of visualization are usually described as a visualization pipeline with two ends: at one end of the pipeline we have the dataset, while at the other end we have the visualization. The operators between these two ends are transformations and representations of datasets. Data transformations analyse and extract the values of the dataset and generate some form of analytical abstraction, which is then reduced to some form of visualization content and presented as a graphical view to the user after visual mapping.

1 Jacques Bertin (born 1918) is a French cartographer and theorist, known for his book Sémiologie Graphique (Semiology of Graphics), published in 1967. This monumental work, based on his experience as a cartographer and geographer, represents the first and widest attempt to provide a theoretical foundation to Information Visualization.


1.1 Theoretical Foundation of Information Visualization

A fundamental problem in presenting datasets is how to handle spatial relationships between abstract data. It therefore follows that for tasks such as exploring data it may be better to draw upon the considerable power of human perceptual experience in order to tackle such difficult problems; this is called information visualization. As a guide to research, information visualizations can be categorized into seven data types [4]:

1. Linear data: 1-dimensional, including texts, dictionaries, alphabetical lists
2. 2D map data: maps, plans and layouts with domain and interface attributes
3. 3D world data: real-world objects which have volume and complex relationships to each other, sometimes in the form of dimensional representations such as 3D maps; related to 3D computer graphical imaging and design, and virtual reality design
4. Multidimensional data: statistical and relational database contents that can be manipulated as multidimensional data
5. Temporal data: data that needs to be viewed temporally, as a time series
6. Tree data: hierarchical or tree-structured data
7. Network data: when a tree structure is not enough, relationships conveyed through linking as a network

Besides his seven data types, Shneiderman 2 also proposes seven tasks a user generally wants to perform. These seven tasks are [4]:

1. Overview: gain an overview of the entire collection
2. Zoom: zoom in on items of interest
3. Filter: filter out uninteresting items
4. Details-on-demand: select an item or group and get details when needed
5. Relate: view relationships among items
6. History: keep a history of actions to support undo, replay, and progressive refinement
7. Extract: allow extraction of sub-collections and of the query parameters

2 Ben Shneiderman (born August 21, 1947) is an American computer scientist and professor of Computer Science at the Human-Computer Interaction Laboratory at the University of Maryland, College Park. He conducted fundamental research in the field of human–computer interaction, developing new ideas, methods, and tools such as the direct manipulation interface, and his eight rules of design.

According to the theoretical foundation above, the basic visualization tasks deal with both data transforms and data representations. In a data transform, the data sources themselves are changed, by processes such as modifying or combining sub-datasets, filtering or deleting useless resources, and generating a suitably formatted dataset with specific relationships both between datasets and between data cells. A data representation, by contrast, changes only the visualization content, such as a horizontal or vertical flip of an image, zooming, 3D rotation, showing and hiding visualization content, adding interaction components, or changing a surface in order to see the underlying structures better. However, the distinction between data transforms and data representations is not always clear. For example, sometimes we would like to apply filtering to generate a new data set; other times we just want to temporarily highlight certain data points without affecting the underlying data source. The same filtering operation needs to change its properties depending on the situation.

1.2 GIS and Google Earth

Information is one type of human thought. Data is one type of information. [5] A variety of data needs to be displayed on the portal in meaningful ways. A geographic information system (GIS) integrates hardware tools and software applications in remote sensing, land surveying, aerial photography, mathematics, photogrammetry, geography and linked data for capturing, managing, analyzing, and displaying data tied to geographical locations. GIS allows users to view, understand, query, interpret, and visualize data in many ways that reveal relationships, patterns, and trends in the form of maps, globes, reports, and charts. GIS technology can be integrated into any enterprise information system framework. [6] Some common commercial packages, and a growing number of open source or free software tools that offer broadly equivalent functions, have been developed over the last few years, such as ESRI’s ArcGIS [7] or MapInfo [8].


Web-based mapping tools have been released by several well-known software companies in the recent past. Although their functional capabilities for complex analysis and transformation of spatial data are much lower than those of traditional GIS vendors, their emergence has been significant in that they have managed to capture a wider audience because of their satellite imagery, pre-installed map data and easy-to-use functions. In the four years since Google Earth was launched, together with its web-based counterpart Google Maps, the combination of other web-based tools with its open API has not only solved existing problems, like creating and publishing a simple map of directions to an event, but is also prompting new information-based services. Moreover, the Google Earth application and its API provide not only a comprehensive set of components and services enabling the development of desktop and Internet/intranet applications, but also a real-time data capture and observation mechanism by which a fourth, temporal dimension can be implemented.

1.3 Thesis Aim

The main objective of this project is to investigate visualization techniques and data processing techniques to profile the energy consumption and expenditure of each department at the UCD Belfield campus in real time, by leveraging the Google Earth API and Google Maps API. By using the real-time data set available online and a ranking method based on the energy gradient, the maps provide an immediate and clear visual impression of energy consumption behavior; for example, interactive time-varying visualization tools show energy peaks, patterns, and trends of energy usage.

Other aims of developing this visualization tool are to make people aware of the energy cost at the UCD Belfield campus and to show the departments that are “environmentally bad” as well as the “green departments” that are effectively working to reduce their energy consumption and bills.


1.4 Thesis Layout

This section offers a brief overview of the remainder of this thesis:

Chapter 2 provides background information, including the basic ideas of visualization and web mapping, especially Google Earth and Google Maps, as well as other work in this area and the technologies used in the production of the tool.

Chapter 3 describes my approach to the analysis of the dataset.

Chapter 4 details interesting aspects of the design and implementation of the solution.

Chapter 5 provides the results of the evaluation of the tool, obtained by testing it on the … server.

Chapter 6 states the conclusions from this project and suggests some ideas for future directions in the immediate area of this project.


Chapter 2: Background and Motivation

This chapter puts the initial research period of this work in context. Section 2.1 describes the necessity of visualization; section 2.2 lays out how information visualization works and several important related techniques; section 2.3 presents typical applications in the visual domain; and section 2.4 covers web mapping, with Google Earth and Google Maps and their APIs as the most important technologies in this project.

2.1 Why Visualization?

We talk about the information revolution every day; never before in history has data been generated at such high volumes as it is today. People’s lives are founded on information at some level: when you read the newspaper at the beginning of the day, information is, in effect, being translated in your brain. The past 10 years have also brought significant changes in the graphics capabilities of average machines, such as personal computers, PDAs and even cell phones. Information exists in various formats: words, numbers, pictures, sounds, abstract data and so on. Abstract data are being generated all the time as the cells and shapes of other usable information, so exploring and analyzing the vast volumes of data becomes increasingly difficult. Usually these raw data consist of a large number of records, each consisting of a number of variables or dimensions, and how to make such abstract information usable becomes a big question. The figures below show raw DNA sequence data obtained from hairpin-bisulfite PCR and a DNA sequence diagram obtained from the Wellcome Trust Sanger Institute.


Figure 2.1-1 raw DNA sequence data

Figure 2.1-2 visual DNA sequence data

A simple question here: how many kinds of bases are there in DNA?

Obviously, figure two is much clearer than figure one. We can get some basic ideas from figure two even if we do not understand how DNA is sequenced. The most intuitive piece of information we can get is the answer to the question above: there are four different kinds of bases, because they are presented in four different colors. Why is figure two easier for us to explore? Although human vision contains millions of photoreceptors and is capable of rapid parallel processing and pattern recognition [9], we have a limited capacity for attention, which limits the amount of information processed at any particular time. The impressive bandwidth of vision as a mode of communication leads to the efficient transfer of data from digital storage to the human mind. Yet a more important reason is the human ability to visually reason about the data and extract higher-level knowledge, or insight, beyond simple data transfer [10]. Several such insights are listed below: [4]

Simple insights:

· Summaries: minimum, maximum, average, percentages

· Find: known item search

Complex insights:

· Patterns: distributions, trends, frequencies, structures

· Outliers: exceptions

· Relationships: correlations, multi-way interactions

· Tradeoffs: balance, combined minimum/maximum

· Comparisons: choices (1:1), context (1: M), sets (M: N)

· Clusters: groups, similarities

· Paths: distance, multiple connections, decompositions

· Anomalies: data errors

As Card³, Mackinlay⁴, and Shneiderman² note, the purpose of visualization is insight, not pictures; the main goals are discovery, decision making, and explanation. Information visualization is useful to the extent that it increases our ability to perform these and other cognitive activities. Insight is a term that cuts to the very core of cognition, understanding, and learning, and particular theoretical models of cognition influence our views of just what understanding might be.

On the basis of all the above, visualization enables users to infer mental models of the real phenomena represented by the data. In other words, visualization can transform raw data into a powerful advocacy tool to motivate an outcome. Translating data into a visual format may help reveal patterns that might not otherwise be apparent. Advances in visualization make it possible to represent information differently and dynamically across different literature databases by integrating the underlying information space. Analyzing and understanding unprecedented amounts of experimental, simulated, and observational data by representing them visually on a chart or graph can reveal wider trends and unexpected clusters around specific demographics, geographies or time periods.

3 Stuart K. Card is an American researcher. He is a Senior Research Fellow at Xerox PARC and one of the pioneers of applying human factors in human–computer interaction.

4 Jock D. Mackinlay is an American information visualization expert and Director of Visual Analysis at Tableau Software. With Stuart K. Card, George G. Robertson and others he invented a number of information visualization techniques.

2.2 Information Visualization Layout

Fundamentally, information visualization is the use of computer-supported, interactive, visual representations of abstract data to amplify cognition. [10] In other words, information visualizations make abstract information perceptible. The two characteristics of information that make designing effective information visualizations most challenging are: [12]

· Complexity: supporting diverse abstract information that may have multiple interrelated data types and structures.

· Scalability: supporting very large quantities of information.

Therefore, effective information visualizations should not only present data in such a way that informative patterns are easy to perceive, but also allow users to take advantage of their pattern-finding capabilities. In addition, the interface should be optimized for low-cost, rapid information seeking and minimal cognitive load.

Obviously, visual representations alone are no longer enough, and interaction techniques must also be brought in.

2.2.1 Visualization Pipeline

The visualization pipeline is the computational process of converting information into a visual form that users can interact with. [10] We can understand this concept from its outline: there are two extremes in a visualization pipeline, the raw data and the user interface. The computational process, which deals with both data transformation and visual presentation, works as an operator between these two extremes. Finally, users should be able to interact with any step to alter the visual view. The figure below shows how the visualization pipeline works:

[Figure: Raw Information → (Data Transform) → Dataset → (Visual Mapping) → Visual Form → (View Transform) → User View, with Visual Perception at the user end and Interaction feeding back into every stage]

Figure 2.2.1-1 the visualization pipeline (slightly modified from Stuart Card’s model)

As the figure, slightly modified from Stuart Card’s model, shows, the first step of the visualization pipeline is to transform raw information into a well-organized canonical data format, which typically consists of a dataset container and a set of data entities; these two containers are associated by data attribute values. The second step is the heart of the visualization pipeline and is commonly called “visual mapping”. In this step, the dataset produced in step one is mapped into a visual form. The visual form contains visual glyphs that correspond to the dataset entities, such as points, lines, regions and icons. The third step, called the view transform, displays the visual form on screen and provides various view transformations such as navigation. The view is then presented to the user through the human visual system, and the underlying information is reconstructed depending on the user’s interpretation.

For any remaining attributes, interaction techniques can be applied. In general, the direct visual mapping of information is the most effective for rapid insight, while interaction techniques require slower physical actions by the user to reveal insights.

2.2.2 Data Exploration

Visual data exploration usually follows a three-step process: overview first, zoom and filter, then details on demand. This has been called the Visual Information-Seeking Mantra [4]. In the overview, users identify interesting patterns and analyse one or more of them by examining and accessing the datasets, then filter out the uninteresting data and group the data they have selected. The data usually consist of a large number of records, each consisting of a number of variables or dimensions. Each record corresponds to an observation, measurement, transaction, etc. We call the number of variables the dimensionality of the data set. [13] Information visualization focuses on datasets lacking inherent 2D or 3D semantics, and therefore also lacking a standard mapping of the abstract data onto the physical screen space. As mentioned in chapter one, information visualizations can be categorized into seven data types; according to the dimensionality of the data, these seven types can be boiled down as follows. One-dimensional data usually has one dense dimension; a typical example is an alphabetical list. Two-dimensional data has two distinct dimensions; a typical example is geographical data, where the two distinct dimensions are longitude and latitude. Most real-world objects, however, have more than two attributes: a person, for example, has several basic attributes such as name, sex, height, weight and so on. We call this kind of complex data multi-dimensional data. Not all data can be described by dimensions, though, when there are many complex relationships between data items; thus there are two further kinds of data that are described by structures, tree data and network data. These use a hierarchical structure, or relationships conveyed through linking, to describe the relationships between data. Frequently in real-world applications, information involves complex combinations of multiple information structures. Text and document collection structure is the most complex structure among the visual data types; it typically includes digital libraries, news archives, digital image repositories, and software code. In order to transform this kind of data, we need to combine several structures, such as trees and networks, together.

2.2.3 Visual Mapping

Visual mapping is the step in which the visualization pipeline transforms abstract data into visual form; that is, the pipeline takes abstract data as input and generates a visual representation as output. Visual mapping is a way to work out complex hierarchies, and it helps to reveal patterns, relationships and dependencies that might otherwise remain hidden. The visual mapping step is accomplished in two sub-steps. [10] First, each data entity is mapped into a visual glyph. The vocabulary of possible glyphs consists primarily of points (dots, simple shapes), lines (segments, curves, paths), regions (polygons, areas, volumes), and icons (symbols, pictures).

[Figure: examples of points, lines, regions and icons]

Figure 2.2.3-1 visual glyphs

Second, the attribute values of each data entity are mapped onto visual properties of the entity’s glyph. Common visual properties of glyphs include spatial position (x, y, z, a, b), size (length, area, volume), color (gray scale, hue, intensity), orientation (angle, slope, unit vector), and shape. Other visual properties include texture, motion, blink, density, and transparency. Spatial position properties are the most effective, and should be reserved for laying out the dataset in the visual representation according to the most important data attributes.

[Figure: examples of positions, sizes, colors, orientations and shapes]

Figure 2.2.3-2 visual properties


2.2.4 Data Visualization Techniques

On the basis of the data types evolved above from Shneiderman’s seven data types, there are a number of visualization techniques that can be used for visualizing each of them. The most popular visualization techniques are classified into standard 2D/3D displays and geometric, icon-based, pixel-oriented, hierarchical, graph-based, or hybrid classes [12, 13, 14]. The figures below list some well-known examples of these techniques.

· Standard 2D/3D displays (one- and two-dimensional datasets): bar charts, X-Y plots

· Geometric techniques (multi-dimensional datasets): scatter plots (exploratory statistics) [15], Projection Pursuit [16], Hyperslice [17], Parallel Coordinates [18]

Figure 2.2.4-1 visualization techniques (i)

· Icon-based techniques (multi-dimensional datasets, continued): Chernoff Faces [19], Stick Figures [20], Color Icons [21], TileBars [22]

· Pixel-oriented techniques (map each dimension value to a colored pixel and group the pixels belonging to each dimension into adjacent areas): Recursive Pattern Technique [23], Circle Segment Technique [24], Spiral- and Axes Techniques [25]

Figure 2.2.4-2 visualization techniques (ii)

· Hierarchical techniques (structured datasets): Dimensional Stacking [26], World-within-World [27], Treemap [28], Info Cube [29]

· Hybrid techniques: arbitrary combinations of the above

Figure 2.2.4-3 visualization techniques (iii)

2.2.5 Interaction and Combination

In the visualization pipeline presented above, once the visualization result is produced, an effective visualization needs interaction techniques that allow the users to:

a) tour the data by varying views or by labelling;

b) delete any data they are not interested in;

c) zoom in to focus attention;

d) directly and dynamically change the visualization by adjusting the mapping;

e) relate and combine multiple independent visualizations;

f) see correspondences in multiple views or explore neighbourhoods by brushing, highlighting or panning.


2.3 Visualization Applications

A number of visualization applications have been developed in different domains. General charts like bar, pie and line charts are used pretty much everywhere; in the business domain, for example, sales reporting applications are ubiquitous, and with the help of plotting, charts and graphs, heaps of sales data are condensed into easily digestible information. Data visualization is used extensively for presenting market research statistics and survey analysis, as it helps in gaining insight quickly and coming to solid conclusions. Microsoft PowerPoint⁵ is another good example of information visualization with interactions. In architectural, meteorological, medical, biological and other specialized domains, graphs are needed for the analysis and interpretation of scientific data.

Another important kind of application, and the one I use in this project, is the interactive map: a website-based visualization application for location-based decision-making. One of the best applications integrating data with interactive maps is Google Maps.

2.4 Web Mapping

Web mapping is the process of designing, implementing, generating and delivering maps on the World Wide Web. [30] There are two classic types of web maps: static and dynamic [31]. Static web maps are online maps that can only be viewed, while dynamic web maps are maps we can interact with.

Web mapping works as a visualization tool that integrates GIS and the internet, allowing users to access information or data without any GIS software, so that distributed geographic information becomes accessible to a broad audience. Furthermore, as mentioned before, many web maps, the ones we call dynamic web maps, can not only present information to users, but also allow users to pan and zoom, turn layer visibility on and off, identify features, find features, select features by their attributes or their geometry, and output maps to printers or as images. In such ways users are able to interact with maps to search a particular area, find the information they are interested in, or create their own spatial objects on the maps, such as static or dynamic icons, images, videos and so on.

5 Microsoft PowerPoint is a presentation program developed by Microsoft. It is part of the Microsoft Office suite, and runs on Microsoft Windows and Apple's Mac OS X computer operating systems.

Usually, the components of web mapping include a visualization machine, such as a computer; an internet connection; spatial or geospatial data; a web server, such as the Apache HTTP Server; a map server; some map files; and metadata. The figure below shows how web mapping works with these components.

[Figure: a map file and metadata feed the map server, which communicates with the web server to deliver maps]

Figure 2.4-1 web mapping components

Here, the map server needs to be configured to communicate with the web server and to assemble the data layers; the map file contains the relationships between map objects, such as size, color, extent, labelling, etc.; and the metadata is data about map attributes such as server URLs, geographic locations, and projection.

2.4.1 Google Earth (API) and Google Maps (API)

Google Earth is a free, downloadable application which combines satellite imagery, maps, 3D terrain and 3D buildings to create a highly realistic virtual globe, while Google Maps provides 2D maps viewed in a web browser, with a choice of Map, Satellite and Terrain views. [32][33]

Figure 2.4.1-1 Google Earth [32]

Figure 2.4.1-2 Google Maps [33]

The Google Earth API is Google Earth in a browser. It allows Google Earth and a Google Earth KML layer to be embedded in a site. With the plug-in, people do not have to open the separate Google Earth application to view the content; rather, they can view the annotations on the 3D globe right within the site. The Google Maps API, in turn, is a JavaScript API that lets you embed Google Maps and your annotations to the map into your own site. [2][34][35]

2.4.2 XHTML

Google Earth and Google Maps will work in any standard HTML page, but Google recommends creating pages that are compliant with the XHTML standard. This ensures not only that the HTML is compatible with the standard that most browsers now also follow, but also that the page renders reliably across as wide a range of web browsers as possible. An XHTML page is marked up as below:



<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>XHTML Page</title>
  </head>
  <body>
  </body>
</html>

2.4.3 JavaScript

Standard JavaScript [36] is a small, lightweight, object-oriented, cross-platform dynamic scripting language originally developed by Netscape, and it supports structured programming. The most important difference between Java and JavaScript is that the former is statically typed while the latter is dynamically typed. A JavaScript web server would expose host objects representing HTTP request and response objects with different data types. Client-side JavaScript can then automatically render these data types from different data sources to compose more complex visualizations. JavaScript can also handle interactions like panning, zooming, etc., and clients can have controls to add and delete content or change the order of the layers to modify views.

A minimal “Hello World” example in JavaScript would be:

<script type="text/javascript">

document.write('Hello World!');

</script>

2.4.4 KML

KML, whose full name is Keyhole Markup Language, is the open data standard of Google Maps and Google Earth. [37] KML is the native file format used by Google Earth and is also readable by Google Maps. It is an XML-type file which uses ‘tags’ to separate and describe data variables. A KML file can describe a single point on the globe, or a complex set of data that displays in Google Earth as a complete map layer and contains links to many other data sets. The great majority of KML files tend to contain mainly or solely point data from the four main classes of spatial objects in Google Earth: points, polygons, paths and overlays. The figure below shows a simple KML placemark.


Figure 2.4.1.3-1 a KML place mark [37]

KML documents can also contain other related content items.

KML provides a way for practitioners to view a variety of spatial data types, collected by themselves and by others, using easily learned software. A simple placemark example in KML would be:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2"
     xmlns:kml="http://www.opengis.net/kml/2.2"
     xmlns:atom="http://www.w3.org/2005/Atom">
  <Document>
    <Placemark>
      <Point>
        <coordinates>
          -6.224374882808923,53.30924178063212,100
        </coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>

2.5 Conclusion

This chapter presented the background and related research for this work. We started with a review of the necessity of visualization in common sectors and of how visualization helps people gain insight, before illustrating the visualization pipeline with powerful examples and technologies: the former provides the motivation for building a visualization, while the latter shows the challenges of achieving and maintaining a good design and how to implement an expedient visual interface. We then considered several typical applications in this area, leading to the tool most suitable for this project. After explaining that tool in a nutshell, the chapter concluded with a short introduction to other related techniques. In the following chapters we present the design of this project, following the theoretical foundation set out in this chapter.


Chapter 3: Design

This chapter describes the system that evolved during this project. In section 3.1 we describe the application at a high level, before delving deeper to examine its constituent components. In section 3.2 we examine some of the interesting aspects of the design and several important design processes, from both the data side and the view side.

3.1 Overall Design

We mentioned that data visualization on a computer system is a multi-discipline process. The main aim of this process is the representation of abstract data in a visual model, which can better help users to visualize and understand relationships or dependencies within the data. The design of a visualization technique should depend on the visualization pipeline and follow the structures of the Google Earth API and Google Maps API. The basic architecture of this visualization system is shown below:

[Figure: the Data Process (acquire the abstract dataset, filter the dataset, parse the data) feeds the Visual Process (import the Google Earth API & Google Maps API, transform & label data, locate & represent data), which drives the UI Process (overview dataset, view globe, zoom into details, other interactions)]

Figure 3.1-1 system overall architecture


The figure above shows the high-level architecture of the visualization system. The system consists of a small number of sub-systems, of which the data process is the first step and an interactive user interface is the expected result.

3.2 Particular Design

This section presents the modular design of each constituent component of the system. First of all, it is necessary to hark back to the purpose of this project: by using the real-time data set available online and a ranking method based on the energy gradient, the web maps should provide an immediate and clear visual impression of energy consumption behavior; for example, interactive time-varying visualization tools show energy peaks, patterns, and trends of energy usage.

3.2.1 Data Exploration

Because of the different ways of implementation and the drawbacks of logical and graphical operations during the visualization process, it is necessary to consider which process is applied at which point of the data execution. Two modules are introduced for the data process:

a) the data access module, which reads data from an external storage medium into operating-system memory;

b) the data manipulation module, which performs the data-transformation-related operations.

The aim of the data process is to obtain the dataset from the data source and transform the raw data into a well-organized canonical data format, which typically consists of a dataset container and a set of data entities. The energy data are available at http://energy32.ucd.ie/live_data_ucd.shtml and are streamed directly into one of the CLARITY servers at UCD. Figure 3.2.1-1 is a screenshot of the UCD Energy Management Unit website showing the .zip file link list.


Figure 3.2.1-1 .zip files at UCD Energy Management Unit

All the data are collected from preset energy sensors in a number of departments on the Belfield campus and are updated every fifteen minutes through the HTTP web server, so the datasets can be downloaded from a URL list. The raw data are provided as zipped Excel files and can be identified by department or by building. Each Excel file contains three sheets. The first sheet presents monthly profiles of the department with diagrams; the second, the daily sheet, presents daily profiles; and the last is the real-time data sheet, whose table contains data entities giving the wattage used by the specific department during each fifteen-minute interval. Each row of the table contains ninety-six cells representing the wattage used during one day, and the column index is the particular time segment, starting at 12:15 AM and ending at 12:00 AM. The values in the first and second sheets are calculated from the real-time data stored in the third sheet by mathematical functions. Figure 3.2.1-2 shows three segments of the Computer Science data file.

[Figure: Sheet 1 (Monthly), Sheet 2 (Daily), Sheet 3 (Real-time)]

Figure 3.2.1-2 segments of an Excel file

This project aims at creating a real-time web map, so only the third sheet is used. Another reason why the other two sheets are ignored is that all the values in them are calculated from the third sheet by mathematical functions pre-embedded in Excel, and so cannot be read as numeric data directly. We therefore focus on the third data sheet to continue the data process. According to the data type principles in chapter one, each data entity has four basic attributes: the department it belongs to, the numeric watt value, a date stamp and a time stamp. All data in the same row comprise a daily dataset, while all data in the same column give the watt value at the same time on different days. Every seven rows of data comprise a weekly dataset, and every thirty or thirty-one rows comprise a monthly dataset, depending on the specific month. For the purposes of the project, we need a real-time list containing the current watt values of all departments and a daily ranking list based on the energy gradient of each department over one day, besides weekly and monthly ranking lists that work in the same way.


3.2.2 Visual Mapping and Data Transformation

Visual mapping and data transformation are the way to work out patterns and relationships and to represent the dataset according to its important attributes. Different data types and different representation aims lead to different views and different interactions. This project is mainly based on numeric datasets, which are also called quantitative data. As mentioned before, data visualization focuses on datasets lacking inherent 2D or 3D semantics, such as quantitative data. Whenever we represent quantitative data visually, whether on a map or otherwise, visual objects that represent abstract concepts in a clear and understandable manner are very powerful. In Google Earth and Google Maps, spatial position properties are the most effective, and should be reserved for laying out the dataset in the visual representation.

Initial maps should only include those geographical features that are needed to provide meaningful context for the data, so different representations of geography are required for different tasks.

On the data transformation side, normal quantitative data do not have a position attribute; the energy data for this project, however, has a department-name attribute, and such an attribute relates quantitative data to positions. Thus the department name must work as an operator in the transformation between the energy dataset and the location dataset. On the view side, an effective display should represent information in a clear, clean, unambiguous and simple way that allows end users to see only what is useful, without distraction from anything that is not. As we all know, several representation patterns play leading roles in visualization, such as color, size and shape. For this project, 3D models are powerful for identifying different departments, and tall columns are striking for showing entities over large areas. Color can be used to convey additional layers of meaning and emotion, yet when designing a view, color also has to be considered in context. A typical example: black-and-white may be more cost-effective and more readable at high contrast, whereas color coding disappears when the display is photocopied or printed in black-and-white. In Google Earth, green is the prevailing color of the Belfield campus surface; accordingly, to avoid objects being too similar to the geographical features, some distinct hues other than green should be used.


Nevertheless, some other patterns are just as important. The following examples give some clues about how to design an effective display. Figure 3.2.2-1 is a pair of Google Earth screenshots of the same area.

Figure 3.2.2-1 a pair of Google Earth screenshots

Obviously, although the first display represents more information than the second, it is not effective because of the cluttered view. How much information should be shown to users at one time, and how to show that information reasonably, is the first question when we design a visualization. Further, many visualization techniques that work well in 2D graphs, such as bar charts or scatter plots, are unreadable on 3D maps due to occlusion.

A good example is John Snow's map showing cholera deaths in London in 1854.

Figure 3.2.2-2 John Snow's map [38]

Dr. Snow recorded the number of deaths that occurred at specific locations using bars.

By displaying this information on a map, he was able to see easily that most deaths occurred close to the Broad Street well, and he investigated the outliers to determine whether they contradicted the likelihood that the Broad Street well was the epidemic’s source. Dr. Snow’s map is undoubtedly a meaningful use of a geographical display. However, if we focus on the marked area, some issues come up. Firstly, it is hard to compare the lengths of bars that are not on the same side without counting grid lines; secondly, longitudinal bars and latitudinal bars may cross over each other as the numbers grow; finally, once a bar reaches the street edge on the other side, how do we deal with the next number? These issues occur everywhere in real maps, and especially in this project: not all roads and streets can be represented as straight lines. Therefore, choosing suitable objects to present the dataset becomes the most important step in visual mapping and data transformation. The Google Earth API and Google Maps API provide the visualization designer with a number of effective patterns for showing information, such as placemarks, icons, 3D buildings, polygons and balloons. As introduced before, Google Earth and Google Maps work by overprinting different layers on satellite photos, so it will not be hard to present sensible visual data if we make a good choice of patterns.

3.2.3 Interactions

Instead of complex processes that filter and arrange objects, a better way to solve the issues above is to use standard graph visualization techniques and geographical features in collaboration, with interactions relating them together.

Figure 3.2.3-1 graph visual techniques and geographical interaction

Interaction is the way for users to manage, control and explore the data or change the viewpoint. Another enhancement to user interaction enables the users to traverse the display laterally and run through several of the prefixes.

The purpose of this project is not only to show energy variation in real time, but also to rank departments by values calculated weekly and monthly. Interaction should allow users to explore overview information, zoom into any specific department they are interested in, or filter out any information they do not care about. On the whole, Google Earth shows information in 3D while the Google Maps view is based on 2D maps; therefore, combining real-time variation with ranking tables, we can display the real-time information as 3D patterns over Google Earth and represent the ranking results as 2D tables over Google Maps. Although Google Maps and Google Earth work in separate ways, they are closely related through their coordinates.


3.3 Conclusion

In this chapter we described the design of the interactive map system at a high level, along with some of the more interesting details of the design of the data transformation and of the individual visual patterns added to the overall framework. In summary, the design is: download the dataset from the website by URL list, filter out useless data and transform the daily dataset into 3D patterns over Google Earth; calculate and represent weekly and monthly ranking results as 2D tables over Google Maps; then add interaction components to both. In the next chapter we present the development and implementation of this project, following this design summary.


Chapter 4: Development and Implementation

This chapter presents the development and implementation of this project step by step. As in chapter 3, where the system design was summarized into two parts, the data side and the view side, we follow the same two-part pattern here. Section 4.1 presents the development of data exploration in the two data process modules, and section 4.2 then describes the methods of graphical development and the related interactions.

4.1 Data Exploration

Data exploration in this project starts with data acquisition, which obtains the data over a network and then parses and filters it, providing some structure for the data’s meaning and ordering it into categories. The next steps are to apply methods from statistics or data mining to discern patterns or place the data in a mathematical context, and to improve the basic representation to make it clearer and more visually engaging. We will follow the two data modules introduced in the last chapter as a guide to the implementation of data exploration: the data access module and the data manipulation module.

4.1.1 Data access module

In this module, we develop the methods that implement data acquisition. The .zip data files can be downloaded by URL from the UCD Energy Management Unit web server; the .zip files are then unzipped to obtain the Excel files. By selecting and reading the daily data tables, the data can be passed on to the next steps.


[Figure: the web server provides a URL list; the .zip files are downloaded and saved on the local host, unzipped into Excel files, and their data tables are read and the collected data passed to the data manipulation module]

Figure 4.1.1-1 data access module

Considering that the aim of the project is to build a visualization on a dynamic website, the final destination of the data must be a web page composed of HTML, JavaScript, the Google Earth API, the Google Maps API and other structural components. Page composition is usually data-driven and collates information ad hoc each time a page is requested. JavaScript provides dynamic scripts to the client side via the web; in order to avoid any intentional attack through the browser, scripts run in a sandbox in which they can only perform web-related actions, not general-purpose programming tasks like creating files. Furthermore, scripts are constrained by the security policy: scripts from one web site do not have access to information such as usernames, passwords, or cookies sent to another site. Consequently, rather than JavaScript, Java (J2SE) is more suitable for the data access module.

4.1.1.1 Download by a URL list (HttpGet.class)

The Vector class is used as both the URL container and the file-name container here. A Vector holds components that can be accessed much like an array, but its size is flexible enough to accommodate adding and removing items; in addition, Vector is synchronized, which means it is thread-safe. The class works in the following way. The first step is to clear both vectors of any URLs or file names they already hold:

public void resetList() {
    vDownLoad.clear();
    vFileList.clear();
}

Then the URL vector is filled with the URL list from the website, and the file-name vector gives each entry in the list its corresponding file name:

public void downLoadByList() {
    // ...
    for (int i = 0; i < vDownLoad.size(); i++) {
        url = (String) vDownLoad.get(i);
        filename = (String) vFileList.get(i);
        // ...

The java.net package is then used to open a connection to each file linked on the website and download it as a stream:

public void saveToFile() {
    // ...
    httpUrl = (HttpURLConnection) url.openConnection();
    httpUrl.connect();
    bis = new BufferedInputStream(httpUrl.getInputStream());
    fos = new FileOutputStream(fileName);

Finally, the downloaded files are saved and closed and the connection to the website is dropped:

    httpUrl.disconnect();
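For completeness, the fragments above can be assembled into a small, self-contained download routine as sketched below; the class name, the method signature, the buffer size and the byte-copy loop are illustrative assumptions rather than the exact thesis code.

import java.io.BufferedInputStream;
import java.io.FileOutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpGetSketch {
    // Hypothetical sketch: download one remote .zip file and save it locally.
    public void saveToFile(String fileUrl, String fileName) throws Exception {
        HttpURLConnection httpUrl =
                (HttpURLConnection) new URL(fileUrl).openConnection();
        httpUrl.connect();
        BufferedInputStream bis = new BufferedInputStream(httpUrl.getInputStream());
        FileOutputStream fos = new FileOutputStream(fileName);
        byte[] buffer = new byte[4096];          // illustrative buffer size
        int length;
        while ((length = bis.read(buffer)) != -1) {
            fos.write(buffer, 0, length);        // stream the bytes to disk
        }
        fos.close();
        bis.close();
        httpUrl.disconnect();                    // drop the connection when done
    }
}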

4.1.1.2 Unzip Files (TestZip.class)

This class works in a simple way: the java.util.zip package provides classes for reading and writing the standard ZIP and GZIP file formats, and the extracted files are saved to appointed file paths.

Figure 4.1.1.2-1 TestZip class diagram (without methods)
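Since only the class diagram is shown, the sketch below illustrates how such an unzip step can be written with java.util.zip; the class name, the method signature and the destination-directory handling are assumptions for illustration, not the exact TestZip code.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class TestZipSketch {
    // Hypothetical sketch: extract every entry of a .zip file into destDir.
    public void unzip(String zipPath, String destDir) throws IOException {
        ZipInputStream zis = new ZipInputStream(new FileInputStream(zipPath));
        ZipEntry entry;
        byte[] buffer = new byte[4096];
        while ((entry = zis.getNextEntry()) != null) {
            if (entry.isDirectory()) {
                continue;                        // only plain files are expected here
            }
            FileOutputStream fos =
                    new FileOutputStream(new File(destDir, entry.getName()));
            int length;
            while ((length = zis.read(buffer)) != -1) {
                fos.write(buffer, 0, length);    // write the entry to its appointed path
            }
            fos.close();
            zis.closeEntry();
        }
        zis.close();
    }
}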


4.1.1.3 Excel Reader (ExcelReader.class)

The Apache POI team releases the POI project, which consists of Java APIs for manipulating various Microsoft file formats. By importing this package, we can read and write MS Excel files using Java. The figure below shows how this class reads data from an Excel file:

Figure 4.1.1.3-1 ExcelReader class diagram (without methods)

By indexing sheets and rows, the class always returns the values in the last row, which contains the current day’s data, as a String array.
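As only the class diagram is given, the following sketch shows the idea with the POI HSSF API: open the workbook, take the third (real-time) sheet, and return its last row as a String array. The class name, the exact POI calls and the cell-to-string conversion are assumptions for illustration.

import java.io.FileInputStream;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;

public class ExcelReaderSketch {
    // Hypothetical sketch: read the last row of the real-time (third) sheet.
    public String[] readLastRow(String xlsPath) throws Exception {
        HSSFWorkbook workbook = new HSSFWorkbook(new FileInputStream(xlsPath));
        HSSFSheet sheet = workbook.getSheetAt(2);           // third sheet holds the real-time data
        HSSFRow row = sheet.getRow(sheet.getLastRowNum());  // last row: the current day
        String[] values = new String[row.getLastCellNum()];
        for (int i = 0; i < values.length; i++) {
            values[i] = String.valueOf(row.getCell(i));     // one fifteen-minute reading per cell
        }
        return values;
    }
}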

Concurrency and Timer

Data access is performed in a particular order: downloading first, unzipping second and reading last. Files are passed between classes by referring to their file names. In order to control the process in sequence, concurrent programming must be taken into account.

The process may be interrupted or throw exceptions if a file is unzipped before it has been completely downloaded; in other words, the three threads form a producer-consumer problem. A counting semaphore is a counter for a set of available resources, rather than a locked/unlocked flag for a single resource, so semaphores can be employed when multiple threads need to achieve an objective cooperatively.

public class DownloadThread extends Thread {
    Semaphore a, b, c;
    ...
    public void run() {
        ...
        a.release();                     // signal that a downloaded file is available
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
        }
    }
}

public class ZipThread extends Thread {
    Semaphore a, b, c;
    ...
    public void run() {
        try {
            a.acquire();                 // wait for a completed download
            c.acquire();                 // wait for the previous read cycle to finish
        } catch (InterruptedException e) {
        }
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
        }
        ...
        b.release();                     // signal that an unzipped file is available
    }
}

public class XlsThread extends Thread {
    Semaphore a, b, c;
    ...
    public void run() {
        try {
            b.acquire();                 // wait for an unzipped file
            a.acquire();
        } catch (InterruptedException e) {
        }
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
        }
        ...
        c.release();                     // signal that the reading step has finished
    }
}

public class HttpTest {
    public static void main(String args[]) {
        Semaphore aArrived = new Semaphore(1);
        Semaphore bArrived = new Semaphore(0);
        Semaphore cArrived = new Semaphore(1);
        // ht, tz and ex are presumably the HttpGet, TestZip and ExcelReader
        // worker objects created earlier (elided in the original listing).
        new DownloadThread(ht, aArrived, bArrived, cArrived).start();
        new ZipThread(tz, aArrived, bArrived, cArrived).start();
        new XlsThread(ex, aArrived, bArrived, cArrived).start();
    }
}

Each acquire() blocks if necessary until a corresponding release() adds a permit, potentially releasing a blocked acquirer. However, no actual permit objects are used; the Semaphore simply keeps a count of the number available and acts accordingly. Furthermore, the process uses a "blocking semaphore", i.e. a semaphore initialized to zero: any thread that performs an acquire on it blocks until another thread releases a permit.

Another add-on in this module that controls the data access process is a timer, provided by the java.util.Timer, java.util.TimerTask and java.util.Calendar classes.

TimerTask task = new TimerTask() {
    public void run() {
        try {
            // task body omitted in the original listing
        } catch (Exception err) {
        }
    }
};

Timer timer = new Timer();
// INTERVAL: repeat period in seconds (placeholder for the value used in the project)
timer.schedule(task, Calendar.getInstance().getTime(), INTERVAL * 1000);

These classes give threads a facility to schedule tasks for one-time or repeated execution by a Timer running in a background thread.
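Putting these pieces together, the listing below is a self-contained sketch of how the acquisition pipeline could be scheduled once per day; the class name DailyScheduler and the daily period are assumptions for illustration, and the actual task body would be the download/unzip/read sequence described above.

import java.util.Calendar;
import java.util.Timer;
import java.util.TimerTask;

public class DailyScheduler {
    public static void main(String[] args) {
        TimerTask task = new TimerTask() {
            public void run() {
                // Placeholder for the download -> unzip -> read sequence described above.
                System.out.println("Running daily data acquisition...");
            }
        };
        long oneDayMillis = 24L * 60 * 60 * 1000;   // repeat period: one day
        Timer timer = new Timer();
        // Start immediately and repeat every 24 hours in a background thread.
        timer.schedule(task, Calendar.getInstance().getTime(), oneDayMillis);
    }
}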

4.1.2 Data manipulation module

If the data access module is mainly about acquiring data, the data manipulation module deals with data preprocessing as a whole. At this point the data has been extracted into a String array. Since the purpose of the project is to give a visual impression of energy consumption behaviour and to show energy peaks, patterns and trends through a suitable ranking method based on the energy gradient, the data must be transformed into a numeric type. In Java, conversions are performed automatically only when the type of the expression on the right-hand side of an assignment can be safely promoted to the type of the variable on the left-hand side; a String cannot be promoted to double, so it is parsed explicitly:

double data = Double.parseDouble(rows[j]);

Although a simple subtraction of the energy sums of two consecutive days is capable of showing the energy gradient, this arithmetic is unfair when ranking two departments of obviously different scale. Ranking the percentage changes of the departments is therefore a better way to show their energy gradients. To find the percentage change of a department, first find the change in its sum between two consecutive days, and then divide this amount by the sum of the previous day. Since this decimal quotient carries the same information as a percentage, it is unnecessary to express it as a percent.
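As a minimal sketch of this calculation (the class and method names, and the example figures, are chosen purely for illustration):

public class EnergyGradient {

    // Percentage change between two consecutive daily sums, kept as a decimal
    // fraction (e.g. 0.05 for a 5% increase), as described above.
    public static double percentageChange(double previousDaySum, double currentDaySum) {
        return (currentDaySum - previousDaySum) / previousDaySum;
    }

    public static void main(String[] args) {
        // Example: yesterday 1200.0, today 1260.0 -> prints 0.05
        System.out.println(percentageChange(1200.0, 1260.0));
    }
}

The sign of the quotient distinguishes a decrease from an increase, which is what the colour mapping described later relies on.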

4.1.2.1 Doubly-linked list and ranking method

This project implements a doubly-linked list with sentinel head and tail nodes to perform the ranking method. A linked list is a powerful data structure that supports sorting algorithms in Java. Unlike a singly-linked list, which only allows movement through the list in the forward direction, each element of a doubly-linked list contains a piece of data together with pointers to the previous and next elements; using these pointers it is possible to move through the list in both directions. The figure below presents the structure of the doubly-linked list.

Figure 4.1.2.1-1 a doubly-linked list with sentinel head and tail nodes

The ranking table we need contains not only the percentage change values but also the corresponding department names. Thus each node of the doubly-linked list holds a pair of data records: a building name (String) and a percentage change value (Double).

public Node(String a, Double b) {
    place = a;
    val = b;
}

public String getPla() {
    return place;
}

public Double getVal() {
    return val;
}

Figure 4.1.2.1-2 Ranking method

After the ExcelReader class has completed the calculation of the percentage changes, each value and the first half of its file name, which indicates the building, are placed into the doubly-linked list. Bubble sorting then takes all the double values in the list and sorts them. The doubly-linked list is then rearranged by recombining the sorted values with their corresponding names. In case several buildings have the same percentage change, once a node has been ranked it is deleted from the original doubly-linked list, so that a later node with the same value is not matched twice.

public String FindbyVal(double s) {
    current = head;
    boolean found = false;
    while (!found && current != null) {
        if (current.val == s) {
            v = current.place;
            if (current == head) {
                head = current.next;
            } else {
                current.pre.next = current.next;
            }
            if (current == tail) {
                tail = current.pre;
            } else {
                current.next.pre = current.pre;
            }
            found = true;
        } else {
            current = current.next;
        }
    }
    return v;
}

Finally, the process rounds each value to two decimal places and writes the ranking table into an XML-format file as the daily record, in which buildings and values are indexed by tags.

for (int cou = 0; cou < count; cou++) {
    out.writeBytes("<place>" + h[0].FindbyVal(d[cou]) + "</place>" + "\r");
    DecimalFormat fnum = new DecimalFormat("#0.00");
    String str = fnum.format(d[cou]);
    out.writeBytes("<data>" + str + "</data>" + "\r");
}
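To make the step above concrete, the listing below is a self-contained sketch that sorts the percentage changes with a simple bubble sort and writes the tag-indexed pairs to a file. It uses two parallel arrays instead of the project's doubly-linked list, and the class name RankingWriter and the descending sort order are assumptions of this sketch.

import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.text.DecimalFormat;

public class RankingWriter {

    // Bubble-sorts the percentage changes (descending) together with their
    // building names, then writes the ranking as <place>/<data> pairs.
    public static void writeRanking(String[] places, double[] changes, String outPath) throws Exception {
        for (int i = 0; i < changes.length - 1; i++) {
            for (int j = 0; j < changes.length - 1 - i; j++) {
                if (changes[j] < changes[j + 1]) {
                    double tmpVal = changes[j];
                    changes[j] = changes[j + 1];
                    changes[j + 1] = tmpVal;
                    String tmpPla = places[j];
                    places[j] = places[j + 1];
                    places[j + 1] = tmpPla;
                }
            }
        }
        DecimalFormat fnum = new DecimalFormat("#0.00");
        DataOutputStream out = new DataOutputStream(new FileOutputStream(outPath));
        for (int i = 0; i < changes.length; i++) {
            out.writeBytes("<place>" + places[i] + "</place>" + "\r");
            out.writeBytes("<data>" + fnum.format(changes[i]) + "</data>" + "\r");
        }
        out.close();
    }
}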

4.2 Visualization Development

Before starting to develop Google Earth or Google Maps applications, the first task is to sign up for an API key. The API key must specify the web site URL that will be used in the project; a single API key is valid for a single directory or domain. The core of the Google Earth/Maps API is the JavaScript component that is loaded from Google each time a Google Earth/Maps web page is opened. This JavaScript component provides the interface to the Google Earth/Maps service and generates the map on screen by loading the necessary image components and tiling them onto the display. As introduced before, this project combines the Google Earth API with the Google Maps API in order to integrate 2D visualization techniques with 3D visualization techniques.

4.2.1 Google Earth API

The Google Earth API is a JavaScript-enabled browser plug-in version of Google Earth, so the visual components of this project were developed mainly in JavaScript. All Google Earth applications start off with a simple earth. The code below imports the Google Earth library; the key generated before must be inserted here.

<script type="text/javascript" src="http://www.google.com/jsapi?key=ABQIAAAAwbkbZLyhsmTCWXbTcjbgbRSzHs7K5SvaUdm8ua-Xxy_-2dYwMxQMhnagaawTo7L1FE1-amhuQxIlXw"></script>

The basic Google Earth interface is shown below:

Figure 4.2.1-1 a basic Google Earth

In order to provide the application functionality for data visualization, a number of different elements were created on top of this initial display. According to the Google Earth API Developer's Guide [39], there are several commonly used elements:

1) Controls: basic interface controls that enable the user to zoom in and out of the earth and move about it effectively.


2) Overlays: information that is displayed on top of the earth at a particular location.

3) Events: Occurrences of an operation, such as the user clicking on a point of interest.

The figure below presents Google Earth with controls and some layers:

Figure 4.2.1-2 Google Earth with Controls and some layers

4.2.2 Controls

The Google Earth API provides standard navigation controls, which allow the user to interact with the earth at a basic level, that is panning, tilting, and zooming with buttons or by mouse-over. The controls are created by the code below:

ge.getNavigationControl().setVisibility(ge.VISIBILITY_AUTO);

4.2.3 Overlays

Overlays are the outcomes of the top-layer visual mapping in this project. All the overlays are displayed as different layers or objects over the earth.

1) Layers:

ge.getLayerRoot().enableLayerById(ge.LAYER_NAME, true);

By default, the terrain layer is the only layer displayed when the Google Earth Plug-in first loads. The other layers used in this project are:

LAYER_BORDERS: shows country and area borders, and place labels for cities, states, countries, oceans, etc.

LAYER_ROADS: displays roads and road names.

LAYER_BUILDINGS: 3D buildings.

Informational layers: display current view information such as the geographic coordinates and the altitude of the terrain below the current cursor position, or an inset overview map, enabled with:

setStatusBarVisibility()
setOverviewMapVisibility()

2) Objects

Representing data as objects over Google Earth is the most important visual mapping in this project. Objects are enumerated, added, removed and manipulated through object containers: for example, GEFeatureContainers contain features, like folders in KML, and GEGeometryContainers hold any number of geometries in a multi-geometry object. Colorful polyhedron objects in KML and 3D building models are the two features used to display the geographic data and the energy data in this project.

4.2.4 Data Displays

The Google Earth API has the capability to show the variation of daily energy across the Belfield campus in a 3D visual form. Two types of data have been saved into an XML file after data preprocessing: the first is the energy percentage changes, and the second is the department names. A further dataset holding the coordinates of the departments is created here in order to transform the name data into geographic data that can be displayed as 3D buildings. The Google Earth API is able to show 3D buildings and similar structures in COLLADA format. In this project the 3D buildings are generated using the 3D modeling software Google SketchUp, and different departments and buildings are identified by their 3D models over Google Earth. The figure below presents Computer Science and Informatics as a COLLADA 3D building on Google Earth.

Figure 4.2.1.3-1 3D CSI model

The 3D buildings are loaded into Google Earth via links to the models stored online and the preset coordinate dataset.

var placemark = ge.createPlacemark('');
placemark.setName('model');
var model = ge.createModel('');
placemark.setGeometry(model);
var link = ge.createLink('');
link.setHref('http://….' + '3D model.dae');
model.setLink(link);
var lookAt = ge.getView().copyAsLookAt(ge.ALTITUDE_RELATIVE_TO_GROUND);
var loc = ge.createLocation('');
loc.setLatitude(LatitudeValue);
loc.setLongitude(LongitudeValue);
model.setLocation(loc);
ge.getFeatures().appendChild(placemark);
lookAt.setRange(300);
lookAt.setTilt(80);
ge.getView().setAbstractView(lookAt);

However, the 3D buildings themselves have no capability to show value variations, so five polyhedron objects in different colors are added to each 3D building to represent the real-time quantity variation. The percentage changes of the daily energy costs are read by a Java applet, since browser scripts are prevented from accessing local data. The daily ranking result is saved in XML format, so the quantity data can be read by the index of the <daily data></daily data> tag, while the department name can be read by the index of the <place></place> tag. Code slice as below:

int startIndex = lineContent.indexOf("<place>");
int endIndex = lineContent.indexOf("</place>");
if ((startIndex != -1) && (endIndex != -1)) {
    ss = lineContent.substring(startIndex + 7, endIndex);
    pla[ii] = ss;
    ii++;
}
int startIndex2 = lineContent2.indexOf("<daily data>");
int endIndex2 = lineContent2.indexOf("</daily data>");
if ((startIndex2 != -1) && (endIndex2 != -1)) {
    sss = lineContent2.substring(startIndex2 + 12, endIndex2);
    dat[i] = sss;
    i++;
}
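A self-contained version of this parsing step might look as follows; the class name DailyXmlReader and the file name Daily.xml are assumptions of this sketch, which simply mirrors the indexOf/substring logic of the code slice above.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

public class DailyXmlReader {

    // Reads the daily ranking file line by line and collects the text between
    // <place>...</place> and <daily data>...</daily data> tags.
    public static void main(String[] args) throws Exception {
        List<String> places = new ArrayList<String>();
        List<String> values = new ArrayList<String>();
        BufferedReader reader = new BufferedReader(new FileReader("Daily.xml"));
        String line;
        while ((line = reader.readLine()) != null) {
            int ps = line.indexOf("<place>");
            int pe = line.indexOf("</place>");
            if (ps != -1 && pe != -1) {
                places.add(line.substring(ps + "<place>".length(), pe));
            }
            int ds = line.indexOf("<daily data>");
            int de = line.indexOf("</daily data>");
            if (ds != -1 && de != -1) {
                values.add(line.substring(ds + "<daily data>".length(), de));
            }
        }
        reader.close();
        System.out.println(places + " " + values);
    }
}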

Then the method below decides, from the percentage change, which colored polyhedron object of each department should be shown at that time. The Java applet returns a String array with parts of the KML links to the HTML page whenever the method is called.

for (int j = 0; j < i; j++) {
    d = dat[j];
    p = pla[j];
    double dnum = Double.parseDouble(d);
    if (dnum <= -1) {
        s = p + "DBlue";        // dark blue
    } else if (dnum > -1 && dnum <= 0) {
        s = p + "blue";         // blue
    } else if (dnum > 0 && dnum <= 0.5) {
        s = p + "yellow";       // yellow
    } else if (dnum > 0.5 && dnum <= 1) {
        s = p + "red";          // red
    } else {
        s = p + "DRed";         // dark red
    }
    sum[j] = s;
}
return sum;

The code slice above shows that the objects are created in different hues. In the last chapter we mentioned that green objects may be obscured against the project's visual location, hence three other hues are used in this project. The hue dimension is circular and is typically drawn as a hue circle, as below:


Figure 4.2.1.3-2 hue circle

Since red is a warm color and blue a cold color, with yellow in between, we use these three hues at different values to represent five energy percentage change ranges. For example, if the percentage change is less than minus one, that is the energy cost decreased by more than one hundred percent of the previous day's value, the object in dark blue is displayed. The HTML page embeds the Java applet with the <applet> tag and then displays the corresponding polyhedron objects according to the current array list. The function below displays the KML files (polyhedron objects) beside the corresponding 3D buildings.

function UpdKMLbytime() {
    arr = document.getdata.update();
    if (kmlObjectsArray.length > 0) {
        for (current = 0; current < arr.length; current++) {
            ge.getFeatures().removeChild(kmlObjectsArray[current]);
        }
    }
    current = 0;
    for (i = 0; i < arr.length; i++) {
        kmlUrl = 'http://193.1.132.19/svn/spiderrepository/lei/UCDapp/kmls/' + arr[i] + '.kml';
        google.earth.fetchKml(ge, kmlUrl, function finishfirstFetchKml(kmlObject) {
            if (kmlObject) {
                kmlObjectsArray[current] = kmlObject;
                ge.getFeatures().appendChild(kmlObjectsArray[current]);
                current++;
            } else {
                alert('Bad KML');
                return;
            }
        });
    }
}

Google Earth fetches the KML files by their URL links, and the KML files are then filled into an object array. In order to update the KML files at specified intervals, the setInterval() method is added to the page. The update process checks for and removes any previously loaded KML files in the object array and then reloads the new KML files. Figure 4.2.1.3-3 presents the 3D Student Centre with a blue polyhedron object.

Figure 4.2.1.3-3 3D Student Centre with blue polyhedron object

4.2.5 Google Maps API

The basic Google Maps API interface has a number of functionalities similar to the Google Earth API, such as the ability to zoom, move about the map, and overlay information such as map markers, information windows, and routes on top of a Google Map. Here we describe only the functionalities that are used in this project.

Markers: these are generated within the application through the GMarker class. In general, an icon is used to highlight a specific item, so a marker usually gives a specific reference point for a location.

Bubbles: bubbles are presented when an icon or its associated description is clicked. The exact content of a bubble is based on HTML and depends on the information that is available, so the information content of a pop-up is completely customizable.

…………………

4.2.6 Interactions

The Google Earth API and the Google Maps API provide a number of different events, which can be used with google.earth.addEventListener() and GEvent.addListener() to add interactivity to the project. Since JavaScript within the browser responds to interactions by generating events and expects a program to listen for interesting events, actions can be triggered on mouse events or screen events.

4.3 Conclusion

This chapter presented the development and implementation of the project. We started from data exploration and data transformation and then introduced the data views and the user interface built on the functionality provided by the Google Earth API and the Google Maps API. The next chapter evaluates the project.


Chapter 5: Evaluation

Testing

Conclusion

Chapter 6: Conclusion and Further Work

Information visualization encounters a wide variety of data domains, and the visualization community has developed corresponding representation methods and interactive techniques.


References:

[1] http://energy32.ucd.ie/live_data_ucd.shtml
[2] http://code.google.com/intl/en/apis/maps/
[3] J. Bertin. Graphics and Graphic Information Processing. Walter de Gruyter, 1981.
[4] B. Shneiderman. The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. Proc. IEEE Symposium on Visual Languages, 1996.
[5] http://en.wikipedia.org/wiki/Geographic_information_science
[6] http://www.gis.com/whatisgis/
[7] http://www.esri.com/
[8] http://www.informapusa.com/
[9] C. Ware. Information Visualization: Perception for Design. Morgan Kaufmann, 2004.
[10] S. Card, J. Mackinlay, B. Shneiderman. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann, 1999.
[11] P. Saraiya, C. North, K. Duca. An Insight-Based Methodology for Evaluating Bioinformatics Visualizations. IEEE Transactions on Visualization and Computer Graphics, 2005.
[12] W. S. Cleveland. Visualizing Data. AT&T Bell Laboratories, Murray Hill, NJ; Hobart Press, Summit, NJ, 1993.
[13] D. A. Keim. Information Visualization and Visual Data Mining. IEEE Transactions on Visualization and Computer Graphics, 2002.
[14] R. Spence, L. Tweedie, H. Dawkes, H. Su. Visualization for Functional Design. Proc. Int. Symp. on Information Visualization, Atlanta, GA, 1995.
[15] D. F. Andrews. Plots of High-Dimensional Data. Biometrics, vol. 29, 1972.
[16] P. J. Huber. Projection Pursuit. The Annals of Statistics, vol. 13, 1985.
[17] J. J. van Wijk and R. D. van Liere. HyperSlice. Proc. Visualization '93, CA, 1993.
[18] A. Inselberg and B. Dimsdale. Parallel Coordinates: A Tool for Visualizing Multi-Dimensional Geometry. Proc. Visualization '90, San Francisco, CA, 1990.
[19] H. Chernoff. The Use of Faces to Represent Points in k-Dimensional Space Graphically. Journal of the American Statistical Association, vol. 68, 1973.
[20] R. M. Pickett and G. G. Grinstein. Iconographic Displays for Visualizing Multidimensional Data. Proc. IEEE Conf. on Systems, Man and Cybernetics, IEEE Press, Piscataway, NJ, 1988.
[21] D. A. Keim and H.-P. Kriegel. VisDB: Database Exploration Using Multidimensional Visualization. Computer Graphics & Applications, vol. 6, 1994.
[22] M. Hearst. TileBars: Visualization of Term Distribution Information in Full Text Information Access. Proc. of ACM Human Factors in Computing Systems Conf., 1995.
[23] D. A. Keim, H.-P. Kriegel, and M. Ankerst. Recursive Pattern: A Technique for Visualizing Very Large Amounts of Data. Proc. Visualization '95, Atlanta, GA, 1995.
[24] M. Ankerst, D. A. Keim, and H.-P. Kriegel. Circle Segments: A Technique for Visually Exploring Large Multidimensional Data Sets. Proc. Visualization '96, Hot Topic Session, San Francisco, CA, 1996.
[25] D. A. Keim and H.-P. Kriegel. Database Exploration Using Multidimensional Visualization. Computer Graphics & Applications, 1994.
[26] J. LeBlanc, M. O. Ward, N. Wittels. Exploring N-Dimensional Databases. Proc. Visualization '90, San Francisco, CA, 1990.
[27] S. Feiner, C. Beshers. Worlds within Worlds: Metaphors for Exploring n-Dimensional Virtual Worlds. Proc. UIST, 1990.
[28] B. Shneiderman. Tree Visualization with Treemaps: A 2D Space-Filling Approach. ACM Transactions on Graphics, vol. 11, 1992.
[29] B. Johnson. Visualizing Hierarchical and Categorical Data. Ph.D. Thesis, Department of Computer Science, University of Maryland, 1993.
[30] http://en.wikipedia.org/wiki/Web_mapping
[31] M.-J. Kraak. Settings and Needs for Web Cartography. In: Kraak and Brown (eds.), Web Cartography, Taylor and Francis, New York, 2001.
[32] http://earth.google.com/
[33] http://en.wikipedia.org/wiki/Google_Maps#Google_Maps
[34] http://www.programmableweb.com/api/google-maps
[35] http://www.programmableweb.com/api/google-earth
[36] http://www.javascript.com/
[37] http://code.google.com/apis/kml/documentation/
[38] http://www.ph.ucla.edu/epi/snow.html
[39] http://code.google.com/intl/en/apis/earth/documentation/index.html
