Towards Interactive Sustainable Neighborhood Design:

advertisement
Towards Interactive Sustainable Neighborhood Design:
Combining a Tangible User Interface with Real Time
Building Simulations
by
Cody M. Rose
B.S. Computer Science
Rensselaer Polytechnic Institute, 2008
SUBMITTED TO THE DEPARTMENT OF ARCHITECTURE IN PARTIAL
FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE IN BUILDING TECHNOLOGY
AT THE
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
JUNE 2015
© 2015 Massachusetts Institute of Technology. All rights reserved.
Signature of Author:
Department of Architecture
May 8, 2015
Certified by:
Christoph Reinhart
Associate Professor of Building Technology
Thesis Supervisor
Accepted by:
Takehiko Nagakura
Associate Professor of Design and Computation
Chair, Department Committee on Graduate Students
1
The work described in this thesis was made possible by the Center for Complex
Engineering Systems at the King Abdulaziz City for Science and Technology in Saudi
Arabia and the Massachusetts Institute of Technology.
The work described in this thesis was made enjoyable by the wonderful people at those
institutions, including my colleagues in Building Technology, our collaborators in the
Media Lab, and the KACST team, all of whom were an absolute delight to work with.
And if not for my parents, George and Carrie, and my advisor, Dr. Christoph Reinhart, I
wouldn’t be here at all, in more ways than one.
2
Towards Interactive Sustainable Neighborhood Design:
Combining a Tangible User Interface with Real Time
Building Simulations
by
Cody M. Rose
Submitted to the Department of Architecture
on May 8, 2015 in Partial Fulfillment of the
Requirements for the Degree of Master of Science in
Building Technology
ABSTRACT
An increasingly urbanizing human population presents new challenges for urban planners
and designers. While the field of urban design tools is expanding, urban development
scenarios require the input of multiple stakeholders, each with different outlooks, expertise,
requirements, and preconceptions, and good urban design requires communication and
compromise as much as it requires effective use of tools. The best tools will facilitate this
communication while remaining evidence-based, allowing diverse planning teams to develop
high quality, healthy, sustainable urban plans.
Presented in this work is a new such urban design tool, implemented as a design “game,”
created to facilitate collaboration between urban planners, designers, policymakers, citizens,
and any other stakeholders in urban development scenarios. Users build a neighborhood or
city out of Lego pieces on a plexiglass tabletop, and the system simulates the built design in
real time, projecting colors onto the Lego pieces that reflect their performance with respect
to three urban performance metrics: operational energy consumption, neighborhood
walkability, and building daylighting availability. The system requires little training, allowing
novice users to explore the design tradeoffs associated with urban density. The simulation
method uses a novel precalculation method to quickly approximate the results of existing,
validated simulation tools. The game is presented in the context of a case study that took
place at the planning commission of Riyadh, Saudi Arabia in March 2015.
Post-game analysis indicates that the precalculation method performs suitable
approximations in the Saudi climate, and that users were able to use the interface to improve
their neighborhoods’ performance with respect to two of the three offered performance
metrics. Furthermore, users demonstrated substantial enthusiasm for interactive, tangible,
urban design of the sort provided. Improvements to future versions of the design game
based on the case study are suggested, but overall, the work presented indicates that
collaborative, interactive design tools for diverse stakeholders are an excellent path forward
for sustainable design.
Thesis Supervisor: Christoph Reinhart
Title: Associate Professor of Building Technology
3
Table of Contents
1
Introduction ......................................................................................................................... 5
2
Case study ............................................................................................................................. 7
3
The game .............................................................................................................................. 8
4
5
3.1
Metrics .......................................................................................................................... 8
3.2
Interface ........................................................................................................................ 8
3.3
System implementation ............................................................................................ 10
3.4
Blocks.......................................................................................................................... 13
3.5
Metric calculation and scoring................................................................................. 14
3.5.1
Operational energy ........................................................................................... 14
3.5.2
Walkability.......................................................................................................... 18
3.5.3
Daylighting potential ........................................................................................ 20
Gameplay results ............................................................................................................... 23
4.1
Group 1 ...................................................................................................................... 23
4.2
Group 2 ...................................................................................................................... 23
4.3
Group 3 ...................................................................................................................... 25
4.4
Group 4 ...................................................................................................................... 25
Discussion .......................................................................................................................... 36
5.1
Utility as design assistant .......................................................................................... 36
5.2
Approximation accuracy .......................................................................................... 36
5.2.1
Energy ................................................................................................................ 37
5.2.2
Daylighting ......................................................................................................... 38
5.3
Summary ..................................................................................................................... 41
6
Conclusion .......................................................................................................................... 43
A
Assembly definitions ......................................................................................................... 44
B
Building definitions ........................................................................................................... 45
C
Sky view factor raytrace parameters................................................................................ 45
D
Walkability score decay ..................................................................................................... 46
E
Urban Daylight parameters .............................................................................................. 46
F
Building Presimulation Results ........................................................................................ 46
Bibliography ................................................................................................................................ 73
4
1 Introduction
Cities across the planet are growing at an unprecedented speed: The United Nations predicts
two million additional city dwellers per week until 2030 [1]. To accommodate this massive
growth, cities must both expand and densify, but the quality and efficiency of new and
existing neighborhoods must not therefore suffer. They should instead be designed for
resource-efficiency, with quality indoor and outdoor spaces that support communities and
favor human-powered modes of transportation such as walking and biking. In recent years,
the building performance simulation community has made significant progress towards
developing planning tools that predict various measures of urban sustainability, from
operational [2]–[4] and embodied building energy use [5] to daylight [6] and walkability [7].
Some of these tools are usable by urban designers and architects, but the neighborhood
design process generally involves many more stakeholders, including city governments,
citizens, developers, financiers, and others, and is far more complicated than simple selection
of the "best" solution as identified by a computer program. Diverse interest groups prioritize
different urban attributes, so maximal satisfaction requires compromise.
Unfortunately, with stakeholders’ varied interests come varied preconceptions and
assumptions. In order to make multi-stakeholder urban planning discussions more evidencebased and productive, design proposals must be evaluated based on meaningful urban
performance metrics so that tradeoffs can be better understood. Indeed, using urban
planning tools, a design team can nowadays prepare design variants before a stakeholder
meeting to guide a discussion, but consideration of new ideas during the meeting itself is
impractical due to model input and execution time. Consequently, the shared creative
wisdom of all involved stakeholders is underutilized, and a lack of participant “ownership”
over prepared solutions hinders buy-in. In the worst cases, this disengagement causes
discussants to revert to conversation grounded in their individual preconceptions, dismissing
(implicitly or otherwise) the evidence-based proposals presented entirely.
The goal of the work described in this manuscript was therefore development of a nextgeneration urban design tool that would (a) allow non-expert stakeholders to actively
contribute their ideas during a planning session and (b) provide real-time analysis feedback
on emerging design ideas in order to quickly advance the design process and help
participants identify acceptable solutions. This tool was jointly built by two MIT research
groups: the Sustainable Design Lab (SDL) within the Department of Architecture, and the
Changing Places group within the Media Lab. The two groups respectively contributed
expertise developing sustainable neighborhood planning software and experience creating
tangible user interfaces (TUIs) for collaborative design. A third collaborator, the Center for
Complex Systems Engineering Systems (CCES) at the King Abdullah Center for Science and
Technology (KACST) assisted with the tool development and coordinated a case study for
use of the tool in Riyadh, Saudi Arabia in March 2015.
The design tool took the form of a design “game” that players could play to develop a
neighborhood. Why a game? The introduction of simulation-guided design to novices goes
back at least to 1989, with Will Wright’s SimCity [8]. As a commercial entertainment
product, SimCity needed to be fun and engaging, and found great success, spawning four
sequels and a whole library of simulation-based spinoff games, many of which attempted to
“teach” through gameplay. SimCity’s creators were aware of both their games’ potential to
teach specific design lessons, such as the value of mass transit [9], as well as the effectiveness
of using a sandbox-based pedagogy to do so. Ocean Quigley, the creative director of the
latest iteration of the franchise, wrote “I don't want to enforce sustainable design principles
5
in the game -- I want them to emerge as natural consequences of your interaction with the
simulation.” [10]
SimCity was not “just” a game – its original simulation engine was based on Forrester’s
theory of urban dynamics [11]. And intuitive teaching software for novices was not just the
vision of SimCity, but the entire urban dynamics project itself. Louis Alfeld, one of the
original researchers, wrote that urban dynamics’ deployment in a particular urban planning
case study presented
…the perfect argument for repackaging urban dynamics into
an interactive, desktop learning program. Such a program,
combining answer-oriented analysis with an easy-to-use
interface, could give urban officials a firm grasp of long-term
dynamic principles…. [12]
Importantly, the goal was not novice creation of ideal urban plans, but to instead provide
“…a tool for organizing information and clarifying logic, rather than a single conclusion to
be either accepted or rejected.” The urban dynamics meta-model has fallen short of its
creators’ goal to revolutionize urban planning, but this point remains salient: The function of
teaching software is to abbreviate expertise, not replace it.
Even so, the utility of a tool with an inaccurate calculation engine is questionable. As an
example, consider energy. Urban-scale operational energy modeling has graduated from
infancy into early youth; Swan and Ugursal [13] provide an overview of residential energy
consumption models, dividing them into “top-down” functions of macroeconomic factors
that predict the behavior of a black box residential energy sector and “bottom-up” systems
that extrapolate from the energy performance of individual buildings, and Reinhart and
Davila [14] further divide these “bottom-up” models into models based on statistical
extrapolation and models comprising aggregated simulation of individual buildings.1 Reinhart
and Davila argue that this last category, the so-called “urban building energy models”
(UBEMs), is best suited for predictions about new construction that differs substantially in
scale or type from existing building stock – the kind of construction that holds the most
hope for meeting today’s urban building needs.
Could the appeal of a game, like SimCity, could be married to the predictive utility of an
UBEM? The problem of speed immediately presents itself. A McMillen and Hwang survey
of definitions and models of interactivity found interface responsiveness a common theme
[15], and Crawford succinctly stated that a game design “…ideal is to have the computer
moving at a speed that doesn’t inhibit the user.” [16] Detailed energy performance
simulation of even a single building fails this test, and a whole neighborhood is out of the
question. A design game, then, cannot simply be a thin wrapper around, or a gamified
interface for, an UBEM. The UBEM must retreat to an advisory role, or if it is to remain
director, tolerate a good deal of delegation.
The remainder of this document introduces a new collaborative urban design game and
a case study in which it was deployed. Section 2 describes the case study setting and goals,
including the performance metrics selected for evaluation. Section 3 describes the game’s
interface, operation, and implementation. Section 4 describes the results of gameplay during
the case study, and Section 5 discusses these results. Section 6 provides concluding remarks.
1
Reinhart and Davila also describe their categories as “top-down” and “bottom-up”, so
some models are “bottom-up” in one scheme and “top-down” in the other.
6
2 Case study
The city of Riyadh, Saudi Arabia, is expected by its planning commission, the Arriyadh
Development Agency (ADA), to double in population within the next decade. Within the
city, one neighborhood expected in particular to undergo substantial redevelopment and
densification is the Dhahira district. The area currently holds approximately 22,000 people,
but the ADA expects it to increase beyond 30,000. As the neighborhood is only one
kilometer square, the ADA’s projection would bring its population density above that of
Mumbai. The redevelopment challenges are substantial.
The case study presented here examined the effectiveness of using a design game to
assist ADA employees as hypothetical redevelopers of the Dhahira district. A one square
kilometer region of Riyadh encompassing Dhahira was selected as the study site for which
various redevelopment scenarios could be explored. The game players comprised 19 ADA
employees with varied backgrounds divided into four teams. Table 1 lists the participants’
training and positions within the ADA. Each team was presented an empty site replacing the
old Dhahira neighborhood and given 30 minutes to iteratively design and evaluate a new
neighborhood on the site in accordance with the game rules (given in Section 3.2). At the 30minute session’s conclusion, the team’s final design was saved and qualitatively evaluated by
the team itself.
Table 1: Gameplay groups
Group #
1
1
1
1
1
1
2
2
2
2
2
3
3
3
3
4
4
4
4
Background
Urban Planning/Design
Urban Planning
Urban Planning
Environmental Engineering
Systems Analysis
Industrial Engineering
Urban Planning
Urban Planning
Urban Planning
Architecture
Civil Engineering/Transportation Planning
Information Technology
Computer Science
Computer Science
Heritage Conservation
Urban Planning
Geographic Information Systems
Geographic Information Systems
Statistics
Position
Project Director
Project Manager
Engineer
Project Manager
Systems Analyst
Industrial Engineer
Urban Planner
Urban Planner
Urban Planner
Architect
Civil Engineer
Deputy General Manager of IT
Assistant IS Director
Database Administrator
Archaeology Researcher
Urban Designer
GIS Analyst
Information Technology
Statistician
Three performance metrics were considered: operational energy cost, neighborhood
walkability, and building daylighting availability. Section 3.1 discusses each in detail. During
gameplay, each metric’s overall neighborhood score was continually reported, as were blocklevel scores for one of the three metrics, which the players could switch at will. The
particular quantitative tradeoff intended to emerge during gameplay involved neighborhood
density, which would decrease operational energy costs and increase neighborhood
walkability at the expense of building daylight availability. Additionally, the game’s rules
would hopefully encourage, to some extent, aesthetic or other qualitative decisions. Section
3.2 elaborates on this.
7
3 The game
3.1
Metrics
3.2
Interface
The game’s objective was to help players develop a dense, resource-efficient, high-quality,
walkable neighborhood. Toward this end, three quantitative metrics were identified for
calculation: energy, walkability, and daylighting.
Buildings significantly contribute to cities’ energy needs, and despite recent attention to
embodied energy costs, operational energy analysis generally dominates discussions.
However, devising a metric useful for inter-building comparison is a challenge. One
commonly used energy metric is energy use intensity (EUI), which is defined as the ratio of a
building’s annual energy use to its net conditioned floor area, but while this metric is
particularly useful for comparing a building’s energy performance to that of another building
of the same type, its failure to account for building population diminishes its effectiveness
for urban analysis. For example, a larger single family building of high construction standard
may have a very low EUI compared to an apartment building because internal equipment
loads are spread across a larger area, but the efficiency per occupant is lower simply because
this building type tends to accommodate fewer residents. Normalization by population, not
area, therefore seems a better choice for a metric to promote urban density and efficiency.
Furthermore, the traditional expression in annual kWh is difficult for a layperson to grasp;
conversion to financial cost is far more intuitive for the game’s intended users. This has the
additional benefit of accounting for use-sensitive electricity costs. The final energy metric
used for the game was therefore cost in dollars per person.2
One benefit of urban density is that residents may walk or bike instead of relying on
non-human powered modes of transportation. This benefit is twofold: residents tend to be
healthier, and energy use related to transportation tends to be lower. A quantified walkability
score was therefore selected as the second performance metric for use in the game.
Daylight access is widely recognized as an indicator of the quality of a building and the
health and wellbeing of its occupants. The US Green building Council’s LEED green
building rating system as well as the Illuminating Engineering Society of North America
(IESNA) focus mostly on a space metric called daylight autonomy that describes the percentage
of the occupied time in a year when interior lighting levels due to daylight are above 300 lux.
A space is “daylit” if its daylight autonomy is above 50%, i.e. there is sufficient daylight at
least half of the year. The final daylighting metric for the game was spatial daylight autonomy
[17], which is the percentage of a building’s floor area that is daylit.
The game had two success criteria. The first was quantitative: Was improvement to
players’ designs correlated with their use of the game’s metric reports? Specifically, when
players selected one of the three metrics for block-level evaluation, did their neighborhood’s
performance with respect to that metric improve over time? The second goal was qualitative:
Did players demonstrate either an increased understanding of or enthusiasm for
collaborative design?
The game’s physical interface was a plexiglass table with a grid upon which players could
build their neighborhood by placing pre-assembled, white Lego blocks. Figure 1 shows an
example of each type of block the users had access to. A projector mounted above the table
projected a colored heatmap onto it, coloring each block according to a calculated
2
This was apparently preferable to cost in Saudi riyals per person.
8
performance score. The colors ranged from red for “bad” blocks, through yellow, to green
for “good” blocks; what defined “good” or “bad” depended on the active metric. Figure 2
shows a diagram of the table, with camera and projector connected to a computer
performing calculations and coordinating the system’s activity. There were additionally two
monitors displaying site-level performance scores. One showed the historical evolution of
the group’s design and the other showed a rendering of the current neighborhood design,
colorized in the same way as the projected heatmap, along with the current neighborhood
score for all three metrics. Figure 3 shows a photograph of the operational table. When a
block was removed, added, or updated, the colors were automatically updated as well.
Figure 1: The block types available for gameplay. In the upper left is a “park” block, and each of the other
fifteen blocks represents residential, commercial, or mixed-used development. Park blocks, never needing
colorization (because they had no buildings to score performance for), were built green.
Initially, the projected heatmap was disabled, and players were instructed to build up
their neighborhoods without consideration of its performance. Once a population target was
reached, the heatmap was enabled, and the players were instructed to begin optimizing their
neighborhoods with respect to any of the metric(s) they chose. Instruction to design first
without metrics was motivated by concern that players would either be initially overwhelmed
or overly focused on optimization at the expense of design. In practice, players generally
ignored the population targets, building their initial neighborhoods an eye toward aesthetics
or some other qualitative characteristic, so game facilitators subjectively determined when to
enable the heatmap and switch the groups into “optimization mode” based on time elapsed
and observed group activity.
9
Figure 2: A diagram of the interaction between
the table, the camera, the projector, and the
coordinating computer. The camera reads color
codes on the bases of the placed blocks and
sends the layout to the computer, which
performs score calculations and generates a
heatmap to be projected back onto the tops of
the blocks.
3.3
This system is an example of a tangible
user interface (TUI) [18]. The paradigmatic
feature of such an interface is its marriage of
representation and control: The Lego blocks
simultaneously allow input to the system and
report the system’s outputs. This contrasts
with, say, a traditional personal computer,
with its input interface (a mouse and
keyboard) physically distinct from its output
interface (its monitor and speakers). The
described design falls into the subcategory of
TUIs that operate within three-dimensional
space, taxonomized by Grossman and Wigdor
[19]. Its particular classification is “tabletop
spatially augmented reality,” so defined by
interfaces that make use of physical objects (as
opposed to purely virtual ones injected into
the user’s view via, for example, a special
viewing apparatus) [20]. Tabletop spatially
augmented reality for use in building and
neighborhood design and analysis has
precedent [21]–[26], but the consideration of
energy costs (in thermodynamic or financial
terms) remains relatively unexplored within
the subfield.
System implementation
The game’s software comprised four distinct programs running on the same computer;
Figure 4 shows a diagram of the relationships between them. The programs had the
following names and responsibilities:
 Colortizer: Scanned and detected changes to the game board configuration
 Simulator: Calculated metric scores for the game blocks
 Legotizer: Presented the current design’s scores to the users
 Viewer: Presented historical scores to the user
The Colortizer, responsible for game board scanning, was developed by the Changing
Places group and implemented in the Processing programming environment [27]. Each game
block had a color pattern on its base indicating its type (see Section 3.4), and a webcam
mounted under the table continuously scanned the bottom of the game board, allowing the
Colortizer to read the current neighborhood layout. Figure 5 shows the color patterns for
each block type. When a change was detected 3 , the Colortizer sent the new board
configuration to the Simulator via a TCP connection.
3
A new configuration had to remain constant for two consecutive scans to count as an
actual update, in order to filter out transient configurations generated during actual block
reorganization.
10
Figure 3: The physical game setup, with one display monitor shown. The block coloration on the table is
caused by the projection of a heatmap from a projector at the top of the post to the right. The active metric in
this picture is neighborhood walkability.
11
Hardware
Legotizer
Presentation of current
design’s results
UDP
Websocket
Viewer
Presentation of historical
results
TCP
Colortizer
Game board scanning and
interpretation
Hardware
Simulator
Score calculation
Projector
Hardware
Monitor
Hardware
Monitor
Camera
Figure 4: The relationships between the various hardware and software components of the game system. The
four central, named components are individual software programs, and the other components are hardware
they interact with.
The Simulator, designed at the SDL, was a custom plugin for the Rhinoceros 3D
modeler [28], written in C#. Upon receiving a game board update from the Colortizer, it
generated scores for the current board configuration, which included scores for each
individual block as well as neighborhood-level aggregate scores. This process is elaborated
upon in Section 3.5. Block and neighborhood scores were defined to vary between 0 and 1,
and were mapped to a color on a gradient that went from red to yellow to green.
The results, once generated, were sent via a continuously-polled file to the Legotizer.
Like the Colortizer, the Legotizer was created by the Changing Places group and
implemented in Processing. The Legotizer used block scores to generate and project the
neighborhood heatmap onto the physical game board. Additionally, the Legotizer used block
scores, together with geometry transmitted from the Colortizer via UDP, to render and
colorize the current neighborhood on a monitor seen by the users. On this monitor it also
displayed the neighborhood-level scores generated by the Simulator.
The fourth software component, the Viewer, was triggered not by table block
manipulation, but directly by user interaction with a keyboard connected to the host
computer. The viewer was implemented as a JavaScript web application, also at the SDL.
Whenever they wished, users could press a button to “save” their current game state, which
would append the current three neighborhood-level scores to time series shown on a display
screen. Scores were sent from the Simulator via a Websocket connection. This keyboard
12
interaction was also how users switched the currently active (projected) metric: Separate
button presses were defined for energy, walkability, and daylighting modes.4
Figure 5: The color codes for each block type. Using these codes, the table scanning software could identify
the type and orientation of each block placed on the game board.
3.4
Blocks
There were sixteen block types available, each representing an 80-meter by 80-meter urban
area. One type represented park space, and was used only in walkability calculations. Each
remaining type contained one to five buildings with the following characteristics predefined:
 Geometry within block
 Use (residential or commercial)
 Person count (residents for residential buildings, workers for commercial buildings)
 Physical locations of walking destinations (e.g. shops) within block
 Window-to-wall ratio
 Construction materials and assemblies
 Internal energy gains (equipment, lighting, occupancy)
 HVAC equipment and control regime
The “buildings” defined for the blocks were not always physically distinct buildings; for
example, mixed-used structures were defined as distinct yet physically adjacent commercial
and residential buildings. Each block was given a code describing its use: “P” blocks were
4
Keyboard inputs were actually issued by game facilitators at the request of game players, in
order to allow players to focus on learning the game board interface.
13
parks, “C” blocks were commercial blocks, “R” blocks were residential blocks, and “M”
blocks were mixed-used blocks. (An additional code type “ST” indicated blocks intended for
placement along major thoroughfares. ST1 and ST2 blocks were commercial-only, and ST3
blocks were mixed-used.)
Building assembly information is given in Appendix A, and full block and building
definitions are provided in Appendix B. The only variable users could control was block
orientation and placement with respect to neighbor blocks. This substantially simplified the
user interface and enabled fast simulation approximation (described in Section 3.5.1), but
users expressed frustration with the consequent constraints on their designs’ potential
performance. This is discussed further in Section 5.3.
3.5
Metric calculation and scoring
3.5.1 Operational energy
The first game metric was operational energy consumption. As described in Section 3.1,
operational energy was quantified as annual cost per person. Calculation of a block’s cost
proceeded as follows: For each building on the block, its annual operational energy was
calculated, multiplied by the appropriate price ($0.15/kWh for commercial buildings,
$0.08/kWh for residential buildings), and summed. This block energy sum was then divided
by the block’s total population.
Sufficiently fast operational energy calculations were therefore paramount. But how fast
was sufficient? Consider the case of a single changed block. Even assuming that only a
block’s neighbors affected its operational energy consumption, as many as nine blocks might
need recalculation, which could entail recalculation for up to 45 buildings. Even if each
building calculation took only one second, and the computation could be fully parallelized
across eight computer cores, the total calculation time of six seconds would be pushing the
limits of non-frustrating interactivity even before table scanning, heatmap generation and
projection, or any other metric calculations.
Detailed simulation, such as with EnergyPlus [29], was therefore implausible. Simulation
using Dogan and Reinhart’s Shoeboxer algorithm [30], which produces simplified
EnergyPlus models that run faster while retaining most simulation accuracy, still
experimentally proved too slow. While it is maybe possible with ruthless model
simplification to achieve a building energy simulation running time measured in milliseconds,
such simplifications defeat the purpose of using a true simulation engine in the first place.
An alternative approach might use back-of-the-envelope formulaic estimations, but some
simplifying features of Riyadh’s climate suggested a third strategy.
Recall that the only variable users had control over was each building’s geometric
relationship with its neighbor blocks; this means that the goal was to determine a
relationship between urban geometry and each building’s operational energy consumption.
This consumption can be decomposed into cooling, heating, electric lighting, and plug
loads,5 and the relationship between urban geometry and total energy consumption can be
expressed in terms of the relationship between urban geometry and each of these four
categories. Could these four relationships be expressed simply?
Plug loads were easiest: they were unaffected by urban form. Each building type’s plug
loads were wholly determined beforehand, and no real-time calculation was necessary.
5
A fifth category is domestic hot water, which, absent solar hot water systems, is unaffected
by urban geometry and was not considered.
14
Heating loads were also quite simple: There were none. In colder climates, a dense
urban environment might obstruct sunlight that could warm a cold building, but in Riyadh,
internal gains alone are generally sufficient to keep a building at a comfortable temperature.
For heating, therefore, no real-time calculation is necessary either.
Electric lighting loads were simplified by assuming away dimming controls, making the
loads entirely unaffected by daylight availability and therefore urban form, so as with
equipment loads, no real-time calculation was necessary.
The interesting relationship was therefore between cooling loads and urban form. A
dense urban form obstructs sunlight, reducing solar gains and therefore cooling load, but
also amplifies the urban heat island effect, increasing cooling load. For the game’s
implementation, the urban heat island effect was ignored. This simplification is consistent
with current practice in early-design simulation.
In this simplified model, then, energy loads vary only with solar gains. Determination of
a building’s solar gains is still a nontrivial task, and so the further simplifying assumption was
made to proxy these gains by the building’s sky exposure, which was defined by the sky view
factor (SFV) [31] of the building’s centroid projected onto the ground plane (henceforth
referred to as the building’s sky view factor for brevity’s sake). Informally, a point’s SFV is
how much of the sky a point can “see” – it assumes a sky hemisphere of uniform radiation.
This calculation can quickly be performed by rtrace, the backwards raytracer in the
Radiance [32] software suite. (Invocation parameters are provided in Appendix C). Sky view
factor then needed to be mapped to energy consumption. Discovering an analytic
relationship between the two would allow for generalization to unknown geometry, but was
unnecessary with entirely known geometry. Instead, a simple lookup table was used. Each
building was placed within a variety of shading scenarios; for each one, the building’s energy
consumption was simulated and its sky view factor calculated. These data pairs populated the
lookup table; Figure 6 shows example generated data. During gameplay, the energy
consumption of building B on block K was estimated as follows:
1. The “relevant neighbor” buildings of B were identified. These included the buildings
on all blocks neighboring K and the buildings on K that might possibly occlude the
windows of B. (For example, buildings entirely below B were omitted.)
2. The geometries of these neighbor buildings were assembled into a scene mesh.
3. The centroid of B was identified and projected onto the ground plane.
4. The SVF of this point was determined using rtrace and the scene mesh created in
step 2.
5. The nearest two SVFs in the presimulation table were located, and the energy
consumption of B was calculated as the average of the two corresponding
precalculated consumption values. (If the calculated SVF was smaller than the
smallest presimulated entry or larger than the largest one, only that one presimulation
entry was used.)
Each presimulation table therefore defined a stepwise mapping; one building’s such
function is shown in Figure 7.
The shading scenarios for the lookup table were generated by placing the building on an
empty block surrounded by an open box of gradually increasing height. The box’s height
increased in 3-meter intervals from 1 meter to 73 meters (1 meter higher than the tallest
available building). Figure 8 shows a building surrounded by the box at various heights. The
actual presimulation was performed by the Shoeboxer algorithm instead of a detailed
EnergyPlus simulation, because post-gameplay comparison of approximated results with
15
“real” results would entail simulation of multiple buildings, and faster simulations would
expedite that process. See Section 5.2.1.
Presimulated EUI - M4 Block, Large Commercial Building
170
168
EUI (kWh/m2)
166
164
162
160
158
156
0
10
20
30
40
50
60
70
80
90
100
Sky View Factor (%)
Figure 6: Presimulated EUI results for one of the building types. Blocks of type M4 contained five buildings;
two were commercial, and these were presimulated EUI values for the larger of the two. The range of EUI
values generated only varies by about 12 kWh/m2, which is fairly small. This was characteristic of all
presimulated building types; loads in Riyadh are not particularly sunlight-dependent. (The vertical axis shows
consumption as EUI; this value was converted to total energy use by multiplying by the building’s floor area.
EUI was more intuitive to work with and verify than total energy use during game development.)
The only computationally intensive step in this process was the calculation of the
building’s sky view factor, and the whole procedure was sufficiently fast for interaction: only
once during gameplay did sky view factor calculation (for all changed buildings combined)
take longer than four seconds, and fewer than a dozen times did it take longer than two.
Once a block’s normalized energy cost was calculated from its approximated building
loads, it was scored so that its projection color could be selected. An immediate question
presented itself: Should block performance be scored relative to only blocks of the same
type, or relative to all blocks?
Under the first scheme, a block would a score of 1, mapping to green, if it could
perform no better than any other block of its type, and a score of 0, mapping to red, if it
could perform no worse than any other block of its type. As the only variable users could
control was building insolation (indirectly, by placement of surrounding blocks), a block’s
score would therefore be solely determined by its neighbors’ heights, and the only difference
between block types would be the curves describing how their colors changed with these
heights. Users would receive immediate and obvious feedback about the effects of block
density and height; very tall, dense neighborhoods would clearly be green, and very sparse
ones would clearly be red. The drawback, though, is that inter-block-type comparison would
essentially be impossible. This is counterintuitive; when a user sees a red block and a green
block, her impression will be that the green block is “better” than the red block in some
absolute sense, not that the green block outperforms other blocks of its type while the red
block does not. Furthermore, even if the user realizes this, she has no way to recover an
16
inter-block comparison. Since block selection decisions are supposed to be a critical part of
gameplay, and relative scoring eliminates the ability for users to make such decisions on the
basis of energy costs, it is undesirable.
Approximated EUI - M4 Block, Large Commercial Building
170
168
EUI (kWh/m2)
166
164
162
160
158
156
0
10
20
30
40
50
60
70
80
90
100
Sky View Factor (%)
Figure 7: Use of presimulation results to approximate EUI during gameplay. During gameplay, when the sky
view factor of a large commercial building on an M4 block was calculated, this function, generated from the
data in Figure 6, was used to calculate its approximate EUI.
Figure 8: Three shading scenarios during presimulation, shown for blocks of type C2, which contain a single
building in their center. Each scenario created a data point in the presimulation table composed of the
building’s sky view factor and its energy performance. The shading object took heights ranging from 1 meter to
73 meters; shown here are shades of height 4 meters, 31 meters, and 61 meters. This process was repeated for
each building on each block type.
17
Therefore, energy was scored absolutely. An overall minimum and maximum annual
energy cost per person were selected, and a block’s score was its position between these
extremes. A block performing at the minimum received a perfect score of 1 and a green
color, while a block performing at the maximum received a score of 0 and a red color.
Appropriate overall extrema were determined via examination of each block’s extrema
generated during presimulation. Simply using the extrema of the entire presimulation data set
was unsuitable, as is illustrated in Figure 9, because poorly performing blocks of type R1
distort the color mapping. The blocks have essentially been divided three categories: R1,
ST1, and everything else. This is not productive gameplay feedback. Figure 10 shows what
happens when the cost maximum is reduced and blocks of type R1 are simply always
assigned a score of 0. The situation is improved, but can be improved even more by
adopting the extrema shown in Figure 11 that additionally exclude blocks of type ST1 and
C3. These were the extrema ultimately used.
In this scheme, no block could exhibit the full color range, which is appropriate given
each block’s narrow range of possible absolute energy costs. Table 2 lists the block energy
variance results for each block type; the maximum variance can be as little as 7%. In these
cases, substantial color variation would overstate the impact of users’ design choices.
3.5.2 Walkability
The second metric in the game was neighborhood walkability. A building’s walkability score
increased with its proximity to amenity (e.g. grocery stores and restaurants) capacity and its
proximity to a park. Methodologically, this calculation was derived from the freely-available
2011 version of the Walk Score [33] algorithm. Walkability was not calculated for
commercial buildings. The score for each residential building was calculated as follows:
1. The projection of the building’s centroid to the ground plane was calculated. This
was the departure point for all walking trips.6
2. The closest 24 commercial amenities (using the amenity destinations defined for
commercial buildings as described in Section 3.4) were located. If there were fewer
than 24 amenities on the game board, the rest were placed at infinity.
3. The Manhattan (orthogonal) distance from the departure point to each destination
was calculated.
4. Each calculated distance was scored using a particular fifth-degree polynomial
function that decayed from 1 at zero meters to 0 at 250 meters or greater.7 Figure 12
shows this function and its exact coefficients are provided in Appendix D.
5. These scores were averaged.
6. The Manhattan distance from the departure point to the nearest park block was
calculated 8 , again assuming a park at infinity if necessary, and scored using the
function in step 4.
7. The final walkability score was calculated by adding 70% of the score from step 5 to
30% of the score from step 6.
6
This is the same point used for SVF calculation; it was not actually re-calculated.
The function actually yields a score slightly above 1 for trips of less than 42 meters.
Mathematically, this allowed for aggregate walkability scores above 1, but block
configurations that would generate the number of sufficiently short trips required for this
were geometrically impossible.
8
It was actually calculated to the nearest park block corner.
7
18
Figure 9: A block color spectrum scheme spanning all possible approximation results. The top bar shows the
entire absolute color range, and each bar below it indicates where on this spectrum a particular block type can
fall. The extreme costs of R1 blocks distort the spectrum; almost every other block type is a similar shade of
green, and over a third of the color spectrum is left completely unused.
Figure 10: A block color spectrum scheme excluding R1 blocks (always coloring them red). The other block
types are easier to distinguish, although there is still unused spectrum, and most blocks still generally remain a
single color.
Figure 11: The final block color spectrum chosen for the game. Blocks of type C3, ST1, and R1 always receive
a minimum score. The rest of the blocks demonstrate more variation, although each particular block type’s
color is still fairly constrained.
19
Table 2: Energy cost ranges for each block type
Block
Type
R1
ST1
C3
C1
R2
M1
C2
M5
M4
R3
M2
ST2
M3
M6
ST3
Min Cost
($/person)
1113
738
661
508
490
492
507
477
475
434
410
374
378
342
327
Max Cost
($/person)
1315
809
723
565
564
564
548
525
519
492
460
415
410
366
356
Difference
($/person)
202
71
62
57
74
72
42
48
44
58
50
41
32
23
29
% Cost
Increase
18
10
9
11
15
15
8
10
9
13
12
11
8
7
9
Min
Score
0
0
0
0.04
0.05
0.05
0.11
0.21
0.23
0.34
0.47
0.65
0.67
0.85
0.89
Max
Score
0
0
0
0.28
0.35
0.34
0.28
0.40
0.41
0.57
0.67
0.82
0.80
0.95
1.00
Score
Change
0
0
0
0.23
0.30
0.29
0.17
0.19
0.18
0.23
0.20
0.17
0.13
0.09
0.11
This method made three simplifying assumptions. The first was that the only factor
affecting trip desirability was its length; aesthetic quality, route comfort, and trip safety were
ignored. The second was that the pedestrian travel network enabled orthogonal travel
between any two points on the game board, but forbade diagonal travel. The third was that
building egress happened from a building’s footprint centroid rather than its perimeter.
None of these assumptions necessarily inflates or deflates scores; the error they each induce
is a function of further specificity that this method fails to provide.
A block’s walkability score was calculated as an area-weighted average of its building’s
scores. As building walkability scores already fell within the zero-to-one range, no scaling
was necessary.
3.5.3 Daylighting potential
The final game metric was daylighting potential. As mentioned in Section 3.1, a building’s
daylighting score was its spatial daylight autonomy, which was the percentage of the
building’s work planes – surfaces 80 cm above the building’s floors – receiving at least 300
lux at least 50% of the hours between 8 AM to 6 PM on weekdays.9 Use of blinds was
simulated by reducing workplane illuminance by 50% when façade illuminance was greater
than 20,000 lux. However, game players were not explicitly informed of this
accommodation, which may have led to their discounting this metric’s relevance (see Section
5.3).
Software selection for daylighting presimulation proceeded analogously to selection for
operational energy presimulation. Like EnergyPlus, a "full" simulation engine for daylight
autonomy exists in DAYSIM [34], but this software is too slow for interactive use. And
analogous to the Shoeboxer, a faster but approximate implementation exists in Urban
9
sDA can be parameterized by any occupancy schedule. The simulation assumed a MondayFriday workweek, even though the Saudi workweek is Sunday-Thursday. This does not
matter for simulation purposes.
20
Daylight [6], that, while too slow for interactive use, was selected as the presimulation engine
in order to expedite post-gameplay evaluation of presimulation accuracy. Invocation
parameters are given in Appendix E. During presimulation, sDA values were calculated
simultaneously with energy consumption, and another lookup table for each building was
generated. Figure 13 shows example presimulation results, and curves for each building can
be found in Appendix F. During gameplay, daylighting was estimated using the same
averaging approach used for energy consumption; Figure 14 shows an example lookup
curve.
Figure 12: Walking trip scores as a function of distance. Walking trip scores decay with distance and reach a
minimum score of zero at 250 meters. This maximum distance was chosen based on results of a survey given
to Saudi residents about their walking habits and preferences; 250 meters was generally identified as the length
of the longest acceptable walking trip. The precise mathematical definition of the function is given in Appendix
D.
Like walkability, a block’s daylighting score was the area-weighted average of its
building’s scores, and since scores already fell within the zero-to-one range, no scaling was
necessary. However, like operational energy, a building’s predetermined characteristics
restricted its possible daylighting scores; very deep buildings, for example, were necessarily
less well daylit than shallower buildings. Figure 15 shows each block’s potential color range.
Building sky view factor affects sDA substantially more than operational energy; in the
case of R2 blocks, for example, varying shade levels can induce as much as a 69% change.
Table 3 shows the variance data for all block types.
21
Presimulated sDA - M4 Block, Large Commercial Building
0.4
0.35
sDA300,50%
0.3
0.25
0.2
0.15
0.1
0.05
0
0
10
20
30
40
50
60
70
80
90
100
Sky View Factor (%)
Figure 13: Presimulated sDA values for one of the building types.
Approximated sDA - M4 Block, Large Commercial Building
0.4
0.35
sDA300,50%
0.3
0.25
0.2
0.15
0.1
0.05
0
0
10
20
30
40
50
60
70
80
90
100
Sky View Factor (%)
Figure 14: The function mapping calculated sky view factors to approximate sDA during gameplay for one of
the building types.
22
Figure 15: Possible color ranges for block daylighting scores. Individual block ranges are generally wider than
they were for energy, but some block types (e.g. ST2 and ST3) are still somewhat constrained, due to the depth
of the buildings on these blocks.
Table 3: sDA ranges for each block type
Block Type
C3
M4
M3
M2
M6
ST3
R3
M5
M1
C2
ST1
R1
C1
ST2
R2
Min sDA
0.2657498
0.20524781
0.12153078
0.118659
0.09566727
0.09230017
0.084583
0.0674855
0.056548
0.025993
0.0161115
0.00375
0.000128
0
0
Max sDA
0.8540001
0.64729062
0.48104389
0.53158467
0.43621564
0.37142533
0.564495
0.5392985
0.53277333
0.308947
0.3976925
0.745
0.289926
0.232054
0.6861985
Difference
0.5882503
0.44204281
0.35951311
0.41292567
0.34054836
0.27912517
0.479912
0.471813
0.47622533
0.282954
0.381581
0.74125
0.289798
0.232054
0.6861985
4 Gameplay results
4.1
Group 1
Group 1’s design session was unfortunately disrupted by jostling of the scanning camera
which remained undetected for large portions of gameplay. As a result, developing a
narrative for this group’s design from gameplay logs is impossible, although individual game
states will be useful for the evaluation of the game’s technical performance done later in
Section 5.2.
4.2
Group 2
Figure 16 shows the neighborhood scores over time of Group 2’s design. The graph’s
background colors show the block-level scores projected onto the game board at each point
23
in the design process; this coloration shows five distinct design periods during their overall
session.
Figure 17 shows Group 2’s design at the beginning of their first substantial design
period, during which they chose to project walkability scores onto the game blocks. The
neighborhood has three notable features. The first is the high-density, mixed-use cluster in
the southwestern corner of the game zone. While residents in this group have good access to
commercial services, there is no nearby park, and so their walkability scores suffer. The
second feature is the mixed-used corridor running east to west through the game zone. The
group’s use of ST3 blocks almost exclusively provides a good opportunity to see how the
walkability calculation operated: The worst-scoring blocks are at the ends of the corridor,
because they are adjacent to only one mixed-use block, instead of two, and the best-scoring
blocks are the closest to parks. The third neighborhood feature is the recurring motif of
parks surrounded by residential-only blocks; the walkability of these blocks suffer because of
their distance from commercial services.
Neighborhood Progress - Group 2
0.6
0.55
Score
0.5
0.45
0.4
0.35
0.3
Iteration
Energy
Walkability
Daylighting
Figure 16: The scores of Group 2 over time. This group divided its time the most evenly. Note that the graph
progresses through iterations, which do not map linearly to time due to the unequal lengths of pauses.
Group 2’s design after two minutes of walkability-focused design is shown in Figure 18.
Two high-density mixed-use blocks have been placed to connect the central corridor with
residential clusters in the northeast and northwest, and a new park has been placed with
adjacent mixed-used development towards the southeast. The neighborhood’s walkability
score has increased from 0.31 to 0.36.
Group 2 next turned to daylighting performance, spending again approximately two
minutes. Figure 19 shows their neighborhood at the start, and Figure 20 when they were
done. The two designs are not substantially different; the only notable change was the
reorganization of the high-density cluster to the southwest, which resulted in a negligible
overall improvement of neighborhood score of approximately 0.0016.
Next was four minutes of energy optimization; Figure 21 and Figure 22 show “before”
and “after” views, respectively. When they began, Group 2’s neighborhood cost $12.6
million per year for energy, and contained 11384 jobs and 14808 residents, for a cost per job
or resident per year of $460, corresponding to a 0.47 score. After optimization, their
neighborhood cost $14.1 million per year, and contained 13416 jobs and 18384 residents, for
24
a cost per job or resident per year of $444, corresponding to 0.53 score. This improvement
was effected by the continued development of the dense, mixed-used cluster to the
southwest, and the addition of new M2 blocks to the southwest (yellow squares in the
figure), sometimes as replacements for existing, worse-performing blocks.
Group 2’s fourth design period was again walkability-focused, and about two and a half
minutes long; Figure 23 shows their neighborhood at its beginning, and Figure 24 shows it at
its end. Notably, the overall walkability score at this point was 0.43; previous improvements
to energy performance had the additional effect of substantially improving walkability. Two
minor changes improved the neighborhood’s walkability score to 0.44 during the design
session: The addition of a mixed-use block to a residential-only cluster, and the replacement
of a poorly-performing residential block in the north with commercial block (for which
walkability was not calculated).
Group 2 returned to daylighting performance for their final design period. Figure 25
shows their neighborhood at its beginning, and Figure 26 shows their final design. This final
design period was approximately two and a half minutes long. The substantial change was
the rotation of several blocks in the central corridor 180 degrees; this had the effect of
casting their shadow on the low-rise blocks on the opposite side of the corridor. However,
while this had a noticeable effect on those particular blocks, the overall neighborhood
daylighting score increased only by an inconsequential 0.005.
4.3
Group 3
4.4
Group 4
Figure 27 shows the neighborhood scores of Group 3’s design over time. Unlike the other
groups, Group 3 chose to focus exclusively on neighborhood walkability during their
gameplay session. The apparent large drop in their design’s walkability score about a fifth of
the way through their iterations was caused by the elision of a gameplay period during which
the camera was miscalibrated, causing the generation of incorrect heatmaps. Figure 28 shows
their neighborhood design when the heatmap was first activated, and Figure 29 shows their
design at the conclusion of gameplay. Their walkability score had only improved from 0.34
to 0.35, although their site now contained many more jobs and residents.
Figure 30 shows the neighborhood scores of Group 4’s design throughout their gameplay
session. Their time was divided into a five-minute walkability optimization session, a six and
a half minute energy optimization session, and then a return to walkability for four minutes.
Figures 31 through 36 show neighborhood views at the beginning and end of each of these
periods.
25
Figure 17: Group 2’s neighborhood at the commencement of their first optimization period, during which
they focused on walkability analysis. Three features are salient: a high-density, mixed-use cluster in the
southwest, a mixed-use corridor running east to west, and several low-density residential clusters surrounding
parks. Most block walkability scores are poor, because the low-density residential blocks have no nearby
commercial services, and the high-density mixed-use cluster has no nearby park. This neighborhood received
an overall walkability score of 0.31.
Figure 18: Group 2’s design after spending time optimizing for walkability. The denser central corridor has
“grown out” towards the residential clusters in the northeast and northwest, and new mixed-use construction
has been placed around a new park placed toward the southeast. This neighborhood received a walkability
score of .36, an increase of 0.05.
26
Figure 19: The daylighting performance of Group 2’s neighborhood when they first switched to that metric’s
projection. The neighborhood score for this design was 0.42.
Figure 20: Block-level performance of Group 2’s neighborhood at the conclusion of their first daylightingfocused design period. The neighborhood score remained unchanged (after rounding) at 0.42.
27
Figure 21: The block-level performance of Group 2’s neighborhood at the beginning of their energy-focused
design period. This neighborhood costs $460 per job or resident per year, corresponding to an energy score of
0.47. Note how easy it is to distinguish individual block performance due to stark color differences.
Figure 22: The block-level performance of Group 2’s design after optimizing for energy. Notable changes
include the expanded development of the high-density cluster to the southwest (with continued high density)
and densification by placement of new M2 blocks (the yellow squares) to the southeast. This neighborhood
costs $443 per job or resident per year, corresponding to an energy score of 0.53.
28
Figure 23: Group 2’s neighborhood at the beginning of their second period focused on walkability. This
neighborhood has an overall score of 0.43; the energy optimizations the group previously made had the side
effect of substantially improving the neighborhood’s walkability.
Figure 24: Group 2’s neighborhood after two more minutes of walkability optimization. No substantial
changes have been made; some areas show slight changes in one direction or the other. The net effect was a
very modest increase in walkability score of 0.01 to 0.44.
29
Figure 25: Group 2’s neighborhood at the beginning of their final, daylighting-focused design period. The
neighborhood has an overall score of 0.39.
Figure 26: Group 2’s final design, with block-level daylighting scores shown. Several blocks in the central
corridor have been rotated around so that they shade blocks on the opposite side, and while this slight effect
can be seen on those specific blocks, the overall change in daylighting score was an increase of only 0.005.
30
Neighborhood Progress - Group 3
0.5
0.45
Score
0.4
0.35
0.3
0.25
0.2
Iteration
Energy
Walkability
Daylighting
Figure 27: The scores of Group 3 over time. While the only metric this group examined was walkability, their
design decisions were driven also by qualitative factors, and their neighborhood score consequently suffered.
Figure 28: Group 3’s neighborhood design when block-level score projections were first enabled. The lowdensity construction surrounding parks to the north of the site was intended to model the current actual
construction at the site, despite the players being instructed to ignore this. This neighborhood design has a
walkability score of 0.34.
31
Figure 29: Group 3’s neighborhood at the end of their gameplay session. Its walkability score had only
improved to 0.35, although it now held more than three times as many jobs and residents. Despite the
existence of a dense, mixed-use cluster of buildings to the south, much of the site is dedicated to low-density
housing with few nearby commercial services, negatively impacting the score.
Neighborhood Progress - Group 4
0.5
0.45
Score
0.4
0.35
0.3
0.25
0.2
Iteration
Energy
Walkability
Daylighting
Figure 30: The scores of Group 4 over time. Walkability was analyzed before and after energy was; daylighting
was not considered.
32
Figure 31: Group 4’s design when metric heatmapping was first enabled. The commercial corridors to the
south and west are built along existing major thoroughfares that border the game site; they surround a section
of residential-only construction, which had low walkability scores as a result of its lack of proximity to
commercial services. The walkability score of this neighborhood as a whole was 0.29.
Figure 32: Group 4’s design at the conclusion of their first walkability optimization session. The most
substantial change was small, but significant: The easternmost park has been moved to the center of the
residential-only district. While the walkability of the buildings at the original location suffered somewhat, the
walkability of the residences at the destination substantially improved. The additional addition of a mixed-used
block at the southwestern corner of the residential district and another on the northern edge improved the
walkability score of the neighborhood by 0.02 to 0.41.
33
Figure 33: Group 4’s neighborhood at the beginning of their energy optimization period. This neighborhood
costs $515/person/year for an aggregate energy score of 0.25.
Figure 34: Group 4’s neighborhood after optimizing for energy. The neighborhood cost has been reduced to
$487/person/year for a substantial increase in energy score to 0.36. This was essentially achieved by the
addition of four of well-performing M6 blocks to the southwestern district (dark green) and the elimination of
several poorly-performing C3 blocks (blocks with towers only) in same.
34
Figure 35: The walkability of Group 4’s neighborhood after energy optimization had been performed. This
district had an aggregate score of 0.32. As with Group 2, high-density, mixed-used blocks added to improve
neighborhood energy performance incidentally improved neighborhood walkability.
Figure 36: The walkability of Group 4’s final neighborhood design. The neighborhood score had been
improved by 0.06 to 0.38 through a variety of measures. New mixed-used development was placed in the
northwest, the three parks in the southwest were redistributed to better disperse their effect amongst buildings
in the south, and several blocks with poor walkability in the east and northeast were simply removed or
replaced with commercial-only buildings (for which walkability was not evaluated).
35
5 Discussion
5.1
Utility as design assistant
5.2
Approximation accuracy
Between the three analyzed groups, there were two periods of energy optimization, five
periods of walkability optimization, and two periods of daylighting optimization. For both of
the energy optimization periods and four of the five walkability optimization periods, the net
effect on the respective metric score was positive, but for the daylighting period and the last
walkability period, the net effect was zero or even negative.
The apparent ability of the energy and walkability design views to facilitate
improvement of their respective metrics, and the corresponding failure of the daylighting
design view to do the same, is plausibly explained by the game’s clearer performance
feedback in the first two views. For energy, the narrow range of colors each block can
receive means that players could learn which blocks tended to be green, and which to be red,
and replace the latter with the former. For walkability, the drastically positive effect parks
and density had on scores was easily discoverable, and so players easily learned how to
improve their designs.
But the game was not similarly responsive to daylighting changes. Unlike with energy,
block score ranges overlapped substantially, so “good” blocks were harder to identify. But
unlike with walkability, block scores were not clearly and obviously wholly dependent upon
surrounding blocks. Energy scores were sensitive to block type, and walkability scores were
sensitive to block neighbors, but daylighting scores were not particularly sensitive to either.
The failure of Group 3 to improve their neighborhood’s walkability score can be
explained by two observations. The first is that in a sense it did not fail: For most of their
design period, their score was, in fact, increasing. The second is more illuminating: Before
the game’s heatmap projection was enabled, Group 3, led by a historical preservation
architect, attempted to build a design mimicking the current site – a design with poor
walkability by the game metrics. This attention to a qualitative factor (specifically, continuity
with existing architecture) likely led to worse quantitative results.
All four groups included at least one low-density, residential only area in their design,
and attempted to improve this area’s performance without substantially densifying it during
their play period. This suggests that players approached the game as a way to perform guided
design, rather than an optimization problem to solve.
Several players asked during and after gameplay how they were supposed to coax all of
their blocks to a green color during energy and daylighting analysis. The answer that those
blocks underperformed due to simulation parameters out of player control was unsatisfying.
Additionally, no group used game’s facility for tracking their progress over time, preferring
instead to work entirely “in the moment.” The suggestions this implies for future versions of
this game design are discussed in Section 5.3. Regardless, all players demonstrated substantial
enthusiasm for gameplay, and every group designed right up to the very end of their
gameplay period.
Largely, performance scores increased during gameplay, but meaningful score improvement
requires meaningful scores. Since the fast metrics were implemented as approximations of
more detailed simulations, they can be compared to those simulations’ results.
For walkability, this approximation had no obvious, more sophisticated, progenitor.
While the omission of specifically defined pedestrian travel network was a clear omission,
the generation of such a network for a given block configuration is not a problem with a
36
formal solution. Therefore, no comparison between “approximated” and “detailed” results
will be performed for walkability.
For energy and daylighting, the approximations generated during gameplay using the
presimulated performance lookup curves and results of post-facto game board simulation
using the Shoeboxer (for energy) and Urban Daylight (for daylighting) can be compared.
Unfortunately, comparing approximated and simulated scores for every placed block at every
game state is computationally infeasible; data point selection must be more discriminate.
Comparison points were selected by identifying board states that remained unchanged
for at least five seconds – under the presumption that during these periods game players
were using board coloration to evaluate their design – and finding the blocks in those states
that had changed from the previous state. (These blocks were the blocks that had been
replaced or together with those blocks’ neighbors.) States that largely duplicated other board
states were ignored, and block types that were unused during these pause periods are not
represented in the comparative results. Such blocks were generally little used; their omission
characterizes gameplay overall.10
5.2.1 Energy
Post-facto simulation used the same parameters as presimulation for lookup tables.
Figure 37 compares approximated block energy scores to corresponding simulated scores.
Block types C1, M1, and ST2 are not represented due to underuse during actual gameplay.
Data points above the diagonal
line represent blocks for which the approximation
underestimated the correct block score (overestimating the block’s energy cost); data points
below the line represent the inverse.
Most blocks were correctly scored, but highlighted in the graph are three outlier
clusters, each corresponding to a particular block type. Significant errors in energy
approximation occurred with types R3, M2, and M4. The R3 errors are easiest to explain. R3
blocks consist of two long, parallel buildings; during presimulation, these buildings were
oriented such that their longer dimension ran from north to south. Of the examined R3 uses
during gameplay, only two of them shared this orientation; the rest were rotated so that the
long dimensions of the buildings ran from east to west. Approximation of the two
“properly” oriented blocks displayed minimal error, but it overestimated the energy
consumption of the rotated blocks. This makes sense; the rotation of the long facades would
have substantially changed the buildings’ solar gains.
The error for M2 and M4 blocks can be explored by examining the approximation error
for individual buildings on those blocks. Figure 38 plots the approximated cost versus the
simulated cost for each building on an M2 or M4 block. Most of the buildings are
approximated accurately, but two in particular demonstrate error: the building representing
the residential, upper floors of M2 blocks, and a high-rise residential tower structure on M4
blocks. Figure 39 shows the EUI approximation curve for the former, and Figure 40 shows
it for the latter. Both curves contain substantial sections with negative slope, an unexpected
phenomenon. Each graph also contains lines showing where the sky view factors that were
actually calculated during gameplay fell. In both cases, the sky view factors fell outside the
negative-slope section of the graphs, suggesting that the negative-slope section signaled a
larger problem with the curve, such as the Shoeboxer’s discretization of these buildings
being inappropriate for the approximation task at hand.
10
Blocks of type M3 were omitted from analysis due to a technical problem with their postfacto simulation.
37
Approximated Versus Simulated Energy Scores
1
0.9
0.8
Simulated Score
0.7
R3
0.6
0.5
M2
0.4
M4
0.3
0.2
0.1
0
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1
Approximated Score
Figure 37: Comparison of energy scores approximated during gameplay with scores of the same blocks fully
simulated afterwards. Data points above the diagonal line represent times that the approximation method
underestimated a block’s score (and thus overestimated its energy consumption), while data points below the
line represent the inverse. Discussion of the three block types highlighted with substantial error is given in the
text.
5.2.2 Daylighting
Analysis of daylighting approximation accuracy proceeded as it had with energy. (All data
points selected for comparison came from Group 2’s playthrough, as only that group
actually performed daylighting optimization.) Figure 41 plots the approximated daylighting
scores for buildings compared to their scores as simulated by Urban Daylighting postgameplay. Unlike with energy, almost every block type examined exhibits notable error,
although as with energy, each block type demonstrates a somewhat consistent error. This
consistency suggests that the errors stem from the approximation method’s failure to capture
geometric features specific to each block, rather than a fundamental incapacity of building
sky view factor to effectively proxy interior daylighting. This is unsurprising, given the
geometric sensitivity of the former domain; both energy and daylighting simulations need to
know how much sunlight enters a building, but only the daylighting simulation needs to
distribute it once inside.
This geometric sensitivity was likely exacerbated by the particular daylighting metric
parameters used for evaluation. The calculation of a building’s sDA proceeds by discretizing
a building’s floor area into patches and then determining how many of these patches are
sufficiently daylit. Since patches cannot receive “partial credit,” the precision of building’s
sDA value is entirely a function of the granularity of the patches. A building with large
patches will be more sensitive to changes in sunlight geometry, because any one patch
gaining or losing “daylit” status will have a proportionally greater effect on the building’s
sDA as a whole. The patch sizes used for these simulations, 2-meter by 2-meter squares,
38
were relatively large, so differences in the shape of the shade provided by the presimulation
shading box and that of neighbor buildings might have induced significant sDA differences
in simulated buildings, even if the two shading situations provided equal sky view factors.
Since block geometry never changed, each building would have had consistent distortionary
effects from other buildings on its block; these would explain the consistency in
approximation error.
Approximated versus Simulated Building Energy Costs
200000
180000
Simulated Cost ($/year)
160000
M2 Residential
140000
M4 Large
Towers
120000
100000
80000
60000
40000
20000
0
0
20000
40000
60000
80000
100000
120000
140000
160000
180000
200000
Approximated Cost ($/year)
Figure 38: Comparison of energy costs approximated during gameplay and simulated after, on an individual
building level. These buildings were located on M2 and M4 blocks. Data points above the diagonal line indicate
buildings for which the approximation procedure underestimated a building’s cost, and data points below it
represent the inverse. Once again, the approximations of most buildings were accurate, but two error clusters
jump out.
39
Approximated EUI - M2 Block, Residential Building
155
EUI (kWh/m2)
150
145
140
135
130
0
10
20
30
40
50
60
70
80
90
100
Sky View Factor (%)
Figure 39: The approximation curve for the residential building on blocks of type M2. Unlike most
approximation curves, this one contains a substantial section of negative slope. This phenomenon suggests a
problem with the generation of this curve, which likely explains the error exhibited in their approximation
during gameplay, despite the fact that sky view factors calculated for these buildings during actual gameplay
(shown as lines to the right) fall within the more “normal” section of the curve.
Approximated EUI - M4 Block, Residential High-Rise Towers
255
EUI (kWh/m2)
250
245
240
235
230
0
10
20
30
40
50
60
70
80
90
100
Sky View Factor (%)
Figure 40: The approximation curve for the residential high-rise structure on blocks of type M4. Unlike most
approximation curves, but like the one shown in Figure 39, there is a substantial section with negative slope.
Although the sky view factors actually calculated during gameplay (shown as lines to the right) again do not fall
within this range, the phenomenon suggests a problem with the generation of the curve, leading to the error it
induced.
40
5.3
Summary
This case study strongly suggested that games of this type can be helpful collaborative design
assistants. However, several specific lessons for future games made themselves clear.
Prevent unattainable coloration: Players demonstrated the most interest in walkability, which
was the most color-responsive game mode, and explicitly expressed frustration with the lack
of color-responsiveness of the other two game modes. Figure 11 and Figure 15 illustrate this
problem: Ideally, each block’s color bar spans the whole spectrum. Two solutions to this
problem present themselves. The first is to offer users more power over their design than
mere block placement, such as a way to alter window-to-wall ratio or some other simulation
parameter or set of parameters. The tradeoff is that this approach complicates the interface
and requires more knowledge from users. The second solution is to offer block variants,
such as a “small windows” and “large windows” version of each block. In this system, if a
block only allows for a small part of a color spectrum, another version of that block might
cover a different portion of the spectrum, and when the two versions are combined they use
the entire color space. In either case, the block library should be carefully tested beforehand
in order to ensure that all colors are not only available, but easily discoverable by novice
users.
Approximated Versus Simulated Daylighting Scores
0.7
0.6
C2
Simulated Score
0.5
ST3
R2
0.4
R3
M1
0.3
M2
M3
0.2
M4
M5
0.1
M6
0
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
Approximated Score
Figure 41: Comparison of energy scores approximated during gameplay with scores of the same blocks fully
simulated afterwards. Data points above the diagonal line represent times that the approximation method
underestimated a block’s available daylighting, while points below it represent the inverse. Approximation of
daylighting was not nearly as accurate as approximation of energy, although the errors were still largely
clustered by block type.
Avoid unused blocks: Players largely ignored some of the available blocks. In particular,
blocks of type ST1, ST2, C1, and M1 found little use. This can partially be explained by the
fact that these blocks are strictly worse performing than other blocks of their respective
41
categories, but not entirely, because this is also true of blocks of type R1, which found
substantial use in gameplay despite their abysmal performance. What is more likely is that
players found no role for the ignored block types; R1 blocks, despite their poor performance,
are clearly appropriate for quiet, low-density neighborhoods, and players valued this fact
even though those blocks’ performance was poor. Future games could take advantage of this
phenomenon by designing the block library as a set of roles, rather than formic variants with
incidentally varying performance characteristics. Furthermore, making these roles explicit to
players help them move more quickly from the “learning” phase of gameplay to the “design”
phase.
Carefully select context-appropriate metrics: Players were puzzled by the inclusion of
daylighting analysis, and they largely ignored this design mode during gameplay itself.
Daylighting is simply not culturally relevant for Saudi urban design. Selection of a more
contextually appropriate metric, such as traffic congestion or water consumption, would
have been better.
Use longer design periods: Each game group played up until the very end of its allocated
time. Thirty minutes therefore seems like too short a play period; 45 or 60 minutes might
better allow groups to realize a design vision.
Historical results are unnecessary: The players completely ignored functionality for
examining the historical progress of their design. This is likely due to short gameplay periods
that left no time for retrospection. However, longer periods would probably have simply
revealed a new problem with the historical display: The table itself has no way to revert to
previous designs. Players noticing that a previous design outperformed their current one
could not recreate that design without attempting to re-build it from memory. This is an
unproductive workflow. Future games should eliminate the historical record entirely, and
focus on improving the rest of the interface.
Investigate simulation methodology: The game presented here was fundamentally agnostic
with respect to the method used to calculate block scores, as long as it ran fast, but the used
presimulation approach for energy and daylighting performed well. However, the degree to
which its correctness relied on the particulars of Riyadh’s climate is unclear. Game
implementations for other regions should verify the method for their corresponding
climates, and if it is unsuitable, devise another strategy. Even if presimulation is used, it
could be used with simulation engines other than the Shoeboxer and Urban Daylighting, or
parameterized differently (such as with smaller daylight analysis patches, or the inclusion of
dimming controls in the energy simulation). Whatever method is chosen, it should be
verified with each particular block type before gameplay in order to minimize errors.
Physical apparatus robustness: The game table was susceptible to player bumping, which
would disrupt the camera’s calibration and require a manual intervention to recalibrate it.
The time spent performing the actual recalibration was minimal, but the time spent
designing on a miscalibrated table and the interruption to player focus necessary to
recalibrate it were bigger problems. Mechanically, the system should somehow be made
more robust. This particular implementation consisted of a plexiglass plate resting on top of
metal stands, and somehow adhering the plate to the stands would have helped substantially.
One important lesson learned was a positive one: that players adapted the game to their
specific, circumstantial needs. All four groups used the table to analyze the performance of
low-density block clusters. In each case, their poor performance was revealed, and players’
later designs included higher-performing, higher-density areas elsewhere in the design
neighborhood, but no group completely eliminated low-density blocks. This suggests that
the players, despite their attention to neighborhood performance, understood that they were
42
designing a neighborhood in which some low-density construction would be called for due
to aesthetic or political concerns, poor performance notwithstanding. The players accepted
the game’s recommendations without allowing it to dictate requirements.
6 Conclusion
The game presented in this document showed substantial promise as an interactive,
collaborative, urban design assistant for groups of stakeholders with varying expertise and
backgrounds; game players showed great enthusiasm for gameplay and were able to use the
tool to improve their designs over time with respect to multiple quantitative metrics, while
remaining cognizant of their own qualitative concerns. The described case study was
especially encouraging, as it took place at the planning authority of a rapidly urbanizing city,
which is the type of environment in which fast, evidence-based, large-scale design is
tremendously relevant.
In addition to demonstrating the game’s potential, the case study revealed clear paths
for improvement of future games. The most straightforward is validation of the
approximation method used to calculate energy and daylighting performance quickly in
climates other than Riyadh. If it is insufficient, it must be replaced with some other suitable
method that can still execute at interactive speeds. If it is sufficient, its implementation can
still be improved by different parameterization, or substitution of other underlying
simulation engines for approximation. Other important lessons learned from the case study
are that metrics must be carefully chosen to suit players’ design needs, or they will be
ignored, and that players must feel that the game is responsive to their choices, as their
enthusiasm for a game mode was directly related to how much power they had within it, and
a lack of this power frustrated them.
But these opportunities aside, this project strongly suggests that games of this type
could be substantial boons for the type of fast, collaborative, urban design and planning that
will be necessary to meet the needs of a rapidly urbanizing population in the coming years.
43
A Assembly definitions
The following non-glazing material definitions were used for building energy presimulation:
Name
Conductivity
(W/m-°K)
Density
(kg/m3)
Brick
Cement mortar
Reinforced concrete
Aerated concrete block
XPS board
Ceramic tile
Sand
14.493
50
8.85
0.725
0.034
55.56
10.64
1300
2085
2400
489
35
2284
2600
Specific
Heat
(J/kg-°K)
796
837
921
879
1400
796
800
Thermal
Emittance
Solar
Absorptance
Visible
Absorptance
0.9
0.9
0.9
0.9
0.9
0.9
0.9
0.7
0.7
0.7
0.7
0.7
0.7
0.7
0.7
0.7
0.7
0.7
0.7
0.7
0.7
These materials were arranged into construction definitions as described in the following
tables. Residential and commercial constructions were identical except for wall insulation
thickness, which was 0.03 meters for residential buildings and 0.01 meters for commercial
buildings.
Interior Floor Material
Ceramic tile
Cement mortar
Sand
Reinforced concrete
Cement mortar
Ground Floor Material
Reinforced concrete
Sand
Cement mortar
Ceramic tile
Partition Material
Cement Mortar
Brick
Cement Mortar
Thickness (m)
0.02
0.03
0.04
0.15
0.02
Thickness (m)
0.20
0.04
0.03
0.02
Thickness (m)
0.02
0.09
0.02
Wall Material
Brick
Cement mortar
XPS board
Aerated concrete block
Cement mortar
Roof material
Cement mortar
XPS board
Cement mortar
Reinforced concrete
Thickness (m)
0.09
0.02
0.01 or 0.03
0.20
0.02
Thickness (m)
0.03
0.06
0.03
0.15
All buildings used double-paned windows consisting of two six-millimeter thick panes of
glass surrounding twelve millimeters of air. The outside panes of windows in commercial
buildings were given a high-reflectance coating. The following table provides physical
characteristics of the glass:
Property
Conductivity (W/m-°K)
Density (kg/m3)
Solar transmittance
Front-side solar reflectance
Back-side solar reflectance
Visible transmittance
Front-side visible reflectance
Back-side visible reflectance
Infrared transmittance
Front-side infrared emissivity
Back-side infrared emissivity
Uncoated
1.05
2500
0.78
0.07
0.07
0.88
0.08
0.08
0.01
0.84
0.84
44
Coated
1.05
2500
0.15
0.22
0.37
0.2
0.25
0.32
0.01
0.84
0.84
B Building definitions
Energy simulation of residential buildings was executed using a window-to-wall ratio of 0.3,
a cooling coefficient of performance of 2.5, and an always-on HVAC system. Internal loads
varied with each building as follows:
Building
Residents
ST3
R1
R2
R3
M1
M2
M3
M4 (mid-rise)
M4 (high-rise)
M5
M6 (small tower)
M6 (large tower)
72
8
60
144
80
288
608
80
320
192
60
120
Occupancy Density
(persons/m2)
0.033
0.01
0.025
0.033
0.025
0.033
0.039
0.04
0.04
0.04
0.05
0.05
Equipment Density
(kWh/m2)
11.1
3.33
8.33
11.1
8.33
11.1
13.1
13.32
13.32
13.18
16.65
16.65
Lighting Density
(kWh/m2)
6.67
2.0
5.0
6.67
5.0
6.67
7.89
8.0
8.0
7.92
10.0
10.0
Energy simulation of commercial buildings was executed using a window-to-wall ratio of 0.7,
a cooling coefficient of performance of 2.5, and an HVAC system that was on during
weekdays and off during weekends. Internal loads varied with each building as follows:
Building
Jobs
ST1
ST2
ST3
C1
C2
C3 (mid-rise)
C3 (high-rise)
M1
M2
M3
M4 (ground floors)
M4 (with offices)
M5
M6
60
288
480
120
288
120
180
40
120
320
80
160
240
672
Occupancy Density
(persons/m2)
0.025
0.033
0.05
0.025
0.033
0.05
0.05
0.025
0.025
0.025
0.05
0.05
0.05
0.067
Equipment Density
(kWh/m2)
6.25
8.33
12.5
6.25
8.33
12.5
12.5
6.25
6.25
12.5
12.5
12.5
12.5
16.67
Lighting Density
(kWh/m2)
6.25
8.33
12.5
6.25
8.33
12.5
12.5
6.25
6.25
12.5
12.5
12.5
12.5
16.67
C Sky view factor raytrace parameters
Calculation of a building’s sky view factor during presimulation and gameplay was
performed using an invocation of rtrace with the following parameters:
-I -h -dp 2048 -ms 0.063 -ds .2 -dt .05 -dc .75 -dr 3 -st .01 -lr 12
-lw .0005 -ab 1 -ad 1000 -ar 300 -aa 0.0
45
D Walkability score decay
meters was assigned a score of 0.0 for
250 and
.
otherwise, where
and the polynomial coefficients
are given in the following table:
-1.692047329
A walking trip of length
through
5.565669441
-5.002268632
0.2276159257
0.02505831988
1.003818614
E Urban Daylight parameters
Urban Daylighting presimulations were executed with a target illuminance of 300 lux and an
illuminance reduction of 50% once façade illuminance reached 20,000 lux (in order to
simulate use of blinds). The meshing resolution for both building envelope patches and floor
patches was two meters. The window-to-wall ratio was set to 0.3 for commercial buildings
and 0.7 for residential buildings.11
F Building Presimulation Results
On the following pages are presimulation and approximation curves for the operational
energy consumption and spatial daylight autonomy of each available building type. Note that
the vertical axis ranges vary between graphs in order to better reveal the details of each
particular curve.
11
The opposite ratios were used for energy simulation, but energy and daylighting
simulations were never directly linked to each other.
46
ST1 BUILDING
Simulated EUI
Simulated sDA
136
0.45
134
0.4
0.35
0.3
130
sDA300,50%
EUI (kWh/m2)
132
128
126
0.2
0.15
124
0.1
122
0.05
120
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
136
0.45
134
0.4
100
0.35
132
0.3
130
sDA300,50%
EUI (kWh/m2)
0.25
128
126
0.25
0.2
0.15
124
0.1
122
0.05
120
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
47
80
100
ST2 BUILDING
Simulated EUI
Simulated sDA
0.25
84
83
0.2
82
80
sDA300,50%
EUI (kWh/m2)
81
79
78
0.15
0.1
77
0.05
76
75
74
0
0
20
40
60
80
0
100
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
100
0.25
84
83
0.2
82
80
sDA300,50%
EUI (kWh/m2)
81
79
78
0.15
0.1
77
0.05
76
75
74
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
48
80
100
ST3 COMMERCIAL LOWER FLOORS
Simulated EUI
Simulated sDA
0.25
103
102
0.2
101
99
sDA300,50%
EUI (kWh/m2)
100
98
97
0.15
0.1
96
0.05
95
94
93
0
0
20
40
60
80
0
100
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
100
0.25
103
102
0.2
101
99
sDA300,50%
EUI (kWh/m2)
100
98
97
0.15
0.1
96
0.05
95
94
93
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
49
80
100
ST3 RESIDENTIAL UPPER FLOORS
Simulated EUI
Simulated DLA
200
0.7
198
0.6
196
0.5
192
sDA300,50%
EUI (kWh/m2)
194
190
188
0.4
0.3
186
0.2
184
182
0.1
180
178
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
100
0.7
200
198
0.6
196
0.5
192
sDA300,50%
EUI (kWh/m2)
194
190
188
186
0.4
0.3
0.2
184
0.1
182
180
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
50
80
100
C1 BUILDING
Simulated EUI
Simulated sDA
96
0.35
0.3
94
0.25
sDA300,50%
EUI (kWh/m2)
92
90
88
0.15
0.1
86
0.05
84
0
0
20
40
60
80
0
100
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated DLA
96
100
0.35
0.3
94
0.25
sDA300,50%
92
EUI (kWh/m2)
0.2
90
88
0.2
0.15
0.1
86
0.05
84
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
51
80
100
C2 BUILDING
Simulated EUI
Simulated sDA
111
0.35
110
0.3
109
0.25
107
sDA300,50%
EUI (kWh/m2)
108
106
105
0.2
0.15
104
0.1
103
102
0.05
101
100
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
100
0.35
110
109
0.3
108
0.25
106
sDA300,50%
EUI (kWh/m2)
107
105
104
103
0.2
0.15
0.1
102
0.05
101
100
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
52
80
100
C3 MID-RISE TOWER
Simulated EUI
Simulated sDA
240
0.9
0.8
235
0.7
0.6
sDA300,50%
EUI (kWh/m2)
230
225
220
0.5
0.4
0.3
0.2
215
0.1
210
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
240
100
0.9
0.8
235
0.7
0.6
sDA300,50%
EUI (kWh/m2)
230
225
220
0.5
0.4
0.3
0.2
215
0.1
210
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
53
80
100
C3 HIGH-RISE TOWER
Simulated EUI
Simulated sDA
245
1
0.9
240
0.8
0.7
sDA300,50%
EUI (kWh/m2)
235
230
225
0.6
0.5
0.4
0.3
0.2
220
0.1
215
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
240
100
1
0.9
235
0.8
230
sDA300,50%
EUI (kWh/m2)
0.7
225
0.6
0.5
0.4
0.3
220
0.2
0.1
215
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
54
80
100
R1 BUILDING
Simulated EUI
Simulated sDA
0.8
165
0.7
160
0.6
0.5
sDA300,50%
EUI (kWh/m2)
155
150
0.4
0.3
145
0.2
140
0.1
135
0
0
20
40
60
80
0
100
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
100
0.8
165
0.7
160
0.6
0.5
sDA300,50%
EUI (kWh/m2)
155
150
0.4
0.3
145
0.2
140
0.1
135
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
55
80
100
R2 BUILDING
Simulated sDA
185
0.8
180
0.7
175
0.6
170
0.5
sDA300,50%
EUI (kWh/m2)
Simulated EUI
165
160
0.3
155
0.2
150
0.1
145
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
185
0.8
180
0.7
175
0.6
170
0.5
sDA300,50%
EUI (kWh/m2)
0.4
165
0.4
160
0.3
155
0.2
150
0.1
145
100
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
56
80
100
R3 BUILDING
Simulated EUI
Simulated sDA
0.25
190
185
0.2
sDA300,50%
EUI (kWh/m2)
180
175
0.1
170
0.05
165
160
0
0
20
40
60
80
0
100
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
190
100
0.7
0.6
185
0.5
sDA300,50%
180
EUI (kWh/m2)
0.15
175
170
0.4
0.3
0.2
165
0.1
160
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
57
80
100
M1 COMMERCIAL LOWER FLOORS
Simulated EUI
Simulated sDA
120
0.5
0.45
118
0.4
0.35
114
sDA300,50%
EUI (kWh/m2)
116
112
0.3
0.25
0.2
0.15
110
0.1
108
0.05
106
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
120
100
0.5
0.45
118
0.4
0.35
114
sDA300,50%
EUI (kWh/m2)
116
112
0.3
0.25
0.2
0.15
110
0.1
108
0.05
106
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
58
80
100
M1 RESIDENTIAL UPPER FLOORS
Simulated sDA
155
0.6
150
0.5
145
0.4
sDA300,50%
EUI (kWh/m2)
Simulated EUI
140
135
0.2
130
0.1
125
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
155
0.6
150
0.5
145
0.4
sDA300,50%
EUI (kWh/m2)
0.3
140
0.3
135
0.2
130
0.1
125
100
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
59
80
100
M2 COMMERCIAL LOWER FLOORS
Simulated EUI
Simulated sDA
99
0.45
98
0.4
97
0.35
0.3
95
sDA300,50%
EUI (kWh/m2)
96
94
93
0.2
0.15
92
0.1
91
0.05
90
89
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
99
0.45
98
0.4
97
100
0.35
96
0.3
95
sDA300,50%
EUI (kWh/m2)
0.25
94
93
0.25
0.2
0.15
92
0.1
91
0.05
90
89
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
60
80
100
M2 RESIDENTIAL UPPER FLOORS
Simulated EUI
Simulated sDA
155
0.7
0.6
150
145
sDA300,50%
EUI (kWh/m2)
0.5
140
0.4
0.3
0.2
135
0.1
130
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
155
100
0.7
0.6
150
145
sDA300,50%
EUI (kWh/m2)
0.5
140
0.4
0.3
0.2
135
0.1
130
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
61
80
100
M3 COMMERCIAL LOWER FLOORS
Total Energy Load
Simulated sDA
140
0.35
0.3
138
0.25
sDA300,50%
EUI (kWh/m2)
136
134
132
0.15
0.1
130
0.05
128
0
0
20
40
60
80
0
100
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
140
100
0.35
0.3
138
0.25
sDA300,50%
136
EUI (kWh/m2)
0.2
134
132
0.2
0.15
0.1
130
0.05
128
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
62
80
100
M3 RESIDENTIAL UPPER FLOORS
Simulated EUI
Simulated sDA
0.6
204
202
0.5
200
0.4
196
sDA300,50%
EUI (kWh/m2)
198
194
192
0.3
0.2
190
188
0.1
186
184
0
0
20
40
60
80
0
100
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
204
100
0.6
202
0.5
0.4
198
sDA300,50%
EUI (kWh/m2)
200
196
194
0.3
0.2
192
0.1
190
188
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
63
80
100
M4 COMMERCIAL BASE BELOW MID-RISE TOWERS
Simulated EUI
Simulated sDA
170
0.4
168
0.35
0.3
0.25
164
sDA300,50%
EUI (kWh/m2)
166
162
0.15
160
0.1
158
0.05
156
0
0
20
40
60
80
0
100
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
170
0.4
168
0.35
100
0.3
166
0.25
164
sDA300,50%
EUI (kWh/m2)
0.2
162
0.2
0.15
160
0.1
158
0.05
156
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
64
80
100
M4 COMMERCIAL BASE BELOW HIGH-RISE TOWERS
Simulated EUI
Simulated sDA
162
0.5
0.45
160
0.4
0.35
156
sDA300,50%
EUI (kWh/m2)
158
154
0.3
0.25
0.2
0.15
152
0.1
150
0.05
148
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
162
100
0.5
0.45
160
0.4
0.35
sDA300,50%
EUI (kWh/m2)
158
156
154
0.3
0.25
0.2
0.15
0.1
152
0.05
150
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
65
80
100
M4 RESIDENTIAL MID-RISE TOWER
Simulated EUI
Simulated sDA
290
0.9
0.8
285
0.7
0.6
275
sDA300,50%
EUI (kWh/m2)
280
270
0.5
0.4
0.3
265
0.2
260
0.1
255
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
290
100
0.9
0.8
285
0.7
0.6
275
sDA300,50%
EUI (kWh/m2)
280
270
0.5
0.4
0.3
265
0.2
260
0.1
255
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
66
80
100
M4 RESIDENTIAL HIGH-RISE TOWERS
Simulated EUI
Simulated sDA
255
0.8
0.7
250
0.5
245
sDA300,50%
EUI (kWh/m2)
0.6
240
0.4
0.3
0.2
235
0.1
230
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
255
100
0.8
0.7
250
0.5
245
sDA300,50%
EUI (kWh/m2)
0.6
240
0.4
0.3
0.2
235
0.1
230
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
67
80
100
M5 COMMERCIAL BUILDING
Simulated EUI
Simulated sDA
192
0.45
190
0.4
188
0.35
0.3
184
sDA300,50%
EUI (kWh/m2)
186
182
180
0.2
0.15
178
0.1
176
0.05
174
172
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
192
0.45
190
0.4
188
100
0.35
186
0.3
184
sDA300,50%
EUI (kWh/m2)
0.25
182
180
0.25
0.2
0.15
178
0.1
176
0.05
174
172
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
68
80
100
M5 RESIDENTIAL BUILDING
Simulated EUI
Simulated sDA
240
0.7
0.6
235
0.5
sDA300,50%
EUI (kWh/m2)
230
225
220
0.3
0.2
215
0.1
210
0
0
20
40
60
80
0
100
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
240
100
0.7
0.6
235
0.5
sDA300,50%
230
EUI (kWh/m2)
0.4
225
220
0.4
0.3
0.2
215
0.1
210
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
69
80
100
M6 COMMERCIAL BASE
Simulated EUI
Simulated sDA
0.3
153
152
0.25
151
0.2
149
sDA300,50%
EUI (kWh/m2)
150
148
147
0.15
0.1
146
145
0.05
144
143
0
0
20
40
60
80
0
100
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
100
0.3
153
152
0.25
151
0.2
149
sDA300,50%
EUI (kWh/m2)
150
148
147
0.15
0.1
146
145
0.05
144
143
0
0
20
40
60
80
0
100
Sky View Factor (%)
20
40
60
Sky View Factor (%)
70
80
100
M6 RESIDENTIAL SMALLER TOWER
Simulated EUI
Simulated sDA
315
1
0.9
310
0.8
0.7
300
sDA300,50%
EUI (kWh/m2)
305
295
0.6
0.5
0.4
0.3
290
0.2
285
0.1
280
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
315
100
1
0.9
310
0.8
0.7
300
sDA300,50%
EUI (kWh/m2)
305
295
0.6
0.5
0.4
0.3
290
0.2
285
0.1
280
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
71
80
100
M6 RESIDENTIAL LARGER TOWER
Simulated EUI
Simulated sDA
325
1
0.9
320
0.8
0.7
sDA300,50%
EUI (kWh/m2)
315
310
305
0.6
0.5
0.4
0.3
0.2
300
0.1
295
0
0
20
40
60
80
100
0
20
40
60
80
Sky View Factor (%)
Sky View Factor (%)
Approximated EUI
Approximated sDA
325
100
1
0.9
320
0.8
0.7
sDA300,50%
EUI (kWh/m2)
315
310
305
0.6
0.5
0.4
0.3
0.2
300
0.1
295
0
0
20
40
60
80
100
0
Sky View Factor (%)
20
40
60
Sky View Factor (%)
72
80
100
Bibliography
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
“World Urbanization Prospects: The 2014 Revision, Highlights,” United Nations,
Department of Economic and Social Affairs, Population Division, 2014.
C. Reinhart, T. Dogan, J. A. Jakubiec, T. Rakha, and A. Sang, “Umi-an urban
simulation environment for building energy use, daylighting and walkability,” in 13th
Conference of International Building Performance Simulation Association, Chambery, France, 2013.
A. Strzalka, J. Bogdahn, V. Coors, and U. Eicker, “3D city modeling for urban scale
heating energy demand forecasting,” HVACR Res., vol. 17, no. 4, pp. 526–539, 2011.
D. Robinson, F. Haldi, J. Kämpf, P. Leroux, D. Perez, A. Rasheed, and U. Wilke,
“CitySim: Comprehensive micro-simulation of resource flows for sustainable urban
planning,” in Eleventh International IBPSA Conference, 2009, pp. 1083–1090.
C. C. Davila and C. Reinhart, “URBAN ENERGY LIFECYCLE: AN ANALYTICAL
FRAMEWORK TO EVALUATE THE EMBODIED ENERGY USE OF URBAN
DEVELOPMENTS.”
T. Dogan, C. Reinhart, and P. Michalatos, “Urban daylight simulation calculating the
daylit area of urban designs,” Proc. SimBuild, 2012.
T. Rakha and C. Reinhart, “GENERATIVE URBAN MODELING: A DESIGN
WORK FLOW FOR WALKABILITY-OPTIMIZED CITIES,” Proc. SimBuild, 2012.
Maxis, SimCity. 1989.
R. RIVENBURG, “Only a Game? : Will your town thrive or perish? The fate of
millions is in your hands. Or so it seems. It’s your turn in SimCity.,” Los Angeles Times.
“SimCity 2013 Players Will Face Tough Choices on Energy and Environment.”
[Online]. Available: http://www.scientificamerican.com/article/simcity-2013-playersface-tough-energy-environment-choices/. [Accessed: 03-May-2015].
J. W. Forrester, “Urban dynamics,” 1969.
L. E. Alfeld, “Urban dynamics—the first fifty years,” Syst. Dyn. Rev., vol. 11, no. 3, pp.
199–217, 1995.
L. G. Swan and V. I. Ugursal, “Modeling of end-use energy consumption in the
residential sector: A review of modeling techniques,” Renew. Sustain. Energy Rev., vol. 13,
no. 8, pp. 1819–1835, Oct. 2009.
C. F. Reinhart and C. C. Davila, “Urban Building Energy Modeling – A Review of a
Nascent Field,” Build. Environ.
S. J. McMillan and J.-S. Hwang, “Measures of perceived interactivity: An exploration of
the role of direction of communication, user control, and time in shaping perceptions
of interactivity,” J. Advert., pp. 29–42, 2002.
C. Crawford, “Lessons from computer game design,” in The Art of human-computer
interface design, 1990, pp. 103–111.
I. IESNA, LM-83-12 IES Spatial Daylight Autonomy (sDA) and Annual Sunlight Exposure
(ASE). New York, NY, USA, IESNA Lighting Measurement, 2012.
B. Ullmer and H. Ishii, “Emerging frameworks for tangible user interfaces,” IBM Syst.
J., vol. 39, no. 3.4, pp. 915–931, 2000.
T. Grossman and D. Wigdor, “Going Deeper: a Taxonomy of 3D on the Tabletop,” in
Horizontal Interactive Human-Computer Systems, 2007. TABLETOP’07. Second Annual
IEEE International Workshop on, 2007, pp. 137–144.
R. Raskar, G. Welch, and H. Fuchs, “Spatially augmented reality,” in First IEEE
Workshop on Augmented Reality (IWAR’98), 1998, pp. 11–20.
73
[21] H. Seichter and M. A. Schnabel, “Digital and Tangible Sensation: An Augmented
Reality Urban Design Studio,” in Tenth International Conference on Computer Aided
Architectural Design Research in Asia, CAADRIA, New Delhi, India, 2005, pp. 193–202.
[22] B. Piper, C. Ratti, and H. Ishii, “Illuminating clay: a 3-D tangible interface for
landscape analysis,” in Proceedings of the SIGCHI conference on Human factors in computing
systems, 2002, pp. 355–362.
[23] C.-J. Huang, E. Yi-Luen Do, and D. Gross, “MouseHaus Table: a Physical Interface
for Urban Design,” in 16th Annual ACM Symposium on User Interface Software and
Technology (UIST), 2003.
[24] J. Underkoffler and H. Ishii, “Urp: a luminous-tangible workbench for urban planning
and design,” in Proceedings of the SIGCHI conference on Human Factors in Computing Systems,
New York, NY, USA, 1999, pp. 386–393.
[25] R. Raskar, G. Welch, and W.-C. Chen, “Table-top spatially-augmented realty: bringing
physical models to life with projected imagery,” in Augmented Reality, 1999.(IWAR’99)
Proceedings. 2nd IEEE and ACM International Workshop on, 1999, pp. 64–71.
[26] J. Halatsch, A. Kunze, and G. Schmitt, “Value Lab: A collaborative environment for
the planning of Future Cities,” in Proceedings of eCAADe, 2009, vol. 27.
[27] B. Fry and C. Reas, Processing. 2001.
[28] Rhinoceros 3D. Robert McNeel & Associates, 2013.
[29] U.S. Department of Energy, EnergyPlus. U.S. Department of Energy, 2013.
[30] T. Dogan and C. Reinhart, “Automated conversion of architectural massing models
into thermal ‘shoebox’models,” Proc. BS2013, 2013.
[31] R. G. Hopkinson, P. Petherbridge, and J. Longmore, Daylighting. Heinemann, 1966.
[32] G. Ward, Radiance. 1985.
[33] “Walk Score.” 2011.
[34] C. Reinhart, DAYSIM. 2001.
74
Download