Rochester Institute of Technology
B. Thomas Golisano College of
Computing and Information Sciences
Master of Science in Game Design and Development
Capstone Final Design & Development Approval Form
Student:
Luis Bobadilla
Student:
Bill Phillips
Student:
Sebastian Hernandez
Student:
Andrew Wilkinson
Student:
Rob Link
Student:
Jia Xu
Student:
Nitin Nandakumar
Project Title:
Into the Paws of Madness
Keywords:
3D Platformer
David I. Schwartz
Supervising Faculty, Capstone Committee Chair
Brian May
Supervising Faculty
Christopher Egert
Supervising Faculty
Chris Cascioli
Supervising Faculty
David Simkins
Supervising Faculty
Adrian Decker
Supervising Faculty
Jessica Bayliss
Supervising Faculty
Tona Henderson
Director, School of Interactive Games and Media
Into the Paws of Madness
By
Luis Bobadilla, Sebastian Hernandez, Rob Link, Nitin
Nandakumar, Bill Phillips, Andrew Wilkinson, Jia Xu
Project submitted in partial fulfillment of the requirements for the
degree of Master of Science in Game Design and Development
Rochester Institute of Technology
B. Thomas Golisano College of Computing and
Information Sciences
May 15, 2013
Acknowledgements
Thank you to all the artists that worked with us.
Thank you to RIT staff and students.
Corinne Dewey
David Schwartz
Jon Phan
Chris Cascioli
Jason Pries
Brian May
Michael Borba
Adrian Decker
Richard Borba
Chris Egert
Daniel Strauss
Andrew Phelps
Alex Berkowitz
Jessica Bayliss
Brian Van Horn
John Araujo
Caitlyn Orta
Dan Whiddon
Tory Mance
Leigh Raze
Zachary O’Neill
Ivy Ngo
Executive Summary
Into the Paws of Madness is a 3D platformer that explores the ideas of unpredictable and
combinatorial difficulty. As the game progresses, rifts in time and space open up around the area.
Each rift causes an effect in the world, like shrinking the player or covering the area in darkness. If
the player can reach a rift, they can collect it and disable the effect. Once the player has collected
four rifts, they can go back to the center and win. However, if time runs out before they can, then
Pomerazziag, the Elder God of Small Annoying Dogs, will awaken and destroy the world.
Our biggest challenges throughout the project were finding a balance between fun and
frustration with the rift effects and communicating how the game works to the player. We went
through several iterations of both the rift effects and the win condition to find something that people
could play and enjoy.
However, our original goal of making a game around unpredictable and combinatorial
difficulty survived. The random spawning of the rifts and the way that their effects stack with each
other create a number of interesting combinations and a slightly different experience for everyone
that plays the game.
Another success has been the workflow from our Unity prototype to our custom engine. In
addition to creating the engine from scratch, we developed a way to export information from the
Unity prototype directly into the XML format for the engine, turning Unity into our level editor.
This process significantly reduced our workload for both engine programming and level design.
Table of Contents
1  Introduction ............................................................................................................ 7
   1.1  References ........................................................................................................ 9
2  Game Development Production ........................................................................... 11
3  Game Design Summary ........................................................................................ 19
   3.1  The Evolution of the Game ............................................................................. 19
   3.2  Game Design ................................................................................................... 21
4  Technical Overview .............................................................................................. 25
   4.1  Introduction ..................................................................................................... 25
   4.2  Architecture Overview .................................................................................... 25
        4.2.1  Message Handling ................................................................................. 25
        4.2.2  Centralized, Dynamic Class Factory ..................................................... 26
        4.2.3  XML-Based Soft Architecture ............................................................... 26
        4.2.4  Three-Tier Architecture ......................................................................... 27
   4.3  The BTO Renderer .......................................................................................... 29
   4.4  Testing Methodology ...................................................................................... 32
   4.5  Results ............................................................................................................. 33
   4.6  References ....................................................................................................... 33
5  Asset Overview ..................................................................................................... 34
   5.1  Platforms ......................................................................................................... 34
   5.2  Details ............................................................................................................. 35
   5.3  UI .................................................................................................................... 35
   5.4  3D Characters ................................................................................................. 35
   5.5  Audio .............................................................................................................. 36
6  Research Topics .................................................................................................... 37
   6.1  3D Model File Reader Implementation — by Luis Bobadilla ........................ 37
   6.2  A Scalable, Data-Centric Multicore Game Loop Architecture — by Sebastian Hernandez ... 37
   6.3  Jumping Through Negative Space: A Reconstruction of Super Mario Galaxy’s Matter Splatter Mechanic — by Rob Link ... 38
   6.4  Simulation and Rendering of an Advanced Particle System Engine on the GPU — by Nitin Nandakumar ... 38
   6.5  Stepping Out of Your Skin: Roleplaying as Non-Humans — by Bill Phillips ... 38
   6.6  The Endless Game: Using Heuristics to Explore the Gameplay of Sandbox Games — by Andrew Wilkinson ... 39
   6.7  Holding on to the Edge: Achieving Dynamic Ledge Grabbing with Vector Math — by Jia Xu ... 39
7  Play Testing and Results ....................................................................................... 40
   7.1  The Method ..................................................................................................... 40
   7.2  Playtest One Results ....................................................................................... 41
   7.3  Playtest One Changes ..................................................................................... 42
   7.4  Playtest Two Results ....................................................................................... 43
   7.5  Imagine RIT Results ....................................................................................... 43
   7.6  Survey Figures (Survey Posted on 1/26/2013) ............................................... 44
   7.7  Informal Observations (Observations gathered from Imagine RIT on 5/6/2013) ... 62
        7.7.1  Gameplay Experience Successes .......................................................... 62
        7.7.2  Gameplay Experience Problems ........................................................... 62
8  Post Mortem .......................................................................................................... 64
   8.1  What Went Right ............................................................................................ 64
   8.2  What Went Wrong .......................................................................................... 66
   8.3  What We Learned ........................................................................................... 68
   8.4  Future Work .................................................................................................... 69
   8.5  Summary ......................................................................................................... 70
List of Figures
Figure 2.1 The Revaluation of Design, Week 1 of Winter ............................................................... 12
Figure 2.2 Proposed Schedule ........................................................................................................ 13
Figure 2.3 Scrum Process, from Wikipedia ..................................................................................... 14
Figure 2.4 Product Backlog ............................................................................................................ 14
Figure 2.5 www.pawsofmadness.com/design-document ................................................................. 15
Figure 2.6 Task Board .................................................................................................................... 16
Figure 3.1 The Tablet UI ................................................................................................................ 24
Figure 4.1 Layered Engine Architecture Graph for Lance ............................................................... 27
Figure 4.2 Three Tier Architecture ................................................................................................. 27
Figure 4.3 Service Provider/Consumer Relationship ....................................................................... 29
Figure 4.4 The BTO Renderer Class Diagram ................................................................. 31
Figure 7.1 Intro to Paws of Madness Survey................................................................................... 44
Figure 7.2 Page one of Paws of Madness Survey ............................................................................ 45
Figure 7.3 Page two of Paws of Madness Survey ............................................................................ 46
Figure 7.4 Page three of Paws of Madness Survey .......................................................................... 47
Figure 7.5 Page four of Paws of Madness Survey ........................................................................... 48
Figure 7.6 Page five of Paws of Madness Survey........................................................................... 49
Figure 7.7 Page six of Paws of Madness Survey ............................................................................ 50
Figure 7.8 Pictures associated with Page 5’s questions .................................................................. 50
Figure 7.9 New pictures associated with page 5’s questions........................................................... 51
Figure 7.10 Results 1 Figures (Results gathered on 3/2/2013) ......................................................... 57
Figure 7.11 Results 2 Figures (Results gathered on 4/26/2013) ....................................................... 61
1 Introduction
Into the Paws of Madness is a third-person platforming game that explores the idea of unpredictable
difficulty. It’s designed for repeated short play sessions (8-12 minutes), whereby the player tries to
beat their score on subsequent playthroughs. The player must navigate around an open world to
collect Chaos Rifts and cancel their effects before Pomerazziag, the Elder God of Small Annoying
Dogs, draws enough power from them to awaken.
This project started as Great Bingo in Busy City at a meeting almost a year ago. We were
trying to figure out our capstone partners, splitting into semi-random groups and using
www.videogamena.me to generate seed ideas to discuss, talking with each other about what roles we
would fill in the group. At the end of several rounds of this brainstorming, we decided to try a round
with all of us in one massive group. We got “Great Bingo in Busy City,” and a game idea quickly
emerged—a story of the largest bingo game of all time gone horribly wrong.
Even though we hadn’t intended to use that meeting to initiate a capstone project, Great
Bingo in Busy City stuck with everyone. The idea of unpredictable difficulty, with different effects
stacking upon each other to create unforeseen combinations and interesting situations, resonated with
the whole team. We went from joking about the game to seriously talking about it, and we were all pretty
much on-board from the beginning. Even as the game morphed into “Into the Paws of Madness,” the
entire team stuck together because of the core ideas of unpredictable and combinatorial difficulty.
Many games have elements of Into the Paws of Madness, but we could not find anything
exactly like it. In terms of 3D platforming, we mostly compared ourselves to Super Mario Galaxy
mixed with the jumping puzzles of Guild Wars 2 [1-2]. The levels of both games often have
disconnected floating platforms, which inspired our level design and aesthetics. We sought other
sources for level design inspiration in terms of floating platforms: Alice: Madness Returns, DMC:
Devil May Cry, and the animated movie Dragon Hunters [3-5]. Our character controller behaves
similarly to Guild Wars 2’s, with fairly straightforward jumps and control while airborne [2].
In terms of unpredictable and mounting difficulty, we have a number of games to compare
to. For mounting difficulty, we have the Flash game Tower of Heaven in which you have to ascend a
tower while dealing with an increasingly complex and sadistic set of rules [6]. The game even
eventually takes away your ability to see the list of rules currently in effect [6]. Despite the extreme
difficulty and unfair rules, fans enjoy the way the challenge progresses as the game adds extra rules. One of
Guild Wars 2’s jumping puzzles, Goemm’s Lab, also demonstrates mounting difficulty [2].
The puzzle adds new environmental effects incrementally—e.g., bolts of lightning, gusts of wind,
and areas of cold—ending with a section combining all three.
In terms of unpredictable difficulty, consider the You Monster DLC for Defense Grid [7]. In the
campaign missions, the rules constantly change as you progress [7]. The set of
available towers might change, or perhaps the AI will build towers that help the aliens instead of you
[7]. You lack any security, because your status can change in an instant.
There is one game that perfectly combines these two ideas, though it’s not a video game: the
card game Fluxx [8]. It has both unpredictable and mounting difficulty, but more importantly, the
player has some measure of control over both forces [8]. As a player, you can play these additional
rules yourself, adding new rules when you feel that they’ll help you [8]. In addition, if you don’t like
a particular rule, you can change or remove it. This design prevents the rules from overwhelming the
player while providing the player a great amount of power and agency. The rules will mount over
time, but you can mold the way that they change to your advantage. Of course, since other players
also jockey for control over the rules and everyone draws cards at random, you don’t have complete
control, which provides the unpredictable difficulty.
Fluxx’s gameplay matches everything we
wanted: a world whose rules constantly change, giving the illusion of player control. Some of them
you’ll learn to use in your favor, and some you’ll strive to remove as soon as possible because they
completely ruin your playstyle.
Although this combination of unpredictable and combinatorial difficulty exists in Fluxx and
other non-digital games, we could not find any digital games that matched our design goals. Yes,
Rogue-likes are the epitome of mounting and unpredictable difficulty, but their challenge builds over
hours of play.
We’ve built a game that presents interesting and unknowable combinations of
challenges within ten minutes. Each effect makes the game more difficult both on its own and when
layered on top of the other effects. In the rest of this document, we present:
• Chapter 2: The processes we went through to make this game.
• Chapter 3: How the game itself changed over time and what our final design was.
• Chapter 4: The technical details that power the project.
• Chapter 5: An overview of our assets.
• Chapter 6: The abstracts of our personal research projects.
• Chapter 7: The results of our playtesting.
• Chapter 8: How well the project fulfilled all of our goals.
1.1 References
[1] Super Mario Galaxy. Nintendo EAD Tokyo. 2007. Video Game.
[2] Guild Wars 2. ArenaNet. 2012. Video Game.
[3] Alice: Madness Returns. Spicy Horse. 2011. Video Game.
[4] DMC: Devil May Cry. Ninja Theory. 2013. Video Game.
[5] Dragon Hunters. Dir. Guillaume Ivernel and Arthur Qwak. Futurikon, 2008. Film.
[6] Tower of Heaven. AskiiSoft. 2010. Video Game.
[7] Defense Grid: You Monster. Hidden Path Entertainment. 2011. Video Game.
[8] Fluxx. Looney Labs. 1996. Card Game.
2 Game Development Production
At the end of the summer, Paws of Madness was born as its initial concept, “Great Bingo in Busy
City.” As a ten-person team, we began the first production run during the ten weeks that
make up Fall quarter. We used that time to get a head start on the project and to use the review
due at the end of the class as our first prototype. The team consisted of the Paws of Madness team as
it is now plus two undergraduates, Ivy Ngo and Leigh Raze, as well as another graduate student,
Dan Whiddon. We decided from the beginning to prototype the game in Unity for its
quick implementation times and its ability to port to almost any platform, which would allow us to easily find
playtesters. We broke into three teams:
• Engine: Sebastian, Nitin, Luis, and Andrew.
• Asset creation: Rob and Jia.
• Prototyping: Dan, Ivy, and Leigh.
• Producing: Bill (from outside the class).
During this period, we had no development process outside of check-ins during
class and kept no schedules. In the second week of production, another graduate student,
Johnny Araujo, advised us that the game idea didn’t make sense and should become more coherent.
Bill, Sebastian, Ivy, and Rob sat down with Johnny, at which point the true Paws of Madness was
born under the working title “Chaos Quest.” For the remaining eight weeks, everyone stayed focused
on their own tasks without much communication or collaboration.
At the beginning of Winter quarter, we made a big change in production style as a group.
We spent the first three weeks ensuring everyone knew the focus of the game. The notes of
our meetings can be seen in Figure 2.1.
Figure 2.1. The Revaluation of Design, Week 1 of Winter
The team shrank to the seven graduate students, and we held an important meeting to
shift responsibilities into new roles.
• Rob moved into Level Designer from 3D Artist.
• Jia moved into Gameplay Engineer from 2D Artist and UI Programmer.
• Andrew moved from Engine Programmer to Assistant Producer / Programmer so that he could create schedules, run meetings, and ensure we attempted to unify our direction.
• Bill kept the same position from Fall but accepted more responsibilities as Producer. He also began maintaining the Game Design Document and finding outside talent to create our assets, as our team now lacked dedicated artists.
• Sebastian, Luis, and Nitin maintained the same jobs working on the core engine, the physics system, the graphics system, and the sub-system interfaces—the interfaces for the core engine to use the physics, graphics, audio, etc.
Once everyone identified and chose their roles, we decided as a group to pursue a
development process loosely based on SCRUM. We created short fixed-period schedules called
Sprints and had twice-weekly standup meetings to discuss progress since the last meeting. We would
spend Mondays and Thursdays during class time to do the standup meetings and discuss goals. We
then took two steps towards ensuring the project was visible not only to ourselves, but to the faculty
interested in our project. The first step, and the only thing keeping our work visible to each other, was to
create the Sprint schedules. We chose to create them in Google Docs and then visualize them
as Gantt Charts by setting cell colors in a spreadsheet, as seen in Figure 2.2.
Figure 2.2. Proposed Schedule
The schedules themselves followed the SCRUM Model, as seen in Figure 2.3, in that we
chose sprint tasks from deconstructed product backlog features.
Figure 2.3 Scrum Process, from Wikipedia
However, the deconstruction of the product backlog was so fine-grained that the resulting
sprint tasks lost their connection to the backlog and stopped making sense. Ultimately, we dropped the product
backlog, although well constructed, in favor of making decisions on adding, removing, or editing
features as we implemented them. See Figure 2.4 for the product backlog.
Figure 2.4. Product Backlog
The second part of this unification process was to create a website, as seen in Figure 2.5, that
represented everything we were working on, again using Google’s tools. We had always planned to
keep our team’s design document, blogs, schedules, and current prototype visible and easily
accessible to the public so that we could accept advice from anyone. Once we posted the website, we
could run two playtests and receive data from a survey we also hosted on the website, which assisted us
in our game design direction.
Figure 2.5. www.pawsofmadness.com/design-document
During this quarter, we also worked with three external teams to create assets:
• 3D Environment team: Daniel Strauss, Jason Pries, Michael Borba, and Richard Borba.
• 3D Character team: Corinne Dewey and Jon Phan.
• Audio team: Tory Mance and Zachary O’Neill.
Immediately, we knew that assigning the coordination of all three asset teams to one person
would be too much, so we gave management of each group to a corresponding team member
while keeping Bill in the loop on all three teams. Rob led the Environment Artist team, Luis led the
Character Artist team, and Bill led the Audio team. But we couldn’t fold those external teams into
our twice-weekly meetings or get them to give us concrete schedules. This disconnect in our schedules
was not the only lack of structure in our development process at the time.
Internally we had always been two teams split between engine production and prototype
production. Beyond the schedules and twice-weekly meetings, we had set up too loose a structure for the
teams to communicate. Because of this communication difficulty, we had to make another big
production change for Spring quarter.
We decided we should strictly adhere to the SCRUM style
with daily meetings and reduce the amount of written scheduling since we were working on this
project full time. Once we claimed a lab for all the graduate capstones, we replaced the schedule
system with a physical task board, shown in Figure 2.6. It was essentially a giant piece of foamboard taped to the wall and covered in sticky notes. We each had our own column on the foamboard, and the five different colors of sticky notes denoted different types of tasks: prototyping,
porting, engine, assets, and administrative.
Figure 2.6. Task Board
This visual representation of tasks let us see at a glance who was most
swamped and encouraged people to divide large tasks and track progress. Although Sebastian kept a
personal Trello board for tracking engine tasks, we used the physical task board during SCRUM meetings
and class meetings and as the primary way to track remaining work. In addition to the task board,
having everyone working in the same room significantly boosted the speed, quality, and cohesion of
production.
The team remained committed to the same tasks with a few changes. Luis, who had been
working on the engine, finished the animation system. With this system completed, he could shift to
a 3D artist position, fixing all the art from the previous quarter and making new assets. We also
gained four more external artists:
• Brian Van Horn rigged and animated still models from the character artists in Winter.
• Alex Berkowitz created a few environment pieces and all the icons for the rifts in the UI.
• Caitlyn Orta created an introduction comic and was the only artist we commissioned.
• Ryan Wilkinson consulted for audio mixing levels and tweaks to the background music.
Rob, who had been implementing and cleaning the prototype, had yet another task before
beginning level design.
With only basic environment art, the level lacked detail meshes for
ambiance. Without these meshes, the level would feel bland and non-immersive. Ultimately, we
found eighty-two additional models from open sources, and thus, Rob switched over to level design.
In the final weeks, and before each of our deadlines (Game Developers Conference, RPI’s
Game Fest, Imagine RIT), we planned to make engine builds with content from the prototype. The
final development plan of Paws of Madness was to use Unity as our prototyping engine and level
editor. We would then use a custom-built exporter to convert the level layout into an XML file,
which the custom engine would read. Jia wrote the exporter and worked with Sebastian to
smooth the conversion process as much as possible, but we lacked a structure for this
communication. Nitin, Luis, and Andrew also needed to ensure there were no bugs in the graphics
renderer, animation system, or audio system, which up to this point had not been extensively tested
in the engine. In the final weeks, we dropped nearly all forms of SCRUM structure and task board
use. Instead, we relied solely on Rob and Sebastian to dictate which areas of the engine and game
still needed polish to finish. Sebastian finished working on the engine and moved to integrating and
making new UI assets, as well as new models for the rifts in the game.
The Post-Mortem section discusses the results of these various processes, how we arrived at
these decisions, and how we reacted to the changes.
3 Game Design Summary
3.1 The Evolution of the Game
As mentioned in previous sections, this project was originally Great Bingo in Busy City, the story of
the greatest bingo game ever played in a particularly busy city. Unfortunately for the city, this dense
collection of the elderly created an Old Person Singularity, which a massive amount of bingo
paraphernalia warped even further. A beam shot out of the Singularity and into the sun, turning it
into a giant bingo cage. Bingo balls started to fly out of the sun-cage, landing in the city and
inflicting random effects on the city. The player would hold the only bingo card that escaped the
Singularity. Adventuring around the city, the player would need to collect the bingo balls while
dealing with their effects. Once he had gotten a Bingo on his card, he could return to the center of
the city and cancel out the Singularity. If he failed to control the number of active bingo balls and too
many were active at once, the buildup of chaotic energy would cause reality to
fracture, ending the game.
This original idea was a top-down 2D game much like the original Grand Theft Auto and
focused far more on navigation and route planning. We had planned for 25 different effects, ranging
from effects that actually altered gameplay, like
• icy ground
• zombie crowds
to purely visual effects, like
• waves
• buildings turning into cheese.
We also had some interesting ideas for multiplayer, whereby players would race to complete a bingo
and return to the singularity first, scoring bonus points for any extra balls collected or Bingos
completed.
The design changed considerably during the Fall quarter. The game changed from top-down
2D to a 3D platformer. The 25 effects, a completely over-scoped idea, were reduced to five effects with
five levels of severity each. We also quickly scrapped the idea of multiplayer to further reduce
scope. We then made a radical change and completely overhauled the game’s setting. We scrapped
the stories of bingo magic and the modern setting in favor of something more fantasy-based. We
moved back in time to 1830s Prussia, in the province of Pomerania. The giant “shakeup event”
became the accidental summoning of Pomerazziag, the Elder God of Small Annoying Dogs. Instead
of a city, the game now took place in an Indiana Jones-esque set of ruins floating in lava. The
random effects of the bingo balls were replaced by rifts in time and space. The bingo card was
replaced by a tablet of Pomerazziag with a 3x3 grid on it, and instead of getting a bingo, the player
needed to get three-in-a-row on the grid. We worked on this version of the design during 3D
Graphics Programming throughout Fall quarter.
However, this version demonstrated a major problem: our rifts weren’t fun. During this
version of the game, we had Fast Speed, High Jump, Low Friction, Bubble Platforms, Ghosts, Fast
Traps, Low Gravity, Darkness, and Fire Bursts. Unfortunately, some of these effects combined to
create some of the worst 3D platforming ever conceived, particularly when Fast Speed and Low
Friction were in play simultaneously. This pairing sent the player straight into the lava over and
over again; the player could barely control the character. High Jump and Low Gravity became
boring very fast while you waited for your character to come back down. Bubble Platforms were
hard to design levels for without just cutting off areas when they were inactive. Most of our effects
were simply not fun, not on their own and certainly not when combined.
When the capstone itself started, we did some “soul-searching” and another re-design,
although a much more minor one this time. We designed much better rift effects and changed the level
setting again from a lava-filled ruin to “chunks” floating in the sky. The rifts were split among three
different sections of the map, three in each section, and we changed them to affect only their own section.
The biggest change arrived very late but was significantly for the better. Players generally found
the tablet, with its 3x3 grid and tic-tac-toe requirement for victory, to be arbitrary and/or cryptic. We
replaced it with a pair of status bars, one for the player and one for Pomerazziag. The player’s bar
fills by a certain amount every time the player collects a rift, while Pomerazziag’s bar increases over
time as the active rifts generate chaos points. Finally, it no longer matters in what order you collect
the rifts or which ones you collect—you just need to get four of the six rifts to win the game.
3.2 Game Design
Into the Paws of Madness (PoM) is a navigation-focused 3D platformer that explores the ideas of
unpredictable and combinatorial difficulty. The player, a monk who is unscathed by Pomerazziag’s
magic, has to navigate around the shattered shards of her former monastery. Pomerazziag lies
sleeping at the center, and the rest of the map splits into three sections themed after the outside, the
inside, and the dungeons of the monastery. The game starts with two chaos rifts active. A new pair
spawns every two and a half minutes, or immediately if no rifts are currently open, adding to the
number of active rifts. There are two rifts per section for a total of six effects. Unfortunately, we had
to cut three rifts from the final version to ensure that we could polish the remaining six. Having
fewer effects allowed us to make the effects global again, rather than restricting each to their section.
The rifts that survived scoping are as follows:
• Gusts of Wind: Areas of wind around the map activate and will blow the player in their direction. Some of the wind areas blow constantly, and others activate on a cycle.
• Small Player: The player model shrinks to 25% of its normal size. The player’s jump arc and speed reduce slightly, but the main challenge of this rift is the change in perspective. The player also takes double damage from all sources while shrunk.
• Poltergeist: Small clusters of household objects float up near the player and fling themselves at them, dealing damage on a hit.
• Meteors: Meteors fall periodically at set points across the map, heralded by a sound effect and a growing light. Being hit by a meteor doesn’t hurt the player. Instead, it sends them flying across the map. If the player excels at in-air control and knows the map well, they can use the meteors as a sort of fast-travel system.
• Darkness: A globe of darkness covers the area, severely limiting the player’s sight. The player emits light, as do torches placed around the map. The beams of light from the rifts stay visible regardless of distance.
• Battle Cultists: The cultists of Pomerazziag activate around the area, patrolling their areas and chasing after the player if they get too close. The cultists won’t leave their assigned areas, but they damage the player if they touch her.
We had to cut these three rifts: Fast Traps, Ghost Platforms, and Portals, described below:
• Ghost Platforms required a complicated shader that we didn’t have time to perfect.
• Portals needed extra level design time and significant experimentation to become engaging.
• Fast Traps proved to be boring.
Each rift projects a beam of colored light into the sky when active. These beams of light
provide the only form of explicit navigation cues for the player. While active, each rift generates a
chaos point every tick of the update loop. These chaos points are what fuel Pomerazziag. When the
chaos point pool reaches a certain threshold, Pomerazziag awakens, and the player loses.
Therefore, the more rifts that are in play, the less time the player has to complete the game. On the
other hand, every rift collected actively slows down Pomerazziag. With two active rifts at any time
during the game, it takes roughly ten minutes for the chaos points to fill.
However, every time the player reaches and touches one of the rifts, the rift closes
and the player’s energy bar fills by one quarter; both the rift’s effect and its production of chaos
points stop. If the player fills up their bar, a final rift spawns at the center of the
map, right in front of Pomerazziag. At that point, the player’s last task is to return to the
center of the map and touch the final rift to banish Pomerazziag.
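As a rough illustration of this pacing, the sketch below accumulates chaos points from open rifts and checks the win and lose conditions each frame. The class name, point values, and thresholds are assumptions chosen for the example, not the tuning used in the actual game.

```cpp
// Minimal sketch of the win/lose pacing described above (illustrative values only).
#include <vector>

struct Rift { bool active = false; bool collected = false; };

class ChaosTracker {
public:
    // Assumed tuning constants; the real game balances these differently.
    static constexpr float kChaosPerRiftPerSecond = 1.0f;
    static constexpr float kChaosToAwaken = 1200.0f;   // roughly ten minutes with two open rifts
    static constexpr int   kRiftsToWin = 4;

    void Update(float dt, const std::vector<Rift>& rifts) {
        for (const Rift& r : rifts) {
            if (r.active && !r.collected) {
                chaos_ += kChaosPerRiftPerSecond * dt;   // each open rift feeds Pomerazziag
            }
        }
    }

    void OnRiftCollected() { ++collected_; }   // fills the player's bar by one quarter
    void OnPlayerDeath()   { chaos_ += 30.0f; } // small death penalty (assumed value)

    bool PlayerWins() const { return collected_ >= kRiftsToWin; }
    bool PomerazziagAwakens() const { return chaos_ >= kChaosToAwaken; }

private:
    float chaos_ = 0.0f;
    int collected_ = 0;
};
```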
Other obstacles block the player. Mostly in the Shattered Acres area, which is the outdoors
section, a number of traps add adversity. The traps come in three fun and deadly varieties, all of
which damage the player:
• The Geyser of Steaming Ouchies: A crack in the ground that shoots fire on a timer.
• The Windmill of Death: Miniaturized windmills with sharp and deadly blades.
• The Arboreal Bombardment Cannon of Eggy Doom: A tree trunk that fires rotten eggs.
The player has five health units, represented by five hearts at the bottom of the screen.
Every hit from a damaging object—any of the traps, active cultists, poltergeist objects—will remove
one of the hearts and stun the player for two seconds. The player gets two seconds of invulnerability
after the stun ends. If the player loses all of their hearts, then they will black out and respawn at the
last checkpoint that they passed. The same thing will happen if the player falls off a platform.
Respawning takes very little time, and with many checkpoints placed around the map, death does not
greatly slow down gameplay. However, every time the player dies, a small number of chaos points
is immediately added to Pomerazziag’s total as a penalty. This penalty dissuades players from
exploiting the checkpoints by deliberately jumping off platforms to travel across the map.
The UI at the bottom of the screen, as seen in Figure 3.1, shows the current state of the
game. The UI contains the two progress bars for the player and Pomerazziag so that the player can
clearly see who’s farther ahead. Underneath the progress bars are the player’s hearts and a status bar
for the rifts. The six spots for the rifts begin as empty. As rifts spawn, their symbols fill in the
empty slots from left to right. The rift symbol matches the color of the beam of light that the rift
emits, and active rifts have a purple glow to denote that they add to the purple bar of Pomerazziag.
Once the player collects a rift, the symbol and the slot both turn gold and glow orange to
indicate that they are closed and giving power to the player. The health hearts are on the right side of
the UI. When the player takes damage, a heart turns to stone. If Small Player is active, the hearts
display as smaller than normal to help communicate the more drastic damage to the player.
Figure 3.1. The Tablet UI
4 Technical Overview
4.1 Introduction
Once the Game Design goals were established during the pre-production stages of the project, it
became apparent that Paws of Madness presented significant technical risks. With game mechanics
that explicitly relied on combinatorial explosion and complex, interdependent interactions, even
slight changes to one of the game mechanics had the potential to unravel large portions of the code
base, should these dependencies be hard-coded into the system itself.
Having faced similar challenges in previous Flash game development projects, Sebastian,
acting as the Paws of Madness technical lead, decided to design the game engine around a
component-based architecture. The Lance Engine’s principles had already been tried and tested in
these previous projects, which left only the challenge of translating the architecture to low-level C++
code. To accelerate the porting process, Sebastian implemented Lance with heavy reliance on third-party libraries, including Boost [1] and XNAMath [2].
4.2 Architecture Overview
The Lance game engine revolves around three main technologies, as discussed below and
summarized in Figure 4.1. Please refer to the appendices for further details.
4.2.1 Message Handling
Message handling allows game entities to communicate without requiring knowledge of their
underlying interfaces, and helps minimize code inter-dependencies and maximize
modularity. By allowing message handling to propagate to a class’ child components, game classes
could defer entire behaviors to these components, which could then be easily replaced, modified and
extended in response to design changes.
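To make the pattern concrete, the following is a minimal sketch of component message propagation. The type names (Message, Component, GameObject) and the single-struct message format are illustrative assumptions for this document, not the actual Lance interfaces.

```cpp
// Hypothetical sketch of component message propagation (not the actual Lance API).
#include <memory>
#include <string>
#include <vector>

struct Message {
    std::string type;   // e.g. "Damage", "RiftActivated"
    float value = 0.0f; // optional payload
};

class Component {
public:
    virtual ~Component() = default;
    // Return true if the message was consumed.
    virtual bool HandleMessage(const Message& msg) { return false; }
};

class GameObject : public Component {
public:
    void AddComponent(std::unique_ptr<Component> c) { children_.push_back(std::move(c)); }

    // Senders only need the Message type, never a component's concrete interface.
    bool HandleMessage(const Message& msg) override {
        bool handled = false;
        for (auto& child : children_) {
            handled |= child->HandleMessage(msg);  // trickle down the hierarchy
        }
        return handled;
    }

private:
    std::vector<std::unique_ptr<Component>> children_;
};
```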
Figure 4.1. Layered Engine Architecture Graph for Lance
4.2.2 Centralized, Dynamic Class Factory
By providing a globally accessible, automated means to instantiate and configure game entities
without knowledge of their implementations, Lance allows for extreme flexibility in entity and level
construction. A single, data-driven command can create entire hierarchies of component-based game
objects.
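A common way to realize such a factory is to register creation functions under string identifiers so that data files can request entities by name. The sketch below illustrates that approach under those assumptions; it is not the factory code used in Lance.

```cpp
// Minimal sketch of a name-keyed class factory (illustrative, not the Lance implementation).
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

// Minimal stand-in for the component base class from the previous sketch.
class Component {
public:
    virtual ~Component() = default;
};

class ClassFactory {
public:
    using Creator = std::function<std::unique_ptr<Component>()>;

    // Register a creation function under a type name that data files can reference.
    void Register(const std::string& typeName, Creator creator) {
        creators_[typeName] = std::move(creator);
    }

    // A single data-driven command: instantiate an entity by name alone.
    std::unique_ptr<Component> Create(const std::string& typeName) const {
        auto it = creators_.find(typeName);
        return it != creators_.end() ? it->second() : nullptr;
    }

private:
    std::unordered_map<std::string, Creator> creators_;
};

// Usage (hypothetical component type):
//   factory.Register("MeshRenderer", [] { return std::make_unique<MeshRenderer>(); });
//   auto component = factory.Create("MeshRenderer");   // driven purely by configuration data
```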
4.2.3 XML-Based Soft Architecture
Most classes in Lance are data-driven, deferring their behaviors to run-time configuration parameters
that can be read from external XML files. Programmers and designers can easily read, modify, and
automate these files. The hierarchical nature of XML also proves well suited to describing a game entity’s
composition and can feed into the Class Factory to automatically populate entire scenes of the game.
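For illustration, a soft-architecture entity definition might look like the hypothetical snippet below. The element and attribute names are invented for this example rather than taken from the actual Paws of Madness configuration files, but they show how a hierarchy of components can be described in data and handed to the Class Factory.

```xml
<!-- Hypothetical entity definition; element names are illustrative only. -->
<Actor name="BattleCultist">
  <Transform x="12.0" y="0.0" z="-4.5" />
  <Component type="MeshRenderer" model="cultist.bto" />
  <Component type="PatrolBehavior" speed="2.5" leashRadius="8.0" />
  <Component type="DamageOnTouch" amount="1" />
</Actor>
```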
4.2.4 Three-Tier Architecture
The Lance architecture is organized in a hierarchical model of three Tiers, as shown above in Figure
4.1 and below in Figure 4.2. The main classes for each of the three tiers are relatively simple
component containers, which handle the great majority of the game entity’s behavior. Game events
received by an entity propagate to their components, allowing update cycles and message handling to
trickle down to the lowest levels of the hierarchy.
Figure 4.2. Three Tier Architecture
4.2.4.1 Game and Service Tier
The top-level tier manages the game Timing, Class Factory, and Configuration systems. The Game
class is responsible for creating and maintaining a list of components that provide various services,
such as scene rendering, audio playback, input, etc. The Game and its Service components are globally
accessible through singleton pointers. Many game entities act as Service consumers by requesting
information and resources from each of the game’s services. Consumers access game services
through abstract interfaces. Service consumers interact with the interfaces rather than the services
themselves, enforcing interchangeability and platform neutrality.
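The provider/consumer split can be expressed roughly as follows, with consumers holding only an abstract service interface. The interface and class names here are assumptions made for illustration rather than the engine’s real Service definitions.

```cpp
// Sketch of the service provider/consumer relationship (names are illustrative).
class IAudioService {
public:
    virtual ~IAudioService() = default;
    virtual void PlaySound(const char* name) = 0;
};

// A concrete, swappable provider owned by the Game tier.
class DirectSoundService : public IAudioService {
public:
    void PlaySound(const char* name) override { /* platform-specific playback */ }
};

// A consumer only knows the abstract interface, keeping it platform-neutral.
class MeteorWarning {
public:
    explicit MeteorWarning(IAudioService& audio) : audio_(audio) {}
    void Trigger() { audio_.PlaySound("meteor_incoming"); }

private:
    IAudioService& audio_;
};
```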
4.2.4.2 Level Tier
The second tier manages game flow by implementing a variety of interaction nodes, like menus,
credit screens and gameplay scenes. Implemented as a state machine stack, game flow can be rerouted by
swapping the current Level state for a new one, which effectively terminates the current scene and
replaces it with another. Game flow can also be temporarily suspended by pushing a level onto the stack
without releasing the existing state. This design comes in handy when implementing temporary
menus and pause screens that return to the previous game state.
Levels create and maintain their own scene graphs, as well as Level States that provide a finer
degree of control. While Levels can represent entire scenes, Level States can represent loading
screens, winning / losing pop-ups, and other temporary changes to game flow that do not require
altering the current scene. Levels also provide a list of Level Manager components, which provide
scene-wide behaviors, like camera control, game rules and score tracking.
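A stack-based flow manager of the kind described above could look roughly like the sketch below; Level is reduced to a trivial interface and the method names are invented for the example.

```cpp
// Sketch of a level/state stack for game flow (simplified, illustrative names).
#include <memory>
#include <vector>

class Level {
public:
    virtual ~Level() = default;
    virtual void Update(float dt) = 0;
};

class GameFlow {
public:
    // Replace the current scene entirely (e.g. menu -> gameplay).
    void Swap(std::unique_ptr<Level> next) {
        if (!stack_.empty()) stack_.pop_back();
        stack_.push_back(std::move(next));
    }

    // Suspend the current scene without releasing it (e.g. a pause screen).
    void Push(std::unique_ptr<Level> overlay) { stack_.push_back(std::move(overlay)); }

    // Return to the suspended scene underneath.
    void Pop() { if (!stack_.empty()) stack_.pop_back(); }

    // Only the topmost level receives updates.
    void Update(float dt) { if (!stack_.empty()) stack_.back()->Update(dt); }

private:
    std::vector<std::unique_ptr<Level>> stack_;
};
```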
4.2.4.3 Actor Tier
The lowermost tier contains all individual instances of game entities that exist in a given scene.
Actor Components constitute the workhorse of the Lance Engine and implement most of the
interactive behaviors in the game, from simple colliders to playable character controllers. Actors
provide a Transform and Velocity in the game world, which their components can access and modify
through their behaviors. Actors and components interact with each other through messages and are
the main consumers of game Services, as shown in Figure 4.3.
Figure 4.3. Service Provider/Consumer Relationship
4.3 The BTO Renderer
Paws of Madness uses the BTO Rendering System, as seen in Figure 4.4, a self-contained DirectX
renderer designed and developed by several members of the team. BTO compiles into a separate
Dynamic Link Library, and its interactions with the Lance Engine are mediated through an
implementation of the Graphics Service interface. Since BTO features its own window creation and
management functions, several modifications were performed to defer Win32 window management
to the parent Lance system.
In addition to rendering models to the screen and managing Texture, Material, Geometry,
and other resource information, the BTO library provides its own resource import methods. The
BTO library can read model and animation data from .obj, .fbx and the native .bto formats.
The system also features an animation system that can define and play named animation sequences
for individual model instances, as well as group and skinned-mesh 3D model rendering.
BTO uses custom DirectX 10 shaders to implement a Deferred Rendering pipeline. Shader
features include deferred scene lighting with point, directional and spot light sources, hardware-simulated particle systems, and scene post-processing.
Figure 4.4. The BTO Renderer Class Diagram
4.4 Testing Methodology
Since the engine relies heavily on its messaging, instantiation and configuration features, we
emphasized testing and optimizing these core technologies early on. The original implementations of
the message handling and configuration systems using STL containers and Boost classes proved to
be too slow and were replaced with optimized versions. In some cases, the performance gains reached
almost 900%. As the person responsible for building most of these systems, Sebastian led the testing
phase.
Once the core systems were optimized, the class factory and soft architecture ensured that the
unit testing and integration processes were seamless, with minimal time dedicated to test setup. By
modifying XML configuration files, key components could be tested in isolation as they were
implemented by their respective programmers, then integrated into relevant entities, and finally
incorporated into the full game.
XML testing generated clashes once the main content integration process began. As more
developers began modifying the XML files, we needed to separate the production files from the test
environment. Any miscommunication or delay risked causing these two configuration environments
to fall out of sync, preventing bug fixes from being applied to the game and, in some cases,
overwriting entire configuration updates with old code. With no access to a dedicated configuration
tool, the only option left was to painstakingly review the files and ensure that the most recent values
were applied, which strained the development team in the late stages of the project.
In retrospect, one area that lacked proper testing was Service unit and integration benchmarks.
Without performing proper stress tests on the rendering system, key performance bottlenecks were
left unchecked until extremely late in production. They became apparent as the game’s performance
steadily dropped every time more content was added to the level. Moreover, the loss in performance had
detrimental side-effects to other systems, including a loss in collision detection accuracy which
would sometimes cause game objects to fall through the ground. While fixes were eventually
applied, the resulting performance of the game fell short of what could have been accomplished
if the renderer had been properly benchmarked and optimized early in the process.
4.5 Results
If we consider the overall technical scope of the project, using a modular architecture was a success.
With nearly 50,000 lines of code and 10,000 lines of XML configuration, the overall scope of the
game is large given the 20-week production schedule. With an easy-to-automate soft
architecture layer, most of the soft architecture was programmatically generated and ported from the
game prototype with a great degree of fidelity, simplifying the level design and content integration
processes immensely.
The modular, component-based architecture of the Lance engine managed to implement the
complex, combinatorial game mechanics with no significant issues. Nearly all of the known bugs are
intrinsic to the various third-party technologies utilized, rather than a result of mishaps or corner
cases of component interaction.
The interchangeability of game services was put to the test when Sebastian decided to replace
physics engine technologies after a significant portion of dependent code had already been written.
The migration process from Havok to Bullet physics was completed in a little over two weeks, with
minimal impact to the rest of the code base. Therefore, the complex, modular architecture paid off by
minimizing risk throughout the entire production process.
4.6 References
[1] “boost.org.” boost C++ Libraries. Boost, n.d. Web. 9 May 2013. http://www.boost.org/
[2] Sawicki, Adam. “Introduction to XNA Math.” N.p., 4 May 2010. Web. 9 May 2013.
http://www.asawicki.info/news_1366_introduction_to_xna_math.html
5 Asset Overview
The direction we took with the assets changed drastically between where we began and where the
project ended. The total number of artists working on our project had reached seventeen by
the end, counting both members of our team and the external teams, and that does not include the ninety-five
assets we obtained from the web. The challenge of incorporating all of these assets was maintaining
consistency. We used references and example material whenever possible to mitigate this
issue. The rest of this section describes our decisions for each specific asset group. The Art
Bible appendix lists the actual final assets we used.
5.1 Platforms
During the Fall quarter, Ivy made the initial platforms as large, unit-square chunks in Maya. These
platforms had a simple, single texture that we intended for procedural level generation in the
prototype game. We had to drop her work from the final product because we changed the setting from
the lava temple to the floating platforms.
At the beginning of Winter quarter, we built a website to chronicle the development of the
project. Part of the website included a master list of platforms that the artists could reference during
their development time. This list contained platform name conventions, environment reference
photos, material reference photos, and general top-down shapes of the required platforms, as well as
approximate platform dimensions in Maya units. Once this list was complete, the team of four
environment artists worked autonomously on their assigned platforms, checking in each Friday to
show progress and give updates on their work.
During Spring quarter, we made simplified colliders of the commissioned platforms. Then,
Rob created prefabs in Unity that would later interface with the XML exporter. Luis reworked many
of the models, mostly homogenizing textures across dirt and brick and adding features to
brick platforms to improve fidelity.
5.2 Details
Work on obtaining detail models did not start until the Spring quarter. During this time, Rob
gathered 86 models from across the Web, accounting for licenses and attribution requirements, and
Luis cleaned and processed nearly all of them by homogenizing scales, centering pivots, and
renaming models and materials to fit our internal conventions. Rob then created prefabs that
included primitive collision data in Unity that would later interface with the XML exporter. Also at
this time, Bill created .bullet colliders for the few detail models that required a complex mesh
collision surface.
5.3 UI
At the beginning of Fall, Jia created and implemented the UI for our original prototype. The chosen
system, Awesomium, uses JavaScript to load web pages as the UI. We had to scrap a majority of
Jia’s work because the game no longer reflected the original design ideas the artwork represented.
The work didn’t restart until the Spring quarter. The team re-envisioned the appearance of the interface,
discarding the tablet entirely in favor of a more streamlined interface. Because of this change,
Sebastian re-designed and re-purposed a majority of the UI, while Alex re-designed the rift icons.
Once the rift icons were completed, 3D models were created to represent the rifts in the world.
5.4 3D Characters
During the Fall quarter, we only had a single member of the team dedicated to character creation.
Because of this limitation, only one character—the player character—was scheduled to be worked on
during the Fall. Also, the engine could only support group-based animations, and so, we designed
the character to be built from a series of separate limb, torso, and head models overlapped at the
joints. This character was ultimately replaced by a new, single-mesh character with bone animations
after the engine supported bone-based animations. The artists worked on this version of the
character, along with Pomerazziag and the cultist character, during the Winter quarter in a manner
similar to the production of the platforms. During Spring quarter, Brian Van Horn rigged and
animated these models, and we implemented them into both the prototype and final game.
5.5 Audio
Audio continues to be the final piece of the project that capstone teams (including ours) tend to leave
until the end. During the Fall, we worked on implementing sound in the engine and had just acquired
placeholder sounds and music to test in the engine. It wasn’t until Winter quarter that we sought a
team to handle our audio. Although they produced the main music for the game, they gave us
only seven sound effects, mostly for the character. We found the rest of the sound effects late in the
Spring from free online sources. None of the sounds were mixed to consistent relative levels or cleaned for
looping, and the music had repetitive sections, which prompted us to get an external consultant.
Ryan Wilkinson spent his free time addressing the issues listed above and returned the
work relatively quickly. We studied Super Mario Galaxy to create a list of acceptable levels
for the various sound effects and music in the game.
6 Research Topics
Please refer to the included Appendix disc for the complete write-ups. Below we summarize each
topic.
6.1 3D Model File Reader Implementation — by Luis Bobadilla
A graphics engine or rendering engine is a system designed to display graphics on a computer
screen. The graphics engine is usually part of a larger system that manages events and creates
responses to them. Implementing a controlled model loader that provides all the specified
functionality the graphics engine requires is imperative. Even though the basic DXUT layer in the
DirectX API provides part of this functionality, DXUT cannot implement all the underlying needs
our engine requires. We needed a customized model loader that would let the main engine add and
modify materials, joints, and animations at runtime. In this study, I discuss why the
implementation of an FBX file reader is required, what kind of information needs to be extracted
from it, and what tools are available to extract that information, as well as suggestions for future work
with the file format.
6.2 A Scalable, Data-Centric Multicore Game Loop Architecture — by
Sebastian Hernandez
This paper presents and analyzes Task-centric and Data-centric multithreaded game engine
architectures, and proposes an alternative architecture that combines the benefits of both approaches
and addresses some of their weaknesses. The proposed architecture includes several techniques to
solve three common data concurrency problems in game engines.
6.3 Jumping Through Negative Space: A Reconstruction of Super
Mario Galaxy’s Matter Splatter Mechanic — by Rob Link
The following research set out to discover the possibility of recreating a unique visual and gameplay
effect of invisible platforms seen previously in Super Mario Galaxy. The experimentation was
conducted in Unity, and the results proved the effect is possible in a modern, component-based game
engine.
By applying the results of this study, designers can create more compelling
experiences in their future games or find interesting new ways to utilize the gameplay system.
6.4 Simulation and Rendering of an Advanced Particle System Engine
on the GPU — by Nitin Nandakumar
The goal of the research was to implement a particle system that effectively simulates the behavior and
quality of an advanced game engine’s particle system on the GPU. I built the engine in DirectX 10
using the Geometry Shader Stream-Output functionality. For the experiment, I used the Unity game
engine as a reference, and its results were tested against those of the proposed engine. The
proposed system can be further extended to support all the simulation properties used by Unity’s
Shuriken particle system and completely replicate it. This approach can serve as a guide for
porting an existing game engine’s particle system to perform its simulation and rendering on the
GPU.
6.5 Stepping Out of Your Skin: Roleplaying as Non-Humans — by Bill
Phillips
Races in fantasy RPGs always seem to be cut from the same cloth, with an overpopulation of
humans, elves, dwarves, and a few other human-like races that only have minor cosmetic
differences. However, a subset of the population prefers to play something dissimilar from
humanity. The core RPG books could better serve this market. This paper provides evidence that
players of all skill levels can play and enjoy playing as distinctly non-human characters, and that
adding options for them could benefit RPGs.
6.6 The Endless Game: Using Heuristics to Explore the Gameplay of Sandbox Games — by Andrew Wilkinson
Sandbox games have become increasingly prevalent over the past couple of years. Games like Minecraft and The Elder Scrolls V: Skyrim set the standard for undirected free play, and their success shows there is a growing market for such games. This paper explores the assertion that sandbox games need specific design decisions due to their unique properties. The approach relies heavily on prior work, including the concept of using game heuristics to evaluate games, a set of established heuristics, and the idea that heuristics can determine whether a genre of games has specific design considerations. Previous researchers developed design heuristics by reading game reviews, so I applied those heuristics to problems found in sandbox game reviews from the same website. The results showed that the reviewers called out artificial intelligence problems in nine of the thirteen sandbox games reviewed. This study provides an initial glimpse into how strongly artificial intelligence affects the immersion of undirected free play.
6.7 Holding on to the Edge: Achieving Dynamic Ledge Grabbing with
Vector Math — by Jia Xu
Ledge-grabbing is widely used in action games to enrich the gameplay and level design. There are
various ways to implement ledge-grabbing, such as collider volumes placed around grab-able edges
or ray-casting. However, because we were unsure of the capability of our engine, neither of the
established approaches would suit our game. To achieve the goal, I adopted a method that uses only one collider attached to the avatar. Once the collider detects a grab-able surface, the hanging point is calculated via vector math. In the prototype, this approach is fast and robust for most convex-shaped platforms. For concave-shaped platforms, or platforms whose center points lie outside their bounds, more prerequisite work is required.
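As a simplified reconstruction of the vector math involved (not the exact code from the write-up), the sketch below projects the avatar's position onto the top edge of the grabbed face and clamps the result to the edge's endpoints to find the hanging point; the Vec3 type and edge inputs are illustrative stand-ins for the engine's own structures.

// Sketch: computing a ledge "hanging point" by projecting the avatar's position
// onto the top edge of the grabbed face, then clamping to the edge's endpoints.
#include <algorithm>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// edgeStart/edgeEnd: the two corners of the platform's grab-able top edge,
// reported when the single collider attached to the avatar detects a surface.
Vec3 computeHangingPoint(const Vec3& avatarPos, const Vec3& edgeStart, const Vec3& edgeEnd)
{
    Vec3 edge = edgeEnd - edgeStart;
    float lengthSq = dot(edge, edge);
    if (lengthSq < 1e-6f)
        return edgeStart;                       // degenerate edge: snap to the corner

    // Project the avatar onto the edge and clamp so the hands stay on the ledge.
    float t = dot(avatarPos - edgeStart, edge) / lengthSq;
    t = std::max(0.0f, std::min(1.0f, t));
    return edgeStart + edge * t;
}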
7 Play Testing and Results
With Paws of Madness, we sought to create a fully functional prototype and run user tests to evaluate
our designs. During the Fall quarter, we hadn't pursued much external feedback on our ideas, which resulted in somewhat of a failure at the end-of-quarter presentation. But midway into the Winter quarter, we made a functioning game that we could playtest. The resulting feedback fed into a redesign of a few mechanics and controls. We waited until late Spring to run another playtest that could evaluate the degree of fun. Overall, the development involved an iterative approach based on
the feedback from playtests, faculty, students outside the project, and friends of the team.
7.1 The Method
Creating the Paws of Madness website, by way of Google Sites, helped us advertise our survey. We
developed our surveys in Google Forms because it allows direct embedding into our website. Our
team designed the website so that the prototype, which ran in the Unity Web App, was exhibited on the front page with a short description of any changes from previous prototypes (if any). We embedded the survey below it. The description above the Unity WebApp asked the player to remember to take the survey after playing the game. The WebApp stored and displayed player information, e.g., best/worst play time, that the survey asked for later.
We hoped for more time to make playtest data collection far more effective by adding user metrics to the prototype. However, we couldn't figure out an efficient way to transmit that data to a remote server from Unity, and the capable team members had already completely filled their time. Google Forms offers a variety of benefits, especially the automatic collation of all answer submissions into a Google spreadsheet. Additionally, Google displays all the information as relevant graphs, e.g.,
histograms or pie charts. This technology facilitated the overall analysis of our playtests, but we still
struggled to find associations and relationships among the data.
Team members with usability and HCI experience constructed the survey. The questions
focused on the mechanics and controls of the character using pseudo-Likert scales and established a
brief but relevant profile on the “gamer-type” of the tester. Multiple faculty checked the survey to
ensure the wording wasn’t leading or confusing. We changed the survey slightly between playtests
but only to remove questions about gameplay no longer relevant to our prototype.
Once we implemented the website and survey, our team decided to run two playtests. We hoped to compare the second playtest's results against a possible baseline from the first playtest. Ideally, this plan would have given us clear data indicating whether our game had improved since the first playtest. The first playtest focused on the win condition, the movement of the player, and interpretation of our initial level. We had not implemented rift effects for this playtest because we chose platforming as our most important game mechanic and wanted to build the baseline from that. Later, when we added the rifts in Playtest Two, we hoped to discover whether the rifts improved the gameplay experience of freeform platforming or subtracted from it. To recruit testers, we reached out to students from our department using Facebook, friends of the team via online chat programs, faculty from our classes, and Steam members on Greenlight Concepts.
7.2 Playtest One Results
The first playtest resulted in forty-seven takers and useful information about the state and playability of our game. We learned that movement and control of our character needed adjusting, based on answers from page three of our survey, as seen in Figure 7.4. The majority of people's responses to "In general, how did you feel about the control over the character?" can be seen in Figure 7.10.9, and nearly seventy percent said that including in-air momentum would improve gameplay, as seen in Figure 7.10.12. Players experienced with platformer games have a certain level of expectation about how the character controls. The data allowed us to re-address movement as an issue and research examples from other platformers, such as Assassin's Creed and Mario Galaxy. We also noticed the camera initially seemed tricky for people, based on the answers from page four of our survey, as seen in Figure 7.5. A slight majority mentioned they felt like they were falling off platforms due to the camera, as seen in Figure 7.10.14. At the same time, they didn't feel that the camera slowed them down, that it was bad, or that it would be better as a smart, non-user-controlled camera, as seen in Figures 7.10.15, 7.10.13, and 7.10.16 respectively. We interpreted this feedback to mean that people couldn't see holes in the level, rather than that the camera itself felt wrong.
We learned that our UI was fairly unintuitive, and most people did not understand the role of some of the game objects or never interacted with them, as seen in Figure 7.6. The win zone was the least recognized or seen thing in the game, as seen in Figure 7.10.19, followed by the tablet interface displaying which rifts had been collected, as seen in Figure 7.10.21. During informal follow-up questions with some of the participants after the survey, many people worked out that the tablet represented the rifts they had collected and that, to win the game, they needed to collect all of them. A large majority of people said their gameplay experience would have improved had these game elements been explained at the beginning of the game, as seen in Figure 7.10.22.
Playtest One also taught us that a majority of people did not play our game more than once, as seen in Figure 7.10.6, and more than half of the players failed to win. Even more disheartening, more than 25% of our players stopped playing before winning or losing. But the playtest succeeded in providing useful data and setting up a baseline for future playtests. Still, many of our assertions seemed inconclusive.
7.3 Playtest One Changes
After Playtest One, we drastically changed the user interface and win condition in hopes of addressing the poor usability feedback we had received. In addition to the new UI, we added tighter controls to the player movement, allowing momentum in the air and a heavier, less "floaty" feel inspired by Mario and other popular platformers. We also added a tutorial at the beginning to walk people through all the game elements and ensure nothing went unnoticed.
7.4 Playtest Two Results
Unfortunately, development demanded more of our time, and we couldn't devote as much effort to gathering play testers for Playtest Two. We received only eight survey results, and only four of the respondents hadn't played the game before. We recorded and collated the results, but due to the lack of participants, we could not claim improvement (or the lack of it) over Playtest One.
7.5 Imagine RIT Results
We had a much more successful informal playtest at Imagine RIT, where team members noted observations of players playing our game and checked for significant problems. We also observed the level of frustration versus joy, albeit sometimes from a distance. In general, the player reactions were very positive, with surprisingly high praise from children, who are not considered our target audience. We believe that the perception of our game as a student project skewed the opinions of many players, who treated game-breaking bugs and glitches with a general leniency that a professional game wouldn't receive. Even so, we still had many positive reactions to our theme, controls, and level layouts. With a more significant playtest than the last one, and without many people playing the tutorial, we could more accurately test our UI. The majority of people did not understand the UI's meaning or how to win until a team member explained it to them. However, once we explained the UI, players could effectively use it to check on the current state of their progress in the game at a glance.
Towards the end of the event, the Thomas Golisano College of Computing and Information
Sciences awarded Paws of Madness the “Best Illustration of Creativity.” Although these results did
not validate the game’s fun, we can say that people enjoyed playing our game and have expressed
interest in playing it again in the future.
7.6 Survey Figures (Survey Posted on 1/26/2013)
Figure 7.1. Intro to Paws of Madness Survey
Figure 7.2. Page one of Paws of Madness Survey
Figure 7.3. Page two of Paws of Madness Survey
Figure 7.4. Page three of Paws of Madness Survey
Figure 7.5. Page four of Paws of Madness Survey
Figure 7.6. Page five of Paws of Madness Survey
Figure 7.7. Page six of Paws of Madness Survey
Figure 7.8. Pictures associated with Page 5's questions
From left to right: Checkpoints, Win Zone, Rifts, Tablet User Interface
Survey 2 Edits (Survey posted on 4/17/2013)
Differences in questions from the previous survey:
We added: "Have you played one of our previous play-tests?"
We replaced: "If you won the game, what was your best time?" and "If you won the game more than once, what was your worst time?" with "If you won the game, what was your best score?"
We removed: "Our character has no momentum in the air (the character will stop moving mid-air if you let go of W, A, S, or D). Would you say that including air momentum would improve gameplay?"
We removed: "How do you think the gameplay would be affected by including a smart camera you didn't have to control?"
We removed: "See Picture 4 below the survey depicting a user interface object. At what point did you recognize what this user interface object was displaying?"
We removed: "Would your gameplay experience have been affected if all these elements were explained at the beginning of the game?"
Figure 7.9. New pictures associated with page 5’s questions
From left to right: Rifts, Win Zone, User Interface/Player Progress
Figure 7.10.1. Results for "Please check off your TWO most commonly played genres of game."
Figure 7.10.2. Results for "On which platform do you primarily play games?"
Figure 7.10.3. Results for "When you play games on your home computer do you play more often with a keyboard + mouse, or a gamepad?"
Figure 7.10.4. Results for "Did you play this version of "Into the Paws of Madness" more than once?"
Figure 7.10.5. Results for "About how long did you play this version of the prototype across all play sessions?"
Figure 7.10.6. Results for "When you played "Into the Paws of Madness" (the first time if you played more than once), did you:"
Figure 7.10.7. Results for "If you won the game, what was your best time?"
Figure 7.10.8. Results for "If you won the game more than once, what was your worst time?"
Figure 7.10.9. Results for "In general, how did you feel about the control over the character?"
Figure 7.10.10. Results for "How do you feel about the character speed?"
Figure 7.10.11. Results for "How do you feel about the character jump height?"
Figure 7.10.12. Results for "Our character has no momentum in the air (the character will stop moving mid-air if you let go of W, A, S, or D). Would you say that including air momentum would improve gameplay?"
Figure 7.10.13. Results for "In general, how did you feel about the control over the camera?"
Figure 7.10.14. Results for "Did you ever feel like you fell off of a platform due to the camera?"
Figure 7.10.15. Results for "Did you ever feel that controlling the camera slowed you down?"
Figure 7.10.16. Results for "How do you think the gameplay would be affected by including a smart camera you didn't have to control?"
Figure 7.10.17. Results for "At any point did you give up because you didn't know what you were supposed to do, or where to go?"
Figure 7.10.18. Results for "See Picture 1 below the survey depicting a game object. At what point did you know what impact this game object had on the game?"
Figure 7.10.19. Results for "See Picture 2 below the survey depicting a game object. At what point did you know what impact this game object had on the game?"
Figure 7.10.20. Results for "See Picture 3 below the survey depicting a game object. At what point did you know what impact this game object had on the game?"
Figure 7.10.21. Results for "See Picture 4 below the survey depicting a user interface object. At what point did you recognize what this user interface object was displaying?"
Figure 7.10.22. Results for "Would your gameplay experience have been affected if all these elements were explained at the beginning of the game?"
Figure 7.10. Results 1 Figures (Results gathered on 3/2/2013)
Figure 7.11.1. Results for "Have you played one of our previous play-tests?"
Figure 7.11.2. Results for "Please check off your TWO most commonly played genres of game."
Figure 7.11.3. Results for "On which platform do you primarily play games?"
Figure 7.11.4. Results for "When you play games on your home computer do you play more often with a keyboard + mouse, or a gamepad?"
Figure 7.11.5. Results for "Did you play this version of "Into the Paws of Madness" more than once?"
Figure 7.11.6. Results for "About how long did you play this version of the prototype across all play sessions?"
Figure 7.11.7. Results for "When you played "Into the Paws of Madness" (the first time if you played more than once), did you:"
Figure 7.11.8. Results for "If you won the game, what was your best score?"
Figure 7.11.9. Results for "In general, how did you feel about the control over the character?"
Figure 7.11.10. Results for "How do you feel about the character speed?"
Figure 7.11.11. Results for "How do you feel about the character jump height?"
Figure 7.11.12. Results for "In general, how did you feel about the control over the camera?"
Figure 7.11.13. Results for "Did you ever feel like you fell off of a platform due to the camera?"
Figure 7.11.14. Results for "Did you ever feel that controlling the camera slowed you down?"
Figure 7.11.15. Results for "At any point did you give up because you didn't know what you were supposed to do, or where to go?"
Figure 7.11.16. Results for "See Picture 1 below the survey depicting a game object. At what point did you know what impact this game object had on the game?"
Figure 7.11.17. Results for "See Picture 2 below the survey depicting a game object. At what point did you know what impact this game object had on the game?"
Figure 7.11.18. Results for "See Picture 3 below the survey depicting a user interface object. At what point did you recognize what this user interface object was displaying?"
Figure 7.11. Results 2 Figures (Results gathered on 4/26/2013)
7.7 Informal Observations (Observations gathered from Imagine RIT on
5/6/2013)
7.7.1 Gameplay Experience Successes
• Most people seemed to generally enjoy the game and would often play long enough to lose, or play multiple times to complete at least one objective.
• Children in particular audibly showed their approval of our game to us and to each other, regardless of whether they were winning or not.
• Players seemed to like the speed of the game and the movement.
• Paws of Madness was awarded "Best Illustration of Creativity" by the Thomas Golisano College of Computing and Information Sciences.
• Most people seemed to find the theme humorous and enjoyed the description of the story when it was told to them.
7.7.2 Gameplay Experience Problems
• Players encountering game-breaking bugs would often write them off; the idea of a student game seemed to sway their opinion of the quality of our product.
• Players couldn't decide which camera scheme felt right: inverted Y or normal Y.
• Players were almost always unaware of the story and couldn't piece it together without the help of one of the team members.
• The tutorial level doesn't accurately explain the win condition.
• Players don't face Pomerazziag when they start the game; they face the other way and don't see him.
• Players cannot always see the light pillars in the game and hence do not understand the objective.
• The rift effects' influence on the world is not clearly understood by most players.
• Players do not realize that they can double-jump.
• Some players felt the camera was too close to the player when they turned.
• The speed of the player's animations does not match the run speed.
• The Darkness rift is too dark/difficult, and most of the playtime is filled with darkness.
• The wind direction is not noticeable to most players.
• Some people felt it was odd controlling the player's movement in mid-air, and most of them did not realize they could.
• People did not recognize the hearts in the UI as a representation of health.
• The player checkpoint particles are distracting, and people confuse them for important objectives rather than just save points. People try to interact with the checkpoint book.
• Battle Cultists seem more important when remaining inactive, and players run toward them and try to interact with them.
• Players were often found retracing their steps when confronted with uncommonly placed dead-ends (rifts in the Fractured Sanctuary in particular).
• Bugs found: players jumping into platforms, players getting stunned permanently by the windmill, player wall-sliding making the camera jerky, collision meshes visible on some walls, and a very rare infinite stun bug caused by traps.
8 Post Mortem
8.1 What Went Right
One of the most critical elements that led to the successful development of Into the Paws of Madness (or just Paws) was the acquisition and repurposing of lab space by the graduate students.
Because of the large team size, having a space free from outside distractions was instrumental to the
flow of development. This space allowed our team to co-locate 100% of the time, facilitating very
fast problem turnarounds via team communication. The dedicated space also helped lead to regular
team member attendance and even internally enforced daily work hours. Also, the dedicated space
resulted in a much more tension-free work environment compared to working in a lab shared with
many other students. We often found ourselves talking over other students, which seemed unfair.
The dedicated workspace alleviated these problems, and we felt much more comfortable leaving
project supplies (such as the project task board) within the area itself.
Another aspect of development that went well was the internal roles that each team member
assumed. By having a strong lead engine architect, those working on the engine team could quickly
go to the engine lead for guidance and technical help. Another important role was that of producer, who monitored each team member's progress to help keep everyone on the same page throughout development, which included hosting the SCRUM meetings. Generally speaking, the repurposing of roles after Fall quarter led to better use of each team member's strengths and allowed each team member to work towards their own educational goals.
As mentioned previously, we refocused our efforts after the Fall quarter, which proved very
beneficial for the development of the game. During the first few weeks of Winter quarter, we
emphasized evaluating Paws' state to get everyone on the same page. We held many meetings early in the quarter to ensure that everyone focused their effort on providing fun gameplay. This series of meetings affirmed that using a SCRUM style of development with regular meetings would help keep development moving forward. These regular meetings had especially helped at the end of Fall quarter. Thus, we decided early in Winter quarter to continue with SCRUM meetings twice weekly, which eventually turned into daily SCRUM meetings during Spring quarter. This refocusing of efforts, combined with the regular meetings, likely played a large role in the generally good group
cohesion and helped prevent group disintegration.
In terms of development, many things went well. The decision to have a continuously
developed and iterated prototype in Unity helped Paws emerge. The inclusion of Unity in the long-term development of the project also allowed us to host playtests on our website at all times, which made it far easier to receive feedback from distant friends and family. Unity also served an ancillary purpose as a level design tool, allowing content developers to port their systems into the engine with some nifty XML usage. Development of the Lance engine itself generally went well, and Paws' final build has almost all of the many necessary services.
Including outside artists on the project also helped. Although we had some issues (see next section), the artists brought a fresh perspective, as we lacked the internal art talent necessary to
complete a game of this magnitude.
Another one of the key ingredients to the successful development of Paws was the unique
timing of the many extra-curricular events that cropped up during Spring quarter. By showcasing
our game at GDC 2013, and participating in events such as RPI GameFest and Imagine RIT, we
could enforce secondary deadlines. The sequence of deadlines resulted in a series of functional,
content-filled builds near the end of Spring quarter, which also let us re-evaluate Paws each week.
We could join other students in showcasing our hard work to interested people while not putting all
of our eggs in one basket for the final capstone defense.
Generally speaking, much went well for the development of Paws, especially considering
just how much could have gone horribly wrong with a capstone of this scope. By the end of Spring
quarter, the rifts were generally entertaining to our playtesters, a marked improvement over the un-fun rift effects of Fall quarter. The final level was generally well received at Imagine RIT, and the project received a creativity award from the school. One of the most positive aspects of the Imagine RIT event was that kids really enjoyed playing the game, and a few parents even asked how they could get a copy of the game for their kids.
8.2 What Went Wrong
Although we could have achieved more with the playtests, they were useful in the discussion among
our group. But we often ignored the data in design discussions because the team did not trust its validity and/or the analysis wasn't thorough enough. We made changes based on hunches we already had, which were often only strengthened by the agreement of others through the user surveys. Still, we cannot concretely claim that our game is fun, as fun is hard to define, but we can
say that people have enjoyed playing our game, and have expressed interest in playing it again in the
future.
Also, many team members working on the project during Fall quarter took roles that did not
capitalize on their strengths. The work lacked transparency in Fall quarter, and some of the most
critical tasks, like character control and XML exporting, were delegated to classmates who did not
continue with the project. Also, the lead designer was not enrolled in the Fall quarter class, which
meant design issues that would pop up in class meetings would often have long response times or
information disconnects. These various follies ultimately led to a prototype-level game at the end of
Fall quarter, whereby we discovered that many of our rifts were just downright annoying. However,
we addressed all of these issues during the first few weeks of Winter quarter.
However, we didn’t resolve all issues before full capstone production. With further preproduction planning, we could have focused on the porting of the various rift mechanics from Unity
into the Lance engine. Although mechanic prototyping overall went well, we did not outline the
systems beforehand, leading to some systems requiring a redesign and restructuring by the porter to
66
comply with the requirements of the engine. This issue may have been alleviated if all members of
the team had a hand in designing mechanical prototypes early on in the process, before full-on
production started.
Concerning management, we also lacked a strong bridge between the primary team members
and the external artists. For the most part, external teams worked autonomously but attended weekly
meetings with one or two members of the group who had other primary responsibilities. However,
this structure led to us recreating some assets, as they did not match assets developed by other artists.
A dedicated Art Lead or Concept Artist within the team could have properly interfaced with outside
talent while crafting a strong visual identity for the project that other artists could rally behind.
A few other technical issues arose during development. We chose Havok early, only to scrap all of that hard work halfway through and start from scratch with Bullet. We also discovered problems with the animation system too late in the development process, although we remedied them in time for big events like RPI GameFest and Imagine RIT. The version control system also had inconsistencies, particularly files in the Unity repository randomly
disappearing or duplicating themselves, even between commits. We ultimately resolved the issue
with a deep cleaning of the repository to destroy duplicate assets, but some additional research into
Unity version control solutions early on could have mitigated more than a few headaches. Lastly, we
encountered framerate issues very late into the project. The graphics programmer’s time was quite
fragmented between a wide variety of responsibilities, such as lights, cameras, particle systems, and
engine interfaces. This issue could have been addressed by delegating some of the tasks to other
programmers.
Although we accomplished many of the tasks we sought to do, we had to cut some of them
to find time to work on the more important aspects of the game. One of the biggest items that we
had to cut was the scoring/achievement system. Sadly, this system would have given Paws replayability, as it could have invited the player back to unlock hidden achievements or pursue goals parallel to the banishing of Pomerazziag. We also had to cut a system for gathering playtest data at runtime. We had hoped for more time to make playtest data collection far more effective by adding user metrics to the prototype. However, we couldn't figure out an efficient way to transmit that data to a remote server from Unity, and most team members by this point had their time completely filled with other
tasks.
The project also needs proper documentation throughout the code, scripts, and even the XML. Perhaps one of the reasons the project lacks much of its in-line documentation is the strict co-location and the ease of finding and talking to other members of the team throughout development.
Lastly, there were a few issues with overall group interaction. For example, we learned that
we were often afraid of confronting each other over important issues that we could have resolved
through more regular group discussions. Also, group-wide virtual task boards, such as Trello and the
Google Site asset tables, ultimately did not work out. However, the physical task board and Post-It
note strategy was well received, as the giant board on the wall acted as a constant reminder of
everyone’s current and future tasks. Also, aside from the occasional peer-programming session, we
never had group-wide code review sessions to update everyone.
8.3 What We Learned
Below, we summarize the lessons learned from the development of Into the Paws of Madness:
1. Communication and co-operation can be more valuable than individual skill. A team is
more productive when working together to solve problems than when everyone pulls in a
different direction.
2. Knowing your target audience is critical to being successful. Kids love our game, even
though they were not necessarily our target audience to begin with.
3. Regularly getting outside people to playtest your game can yield invaluable knowledge
about your own work and reveal truths that may not be immediately visible.
4. It’s extremely important to update everyone on the team and ensure that everyone is “on the
same page,” i.e., working on something that they themselves enjoy.
5. Large group projects mean communication and transparency are essential, not optional. Co-location and regular working hours, paired with visual task boards and SCRUMs where team members feel they can ask for help, will improve productivity.
6. Find the fun early on, and keep iterating on it.
7. Systems can always be improved and features added, but knowing when to say “No” to
feature-creep can be just as important. Keep the project within scope.
8. It’s important for everyone to understand what the project is about. A game that showcases
technical ingenuity and a game that aims for artistic expression have different goals.
Knowing where your game fits on the spectrum can help you figure out what you want to get
out of that project.
8.4 Future Work
The following list details some of the features that could be expanded upon or improved in future
versions of the game:
1. Score/achievement system: Reward the player for overcoming challenging play.
2. A more intuitive tutorial: Teach the player through actions instead of words.
3. Project documentation: Extensively document code, scripts, and shaders.
4. More community interaction: Leverage our website and Steam Greenlight page for added
visibility.
5. A level editor for the Lance Engine: Visually lay out levels without Unity.
6. Dynamic shadows: A planned feature that did not make it into the final build.
7. Finishing and documenting the .pom format: A custom file format for reading model data.
8. Fixing the skybox: Adjust the skybox to correct the lines around some of the edges.
8.5 Summary
Needless to say, the team behind Into the Paws of Madness is quite proud of the game we have
created for this capstone project. We have accomplished the majority of the goals we set out to achieve, and players have responded well to our final product. We learned that internal communication and
transparency were vital to the success of the project. With the help of other students, faculty, and
play testers, we could identify problem areas of the game design and address them. We have learned
how to leverage each other’s strengths while being aware of the team’s weaknesses. By iterating on
our past failures, we created something that others can enjoy, and we are proud of that.