3D Flight Racer

Research Project: 3D Steering Behaviors and the
Illusion of Intelligence
Adam Estela—DigiPen Institute of Technology
adam.estela@digipen.edu
Ever since the age of four, I've wanted to experience new levels of adrenaline in the form
of speed. I often terrified my mom, claiming I would be a race car driver or a fighter pilot
one day. While I've grown up to become a programmer, my four-year-old interest still has
a strong influence over my work, which has led me to this project to answer a few
questions. How can we program steering behaviors for an airplane in an open 3D space,
and more importantly, how do we make it look human?
What Makes AI Intelligent?
Before we start with any code, it is good to decide what will make an AI agent seem
intelligent so that we have some sort of framework to lead us. From observation and light
testing, I propose this hypothesis: In order to form an AI that successfully gives the
illusion of intelligence, it must:
 Have multiple behaviors
 Have actions and reactions
 Have a clear goal
To convince you I haven't just pulled this theory from thin air, close your eyes and
imagine some of your favorite or most memorable AI opponents from a video game.
Being able to show multiple behaviors demonstrates a deeper dimensionality and
decision-making capability that makes an AI appear as if it is giving its actions some thought. This
should hold true for nearly every AI you have encountered in a game.
Secondly, an AI should be able to act on its own, whether it is moving autonomously or
starting a conversation with another AI. An equally important trait is that an AI should
react. Reaction typically occurs via interaction between the player and non-player
characters (NPCs). You've undoubtedly experienced this level of reaction when fighting
a boss or playing a spy game where you can psych out enemies.
Finally, a good AI needs a clear goal. If you saw a screen with an airplane moving on it,
is it smart? There is no way to tell if it is thinking if we aren't sure what it is trying to do.
Similarly, would an AI seem smart if it started planting a farm when the player pointed a
gun at it? Not entirely. An AI needs a clear context, purpose, and goal. As the player
notices an AI performing actions that help it reach its goal, the player will start perceiving
it as an intelligent agent.
3D Steering Behaviors
Now that we know what our AI should have to appear human, let's start! For this project
we are going to make a racing game with airplanes in 3D. It's an open world game where
the leader emits rings for the other racers to follow. The rest of the game is irrelevant
since we won't be completing it. Given this context, it is reasonable to assume we will
need a wandering behavior for the AI to lead the race, and a fly-to-target behavior for
racers to pursue a leader.
Pursue: A Naive Approach
To determine which direction to fly in a 3D space, break the problem into two 2D spaces:
the x-z plane for left and right movement, and the y-z plane for up and down movement.
Refer to Figure 1.
Figure 1. Decide to travel up or down and left or right using the dot product.
To determine if the plane should travel left or right, take the dot product of the
plane's right vector and the vector to our target. If the result is negative, our target lies to
our left, and vice versa if the dot product yields a positive value. Similarly, we take the
dot product of the plane's up vector and the destination vector to determine whether to travel up or
down toward our target.
Once we have determined which way the plane needs to travel, increment or decrement
the pitch and yaw by some value. (Roll, pitch, and yaw need to be stored somewhere.) If
the angle is small enough, do not steer the plane to avoid twitching.
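Here is one way the decision above could look in Python. The article gives no code, so the function name, the vector representation, and the `turn_rate`/`dead_zone` parameters are my own illustrative choices:

```python
import math

def steer_toward(plane_pos, plane_right, plane_up, target_pos,
                 pitch, yaw, turn_rate=0.02, dead_zone=0.05):
    """Decide left/right and up/down via dot products, then nudge yaw and pitch."""
    to_target = [t - p for t, p in zip(target_pos, plane_pos)]
    # Normalize the vector to the target.
    length = math.sqrt(sum(c * c for c in to_target)) or 1.0
    to_target = [c / length for c in to_target]

    # Dot with the plane's right vector: negative means the target is to our left.
    side = sum(r * t for r, t in zip(plane_right, to_target))
    if abs(side) > dead_zone:  # ignore tiny angles to avoid twitching
        yaw += turn_rate if side > 0 else -turn_rate

    # Dot with the plane's up vector: negative means the target is below us.
    lift = sum(u * t for u, t in zip(plane_up, to_target))
    if abs(lift) > dead_zone:
        pitch += turn_rate if lift > 0 else -turn_rate

    return pitch, yaw
```

The dead zone is the "if the angle is small enough, do not steer" rule: when the dot product is near zero the target is nearly straight ahead, so we leave pitch and yaw alone.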
The problem with this is that we have a pre-determined increment value causing the plane
to steer at one set rate. If our velocity is too fast, the plane may be incapable of turning
sharply enough to hit its target. In order to mitigate this, add a dependency between the
velocity and the angle. See Figure 2.
Figure 2. Slow the plane down depending on the severity of our turn.
Assuming our velocity is a very basic arcade style velocity:
velocity = direction * speed
we want to add a multiplier to this equation that will slow or speed the plane up
depending on the urgency to turn.
newVelocity = direction * speed * multiplier
We know the multiplier should slow the plane to almost a halt if the turn is most severe,
meaning both angles are 180 degrees, and speed the plane up as the angles approach 0. For
this, use: (360 – (α1 + α2)) / 180 (Refer to Figure 2.) This takes both angles into account
and maps our multiplier value between [0, 2]. Doing this gives the desired output, an
airplane in 3D that can successfully fly toward a target or pursue another agent in the
game.
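A minimal Python sketch of the multiplier, assuming α1 and α2 have already been measured as in Figure 2 (the function names here are mine, not from the article):

```python
def turn_multiplier(alpha1_deg, alpha2_deg):
    """Map the two turn angles (degrees, each in [0, 180]) to a speed
    multiplier in [0, 2]: ~0 for a full reversal, ~2 for straight ahead."""
    return (360.0 - (alpha1_deg + alpha2_deg)) / 180.0

def new_velocity(direction, speed, alpha1_deg, alpha2_deg):
    """Arcade-style velocity, scaled by the urgency of the turn."""
    m = turn_multiplier(alpha1_deg, alpha2_deg)
    return [d * speed * m for d in direction]
```

With both angles at 90 degrees the multiplier is exactly 1, so the plane flies at its base speed; sharper turns slow it, gentler ones speed it up.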
Pursue: A Physics Approach
As an alternative approach, we can use forces and damping for the desired pursuit
behavior.
Figure 3. Find the steering force based on direction and our target.
It would seem reasonable to subtract our direction vector from our destination vector
and apply the result as a force to steer the plane. This, however, will give
us a steering force that is far too weak, assuming our direction vector is normalized as it
should be. Using the velocity vector instead of the direction gives us a steering force that is
too strong, causing the plane to overshoot its target and compensate by applying a force
in the opposite direction. This gives us a drunken-fly effect where the plane spends a few
seconds overshooting and correcting until it finally pinpoints its target.
To apply a smooth steering force without negative side effects, we'll apply damping to
the previous force. This damping value should be based on the angle of the turn, much
like velocity was based on the angle of the turn in the naive approach. Use α / 90 to
determine our damping value. This maps our value between [0, 2], meaning it will
damp the steering force toward zero as the angle approaches 0, and amplify the force if the
target is more than 90 degrees from us.
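One way this could be sketched in Python. The article leaves the exact raw force open, so as an assumption I compute a "desired" velocity (the current speed, aimed at the target), take the difference from the current velocity, and scale it by the α / 90 factor:

```python
import math

def steering_force(velocity, to_target):
    """Damped steering: (desired - velocity) scaled by angle / 90."""
    # Angle (degrees) between the current velocity and the vector to the target.
    v_len = math.sqrt(sum(c * c for c in velocity)) or 1.0
    t_len = math.sqrt(sum(c * c for c in to_target)) or 1.0
    cos_a = sum(v * t for v, t in zip(velocity, to_target)) / (v_len * t_len)
    cos_a = max(-1.0, min(1.0, cos_a))  # clamp for acos safety
    angle = math.degrees(math.acos(cos_a))

    # Desired velocity: the current speed, but aimed at the target.
    desired = [c / t_len * v_len for c in to_target]
    raw = [d - v for d, v in zip(desired, velocity)]

    # alpha / 90 maps [0, 180] degrees onto a [0, 2] damping factor:
    # near 0 degrees the force is damped toward zero, past 90 it is amplified.
    damp = angle / 90.0
    return [f * damp for f in raw]
```

When the target is dead ahead the force vanishes, so the plane flies straight without twitching; the further it must turn, the harder the force pushes.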
Which Approach to Use?
Both of these methods give the desired output, but which one should we apply? The
physics approach has fewer immediate calculations, but requires a physics system,
whereas the naive approach is a few more calculations in the logic, but it is all self-contained
and works directly with graphics. The answer is, it depends. Does your game
need a physics system regardless? Do you want arcade style movement or realistic
behavior? How much control do you want over your movement? Make your decision
based on the context of your game and what you need.
Wander: A Naive Approach
One behavior down! The last behavior we'll make for our simple game is wandering. This
will allow the AI to lead the race and create a path for players to follow that is relatively
interesting. Wander seems like a no-brainer. Couldn't we just apply a random force to the
plane each frame? This behavior will wander, but the movement will be sporadic and
seem far from human. See Figure 4.
Figure 4. Applying random force will cause sporadic behavior.
This is clearly not what we want for our racing game.
Wander: The Right Way
In order to wander like a human, we need to have some sort of control over the random
force and direction we choose to travel. To do this, define a circle in front of the plane
and choose a random point on the circumference to be our target. When choosing our
next target, define an offset that restricts the range of values to choose from. See Figure
5. [Reynolds]
Figure 5. A target is chosen on the circumference of the larger circle. The small circle
represents the offset. This is the range of possible values for our next random target.
To apply this to our 3D game, simply define a sphere in front of the plane rather than a
circle. Given the random target vector, apply a force toward it by subtracting our velocity
from it and using the result as the force. Another valid way to wander would be to use the
target's x, y, and z components to drive the pitch, yaw, and roll directly.
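The sphere-based wander could be sketched like this in Python (the class name and the `sphere_distance`/`sphere_radius`/`jitter` parameters are my own illustrative choices; the technique follows Reynolds' wander):

```python
import math
import random

class Wander3D:
    def __init__(self, sphere_distance=10.0, sphere_radius=4.0, jitter=0.3):
        self.distance = sphere_distance
        self.radius = sphere_radius
        self.jitter = jitter
        self.offset = [sphere_radius, 0.0, 0.0]  # a point on the sphere

    def next_target(self, position, direction):
        """Pick the next wander target on a sphere projected ahead of the plane."""
        # Randomly displace the previous offset, then re-project it onto the
        # sphere: consecutive targets stay close, so the wandering looks smooth.
        off = [c + random.uniform(-self.jitter, self.jitter)
               for c in self.offset]
        length = math.sqrt(sum(c * c for c in off)) or 1.0
        self.offset = [c / length * self.radius for c in off]

        # The sphere's center sits ahead of the plane along its heading.
        center = [p + d * self.distance
                  for p, d in zip(position, direction)]
        return [c + o for c, o in zip(center, self.offset)]
```

The jitter plays the role of the small circle in Figure 5: it restricts each new target to a neighborhood of the last one instead of letting it jump anywhere on the sphere.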
Tying it Together
Now that we have a couple behaviors that will be useful for the game, we need some way
to switch between them to give a single AI multiple behaviors.
Finite State Machines
A reasonable way to tie these states together would be by adding a finite state machine to
update and change between them. An FSM is a bit over-scoped if we were only ever going
to have two states, but let's assume that more states will be added later in the
development process. For the sake of this paper I'll review the basics.
A state machine is simply an entity that can update logic in a given state. It is also
responsible for switching between states. A state machine typically has pointers to its
previous state, current state, and next state, as well as a list of states that it owns.
Each state is simply the logic necessary to behave a certain way. It should inherit from a
base state and have the following functions: OnEnter(), Update(), and OnExit(). OnEnter()
runs exactly once when the state is first entered. Update() contains the logic that the
state machine runs each frame. Finally, OnExit() runs exactly once as the state is
ending. [Buckland43]
Each object in the game can own a state machine which contains all of the states (or
logic) applicable to that object. It is wise, if the scope of the game is large, to have a state
machine manager which will run through every owned state machine and update their
current state.
Figure 6. Basic finite state machine structure.
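A minimal Python sketch of this structure (method names mirror OnEnter/Update/OnExit; the article prescribes no particular implementation, so the details here are my own):

```python
class State:
    """Base state: on_enter/on_exit fire exactly once, update runs every frame."""
    def on_enter(self, owner): pass
    def update(self, owner, dt): pass
    def on_exit(self, owner): pass

class StateMachine:
    def __init__(self, owner):
        self.owner = owner
        self.previous = None   # lets a state return to whatever ran before it
        self.current = None

    def change_state(self, new_state):
        if self.current is not None:
            self.current.on_exit(self.owner)   # runs exactly once
        self.previous = self.current
        self.current = new_state
        self.current.on_enter(self.owner)      # runs exactly once

    def update(self, dt):
        if self.current is not None:
            self.current.update(self.owner, dt)
```

A Wander state and a Pursue state would each subclass `State`, and each plane would own one `StateMachine` switching between them.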
With state machines implemented, the AI now has multiple behaviors and has the clear
purpose of leading the race. The only problem is, it has nothing to react to.
Player Interaction
The final layer of our AI is arguably the most important. When a user finally has the
ability to interact with a seemingly intelligent agent, they perceive the AI as being
smarter than it really is. Don't believe me? Next time you play a game, don't do anything.
Watch how the AI behaves when you don't interfere with it. Depending on the game, the
AI will appear much less human when you just observe it, and in some cases you'll
realize it is downright stupid. It is only when we get in its way that we assume the
machine is plotting against us.
For our game, we want the AI to chase us when we're leading the race, and wander to
lead the race when we fall behind. To achieve this we'll have to write some logic and
answer a few questions.
Who's in Front?
Our game is unique in that there is no race track. This presents us with an interesting
problem and the opportunity to answer an uncommon question. How do we know who's
in front of who in an open, trackless, 3D environment?
In a 2D game, this would be as simple as checking track checkpoints, using a grid, or
perhaps using the dot product to compare every player. This won't work for us, as
demonstrated in Figure 7.
Figure 7. How do we determine who's in front in each scenario?
In our open world, there is no way to calculate a leader because situations arise where
there is nothing mathematically unique between the leader and another player. To fix this,
set a default leader at the beginning of the race, or design some way to quickly determine
a leader early on. Once we have set a leader, set up conditions for a pass.
In our game we know that to pass, the passing ship must be in front of the leader and be
facing the same general direction, as seen in Figure 8.
Figure 8. Conditions to pass the leader.
In order to determine if the new leader is, in fact, leading, take the dot product of
vector V (see Figure 8) and the old leader's direction, dir1. If the result is
positive, the new leader is in front. Secondly, we need to make sure it is passing in the
same direction. This time, take the dot product of the directions of the two ships,
dir1 and dir2. If the result is positive, they are facing the same direction to within 90
degrees.
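The two dot-product checks can be sketched in a few lines of Python (the function name is mine; `V`, `dir1`, and `dir2` correspond to Figure 8):

```python
def has_passed(leader_pos, leader_dir, challenger_pos, challenger_dir):
    """A challenger becomes the new leader when it is ahead of the old
    leader AND facing roughly the same direction (both dot products > 0)."""
    # V: vector from the old leader to the challenger.
    v = [c - l for c, l in zip(challenger_pos, leader_pos)]

    # Positive dot with the leader's direction (dir1): challenger is in front.
    in_front = sum(a * b for a, b in zip(v, leader_dir)) > 0

    # Positive dot between the headings (dir1 . dir2): within 90 degrees.
    same_way = sum(a * b for a, b in zip(challenger_dir, leader_dir)) > 0

    return in_front and same_way
```

Run once per frame against the current leader, this is all the race logic the FSM needs to decide when to swap a plane from pursuing to wandering.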
Layering Actions and Reactions
Because everything we've written so far is very foundational, it has taken quite a bit of
work to implement each feature. However, this is not always the case. The more layers of
action and reaction an AI has, the more intelligent and human it will appear. These do not
need to be complicated. Using what we have, it would be easy to implement a barrel roll
when the player gets close, or a bunch of back-flips to boast when the AI is leading by a
long shot. We could also improve what we have and cause the AI to roll into its turns in
order to fly like a real pilot. Add as many simple layers as possible! Each little feature
dramatically strengthens the player's reaction to your AI.
Conclusion
In order to form an AI that successfully gives the illusion of intelligence, it must:
 Have multiple behaviors
 Have actions and reactions
 Have a clear goal
A clear goal allows the player to understand what the AI is trying to do, and cheap
actions and reactions make the player feel as if the AI is responding to them like a
human. Multiple behaviors give an AI depth, and the ability to make more decisions. This
can be applied to any game; I encourage you to try it in your next project and watch how
players react!
When using steering behaviors, consider your implementation options and make an
intelligent decision based on what your game or project needs. The illusion of
intelligence is more smoke and mirrors than brute technological force. You'll find that
cheap tricks tend to be the ones that woo the audience.
References
[Buckland43] Buckland, Mat, Programming Game AI by Example, Wordware
Publishing, Inc., 2005.
[Reynolds] Reynolds, Craig, “Steering Behaviors for Autonomous Characters,” available
online at http://www.red3d.com/cwr/steer/Wander.html/, June 6, 2004.