CGDD 4003
• Human-level intelligence - an unsolved problem
• AI “describes the intelligence embodied in any manufactured device”*
• Need AI for both enemies and allies
*again, your text, chapter 5.3
• AI must be intelligent, yet purposely flawed
– Opponents must present a challenge
– Must keep the game fun
– Must lose to the player in a fun manner
• AI must have no unintended weaknesses
– No “golden paths” for defeating the AI
– AI must not fail or appear “dumb”
• AI must perform within CPU/memory of game
– Real time
– Receives only 10%-20% of frame time
• AI must be configurable
– Designers can adjust the difficulty
• AI must hit the shipping date
– Actually, AI techniques must be proven early
– If AI can evolve, it must be thoroughly tested
• AI has perfect data, yet…
• Goal is not perfect AI
– Perfect AI is no fun and expensive to compute
– Goal is fun AI that is extensible/customizable
• How many AI developers?
– AAA/RTS: maybe 3 full time?
– Others: maybe one part time?
• A.k.a. Non-Player Characters (NPCs)
1. Sensing
– Have perfect info, so must cripple!
– Vision – limit distance (human limitations)
– Hearing – distance, shots fired
– Communication with other agents
– Reaction times (must delay) – see the sensing sketch after this list
2. Thinking (making decisions)
– Expert knowledge (simple rules, finite state machines, decision trees)
– Search – use an algorithm to derive a near-optimal solution (e.g. pathfinding)
– Machine Learning (not used often) – neural networks, genetic algorithms, decision trees
– Avoid flip-flopping – don’t re-decide every frame (stick with your decision!)
3. Acting – if your NPC is intelligent, the user must see/hear intelligent things
– Picking up weapons, running for cover
– If agent knows it will die, it should scream (to show comprehension)!
• Learning/Remembering
– Agent gets better
– Smart terrain can be marked as “dangerous”
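A minimal sketch of the crippled sensing above, assuming a hypothetical Agent struct with a sightRange and a fixed reaction delay (both values are made up for illustration):

#include <math.h>
#include <stdbool.h>

typedef struct {
    float x, y;            /* position */
    float sightRange;      /* vision is limited by distance */
    float reactionTimer;   /* counts down before the agent may react */
} Agent;

/* The AI has perfect data, so we deliberately limit what it can "see". */
bool CanSee(const Agent *a, float targetX, float targetY)
{
    float dx = targetX - a->x;
    float dy = targetY - a->y;
    return sqrtf(dx * dx + dy * dy) <= a->sightRange;
}

/* Call once per frame: even after spotting the target, the agent waits
   out a human-like reaction delay before it is allowed to act. */
bool SenseTarget(Agent *a, float targetX, float targetY, float dt)
{
    if (!CanSee(a, targetX, targetY)) {
        a->reactionTimer = 0.4f;      /* assumed ~400 ms reaction delay */
        return false;
    }
    a->reactionTimer -= dt;
    return a->reactionTimer <= 0.0f;  /* true once the delay has elapsed */
}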
• Easy to make AI too hard. Instead:
– Make it less accurate
– Give it longer reaction times
– Have enemies attack one-on-one (think kung-fu movies)
• Agent cheating:
– Agents can be omniscient
– AI can be given an unfair advantage to make the game harder
– Should you let the player know?
• Most common AI
• Simple to understand, implement, debug
• Basic idea: change from state to state based on input (and maybe some randomness)
• Extending
– Can store states on a stack to return to previous tasks (see the sketch after the example below)
– Can transition to a new FSM
– Can have multiple FSMs
– Visualize by displaying the current state above the agent’s head
http://www.ai-junkie.com/architecture/state_driven/tut_state1.html
(but not scalable, uses “polling” instead of events, and in C instead of Lua)

enum states { WANDER, ATTACK, FLEE };

void RunLogic(int *state)
{
    switch (*state)
    {
    case WANDER:
        Wander();
        if (SeeEnemy()) { *state = ATTACK; }
        break;
    case ATTACK:
        Attack();
        if (LowOnHealth()) { *state = FLEE; }
        if (NoEnemy())     { *state = WANDER; }
        break;
    case FLEE:
        Flee();
        if (NoEnemy()) { *state = WANDER; }
        break;
    }
}
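One possible way to support the “store states on a stack” extension above, reusing the enum from the example; a sketch only (PushState/PopState and the stack depth are assumptions, not from the tutorial):

#define MAX_DEPTH 8

/* Pushing a state interrupts the current task; popping resumes
   whatever the agent was doing before. */
typedef struct {
    int stack[MAX_DEPTH];
    int top;                 /* index of the current state */
} StateStack;

void InitStateStack(StateStack *s)       { s->top = 0; s->stack[0] = WANDER; }
int  CurrentState(const StateStack *s)   { return s->stack[s->top]; }
void PushState(StateStack *s, int state) { if (s->top + 1 < MAX_DEPTH) s->stack[++s->top] = state; }
void PopState(StateStack *s)             { if (s->top > 0) s->top--; }

RunLogic would then switch on CurrentState(&stack), calling PushState(&stack, ATTACK) when an enemy is seen and PopState(&stack) when the enemy is gone, returning the agent to its previous task.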
• A* pathfinding – fast at finding cheapest path
• Behavior Tree – hierarchical FSM
– Non-leaf nodes decide which child runs and when
– Leaf nodes do the actual work
– Used in Halo 2 and 3
• Command Hierarchy – decisions flow down a military-style chain of command
– General makes high-level decision
– Foot soldier fights
• Dead reckoning – predict a moving target’s future position from its current velocity (“leading the target”)
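A minimal sketch of leading a target: estimate the projectile’s flight time from the current distance, then aim at where the target will be (Vec2 and projectileSpeed are assumptions for illustration):

#include <math.h>

typedef struct { float x, y; } Vec2;

/* Dead reckoning: aim where the target will be, not where it is.
   Two refinement passes are plenty when the target is slower than
   the projectile. */
Vec2 LeadTarget(Vec2 shooter, Vec2 targetPos, Vec2 targetVel, float projectileSpeed)
{
    Vec2 aim = targetPos;
    for (int i = 0; i < 2; ++i) {
        float dx = aim.x - shooter.x;
        float dy = aim.y - shooter.y;
        float t  = sqrtf(dx * dx + dy * dy) / projectileSpeed;  /* flight time */
        aim.x = targetPos.x + targetVel.x * t;   /* target's predicted position */
        aim.y = targetPos.y + targetVel.y * t;
    }
    return aim;
}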
• Emergent behavior – complex behavior that arises from combining simpler behaviors
• Flocking – moving groups of creatures
– Separation – steer to avoid crowding flockmates
– Alignment – steer towards the average heading of the flock
– Cohesion – steer towards the average position of the flock
• Formations – similar to flocking, but keep formation
• Avoidance – steer to avoid crowding
• Alignment – steer towards average heading of flock
• Cohesion (midpoint) – steer toward midpoint of flock
From http://www.red3d.com/cwr/boids/
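A compact sketch of the three rules above in C; the weights and the “everyone is a neighbor” simplification are assumptions (see Reynolds’ page for the real model):

#include <stddef.h>

typedef struct { float x, y; } Vec2;
typedef struct { Vec2 pos, vel; } Boid;

/* One steering step for boid i: separation + alignment + cohesion. */
Vec2 FlockSteering(const Boid *boids, size_t count, size_t i)
{
    Vec2 sep = {0, 0}, avgVel = {0, 0}, avgPos = {0, 0};
    size_t n = 0;

    for (size_t j = 0; j < count; ++j) {
        if (j == i) continue;
        sep.x    += boids[i].pos.x - boids[j].pos.x;   /* push away from others */
        sep.y    += boids[i].pos.y - boids[j].pos.y;
        avgVel.x += boids[j].vel.x;                    /* for alignment */
        avgVel.y += boids[j].vel.y;
        avgPos.x += boids[j].pos.x;                    /* for cohesion */
        avgPos.y += boids[j].pos.y;
        ++n;
    }
    if (n == 0) { Vec2 zero = {0, 0}; return zero; }

    /* Separation away from crowding, alignment toward the average heading,
       cohesion toward the average position. Weights are made up. */
    Vec2 steer;
    steer.x = 1.5f * (sep.x / n)
            + 1.0f * (avgVel.x / n - boids[i].vel.x)
            + 1.0f * (avgPos.x / n - boids[i].pos.x);
    steer.y = 1.5f * (sep.y / n)
            + 1.0f * (avgVel.y / n - boids[i].vel.y)
            + 1.0f * (avgPos.y / n - boids[i].pos.y);
    return steer;
}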
• Influence mapping – cell-based weighting of who holds power over each area of the map
– Assign each cell a value based on its own units plus a contribution from neighboring cells
– Good for path planning
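A rough sketch of the cell-based weighting described above; the grid size, the single smoothing pass, and the 0.25 falloff are assumptions:

#define MAP_W 32
#define MAP_H 32

/* unitStrength[y][x]: raw strength of the units in each cell
   (positive = ours, negative = enemy). Each cell's influence is its
   own strength plus a fraction of its four neighbors', so strong
   areas "bleed" outward – useful for path planning around danger. */
void BuildInfluenceMap(const float unitStrength[MAP_H][MAP_W],
                       float influence[MAP_H][MAP_W])
{
    for (int y = 0; y < MAP_H; ++y) {
        for (int x = 0; x < MAP_W; ++x) {
            float spread = 0.0f;
            if (x > 0)         spread += unitStrength[y][x - 1];
            if (x < MAP_W - 1) spread += unitStrength[y][x + 1];
            if (y > 0)         spread += unitStrength[y - 1][x];
            if (y < MAP_H - 1) spread += unitStrength[y + 1][x];
            influence[y][x] = unitStrength[y][x] + 0.25f * spread;
        }
    }
}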
• Level-of-Detail AI – the closer an agent is to the player, the more detailed (and expensive) its AI
• Manager Task Assignment
– Have a manager to prioritize tasks
– Put the best candidate on that job
• Obstacle Avoidance (must avoid clutter)
• Terrain Analysis – identifying strategic locations
(Working our way up to A*)
• Grid – each cell marked as passable or not
• Waypoint Graphs – “I walk the line”
• NavMesh – If an edge isn’t shared, you shall not pass!
• Move towards goal, then trace around obstacle (CW or CCW)
• Each cell has
– A position
– A pointer to another cell (which cell led us to this cell)
• Store an open and closed list
– Open – all paths that still need to be processed
– Closed – nodes that aren’t the goal but have been processed
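A minimal sketch of that cell record and the two lists in C (the array-backed lists and their sizes are assumptions made for brevity):

typedef struct PathNode {
    int x, y;                  /* the cell's position */
    float givenCost;           /* cost paid to reach this cell so far */
    struct PathNode *parent;   /* which cell led us to this cell */
} PathNode;

#define MAX_NODES 4096

/* Open list: nodes still waiting to be processed.
   Closed list: nodes already processed that were not the goal. */
typedef struct {
    PathNode *nodes[MAX_NODES];
    int count;
} NodeList;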
• The difference between these searches is which node in the open list they choose to process next
– Breadth-First – waiting the longest
– Best-First – closest to goal
– Dijkstra – cheapest to reach from start cell
– A* – cheap AND close to goal
• Ply-by-ply, pushing new nodes onto the back of the queue
• Memory hog (exhaustive search) http://realtimecollisiondetection.net/blog/?p=83
• The node closest to the goal is processed first
(heuristic search) http://theory.stanford.edu/~amitp/GameProgramming/AStarComparison.html
(Comparison figures: Breadth-First/Dijkstra vs. Best-First – http://theory.stanford.edu/~amitp/GameProgramming/AStarComparison.html)
• Similar to breadth-first, but tracks the cost to reach each node
• Can understand weighted regions (e.g., sand vs. swamp vs. road vs. water)
• May update a node if a cheaper path to it is found
• Exhaustive, and always finds the optimal path!
• Combines best-first and Dijkstra
– Cost paid to get to that node (given cost – the Dijkstra part)
– An estimated cost to the goal (heuristic cost – the best-first part)
– Heuristic cost is usually the distance to the goal (e.g. straight-line distance)
Final cost = Given cost + (Heuristic cost * Heuristic weight)
(Illustration from Wikipedia.org)
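The final-cost formula maps directly onto code; a sketch, assuming a straight-line-distance heuristic (any admissible distance estimate works):

#include <math.h>

/* Heuristic cost: straight-line distance from a cell to the goal. */
float HeuristicCost(int x, int y, int goalX, int goalY)
{
    float dx = (float)(goalX - x);
    float dy = (float)(goalY - y);
    return sqrtf(dx * dx + dy * dy);
}

/* A* ranks open-list nodes by given cost plus weighted heuristic.
   A weight of 1.0 keeps A* optimal (with an admissible heuristic);
   a larger weight trades path quality for speed. */
float FinalCost(float givenCost, float heuristicCost, float heuristicWeight)
{
    return givenCost + heuristicCost * heuristicWeight;
}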
1) Create the rootNode
   - set its x and y according to the startPoint
   - set its parent to NULL
   - set its givenCost to 0
2) Push the rootNode onto the open list
3) While the open list is not empty
   A) Pop the node with the lowest givenCost from the open list and assign it to the currentNode
   B) If the currentNode’s x and y correspond to the goalPoint, the path has been found: build it by following the parent pointers from the currentNode back to the rootNode, and stop
   C) For each nearbyPoint around the currentNode
      a) if this nearbyPoint is in a spot that is impassable, then skip to the next nearbyPoint
      b) create the successorNode
         - set its x and y according to the nearbyPoint
         - set its parent to the currentNode
         - set its givenCost to the currentNode’s givenCost + the cost of going from the currentNode to the successorNode
      c) if a node for this nearbyPoint has been created before, then if the successorNode is better (cheaper) than the oldNode, pop the oldNode and delete it; else skip to the next nearbyPoint
      d) push the successorNode onto the open list
   D) Push the currentNode onto the closed list
4) If the while loop exits without finding the goal, the goalPoint must be unreachable
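A compact C sketch of the loop above on a small grid. The grid size, the uniform step cost of 1, and the linear scan of the open list are assumptions for brevity; as written it pops the lowest givenCost (Dijkstra-style, matching the pseudocode) – ranking nodes by the FinalCost sketched above instead would make it A*:

#include <stdbool.h>

#define GRID_W 8
#define GRID_H 8
#define BIG    1e9f

enum { UNSEEN, OPEN, CLOSED };

/* Fills parentX/parentY so the path can be walked back from the goal.
   Returns false if the goal is unreachable. passable: 1 = walkable. */
bool FindPath(const int passable[GRID_H][GRID_W],
              int startX, int startY, int goalX, int goalY,
              int parentX[GRID_H][GRID_W], int parentY[GRID_H][GRID_W])
{
    float given[GRID_H][GRID_W];
    int   status[GRID_H][GRID_W];

    for (int y = 0; y < GRID_H; ++y)
        for (int x = 0; x < GRID_W; ++x) {
            given[y][x]   = BIG;
            status[y][x]  = UNSEEN;
            parentX[y][x] = parentY[y][x] = -1;   /* root has no parent */
        }

    given[startY][startX]  = 0.0f;     /* steps 1-2: root node onto the open list */
    status[startY][startX] = OPEN;

    for (;;) {
        /* Step 3A: pop the open node with the lowest givenCost
           (linear scan here; a real game would use a priority queue). */
        int cx = -1, cy = -1;
        float best = BIG;
        for (int y = 0; y < GRID_H; ++y)
            for (int x = 0; x < GRID_W; ++x)
                if (status[y][x] == OPEN && given[y][x] < best) {
                    best = given[y][x]; cx = x; cy = y;
                }
        if (cx < 0) return false;                      /* step 4: open list empty */
        if (cx == goalX && cy == goalY) return true;   /* step 3B: goal found */

        /* Step 3C: examine the four nearby cells. */
        const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
        for (int i = 0; i < 4; ++i) {
            int nx = cx + dx[i], ny = cy + dy[i];
            if (nx < 0 || ny < 0 || nx >= GRID_W || ny >= GRID_H) continue;
            if (!passable[ny][nx]) continue;       /* step 3Ca: skip blocked cells */
            float cost = given[cy][cx] + 1.0f;     /* step 3Cb: cost to successor */
            if (cost < given[ny][nx]) {            /* step 3Cc: keep the better node */
                given[ny][nx]   = cost;
                parentX[ny][nx] = cx;
                parentY[ny][nx] = cy;
                status[ny][nx]  = OPEN;            /* step 3Cd: onto the open list */
            }
        }
        status[cy][cx] = CLOSED;                   /* step 3D: onto the closed list */
    }
}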