Neural Networks

Neural Networks AI Engine Programming
Neural Networks (NN)
NNs are sometimes called Artificial Neural Nets (ANNs). They try to behave like the human brain
Neural Nets in Nature:
o Animals’ brains are large clusters of interconnected nerve cells called neurons
o The human brain is composed of about 100 billion neurons. Each neuron has a number of connections to other neurons, both incoming and outgoing (humans have about 10,000 connections per neuron)
o Incoming connections are called dendrites; outgoing connections are called axons
o Actually, neurons are not directly connected; rather, the dendrites of one neuron come very close (about 0.01 micron) to the axons of others, and the space between them is called the synaptic gap, or synapse
o Working mechanism:
 A charge is transmitted to the neuron
 If this charge gets too large (above a certain threshold), the neuron fires the collected energy down its axon; this is called an action potential
o When a neuron fires a lot in a specific situation, it learns that it should fire in that situation; the opposite also applies
o This introduces inhibition and excitation
What we want to emulate:
o Take input, recognize patterns within the input, and make decisions based on these patterns
The human brain works at about 100 Hz, a fraction of the speed of modern computers
Because of the symbolic way our brains store knowledge, we can employ many levels of efficiency that allow parallel processing
Notes:
o NNs tend to classify and recognize patterns within their input
o NNs without direction restrictions are called recurrent networks
 In these systems, information can go from input to output and back again, allowing feedback within the system
 To facilitate this feature, a number of state variables are associated with the neurons
o Most AI programmers use FF (feed-forward) systems because they are easier to understand
o A recurrent system can be simulated using multiple FF systems
o Sparsely connected systems can be simulated using fully connected ones where the connection weights are small
o NNs can be used as a basis for player modeling (so the AI system can predict player actions)
o NNs can learn during live gameplay, so they can support adaptive techniques
Artificial Neural Nets Overview:
Neuron Structure:
o The value associated with a neuron is the sum of all input values multiplied by their connection weights, plus the neuron’s bias value
o The bias refers to the inhibitory or excitatory effect the neuron has within the network
o The axon of a neuron is represented by its output value (optionally filtered through an activation function); a sketch of this computation follows below
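A minimal Python sketch of that computation (the Neuron class name and the choice of a logistic sigmoid activation are illustrative assumptions, not something specified in these notes):

import math

def sigmoid(x):
    # Logistic sigmoid activation: squashes any value into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

class Neuron:
    def __init__(self, weights, bias):
        self.weights = weights  # one weight per incoming connection (dendrite)
        self.bias = bias        # inhibitory/excitatory offset of the neuron

    def activate(self, inputs):
        # Value = sum of input values multiplied by their connection weights, plus the bias...
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        # ...optionally filtered through an activation function; this is the "axon" output.
        return sigmoid(total)

n = Neuron(weights=[0.5, -1.2], bias=0.1)
print(n.activate([1.0, 0.3]))  # about 0.56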
Neural Network Structure:
o In a typical layer diagram:
 Circles are neurons (nodes)
 Lines between nodes are the connections between them
 Neurons in the 1st column represent the input layer
 Neurons in the 2nd column represent the hidden layer
 As this layer gets bigger, more patterns can be recognized
 Neurons in the 3rd column represent the output layer
 It contains the actual categories that the network is trying to impose on its inputs
 Each connection has an associated value (weight) and a direction
o A perceptron is what you get when you map the input layer directly to the output layer, with no hidden layer
 Perceptrons are used for some linear pattern recognition
o The weight is biologically equivalent to the strength of the connection between two nodes
o Node connectivity (a sketch of a small fully connected network follows this list):
 Fully connected: each node is connected to every node in the next layer
 Sparsely connected: the opposite of fully connected
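A minimal sketch of a fully connected, feed-forward layout with one hidden layer (the layer sizes, the make_layer helper, and the random weight range are illustrative assumptions):

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(num_inputs, num_neurons):
    # Fully connected: every neuron holds one weight per node in the previous layer.
    return [{"weights": [random.uniform(-1, 1) for _ in range(num_inputs)],
             "bias": random.uniform(-1, 1)}
            for _ in range(num_neurons)]

def feed_forward(layer, inputs):
    # Each neuron sums its weighted inputs plus bias, then applies the activation.
    return [sigmoid(sum(w * x for w, x in zip(n["weights"], inputs)) + n["bias"])
            for n in layer]

# Input layer (3 values) -> hidden layer (4 neurons) -> output layer (2 categories)
hidden_layer = make_layer(3, 4)
output_layer = make_layer(4, 2)

inputs = [0.9, 0.1, 0.5]
print(feed_forward(output_layer, feed_forward(hidden_layer, inputs)))

A sparsely connected version could reuse the same code with many of the weights held at (or near) zero, as noted earlier.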
- NNs in the Business World:
o The US Postal Service, which uses heavily trained NNs for handwriting recognition when reading addresses on mail
o Predicting weather, judging credit card fraud, voice recognition, diagnosing diseases, filtering internet sites
o In Games:
 Used in pattern recognition and prediction:
 Patterns are recognized in order to make decisions
 Enemy patterns are recognized and stored to be used for prediction
 Example:
 An NN can be trained to become a black box for potentially expensive operations like animation selection (what animation the basketball player should perform right now, given the state of the game, his skill, the surrounding players, the point spread, and the difficulty level of the game)
- Steps for adding an NN to your game:
o Set up your network
o Train it using specially prepared data
o Use it in the live game
- How to Set Up Your Network:
o Structure:
 Refers to both type (FF or recurrent) and organization (how many nodes, how many hidden layers)
 Input Layer:
 The number of variables the NN can categorize or pattern-match on determines the # of input nodes
 Abstract variables that represent combinations of or calculations on simpler variables are better suited to NNs (e.g., in AIsteroid we could have an abstract “danger” variable calculated from other variables)
 The fewer you can get away with, the better
 The more nodes you include in an NN, the larger the search space the NN has to slog through to arrive at a suitable solution
 Hidden Layer:
 There are no real guidelines about how many hidden layers to use (but one layer is recommended)
 A common practice is to use a medium # of hidden nodes (about twice the # of input nodes), then go a few up or down and compare performance until you see it tapering off
 Some criteria for determining the # of hidden nodes:
o # of training cases and complexity of the function being solved
o Amount of noise (variance) in the outputs
 An NN is capable of encapsulating a nonlinear function that maps the inputs to the outputs
 A perceptron is only capable of finding linear correlations between input and output, so a nonlinear activation function (AF) is added on the nodes to give the network its nonlinearity
 Any nonlinear function except a polynomial is suitable. For backpropagation learning the AF must be differentiable, and it helps if the function is bounded
 Output Layer:
 The # of output nodes = the # of outputs required from the NN
 For continuous output, the level of the neuron’s activation (NA) tells you how much to do
o The NN could tell you to turn right or turn left, and the NA would tell you how much to turn
 Activation is calculated using an activation function
 Common activation functions (sketched below):
o Step function, hyperbolic tangent, logistic sigmoid, Gaussian function
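Minimal sketches of those four activation functions (the threshold of 0 for the step function and the unit width of the Gaussian are assumed defaults):

import math

def step(x, threshold=0.0):
    # Fires (outputs 1) only when the summed charge exceeds the threshold.
    return 1.0 if x > threshold else 0.0

def logistic_sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # output in (0, 1)

def hyperbolic_tangent(x):
    return math.tanh(x)                  # output in (-1, 1)

def gaussian(x):
    return math.exp(-x * x)              # peaks at x = 0, output in (0, 1]

for f in (step, logistic_sigmoid, hyperbolic_tangent, gaussian):
    print(f.__name__, [round(f(x), 3) for x in (-2.0, 0.0, 2.0)])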
o Learning Mechanism:
 After setting up the NN, you should determine how you want to train it
 Common Types of NN Learning:
 Supervised Learning:
o Involves having training data that consists of input/output pairs
o Backpropagation Method (a minimal training sketch follows this list):
 Here you feed input into the NN and adjust the network’s weights whenever there is a discrepancy between the NN’s output and the expected output given in the training data
 Training continues until a certain level of accuracy is achieved
 It’s called backpropagation because the network parameters are adjusted from back to front
o Reinforcement Learning:
 Here the correct outputs are not given to the algorithm; instead the network is rewarded (or its behavior reinforced) when it performs well
 Some implementations also punish the network when it performs poorly!
 Unsupervised Learning:
o Involves statistically looking at the output and adjusting the weights accordingly
o Perturbation Learning:
 Similar to simulated annealing
 Here you test your NN, then adjust some values by a small amount and try it again. If you get better performance, you keep going by repeating the process; otherwise you go back to your last network settings
o Using genetic algorithms to adjust the weight values of the NN
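A minimal sketch of supervised backpropagation on a one-hidden-layer network (the sigmoid activations, the XOR toy data set, and the learning rate of 0.5 are all illustrative assumptions):

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    # One hidden layer; sigmoid activations everywhere.
    def __init__(self, n_in, n_hidden, n_out):
        rnd = lambda: random.uniform(-0.5, 0.5)
        self.w_h = [[rnd() for _ in range(n_in)] for _ in range(n_hidden)]
        self.b_h = [rnd() for _ in range(n_hidden)]
        self.w_o = [[rnd() for _ in range(n_hidden)] for _ in range(n_out)]
        self.b_o = [rnd() for _ in range(n_out)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w_h, self.b_h)]
        self.o = [sigmoid(sum(w * hi for w, hi in zip(ws, self.h)) + b)
                  for ws, b in zip(self.w_o, self.b_o)]
        return self.o

    def train_case(self, x, target, lr=0.5):
        self.forward(x)
        # Output-layer error terms: discrepancy times the sigmoid derivative.
        d_o = [(t - o) * o * (1 - o) for t, o in zip(target, self.o)]
        # Propagate those error terms back to the hidden layer.
        d_h = [self.h[j] * (1 - self.h[j]) *
               sum(d_o[k] * self.w_o[k][j] for k in range(len(d_o)))
               for j in range(len(self.h))]
        # Adjust the weights from back to front.
        for k, dk in enumerate(d_o):
            for j, hj in enumerate(self.h):
                self.w_o[k][j] += lr * dk * hj
            self.b_o[k] += lr * dk
        for j, dj in enumerate(d_h):
            for i, xi in enumerate(x):
                self.w_h[j][i] += lr * dj * xi
            self.b_h[j] += lr * dj

# Train on XOR until the outputs are close to the expected values.
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
net = TinyNet(2, 4, 1)
for _ in range(20000):
    for x, t in data:
        net.train_case(x, t)
for x, t in data:
    print(x, round(net.forward(x)[0], 2))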
o Creating Training Data:
 This task is more important for supervised learning
 Recording:
 Record a human performing the same tasks you are trying to teach your system, then use that recording as training data
 This technique is great because it can be used to build a system with human-level performance
 Using Programs:
 Another way is to write a program that generates reasonable input scenarios and have a human say which output should come up
 This method is time consuming for the person involved
 This method is suitable for discrete outputs and small numbers of output values
 Enhancement (a data-generation sketch follows this list):
o You could generate random input/output pairs, validate them, and store the winners
 It’s recommended to have at least 10 times as many training cases as input units (10N)
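A minimal sketch of that enhancement and of the 10N rule of thumb (the three-input encoding and the is_winner validator are hypothetical placeholders; in practice a human or a scripted check would make that call):

import random

NUM_INPUTS = 3
MIN_CASES = 10 * NUM_INPUTS      # rule of thumb: at least 10 times as many cases as input units

def random_pair():
    # Hypothetical encoding: 3 inputs in [0, 1], 1 discrete output (0 = turn left, 1 = turn right).
    inputs = [random.random() for _ in range(NUM_INPUTS)]
    output = [random.choice([0, 1])]
    return inputs, output

def is_winner(inputs, output):
    # Placeholder validation: keep the pair only if this output is a reasonable
    # answer for this scenario (here, a trivial stand-in rule).
    return output[0] == (1 if inputs[0] > inputs[1] else 0)

training_set = []
while len(training_set) < MIN_CASES:
    pair = random_pair()
    if is_winner(*pair):
        training_set.append(pair)

print(len(training_set), "validated training cases, e.g.", training_set[0])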
An NN is about determining the pattern between input and output, whereas a GA optimizes a set of numbers to maximize some fitness function
NNs are primarily used for:
o Regression: finding a function that fits all the data points within some tolerance
o Classification: classifying a given input into output categories
A fitted function can be under-fitting, fitting, or over-fitting
- Example of using an NN:
o Feature that needs AI: building an NN to help your AI enemy evade bullets shot by the player by sidestepping out of the way
o Inputs to the NN: the enemy’s facing direction, the enemy’s position, and the player’s position
o Output of the NN: the NN determines a movement vector for the enemy (an encoding sketch follows this list)
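A minimal sketch of how those inputs and that output might be encoded for the network (the relative-angle encoding, the normalization constants, and the maximum step size are all illustrative assumptions):

import math

def encode_inputs(enemy_facing, enemy_pos, player_pos):
    # Express the player's position in the enemy's local frame, plus a normalized distance.
    dx = player_pos[0] - enemy_pos[0]
    dy = player_pos[1] - enemy_pos[1]
    rel_angle = math.atan2(dy, dx) - enemy_facing
    dist = math.hypot(dx, dy)
    return [math.cos(rel_angle), math.sin(rel_angle), min(dist / 100.0, 1.0)]

def decode_output(net_outputs, max_step=2.0):
    # Two output neurons in (0, 1) interpreted as a sidestep vector; the activation
    # level says how far to step along each axis.
    x, y = net_outputs
    return ((x - 0.5) * 2.0 * max_step, (y - 0.5) * 2.0 * max_step)

inputs = encode_inputs(enemy_facing=0.0, enemy_pos=(10, 10), player_pos=(40, 10))
print(inputs)                     # what gets fed to the trained network
print(decode_output([0.9, 0.5]))  # e.g. (1.6, 0.0): sidestep to one side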
- Most of the success of an NN comes from capturing good data
Optimizations:
o Optimizing NNs generally means optimizing the training phase, because most NNs are trained offline
o This means lessening the training time needed to construct a viable network design, and creating highly effective, relevant training data
o Some points to consider:
 If your network seems to be stuck in a local minimum, where the error becomes stable but is still higher than the threshold:
 You might be using too little training data
 Your hidden layer might be too small
 If your training seems unstable (meaning the error seems to jump all over the place):
 You might have too many hidden layers
 The network has essentially been given too much room to experiment within
 Over-fitting:
 Your training set might be too small
 You might be running too many training iterations over the data
 Under-fitting:
 You might have a large amount of very noisy training data
 You might not be training for enough iterations
 If the error seems to oscillate between values:
 You may be using too large a learning rate, or the momentum might be too high (a momentum-update sketch follows this list)
 Gradient descent is a greedy algorithm and will perform poorly if the step size is too high
o Here you can use Newton’s method instead (but it’s more costly)
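A minimal sketch of a gradient-descent-with-momentum weight update, showing the two knobs mentioned above (the learning rate and momentum values are illustrative; set them too high and the delta overshoots, making the error oscillate):

def momentum_update(weight, gradient, prev_delta, learning_rate=0.1, momentum=0.9):
    # Step against the gradient, plus a fraction of the previous step.
    delta = -learning_rate * gradient + momentum * prev_delta
    return weight + delta, delta

w, prev = 1.0, 0.0
for grad in (0.8, 0.6, 0.4, 0.2):   # pretend gradients from successive training passes
    w, prev = momentum_update(w, grad, prev)
    print(round(w, 3))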
Pros of Neural Net-Based Systems:
o NNs are a great way to find abstract relationships between input conditions
o NNs can capture solutions to very complex mathematical functions
 Here you save CPU time
 It has been mathematically proven that an NN with at least one hidden layer and a nonlinear activation function can accurately approximate almost any finite-dimensional vector function on a compact set
o They can derive meaning from nonlinear or imprecise data
o Training takes a fraction of the CPU time that trial-and-error methods take
o Humans can make sense of them
Cons of Neural Net-Based Systems:
o Determining how to train an NN is usually costly
 Here we have mainly shifted the problem from how to solve the problem to how to teach the NN to solve it
o NNs can learn bad relationships, so the output is not what was expected
 This problem can be caused by feeding arbitrary, overly numerous, or bogus inputs to the network
o An NN is a mathematical black box and thus hard or even impossible to debug
o All input fields must be numeric
o NNs are difficult to implement
 Due to the high number of factors that must be determined without guidelines or rules
 These factors are: network structure, input/output choices, activation function, learning rate, training data issues, and weight initialization
o Over-fitting is very common
o NNs sometimes suffer from a phenomenon called catastrophic unlearning
 This occurs when an NN is given additional training data that completely undoes all previous learning
o In complex learning scenarios, lots of training data and CPU time are required for training
o NNs larger than a few thousand nodes are not very stable
 The curse of dimensionality seems to make the learning ability of large nets implode somewhat
 The network cycles, varying its weights forever, never getting closer to a solution!
Extensions to the Paradigm:
Other types of NN:
 Simple Recurrent Networks:
 Here the hidden layer of the network is also connected back to the input layer, with each connection having a weight of 1 (a sketch follows this list)
 They can be used for sequence prediction
 Hopfield Nets:
 Used to mimic associative memory within the brain
 These networks allow entire “patterns” to be stored and then recalled by the system
 Useful in image recognition
 The # of nodes needed to store the information can be calculated directly
 Committee of Machines:
 Multiple NNs are trained on the same data but with different initializations
 During usage, all the nets run on the input data and the best output from these NNs is chosen
 Self-Organizing Maps (SOMs):
 Useful for classification tasks
 Used to visualize relationships and classes within large, highly dimensional inputs
 Used in games for player modeling:
o By taking a # of dimensions of behavior into account and giving the AI system a better picture of the kind of player the human tends toward
 Uses an unsupervised learning technique called competitive learning
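A minimal sketch of the simple recurrent (Elman-style) wiring described above; the network is left untrained here just to show how the hidden activations are copied back as extra inputs (the class name and sizes are illustrative):

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class SimpleRecurrentNet:
    def __init__(self, n_in, n_hidden):
        n_total = n_in + n_hidden            # real inputs plus context (feedback) units
        self.w_h = [[random.uniform(-1, 1) for _ in range(n_total)]
                    for _ in range(n_hidden)]
        self.context = [0.0] * n_hidden      # state carried between time steps

    def step(self, x):
        full_input = list(x) + self.context
        hidden = [sigmoid(sum(w * v for w, v in zip(ws, full_input)))
                  for ws in self.w_h]
        self.context = hidden                # copied back with weight 1 for the next step
        return hidden

net = SimpleRecurrentNet(n_in=2, n_hidden=3)
for x in ([1, 0], [0, 1], [1, 1]):
    print([round(h, 2) for h in net.step(x)])   # same net, output depends on the sequence so far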
Other Types of NN Learning:
o Reinforcement Learning:
 It’s called learning with a critic
 Here the network outputs values and the supervisor simply declares whether the result was a good one
 It can appear to work unsupervised (because the critic could be the outside environment)
o Unsupervised Learning:
 Genetic algorithms are used to find the best weights on the connections between neurons
 Perturbation learning, where weights are iteratively and randomly adjusted and tested for improvement
 Competitive techniques, where a # of neurons “compete” for the right to learn by having their weights adjusted within the net (a winner-take-all sketch follows this list)
o For problems where the output can’t be known ahead of time, the main job of the network is to classify, cluster, find relationships, and compress the input data (SOMs are an example of this)
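A minimal winner-take-all sketch of competitive learning (the two-dimensional inputs, three competing neurons, and learning rate are illustrative assumptions):

import random

def closest_neuron(weights, x):
    # The winner is the neuron whose weight vector is nearest to the input.
    dist = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    return min(range(len(weights)), key=lambda i: dist(weights[i]))

def competitive_step(weights, x, lr=0.2):
    # Only the winning neuron learns: its weights move a little toward the input.
    win = closest_neuron(weights, x)
    weights[win] = [w + lr * (xi - w) for w, xi in zip(weights[win], x)]

neurons = [[random.random(), random.random()] for _ in range(3)]   # 3 competing neurons
samples = [[0.1, 0.1], [0.9, 0.9], [0.1, 0.15], [0.85, 0.95]]
for x in samples * 20:
    competitive_step(neurons, x)
print([[round(w, 2) for w in ws] for ws in neurons])   # winners settle on the two clusters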
- Design Considerations:
o Types of solutions:
 Great when you have a simple, modular system that maps inputs to outputs (in surprising, nonlinear, black-box ways)
 They are suitable for the tactical level
 They can’t be used for diplomacy systems in RTS games
 You can use an NN when you have specific inputs (incoming ball, player’s position, player skills) and specific outputs (each of the possible catch animations to get the ball)
o Agent Reactivity:
 NNs contribute to faster reactivity for the agent
o System Realism:
 It gives the system realism
 Take care about when to use an NN, because in some games its faults come across as “stupid AI” rather than “a human mistake”
o Development Limitations:
 This is the biggest concern: NNs require investment and energy to develop
 Also, if you use an NN for online learning, it is even harder to develop