Reinforcement learning in predator-prey interactions
A.TSOULARIS
IIMS, Massey University, Albany, PO BOX 102 904, Auckland, New Zealand
Abstract
In this paper we analyse the interactions of three biological species, a predator and two types of prey, the
models and mimics. The models are noxious prey that must be avoided by the predator and the mimics are
palatable prey that resemble in appearance the models, thus escaping consumption. We identify the predator
as a learning automaton with two actions, consume prey or ignore prey, each of which elicits a favourable or unfavourable probabilistic response from the environment. Two kinds of environment are considered,
stationary with fixed penalty probabilities and nonstationary with variable penalty probabilities. Both models
and mimics are assumed to grow logistically. A benefit function is constructed for the predator that
measures the consumption level at each stage of predation. Finally, strategies for increasing consumption are
derived in terms of the parameters of the learning process.
Keywords: learning automaton, reinforcement learning, mimics, models.
1. Introduction
In this paper we analyse in detail a linear reinforcement learning algorithm designed to allow a predator
(the learning automaton) to operate efficiently in terms of acceptable prey consumption in an environment
occupied by palatable and unpalatable prey and characterized by a penalty probability for each predator
action. The predator chooses to either ignore prey or consume prey.
A learning automaton is a deterministic or stochastic algorithm used in discrete-time systems to improve
their performance in random environments. A finite number of decisions (actions) are available to the
system, to which the environment responds probabilistically, either favourably or unfavourably. The purpose
of the learning automaton is to increase the probability of selecting an action that is likely to elicit a
favourable response based on past actions and responses. Greater flexibility can be built into modelling the
predatory behaviour by considering the predator as a variable-structure stochastic automaton with action
probabilities being updated at every stage using a reinforcement scheme. The book by Narendra and
Thathachar [1] offers a comprehensive introduction to the theory of learning automata. Linear reinforcement
algorithms are based on the simple premise of increasing the probability of the action that elicits a
favourable response by an amount proportional to the total value of all other action probabilities; otherwise,
the probability is decreased by an amount proportional to its current value. In this work we adopt the Linear
Reward-Penalty (LR-P) scheme as the predatory action strategy [2]. The probability updating algorithm for the two
predator actions, a1 (ignore) and a2 (eat), with respective penalty probabilities c1 and c2, is a Markov
chain and has the following form:
$$
\begin{aligned}
p_1(k+1) &= p_1(k) + \alpha\,[1 - p_1(k)] \\
p_2(k+1) &= (1-\alpha)\,p_2(k)
\end{aligned}
\qquad a(k) = a_1,\ \text{response is favourable},\quad 0 < \alpha < 1
$$
$$
\begin{aligned}
p_1(k+1) &= (1-\beta)\,p_1(k) \\
p_2(k+1) &= p_2(k) + \beta\,[1 - p_2(k)]
\end{aligned}
\qquad a(k) = a_1,\ \text{response is unfavourable},\quad 0 < \beta < 1\ (\beta \neq \alpha)
\tag{1}
$$
Analogous updates, with the same parameters $\alpha$ and $\beta$, apply when $a(k) = a_2$. The expectation of the consumption probability, $p_2(k+1)$, conditioned on $p_2(k)$, is given by:
$$
\bar p_2(k+1) = E[\,p_2(k+1)\,|\,p_2(k)\,] = (c_2 - c_1)(\alpha - \beta)\,p_2^2(k) + \bigl[1 + \alpha(c_1 - c_2) - 2\beta c_1\bigr]\,p_2(k) + \beta c_1
\tag{2}
$$
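As a minimal sketch (our own, not part of the original paper), the LR-P update in (1) and the conditional expectation in (2) can be written as follows; the function names and the sampling routine are assumptions made for illustration:

```python
import random

def lr_p_update(p2, action, favourable, alpha, beta):
    """One LR-P step for the two-action automaton; action 1 = ignore (a1), 2 = eat (a2).
    Returns the updated consumption probability p2."""
    if favourable:
        # reward: shift probability mass towards the chosen action by the fraction alpha
        return (1 - alpha) * p2 if action == 1 else p2 + alpha * (1 - p2)
    # penalty: shift probability mass away from the chosen action by the fraction beta
    return p2 + beta * (1 - p2) if action == 1 else (1 - beta) * p2

def expected_p2_next(p2, c1, c2, alpha, beta):
    """Conditional expectation of p2(k+1) given p2(k), equation (2)."""
    return ((c2 - c1) * (alpha - beta) * p2**2
            + (1 + alpha * (c1 - c2) - 2 * beta * c1) * p2
            + beta * c1)

def simulate_step(p2, c1, c2, alpha, beta, rng=random):
    """Sample one predator-environment interaction and apply the LR-P update."""
    action = 2 if rng.random() < p2 else 1        # eat with probability p2
    penalty = c2 if action == 2 else c1           # environment penalty probability
    favourable = rng.random() >= penalty
    return lr_p_update(p2, action, favourable, alpha, beta)
```

Averaging simulate_step over many trials at a fixed p2(k) reproduces the value returned by expected_p2_next.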
2. Prey population growth
The mimic and model populations, $X$ and $M$ respectively, at stage $k+1$ grow logistically as follows:
$$
\begin{aligned}
X(k+1) &= X(k) + g_1 X(k)\left[1 - \frac{X(k)}{K_1}\right] - (1 - c_2)\,p_2(k)\,X(k) \\
M(k+1) &= M(k) + g_2 M(k)\left[1 - \frac{M(k)}{K_2}\right] - c_2\,p_2(k)\,M(k)
\end{aligned}
\tag{3}
$$
where $g_1$, $g_2$ are the growth coefficients and $K_1$, $K_2$ the carrying capacities.
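A short sketch of the growth-with-predation update (3); the function name and return convention are our own:

```python
def update_populations(X, M, p2, c2, g1, g2, K1, K2):
    """Logistic growth of mimics X and models M with removal by predation, equation (3):
    consumed prey are mimics with probability 1 - c2 and models with probability c2."""
    X_next = X + g1 * X * (1 - X / K1) - (1 - c2) * p2 * X
    M_next = M + g2 * M * (1 - M / K2) - c2 * p2 * M
    return X_next, M_next
```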
3. The benefit function
The net expected benefit to the predator is assessed in terms of capturing a palatable mimic and the
unnecessary energy expended in capturing an unpalatable model [3]. If b and c are the parameters
associated with the consumption of a single mimic and model respectively, the expected net change in
benefit at stage k+1 is given by
$$
\Delta \bar B(k+1) = E[\,B(k+1)\,|\,k\,] - B(k) = bX(k)\,\bar p_2(k+1)\,(1 - c_2) - cM(k)\,\bar p_2(k+1)\,c_2
\tag{4}
$$
The objective of the predator is to modify its consumption probability at stage $k+1$ by adjusting the learning parameters $\alpha$ and $\beta$ at stage $k$ accordingly, so that the net change in benefit at the end of stage $k+1$ is maximal.
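Combining (2) and (4), the expected benefit change at the next stage can be evaluated as in the sketch below (our own naming):

```python
def expected_benefit_change(X, M, p2_next_bar, c2, b, c):
    """Expected net change in benefit at stage k+1, equation (4); b weights the gain from
    a consumed mimic, c the cost of a consumed model, and p2_next_bar is given by (2)."""
    return b * X * p2_next_bar * (1 - c2) - c * M * p2_next_bar * c2
```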
4. Stationary prey environments
A stationary prey environment is one in which the penalty probabilities $c_1$ and $c_2$ remain constant. In this case the consumption probability given in (2) converges to the asymptotic value:
$$
p_2^* = \frac{2\beta c_1 + \alpha(c_2 - c_1) - \sqrt{\alpha^2(c_1 - c_2)^2 + 4c_1c_2\beta^2}}{2(\alpha - \beta)(c_2 - c_1)}
\tag{5}
$$
with $\alpha \neq \beta$, $c_1 \neq c_2$. $p_2^*$ is asymptotically stable since $\bigl|\,1 - \sqrt{\alpha^2(c_1 - c_2)^2 + 4c_1c_2\beta^2}\,\bigr| < 1$.
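As a quick numerical check (ours, not from the paper), the value in (5) is a fixed point of the expected map (2):

```python
import math

def p2_star(alpha, beta, c1, c2):
    """Asymptotic consumption probability, equation (5); requires alpha != beta and c1 != c2."""
    root = math.sqrt(alpha**2 * (c1 - c2)**2 + 4 * c1 * c2 * beta**2)
    return (2 * beta * c1 + alpha * (c2 - c1) - root) / (2 * (alpha - beta) * (c2 - c1))

def expected_map(p2, alpha, beta, c1, c2):
    """Right-hand side of equation (2)."""
    return ((c2 - c1) * (alpha - beta) * p2**2
            + (1 + alpha * (c1 - c2) - 2 * beta * c1) * p2
            + beta * c1)

alpha, beta, c1, c2 = 0.5, 0.2, 0.3, 0.6
p = p2_star(alpha, beta, c1, c2)
print(abs(expected_map(p, alpha, beta, c1, c2) - p))   # ~0: p2* is a fixed point of (2)
```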
The predator’s net benefit increases monotonically if the penalty probability from consuming a model remains below the proportion of benefit from mimics in the entire prey population:
$$
c_2 < \frac{bX(k)}{bX(k) + cM(k)} \qquad \text{for all } k
\tag{6}
$$
The maximum rate of net benefit increase is determined by the sign of the derivatives $\partial \Delta \bar B(k+1)/\partial \alpha$ and $\partial \Delta \bar B(k+1)/\partial \beta$, and consequently by the sign of the derivatives $\partial \bar p_2(k+1)/\partial \alpha$ and $\partial \bar p_2(k+1)/\partial \beta$:
$$
\frac{\partial \bar p_2(k+1)}{\partial \alpha} = (c_1 - c_2)\,p_2(k)\,p_1(k),
\qquad
\frac{\partial \bar p_2(k+1)}{\partial \beta} = (c_1 - c_2)\,p_2^2(k) - 2c_1\,p_2(k) + c_1
$$
The following table outlines the course of action for the predator on the basis of the two penalty probabilities and the current consumption probability, provided (6) holds:

| Penalty probabilities | Consumption probability at stage $k$ | Action at stage $k+1$ |
|---|---|---|
| $c_1 > c_2$ | $p_2(k) < \dfrac{\sqrt{c_1}}{\sqrt{c_1} + \sqrt{c_2}}$ | Retain the existing values of $\alpha$ and $\beta$ |
| $c_1 > c_2$ | $p_2(k) > \dfrac{\sqrt{c_1}}{\sqrt{c_1} + \sqrt{c_2}}$ | Retain the existing value of $\alpha$ but set $\beta = 0$ |
| $c_1 < c_2$ | $p_2(k) < \dfrac{\sqrt{c_1}}{\sqrt{c_1} + \sqrt{c_2}}$ | Retain the existing value of $\beta$ but set $\alpha = 0$ |
| $c_1 < c_2$ | $p_2(k) > \dfrac{\sqrt{c_1}}{\sqrt{c_1} + \sqrt{c_2}}$ | Set both $\alpha = 0$, $\beta = 0$, as no further learning is necessary, and set $p_2(k) = 1$ |
| $c_1 = c_2$ | $p_2(k) < \dfrac{1}{2}$ | Retain the existing values of $\alpha$ and $\beta$ |

Table 1. Actions for benefit improvement.
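The decision rules of Table 1 translate into a small policy function; the sketch below is our own construction (it assumes condition (6) holds, and cases not listed in the table simply retain the current parameters):

```python
import math

def adjust_learning_parameters(p2, c1, c2, alpha, beta):
    """Parameter adjustment of Table 1; returns (alpha, beta, p2) for the next stage."""
    threshold = math.sqrt(c1) / (math.sqrt(c1) + math.sqrt(c2))
    if c1 > c2:
        # increasing alpha always raises expected consumption; beta helps only below the threshold
        return (alpha, beta, p2) if p2 < threshold else (alpha, 0.0, p2)
    if c1 < c2:
        if p2 < threshold:
            # alpha never helps here, so switch it off and keep beta
            return (0.0, beta, p2)
        # no learning direction raises expected consumption: stop learning and always eat
        return (0.0, 0.0, 1.0)
    # c1 == c2 (threshold = 1/2): Table 1 prescribes retaining both parameters for p2 < 1/2
    return (alpha, beta, p2)
```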
5. Nonstationary prey environments
In this section we analyse the performance of the learning algorithm of the previous section when each penalty probability, $c_i$, $i = 1, 2$, is a monotonically increasing function of the probability of the respective action $a_i$, $i = 1, 2$. We base this on the reasonable assumption that if the predator ignores prey at a certain overall frequency, the palatable prey amongst them are ignored at a lower rate, and, by the same token, we extend this assumption to the frequency of consumption. Thus at each stage $k$:
$$
c_1(k) = r_1\,p_1(k), \quad 0 < r_1 < 1, \qquad\qquad c_2(k) = r_2\,p_2(k), \quad 0 < r_2 < 1
$$
The two coefficients, r1 and r2, can be interpreted respectively as the fraction of falsely avoided mimics
in the proportion of overlooked prey, and the fraction of falsely consumed models in the proportion of
consumed prey. Values of either factor close to 0 indicate that the predator commits either penalty
infrequently, whereas values close to 1 indicate a large penalty frequency. The complementary expressions,
$1 - r_1$ and $1 - r_2$, may be thought of as the predatory efficiency in avoiding the wrong prey and consuming
the right prey respectively.
The expectation of the action probability, $p_2(k+1)$, conditioned on $p_2(k)$, is a third-order polynomial in $p_2(k)$:
$$
\bar p_2(k+1) = E[\,p_2(k+1)\,|\,p_2(k)\,] = (r_1 + r_2)(\alpha - \beta)\,p_2^3(k) + (3r_1\beta - 2r_1\alpha - r_2\alpha)\,p_2^2(k) + (1 + r_1\alpha - 3r_1\beta)\,p_2(k) + r_1\beta
\tag{7}
$$
The asymptotic probability can be found as one of the three roots of the resulting cubic polynomial, based on the work of Cardan [4]. For algebraic convenience we shall confine ourselves to the case $\alpha = \beta$, in which case:
$$
\bar p_2(k+1) = E[\,p_2(k+1)\,|\,p_2(k)\,] = \alpha(r_1 - r_2)\,p_2^2(k) + (1 - 2\alpha r_1)\,p_2(k) + \alpha r_1
\tag{8}
$$
with $r_1 \neq r_2$. The scheme admits the asymptotically stable probability:
$$
p_2^* = \frac{\sqrt{r_1}}{\sqrt{r_1} + \sqrt{r_2}}
\tag{9}
$$
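A short sketch (ours) of the nonstationary expected recursion (8) converging to the fixed point (9):

```python
import math

def nonstationary_expected_map(p2, alpha, r1, r2):
    """Right-hand side of equation (8): conditional expectation of p2(k+1) with
    penalty probabilities c1(k) = r1*p1(k), c2(k) = r2*p2(k) and alpha = beta."""
    return alpha * (r1 - r2) * p2**2 + (1 - 2 * alpha * r1) * p2 + alpha * r1

r1, r2, alpha = 0.2, 0.4, 0.3
p = 0.9
for _ in range(200):                     # iterate the expected map until it settles
    p = nonstationary_expected_map(p, alpha, r1, r2)

p_star = math.sqrt(r1) / (math.sqrt(r1) + math.sqrt(r2))   # equation (9)
print(p, p_star)                         # the two values agree
```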
The expected net change in benefit is now
$$
\Delta \bar B(k+1) = E[\,B(k+1)\,|\,k\,] - B(k) = bX(k)\,\bar p_2(k+1) - r_2\,[bX(k) + cM(k)]\,\overline{p_2^2}(k+1)
\tag{10}
$$
where
$$
\overline{p_2^2}(k+1) = E\bigl[\,p_2^2(k+1)\,\big|\,p_1(k), p_2(k), c_1(k), c_2(k)\,\bigr] = [p_2(k) + \alpha p_1(k)]^2\,[p_2(k)(1 - c_2(k)) + p_1(k)c_1(k)] + (1 - \alpha)^2\,\bigl[p_2^2(k)\,p_1(k)(1 - c_1(k)) + p_2^3(k)\,c_2(k)\bigr]
$$
We treat the learning parameter, $\alpha$, as the decision variable at each stage, $k$, that influences the magnitude of the expected change in the net benefit at the next stage, $k+1$. To test whether the expected benefit is continually increasing we consider the partial derivative $\partial \Delta \bar B(k+1)/\partial \alpha$. Since $\partial^2 \Delta \bar B(k+1)/\partial \alpha^2 < 0$ always, if a maximum benefit is attainable at the next stage it can be found by setting $\partial \Delta \bar B(k+1)/\partial \alpha = 0$ and solving for the optimal $\alpha$:
$$
\alpha^*(k) = \frac{bX(k)\,(r_1 - r_2)\left[p_2(k) - \dfrac{\sqrt{r_1}}{\sqrt{r_1} + \sqrt{r_2}}\right]\left[p_2(k) - \dfrac{\sqrt{r_1}}{\sqrt{r_1} - \sqrt{r_2}}\right] - 2p_2(k)\,r_2\,[bX(k) + cM(k)]\,\bigl[r_1 p_1^2(k) - r_2 p_2^2(k)\bigr]}{2r_2\,[bX(k) + cM(k)]\,\bigl\{p_1(k)\,p_2(k) + \bigl[r_1 p_1^2(k) - r_2 p_2^2(k)\bigr]\,[p_1(k) - p_2(k)]\bigr\}}
\tag{11}
$$
assuming $p_2(k) \neq 0$ and $r_1 \neq r_2$.
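Rather than hard-coding formula (11), a numerical sketch (ours) can locate the benefit-maximising learning parameter by evaluating (10) over a grid of alpha values:

```python
def expected_benefit_change_nonstationary(alpha, p2, X, M, r1, r2, b, c):
    """Expected net benefit change (10) for a given alpha (= beta), with the
    nonstationary penalties c1 = r1*p1 and c2 = r2*p2."""
    p1 = 1.0 - p2
    c1, c2 = r1 * p1, r2 * p2
    p2_bar = alpha * (r1 - r2) * p2**2 + (1 - 2 * alpha * r1) * p2 + alpha * r1   # eq. (8)
    p2_sq_bar = ((p2 + alpha * p1)**2 * (p2 * (1 - c2) + p1 * c1)
                 + (1 - alpha)**2 * p2**2 * (p1 * (1 - c1) + p2 * c2))            # below (10)
    return b * X * p2_bar - r2 * (b * X + c * M) * p2_sq_bar

# crude grid search for the maximising learning parameter at the current stage
p2, X, M, r1, r2, b, c = 0.6, 100.0, 500.0, 0.2, 0.4, 2.0, 1.0
alphas = [i / 1000 for i in range(1001)]
alpha_star = max(alphas, key=lambda a: expected_benefit_change_nonstationary(a, p2, X, M, r1, r2, b, c))
print(alpha_star)
```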
Table 2 below states the necessary conditions on the portion of the available palatable benefit (due to mimics) in the entire prey population and the range of consumption probability for the existence of a nonnegative $\alpha^*(k)$.

| Mimic benefit portion | Consumption probability at stage $k$ |
|---|---|
| $\dfrac{bX(k)}{2[bX(k) + cM(k)]} > r_2$ | $0 < p_2(k) < \dfrac{\sqrt{r_1}}{\sqrt{r_1} + \sqrt{r_2}}$ |
| $\dfrac{bX(k)}{2[bX(k) + cM(k)]} < r_2$ | $\min\left\{\dfrac{bX(k)}{2r_2[bX(k) + cM(k)]},\ \dfrac{\sqrt{r_1}}{\sqrt{r_1} + \sqrt{r_2}}\right\} < p_2(k) < \max\left\{\dfrac{bX(k)}{2r_2[bX(k) + cM(k)]},\ \dfrac{\sqrt{r_1}}{\sqrt{r_1} + \sqrt{r_2}}\right\}$ |

Table 2. Consumption probability range for maximum benefit increase.
The following figures display the consumption probability, the ratio $m(k) = \dfrac{bX(k)}{2[bX(k) + cM(k)]}$, and the net benefit change for $\alpha = \beta = 0.5$, $r_1 = r_2 = 0.2$, $p_2(0) = 1.0$, $b = 2$, $c = 1$, $g_1 = g_2 = 2.5$, $X(0) = 100$, $K_1 = 1000$, $M(0) = 500$, $K_2 = 5000$.
[Figure 1 plot: consumption probability $p_2(k)$ and ratio $m(k)$ versus stage $k$, $k = 0, \dots, 100$.]
Figure 1. The consumption probability $p_2(k)$ is outside the range given in Table 2 for some $k$.
[Figure 2 plot: net expected change in benefit versus stage $k$, $k = 0, \dots, 100$.]
Figure 2. The net benefit change oscillation in time.
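For completeness, a compact simulation sketch (our own construction, using the parameter values quoted above) produces trajectories like those plotted in Figures 1 and 2; it iterates the expected learning dynamics rather than sampled actions:

```python
def simulate(stages=100):
    """Iterate the expected learning recursion (8), the prey growth (3) and the
    expected benefit change (10) with the parameter values quoted above."""
    alpha, r1, r2 = 0.5, 0.2, 0.2               # alpha = beta
    b, c, g1, g2 = 2.0, 1.0, 2.5, 2.5
    X, M, K1, K2 = 100.0, 500.0, 1000.0, 5000.0
    p2 = 1.0
    history = []
    for k in range(stages):
        p1 = 1.0 - p2
        c1, c2 = r1 * p1, r2 * p2               # nonstationary penalty probabilities
        m = b * X / (2 * (b * X + c * M))       # ratio m(k) plotted in Figure 1
        p2_next = alpha * (r1 - r2) * p2**2 + (1 - 2 * alpha * r1) * p2 + alpha * r1
        p2_sq_next = ((p2 + alpha * p1)**2 * (p2 * (1 - c2) + p1 * c1)
                      + (1 - alpha)**2 * p2**2 * (p1 * (1 - c1) + p2 * c2))
        dB = b * X * p2_next - r2 * (b * X + c * M) * p2_sq_next   # equation (10)
        history.append((k, p2, m, dB))
        X = X + g1 * X * (1 - X / K1) - (1 - c2) * p2 * X          # equation (3)
        M = M + g2 * M * (1 - M / K2) - c2 * p2 * M
        p2 = p2_next
    return history

for k, p2, m, dB in simulate()[:5]:
    print(f"k={k}  p2={p2:.3f}  m={m:.3f}  dB={dB:.1f}")
```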
6. Discussion
In this paper we have explored the concept of a predator as a learning automaton feeding on prey that can
be broadly categorized as either palatable (the mimics) or unpalatable (the models). The predator’s action is
either to attack the prey or simply to ignore it. Each action elicits a probabilistic response from the
environment that is classified as favourable or unfavourable. A response is deemed favourable if the prey
consumed is of the palatable type or if the prey ignored is unpalatable and deemed unfavourable if the prey
ignored is palatable or the prey consumed is unpalatable. This distinction made when ignoring prey is
related to the predator’s ability to discriminate effectively against models. If the predator senses that the
prey ignored is of palatable nature it will decrease the frequency of avoidance and vice versa. A suitable
function has been constructed to take into account the net energetic benefit to the predator. Conditions for
maximal increase in benefit have been derived dependent upon the prey populations, and the key
coefficients r1 and r2. We have attempted in this work to outline a simple theoretical framework for
predator learning from which more comprehensive models can originate in the future. We believe that the
learning automaton methodology can be a useful tool in modeling discriminatory predatory behaviour.
References
1. K. Narendra and M. A. L. Thathachar, Learning Automata: An Introduction, Prentice Hall, Englewood Cliffs, NJ, 1989.
2. M. L. Tsetlin, Automaton Theory and Modeling of Biological Systems, Academic Press, New York, 1973.
3. G. F. Estabrook and D. C. Jespersen, Strategy for a predator encountering a model-mimic system, The American Naturalist, 108(962), July-August 1974, 443-457.
4. W. L. Ferrar, Higher Algebra, Oxford University Press, Oxford, 1962.