Go and Percolation Theory

Author: James Kirkpatrick
Partner: Karen Chalmers
29th April 2003
In this project, a program is developed to analyse a database of over 7000 professional
Go games obtained from gobase.org. The objective is to study the probability density
function of cluster size and to compare it with the results predicted by percolation theory.
The results from real games are compared with those from randomly generated games.
The hypothesis being tested is that Go games resemble, at all stages, a percolation
system at its critical point: there exists a power law region, and the cut-off cluster size for
this power law behaviour is larger, and grows more slowly, than the cut-off for random
games (which correspond exactly to a percolation problem). Applications to the
development of computer Go strategist algorithms - that is, algorithms that pursue
long-term, poorly defined goals - are also considered.
1.0 Introduction
This project will study the link between percolation theory and the game of Go.
In percolation theory a large lattice is randomly occupied by coloured dots, each site
being occupied with a particular probability. When this probability reaches a certain
value - about 0.59 for a 2-d square lattice - the clusters of adjacent dots start spanning
from one side of the lattice to the other; this is known as percolation.
The motive behind comparing this model to the game of Go is simple: the
patterns produced in the game seem remarkably similar to those produced by the fractal
percolating clusters. The question then is whether this superficial similarity masks a
deeper connection. We will try and answer this question by analysing the distribution of
cluster sizes1 and by seeing whether or not this distribution follows patterns similar to
those of percolation theory.
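The percolation model just described is easy to sketch in code. The following minimal illustration is not taken from the project; the lattice size, occupation probabilities, function names and the top-to-bottom spanning criterion are our own choices:

```python
import random
from collections import deque

def spans(grid):
    """Return True if a cluster of occupied sites connects the top row
    to the bottom row (site percolation, 4-neighbour adjacency)."""
    n = len(grid)
    seen = [[False] * n for _ in range(n)]
    queue = deque((0, c) for c in range(n) if grid[0][c])
    for _, c in queue:
        seen[0][c] = True
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and grid[rr][cc] and not seen[rr][cc]:
                seen[rr][cc] = True
                queue.append((rr, cc))
    return False

def spanning_fraction(n, p, trials, seed=0):
    """Fraction of random n x n lattices, occupied with probability p,
    that contain a spanning cluster."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials
```

Well below the critical probability the spanning fraction is essentially zero, and well above it essentially one; the crossover sharpens as the lattice grows.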
1.1 The game of Go and computer Go
Go is probably the oldest board game on Earth: it was developed over 4000
years ago in China, and the first text concerned exclusively with the game, the Yi Zhi
(essence of Go) by Ban Gu(1), dates back to the 1st century AD. The game was
introduced to Japan in the 17th century and enjoyed such popularity and prestige that
Go tournaments were held annually at the Shogun's court from 1667 to 1863. So
what is it that makes this game so fascinating? One of the factors that certainly fascinates
people is the beautiful patterns that are drawn on the board at the end of the game,
patterns which form a 'mind map', a record of the players' intentions and strategies.
Studying the nature of these patterns is the aim of this project, but first a couple of words
on the rules that govern the game. A summary of the rules of Go can be found in
appendix A, whereas for a better account of the rules of the game one can check
http://gobase.org/studying/rules.
Computer Go, on the other hand, is a recent phenomenon: although some Go
programs were written in the '70s as AI exercises, it was not until the '80s that
Go started occupying the limelight for games programmers. Go is a fascinatingly complex
game to model(2): unlike chess, the number of possible moves at any point is huge, and
very rarely does a single move change the board position radically. This means that if
one were to write a program that simply searched through all the possible moves, using some
heuristic function to decide which would be the ideal move, the program could not
look many moves ahead because of the rapid increase in the number of permutations.
In other words, because the search tree is very wide, it cannot be very deep. It is
nevertheless possible to write good Go 'Tacticians'2, that is programs that can accomplish certain
1
In percolation theory the cluster number is normally considered: the number of
clusters of a particular size per lattice site. In our case, we simply looked at the cluster size
distribution, because all the systems analysed were of the same size.
2
This goal had already been achieved by the very first computer Go programs, written by Al Zobrist and Ryder(2).
goals - e.g. hinder the connection of enemy stones, save the program's stones, make an
eye, put an enemy cluster in Atari etc. - whilst avoiding certain other situations - e.g. being killed,
playing in the enemy's territory, playing an easily isolated stone etc. Each of these goals
is given a certain positive or negative value and the 'best' moves are analysed further.
The Go Tactician does not have an 'understanding' of the game; all it does is try
to achieve goals which are secondary to the main goal of Go: conquering territory. The
purpose of our research is to find a simple mathematical quality possessed by clusters
that achieve the conquest of territory. This principle could then be used as a Go Strategist, i.e. as a
part of the program that analyses the moves suggested by the Tactician (or by a database
of patterns) and helps choose which one is best. We believe that this simple principle is
that successful Go players subconsciously build clusters which are similar to percolation
theory clusters at the critical point, i.e. clusters whose size distribution probability density
function (PDF) follows a power law. One of the aims of this project is to
measure the specific value of the critical exponent.
2.0 Method
The main difficulty with this project is that some of the basic assumptions of
percolation theory are clearly not applicable to Go games. In particular, percolation theory
assumes an infinite lattice, or at least a lattice large enough that single lattice points are
irrelevant. Go, on the other hand, is played on a 'tiny' 19x19 board, which makes the total
number of lattice points 361; the biggest possible cluster of a single colour would then be
of size about 180. In general, scale-free behaviour is considered to be present when we see
power laws spanning several decades; in our case we clearly cannot expect such a range,
seeing as cluster sizes barely span two decades! Unfortunately there is nothing that can be
done to overcome this difficulty, so our results will unavoidably be open to
interpretation.
The real game data was obtained from gobase.org, which stores over 7000
professional games in the same digital format, the Smart Game Format (SGF)(5). The
number of games we average our results over, and the statistical techniques we used -
such as logarithmic binning - at least ensure that the PDFs produced look smooth and
are easy to fit. This is important because sophisticated mathematical fitting of the curves
and adjusting of the parameters will not be used.
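The report does not reproduce its SGF-reading code. As an illustration, a minimal move reader could be sketched as follows; the `;B[pd]`-style move properties and the a-s coordinate letters follow the published SGF specification, while the function names and the decision to ignore all other properties are our own assumptions:

```python
import re

# Moves in an SGF record appear as ";B[pd]" or ";W[dp]", with letters
# a-s indexing the 19 columns and rows, and ";B[]" denoting a pass.
MOVE_RE = re.compile(r";([BW])\[([a-s]{2}|)\]")

def moves(sgf_text):
    """Yield (colour, (col, row)) tuples from an SGF game record;
    a pass yields (colour, None)."""
    for colour, coord in MOVE_RE.findall(sgf_text):
        if coord:
            yield colour, (ord(coord[0]) - ord("a"), ord(coord[1]) - ord("a"))
        else:
            yield colour, None
```

A sketch like this deliberately skips setup properties (handicap stones, comments and so on), which a full reader of the gobase.org collection would have to handle.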
The PDFs at various times of the game are obtained by implementing a
particular modification of the Hoshen-Kopelman(3) algorithm, as explained in Appendix
B.
The results from real games were then compared to results from randomly
generated games. These games provide a 'neutral' example, as we expect them to show
scale-free behaviour only when the probability of occupation approaches the critical
probability of occupation. The random games mimic percolation theory's dependence on
occupation probability almost perfectly; the actual games will show whether Go players place
stones in such a way as to create a system which is closer to criticality.
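A random game of the kind described can be sketched as below. Note that the report's own generator also applied the capture rule (stones can be eaten, see section 3.1); this minimal version deliberately omits captures, so it is exactly random site occupation of the lattice. Names and defaults are assumptions:

```python
import random

def random_game(n_moves=300, size=19, seed=0):
    """Generate a 'random game': black and white alternately occupy
    uniformly chosen empty intersections.  Captures are ignored here,
    so the board is simply a randomly occupied lattice."""
    rng = random.Random(seed)
    empty = [(r, c) for r in range(size) for c in range(size)]
    rng.shuffle(empty)            # a random permutation of the board
    game = []
    for i in range(min(n_moves, len(empty))):
        colour = "B" if i % 2 == 0 else "W"
        game.append((colour, empty[i]))
    return game
```

After m moves of such a game the occupation probability of the lattice is simply m/361, which is what makes the random games directly comparable to a percolation system.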
3.0 Results
In this section we will consider the evidence that points towards the existence of
scale free behaviour in the game of Go. We will begin by looking at the results from
random games and then compare those results to the ones from actual games.
3.1 Random Games
The PDFs for random games were obtained by generating 4000 games of 300
moves each, and calculating the aggregate PDF for the whole set. In order to cope with
the fact that very few data points are available at large cluster sizes, the PDF was
plotted with logarithmic binning(4): in other words, the data was collected in bins whose
width grows geometrically. This ensures that, at large cluster sizes, there is still a
sufficient number of data points per bin. The same technique was used to improve the data
from the actual games. Figure 1 shows the difference between logarithmically and
non-logarithmically binned data from a particular series of the actual games.
[Log-log plot: two series, 'binned data' and 'un-binned data', of the PDF against cluster size.]
Figure 1: Logarithmically binned and non-logarithmically binned probability density functions for
cluster number distribution.
The logarithmically binned data is clearly closer to what we might expect to see: it is
monotonically decreasing and clearly it seems to follow a smooth curve of some kind.
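The binning procedure just described can be sketched as follows. The bin base, the normalisation by bin width and the geometric bin centres are our own choices; the report does not specify them:

```python
import math

def log_bin(sizes, base=2.0):
    """Histogram cluster sizes into bins whose widths grow geometrically,
    then divide each count by the bin width (and the total count) to
    estimate the PDF.  Returns a list of (bin_centre, density) pairs."""
    if not sizes:
        return []
    edges = [1.0]
    while edges[-1] <= max(sizes):
        edges.append(edges[-1] * base)
    counts = [0] * (len(edges) - 1)
    for s in sizes:
        i = min(int(math.log(s, base)), len(counts) - 1)
        counts[i] += 1
    total = len(sizes)
    out = []
    for i, c in enumerate(counts):
        width = edges[i + 1] - edges[i]
        centre = math.sqrt(edges[i] * edges[i + 1])  # geometric mid-point
        out.append((centre, c / (total * width)))
    return out
```

Dividing by the bin width is what keeps the wide bins at large s from being artificially inflated relative to the narrow bins at small s.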
Before looking at the data, let's remind ourselves what we expect to see from
percolation theory(4). Percolation theory predicts that the distribution of cluster numbers
should look like:

n(s,p) = q0 s^(-τ) f(q1 s^σ |p - pc|)
(3.1)

We chose to use the function f(x) = exp(-x^(1/σ)), so that we obtained the following
ansatz for the PDF:

n(s,p) = q0 s^(-τ) exp(-s |p - pc|^(1/σ)) = q0 s^(-τ) exp(-s/sξ)
(3.2)

which assumes a particular relation between the cut-off cluster size sξ and p:

sξ = |pc - p|^(-1/σ)
(3.3)

sξ is a measure of the correlation length of the clusters. We plotted our PDF
every tenth move and fitted equation 3.2, adjusting the parameters until the graphs
seemed to overlap, at least for s > 2. We then inferred the values of τ, σ and sξ.
There will certainly be a source of error in these 'manual' estimates, but seeing as there
are many other statistical defects in the data, a more accurate and laborious method for
fitting the curves was not deemed practical.
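For reference, equations 3.2 and 3.3 can be written out directly in code. The parameter defaults below (q0 = 1, τ = 2, σ = 0.4, pc = 1.18) stand in for the manually fitted values and should be treated as placeholders:

```python
import math

def cutoff(p, pc=1.18, sigma=0.4):
    """Equation 3.3: cut-off cluster size as a function of occupation p."""
    return abs(pc - p) ** (-1.0 / sigma)

def ansatz(s, p, q0=1.0, tau=2.0, pc=1.18, sigma=0.4):
    """Equation 3.2: a power law with an exponential cut-off at s_xi."""
    return q0 * s ** (-tau) * math.exp(-s / cutoff(p, pc, sigma))
```

The cut-off grows, and the ansatz approaches a pure power law, as p approaches pc.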
Here are three probability density functions after 60, 120 and 240 moves
respectively, corresponding to probabilities of occupation of 0.166, 0.332 and 0.665.
The complete set of PDFs is shown in Appendix C. It should be noted that
these probabilities of occupation are aggregate values for black and white together,
i.e. the probability of occupation by stones of a single colour would be half as much.
[Three log-log plots, '60th move', '120th move' and '240th move', each showing a 'data' and a 'fit' series for the PDF against cluster size.]
Figure 2: Log-log plots of the probability density functions of the cluster number distribution at the 60th, 120th and 240th move respectively. The fit corresponds to equation 3.2 with τ = 2.
It should be noted that the fits correspond to the ansatz 3.2, except for the
value of the constant of proportionality q0, which had to be adjusted to make sure the data
and the fit overlapped. The value of τ, the critical exponent, is 2. It is easy to pick
out the trend in these graphs: the section that looks like a straight line on the log-log plot
(i.e. the 'scale-less' section) becomes longer as the probability of occupation increases.
This means that the value of sξ increases as p approaches pc (in percolation theory pc =
0.59; seeing as we calculate the probability of occupation as the sum of p for the white
and black stones, pc = 1.18).
We also plotted sξ against (pc - p):
[Log-log plot of the cut-off cluster size against |p - pc|, together with a power-law fit.]
Figure 3: Log-log plot of the variation of the cut-off cluster size with (pc - p), the value of which varies
between 0.3 and 1.1. The purple line is a simple (pc - p)^(-1/σ) plot, with σ = 0.4. The value of pc was taken
as 1.18.
As we can see, this agrees pretty well with equation 3.3 over most of the range. We have
now calculated the values of τ and σ and can compare them to the results of
percolation theory. In the random games τ = 2 ± 0.1 and σ = 0.40 ± 0.013; percolation theory for
a 2-d lattice predicts τ = 187/91 ≈ 2.0549 and σ = 36/91 ≈ 0.3956, in good agreement with
our measurements.
What does this represent? The random games are indeed a good percolation theory
model; in fact the only difference between our random games and a true finite percolation
system is that in our games stones can be eaten. However it seems that - at least for small
lattices and for p < 0.8 - this does not introduce any significant deviation from the
theory. Having looked at these simple results, we now have a good frame of
reference from which to consider the actual games.
3
The error is estimated as the smallest variation of the parameters used to fit the data.
3.2 Actual Games
At first, the actual games were analysed just like the random games: the value of
the PDF was calculated every 10 moves, i.e. at equal intervals of p. The following graph
shows the results of this experiment. The PDFs have not been normalised to
the total number of clusters, hence the curves grow further 'out' as more stones are played.
[Log-log plot: curves of the total number of clusters of size s, one curve per sampled value of p.]
Figure 4: Non-time normalised total number of clusters of size s for 7000 actual games (log-log plot).
The various curves represent different values of p.
Unfortunately this data does not look very promising: we cannot see a power
law region, and the curves are not very smooth. In order to obtain better results, we decided
to normalise the game length, i.e. to sample the cluster number distribution at a
different move interval for each game, ensuring that we sample the PDF 30
times from each game.
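The sampling scheme can be sketched like this. The integer-division step is our reading of 'dividing the length of games by 30' discussed below; for typical game lengths it produces between 30 and 39 sample points per game:

```python
def sample_points(game_length, samples=30):
    """Move numbers at which the PDF is recorded: every
    game_length // samples moves, so every game contributes at least
    `samples` PDFs regardless of its length."""
    step = max(1, game_length // samples)
    return list(range(step, game_length + 1, step))
```

Because the step is rounded down, games whose length is not a multiple of 30 yield a few extra samples at the end; these extra samples are exactly the incomplete PDFs discussed below.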
With this procedure, the graph analogous to Figure 4 is:
[Log-log plot: curves of the total number of clusters of size s, one curve per thirtieth of the game.]
Figure 5: Time normalised total number of clusters of size s for 7000 actual games (log-log plot). The
various curves represent increasing percentages of the complete game.
This time normalisation certainly produces curves which are more regular and
which have a longer section exhibiting power law behaviour. The time normalisation
unfortunately introduces a serious problem: much of the analysis that we performed
for the random games can no longer be done, because each PDF does not correspond
uniquely to a value of p. Also, the method of dividing the length of games by 30 creates
between 30 and 39 PDFs per game. This means that when we are averaging, whereas the
first 30 PDFs are averaged over the whole set of games, the PDFs between 31 and 39 are
incomplete and might therefore show some peculiar behaviour.
Once we accept that the probability of occupation is not a relevant variable, we
realise that the need for a time normalisation actually represents something very specific:
although a game is formally over only when both players pass and the future of every
piece of territory has been completely determined, very often games end when one of
the two players realises he will not be able to win and resigns. Interpreting this with our
assumption on percolation and the conquest of territory, the game is over when one of the
two players realises that he will not be able to achieve his goal: he will not be able to
reach percolation. The whole game should therefore be interpreted as a struggle to
achieve the percolating cluster, and it is clear in retrospect that the relevant 'time-scale'
of the game is not so much the number of stones on the board as the stage that the
'percolation battle' is at.
Having clarified what we mean by time normalisation, we can now look at the
data that we obtained for the actual games. We picked the 6th, 12th and 24th time steps out
of 30 to represent the whole data set. All the PDFs are shown in Appendix C.
[Three log-log plots, '6th PDF (out of 30)', '12th PDF (out of 30)' and '24th PDF (out of 30)', each showing a 'data' and a 'fit' series.]
Figure 6: Log-log plots of the probability density functions of the cluster number distribution at the 6th,
12th and 24th thirtieths of the game respectively. The fit corresponds to equation 3.4.
We cannot use the scaling ansatz 3.1 anymore, because of the lack of a well-defined
probability of occupation. However, inspired by equation 3.2, we can assume a
distribution of the type:

n(s) = q0 s^(-τ) exp(-(s/sξ)^α)
(3.4)

i.e. a power law modulated by a stretched exponential. In the following diagrams
we show how the quantities τ, α and sξ change at various stages of the game.
[Three plots against thirtieths of the game: 'critical exponent Tau' (axis 0 to 1.8), 'exponent alpha' (axis 0 to 2) and 'cutoff cluster size' (axis 0 to 12).]
Figure 7: values of the exponents and parameters of the stretched-exponential fit at various thirtieths of
the game. Note that the values for sections after the 30th are not statistically significant.
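The manual fitting that produced these parameter values amounts to minimising the mismatch between the data and the stretched-exponential ansatz over a small grid of parameter values. A crude automated stand-in might look like this; the grids, the log-space error measure and the convention of fixing q0 from the first data point are our own choices:

```python
import math

def stretched(s, tau, alpha, s_xi):
    """Equation 3.4 up to the constant q0: s^-tau * exp(-(s/s_xi)^alpha)."""
    return s ** (-tau) * math.exp(-((s / s_xi) ** alpha))

def best_fit(pdf, taus, alphas, cutoffs):
    """Try every parameter combination on a grid and keep the one with
    the smallest squared error in log-space.  `pdf` is a list of
    (s, density) pairs with positive densities."""
    best, best_err = None, float("inf")
    for tau in taus:
        for alpha in alphas:
            for s_xi in cutoffs:
                s0, n0 = pdf[0]
                q0 = n0 / stretched(s0, tau, alpha, s_xi)
                err = sum((math.log(n) - math.log(q0 * stretched(s, tau, alpha, s_xi))) ** 2
                          for s, n in pdf)
                if err < best_err:
                    best, best_err = (tau, alpha, s_xi), err
    return best
```

The grid spacing plays the same role as the 'smallest variation of the parameters' used as the error estimate in the text.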
There are various observations to make about these graphs. For starters, one
should note that the values after the thirtieth thirtieth are not significant, due to the
considerations above on time normalisation. Also, one might notice that the graphs are
very step-like: this is because the fitting of the curves was performed by manually
changing the values of the parameters until the best fit was obtained. If we
wanted to estimate the errors in these measurements they would therefore be ±0.1, ±0.05
and ±1 for the measurements of τ, α and sξ respectively; these figures - as in the case of the
random games - are the smallest variations of the parameters used in fitting the data. With
these error margins in mind, the graphs seem to suggest that τ and α are constant
throughout, whereas sξ increases monotonically, and certainly at a slower rate than the sξ
of figure 3. In fact, to observe this better, let us plot the variation of sξ at various
thirtieths of the game for both the actual and the random games.
[Plot of the cut-off cluster size against thirtieths of the game, with one curve for 'real games' and one for 'random games'.]
Figure 8: Cut-off cluster size for real and random games.
The significance of this graph is considerable: the cut-off cluster size for the
random games increases like an exponential, and if we were able to occupy 59% of the
board with stones of a particular colour, we know that it would diverge, in accordance
with percolation theory. The cut-off cluster size for the real games, however, increases
much more slowly, yet it is usually greater, indicating that the power law region where
scale-free behaviour is present is longer. This is in direct agreement with the hypothesis
put forward: in Go the system is always in a critical state; the only reason why we see a
finite cut-off cluster size is that the board is finite in size, and the reason why we see an
increase in the cut-off is simply that more stones are being placed on the board.
4.0 Conclusion
We have analysed the PDFs for random and actual games. Both show sections
where the cluster size distribution follows a power law, but the actual games have longer
sections, with a cut-off that does not seem to have the same dependence on occupation
probability as in the random games. In fact, the probability of occupation does not seem
to play a significant role in the actual games. The data collected suggests that clusters in
actual games are similar to percolation clusters; we would therefore expect that a closer
investigation of these clusters would reveal fractal dimensionality: the cluster size
varying as the cluster radius raised to a power D different from 2. It might be
interesting to do some analysis on the clusters, trying to measure the exact value of D.
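As a starting point for that follow-up analysis, D could be estimated from the relation s ~ R^D by regressing log s against log R over many clusters. This is only a rough sketch; the radius-of-gyration definition of R and the least-squares fit are our own choices:

```python
import math

def fractal_dimension(clusters):
    """Estimate D in s ~ R^D by a least-squares fit of log(s) against
    log(R), where R is each cluster's radius of gyration.  `clusters`
    is a list of lists of (x, y) stone coordinates."""
    xs, ys = [], []
    for pts in clusters:
        s = len(pts)
        if s < 2:
            continue  # a single stone has zero radius
        cx = sum(p[0] for p in pts) / s
        cy = sum(p[1] for p in pts) / s
        r2 = sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in pts) / s
        xs.append(math.log(math.sqrt(r2)))
        ys.append(math.log(s))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Compact blobs of stones give D close to 2, single-file chains give D close to 1; percolation clusters at criticality would be expected to fall in between.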
More importantly, however, we have discovered a trend in the distribution of
cluster sizes and given a simple explanation for it: Go is all about the conquest of the
largest territory with the least number of stones; percolating clusters provide an ideal
structure to achieve this goal, hence players unconsciously play in such a way as to
develop percolating clusters, whilst impeding the opponent's attempts at developing their
own. It might be interesting to use the code developed to analyse
large numbers of games between different computer Go programs, to see if the PDFs of
cluster numbers follow a similar pattern to those made by human players.
The observations we made might furthermore be useful in the development of
algorithms for computer Go: knowing what the distribution of clusters should look like
might give some suggestion of when to reinforce a cluster and when to play isolated
stones. This clearly cannot be the only criterion in a computer Go program, as there are
many tactical issues that the computer would have to consider. However, the importance
of such an algorithm is that it translates a strategic goal - conquer territory with as few
stones as possible - into a goal which an algorithm could try to achieve far more easily,
i.e. try to build percolating clusters by creating a cluster size distribution that follows a
power law with a particular value of the exponent. Such a strategic goal would allow the
tree search for moves to be made narrower, reducing the number of candidate moves the
program must consider. More importantly, it would give the program a long-term
goal, i.e. it would make the move search tree much deeper, making its play more similar
to a human player's.
The next stage would therefore be the implementation of a computer Go
program with a ‘strategy algorithm’ as outlined above. The observations made in this
report, however, seem to suggest that percolation theory must have a role in the game of
Go.
Appendix A
Rules of the game of Go
Go has very few, simple rules. The game is played on a 19x19 board by two
players. In turns, the players place a stone on the intersections. Stones cannot be moved
once they are on the board; however, if they are totally surrounded by opponent pieces
they are removed. The aim of the game is to surround as much territory as possible;
the final score is calculated by summing the number of intersections enclosed by
one's stones and adding the number of opponent's pieces taken. A couple of diagrams
will illustrate these rules.
Diagram a: Territory
A diagram showing the meaning of territory, the
intersections marked with a red dot are black territory, as
they are completely surrounded by black stones.
Diagram b: Eating
In this diagram we can see that the white stones are
entirely surrounded by black pieces, except for the
intersection labelled a. If black played in a, the three white
stones would be removed from the board.
These are all the rules we have had to implement. The only other rules are that
'suicide', i.e. playing in an intersection that is surrounded, is illegal, and a rule to prevent
infinite cycles, which prohibits a certain class of moves in certain situations4. Neither of
these rules needs to be taken into consideration in our program, because we are
analysing data from games that will not contain illegal moves.
4
The so-called Ko rule states that no move can be made which would bring the board back to exactly
the same position as it was in the move before.
Appendix B: Program Outline
In this section we shall discuss the code that we used to process the digital Go
games to obtain the probability density functions of the cluster sizes. The algorithm usually
used to identify the sizes of clusters in percolation theory is the Hoshen-Kopelman5
algorithm (HK). HK is highly optimised for recording the cluster size PDF of the whole
board at a single time, whereas we need to record the cluster size PDF as it changes
through the game; in other words, all we need to measure is the changes that occur
around the last stone placed. There are other pieces of code that we have had to
implement: code to recognise when stones should be taken off the board, and code to
generate random Go games.
B.1 The Burning algorithm
The basic principle of the HK algorithm is a system for indexing the clusters
according to their adjacency; to do this it uses two registers. The first one (cluster class)
records the position of the points on the lattice and labels them according to their
adjacency and their colour: white stones are labelled with even labels and black with odd
labels. The other register (label class) records the size of the clusters: the ith element of
this register contains a number describing the size of the cluster labelled i in the cluster
register. We borrowed these two data classes as the two fundamental data classes for our
algorithm, and added another one (board class), which labels points on the grid
exclusively according to whether they are black (assigning them a -1 value) or white
(assigning them a 1 value). This class is redundant, but useful in the development of the
code. An example of what these registers would show for a particular game position is:
[Figure 9: Cluster class - a 5x5 grid (columns and rows labelled a to e) in which each occupied point carries its cluster label, here clusters 1 to 4.
Figure 10: Board class - the same grid with -1 for black stones and 1 for white stones.
Figure 11: Label class - label[1] = -2, label[2] = -2, label[3] = -1, label[4] = -1.]
5
J. Hoshen and R. Kopelman, Phys. Rev. B 14, 3438 (1976).
The values in the label register are all negative because they represent cluster
sizes; positive values represent adjacency between two different clusters. So, for
example, if white played at bd, connecting clusters 2 and 4, the value of label[2] would
change to -4 and that of label[4] would change to 2.
All our program has to do is update these registers appropriately every time a
new stone is played, refreshing the information on the clusters surrounding it.
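The register update can be sketched with a small union-find-style class. This is our own reconstruction of the scheme just described, keeping even labels for white and odd labels for black as in the text; the details of label allocation are assumptions:

```python
class ClusterRegisters:
    """Incremental version of the two HK registers: cluster[r][c] holds
    a label (0 = empty) and label[i] holds minus the cluster size at a
    root, or a positive redirection to another label."""

    def __init__(self, size=19):
        self.cluster = [[0] * size for _ in range(size)]
        self.label = [0]              # label 0 is reserved for 'empty'
        self.next = {0: 2, 1: 1}      # next free even (white) / odd (black) label

    def find(self, i):
        """Follow positive entries (redirections) to the root label."""
        while self.label[i] > 0:
            i = self.label[i]
        return i

    def place(self, r, c, black):
        parity = 1 if black else 0
        roots = set()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(self.cluster) and 0 <= cc < len(self.cluster) \
                    and self.cluster[rr][cc]:
                j = self.find(self.cluster[rr][cc])
                if j % 2 == parity:          # same colour only
                    roots.add(j)
        i = self.next[parity]                # fresh label of the right parity
        self.next[parity] += 2
        while len(self.label) <= i:
            self.label.append(0)
        self.label[i] = -1                   # the new stone itself
        for j in roots:                      # merge neighbouring clusters
            self.label[i] += self.label[j]   # sizes are stored negative
            self.label[j] = i                # redirect j to i
        self.cluster[r][c] = i

    def sizes(self):
        """Current cluster sizes, read off the roots of the label register."""
        return [-v for v in self.label if v < 0]
```

Only the cells around the newly played stone are touched, which is exactly the incremental property the text asks of the modified HK algorithm.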
B.2 Eating Algorithm
The other piece of code we have written is the code which removes stones that
have been eaten: this information is unfortunately absent from the digital Go games.
To do this we implemented a simple algorithm. When we want to decide whether or not
to remove a stone, we look at its neighbours: if they are of the same colour, then we need
to check their neighbours too; if they are of the opposite colour they are ignored; and
finally, if a blank intersection is found, the algorithm stops and the stones are kept. To do
this we created two registers: one contains the stones visited (i.e. those whose
neighbours have been investigated), the other contains the stones to visit (i.e. those in
the same cluster as the stone investigated). When the two registers are the same,
and none of the neighbours of any cell in the registers has proved to be blank, the cluster
is removed and the registers are updated.
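A minimal flood-fill version of this check might read as follows; the board representation and the function name are assumptions, and the two registers of the text correspond to `visited` and `to_visit`:

```python
def capture_if_dead(board, r, c):
    """Visit every stone in the cluster containing (r, c); stop as soon
    as an empty neighbour (a liberty) is found, otherwise remove the
    whole cluster.  `board` maps (r, c) -> 1 or -1; absence = empty."""
    colour = board.get((r, c))
    if colour is None:
        return False
    to_visit, visited = [(r, c)], set()
    while to_visit:
        p = to_visit.pop()
        if p in visited:
            continue
        visited.add(p)
        pr, pc = p
        for q in ((pr + 1, pc), (pr - 1, pc), (pr, pc + 1), (pr, pc - 1)):
            if not (0 <= q[0] < 19 and 0 <= q[1] < 19):
                continue
            occ = board.get(q)
            if occ is None:
                return False          # found a liberty: the cluster survives
            if occ == colour:
                to_visit.append(q)
    for p in visited:                  # no liberties: remove the cluster
        del board[p]
    return True
```

Run after every move on each opposing cluster adjacent to the new stone, this reproduces the capture rule the SGF records leave implicit.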
Appendix C
This appendix contains all the PDFs generated. First we show the 10 graphs for the
random games. The fits (purple data points) correspond to formula 3.2, with the values
of sξ according to figure 3 of chapter 3.
[Ten log-log plots of PDF(s) against s for the random games, with panels labelled 3, 6, 9, 12, 15, 18, 21, 24, 27 and 30, each showing the data and the fit.]
The next group is the 13 graphs from the actual games. The title of each graph contains
the formula of the fit (blue in these graphs).
[Thirteen log-log plots of PDF(s) against s for the actual games, labelled by thirtieth of the game, with the fits:
3: s^(-1.8)*exp(-(s/4)^1.6)
6: s^(-1.6)*exp(-(s/4)^1.5)
9: s^(-1.6)*exp(-(s/5)^1.5)
12: s^(-1.4)*exp(-(s/5)^1.4)
15: s^(-1.4)*exp(-(s/6)^1.4)
18: s^(-1.4)*exp(-(s/7)^1.4)
21: s^(-1.4)*exp(-(s/8)^1.4)
24: s^(-1.4)*exp(-(s/9)^1.4)
27: s^(-1.4)*exp(-(s/10)^1.4)
30: s^(-1.3)*exp(-(s/11)^1.3)
33: s^(-1.3)*exp(-(s/10)^1.3)
36: s^(-1.3)*exp(-(s/8)^1.7)
39: s^(-1.3)*exp(-(s/7)^1.9)]
References
(1) senseis.xmp.net/?GoHistory
(2) B. Wilcox, 'Computer Go', in Computer Games II, Springer-Verlag (1987)
(3) J. Hoshen and R. Kopelman, Phys. Rev. B 14, 3438-3445 (1976)
(4) D. Stauffer and A. Aharony, Introduction to Percolation Theory, p. 49 (Taylor & Francis, 1994)
(5) www.mit.edu/afs/athena/activity/g/go/src/standard/formspec.txt
(6) gobase.org