
Algorithmic Analysis of N-Queens and Game of Life by Virpobe Paireepinart
Algorithm Design and Analysis, Fall 2009, Texas State University, Oleg Komogortsev
N-Queens problem
For this problem, I reimplemented in Python the AddQueens program presented in class. I first set up the output so that I could collect data for graphs reflecting its runtime. After gathering the data I needed to construct the graphs, I changed the output to be easier to use: the program now iterates through board sizes of 4 to 16 and asks each time whether you want the solutions printed.
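As a rough illustration of the recursion being timed, here is a minimal sketch of standard row-by-row backtracking; it is not the exact class code, and the names add_queen and stats are my own:

import random  # not needed here, but harmless; kept for symmetry with later sketches

def add_queen(cols, row, n, stats):
    """Place a queen in each safe column of `row` and recurse to the next
    row; cols[r] is the column of the queen already placed in row r."""
    stats['calls'] += 1
    if row == n:                      # all n queens placed: one solution
        stats['solutions'] += 1
        return
    for col in range(n):
        # safe if no earlier queen shares this column or either diagonal
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            add_queen(cols + [col], row + 1, n, stats)

for n in range(4, 17):                # board sizes 4-16, as in the report
    stats = {'calls': 0, 'solutions': 0}
    add_queen([], 0, n, stats)
    print(n, stats['solutions'], stats['calls'])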
Please see report_addition1.pdf for the data I will be referring to in the following section.
In the graphs for Time, it can be seen that the time it takes to solve each nxn board grows exponentially with the size of the board. The log-scale version is linear, which implies that the original graph grows exponentially: taking the base-2 log of an exponential function such as 2^n yields n, which is linear. So if the logarithmic graph is linear, the original graph is exponential, and the runtime of the algorithm grows exponentially in n rather than polynomially.
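To spell out that inference (this is the standard semi-log argument, not additional data from the report): if the log-scale plot of time against n is a straight line with positive slope, then

\log_2 T(n) = a\,n + b \quad\Longrightarrow\quad T(n) = 2^{b} \cdot \left(2^{a}\right)^{n} = O(c^{n}), \qquad c = 2^{a} > 1,

so the measured times grow exponentially in n.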
The number of AddQueen calls and the number of solutions found likewise grow exponentially with respect to the board size, as shown on each respective graph.
Game of Life v1 analysis
For this problem, I implemented a version of the Game of Life algorithm in Python. I also wrote the program so it can take a map file as input or generate random distributions of live cells.
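The defining property of v1, which the measurements below bear out, is that every iteration scans the entire grid, so the per-iteration cost is proportional to the grid area no matter how few cells change. A minimal sketch of that approach (random_grid and step_v1 are my own illustrative names, not the actual program):

import random

def random_grid(size, p_alive):
    """Square grid of 0/1 cells, each alive with probability p_alive."""
    return [[1 if random.random() < p_alive else 0 for _ in range(size)]
            for _ in range(size)]

def step_v1(grid):
    """One Game of Life iteration that scans the whole grid; returns the
    new grid and the number of cells that changed state."""
    size = len(grid)
    new = [[0] * size for _ in range(size)]
    changes = 0
    for y in range(size):
        for x in range(size):
            # count the eight neighbors, treating off-grid cells as dead
            n = sum(grid[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx)
                    and 0 <= y + dy < size and 0 <= x + dx < size)
            # standard rules: a live cell survives with 2 or 3 neighbors,
            # a dead cell becomes alive with exactly 3
            new[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
            changes += new[y][x] != grid[y][x]
    return new, changes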
Please see report_addition2.pdf for the data I will be referring to. For all graphs, the x-axis is the iteration number of the simulation. Every simulation was run for 25 iterations, since that gave the best balance between showing detail and not overwhelming the graphs with data points.
Going through the graphs in order, Page 1 contains the graphs for GoL1. It was executed twice, for 25 iterations, on two boards of the same grid size (250x250) but with different distributions of live cells. The blue line was graphed from a run with a 0.1 probability of each starting cell being alive; that is, the live cells were very sparse, so there were few state changes (since there were few cells to change in the first place). Accordingly, graph 1 “Changes for GoL1 (different density)” shows many more changes for the denser distribution (55% probability of cells starting alive) than for the sparser one. However, in graph 2 “Time for GoL1 (different density)” it can be seen that the runtime of the algorithm is roughly the same in both cases. There is some jitter in the graph because my system clock cannot measure the times with complete accuracy, but it gives a good general idea that the times are similar. The y-axis is the milliseconds taken to execute each iteration.
Page 2 contains the graphs for Game of Life v1 run with different grid sizes. For this run, the files map3.txt and map4.txt were used. map3.txt is a randomly generated distribution of about 60% live cells on a 250x250 grid; map4.txt is the same dataset, but on a 1379x1379 grid (i.e., it is padded out with dead cells). Graph 3 “Changes for GoL1 (different gridsize)” shows that the number of cells changing between iterations is approximately equal for the two runs (they differ slightly because cells that would spread past the edge of the 250x250 grid, and so be killed, instead survive and propagate outward on the 1379x1379 grid). However, graph 4 “Time for GoL1 (different gridsize)” clearly shows that the runtime is substantially higher for the larger grid even though the numbers of changes are nearly the same.
Combining the data from Pages 1 and 2, it can clearly be seen that Game of Life v1's execution speed depends almost entirely on the grid size, and not on the number of cells changing state per iteration.
Pages 3 and 4 contain the raw data used to generate the graphs.
Game of Life v2 analysis
For this problem, I implemented the second version of the Game of Life algorithm in Python. As with v1, the program can take a map file as input or generate random distributions of live cells.
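The report does not spell out v2's internals, but the measurements below are consistent with a change-tracking design that re-examines only cells in the neighborhood of the previous iteration's changes, so each iteration costs time proportional to the number of changes rather than to the grid area. A hypothetical sketch of that idea (neighbors, step_v2, and the dirty set are my own illustrative names, not the actual program):

def neighbors(x, y, size):
    """The up-to-eight in-bounds neighbors of (x, y)."""
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dx or dy) and 0 <= x + dx < size and 0 <= y + dy < size:
                yield x + dx, y + dy

def step_v2(grid, dirty):
    """One iteration that only examines the `dirty` cells (those whose
    neighborhood changed last time); mutates grid in place and returns
    the dirty set for the next iteration."""
    size = len(grid)
    updates = []
    for x, y in dirty:
        n = sum(grid[ny][nx] for nx, ny in neighbors(x, y, size))
        new_state = 1 if n == 3 or (grid[y][x] and n == 2) else 0
        if new_state != grid[y][x]:
            updates.append((x, y, new_state))
    next_dirty = set()
    for x, y, state in updates:        # apply all changes at once
        grid[y][x] = state
        next_dirty.add((x, y))
        next_dirty.update(neighbors(x, y, size))
    return next_dirty

Only a cell whose own state or whose neighborhood changed last iteration can change in the next one, so restricting attention to next_dirty is safe; the initial dirty set is simply every cell.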
As before, please see report_addition2.pdf for the data I will be referring to; the x-axis is again the iteration number, and every simulation was run for 25 iterations for the same reasons as above.
Going through the graphs in order, Page 5 contains the graphs for GoL2. It was executed twice, for 25 iterations, on two boards of the same grid size (250x250) but with different distributions of live cells. The blue line was graphed from a run with a 0.1 probability of each starting cell being alive; that is, the live cells were very sparse, so there were few state changes. Accordingly, graph 5 “Changes for GoL2 (different density)” shows many more changes for the denser distribution (55% probability of cells starting alive) than for the sparser one. Graph 6 “Time for GoL2 (different density)” shows that the runtime of the algorithm is clearly related to the number of changes: when the number of changes was reduced in the 0.1 probability run, the time taken by each iteration was also reduced substantially.
Page 6 contains the graphs for Game of Life v2 run with different grid sizes, again using map3.txt (about 60% live cells on a 250x250 grid) and map4.txt (the same dataset padded out to 1379x1379). Graph 7 “Changes for GoL2 (different gridsize)” shows that the number of cells changing between iterations is approximately equal for the two runs (with the same slight difference at the grid edge as before). However, graph 8 “Time for GoL2 (different gridsize)” clearly shows that the runtime of the algorithm is not affected by the increase in grid size. There is some jitter in the graph because of the limited accuracy of my system clock, but it gives a good general idea that the times are similar. The y-axis is the milliseconds taken to execute each iteration.
Combining the data from Pages 5 and 6, it can clearly be seen that Game of Life v2's execution speed depends almost entirely on the number of cells changing state per iteration, and not on the grid size.
Pages 7 and 8 contain the raw data used to generate the graphs.
In conclusion, Game of Life v1's runtime is unaffected by the number of cells changing state per iteration but is greatly affected by the grid size, while Game of Life v2's runtime is unaffected by the grid size but depends on the number of cells changing state per iteration.
So running a very large, sparse map will be much more efficient with v2 of the algorithm. A very dense map may be more efficient with GoL v1, but not necessarily: most dense maps become sparse quickly due to overcrowding (to illustrate this, try running the program for 25 iterations on a 20x20 board with 0.9 probability, as in the snippet below), so over time GoL v2 will most likely be more efficient even for dense maps.
Therefore, for most cases, GoL v2 will be the better choice.
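To illustrate the overcrowding point with the hypothetical sketches above (random_grid from the v1 sketch, step_v2 from the v2 sketch), one could watch a dense 20x20 board thin out, and the per-iteration work of the change-tracking version drop with it:

import time

size = 20
grid = random_grid(size, 0.9)          # 90% of cells start alive
dirty = {(x, y) for y in range(size) for x in range(size)}
for i in range(25):
    start = time.time()
    dirty = step_v2(grid, dirty)
    alive = sum(map(sum, grid))
    # iteration, live cells remaining, milliseconds for this iteration
    print(i, alive, round((time.time() - start) * 1000, 3))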