Mastermind: An AI Approach to the Game
Lizbeth Matos Morris, Harry Rosado Padilla and Carlos M. Rubert Perez
Department of Electrical and Computer Engineering - University of Puerto Rico at Mayaguez
Abstract:
To complete the requirements for the course ICOM 5015 – Artificial Intelligence, we had to apply the algorithms discussed in class to the resolution of a particular problem. In our case the problem was a game, "Mastermind", in which an agent has to decipher a code set by another agent by means of guesses and clues. Our implementation of the game incorporates an algorithm that always finds the solution, and does so in the least number of attempts. Using a combination of C++ and C#, we have implemented a possible solution to the Mastermind problem.
I. INTRODUCTION
The Mastermind game consists of one agent A deciphering a code set at random by another agent B. Agent A tries to guess the code set by agent B, and the latter provides clues to the former indicating whether the guess is fully or partially correct. Eventually agent A deciphers the code, or fails after a certain number of attempts.
This project, as a requirement for the Artificial
Intelligence course, has to implement a working
version of this game and make use of any of the
algorithms used to solve problems in this field.
A requirement was that a solution be found as quickly as possible; in terms of the game, this means deciphering the code in the least number of attempts through an efficient search of the possible solutions until the correct one is found.
We have opted to solve this problem using two programming languages for the design and implementation: C++ and C#. The algorithm was implemented and tested using C++, while C# was used to design a graphical user interface (GUI) that implements the functions written in C++ and gives a more visual approach to the solution of the Mastermind problem.
The following report gives more insight into the Mastermind problem and the approach implemented to find an optimal solution to it.
II. PROBLEM STATEMENT
The Mastermind game is usually played using: (a) a decoding board, usually composed of eight to twelve rows, each containing four large holes next to a set of four small holes; (b) code pegs of six to eight different colors, with round heads, which are placed in the large holes on the board; and (c) key pegs, some black, some white, which are flat-headed and smaller than the code pegs, and are placed in the small holes on the board.
Fig.1, Mastermind Board Game
The codemaker chooses a pattern of four code pegs. Duplicates and empty positions may or may not be allowed, depending on how the players set the rules. The codebreaker tries to guess the pattern, in both order and color, within the limit of turns.
Each guess is made by placing a row of code pegs on the decoding board. Once placed, the codemaker provides feedback by placing from zero to four key pegs in the small holes of the row with the guess. A black key peg is placed for each code peg from the guess that is correct in both color and position; a white key peg indicates a correct color placed in the wrong position. For example, if the code is red-blue-green-yellow and the guess is red-green-orange-orange, the codemaker places one black peg (for the red) and one white peg (for the green). Once feedback is provided, another guess is made; guesses and feedback continue to alternate until either the codebreaker guesses correctly or all the allowed guesses are used up. Several versions of the Mastermind game have been implemented on computers, programming them to break the code with the use of Artificial Intelligence theory. Wouldn't it be interesting to see how we would program a computer to solve a Mastermind game?
III. OBJECTIVES
The objective of this project is to develop a computer program, either a console application or GUI driven, that implements the Mastermind game within a defined set of rules. There is no restriction on the choice of programming language, since the performance of our final product will not be determined by how long it takes to return one "guess" of the code at each attempt, but by how many attempts it takes to break it. The game application should be able to break the code every time the game is run; that is, it should always reach a solution and never get stuck in a case such as not having an available answer for the problem. Therefore, our code should reach a solution in the least number of tries possible, which we verify by matching our results against data from previous tests of current algorithms used to solve the Mastermind problem.
It has to be specified among our objectives that the algorithm the workgroup chooses for the application design has to implement some Artificial Intelligence concepts in its code-breaking strategy, such as informed search, exploration of alternatives, and decision making. If any solution to the Mastermind problem is implemented without Artificial Intelligence techniques, it is our responsibility to justify that decision convincingly.
IV. METHODOLOGY
The development of a well-defined work plan before doing anything is crucial to the success of any project. Three main issues have to be addressed in that work plan: the final definition of the game rules as they will be simulated in the application, the programming tools best suited to our intentions, and the proper algorithm to use. That said, we handled the design methodology as follows.
A. Game Rules
First of all, the game rules must be set. Several examples of online games across the web were examined to look for trends in how long the code to be broken should be, how many different values should be available, and so on. The most common rules found are stated as follows:
- Code length to be broken equals 4.
- Total number of elements (either integer numbers or colored pegs) equals 6.
- Total number of attempts to break the correct code equals 8.
- Duplicate element values are allowed.
- Empty element values are not allowed.
These rules, matching the most common examples of the Mastermind game, are the ones applied in our version of the game. However, a more flexible version that receives some of the previously mentioned values as input parameters, to create a general Mastermind application, could be available at the end of this work.
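As a rough sketch, the fixed rules above could be encoded as program constants (hypothetical names, not the authors' exact code; the flexible version would read some of them as input parameters instead):

    // Hypothetical constants encoding the game rules listed above.
    const int  CODE_LENGTH  = 4;        // pegs per code
    const int  NUM_VALUES   = 6;        // distinct element values (colors), 1..6
    const int  MAX_ATTEMPTS = 8;        // guesses allowed to break the code
    const bool ALLOW_DUPLICATES = true; // duplicate values are permitted
    // Empty values are not allowed: every position always holds a value in 1..6.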
B. Programming Language Selection
As we mentioned earlier in the objectives, our goal is not to measure the performance of the final product by how long it takes to compute and return one solution at each attempt, but by how many attempts the program takes to reach a solution, so the size of the machine code generated by any language's compiler is not a concern. Some programming languages, mainly logic languages like Prolog, are better suited than others for expressing knowledge, inference, and decision making, which is ideal for some AI implementations. However, choosing one of them would mean learning a new language from scratch, which would not be an issue if not for time constraints. In any case, a widely known general-purpose language like C++ or Java is more than enough to solve the Mastermind problem and almost any other existing problem that can be solved by a computer application. The whole application logic, and thus the console app, will be designed in C++; at the same time, C# is better suited to the design of a better-looking graphical interface resembling the color version of the game. The C# version of the game implements the same functions as the C++ version, so that we can see in a more graphical manner how the problem is solved.
C. Algorithm Selection
The Mastermind game has its own scoring system, defined previously in the problem statement, based on how many elements are in the right place and how many others are correct but in the wrong place. It gives no further information about which specific elements are or are not in the right place. We need to develop an algorithm that makes full use of the information this scoring system provides to find the best alternatives among all possible code combinations, which suggests the use of a heuristic search strategy, with the scoring system as our heuristic. The algorithm should take advantage of an interesting symmetry, described by:

Score(code, guess) = Score(guess, code)

That is, the score generated by comparing the correct code against the guess is the same as the score generated by comparing the guess against the correct code. The performance of the final algorithm is also affected by other decisions, such as whether to use only the current guess and its score or to keep track of previous guesses and scores. More on the final algorithm lies ahead.
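As a minimal sketch of this scoring system in C++ (assuming integer codes of length four with values 1..6, per the rules above; this is not the authors' exact code):

    #include <array>
    #include <algorithm>

    // Score for a guess: black = right value in the right place (black peg),
    // white = right value in the wrong place (white peg).
    struct Score {
        int black;
        int white;
        bool operator==(const Score& o) const {
            return black == o.black && white == o.white;
        }
    };

    Score score(const std::array<int, 4>& code, const std::array<int, 4>& guess) {
        Score s{0, 0};
        int codeCount[7] = {0}, guessCount[7] = {0};  // counts for values 1..6
        for (int i = 0; i < 4; ++i) {
            if (code[i] == guess[i]) {
                ++s.black;             // exact match: black peg
            } else {
                ++codeCount[code[i]];  // unmatched values on each side
                ++guessCount[guess[i]];
            }
        }
        for (int v = 1; v <= 6; ++v)   // each shared value yields white pegs
            s.white += std::min(codeCount[v], guessCount[v]);
        return s;
    }

Swapping the two arguments merely swaps the roles of the two count arrays, so score(code, guess) equals score(guess, code), which is exactly the symmetry stated above.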
V. APPLIED ALGORITHM
In the "computer guessing" mode, which is the one we are most interested in, the application asks the user to think of a combination code, which the user can either memorize or write down on paper. When the user is done, he/she presses any key and the program enters a finite loop lasting eight iterations, one for each attempt the computer makes at guessing the correct code.
A. First Guessing Attempt
In the first iteration of the main application loop, the computer generates a random combination of four values, each one ranging from one to six, as the rules specify; these values are stored in an array of integers. The program displays the generated "guess" code on the screen and waits for the user to enter a list of four values that gives the computer's guess its proper score: a "2" for each integer in the right place (black peg), a "1" for each integer in a wrong place (white peg), and zeroes for wrong values. Once the user is done, the computer analyzes the input score; if the guess was right (score = [2 2 2 2]), the program forces its way out of the loop with a Boolean won=true; otherwise the program uses the current guess and the obtained score to generate a list of possible combinations based on those values. More on how the combinations are generated will be discussed later.
B. Second to Eighth Attempts
The last seven iterations behave a little differently from the first one. Instead of generating a combination of four values randomly, the program randomly chooses one of the code combinations from the list generated in the previous iteration. As in the first iteration, the program displays the chosen code on the screen and waits for the user to enter the proper score, as previously described. Once the user is done, the computer analyzes the input score; if the guess was right (score = [2 2 2 2]), the program forces its way out of the loop with a Boolean won=true; otherwise the program uses the current guess and the obtained score, this time, to prune (or filter, as we prefer to visualize this process) the list of possibilities based on those values. More on how the combinations are filtered lies ahead. After this, the program jumps to the next iteration and repeats the process until the eighth and last iteration.
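A minimal sketch of this main loop, reusing the Score type and score() function sketched earlier (generateCandidates and pruneCandidates are hypothetical names for the generation and filtering modules sketched in the next subsection):

    #include <array>
    #include <vector>
    #include <cstdlib>
    #include <iostream>

    // Forward declarations of the generation and filtering modules
    // (hypothetical names; sketched in the next subsection).
    std::vector<std::array<int, 4>> generateCandidates(
        const std::array<int, 4>& guess, const Score& userScore);
    void pruneCandidates(std::vector<std::array<int, 4>>& candidates,
                         const std::array<int, 4>& guess, const Score& userScore);

    void playComputerGuessing() {
        std::vector<std::array<int, 4>> candidates;  // list of possibilities
        std::array<int, 4> guess;
        bool won = false;
        for (int attempt = 1; attempt <= 8 && !won; ++attempt) {
            if (attempt == 1)
                for (int& v : guess) v = 1 + std::rand() % 6;  // random first guess
            else
                guess = candidates[std::rand() % candidates.size()];  // pick from list
            for (int v : guess) std::cout << v << ' ';  // display the guess
            std::cout << "\nEnter score (four values, e.g. 2 1 0 0): ";
            Score s{0, 0};
            for (int i = 0; i < 4; ++i) {  // read the user's four-value score
                int mark;
                std::cin >> mark;
                if (mark == 2) ++s.black;       // right value, right place
                else if (mark == 1) ++s.white;  // right value, wrong place
            }
            if (s.black == 4)
                won = true;                                 // score [2 2 2 2]
            else if (attempt == 1)
                candidates = generateCandidates(guess, s);  // build the list
            else
                pruneCandidates(candidates, guess, s);      // filter the list
        }
        std::cout << (won ? "Code broken!" : "Could not break the code.") << "\n";
    }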
C. Guess Generation and Pruning
The list of possibilities used to guess the user's correct code is driven by two algorithms, one to generate it and the other to filter it, which use a trick that prunes possible solutions quite quickly. The generation step looks at all possible combinations, from [1 1 1 1] to [6 6 6 6], for a total of 1296. Each of those combinations is compared against the first computer guess, and a score is assigned to that combination using the same scoring system the user is required to input. If the returned score is equal to the score the user assigned to that first guess, then that combination is considered a possible candidate for the correct code and is added to a list of arrays containing all the possible candidates. This comparison guarantees that the correct code will be among the chosen ones, never being left out. This technique has proven to eliminate as much as 80-90% of all wrong combinations at once, which explains the quickness of the pruning process.
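A sketch of this generation step, reusing the score() function sketched earlier (generateCandidates being our hypothetical name for this module):

    #include <array>
    #include <vector>

    // Enumerate all 1296 codes and keep those whose score against the first
    // guess matches the score the user entered; the correct code is always
    // among them, since it produces exactly that score.
    std::vector<std::array<int, 4>> generateCandidates(
            const std::array<int, 4>& guess, const Score& userScore) {
        std::vector<std::array<int, 4>> candidates;
        std::array<int, 4> c;
        for (c[0] = 1; c[0] <= 6; ++c[0])
            for (c[1] = 1; c[1] <= 6; ++c[1])
                for (c[2] = 1; c[2] <= 6; ++c[2])
                    for (c[3] = 1; c[3] <= 6; ++c[3])
                        if (score(c, guess) == userScore)
                            candidates.push_back(c);  // possible correct code
        return candidates;
    }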
The filtering algorithm is somewhat the inverse of the generation algorithm. It takes the remaining combinations in the list one by one, compares each against the current computer guess, and computes the corresponding score. If the returned score differs in any way from the score the user has currently entered (that is, if the number of black pegs or the number of white pegs is different), that combination is considered invalid and is deleted from the list of possible combinations. This guarantees that most of the invalid combinations are wiped out each time the filtering algorithm is run, again by as much as 80-90%, as some tests have shown.
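A corresponding sketch of the filtering step (pruneCandidates being our hypothetical name), using the standard erase-remove idiom:

    #include <algorithm>
    #include <array>
    #include <vector>

    // Remove every remaining candidate whose score against the current guess
    // differs from the score the user entered; the correct code always
    // survives, since its score matches by definition.
    void pruneCandidates(std::vector<std::array<int, 4>>& candidates,
                         const std::array<int, 4>& guess, const Score& userScore) {
        candidates.erase(
            std::remove_if(candidates.begin(), candidates.end(),
                           [&](const std::array<int, 4>& c) {
                               return !(score(c, guess) == userScore);
                           }),
            candidates.end());
    }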
D. Application Flowchart
Flowcharts are normally more useful for understanding an algorithm than any well-written descriptive paragraphs. The next figure describes visually how the Mastermind program tries to guess the user's correct code.
Fig.2, Mastermind Application Flowchart
Once the application exits the loop, it prints a message to the screen indicating the end of the game, with the content of the message differing between the case where the computer wins the game and the case where the computer could not break the code as it should. For the game to succeed, it is indispensable that the user makes no mistake when entering the score corresponding to the computer's current guess.
VI. RESULTS: FINAL PRODUCTS
Not far from the end of our work plan, a fully operational application implementing the Mastermind game has been finalized. It communicates with the user through the console terminal, applying the algorithm previously discussed. What is visible to the user is the console app asking the user to think of a code, throwing out values to guess the code, and asking the user how good each guess was. As you can see in the figure below, 2's represent integers in the right place, 1's integers in a wrong place, and zeroes wrong values.
Fig.3, Mastermind Console App
In the end, this logic was translated to C# to get a complete color-coded version of the Mastermind game. This version is more dynamic in the sense that the user can choose the maximum number of different values (colors) an element can have, as a level of difficulty, and can choose either to have the computer guess his/her code or to guess the computer's generated code himself/herself.
Fig.4, Mastermind Graphical Interface
Each of these executables, as well as the source code used to develop these applications, is available with this paper in the same directory as a package. The source code is properly documented so you can easily identify which modules correspond to the ones illustrated in the flowchart. If you have any doubts or comments about the code, please submit them to harry.rosado@gmail.com.
VII. RESULTS: STATISTICAL ANALYSIS
As mentioned earlier, the performance of this application has to be measured by how many attempts the application takes to guess the correct code. For this, several thousand tests of the application should be run to get a good estimate of the average number of attempts taken, because this quantity is not a constant. To make this testing process less tedious, another copy of the Mastermind console app was modified to run the game continuously, generating the code sequence to be broken itself and trying to guess it itself. This code tester displays on the screen how many tests of the game have been run, along with the average number of attempts the tests have taken to guess the correct code. Figure 5 shows an example run of this application after 500 tests.
Fig.5, Mastermind Application Tester
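A minimal sketch of such a self-playing tester, reusing the score(), generateCandidates() and pruneCandidates() sketches from the earlier sections (again an assumed structure, not the authors' exact code):

    #include <array>
    #include <vector>
    #include <cstdlib>
    #include <ctime>
    #include <iostream>

    int main() {
        std::srand(static_cast<unsigned>(std::time(nullptr)));
        long totalAttempts = 0;
        const int runs = 500;
        for (int t = 1; t <= runs; ++t) {
            std::array<int, 4> secret;
            for (int& v : secret) v = 1 + std::rand() % 6;  // self-generated code
            std::vector<std::array<int, 4>> candidates;
            std::array<int, 4> guess;
            int attempt = 1;
            for (;; ++attempt) {
                if (attempt == 1)
                    for (int& v : guess) v = 1 + std::rand() % 6;
                else
                    guess = candidates[std::rand() % candidates.size()];
                Score s = score(secret, guess);  // scoring replaces user input
                if (s.black == 4) break;         // code broken
                if (attempt == 1) candidates = generateCandidates(guess, s);
                else              pruneCandidates(candidates, guess, s);
            }
            totalAttempts += attempt;
            std::cout << "Tests run: " << t << "   Average attempts: "
                      << static_cast<double>(totalAttempts) / t << "\n";
        }
    }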
From a first sample of 10,000 test runs, the computed average number of tries our algorithm takes to break the correct code lies around 4.656 attempts per game, while another sample of the same size later gave an average of 4.6333. Although several more thousands of tests might be required to get a more exact value, this average is enough to get a good idea of how well the algorithm performs against other proven algorithms, and even against average human players, if such a test exists. The tests have shown the code taking no longer than seven tries per game to break the code; it seems it never needs the eighth and last try. Although this algorithm has occasionally broken the correct code on the second try, don't expect such success very often; it seems to happen about once in every one hundred tests.
VIII. CONCLUSIONS
This project produced a fully working implementation of the Mastermind game in two forms: a C++ console application carrying the whole application logic, and a C# graphical version reusing the same functions to present the game visually. The code-breaking strategy, a heuristic search that scores every candidate combination against each guess and prunes every candidate inconsistent with the user's feedback, eliminates as much as 80-90% of the wrong combinations on each iteration while guaranteeing that the correct code is never discarded.
Statistical testing over thousands of self-played games measured an average of roughly 4.6 attempts per game, with no game observed to require more than seven of the eight allowed tries. The main limitation of the approach is its dependence on correct user feedback: a single mistaken score can invalidate the candidate list and prevent the code from being broken.
IX. REFERENCES
[1] S. Russell, P. Norvig, "Artificial Intelligence: A Modern Approach", Second Edition, Prentice Hall Series in Artificial Intelligence, 2003.
[2] Mastermind Game, DelphiForFun Home - http://www.delphiforfun.org/Programs/Mastermind.htm - April 2006.
[3] Investigations into the Mastermind Game - http://www.tnelson.demon.co.uk/mastermind/ - April 2006.
[4] Mastermind, the Game, GeNeura Team - http://www.geneura.ugr.es/~jmerelo/newGenMM/index.html - April 2006.