The University of Adelaide
Department of Mechanical Engineering
Final Year Project
Pool Playing Robot
Final Report
November 2002
Justin Ghan
Tomas Radzevicius
Will Robertson
Alexandra Thornton
Supervisor: Dr Ben Cazzolato
Executive Summary
This report covers the complete design, construction, commissioning and
preliminary testing of a robot that is capable of competing against a human
opponent in a game of pool.
A number of robots have been designed and built in the past for the purpose of playing pool, or similar games. Upon investigation of these previous
projects, it was found that none of these robots were able to play a real shot
(as would occur in a real game) successfully.
The aim of this project was to achieve what past projects have not — that
is, to design and build a robot which operates independently of human input
and can play pool to a degree approximating a human player.
The process that the robot goes through is functionally similar to that
of a human. A vision system determines the location of the balls on the
table. This information is then processed by a shot selection program that
determines the best shot to play, then outputs a series of commands to a
dedicated mechatronic system that translates these commands into physical
actuation in order to sink a ball.
The vision system consists of a computer controlled camera and image
processing software to determine the ball locations. The scope of the vision
system covers the determination of the location of red, yellow, black and white
balls. This information is then analysed by a shot determination algorithm,
to find the shot with the highest probability of success.
The position and force requirements of the shot are converted by interface
software to a series of commands, which are sent to a dedicated hardware
control card that controls the actuators.
The actuation system for the robot is a modified Cartesian design driven
by stepper motors, with four degrees of freedom to ensure spatial location of
the cue tip over any position on the pool table. The cue tip is actuated by
a long stroke solenoid, modified from a washing machine to operate on an
adjustable power supply.
The aim of the project has been achieved to a certain degree, with a robot
capable of independently playing a game of pool, although the speed and
accuracy of the robot are below initial expectations. The changes necessary
to significantly improve the performance of the robot are outside the scope of
what was achievable in the time frame.
Acknowledgements
We would like to thank first and foremost our supervisor, Dr Ben Cazzolato, for his unending enthusiasm, support, guidance and confidence in our
ability to complete Eddie!
Thank you to Joel, Silvio, Derek and George in Instrumentation for indulging our frequent visits and demands quickly, efficiently, professionally and
humorously!
We would like to acknowledge the assistance of the Mechanical Engineering
workshop staff for their invaluable contribution: Bob, Ron, Malcolm, but
especially Bill!
Thanks to Stephen Farrar for his ideas and assistance for the image processing software, and to Pierre Dumuid and Ben Longstaff for helping out
with programming. Thanks to Nicole Carey for editing the draft report.
And finally, thank you to all of our family and friends for supporting us,
feeding us and generally keeping us going throughout the year.
Contents

List of Figures
List of Tables

1 Introduction
  1.1 Introduction
  1.2 Problem definition
  1.3 Aim and significance

2 Literature review
  2.1 Introduction
  2.2 The game of pool
  2.3 The eye
  2.4 The brain
  2.5 Previous pool playing robots
    2.5.1 The Tianjin Normal University model
    2.5.2 The University of Bristol model
    2.5.3 The Massachusetts Institute of Technology model
    2.5.4 The University of Waterloo model
    2.5.5 Summary

3 The game of pool
  3.1 Rules of 8-Ball
  3.2 Physics of pool

4 The eye
  4.1 Introduction
  4.2 Error analysis
  4.3 Hardware
    4.3.1 Camera types
    4.3.2 Selection criteria
    4.3.3 Camera options and selection
    4.3.4 Camera choice
    4.3.5 Image processing in hardware
  4.4 Image representation
  4.5 Template matching
    4.5.1 Interpolation
    4.5.2 Speed considerations
  4.6 Segmentation
    4.6.1 Centroid method
    4.6.2 Erosion
  4.7 Distortion filtering
  4.8 Locating edges and pockets
  4.9 Final eye software
  4.10 Summary

5 The brain
  5.1 Introduction
  5.2 Listing shots
  5.3 Possibility analysis
    5.3.1 Calculating ball paths
    5.3.2 Trajectory check
  5.4 Choosing the best shot
    5.4.1 Difficulty of a shot
    5.4.2 Usefulness of a shot
    5.4.3 Overall merit function
  5.5 Complete simulation
  5.6 Final brain software
  5.7 Summary

6 The arm
  6.1 Introduction
  6.2 Specification
  6.3 Overall function and subfunctions
  6.4 Solution principles to subfunctions
  6.5 Concept solution
  6.6 Analysis of design
    6.6.1 Energy transfer to cue ball
    6.6.2 Drive mechanism and traverse mechanism
    6.6.3 Control of actuation system
  6.7 Summary

7 Interface
  7.1 Introduction
  7.2 Software development
    7.2.1 Actuator control requirements
    7.2.2 Command structure development
    7.2.3 Pulse train generation
    7.2.4 Code algorithms
  7.3 Hardware
    7.3.1 Motors
    7.3.2 Solenoid
  7.4 User Interface
  7.5 Summary

8 Commissioning and results
  8.1 Introduction
  8.2 Software integration
  8.3 Position calibration
  8.4 Cue ball location
  8.5 Cue tip length
  8.6 Carriage wheels and motor torque
  8.7 Cross traverse shearing
  8.8 Cables
  8.9 Vision system
  8.10 Results
    8.10.1 The eye
    8.10.2 The brain
    8.10.3 The arm
  8.11 Summary

9 Future work
  9.1 Introduction
  9.2 The eye
  9.3 The brain
  9.4 The arm

10 Conclusion

References

A Robot code
  A.1 Main program code
  A.2 Eye software code
  A.3 Brain software code
  A.4 Interface software code
  A.5 Miscellaneous code

B Visual Basic code

C dSPACE code
  C.1 C code for DS1102
  C.2 Matlab code

D Control card circuit

E Cost analysis

F Solenoid force experiment
  F.1 Introduction
  F.2 Aim
  F.3 Apparatus
  F.4 Method
  F.5 Results and analysis
  F.6 Conclusion

G Technical drawings
List of Figures

1.1 Block diagram of the robot subsystems.
2.1 The University of Bristol robot.
2.2 Schematic for the University of Waterloo robot.
2.3 The mushroom shaped paddle used in the University of Waterloo robot.
2.4 Typical worm gear configuration.
3.1 Setup for a game of 8-Ball.
4.1 Geometry of a shot.
4.2 Errors in the positions of the cue and target balls.
4.3 Template of the white ball.
4.4 Cross-correlations between an image and the template.
4.5 Neighbourhood of a peak of the averaged cross-correlation.
4.6 Results of the normalised cross-correlation interpolation program.
4.7 Histogram of the blue components of different coloured pixels.
4.8 A depiction of lens distortion. From ?.
4.9 A photo of a grid of squares over the pool table, taken by the robot’s camera.
5.1 Region of interference with the path of a ball.
5.2 Diagram for error magnification analysis.
6.1 SCARA manipulator.
6.2 Cartesian manipulator.
6.3 Articulated manipulator.
6.4 Typical stepper motor, and schematic.
6.5 Solenoid taken from the starter motor of a car.
6.6 Solenoid taken from a washing machine.
6.7 Wheels locating in modified channel section.
6.8 Tooth belt and pulley.
7.1 A GUI allowing a user to set various options and trigger the robot to take its shot.
7.2 A GUI displaying the chosen shot and allowing a user to choose from other alternatives.
8.1 Robot at the three datum positions.
8.2 The solenoid shaft with the cue tip attached.
8.3 The solenoid shaft with the cue tip attached.
8.4 The cables and wires setup.
8.5 The attachment of the camera to the ceiling.
F.1 Force experiment setup.
List of Tables

4.1 Tolerance of aim angle for given situations.
4.2 Various digital cameras considered.
4.3 Error in centre location using spline interpolation.
4.4 Error in centre location using bandlimited interpolation.
4.5 Error in centre location using the centroid method.
5.1 Some shot path types.
6.1 Selection criteria weightings.
6.2 Concept design criteria evaluation.
7.1 Actuators command structure.
7.2 Solenoid power levels and commands.
E.1 Cost analysis.
F.1 Washing machine solenoid impulse time results.
1 Introduction
1.1 Introduction
Pool is a game with a high degree of complexity, which requires human players to
be adept in concepts of kinematics as well as possessing a high level of hand-eye
coordination and strategy.
From a robotic perspective, the game of pool takes on added levels of complexity,
which turn what initially appears to be a frivolous concept into an experiment in
vision systems, artificial intelligence programming, automation and robustness. This
project will address these and other associated issues, culminating in the creation
of an independent pool playing robot.
1.2 Problem definition
The overall goal of this project is to design and build a robot that is capable of
competing against a human opponent in a game of pool (more specifically, a modified
version of 8-Ball).
This will be achieved by utilising a number of different systems and processes. Primary among these will be a vision system which will determine the position of all of
the balls on the pool table. This information will then be interpreted by a software
program which will use intelligent algorithms to determine the appropriate shot for
the robot to play. This output will be translated into physical actuation through
the utilisation of appropriate robotic and mechatronic systems. A block diagram of
the system is shown in Figure ??.
Figure 1.1: Block diagram of the robot subsystems: the eye (ball position recognition from the camera), the brain (shot selection) and the arm (commands to the actuator), linking the pool table, the software and the hardware.
1.3 Aim and significance
As detailed above, the main aim for this project is to design and build a robot
which operates independently of human input, and can play pool with speed and
competency comparable to that of a human player.
Unlike most research projects that have open ended targets, the goal of this project
is extremely well defined, with a clear quantifiable target that is easily verifiable.
The significance of this project has its basis not in the application for which this
robot is being constructed, but rather in the development of the systems and structures behind the application that have the potential for future usage in a wide variety
of industries. These include fields such as automotive, defence, space, manufacturing, and many other areas where there is a reliance on automation.
The technology employed in this project could be transferred to any number of more
practical and relevant industrial applications.
Applying the technology to the game of pool removes the need to research and
understand a complex industrial application for the robot, thereby allowing more
attention to be focused on the design and implementation of the robot, rather than
on the application for which it will be used.
2 Literature review
2.1 Introduction
Due to the practical nature of this project, traditional literature review techniques
are not entirely appropriate. While the literature review forms the basis for the
overall design process, this project does not seek to build on previous work, or fill
in gaps in the research, as is the case in most engineering research projects.
This section will look into the physical mechanics of playing pool and how these can
be realised in the robotic arena, as well as the different design concepts that have been
used to achieve this in the past.
2.2 The game of pool
To look at how to translate a fairly complex human game into a robotic process,
it was necessary to obtain a detailed understanding of both the kinematics and
physical descriptors associated with the game of pool. It was also necessary to gain
an understanding of the rules governing the play of the game.
? has covered all of the major areas concerning the physics involved in the game
of pool, including basic two-dimensional kinematics, the effects of spin, sliding and
rolling friction, the inelasticity of the collisions, and the effects of different surfaces
in the game. There were a number of important facets of pool that were discussed
in this paper and used during the design phase of this project. For example, the
relationship between cue tip point of contact, spin and the corresponding loss of
translational momentum was a major consideration when designing where to position the cue tip.
A more detailed analysis of the game of pool is given in Chapter ??.
The rules governing a pool game differ widely, depending on the location and situation in which the game is played. There is, however, a core of rules that remain
unchanged despite localised modifications. The rules used for this project are based
on a set of 8-Ball rules as defined by the Billiard Congress of America (?). Some
modifications or simplifications were made to the rules for the project, in order to
reduce the level of complexity in the design requirements.
For a complete description of the rules of 8-Ball, see Section ??.
2.3 The eye
In order to design the vision system software, a wide range of image processing
techniques were investigated. ? and ? cover many relevant techniques, particularly
those relating to template matching and segmentation. The possible applications of
these techniques to the robot vision system are closely examined in Chapter ??.
Lens distortion correction is a topic that has been widely covered in literature. Most
of the papers written focus on deriving an equation that models the curve of the
distortion surface.
The Open Source Computer Vision Library (?) is a comprehensive collection of
image processing functions. ? has implemented some of these in an extensive
Matlab toolbox that can determine the intrinsic distortion properties of a lens
from a number of calibration photos which consist of a checkerboard photographed
at different angles. From these intrinsic properties, an undistorted image may be
produced from the original using filtering, as mentioned.
Others have derived simpler equations which rely on various assumptions, for example ?, ?, ?. These methods rely on a single calibration image, and Harri Ojanen
has written Matlab software that implements the work he has published.
? look at removing the effects of lens distortion in the absence of knowledge relating
to the properties of the lens. In fact, their approach is unique in that it looks
at the fact that “lens distortion introduces specific higher-order correlations in the
frequency domain”, and thus lens distortion can be reduced by minimising these
correlations. However, their results do not provide an exact method of removing
lens distortion.
2.4 The brain
Many computer programs have been written which simulate the games of pool or
snooker. (The two games are sufficiently similar that most of the game strategies
are common to both, and thus relevant to this project.) Many of these programs not
only allow a human player to interactively take turns in the games but also simulate
computer players who independently decide their own shots and execute them.
One such program is The Snooker Simulation, written by Peter Grogono. His
notes (?) discuss the formulation of automated strategies. The basic strategy implemented in his program is to first generate a number of shots which might be played,
and then to assess the feasibility of the shot and estimate its usefulness. The best
shot is then selected based on these criteria. This method is discussed in more detail
in Chapter ??.
2.5 Previous pool playing robots
There have been numerous previous attempts at designing and constructing a robotic
pool player, from as far back as the early 1980s. This report will analyse four of
these. Due to the limitations of the available technology at the time, all of them
have made concessions to both the method of play and the overall goals of their
robots. All have approached the problem from a different design perspective that
has had an impact on both the design and success of their robots.
2.5.1 The Tianjin Normal University model
The Tianjin Normal University robot (?) utilised a mobile cart with two degrees of
freedom and a manipulator with four degrees of freedom.
In designing this robot, Qi and Okawa made a number of major simplifications to
the game and the way in which it would be played. This robot was designed to play
a much simplified version of the game of pool: it could only work when there were
two balls on the table and it played a limited number of shots. Qi and Okawa also
appear to have based their concept design on the manner in which a human being
would play pool. The robot moves freely around and over the table without being
attached or restricted in any way.
As a result they have created a system that would allow, upon further development,
greater flexibility in the environment that it could be used in. However in doing so,
they have also raised the level of complexity associated with the design. In creating
a free standing, independent, human sized robot, as Qi and Okawa have attempted
to do, it is necessary to make concessions with regard to the level of performance
gained from the system.
The fundamental objective and perspective on which their project was based is
different to those of this project. They decided to design a robot that would imitate
a human, rather than to view the design of the robotic pool player as an
industrial application to be modelled and implemented. The final result of this
difference in perspective is manifested in the overall design which added a number
of avoidable levels of complexity that ultimately served to hamper the ability of the
robot to operate effectively.
This occurs as a result of the manner in which human beings play pool. For example,
humans have an ability to judge depth and angle from an oblique location (for
example, at the end of a pool cue) that is difficult to imitate in robotic form. Rather
than taking the view that a single camera from above can locate the balls on the
table, they followed the path of determining where the balls are located by using the
robot itself as a reference point. By placing the camera such that a ball occupies
a certain position in the image, the robot can calculate the position of the ball on
the table from the angles of its joints. As mentioned in the report, this allows for
greater accuracy, but ultimately would require a method of approximately knowing
where on the table the balls are in order to quickly determine where to move the
camera. Additionally, because the camera moves in three-dimensional space, the
algorithms involved are necessarily much more complex.
2.5.2 The University of Bristol model
An example that is more closely aligned with the goal of this project is the robot built
by the University of Bristol (?), which utilised a modified IBM industrial robot as
the actuation mechanism. Pictures of this robot in action can be seen in Figure ??.
Figure 2.1: The University of Bristol robot. From ?.
This robot was developed by the University of Bristol’s Department of Mechanical
Engineering in the mid-1980’s.
It utilised an IBM 7565 gantry assembly robot, an Automatix AV4 vision system,
and an IBM 6150 (RT) supporting an expert system written in SLOP (a Support
LOgic Programming language). A snooker table measuring 4 feet by 6 feet was used.
The IBM 7565 could reach 90% of the playing area of the snooker table.
This robot had a much more advanced vision system than did the Qi and Okawa
version, and could play on a table that was set up as for a standard pool game.
The University of Bristol robot did not have the ability to perform in real time
against a human opponent. This is a property that all of these previous pool playing
robots share.
This robot could not compete against a human, as it did not have the ability to
allow human competitors to take their turns. All of the shots had to be taken by
the robot, as it was not programmed to differentiate between turns.
One other obvious restriction on the ability of human competitors to play against
it was the physical construction of the robot.
The traverse of the actuator, as seen in the images above, was located above the
pool table and was bolted to the table via four posts. This would have actively
prohibited the ability of a human to play on this table as they would not have been
able to get as close as necessary to play or to see all of the balls accurately.
Given that the manipulator for this robot was a pre-assembled industrial robot that
was simply modified to hold a cue tip, there was very little that could be done to
improve the accessibility for a human player.
2.5.3 The Massachusetts Institute of Technology model
The MIT model (?) also makes concessions to the overall performance of their robot
in order to reach the design goals.
The report states that a pool playing robot requires up to five degrees of freedom to
be fully functional, but in order to simplify their design, only two degrees of freedom
are present in their final design. This limited the range of the robot to a single fixed
position from which to take a shot.
Their robot only considered a single ball located on half of a pool table. The cue
can only hit from a fixed position — in other words it plays the same shot each time.
The cue is driven by a combination of a stepper motor and spring that provides the
energy to the actuator. The cue rests on a platform with the motor drawing back
the spring, which is then released by a solenoid.
There was no design process outlined in the published paper. It was therefore
difficult to understand why various decisions were made and in what context certain
options were discarded. Without the necessary technical support it was found to be
almost impossible to venture reasons as to why any of the decisions were made, for
example why there was a major difference between the degrees of freedom present
and those laid out in their report. There was no electromechanical control system
for the robot, beyond rudimentary computer-based control that only extended as
far as directing the robot when to fire the solenoid.
The focus of their project, as far as image processing is concerned, was not the
determination of the ball locations, but rather the tracking of a moving ball. As
such, the features of the vision system of the MIT model have little in common with
the desired features of the robot to be built for this project.
This project was completed in 1991, and it is interesting to compare the differences
between the technologies used then and now. Rather than processing the image using
computer software, which even today is relatively slow, this project used dedicated
hardware to analyse the image coming from the camera. This allowed the robot to
track the movement of the ball in real time, at approximately 30 frames per second.
The hardware simply used a threshold on the image and averaged the pixels to
estimate the location of the centre of the ball (this method assumes that only one
ball is within view). While this method is not noted for its accuracy, it shows how
much faster it is to use hardware than software for data processing, even ten years
ago. However, the disadvantage of using dedicated hardware for image processing is
the cost. A typical hardware board for image processing costs about AU$10,000 (?).
Regardless, the image processing techniques used for the MIT project are not useful
to this project, as the type of image processing being performed is different.
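To illustrate the threshold-and-average idea in software terms (rather than the dedicated hardware used at MIT), a minimal Matlab sketch might look as follows; the threshold value and file name are placeholders, not details taken from the MIT paper.

    img  = rgb2gray(imread('frame.jpg'));   % single video frame (placeholder name)
    mask = img > 200;                       % threshold: keep only bright (ball) pixels
    [rows, cols] = find(mask);              % coordinates of the pixels above threshold
    centre = [mean(cols), mean(rows)];      % estimated ball centre (x, y), in pixels
    % As noted above, this is only valid when a single ball is in view.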
2.5.4 The University of Waterloo model
The most promising previous incarnation of the pool playing robot was the robot
designed and constructed by undergraduate students at the University of Waterloo (?).
A schematic of the initial robot design is shown in Figure ??.
The actuation system for this robot is based on a Cartesian robot traverse that
allows the actuator to move to any position on the table in order to play a shot.
The system is driven by stepper motors which are attached to worm gear shafts that
are controlled by a PC.
The traverse is fixed to the table and the robot moves around on a series of rails.
Figure 2.2: Schematic for the University of Waterloo robot. From ?.
The motors are controlled directly through the PC via a CIO-DAS08/Jr-AO I/O
card with two input signals and five output signals.
However, again there were major simplifications made to the manner in which the
robot played pool. The most obvious is the choice of cue. Instead of using a
shaft with a traditional cue tip attached, the designers of this robot used a plastic
mushroom shaped paddle in order to knock the balls around (shown in Figure ??).
Figure 2.3: The mushroom shaped paddle used in the University of Waterloo robot.
From ?.
This causes immediate restrictions on the ability of the robot to play a realistic
game of pool. In any game of pool, situations arise where the white ball is located
extremely close, or indeed touching, another ball. In these situations, it is imperative
that the actuator be able to compensate for this, and to play the desired shot without
too much interference with the other balls on the table. The robot designed by
Richmond and Green could not achieve this.
Any balls near to the target ball would have to be moved outside the radius of the
paddle, in order to not inhibit the ability of the robot to play the desired shot.
Clearly this is not a desirable restriction, but it was one which Richmond and Green
chose to concede to in the design of their robot.
Due to the type of actuator chosen for the robot, the angle of rotation for the cue
tip is not relevant. This reduces the level of control programming associated with
the robot.
The time taken for the robot to reach its desired shot position was another factor
that Richmond and Green decided should not be a major consideration in their
robot’s performance criteria. This is evident in their selection of worm gears as the
drive mechanism for their robot.
A schematic of a typical worm gear is shown in Figure ??. Worm gears operate in
areas that require high location accuracy as well as minimal backlash. They achieve
this through a property that no other type of gear possesses: the worm can easily
drive the gear, but the gear cannot force the worm to rotate, thereby removing any
potential backlash. This is because the angle of the worm gear teeth relative to the
gear is so small that, when the gear attempts to spin the worm, the friction between
the worm and gear teeth holds the worm in place. For the application of a robotic
pool player this is an important consideration, due to the relatively large impulsive
force that is applied to the traverse at the moment of contact with the cue ball. If
the traverse cannot adequately resist the backlash generated, then this may cause
inaccuracies in future shots due to the errors generated in the calculation of the
position of the actuator.
Worm gears are relatively slow to drive, owing to their unusual geometry, which is a
major drawback. For this project it was felt that there should be a balance between
the necessity for speed and the requirement of accuracy of location. There are other
alternatives for minimizing backlash in the system that do not reduce the overall
speed of the actuator.
Figure 2.4: Typical worm gear configuration. From (?).
Digital cameras are substantially cheaper and higher quality now than at the time
of the University of Waterloo project, so their vision system was not as advanced as the
one planned for this project. In order to determine the positions of the balls, a mean
of the location of the pixels comprising the image of each ball was used. To isolate
the edges of the table image, a Sobel Operator was used with a threshold, and the
straight lines obtained were located with the Hough Transform.
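For illustration, the same sequence of steps can be expressed with Image Processing Toolbox functions; this is only a sketch of the general Sobel-plus-Hough approach, not the Waterloo implementation, and the file name is a placeholder.

    img   = rgb2gray(imread('table.jpg'));          % placeholder file name
    edges = edge(img, 'sobel');                     % Sobel operator with a threshold
    [H, theta, rho] = hough(edges);                 % Hough transform of the edge image
    peaks = houghpeaks(H, 4);                       % the four strongest lines
    lines = houghlines(edges, theta, rho, peaks);   % straight lines: the table edges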
2.5.5 Summary
As shown above, all of the previous incarnations of a pool playing robot have made
significant compromises in the accuracy of the robot and in its ability to play pool.
None of the systems outlined in the literature review had managed to incorporate
real time play against a human opponent into their designs.
3 The game of pool
3.1 Rules of 8-Ball
A thorough knowledge of the rules of pool is crucial in the design of a pool-playing
robot. The rules are, in effect, an idealised complete specification of the requirements
of the robot.
During the design process, it became apparent that constructing a robot which could
strictly obey all of these rules would be infeasible within the time frame. For this
reason, some of the rules were modified in such a way as to preserve as much as
possible the spirit of the game, but to allow a less complex robot design.
There exist a multitude of different versions of pocket billiards (that is, billiards
played on a table with pockets), each of which has different rules of play. The game
of 8-Ball was chosen to form the basis for this project as it is the simplest and most
easily modifiable version. The others, including snooker and American 9-Ball, all
have rigid play structures that add unnecessary complexity to designing a robotic
player, at least in its first iteration.
8-Ball is a game played with a cue ball (the white ball) and fifteen object balls,
which are (in a standard game set) numbered 1 through 15 (?). One player must
pocket the balls numbered 1 through 7 (the solid balls), while the other player must
pocket those numbered 9 through 15 (the striped balls). The first player to pocket
all balls in his group, and then legally pocket the 8-ball, wins the game. An overhead
view of the pool table can be seen in Figure ??, with the balls arrayed as for the
start of a game.
The game is played in an alternating order — one player breaks and continues to
play shots, until such time that a target ball they have hit with the cue ball fails to
be pocketed, or they foul. Then the other player begins their turn. This is repeated
until a player has won.
8-Ball is technically a call shot game, meaning that, before executing a shot, the
player must nominate the pocket into which the target ball selected for the shot will
Figure 3.1: Setup for a game of 8-Ball. From (?).
be pocketed. This rule is not generally enforced, and most games are played on a
much more casual basis. However, it is possible, in the instance where the computer
has calculated the desired shot, for the robot to adhere to this rule.
A major modification to the rules was made with regards to the break. The break
is the opening shot of the game, and there is a strict code for what constitutes a
legal break. The rules from the Billiard Congress of America state that the breaker
must either pocket a ball or drive at least four balls to a rail. This rule was
disregarded, as it was felt that it would complicate the overall design process by
imposing a minimum requirement on the force supplied by the actuator that may
not be feasible.
The rules concerning what constitutes a foul shot are fairly detailed and play a
major part in the outcome of any game.
A foul shot results in the player who committed the foul losing a turn, or equivalently,
giving the other player two shots in a row. Games can be won or lost on the basis
of a foul.
As a result, it was felt that it would be appropriate to ensure that as many of
the rules as possible concerning fouls were incorporated into the game, with the
exception of the foul rule regarding the legality of a shot.
A shot is only legal if the shooter hits one of their group of balls first, and either
pockets a ball or causes the cue ball or any ball to contact a rail. Enforcing this rule
was again felt to increase the level of complexity required of the system beyond a
reasonable level for a first version of this robot.
This rule does not significantly influence the outcome of a game in the way that
other foul rules do, such as fouling by hitting an opponent’s ball before the player’s.
Other foul shots include: illegally pocketing an opponent’s ball; not making contact
with any of the target balls with the cue ball; pocketing the cue ball; and moving
any of the target balls with anything other than the cue ball.
All of these fouls are crucial to the overall outcome of the game, and should be taken
into account when considering the design of the robot.
3.2 Physics of pool
In order to design and build a robot that can compete against a human being, it is
necessary to ensure that the algorithms governing the control and performance of
the robot take into consideration the physical behaviour that describes a game of
pool.
Pool is a game that, when looked at in any detail, has a remarkable level of complexity governing the behaviour of the balls on the table. These complexities have
the potential to render hopeless any attempt to design a robot to handle these situations. Therefore it is necessary to make appropriate assumptions in order to obtain
results that are reasonable, but remove some of the more extreme complications
from the calculations associated with shot selection and actuation.
The shot selection software must clearly use principles of kinematics to calculate ball
trajectories. At this point, all analysis has been performed under the assumption
that balls travel atop the table without horizontal spin. This is not the case in
reality. Balls often have a significant amount of horizontal spin, which causes their
paths to curve, as well as affecting the forces during collisions with other balls and the
table cushions. However, the degree of complexity in calculating horizontal spin and
its effects is very high, and thus the shot selection algorithm of the robot neglects
these effects. This is explained in more detail in Chapter ??.
4 The eye
4.1 Introduction
The function of the vision system, or the “eye” of the robot, is to accurately determine the positions of the balls on the table.
While there are several possible concepts for physical systems which would be able
to determine the location of the balls on the pool table, the use of an optical camera
was the only one which had the potential to provide accurate results for a reasonable
cost. Thus the hardware component of the eye of the robot consists of a camera
which captures images of the table for software processing.
A table with blue felt measuring 1800 mm × 950 mm was used for the project. In
order to simplify the image analysis, it was decided that the robot would be programmed to play 8-Ball with a set of “casino” balls. “Casino” balls consist of four
types of monochromatic balls: a white ball (the cue ball), a black ball (the eight
ball), seven red balls (the “bigs”) and seven yellow balls (the “smalls”). There
were two advantages of using these balls. Firstly, the total number of different ball
colours that had to be recognised was reduced to four. Secondly, as these balls are
monochromatic, they have the same appearance irrespective of their position or the
angle from which they are viewed. It should be noted that the use of “casino” balls
does not in any way affect gameplay.
The vision system software must be able to instruct the camera hardware to take
a photo and download that photo. The crux of the software’s role is to determine
the arrangement of balls on the table, identifying each by its colour and estimating
its position. The software must perform these functions with sufficient speed and
accuracy. Determining the positions more accurately requires more computation
and thus more time, so a compromise between speed and accuracy is required.
4.2 Error analysis
A key step in the design of the vision system was the determination of the resolution
required to accurately play a successful shot. Therefore, an analysis of the errors
involved in a shot was performed.
It is important to note that for some sets of pool balls, the radius of the cue ball is
slightly different to the radii of the other balls. While this is taken into account to
some extent here, all three dimensional effects caused by this difference are neglected.
For example, it is assumed that two balls are touching when the horizontal distance
between their centres is equal to the sum of their radii. This is not strictly correct, for
if the two balls have different radii, then their centres will be at different heights from
the table, so they will not be quite touching. The errors caused by this assumption
are very small, so it is always assumed that the centres of all the balls lie on a
single horizontal plane. When coordinates are used, this plane is the x-y plane of
the coordinate system.
Let r_c be the radius of the cue ball, and let r be the radius of all other balls.
Denote the initial stationary positions of the cue ball and the target ball by C and
T respectively, and denote the desired target position (which will usually be a pocket)
by P. Let d and b denote the distances |CT| and |TP| respectively. Let φ be the angle
of the direction from C to T with respect to a fixed direction i.
In order that the target ball moves in a direction towards P, it must be struck by
the cue ball at the position X such that X, T, P lie on a straight line in that order,
and |XT| = r_c + r, as shown in Figure ??. (This is true so long as the effects of
spin and friction between the balls are neglected.) Let a denote the distance |CX|,
and let α and β denote the angles ∠TCX and ∠CTX respectively.
If the positions C, T and P are given, then d, b and β are known. Then a can be
calculated using the cosine rule:

    a² = d² + (r_c + r)² − 2d(r_c + r) cos β.                    (4.1)

Then α can be calculated using the sine rule:

    sin α / (r_c + r) = sin β / a.                               (4.2)
Figure 4.1: Geometry of a shot, showing the cue ball position C, the contact position X, the target ball position T and the target point P, the distances a, b and d, and the angles α, β and φ measured relative to the direction i.
Thus the necessary direction (that is, at an angle θ = φ + α to i) in which the cue
ball should be struck in order to collide with the target ball and propel it towards
P has been determined.
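As an illustration, the aim direction θ can be computed directly from the ball coordinates using equations (4.1) and (4.2). The following Matlab fragment is a sketch only; the function name and the assumption that C, T and P are 1×2 coordinate vectors are illustrative, not taken from the code listed in the appendices.

    function theta = aim_angle(C, T, P, rc, r)
    % Sketch of the aim-direction calculation described above.
    d     = norm(T - C);                        % distance |CT|
    phi   = atan2(T(2) - C(2), T(1) - C(1));    % angle of CT relative to i (the x axis)
    beta  = acos(dot(C - T, T - P)/(norm(C - T)*norm(T - P)));   % angle CTX
    a     = sqrt(d^2 + (rc + r)^2 - 2*d*(rc + r)*cos(beta));     % equation (4.1)
    alpha = asin((rc + r)*sin(beta)/a);                          % equation (4.2)
    theta = phi + alpha;   % aim direction; the sign of alpha depends on which
                           % side of the line CT the contact point X lies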
Now suppose that the cue ball is instead struck with an angle error of ∆θ, so that
it is propelled at an angle θ + ∆θ to i.

Assuming that the cue ball still collides with the target ball, denote by X′ the
position of the cue ball as this collision occurs. |X′T| = r_c + r, as these are the
positions of the balls as they collide. Let α′ = ∠TCX′ = α + ∆θ. Let a′ denote the
distance |CX′|, and let β′ denote the angle ∠CTX′.

∠CX′T = 180° − α′ − β′, so the sine rule yields

    sin(180° − α′ − β′) / d = sin α′ / (r_c + r),                (4.3)

which can be used to solve for β′, keeping in mind that ∠CX′T = 180° − α′ − β′
must be an obtuse angle, from the geometry of the situation.

This collision will cause the target ball to travel in the direction from X′ to T,
which gives an angle error of ∆β = β′ − β in its trajectory. As the distance to the pocket is
b, the distance δ by which the centre of the target ball will miss the point P is
approximately

    δ = b sin ∆β.                                                (4.4)
Now suppose that the cue ball and the target ball are at positions C(x_C, y_C) and
T(x_T, y_T), and suppose that the vision system then estimates their positions as
Ĉ(x̂_C, ŷ_C) and T̂(x̂_T, ŷ_T).
Assuming that the errors (∆x_C, ∆y_C) and (∆x_T, ∆y_T) are small, the errors in the
estimation of d, b and β are also small. Therefore, equations (??), (??) will produce
little error in the estimation of a and α. A Matlab program was written to check
that the errors in the estimation of the positions of the cue and target balls produced
greater errors in the estimation of φ than in the estimation of α. It demonstrated
that the errors in α were generally around a factor of 10 smaller than those in φ for
a wide range of configurations. The hypothesis was thus verified, so it is reasonable
to not consider the error in α.
Let ∆v_C and ∆v_T be the components of the errors (∆x_C, ∆y_C) = (x̂_C − x_C, ŷ_C − y_C)
and (∆x_T, ∆y_T) = (x̂_T − x_T, ŷ_T − y_T) in a direction perpendicular to CT, as shown
in Figure ??.
It can be seen that

    tan(φ̂ − φ) = (∆v_C + ∆v_T) / d̂,                             (4.5)

so that, since the error in the angle is small,

    φ̂ − φ ≈ (∆v_C + ∆v_T) / d.                                   (4.6)

Let ∆θ_E denote the component of the error in the trajectory angle of the cue ball
which is due to the vision system. So

    ∆θ_E = (φ̂ + α̂) − (φ + α) ≈ (∆v_C + ∆v_T) / d,                (4.7)

since it has been shown that α̂ − α can be approximated as zero. There will also be a
component, ∆θ_A, of the error in the trajectory angle which is due to the actuation.
Thus the total trajectory error is ∆θ = ∆θ_E + ∆θ_A.
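For example, with equal location errors ∆v_C = ∆v_T = ∆v, equation (4.7) reduces to ∆θ_E ≈ 2∆v/d, so a 1 mm error in the estimated position of each ball at a cue-to-target distance of d = 300 mm contributes roughly 0.38° to the aim angle.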
Figure 4.2: Errors in the positions of the cue and target balls, showing the true positions C and T, the estimated positions Ĉ and T̂, the distance d, the angles φ and φ̂ relative to i, and the perpendicular error components ∆v_C and ∆v_T.
A Matlab program was written to find δ, given r_c, r, d, b, β and ∆θ, using equations
(??), (??), (??) and (??). Assuming r_c = r = 25 mm, the allowable tolerances ∆θ
in the trajectory angle were calculated, for various values of d, b and β, which would
ensure δ < 10 mm. From this, the allowable tolerances ∆v in the location of a ball
position in any given direction were calculated (assuming that ∆v_C = ∆v_T = ∆v).
The results are shown in Table ??.
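The listing below is a minimal sketch of such a program, with angles in radians; the function name is illustrative and the original program listed in the appendices may differ.

    function delta = miss_distance(rc, r, d, b, beta, dtheta)
    % Sketch of the miss-distance calculation of equations (4.1)-(4.4).
    a      = sqrt(d^2 + (rc + r)^2 - 2*d*(rc + r)*cos(beta));   % equation (4.1)
    alpha  = asin((rc + r)*sin(beta)/a);                        % equation (4.2)
    alphap = alpha + dtheta;                                    % perturbed angle alpha'
    s      = d*sin(alphap)/(rc + r);        % sin of angle CX'T, from equation (4.3)
    % Angle CX'T must be obtuse, so CX'T = pi - asin(s); then beta' follows:
    betap  = pi - alphap - (pi - asin(s));  % = asin(s) - alphap (valid while |s| <= 1)
    delta  = b*sin(betap - beta);           % equation (4.4)

    % For example, miss_distance(25, 25, 300, 300, 30*pi/180, 0.2965*pi/180)
    % returns approximately 10 (mm), consistent with Table 4.1.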
The results show how accurate the vision system needs to be if the robot is to
be able to consistently and successfully perform shots of each type. As the length of
the table is approximately 1500 mm, shots over distances greater than 1000 mm are
near the difficult end of the scale. Additionally, shots with a high turn angle, β, are
also difficult shots. The vision system should not necessarily be designed to achieve
these shots. However, considering those shots which an average human pool player
might hope to consistently achieve, it can be seen that the vision system should be
able to locate the position of a ball in a given direction (for example, in the x or y
directions) to within approximately 0.5 mm.
Table 4.1: Tolerance of aim angle for given situations.

    d         β     b         ∆θ                        ∆v
    100 mm    0°    100 mm    [−5.682°, 5.682°]         4.959 mm
    100 mm    0°    300 mm    [−1.908°, 1.908°]         1.665 mm
    100 mm    0°    1000 mm   [−0.5729°, 0.5729°]       0.5000 mm
    100 mm    30°   100 mm    [−3.110°, 2.382°]         2.078 mm
    100 mm    30°   300 mm    [−0.9513°, 0.8704°]       0.7595 mm
    100 mm    30°   1000 mm   [−0.2767°, 0.2695°]       0.2352 mm
    100 mm    60°   100 mm    [−0.1760°, 0°]            0 mm
    100 mm    60°   300 mm    [−0.01874°, 0°]           0 mm
    100 mm    60°   1000 mm   [−0.001664°, 0°]          0 mm
    300 mm    0°    100 mm    [−1.145°, 1.145°]         2.997 mm
    300 mm    0°    300 mm    [−0.3819°, 0.3819°]       0.9999 mm
    300 mm    0°    1000 mm   [−0.1146°, 0.1146°]       0.3000 mm
    300 mm    30°   100 mm    [−0.9458°, 0.8607°]       2.253 mm
    300 mm    30°   300 mm    [−0.3059°, 0.2965°]       0.7761 mm
    300 mm    30°   1000 mm   [−0.09078°, 0.08993°]     0.2354 mm
    300 mm    60°   100 mm    [−0.4248°, 0.3161°]       0.8276 mm
    300 mm    60°   300 mm    [−0.1293°, 0.1172°]       0.3069 mm
    300 mm    60°   1000 mm   [−0.03751°, 0.03642°]     0.09536 mm
    1000 mm   0°    100 mm    [−0.3015°, 0.3015°]       2.631 mm
    1000 mm   0°    300 mm    [−0.1005°, 0.1005°]       0.8772 mm
    1000 mm   0°    1000 mm   [−0.03016°, 0.03016°]     0.2632 mm
    1000 mm   30°   100 mm    [−0.2637°, 0.2467°]       2.153 mm
    1000 mm   30°   300 mm    [−0.08603°, 0.08413°]     0.7342 mm
    1000 mm   30°   1000 mm   [−0.02561°, 0.02544°]     0.2220 mm
    1000 mm   60°   100 mm    [−0.1491°, 0.1217°]       1.062 mm
    1000 mm   60°   300 mm    [−0.04663°, 0.04360°]     0.3805 mm
    1000 mm   60°   1000 mm   [−0.01367°, 0.01340°]     0.1169 mm
4.3 Hardware
The hardware component of the vision system must be able to take a photo of the
table with sufficient resolution and download the image to the computer so that it
can be interpreted by the software. These two tasks should be fully automated.
4.3.1 Camera types
A typical digital camera consists of an image sensor chip, a lens, and an interface
between the chip and the user. The image sensor chip is a cluster of light sensitive
diodes, each of which converts the light intensity at its location to a voltage. The
set of voltage outputs corresponds to the set of pixels of an image. If the size of the
chip is greater, then so is the number of pixels of the resultant image, and thus the
resolution of the camera is increased. The lens focuses light onto the chip. Methods
of interface between a camera and a computer are dependent upon the application
of the camera.
4.3.1.1 Still cameras
Consumer still cameras have a compact detachable storage medium on which the
captured images are stored. These cameras usually also have a USB or FireWire port
for downloading the images onto a computer. The range of still cameras available
is now almost as broad as that of conventional film cameras.
Some cameras have the ability to be controlled directly through commands sent
from a computer using a USB or FireWire connection. This function is a desirable
feature for this project, since the robot needs to be able to operate autonomously.
Theoretically, a digital still camera could be used, even if it does not have the
capability for remote operation, by building a mechanical switch that physically
pushes the button on the camera to take a photo. However, this option was deemed
unnecessarily complex, so such cameras were discounted for use in this project.
4.3.1.2 Video cameras
A wide variety of video cameras exist, but for the purposes of the project, they may
be broadly distinguished by one feature, namely, the nature of their interface with
equipment, which may either be analog or digital. In order to transmit an image
from a video camera to a computer, a frame grabber must be used. (An analysis
of the motion of the balls is beyond the scope of this project, and thus only still
images are required.)
A frame grabber is an input card into a computer that reads the video feed coming in,
and copies one frame of the video as a digital image upon command. A frame grabber
is relatively cheap if the input is digital (for example, in the case of a webcam), but
expensive if the input is analog, due to the inclusion of a high precision analog-to-digital converter (for example, in the case of a BNC equipped security camera).
4.3.2 Selection criteria
The criteria considered when choosing the camera were connectivity, accuracy and
price. Additionally, the camera should ideally have an interchangeable lens so that
different lenses may be fitted. This provides the camera with flexibility, allowing
it to be used for a variety of other applications within the University of Adelaide,
rather than being limited to use in this project, which would not be cost effective.
4.3.2.1 Accuracy
As can be seen from Section ??, the greater the accuracy with which the balls can be
positioned, the wider the range of shots the robot will be able to play successfully.
The accuracy of the camera has a significant impact upon the accuracy of the vision
system as a whole.
The accuracy of the camera is determined by two things: the number of pixels and
the distortion of the lens. Distortion from the lens is related to the width of the
image compared to the distance the camera is away from the object. A webcam, for
example, is designed to be mounted close to a user’s head, while providing a wide
field of vision. This introduces a very high amount of distortion.
While it is desirable to minimise distortion, the camera cannot be mounted so high
above the table that the setup becomes impractical. However, the distortion may be
adjusted using a correction algorithm, as is discussed in more detail in Section ??.
Since video cameras are designed to capture image data many times per second,
their image sensors cannot process as much data as a still camera. For this reason,
the resolution of most digital video cameras does not exceed around one megapixel.
Still cameras, on the other hand, have the capacity for much greater resolutions.
4.3.3 Camera options and selection
A variety of cameras were investigated. A sample of those considered is tabulated
in Table ??.
    Camera     Number of pixels   Price (May 2002)
    Webcam     640 × 480          $150
    pixeLink   1280 × 1024        $3500
    Canon      ≈ 6 million        >$10k
    Kodak      ≈ 6 million        $4500

Table 4.2: Various digital cameras considered.
Webcams were deemed unsuitable for the task. As well as having low resolution,
the large distortion of the lens would prevent the robot from accurately determining
the location of the balls without the use of coarse distortion filtering.
Video cameras with analog outputs such as security cameras do not have much
better resolution than a webcam, although a lens could be used that would give less
distortion. Additionally, an analog frame grabber would be necessary to transfer the
information to a computer, which would increase the cost of the system substantially.
The pixeLink camera provides high resolution with ease of automation. It also has an
interchangeable lens. However, its only interface is through a FireWire connection,
so the camera could only be used for applications in which it was controlled through
a computer program. This makes it difficult to use for other applications which
require more flexibility.
The Canon and Kodak ranges of professional still cameras were also considered.
These cameras have very high resolution, as well as very high colour sensitivity.
Their lenses are interchangeable, and their software allows remote control of the
camera through a digital connection. They can also be used as regular (“point and
click”) cameras.
The price of any of these cameras (except the webcam) is far beyond the budget of the
project. The Department of Mechanical Engineering was prepared to contribute
money towards the purchase of a camera, but only so long as it was suitable for use
in applications other than the pool playing robot project, including use as a regular
camera.
4.3.4 Camera choice
The camera that was used was a digital SLR camera, the Canon EOS D60. It was
chosen for two main reasons: it has a high quality CMOS image sensor that has a
resolution of 6.29 megapixels, allowing very accurate analysis and location of the
ball positions; and it could be interfaced with the computer extremely easily.
Third party software for the camera was purchased that allowed remote operation
from within MATLAB. This software, called D30Remote, is available at the Breeze
Systems website (?). This program is functionally similar to the Remote Capture
software that was supplied by Canon with the camera. However, D30Remote also
allows other programs to link to it with a dynamic link library. An example console
program called D30RemoteTest, written in Visual C++, was supplied with the
software. D30RemoteTest was capable of commanding the camera to take a photo,
downloading the photo, and saving the image in the current directory. This program
was modified cosmetically for the project, and renamed TakePhoto.
The program TakePhoto ensured very easy connectivity between the camera and
the computer, because it was able to be called from Matlab. This made it very
simple to obtain a photo image of the table during a Matlab program.
4.3.5 Image processing in hardware
Rather than processing the photo from the camera using software on a computer,
it is possible to design dedicated hardware to accomplish the same tasks. Image
processing hardware is used in industrial applications where no interface with a
computer is required. This allows the image processing unit to be much more compact and mobile. However, this is not an advantage for the purposes of this project,
since a computer is required to run the shot selection software in any case.
Image processing hardware has another advantage over computer software. Hardware is much faster than software, as can be seen by its use over ten years ago to
track a moving pool ball at 30 frames per second (?). This processing speed cannot
be achieved with software analysis even on today’s computers.
Image processing hardware, however, is less flexible than its software counterpart.
A given hardware processor has a number of built-in functions, but does not have
the flexibility to perform any others.
Since the University of Adelaide does not own any image processing hardware, the
equipment would have to be purchased. The cost of doing so was considered prohibitive
for this project.
4.4 Image representation
A discrete two-dimensional greyscale (black and white) image is represented by a
matrix, $f$, where the entry $f_{ij}$ represents the intensity of the pixel in the $i$th row
and the $j$th column. Equivalently, the image can also be represented by a function,
$f$, with $f(i, j) = f_{ij}$.
A discrete two-dimensional colour image must be represented by three two-dimensional
arrays, $f_R$, $f_G$, $f_B$, where $f_R(i, j)$ represents the intensity of the red
component of the pixel in the $i$th row and the $j$th column, and $f_G(i, j)$ and $f_B(i, j)$
represent the intensities of the green and blue components respectively.
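As a minimal illustration of this representation, the following Matlab sketch loads a photo and splits it into the three colour component arrays described above (the file name is hypothetical).

```matlab
% A minimal sketch of the colour image representation described above.
% The file name 'table.jpg' is an assumption.
img = im2double(imread('table.jpg'));   % M x N x 3 array, values in [0, 1]
fR = img(:, :, 1);                      % red component,   fR(i, j)
fG = img(:, :, 2);                      % green component, fG(i, j)
fB = img(:, :, 3);                      % blue component,  fB(i, j)
```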
4.5 Template matching
It is known approximately what the image of a coloured ball against the felt background
will look like. Thus, a small template of this image can be created. Then
the regions of the image of the table which match this template most closely must
be found. This technique is known as template matching, and is discussed in many
image processing textbooks, for example, ?.
Let $f$ be a greyscale image, and let $g$ be a template. Consider a region $R(x, y)$ of $f$
which is the same size as $g$ and displaced by $(x, y)$. (For example, if $g$ is a $2 \times 3$
image, then $R(4, 6) = \{(5, 7), (5, 8), (5, 9), (6, 7), (6, 8), (6, 9)\}$.) To compare the
template $g$ to this region of the image $f$, the mismatch energy is defined as (?)
$$\sigma^2(x, y) = \sum \left( f(i, j) - g(i - x, j - y) \right)^2, \tag{4.8}$$
where the sum is taken over the indices $(i, j) \in R(x, y)$. Expanding the brackets,
$$\sigma^2(x, y) = \sum f(i, j)^2 + \sum g(i - x, j - y)^2 - 2 \sum f(i, j)\, g(i - x, j - y). \tag{4.9}$$
Therefore, this can be minimised by maximising the cross-correlation,
$$c_{fg}(x, y) = \sum f(i, j)\, g(i - x, j - y). \tag{4.10}$$
This cross-correlation can be computed very quickly in the frequency domain, as
$$C_{fg}(u, v) = F(u, v)\, G^{*}(u, v), \tag{4.11}$$
where $C_{fg}(u, v)$, $F(u, v)$ and $G(u, v)$ are the two-dimensional discrete Fourier transforms
of $c_{fg}(x, y)$, $f(x, y)$ and $g(x, y)$, and $^{*}$ denotes complex conjugation.
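A minimal sketch of computing such a cross-correlation via the FFT in Matlab is shown below. It assumes greyscale arrays f (image) and g (template) already exist, and zero-pads the template to the image size; the result is a circular cross-correlation, which is adequate away from the image borders.

```matlab
% A minimal sketch of computing the cross-correlation c_fg via the FFT,
% assuming greyscale arrays f (image) and g (template) are in the workspace.
[M, N] = size(f);
F = fft2(f);
G = fft2(g, M, N);                % zero-pad the template to the image size
C = F .* conj(G);                 % equation (4.11)
c = real(ifft2(C));               % circular cross-correlation; c(1,1) is zero shift
[~, idx] = max(c(:));
[m, n] = ind2sub(size(c), idx);   % peak location to the nearest pixel
```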
However, the cross-correlation (??) is not necessarily the best estimate of similarity
between the template g and the region R(x, y) of the image f . In equation (??), it
can be seen that large variations in the average intensity over the region R(x, y) of
the image f will greatly affect the mismatch energy.
A better measure of the similarity is given by the normalised cross-correlation,
$$\gamma_{fg}(x, y) = \frac{\sum [f(i, j) - \bar{f}_{R(x,y)}]\,[g(i - x, j - y) - \bar{g}]}{\sqrt{\sum [f(i, j) - \bar{f}_{R(x,y)}]^2}\ \sqrt{\sum [g(i - x, j - y) - \bar{g}]^2}}, \tag{4.12}$$
where $\bar{f}_{R(x,y)}$ is the mean value of $f$ over the region $R(x, y)$ of the image, and $\bar{g}$ is
the mean value of $g$ over the entire template. The computation of the normalised
cross-correlation (??) is significantly slower than that of the (unnormalised) cross-correlation (??).
These cross-correlation functions only work for greyscale images. In order to correlate
two colour images, it is necessary to split each image into its three colour
components, then perform the normalised cross-correlation between corresponding
components. To then find the locations at which there is high correlation in all
three colour components, the three results must be combined in some manner
to obtain a single correlation map. This may be achieved by simply averaging the
three cross-correlations:
$$\bar{\gamma}_{fg}(x, y) = \frac{\gamma_{fg,R}(x, y) + \gamma_{fg,G}(x, y) + \gamma_{fg,B}(x, y)}{3}. \tag{4.13}$$
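One way to realise this averaging in Matlab is to apply the Image Processing Toolbox function normxcorr2 to each colour component and average the results; a minimal sketch follows, with the array names img and tmpl assumed.

```matlab
% A minimal sketch of the colour-averaged normalised cross-correlation (4.13),
% assuming M x N x 3 arrays img (photo) and tmpl (ball template).
gamma = zeros(size(img, 1) + size(tmpl, 1) - 1, ...
              size(img, 2) + size(tmpl, 2) - 1);
for ch = 1:3
    gamma = gamma + normxcorr2(tmpl(:, :, ch), img(:, :, ch));
end
gamma = gamma / 3;                       % averaged cross-correlation
[~, idx] = max(gamma(:));
[m, n] = ind2sub(size(gamma), idx);      % peak to the nearest pixel
```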
4.5.1 Interpolation
Once the correlation map between the photo and the template has been found, it is
not difficult to find a peak. The location (m, n) of the maximum value in the cross-correlation is determined. (If there is more than one position with this maximum
value, then any one is chosen.) Here, m and n are the integer indices of an entry in
the matrix where the maximum value is found.
However, this peak will only provide the location of the centre of a ball in the
photograph to the nearest pixel. For subpixel accuracy, the “true” location of the
maximum of the peak (that is, where the maximum would occur if the photo were
perfect and continuous, rather than noisy and discrete) must be interpolated from
the known values.
To achieve this, a neighbourhood region of $(m, n)$ is examined. A region consisting
of $a$ rows and columns on each side of $(m, n)$ is extracted from the cross-correlation,
resulting in a $(2a + 1) \times (2a + 1)$ array of values $(m + i, n + j)$ for $i, j = -a, -a + 1, -a + 2, \ldots, a - 1, a$.

This region is then interpolated $k$ times, each time increasing the number of points
in each direction by a factor of 2. This results in a $(2^{k+1} a + 1) \times (2^{k+1} a + 1)$ array
of interpolated values $(m + i, n + j)$ for $i, j = -a, -a + \frac{1}{2^k}, -a + \frac{2}{2^k}, \ldots, a - \frac{1}{2^k}, a$.
Figure 4.3: Template of the white ball.
The maximum of the new array of interpolated values of the cross-correlation can
now be found. If the interpolation method approximates the “true” shape of the
cross-correlation peak sufficiently well, then the location of this new maximum
should provide a more accurate approximation of the “true” location of the maximum of the peak.
The interpolation method was tested on a real photo taken of a portion of the table.
A template of the white ball on the blue background is shown in Figure ??. The
white colour used in the creation of this template was found by taking a similar
photo, and averaging the colours of a set of pixels known to belong to the white ball.
Similarly, the colours of the yellow, red and black balls, as well as the blue colour of
the felt, were found in this fashion.
This template was then cross-correlated with the image of the table, in each colour
component. The three resulting cross-correlations were then averaged. Results are
shown in Figure ??.
The highest peak in the averaged cross-correlation was found, and a neighbourhood
region of this pixel was analysed — in this case, a 3 × 3 region (since a = 1). The
values were then interpolated, by performing spline interpolation k = 4 times. This
is shown in Figure ??. Thus the location of the peak could be estimated more
accurately.
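A minimal sketch of this refinement step is given below, assuming the averaged cross-correlation (here called gamma, as in the earlier sketch) and its integer peak location (m, n) are already known. For simplicity it interpolates directly onto the grid refined k times, rather than doubling the grid k separate times.

```matlab
% A minimal sketch of the subpixel refinement by spline interpolation,
% assuming gamma (averaged cross-correlation) and its integer peak (m, n).
a = 1; k = 4;
nb = gamma(m-a:m+a, n-a:n+a);                 % (2a+1) x (2a+1) neighbourhood
[J, I] = meshgrid(-a:a, -a:a);                % coarse grid about the peak
step = 1 / 2^k;
[Jq, Iq] = meshgrid(-a:step:a, -a:step:a);    % grid refined k times
nbFine = interp2(J, I, nb, Jq, Iq, 'spline'); % interpolated neighbourhood
[~, idx] = max(nbFine(:));
[iSub, jSub] = ind2sub(size(nbFine), idx);
peak = [m + Iq(iSub, jSub), n + Jq(iSub, jSub)];   % subpixel peak estimate
```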
This was repeated with templates of the red and yellow balls, until the locations of
all balls were found. The original image of the balls is shown in Figure ??, with
circles plotted over the image where the program estimated the balls to be. The
results seemed very accurate. However, it is difficult to quantify the error, since the
true positions of balls within a photo cannot be found to sufficient accuracy.
[Figure 4.4 comprises four panels: the red, green and blue cross-correlations, and the averaged cross-correlation, each shown on a colour scale from 0 to 1.]
Figure 4.4: Cross-correlations between an image and the template.
Figure 4.5: Neighbourhood of a peak of the averaged cross-correlation. Initially, a
3 × 3 pixels region around the peak is considered (left). Then spline interpolation
is performed 4 times to produce the interpolated cross-correlation (right).
Figure 4.6: Results of the normalised cross-correlation interpolation program.
A Matlab program was written to investigate the accuracy produced by different
methods of interpolation. The program created an ideal image of a coloured ball
against a blue background, with radius r and centred at a random position C. A
pixel with centre P was coloured iff |P C| < r. A template of the coloured ball
was also created in a similar fashion, except that the image of the ball was centred
at a known position. The normalised cross-correlation between the image and the
template was then performed for each colour component, and the results averaged.
The averaged cross-correlation was then interpolated (using various methods), and
the peak of the interpolated cross-correlation Ĉ was then found. This was compared to the true centre of the ball, C. The program was repeated for 100 random
placements of the ball, and the root mean squared error |C Ĉ|rms was found.
At the time of these tests, a camera for use in the project had not yet been selected,
so a camera of resolution 1920 × 1440 pixels was assumed. Then, with a ball of radius
approximately 25 mm and a table of length approximately 1600 mm, the maximum
radius of the ball in an image which captures the entire table is
$$\frac{25\ \mathrm{mm}}{1600\ \mathrm{mm}} \times 1920 = 30$$
pixels. This was the value used in the program.
Matlab offers two (non-trivial) methods of two-dimensional interpolation, these
being cubic interpolation and spline interpolation. Additionally, a Matlab function
for performing two-dimensional bandlimited interpolation (achieved by zero-padding
in the frequency domain) was written. Preliminary testing showed that the error in
the location of the ball determined using cubic interpolation was, in general, twice
as large as that obtained using spline interpolation, so cubic interpolation was not
considered further.
The values of the parameters a (relating to the size of the region interpolated)
and k (the number of times interpolation is performed) were varied in order to
find a combination which yielded the least error. Results obtained using spline
interpolation and bandlimited interpolation are shown in Table ?? and Table ??
respectively. Note that these results only demonstrate the accuracy of the method
when an ideal image is used.
Table 4.3: Error in centre location using spline interpolation.

        a = 1            a = 2            a = 3            a = 5
k = 2   0.0901 pixels    0.1020 pixels    0.0959 pixels    —
k = 3   0.0650 pixels    0.0727 pixels    0.0800 pixels    0.0726 pixels
k = 4   0.0550 pixels    0.0702 pixels    0.0699 pixels    0.0672 pixels
k = 5   0.0558 pixels    0.0683 pixels    0.0688 pixels    0.0677 pixels
k = 6   0.0573 pixels    0.0684 pixels    0.0699 pixels    —
k = 7   0.0570 pixels    0.0707 pixels    —                —
It was found that using bandlimited interpolation with the parameters a = 4, k = 6
yielded the most accurate results. However, the interpolation was quite slow, since
the amount of computation required increases exponentially with k. Therefore,
as a compromise between accuracy and speed, it was decided that bandlimited
interpolation with the parameters a = 4, k = 5 would be used. This method
estimated the location of the centre of the ball with a root mean squared error of
0.0433 pixels, which is equivalent to an accuracy of 0.0361 mm.
Table 4.4: Error in centre location using bandlimited interpolation.

        a = 2            a = 4            a = 8
k = 1   0.1518 pixels    —                —
k = 2   0.0927 pixels    —                —
k = 3   0.0658 pixels    0.0521 pixels    0.0514 pixels
k = 4   0.0562 pixels    0.0454 pixels    0.0471 pixels
k = 5   0.0500 pixels    0.0433 pixels    0.0481 pixels
k = 6   0.0564 pixels    0.0414 pixels    0.0436 pixels

4.5.2 Speed considerations
It was found that using cross-correlation and interpolation as described in the
previous section was slow when actual high resolution images were processed. Locating balls of four different colours requires twelve cross-correlations (three colour
components for each ball colour), which took over a minute using the normalised
cross-correlation on 640 × 480 images. Since the images to be used in this project
were over twenty times larger than this, the actual time would be many times longer,
which was considered unacceptably slow.
Whether the unnormalised cross-correlation may be used instead of the normalised
cross-correlation has not been determined. The unnormalised cross-correlation is
more difficult to interpret. For example, the red component of an image of the yellow
ball produces a higher peak when cross-correlated with the red component of a
template of a red ball than when cross-correlated with the red component of a template of a yellow
ball. However, it may be possible to interpret these results correctly, using a more
complex method of combining the three cross-correlation results for each colour
component. This could be advantageous, as the unnormalised cross-correlation is
much faster to compute.
The use of the unnormalised cross-correlation would also be advantageous in that
the two-dimensional bandlimited interpolation can be implemented extremely simply
and quickly in the frequency domain. Since the unnormalised cross-correlation is
first computed in the frequency domain, the result can be immediately zero-padded
before conversion back into the space domain, thus saving the time which would
have been required to perform two 2-D discrete Fourier transforms. The normalised
cross-correlation, however, cannot be computed completely in the frequency domain,
and therefore the Fourier transform must be calculated before the interpolation can
be performed.
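A sketch of two-dimensional bandlimited interpolation is given below. It uses Matlab's interpft (which performs the FFT, zero-padding and inverse FFT internally) rather than reusing an already-computed frequency-domain cross-correlation, so it illustrates the idea rather than the time saving described above.

```matlab
% A minimal sketch of 2-D bandlimited interpolation via the FFT, assuming a
% real array c (e.g. the cross-correlation over a peak neighbourhood) and an
% upsampling factor L = 2^k. interpft performs 1-D FFT interpolation, so it
% is applied once along each dimension.
L = 2^5;                                        % k = 5
[M, N] = size(c);
cFine = interpft(interpft(c, L*M, 1), L*N, 2);  % band-limited upsampling
[~, idx] = max(cFine(:));
[iF, jF] = ind2sub(size(cFine), idx);           % peak on the refined grid
subpix = [(iF - 1)/L + 1, (jF - 1)/L + 1];      % peak in original pixel units
```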
Another possible method considered for speeding up the program was to produce
only one template of a white ball on the black background (instead of having four
templates of different coloured balls on a blue background). This would mean that
only three cross-correlations would need to be performed. Different coloured balls
would then produce peaks of different heights in the cross-correlation, and it was
hypothesised that, by analysing the magnitudes of the peaks at corresponding locations in the three cross-correlations, the colour of the ball producing the peaks
could be determined.
The problem of speed comes down to the fact that for the large images produced
by the camera, only a small percentage of the image will contain information regarding the ball locations. The process of locating, say, the white ball would be
significantly faster if it was known that the white ball was located within a smaller
region. Methods of searching the image quickly to locate the balls approximately
were investigated. Using such a fast algorithm, the ball locations could be determined approximately, so that only small regions from the photo where the balls were
known to be would be cross-correlated with the templates.
One method of quickly finding the approximate ball locations is to first perform the
cross-correlations using decimated copies of the original image and the template.
Then smaller regions in the neighbourhoods of the peaks found could be searched
using the high resolution image and the template. This method is known as a
coarse-fine search, and is the method that was finally implemented in the program
due to its simplicity. Other methods researched have been included in the following
sections.
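A minimal sketch of the coarse-fine search is given below. It assumes colour arrays img and tmpl, and a hypothetical helper colourXcorr that averages normxcorr2 over the three colour components, as sketched earlier.

```matlab
% A minimal sketch of the coarse-fine search. The helper colourXcorr(image,
% template) is a hypothetical function that averages normxcorr2 over the
% three colour components.
D = 8;                                          % decimation factor
imgC  = img(1:D:end, 1:D:end, :);               % coarse (decimated) copies
tmplC = tmpl(1:D:end, 1:D:end, :);
gammaC = colourXcorr(imgC, tmplC);              % coarse correlation map
[~, idx] = max(gammaC(:));
[mc, nc] = ind2sub(size(gammaC), idx);
row = (mc - size(tmplC, 1)) * D;                % approximate location in the
col = (nc - size(tmplC, 2)) * D;                % full resolution image
w = 2 * size(tmpl, 1);                          % window about the estimate
rows = max(1, row - w) : min(size(img, 1), row + w);
cols = max(1, col - w) : min(size(img, 2), col + w);
gammaF = colourXcorr(img(rows, cols, :), tmpl); % fine search in a small region
```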
4.6 Segmentation
Several techniques for locating the centre of the balls depend upon the ability to
decompose an image into its components. In other words, it is necessary to know
which pixels belong to the image of the white ball, which pixels belong to images
of red balls, which pixels belong to the image of the blue felt, and so on. This is
known as image segmentation.
? lists several image segmentation techniques. The most important of these is
amplitude thresholding. For greyscale images, the pixels of an object are characterised by an amplitude interval. For example, the pixels of a bright object against a
dark background may be identified by finding all pixels with an intensity amplitude
greater than a certain threshold value.
For colour images, instead of a characteristic amplitude interval, pixels would be
characterised by a region in the space where the three dimensions are the red, green
and blue components of the colour. For example, white pixels might be characterised
by the region in which all three components are greater than 0.9. The difficulty
with this method lies in choosing characteristic criteria that are robust, and will not
miscategorise pixels.
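A minimal sketch of such characteristic criteria is shown below; the component arrays are those of Section 4.4, and the threshold values are purely illustrative assumptions.

```matlab
% A minimal sketch of amplitude thresholding in RGB space, assuming the
% component arrays fR, fG, fB from Section 4.4. Threshold values are
% illustrative only, not the values used in the project.
whiteMask  = fR > 0.80 & fG > 0.80 & fB > 0.80;   % bright in all components
yellowMask = fR > 0.70 & fG > 0.60 & fB < 0.30;   % strong red and green, weak blue
redMask    = fR > 0.55 & fG < 0.35 & fB < 0.35;   % strong red only
feltMask   = ~(whiteMask | yellowMask | redMask); % everything else
```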
One way of choosing criteria is to plot a histogram of each colour component of an
image. For example, if a camera produces values ranging from 0 to 255 for each
colour component, then a histogram of the red component shows how many pixels
have a red component of each value from 0 to 255. Thus if an image of a red ball
and a yellow ball against a blue background is used, then the red, yellow and blue
pixels can be plotted separately on the histogram, and suitable threshold values for
separating these pixels can be chosen graphically. An example of this is shown in
Figure ??.
The problem with this is that there is often significant overlap between, for example,
red ball pixels and yellow ball pixels in the histogram for any one colour component.
Therefore, for the characteristic criteria to be robust, it is not sufficient to simply
find threshold values for one or more colour components.
Another graphical method is to create a three-dimensional histogram, by plotting
the pixels in a three-dimensional space with orthogonal axes corresponding to the
[Figure 4.7 is a histogram of the blue-component intensity (0 to 1) against the normalised number of pixels, with separate curves for white, yellow, red and blue pixels.]
Figure 4.7: Histogram of the blue components of different coloured pixels. This can
be used to determine threshold values for segmentation. For example, all pixels with
a blue component of less than 0.2 could be defined as yellow pixels. It can be clearly
seen that this criterion is not robust.
red, green and blue colour components. Then a three-dimensional region can be
chosen which characterises, for example, yellow ball pixels.
It can be easier to determine such a region if the axes correspond to colours other
than red, green and blue. For example, pixels could be represented as a sum of
“ball red”, “ball yellow” and “felt blue” components, where “ball red” refers to
the specific shade of red which is the averaged colour of red ball pixels found in
images used. Then by plotting a three-dimensional histogram with these axes, the
pixels belonging to different elements of the image may be more clearly separated.
Through testing in Matlab, it was found that this idea of changing the basis of
the colour space to define regions was no more robust than using the normal RGB
(red, green and blue) colour components.
Ideally, the thresholding would be performed automatically, based on a sample image
of the table. However, this would be difficult to implement, since the techniques for
defining characteristic criteria are largely based on estimating values from graphical
data.
Another method of segmentation is component labeling. Two neighbouring pixels
are said to be connected if they are sufficiently similar. Of course, this must be
rigorously defined. For example, it might be decided that if the three corresponding
colour components of two neighbouring pixels each differed by less than 0.1, then
the two pixels are connected. Then the image can be divided into connected sets.
Again, the difficulty here is defining connectivity criteria which are sufficiently
robust. This method is even more sensitive to poorly defined criteria than thresholding, for if two pixels are found to be connected by the criteria, when in fact
they should belong to different sets, then the two sets are merged into one set of
indistinguishable pixels.
It has been found through experimental Matlab testing that these methods of
segmentation fail in large part due to the texture of the blue felt. Because the
colour distribution is very uneven and widely spread, it is difficult to define criteria
which will not fail at some point due to variations in the felt colour. Filtering out the
texture of the background could make these segmentation methods more suitable.
4.6.1 Centroid method
A correctly segmented image should consist of several sets of pixels, each corresponding to the image of a ball, and the remaining set representing the felt background.
It is now necessary to find the position of the centre of the ball. A simple method
for doing this is to find the centroid of all the pixels in the set. For a set of N pixels
with coordinates $(x_i, y_i)$, the centroid is found at coordinates
$$\left( \frac{1}{N} \sum_{i=1}^{N} x_i,\ \frac{1}{N} \sum_{i=1}^{N} y_i \right).$$
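A minimal sketch of this calculation, assuming a logical mask of the segmented ball pixels (for example the hypothetical yellowMask from the earlier thresholding sketch):

```matlab
% A minimal sketch of the centroid calculation for one segmented ball,
% assuming a logical mask of its pixels (name is hypothetical).
[rows, cols] = find(yellowMask);          % coordinates of the ball pixels
centroid = [mean(rows), mean(cols)];      % (row, column) of the estimated centre
```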
A Matlab program was written to determine the accuracy of this method. The
program created an ideal image of a coloured ball against a blue background, with
radius r and centred at a random position C. A pixel with centre P was coloured iff
|P C| < r. The centroid Ĉ of all the coloured pixels was then found and compared
to the true centre of the ball, C. The program was repeated for 1000 random
placements of the ball, and the root mean squared error |C Ĉ|rms was found.
Testing was undertaken to determine the accuracy of the ball centre location method
of finding the centroid. Results are shown in Table ??. Note that these results only
demonstrate the accuracy of the method when an ideal image is used.
Table 4.5: Error in centre location using the centroid method.

camera resolution      radius of ball   |C Ĉ|_rms
640 × 480 pixels       10 pixels        0.0524 pixels = 0.131 mm
1280 × 960 pixels      20 pixels        0.0340 pixels = 0.0425 mm
1920 × 1440 pixels     30 pixels        0.0276 pixels = 0.0230 mm
2560 × 1920 pixels     40 pixels        0.0247 pixels = 0.0154 mm
Effects on the ball image, such as reflections and edge aliasing, introduce small areas
of colour that do not closely match the actual colour of the ball. If thresholding is
used to determine which pixels are part of the ball and which are not, it will produce
a set of pixels that does not represent the entire ball. This will bias the average and
produce an offset centre location estimate.
For this reason, even though the results given by the centroid method are approximately twice as accurate as those given by the interpolated normalised cross-correlation method (both methods being applied to ideal images), it was decided
that the latter method was likely to be more robust to image effects such as reflections and glare. Initial experiments in adding noise to ideal images created in
Matlab, in order to simulate camera image noise, supported this hypothesis.

The centroid method is, however, much faster than cross-correlation, and so remains
useful for providing a quick, rough estimate of the position of a ball.
4.6.2 Erosion
The morphological erosion operation was also considered as a means of finding the
centre of a ball, once the pixels of the image of the ball have been successfully
segmented. The erode operation works by assigning a pixel a value of 1 if that pixel
and the four adjacent pixels are all equal to 1, and assigning a value of 0 otherwise.
An extension to this function is the ultimate erode function, which has the result
of iterating the erode function until only one pixel from each connected set of 1s
remains equal to 1.
Thus, if a set of pixels of an image were determined to be yellow pixels, then the
ultimate erode function could be applied, leaving one pixel remaining from the image
of each yellow ball, which would be at the approximate centre of the ball image.
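The following sketch shows how these operations might be performed with the Image Processing Toolbox; imerode with a radius-1 diamond structuring element applies exactly the pixel-plus-four-neighbours rule, while bwulterode is the toolbox's ultimate erosion (computed via the distance transform rather than by literal iteration). The mask name is an assumption.

```matlab
% A minimal sketch of the erosion operations described above, assuming a
% logical mask ballMask of segmented ball pixels.
se = strel('diamond', 1);                 % centre pixel and its 4 neighbours
onePass = imerode(ballMask, se);          % a single erode step
centres = bwulterode(ballMask);           % ultimate erosion: isolated pixels remain
[rows, cols] = find(centres);             % approximate ball centre(s)
```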
However, this function, when used with an actual photo, proved to have very high
levels of noise associated with the output. This was due to effects such as reflections
on the balls and non-uniform felt colour. Also, because the function used iteration,
it was not as fast as the centroid method.
4.7 Distortion filtering
In order for a lens to produce a wide angle image, the three dimensional world is
projected onto a curved surface, which is in turn projected onto a plane (?). This
allows a camera to be positioned closer to its subject without losing the breadth of
field, and is a property of all lenses. Therefore, for the purpose of this project, this
lens distortion has the undesirable effect of warping the image such that the image
of the pool table that is processed by the computer is not an ideal projection, but
rather one that has a non-linear distortion. See Figure ?? for a depiction of this.
Figure 4.8: A depiction of lens distortion. From ?.
There are many methods published for removing the effects of lens distortion (see
Section ??). All of the methods introduced use various assumptions about the lens to
generate equations that describe lens distortion factors and functions to undistort an
image using these parameters. They share one unfortunate aspect in that they are
all computationally intensive, and thus, if used for this project, would take too long
to perform their task at the beginning of each turn. For example, using a test image
that was one quarter the size of the images used by the robot, it took somewhere in
the order of 30 seconds to undistort this image using Bouguet’s Matlab undistort
toolbox (?). To perform this function (which would take much longer than 30
seconds with the full size images from the camera) at the beginning of every shot
that the robot plays was considered infeasible.
A new method of removing the effects of distortion was required. Undistorting the
image and then finding the ball locations clearly was a technique that took too long
due to the large size of the images. Instead, the balls were located in the distorted
image, and the positions were subsequently adjusted to their actual locations.
In order to achieve this, the lens distortion was modelled by Matlab very simply.
A grid of squares constructed from string was placed across the surface of the pool
table. A photo of the table and the grid was then taken. The photo can be seen in
Figure ??. The pixel locations of each of the vertices of the grid were determined and
entered into Matlab. The actual spatial locations of the vertices of the grid were
also entered. A piecewise linear transform was then created to map corresponding
sections of the grid to each other.
Figure 4.9: A photo of a grid of squares over the pool table, taken by the robot’s
camera.
Once the locations of the balls within the distorted image were found, the transform
could then be applied to these ball location coordinates in order to produce the
actual spatial coordinates of these points on the table. While this method definitely
reduces the errors caused by distortion, it is still an inexact method. The main
problem is that locating the balls in the distorted image using cross-correlation will
produce anomalous results, since in the distorted images, the balls often do not
appear to be round. However, the template is of a perfectly round ball, and so
the template matching process will not identify the centres of the balls reliably.
Therefore, a solution which removes the lens distortion before the image processing
occurs would be preferable.
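A sketch of such a piecewise linear mapping is given below. It uses scatteredInterpolant, which interpolates linearly over a triangulation of the grid vertices, as one way of realising the transform; the variable names are hypothetical.

```matlab
% A minimal sketch of the piecewise linear pixel-to-table mapping, assuming:
%   gridPx - N x 2 pixel coordinates of the string-grid vertices
%   gridMm - N x 2 measured table coordinates (mm) of the same vertices
Fx = scatteredInterpolant(gridPx(:,1), gridPx(:,2), gridMm(:,1), 'linear');
Fy = scatteredInterpolant(gridPx(:,1), gridPx(:,2), gridMm(:,2), 'linear');

ballPx = [1234.6, 876.2];                       % ball centre found in the photo
ballMm = [Fx(ballPx(1), ballPx(2)), ...
          Fy(ballPx(1), ballPx(2))];            % corresponding table position
```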
4.8 Locating edges and pockets
The task of finding the pockets and the edges of the table is one that needs to
be performed only when the camera is moved, and perhaps also periodically for
calibration. Because these operations only have to be performed occasionally, it is
not imperative that they are fast.
Finding the location of the pockets and finding the edges of the table are interconnected. Once the lines corresponding to the edges of the table have been found,
then the gaps in these lines will correspond to pockets.
Only a limited amount of work was spent investigating algorithms to locate the
edges of the table. It was assumed that the position of the camera would remain fixed
with respect to the table for long periods, so manually inputting the positions of
the edges of the table area to the software was an adequate solution. Of course,
automated detection is preferable, particularly if the results are more accurate than
those obtained by manual input.
There is a Matlab function which finds edges in an image, using a gradient operator
to determine where the intensity of the image undergoes a sufficient change to qualify
as an edge. Early experiments in using gradient operators produced images full of
noise, thus failing to find connected lines for the edges of the table.
Once the image has been successfully converted to a binary image showing the lines
of the edges of the table, the Radon transform may be used to determine where the
lines are located. The Radon transform is closely related to the Hough transform,
which was used by the team at the University of Waterloo (refer to Section ??) to
detect the edges of their pool table.
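A minimal sketch of this edge-plus-Radon approach is given below, assuming an undistorted greyscale image of the table; as noted above, a noisy edge map would first need cleaning for this to work reliably.

```matlab
% A minimal sketch of straight-line detection with an edge map and the
% Radon transform, assuming an undistorted colour image img of the table.
grey = rgb2gray(img);
BW = edge(grey, 'canny');            % binary edge map (gradient based)
theta = 0:179;
[R, xp] = radon(BW, theta);          % Radon transform over all angles
[~, idx] = max(R(:));
[pIdx, tIdx] = ind2sub(size(R), idx);
linePerp = xp(pIdx);                 % perpendicular offset of the strongest line
lineAngle = theta(tIdx);             % and its angle in degrees
```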
The biggest problem involved in the edge detection is due to lens distortion. The
edges of the table are located in the image in the areas that are affected most by the
distortion. Without filtering to remove the distortion, the images of the table edges
will be curved lines, and will therefore not be revealed by the Radon transform. So
there are two ways to find the table edges with this in mind. The first is to apply
a filter to remove the lens distortion from the image, creating an image of the table
with straight line edges, which can be located using the Radon transform. The other
method involves finding specific points on the cushions and constructing the table
edges around these points. These single points can be undistorted after they are
found, using the undistorting transform used to adjust the ball positions.
4.9 Final eye software
The vision system software begins by calling the program TakePhoto. A photo of
the table is taken, and saved to an image file. The software reads in this image
file and crops it, so that the brown outside edges of the table and all extraneous
surroundings are removed.
The image is decimated by a factor of 8, so that it is 64 times smaller. Then, for
each colour of ball (white, black, red, yellow), the positions of the balls of that colour
on the table are found using a coarse-fine search.
First of all, the decimated image is cross-correlated with an appropriate computer-generated template image of the ball to be found. Three cross-correlations are
performed, one for each of the red, green and blue colour components, and the
results are averaged.
If the white or black ball is being located, then it is known that there is exactly one
ball of this colour on the table. Therefore, the highest peak in the average cross-correlation is found, and this is taken as the approximate position of the white or
black ball.
If coloured balls, say red, are being located, then it is not known how many balls of
the colour are on the table. The peaks in the average cross-correlation corresponding to the red balls should be higher than the peaks corresponding to differently
coloured balls. The highest peak is found by finding the maximum value of the
average cross-correlation. If the maximum value is higher than 0.65 (the normalised
cross-correlation ranges from -1 to 1), then the software determines that this peak
corresponds to a red ball. If the maximum value is less than 0.65, then the software
determines that there are no red balls on the table.
The software then examines the next highest peak. It achieves this by eliminating
the first peak from the average cross-correlation, setting a circular region (the same
size as the image of a ball) around the maximum value to zero. The maximum value
of this modified cross-correlation is then found, and is again compared to 0.65 to
determine whether this peak corresponds to a red ball. This process is repeated
until there are no more peaks remaining which are higher than 0.65, so that all of
the red balls have been found.
The value 0.65 was chosen after examination of a large number of averaged cross-correlations. The maximum values of the peaks corresponding to the balls being
located were generally 0.8 or higher, while the maximum values of the peaks corresponding to differently coloured balls were generally less than 0.5. The threshold value of 0.65 was found through testing to be a robust means of distinguishing
between balls of different colours.
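A minimal sketch of this repeated peak extraction is given below, assuming the averaged cross-correlation gamma of the decimated image and the approximate ball radius in (decimated) pixels, rPx.

```matlab
% A minimal sketch of repeated peak extraction with a 0.65 threshold,
% assuming gamma (averaged cross-correlation) and rPx (ball radius in pixels).
threshold = 0.65;
found = [];
[X, Y] = meshgrid(1:size(gamma, 2), 1:size(gamma, 1));
while true
    [peak, idx] = max(gamma(:));
    if peak < threshold
        break;                                   % no more balls of this colour
    end
    [m, n] = ind2sub(size(gamma), idx);
    found(end+1, :) = [m, n];                    % approximate ball position
    gamma((X - n).^2 + (Y - m).^2 <= rPx^2) = 0; % blank out a ball-sized region
end
```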
Once the approximate positions of all the balls of a specific colour are found, these
positions can be refined by examining the original (undecimated) image. For each
of the ball positions found, the original image is cropped to a small region around
the position. Then cross-correlation and bandlimited interpolation are carried out,
as in Section ??, to find the positions accurately.
The Matlab code for the program is listed in Appendix ??.
4.10 Summary
A variety of cameras were considered as options for the image system hardware. The
one chosen was the Canon EOS D60 for its high resolution, easy connectivity, and
flexibility. This was interfaced with the computer using third party remote capture
software that provided a DLL. A console program that utilised this DLL, and which
could be run directly from Matlab, was written.
The function used to determine the ball positions was the 2-D cross-correlation. In
order to compute the functions in an appropriate time frame, a coarse-fine search
was used which cross-correlated a decimated image to find the approximate ball
locations. Small regions of the original image where balls were known to be were
then cross-correlated with the template image in order to find the ball location to
the accuracy of one pixel. Bandlimited interpolation was then used to find the ball
positions to sub-pixel accuracy. Once the ball positions were known, they were
passed through a filter which corrected their positions to account for distortion of
the lens.
5 The brain
5.1 Introduction
Algorithms for determining the best shot to take range from exceedingly simple to
tremendously complex. In general, a more complex procedure has the potential to
result in a more successful strategy. In this section, many possible design concepts
will be discussed, not all of which have been implemented. The “brain” of the robot
was programmed to be as intelligent as possible, given the time available to be spent
on its design.
Given the positions of the balls on the table, the program must:
• construct a list of shots,
• decide which shots are possible,
• choose the best shot, based upon some set of criteria.
5.2 Listing shots
At the beginning of any turn, there is a non-empty subset of balls on the table which
the robot should sink. For example, before any balls have been sunk in the game,
the robot can sink any red or yellow balls. Otherwise, the robot should sink either
the remaining red balls on the table, or the remaining yellow balls, or just the black
ball.
Let B0 denote the cue ball. Let n be the number of other balls remaining on the table
at the start of the robot’s turn. Let m be the number of balls on the table which the
robot should sink, so 0 < m ≤ n. Let these balls be denoted by B1 , B2 , . . . , Bm . This
set of balls which the robot should sink is always the same as the set of balls which
the cue ball may first collide with, in order to not foul. Let the remaining balls be
denoted by Bm+1 , Bm+2 , . . . , Bn . Finally, let the cushions on the four sides of the
table be denoted by C1 , C2 , C3 , C4 , and let the six pockets of the table be denoted by
P1 , P2 , P3 , P4 , P5 , P6 .
First consider a shot where the cue ball strikes a ball Bj , causing the ball Bj to
sink in pocket Pi . (It is assumed implicit, in this description and similar following
shot descriptions, that there are no other intermediate collisions with other balls, or
cushions.) Denote this shot path by Bj Pi . As there are 6 pockets Pi , and m balls Bj
which the robot should sink, there are 6m combinations for this type of shot path.
Next consider a more complex shot path where an extra ball is involved. First,
suppose the cue ball strikes a ball Bj , which then strikes another ball Bk , which is
then sunk in pocket Pi . Denote this shot path by Bj Bk Pi . As there are 6 pockets
Pi , m balls Bk which the robot should sink, and m − 1 other balls Bj which the cue
ball may first hit, there are 6m(m − 1) combinations for this type of shot path.
Alternatively, the cue ball could strike a ball Bj , causing the cue ball to change
direction and strike a second ball Bk , which is then sunk in pocket Pi . Denote this
shot path by [Bj ]Bk Pi . As for the previous case, there are 6m(m − 1) combinations
for this type of shot path.
Another possibility is that the cue ball strikes a ball Bj , which then strikes a ball
Bk , causing the ball Bj to change direction and be sunk in pocket Pi . Denote this
shot path by Bj [Bk ]Pi . Here, there are 6 pockets Pi , m balls Bj which the robot
should sink, and n − 1 other balls Bk , so there are 6m(n − 1) combinations for this
type of shot path.
A shot path can also be made more complex by introducing rebounds off the cushion.
Suppose the cue ball first bounces off cushion Ca and then strikes a ball Bj , which
is then sunk in pocket Pi . Denote this shot path by [Ca ]Bj Pi . There are 6 pockets
Pi , m balls Bj which the robot should sink, and 4 cushions Ca , so there are 24m
combinations for this type of shot path.
If instead the cue ball first strikes a ball Bj , which then bounces off cushion Ca and
is sunk in pocket Pi , then denote this shot path by Bj [Ca ]Pi . As for the previous
case, there are 24m combinations for this type of shot path.
Clearly more intermediate steps can be introduced in any of the three forms, Bk ,
[Bk ] or [Ca ], ad infinitum. With each additional step, the number of combinations
increases greatly. Table ?? shows a list of several shot path types, and the number
of different shot combinations for each type in terms of m and n. Specific values
are shown for m = 14, n = 15, corresponding to a game situation before any balls
have been sunk, so the shots considered are those where any red or yellow ball is
sunk. It is clear that there exists a systematic method of generating shot paths. The
number of these shots which should be considered is dependent upon the complexity
of calculation required for each shot, the amount of computational power available,
and the usefulness of the inclusion of the shots.
Table 5.1: Some shot path types.

Shot path type     Combinations         Number
Bj Pi              6m                   84
Bj Bk Pi           6m(m − 1)            1092
[Bj ]Bk Pi         6m(m − 1)            1092
Bj [Bk ]Pi         6m(n − 1)            1176
[Ca ]Bj Pi         24m                  336
Bj [Ca ]Pi         24m                  336
Bj Bk Bl Pi        6m(m − 1)(n − 2)     14196
Bj [Bk ]Bl Pi      6m(m − 1)(n − 2)     14196
[Bj ]Bk Bl Pi      6m(m − 1)(n − 2)     14196
Bj Bk [Bl ]Pi      6m(m − 1)(n − 2)     14196
Bj [Bk ][Bl ]Pi    6m(n − 1)(n − 2)     15288
[Bj ]Bk [Bl ]Pi    6m(m − 1)(n − 2)     14196
[Bj ][Bk ]Bl Pi    6m(m − 1)(n − 2)     14196
[Ca ]Bj Bk Pi      24m(m − 1)           4368
Bj Bk [Ca ]Pi      24m(m − 1)           4368
Bj [Ca ]Bk Pi      24m(m − 1)           4368
[Ca ][Bj ]Bk Pi    24m(m − 1)           4368
[Ca ]Bj [Bk ]Pi    24m(n − 1)           4704
Bj [Ca ][Bk ]Pi    24m(n − 1)           4704
[Bj ][Ca ]Bk Pi    24m(m − 1)           4368
[Bj ]Bk [Ca ]Pi    24m(m − 1)           4368
Bj [Bk ][Ca ]Pi    24m(n − 1)           4704
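As a small illustration of this systematic generation, the sketch below enumerates the two simplest shot path types for m sinkable balls and six pockets, and reproduces the corresponding counts in Table 5.1.

```matlab
% A minimal sketch of systematically generating shot paths of types Bj Pi
% and Bj Bk Pi for m sinkable balls and 6 pockets (names hypothetical).
m = 14;                                   % sinkable balls, as in Table 5.1
shots = {};
for i = 1:6                               % pockets P1..P6
    for j = 1:m                           % ball sunk directly: Bj Pi
        shots{end+1} = sprintf('B%d P%d', j, i);
    end
    for k = 1:m                           % Bk sunk after being struck by Bj
        for j = 1:m
            if j ~= k                     % Bj Bk Pi, with j and k distinct
                shots{end+1} = sprintf('B%d B%d P%d', j, k, i);
            end
        end
    end
end
numel(shots)                              % 6m + 6m(m-1) = 84 + 1092 = 1176
```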
The more complex a shot path is, the lower the probability that it can be successfully
executed. The initial launch of the cue ball will be subject to some random and bias
error due to the inevitable imperfections of the vision and actuation systems. Each
step in the shot path magnifies these errors, so that the executed shot deviates more
and more from the desired path. When a shot path is sufficiently complex that the
magnification of error is large, the probability of successfully executing the shot is
near enough to zero that it should not be considered a viable option.
Some steps in a shot path create a greater magnification of error than others. Specifically, planning a shot involving a collision between two moving balls can be discounted as infeasible. Deviations of the two balls from their paths will cause a
significant error in their point of collision (if they still manage to collide), which will
result in much greater deviations in the two balls’ paths after the collision. Additionally, calculating and executing such shots would require complete simulation of
the physics of the game (see Section ??), as well as extremely accurate calculation
and control of the force imparted to the cue ball. These shot paths will therefore
not be considered further.
The shot path determines the angle at which the cue ball must be struck. To
complete the description of a shot, the force with which the cue ball should be
struck must also be known. A certain minimum force will be required to ensure
that the shot path is carried out to its conclusion (that is, the sinking of a ball). It
may be desirable to consider a variety of shots using the same shot path but with
a different stroke force, as these would result in different configurations of the balls
at the end of the turn, some of which may be more favourable than others. Such
considerations are discussed in Section ??.
Note that in formulating the list of shots, only shots which result in the sinking
of a ball are considered. However, often in a game of pool, a player is faced with
a configuration such that the probability of sinking a ball is very low. In such a
situation it may be more desirable to play a shot which does not sink a ball, but
improves the configuration of the balls on the table, either increasing the potential
of future shots to sink balls or decreasing the potential of the opponent to sink
balls. Again, this is discussed in Section ??.
5.3 Possibility analysis
A shot is said to be possible if it is geometrically achievable for the balls to interact
in the prescribed manner with the desired outcome. This can be checked by first
calculating all the required ball paths on the table, and then checking to see if the
determined trajectory is feasible.
5.3.1 Calculating ball paths
The process of calculating the paths of the balls is best explained via an example.
Say there are six balls other than the cue ball remaining on the table. Let Ti be the
initial position of Bi , for i = 0, 1, 2, 3, 4, 5, 6. The shot path B1 [B3 ]B4 [C1 ]B5 B2 P5 will
be considered.
First consider the last ball in the path, B2 . In order for it to move towards P5 , it
must be struck by the ball B5 at a specific position, call this X5 . (See Section ??
for details.)
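The details of this calculation are given in the section cited; as a hedged illustration, the standard "ghost ball" construction gives the striking ball's centre position at the moment of contact:

```matlab
% A minimal sketch of the contact-position ("ghost ball") calculation: the
% centre position X at which a striking ball must be when it contacts ball B
% so that B is propelled towards target point P. Names are hypothetical;
% positions are 1x2 vectors in table coordinates.
function X = contactPosition(T_B, P, rB, rStriker)
    u = (P - T_B) / norm(P - T_B);     % unit vector from B towards the target
    X = T_B - (rB + rStriker) * u;     % striker centre at the moment of impact
end
```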
Next consider the previous ball in the path, B5 . In order for it to move towards X5 ,
it must be struck by the ball B4 at a specific position, call this X4 .
The previous ball, B4 , must arrive at X4 via the cushion C1 . If conservation of
translational kinetic energy is assumed so that, when a ball rebounds off a cushion,
the angle of incidence is equal to the angle of reflection, then it is clear that there is
only one direction in which B4 may be struck in order to achieve this. This can be
found mathematically by considering the line parallel to C1 but offset away from it by
a distance equal to the radius of the ball — this line is the rebound line associated
with [C1 ]. The point obtained by reflecting X4 through this reflection line is the
point towards which B4 should be struck. Denote by V4 the intermediate position
of B4 as it collides with the cushion C1 . In order for B4 to move towards V4 , it must
be struck by the ball B1 at a specific position, call this X1 .
The first ball, B1, must arrive at X1 via a collision with B3. Again, if conservation
of translational kinetic energy is assumed, and it is also assumed that the masses of
the balls are equal, then it can be shown (?) that, after the collision, the velocities
of B1 and B3 are perpendicular. Consider the position V1 of B1 as it collides with
B3; then the velocity of B1 is in the direction $\overrightarrow{V_1 X_1}$ and the velocity of B3 is in the
direction $\overrightarrow{V_1 T_3}$. Thus $\angle T_3 V_1 X_1$ is a right angle, and $|T_3 V_1|$ is equal to the sum of the
radii of the balls, so the position V1 can be found. In order for B1 to move towards
V1, it must be struck by the cue ball B0 at a specific position, call this X0.
Hence all the ball trajectories necessary for this shot to succeed have been determined: B0 must move to X0 , then B1 must move to V1 then to X1 , then B4 must
move to V4 then to X4 , then B5 must move to X5 , and finally B2 must move to P5 .
All of these are straight line paths. (The effects of ball spin are considered to be
negligible.)
Note that, when analysing the path of a ball rebounding off a cushion or another ball,
conservation of translational kinetic energy must be assumed. In practice, kinetic
energy is not perfectly conserved, due to energy losses through heat and sound. More
significantly, translational kinetic energy may be converted to rotational kinetic
energy during a collision, and vice versa.
For this reason, rebounds of these types were physically investigated with pool balls
on the pool table, in order to verify whether the actual behaviour of the balls would
approximate the behaviour predicted by the idealised analysis.
For rebounds off cushions, the prediction that the angle of incidence is equal to
the angle of reflection was found to be valid over a wide range of incident angles
and velocities. Therefore analysis of shots with cushion rebounds could be carried
out reasonably accurately using this prediction. However, in order to have the
robot playing such shots more accurately, it would be necessary to carry out further
investigation into the real world kinematics of the rebound.
The prediction of the behaviour of a ball following a rebound off another ball, namely
that the velocities of the balls following the collision are orthogonal, was found to be
less valid. For most of the trials of these collisions, the deviations from this predicted
behaviour were quite significant. Therefore it was decided that the brain program
would not consider shots involving collisions of this type (for example, [Bj ]Bk Pi and
Bj [Bk ]Pi ).
Accurate analysis of these shots is possible (for example, see ?), but is very complicated. Considering all the variables required to accurately model the kinematics
of collisions then verges on complete simulation (see Section ??).
5.3.2 Trajectory check
Once all of the ball paths in a shot have been geometrically determined, several
conditions must be checked in order to establish whether or not the shot will be
physically achievable on the table.
For each of the straight line paths of balls in the shot, whether there are any obstacles
blocking the trajectory of the ball must be checked.
Say ball 1 of radius r1 at A(xA , yA ) is to move (in a straight line path) to B(xB , yB ).
The program must check whether there is a clear path for the ball between these
two points. Consider any other ball, call it ball 2, on the table. Say it has radius r2
and is at P (xP , yP ). Then ball 2 will not interfere with the motion of ball 1 if and
only if it is never within a distance r1 + r2 of the path AB. This corresponds to P
lying outside the region shown in Figure ??.
Figure 5.1: Region of interference with the path of a ball.
First the program checks whether P lies within the rectangle A1 A2 B2 B1 . Let D be
the foot of the perpendicular from P dropped down to AB. Then D lies on the
same side of A as B iff
$$\overrightarrow{AP} \cdot \overrightarrow{AB} > 0. \tag{5.1}$$
Similarly, D lies on the same side of B as A iff
$$\overrightarrow{BP} \cdot \overrightarrow{BA} > 0. \tag{5.2}$$
These two conditions together are equivalent to D lying between A and B.
To lie within the rectangle, it is also necessary that $|PD| < r_1 + r_2$. The perpendicular distance from a point $(X, Y)$ to a line $ax + by + c = 0$ is given by
$$d = \frac{|aX + bY + c|}{\sqrt{a^2 + b^2}}. \tag{5.3}$$
The equation of line AB is
$$(y_A - y_B)x - (x_A - x_B)y + (x_A y_B - y_A x_B) = 0. \tag{5.4}$$
Thus requiring that $|PD| < r_1 + r_2$ is equivalent to the condition
$$\frac{|(y_A - y_B)x_P - (x_A - x_B)y_P + (x_A y_B - y_A x_B)|}{\sqrt{(y_A - y_B)^2 + (x_A - x_B)^2}} < r_1 + r_2, \tag{5.5}$$
which can be rewritten as
$$\frac{|(x_P - x_B)(y_A - y_B) - (x_A - x_B)(y_P - y_B)|}{\sqrt{(x_A - x_B)^2 + (y_A - y_B)^2}} < r_1 + r_2. \tag{5.6}$$
Thus P lies within the rectangle A1 A2 B2 B1 if and only if inequalities (??), (??) and
(??) all hold.
It is known that P does not lie within the semicircle with centre A (see Figure ??)
as otherwise balls 1 and 2 would initially be intersecting. Thus it remains to check
whether P lies within the semicircle with centre B. It is sufficient to check whether
P lies anywhere within the entire circle, for if it lies within the other half of the
circle, then it lies within the rectangle A1 A2 B2 B1 and thus ball 2 interferes with the
motion of ball 1 in any case.
P lies within the circle with centre B and radius $r_1 + r_2$ iff
$$\sqrt{(x_P - x_B)^2 + (y_P - y_B)^2} < r_1 + r_2. \tag{5.7}$$
Thus the complete criteria can be stated as follows: ball 2 interferes with the motion
of ball 1 if either inequalities (??), (??) and (??) all hold, or inequality (??) holds.
In cases where balls are extremely close to, but not within, the region of interference, a tiny error in the trajectory of the shot may cause an undesirable collision.
Therefore, in the program, the region of interference was enlarged by a small safety
margin to compensate for such errors. Thus in equations (??) and (??), the term
$r_1 + r_2$ was replaced by $r_1 + r_2 + \epsilon$, where the safety margin, $\epsilon$, was set to $\epsilon = 0.1r$.
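A minimal sketch of the complete interference test, including the safety margin, is given below; the position vectors and radii are assumed inputs, and the function name is hypothetical.

```matlab
% A minimal sketch of the interference test above, assuming 1x2 position
% vectors A and B (start and end of ball 1's path) and P (ball 2's centre),
% and radii r1, r2. The safety margin epsilon = 0.1r is included as in the text.
function blocked = interferes(A, B, P, r1, r2)
    margin = r1 + r2 + 0.1 * r1;               % r1 + r2 + epsilon
    inRect = dot(P - A, B - A) > 0 && ...      % (5.1): D on the same side of A as B
             dot(P - B, A - B) > 0 && ...      % (5.2): D on the same side of B as A
             abs((P(1) - B(1)) * (A(2) - B(2)) - (A(1) - B(1)) * (P(2) - B(2))) ...
                 / norm(A - B) < margin;       % (5.6): perpendicular distance test
    inCircle = norm(P - B) < margin;           % (5.7): within the circle about B
    blocked = inRect || inCircle;
end
```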
Continuing the example analysis from the previous section, each of the calculated
trajectories must be checked to determine whether the positions of the other still
stationary balls on the table will interfere with them. That is, it must be checked
whether any of B1 , B2 , . . . , B6 block the path of B0 to X0 . However, when considering
the path of B5 to X5 , only B2 and B6 are checked for interference, because the
positions of B0 , B1 , B3 and B4 are not known as they have moved.
In addition to checking whether there are balls blocking the trajectories, it must be
ensured that the paths are not blocked by the cushions. For example, if a ball, Bk ,
is very close to a cushion, and it is to be struck by a ball, Bj , to move in a direction
away from the cushion, then the position, Xj , at which Bj should strike Bk can be
calculated. However, it may be the case that Bj is not able to reach the position
Xj because it will hit a cushion first. To check whether this will occur, it is sufficient
to find the distance of Xj from the cushion. If this distance is smaller than r,
the radius of Bj , or else if Xj is on the wrong side of the cushion, then Bj cannot
reach Xj and the shot is impossible.
Another situation where a cushion may block the trajectory of a ball is when the ball
is heading towards a pocket. The four corner pockets of the table may be reached
by a ball at any point on the table, so cushions never block these paths. However,
there is a maximum angle at which balls may enter the two middle pockets. If the
entry angle is wider than this, then the cushion on the side of the pocket will block
the path of the ball. Brief experimentation yielded an estimate for this maximum
entry angle as 60◦ . Therefore, when a shot results in a ball being sunk in one of
these two middle pockets, then if the entry angle is greater than 60◦ , the shot is
impossible and must be taken off the list.
Now consider a ball, Bj , bouncing off a cushion, Ca , in order to reach a position, Xj .
The intermediate position, Vj , as Bj rebounds off the cushion is calculated geomet-
rically using reflection through the rebound line associated with [Ca ]. However, the
cushions are curved in regions near the pockets of the table, so that if the rebound
occurs within these regions, the path of the ball will not reflect through the same
line. This chiefly occurs around the pockets in the middle of the top and bottom
cushions. Therefore, the distance, c, from the pocket over which the cushion is curved
must be found. Then for every such rebound off the top and bottom cushions, the
distance of Vj from the pocket along the rebound line must be found. If this distance
is less than c, then Bj will not rebound off Ca to reach Xj as calculated, so the shot
must be eliminated from the list.
A similar effect may occur around the four corner pockets of the table, but these
regions are far smaller, and rebounds very close to these corner pockets are rare and
generally not useful. Therefore, it was unnecessary to perform the same check for
these pockets.
If the curvature of the cushion near the pockets were modelled in the program,
then it would still be possible to allow rebounds in these regions. However, the
modelling would have to be extremely detailed in order to predict such rebounds
accurately, and the calculations required to determine the intermediate rebound
position would be much more complex. Therefore, this was considered to be an
unnecessary complication.
When the cue ball has been sunk by the opponent and is replaced on the table,
there is an additional rule constraining the allowable shots, namely that the cue ball
must be struck in a “forwards” direction. That is, the component of the cue ball’s
velocity parallel to the longer sides of the table must be positive towards the far end
of the table. Therefore, when such a situation arises, the program must rule out
those shots where the cue ball is not struck forwards.
5.4 Choosing the best shot
In order to decide which of the list of possible shots is the “best” shot to attempt,
a numeric value needs to be associated with the shot, estimating how “good” the
shot is. Therefore, a merit function of a shot is defined, based on its difficulty and
its usefulness.
5.4.1 Difficulty of a shot
The difficulty of a shot is the probability that the desired goals of the shot will not
be achieved. Usually this can be considered to depend only upon the probability
that a ball will be sunk. However, in a situation where it is impossible to sink a
ball, the difficulty may be related to the probability that the cue ball will manage
to hit one of the object balls (and thus not foul).
The probability of sinking a ball can be found by calculating the allowable errors in
the paths of each ball in the shot path, working backwards as in Section ??. Using
the example from that section, the allowable deviations in the path of B2 which will
still result in the ball sinking in pocket P5 can be calculated (from the geometry
of the table). From this, the allowable range of positions X5 at which B2 may be
struck by B5 can be determined, and thus the allowable deviations in the path of B5
can be calculated, and so on, until the allowable deviations in the path of the cue
ball B0 are determined.
If the random and bias errors inherent in the vision and actuation systems were
known, the exact probability of propelling the cue ball at an angle within the required
tolerance to sink the target ball could be found. This would provide an excellent
indication of the difficulty of the shot.
However, much computation is required to perform these backward error analyses.
Additionally, it is difficult to quantify the errors present in the system. Besides,
all that is required is an indication of the relative probabilities of success for the
shots considered. Therefore, a simple function will be derived which provides an
indication of the probability of success of a shot.
The analysis is initially very similar to that performed in Section ??. Let B1 and
B2 be two balls of radii r1 and r2 , respectively, at positions T1 and T2 , respectively.
Consider a shot in which B2 is to be struck by B1 in a direction at angle θ2 with
respect to a fixed direction i. Then B1 must strike B2 at a position, X1 .
Let a = |T1 X1| and d = |T1 T2|. Let α = ∠T2 T1 X1 and β = ∠T1 T2 X1. Let θ1 be the angle of the vector from T1 to X1 with respect to i. All angles are signed, so that they can take on positive or negative values. A diagram of the setup is shown in Figure ??. Using the sine rule, it is clear that

\frac{\sin\alpha}{r_1 + r_2} = \frac{\sin\beta}{a}.    (5.8)
Now suppose that B1 is instead struck with an angle error of ∆θ1, so that it is propelled at an angle θ1′ = θ1 + ∆θ1 to i.

Assuming that B1 still collides with B2, denote by X1′ the position of B1 as this collision occurs. Let ∆θ2 be the resulting error in the trajectory angle of B2, so that the vector from X1′ to T2 is at an angle θ2′ = θ2 + ∆θ2 to i.

|X1′ T2| = r1 + r2, as these are the positions of the balls as they collide. Let a′ = |T1 X1′|, α′ = ∠T2 T1 X1′ = α + ∆θ1 and β′ = ∠T1 T2 X1′ = β + ∆θ2.

Using the sine rule again,

\frac{\sin\alpha'}{r_1 + r_2} = \frac{\sin\beta'}{a'},    (5.9)
Figure 5.2: Diagram for error magnification analysis.
or, written in terms of ∆θ1 and ∆θ2,

a' \sin(\alpha + \Delta\theta_1) = (r_1 + r_2) \sin(\beta + \Delta\theta_2).    (5.10)

If d ≫ r1 + r2, which is the case in most situations, it is reasonable to use the approximation a′ ≈ a, so that, using the sine addition formula,

a \sin\alpha \cos\Delta\theta_1 + a \cos\alpha \sin\Delta\theta_1 \approx (r_1 + r_2) \sin\beta \cos\Delta\theta_2 + (r_1 + r_2) \cos\beta \sin\Delta\theta_2.    (5.11)

Since ∆θ1 and ∆θ2 are small,

\sin\Delta\theta_1 \approx \Delta\theta_1, \quad \cos\Delta\theta_1 \approx 1, \quad \sin\Delta\theta_2 \approx \Delta\theta_2, \quad \cos\Delta\theta_2 \approx 1.    (5.12)

Therefore,

a \sin\alpha + (a \cos\alpha)\Delta\theta_1 \approx (r_1 + r_2) \sin\beta + ((r_1 + r_2) \cos\beta)\Delta\theta_2.    (5.13)

Using equation ??, the first terms on each side are equal, so

(a \cos\alpha)\Delta\theta_1 \approx ((r_1 + r_2) \cos\beta)\Delta\theta_2.    (5.14)
Finally,

\frac{\Delta\theta_1}{\Delta\theta_2} \approx \frac{(r_1 + r_2)\cos\beta}{a\cos\alpha}.    (5.15)
Thus an expression relating to the magnification of the angle error due to a collision between balls has been obtained.

It is interesting to note that the value of ∆θ1/∆θ2 is not always less than 1. If a is very small, then ∆θ1/∆θ2 can be greater than 1, so that ∆θ1 > ∆θ2. In other words, the collision is not causing a magnification of angle error at all, but rather it is diminishing it. This is somewhat counter-intuitive, as one would expect that extra collisions necessarily cause an increase in error. This formula shows that, if a ball travels a very small distance in a shot, then the shot can be played more accurately than the same shot without this ball in the shot path.
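To illustrate the scale of this effect, the magnification factor of equation (5.15) can be evaluated for some assumed dimensions (26 mm ball radii and a 5 degree aiming offset; the numbers are illustrative only):

    % Illustrative evaluation of equation (5.15)
    r1 = 0.026; r2 = 0.026;          % assumed ball radii (m)
    alpha = 5 * pi/180;              % assumed aiming offset at T1 (rad)
    for a = [0.50 0.03]              % B1 travels 500 mm, then only 30 mm
        beta  = asin(a * sin(alpha) / (r1 + r2));          % from equation (5.8)
        ratio = (r1 + r2) * cos(beta) / (a * cos(alpha));  % equation (5.15)
        fprintf('a = %.2f m: dtheta1/dtheta2 = %.2f\n', a, ratio);
    end

For a = 0.50 m the ratio is about 0.06, so the collision magnifies the trajectory error by a factor of roughly 18; for a = 0.03 m the ratio exceeds 1 and the collision actually diminishes the error, as discussed above.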
Now let a1 be the distance travelled by B1 before it collides with B2 , so a1 = a. Let
a2 be the distance travelled by B2 before it reaches a pocket at P . Let δmax be the
maximum distance by which the centre of B2 may miss the point P and still sink in
the pocket.
Then ∆θ2max, the maximum allowable error in the trajectory of B2, is given by

\Delta\theta_{2max} \approx \frac{\delta_{max}}{a_2}.    (5.16)

Then, using (??), ∆θ1max, the maximum allowable error in the trajectory of B1, is given by

\Delta\theta_{1max} = \frac{\Delta\theta_{1max}}{\Delta\theta_{2max}}\,\Delta\theta_{2max} \approx \frac{(r_1 + r_2)\cos\beta}{a_1 a_2 \cos\alpha}\,\delta_{max}.    (5.17)
Now to state this result in a more applicable form, consider a shot Bj Pi . Naming
all variables using the previous conventions, the maximum allowable error in the
trajectory of the cue ball, B0 , is given by
\Delta\theta_{0max} \approx \frac{(r_c + r)\cos\beta_0}{a_0 a_j \cos\alpha_0}\,\delta_{max},    (5.18)
where δmax is the maximum distance by which the centre of Bj may miss the point
representing Pi and still sink. It will be assumed from this point onwards that δmax is
a constant distance, independent of the pocket and of the initial position of the ball
to be sunk. This is not true, since greater or smaller deviations may be permissible
depending on which pocket is targeted, and on the entry angle of the ball. However,
the effect of the variation of δmax on the difficulty function to be defined has not
been investigated.
For shots of the type Bj Pi , define the dimensionless function ℘(Bj Pi ) to be
\wp(B_j P_i) = \frac{(2r)^2}{a_0 a_j} \cdot \frac{\cos\beta_0}{\cos\alpha_0},    (5.19)

so that (making the valid assumption that rc ≈ r)

\Delta\theta_{0max} \approx \wp(B_j P_i)\,\frac{\delta_{max}}{2r}.    (5.20)
It can be seen that if ℘(Bj Pi ) is greater, then the maximum allowable error in the
trajectory of the cue ball is greater, so the probability of successfully carrying out
the shot is greater. In other words, the function ℘ provides an indication of the
shot’s probability of success.
Now consider a shot Bj Bk Pi . The maximum allowable error in the trajectory of the
cue ball, B0 , is given by
\Delta\theta_{0max} = \frac{\Delta\theta_{0max}}{\Delta\theta_{jmax}}\,\frac{\Delta\theta_{jmax}}{\Delta\theta_{kmax}}\,\Delta\theta_{kmax} \approx \frac{(r_c + r)(2r)\cos\beta_0 \cos\beta_j}{a_0 a_j a_k \cos\alpha_0 \cos\alpha_j}\,\delta_{max}.    (5.21)

Therefore we define ℘(Bj Bk Pi) to be

\wp(B_j B_k P_i) = \frac{(2r)^3}{a_0 a_j a_k} \cdot \frac{\cos\beta_0}{\cos\alpha_0} \cdot \frac{\cos\beta_j}{\cos\alpha_j},    (5.22)

so that

\Delta\theta_{0max} \approx \wp(B_j B_k P_i)\,\frac{\delta_{max}}{2r}.    (5.23)
It can be seen that the function ℘ is defined in such a way so as to provide a
comparison of the difficulty of shots, even if they are of different types.
Similarly to the previous example, for a shot Bj Bk Bl Pi , the value of the function is
\wp(B_j B_k B_l P_i) = \frac{(2r)^4}{a_0 a_j a_k a_l} \cdot \frac{\cos\beta_0}{\cos\alpha_0} \cdot \frac{\cos\beta_j}{\cos\alpha_j} \cdot \frac{\cos\beta_k}{\cos\alpha_k}.    (5.24)
In order to find the value of the function ℘ for shots involving cushion shots, it
is first of all assumed that the angle of incidence is exactly equal to the angle
of reflection. Therefore, the shot may be considered equivalent to an imaginary
shot where all of the positions of the balls involved in the shot before the cushion
rebound are reflected through the rebound line. For example, the shot B1 [C4 ]P2
may be considered equivalent to a shot B1 P2 where the cue ball and B1 have been
reflected through the rebound line associated with [C4 ]. The function ℘ can now
be calculated for the imaginary equivalent shot, and this value can be taken as the
value of the function ℘ for the original cushion rebound shot.
However, because the angle of incidence is not exactly equal to the angle of reflection
in practice, there is likely to be magnification of trajectory error due to the cushion
rebound. Therefore, an empirical method of scaling the function ℘ appropriately
is to multiply the value of the function by λn , where n is the number of cushion
rebounds in the shot, and λ < 1 is an appropriate penalty factor.
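A sketch of how the resulting difficulty score might be evaluated for a simple shot Bj Pi is given below, combining equation (5.19) with the cushion rebound penalty λⁿ. The function name and arguments are illustrative.

    % Sketch: difficulty indicator for a shot Bj Pi with n cushion rebounds
    function score = shotDifficulty(a0, aj, alpha0, beta0, r, nCushions, lambda)
        % a0, aj        - distances travelled by the cue ball and by Bj (m)
        % alpha0, beta0 - angles (rad) as defined in Figure 5.2
        % r             - ball radius; lambda < 1 penalises cushion rebounds
        score = (2*r)^2 / (a0 * aj) * cos(beta0) / cos(alpha0);   % equation (5.19)
        score = score * lambda^nCushions;
    end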
Apart from the geometry of a shot, another factor affecting the probability of the
success is the force with which the cue ball is struck. Ball spin will cause a ball
to deviate from the path calculated assuming conservation of translational kinetic
energy (which is usually invalid when ball spin is non-zero). However, the greater
the velocity of the ball, the smaller the error introduced by the ball spin. Thus, in
general, striking the cue ball with a greater force reduces the extent to which the
shot deviates from the calculated shot path.
Of course, the cue ball must be hit with enough force so that enough momentum is
transferred to the final ball in the shot to reach the pocket. For a shot Bj Pi , the
minimum initial velocity, umin , of the cue ball which will result in the target ball
reaching the pocket is given by (?)
u_{min}^2 = \frac{a_j}{\kappa \cos^2(\theta_j - \theta_0)} + 2 g \mu_r a_0,    (5.25)

where \kappa = \frac{1}{98g}\left(\frac{24}{\mu_s} + \frac{25}{\mu_r}\right), g is gravitational acceleration, and µr and µs are the rolling and sliding friction constants of the pool balls on the felt. This can be rearranged to yield

u_{min}^2 = 2 g \mu_r \left( a_0 + \frac{1}{2 g \mu_r \kappa} \cdot \frac{a_j}{\cos^2(\theta_j - \theta_0)} \right).    (5.26)

Using the values µr = 0.01 and µs = 0.2 (?), it can be shown that u²min is proportional to

a_0 + \frac{1.87}{\cos^2(\theta_j - \theta_0)}\, a_j.    (5.27)
This expression was used to determine how hard the cue ball should be struck.
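A sketch of how this expression might be mapped onto the three solenoid power levels is shown below. The cut-off values are assumptions for illustration only; the actual thresholds would be set during commissioning.

    % Sketch: choosing a power level from expression (5.27) (assumed thresholds)
    function level = powerLevel(a0, aj, thetaj, theta0)
        s = a0 + 1.87 * aj / cos(thetaj - theta0)^2;   % proportional to umin^2
        if s < 0.8
            level = 1;                                 % soft
        elseif s < 1.8
            level = 2;                                 % medium
        else
            level = 3;                                 % hard
        end
    end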
5.4.2
Usefulness of a shot
The following attributes of a shot increase its usefulness, to varying degrees:
• a target ball (or more than one target ball) is sunk,
• the shot is not a foul,
• the robot’s target balls are positioned in regions from which they are easier to
sink,
• the opponent’s target balls are positioned in regions from which they are more
difficult to sink,
• the next turn belongs to the robot, and the cue ball is left in a good position
for the robot,
• the next turn belongs to the opponent, and the cue ball is left in a bad position
for the opponent.
So far, the only shots considered are those which result in a target ball being sunk,
so they are all equally useful in this respect. In order to assess the other attributes
of a shot, it is necessary to be able to predict, to some degree, the configuration of
the table following the shot.
This can be achieved with a complete simulation of the shot (see Section ??). Failing
this, empirical methods may be used to estimate the approximate positions of balls
following a shot, similarly to how humans would predict the outcome of a shot. For
example, if the shot B1 P1 is being considered, then it may be guessed that, following
the shot, the ball B1 will most likely be sunk and therefore no longer on the table.
From the level of force used to strike the cue ball, and the distances involved, the
velocity of the cue ball immediately before the collision can be estimated, and therefore the velocity after the collision can also be estimated. Finally, the assumption
that the cue ball does not hit any more balls could be made, so that the final resting
position of the cue ball could be estimated.
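For example, a very rough estimate of the cue ball's resting position could be obtained as sketched below, assuming an ideal equal-mass collision (the cue ball retains only the velocity component perpendicular to the line of centres) and constant rolling friction, and ignoring spin and any further collisions. The numbers are illustrative.

    % Sketch: rough stopping distance of the cue ball after the collision
    g = 9.81; mu_r = 0.01;             % rolling friction constant, as in Section 5.4.1
    u1 = 1.2;                          % assumed cue ball speed just before impact (m/s)
    cutAngle = 30 * pi/180;            % assumed angle between cue ball path and line of centres
    vAfter = u1 * sin(cutAngle);       % component retained by the cue ball
    stopDist = vAfter^2 / (2 * g * mu_r);   % distance rolled before friction stops it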
If the configuration of the table resulting from a shot can be predicted in this way,
then the resulting configuration can be assessed according to the criteria listed above
in order to evaluate how useful it is. For example, if one of the robot’s target balls is
moved very close to the middle of one of the shorter cushions, then this ball is in a
very difficult position to be sunk, which makes the configuration less useful.
Algorithms to carry out an empirical prediction and then evaluation of the table
configuration, as described above, have not been designed in detail.
5.4.3
Overall merit function
The overall merit of a shot is related to both the difficulty and the usefulness of the
shot. If a function were defined to assign a numerical value to the usefulness of a
shot, as has been done for difficulty, then the two functions relating to difficulty and
usefulness could be combined to produce an overall merit function, for example, by
multiplying the two functions. In order for this to work, both functions must be
defined in such a way that they take on non-negative values only, a greater value
indicating a “better” (that is, less difficult, or more useful) shot.
Once the overall merit function has been defined, the value of the function for all of
the possible shots found by the brain program should be calculated. Then the shot
with the highest value (that is, the greatest merit) should be chosen as the best shot
to play.
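In outline, the selection step would then be as simple as the following sketch, where difficulty and usefulness are assumed to be vectors with one non-negative entry per candidate shot (larger values indicating a less difficult or more useful shot):

    % Sketch: combining the two scores and selecting the best shot
    merit = difficulty .* usefulness;          % element-wise product
    [bestMerit, bestIdx] = max(merit);
    bestShot = shotList(bestIdx);              % shotList is an illustrative array of shots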
Defining the merit as a function of two numbers associated with the shot, the difficulty and the usefulness, may not be the best approach to achieve the most intelligent
play. A shot may have multiple goals, in which case it is appropriate to consider the
probabilities of each combination of goals being achieved. For example, say shot A
has a very small probability of sinking a ball, but in the event of failure, will leave
the table in a configuration which makes it very difficult for the opponent to sink
his final ball, whereas shot B has a slightly higher probability of sinking a ball, but
in the event of failure, will leave the opponent with an easy shot to win the game.
Then the value of the merit function for shot A should be higher than that for shot
B, despite the fact that shot B has a lower difficulty than shot A for the singular
goal of sinking a ball. Thus evaluating difficulty and usefulness separately produces
a skewed viewpoint of a shot, under some circumstances.
Clearly the development of an intelligent, robust merit function is a complicated task,
chiefly due to the need to be able to quantify what is usually a vague, subjective
estimation of a shot’s usefulness.
5.5
Complete simulation
In many cases, simply analysing the collisions involved in a shot does not provide
enough information to judge the possibility, difficulty and usefulness of the shot.
To gain all the information about a shot, it would be necessary to simulate it in
its entirety. This involves calculating, at discrete time intervals, the positions of
all balls on the table, as well as their translational and rotational velocities. Each
successive time step is computed by analysing the forces acting on each of the balls
(including frictional forces and collision forces, from other balls or cushions) and
updating all of the information.
This is computationally feasible, as can be seen from the many programs available
which simulate pool and snooker. However, the complexity of such a program would
require a lot of programming time. Even if code from an existing simulation could be
adapted, a great deal of work would need to be done in order to update the program
from an ideal simulation to one incorporating all real world effects. It can be seen in
? and ? that the physics of pool becomes extremely complex if all physical effects
are to be accounted for, which would be necessary in order for the simulation to
provide any useful information to the robot beyond what is already known without
the simulation.
Additionally, it would be necessary to ensure that the kinematics of the simulation
matched the kinematics of the actual table used by the robot extremely closely,
which would require a large amount of time spent in testing. One of the most
difficult aspects would be modelling the variations in smoothness of the felt, and the
unevenness of the surface, both of which have significant effect upon the kinematics
of the balls.
Finally, the extra information which could be provided by such a simulation would
only be very useful in analysing quite complicated shots, and due to the difficulty of
such shots, they should not be often attempted by the robot in any case. Thus, while
the possession of such a simulation would enable the robot to play pool extremely
well, it has been decided that it is far beyond the scope of this project.
5.6
Final brain software
Once the positions of all the balls on the table are found, the shot selection software
begins by compiling a list of shots. Since the robot has no way of detecting when
balls have been sunk, the software must be told if the table is still open, or else
whether its target balls are the red set or the yellow set. Once one of the two
coloured sets is chosen, the software remembers the choice for the remainder of that
game.
The shot path types which the software is programmed to consider are Bj Pi ,
[Ca ]Bj Pi , Bj [Ca ]Pi , Bj Bk Pi , [Ca ]Bj Bk Pi , Bj [Ca ]Bk Pi , Bj Bk [Ca ]Pi and Bj Bk Bl Pi . The
trajectories of the shots on the table are calculated as the list is created.
The software was not programmed to consider the effects of varying the force imparted to the cue ball. Therefore, only a single shot was created for each shot path,
rather than a number of shots with different shot strengths.
The overall merit function of the shots was based only upon the difficulty of the
shots. No software was written to judge the usefulness of the shots, so all shots
were regarded as equally useful in that they resulted in a target ball being sunk.
So the merit function implemented was equal to the difficulty function described in
Section ??, with a cushion rebound penalty of λ = 0.5.
Since the calculation of the merit function of a shot is much faster than the possibility
analysis, instead of performing the possibility analysis on all of the shots listed, the
merit function of all the shots is first calculated. The program then sorts the shots
in order of merit, and, starting with the highest scoring shot, performs possibility
analyses on the shots in this order, until a suitable number of possible shots is found.
This means that the possibility analysis is only carried out on potentially good shots,
saving much time.
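The ordering described above can be sketched as follows, where merit is a vector of merit values, shotList holds the candidate shots, isShotPossible represents the (slower) possibility analysis and nWanted is the number of possible shots required; all of these names are illustrative.

    % Sketch: rank by merit first, then run the possibility analysis in that order
    [ignore, order] = sort(-merit);            % indices of shots, largest merit first
    possible = [];
    for k = order
        if isShotPossible(shotList(k))         % hypothetical possibility check
            possible(end+1) = k;
            if length(possible) >= nWanted
                break
            end
        end
    end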
Once a shot has been chosen for play, the software must decide the level of power
with which to strike the cue ball. The expression (??) was utilised to determine
which of the three power levels to use.
In the rare case that no possible shots are found, the robot is programmed to hit
the cue ball towards the centre of the table with high power.
The Matlab code for the program is listed in Appendix ??.
5.7
Summary
In order to determine the best shot to play, the shot selection software first systematically creates a list of shots up to a high level of complexity. A merit score
is assigned to each of these shots, based on the probability that the shot can be
executed successfully, resulting in a target ball being sunk. Finally, possibility analysis is performed on the shots with the highest merit scores, until a possible shot is
found.
6 The arm
6.1
Introduction
The concept design process for this project was based on the process outlined in ?.
By basing the methods employed in fulfilling the design requirements of the project
on a legitimate foundation, the likelihood of redesigning at a later stage was reduced.
This method also encourages the integration of lateral thinking and logic into the
design process in order to ensure that the final design achieves the goals in the most
efficient manner. Another benefit of following the appropriate design process was
to ensure that the solution that was finally derived had been subjected to rigorous
scrutiny that expedited the construction and testing phases.
As outlined in the literature review, there were a number of important facets to take
into consideration in the design of the actuation subsystem, or “arm”, of the robot.
In order to ensure that these considerations are taken into account (so that the final
design reflects the prerequisites of the system) a detailed concept design process is
necessary.
This chapter outlines the processes followed, and details the final design for the arm.
6.2
Specification
The purpose of the robotic actuation system is to convert the theoretical output
from the brain into mechanical action. It is required to play the desired shot as
accurately as possible. In order to achieve these goals, appropriate functions and
subfunctions were defined.
6.3
Overall function and subfunctions
The overall function of the actuation system is defined as follows:
To propel the cue ball in a given direction, at a given speed, from a given point on
the pool table.
The subfunctions utilised to achieve this overall function are:
1. Energy transfer to cue ball — what type of actuator will be utilised to drive
the cue tip into contact with the cue ball whilst imparting a known and desired
force?
2. Positioning of cue: traverse mechanism and drive mechanism — how will the
cue and actuation system arrive at the correct position in order to transfer
energy to the cue ball?
3. Control of actuation system — how will the actuation system overall be controlled, and how will it interface with the other systems of the project?
6.4
Solution principles to subfunctions
The solution principles for the subfunctions were derived from:
• a general investigation of the various options available as solutions for the
required subfunctions,
• previous examples of this technology as outlined in the literature review,
• a general focus on lateral thinking and brainstorming.
At this level of the design process, no potential solution principle is ruled out on
any grounds. A brief description of each solution principle is also provided.
1. Energy transfer to cue ball:
(a) Long stroke/fast response solenoid:
Similar to those found in starter motors in cars, or in any number of
industrial or appliance applications, such as washing machines. Solenoids
actuate a rotor in the centre of the coil, which is accelerated when a
current is applied to the coils of the solenoid and a magnetic field is
generated.
(b) Pneumatics:
Such as those utilised in the University of Bristol model, with a compressor mounted under the table providing the necessary actuation force to
the cue tip.
(c) Spinning contacts:
Could be constructed as a rotating cylinder that would be lowered onto
the cue ball and impart a rotational momentum to the cue ball, causing
it to “skid” across the table.
(d) Paddle:
Essentially, the actuator would rotate a piece of material like a paddle
until it made contact with the cue ball. The paddle could be oriented to
rotate in either the vertical or horizontal plane.
(e) Mushroom paddle:
A modification of the paddle concept, similar to that utilised in the University of Waterloo model (refer to Section ??).
(f) Combustion mechanism:
A modification of the solenoid actuation system, with the acceleration
provided by a combustion/pressure vessel system instead of an electromagnetic method.
(g) Spring/motor trigger model with feedback and force transducers:
An actuator attached to a spring, which would impart kinetic energy to
the actuator after being tensioned by a motor pulling the spring taut.
2. Traverse mechanism for positioning of cue:
(a) Two link serial robot (SCARA):
SCARAs (Selectively Compliant Assembly Robot Arms) are two-link serial robots consisting of three parallel revolute joints and one prismatic
joint (?). The first three joints allow the end-effector to move and orient in
a plane, and the fourth joint provides movement normal to the plane.
SCARAs can move very fast, provided the actuators are large, and are
best suited to planar tasks. A configuration of such a robot is shown in
Figure ??.
Figure 6.1: SCARA manipulator. From ?.
(b) Cartesian robot with the option of one or a combination of the following
pulley systems:
i. ball screw,
ii. wire pulley,
iii. rack and pinion,
iv. belt drive,
v. tooth belt.
A Cartesian robot consists simply of three mutually perpendicular prismatic joints corresponding to the x, y, and z directions (?). Cartesian
robots have very stiff structures, and hence large robots can be made
using this configuration. A configuration of such a robot is shown in
Figure ??.
(c) Articulated manipulator:
An articulated manipulator consists of two “shoulder” joints and one
“elbow” (?). The “wrist” consists of two or three joints to control the
pitch, yaw and roll of the end-effector. These robots have less stiffness
Figure 6.2: Cartesian manipulator. From ?.
than Cartesian robots but provide the least intrusion of the manipulator
structure into the workspace. A configuration of such a robot is shown
in Figure ??.
Figure 6.3: Articulated manipulator. From ?.
Drive mechanism for positioning of cue:
(a) Stepper motor:
Stepper motors are essentially electric motors without commutators. All
of the windings of a stepper motor are part of the stator and the rotor
is usually a permanent magnet. A typical stepper motor is shown in
Figure ??.
Figure 6.4: Typical stepper motor, and schematic. From ?.
Stepper motors are controlled through external circuitry that allows for
simple interface with higher level programming such as that found in a
PC. Stepper motors have the ability, due to the physical location of the
stator and rotors, to move in discrete steps. These steps correspond to
an incremental change in the angle of rotation of the rotor. From these
incremental steps it is possible to determine a corresponding translational
change. Stepper motors allow for simple open loop control.
Assuming that the initial rotor orientation (which can be easily determined by resetting the motor before each actuation) is known and provided that the stator is not overloaded, causing errors in the step size, all
that is required to determine the final output of the system is to count the
steps taken. If they correspond with the desired input, then the motor
has translated the system the required distance or angle.
To add feedback, all that is necessary is to add a sensor on the stator
that could measure the rotation of the rotor with respect to its initial
orientation.
Stepper motors are extremely useful in robotic applications that are
static, or are being utilised in low load dynamic conditions.
(b) Servo motor:
A servo motor is a DC/AC or brushless DC motor combined with a
position sensor (?). It has a three wire input that controls the operation.
The amount of rotation of a servo motor is controlled by the duration of
the pulse of voltage supplied to the control wire. This method of control is
known as pulse width modulation, with the length of the pulse determined
by the control circuitry. In this case, the length of the pulse determines
the direction in which the servo motor should turn. For example, the
Futaba S-148 motor operates on a 1.5 ms pulse width. The motor expects
a pulse every 20 ms. A 1.5 ms pulse makes the motor return to its neutral
orientation, but if the pulse is less than 1.5 ms the motor will turn the
shaft in one direction, and if the pulse is longer than 1.5 ms the shaft will
turn in the opposite direction.
The control circuit feedback is given from a potentiometer attached to
the rotor.
Servo motors are extremely good in robotic applications as they are capable of supplying the high power, torque and speed necessary in a
large number of applications.
(c) Pneumatics:
Pneumatics run on a compressed air system with forced changes in pressure in the cylinders of the actuation system resulting in a mechanical
output. Pneumatics are commonly used in robotic applications as they
are clean, relatively compact and fairly powerful. The only restriction
with pneumatics, particularly in applications involving a high degree of
motion, is the large amount of piping required. This may be prohibitive
in application for this project, as a relatively large amount of this piping
would need to be added to achieve the desired range of motion. The
associated cost of purchasing, installing and running an air-compressor
to provide the necessary pressure is also a consideration.
As highlighted earlier, the robot built by students from the University
of Bristol utilised this technology. However it should be noted that in
that instance the industrial robot that was purchased for the project
already had the pneumatic system incorporated into it, hence reducing
the amount of design and testing required.
3. Control of actuation system:
(a) Direct PC control:
This would be achieved through an electromechanical interface between
the computer output devices and the control circuits of the motor. This
allows for higher level programming in PC based languages. It also allows
for easier conversion of data from the output of the control algorithms
responsible for shot generation into the required angle of rotation, by
removing the necessity of changing the input requirements on the control
circuits more than once.
Direct PC control is limited only by the effectiveness and reliability of
the computer.
(b) Programmable logic controller:
PLCs operate by downloading an instruction set for a given operation
into a microcontroller and executing the necessary commands from a
machine level. The main problems with PLCs are that they are inflexible
and time consuming to program for complex functions. They have the
added complication of having to first take an output from the shot control
algorithms, then convert it into a format that the PLC can understand,
download it to the PLC, and convert it again into control signals
for the motors and sensors. This is an inefficient manner in which to
program and removes the flexibility of higher level programming.
(c) Control/driver system:
This system uses a specific hardware drive system for the stepper motors
that allows direct high level control from a PC. It is essentially a “black
box” approach, where a vector signal is input and a series of steps is
output. This improves the flexibility of the control options, which will in
turn allow for greater accuracy. More specifically, these types of control
mechanisms allow input of a vector from the high level control software
that is converted into a corresponding series of steps.
6.5
Concept solution
Three concept solutions for the mechanical system were selected by taking different
combinations of the solutions to the subfunctions described above. These three
were chosen on the basis that they were the solutions most likely to meet the design
requirements adequately.
Concept A consisted of
• a long stroke/fast response solenoid as the actuator for the cue tip,
• a two stepper motor drive system operating on a modified Cartesian traverse,
• a C-section extrusion for the rails of the traverse,
• a tooth belt drive with appropriate gear train,
• a stepper motor that controls the angular location of the cue tip,
• driver/PC control of the motors and solenoid.
Concept B consisted of
• a long stroke/fast response solenoid as the actuator for the cue tip,
• a servo motor drive system with feedback control for the traverse,
• a C-section/cylindrical shaft for the rails of the traverse,
• a Cartesian robot with a ball screw drive,
• control/drive system for interface with the brain.
Concept C consisted of
• a pneumatic system for the cue tip actuator,
• a Cartesian robot with a wire pulley drive system actuated by stepper motors,
• PLC control of motors with input vectors obtained from the computer.
These were then assessed against a set of weighted criteria in order to determine the
final concept design for the actuation system.
These criteria were:
• Repeatability: How reliable is the actuation going to be? How likely is it that
there will be considerable failings in the construction, testing and operational
phase?
• Accuracy of location: To what degree is the actuation system able to accurately
respond to the desired location? Is the location consistent with the allowable
error margins defined for the project?
• Cost: Is the cost of the actuation system prohibitive?
• Ease of manufacture: Will the manufacturing cost in both time and resources
act to delay the overall progress of the project, and what level of technical
complexity is present in the design?
• Weight: Does the weight of the actuation affect the overall performance of the
project by increasing the response time or affecting the accuracy of location?
• Size: Do the size and physical dimensions impact on the ability of the human
opponent to play the game without interference from the actuation system?
• Speed: Can the actuation system adequately respond in a real time manner
at a human speed in order to take a shot?
• Adaptability: How easy would it be to relocate the robot onto a pool table of
different dimensions?
The weightings for each of these criteria, ranging from 1 (least important) to 10 (most important), were chosen on the basis of ensuring that the overall goals of the project were achieved as efficiently and accurately as possible.
The weightings can be seen in Table ??.
The three concepts were then tabulated, and a rating awarded, depending on how
well each concept met the criteria. These ratings were then multiplied by the individual criteria weightings, and the resultant products were summed to give an
Table 6.1: Selection criteria weightings.

Criteria                  Weighting
Repeatability                 10
Accuracy of location          10
Cost                           7
Safety                         5
Ease of manufacture            5
Weight                         4
Size                           6
Speed                          8
Adaptability                   3
overall score for each concept. The concept with the highest score would then be
chosen as the concept to be designed more comprehensively.
The results are shown in Table ??.
Based on the results shown in Table ??, Concept A was selected as the concept
design that would be analysed and form the basis for the actuation system.
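The weighted scoring reduces to a simple weighted sum, as the short check below shows for Concept A (the vectors follow the row order of the criteria table):

    % Check of the weighted total for Concept A
    weights  = [10 10 7 5 5 4 6 8 3];          % criteria weightings from Table 6.1
    ratingsA = [ 9  9 8 8 8 7 7 6 6];          % Concept A ratings
    totalA   = sum(weights .* ratingsA);       % gives 452, matching Table 6.2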
6.6
Analysis of design
Once the initial concept design was selected, the next phase in the process was to
more comprehensively design the various components and test the mechanisms to
ensure that the most efficient and accurate system had been selected.
Using the broad concept solutions outlined above, a more detailed description of the
actuation system was devised. A description of the processes undertaken to design
each subfunction is discussed in this section.
Table 6.2: Concept design criteria evaluation.

                             Concept A            Concept B            Concept C
Criteria                   rating  weighted     rating  weighted     rating  weighted
                                   subtotal             subtotal             subtotal
Repeatability (10)            9       90           9       90           7       70
Accuracy of location (10)     9       90           9       90           7       70
Cost (7)                      8       56           6       42           4       28
Safety (5)                    8       40           8       40           7       35
Ease of manufacture (5)       8       40           6       30           3       15
Weight (4)                    7       28           5       20           5       20
Size (6)                      7       42           7       42           7       42
Speed (8)                     6       48           3       24           6       48
Adaptability (3)              6       18           4       12           4       12
Total:                               452                  390                  340

6.6.1
Energy transfer to cue ball
The selected component for this was a long-stroke solenoid. Solenoids are available
in any number of different forms. Two different types were investigated for potential
use in the project: a solenoid removed from the starter motor of a car, and one from
a washing machine.
These were chosen for their relatively small dimensions and because the rotors both
allowed modifications to the ends to attach a cue tip. The solenoids are shown in
Figures ?? and ??.
Figure 6.5: Solenoid taken from the starter motor of a car.
Figure 6.6: Solenoid taken from a washing machine.
One of the major considerations in selecting which solenoid to use was the ability
to accurately mimic the types of forces that a human being uses when playing pool.
It was therefore necessary to obtain an indication of the range of values of force
found in a typical 8-Ball Game. For a detailed description of the range of values
needed and the method by which they were obtained, refer to Appendix ??. The
range of forces derived was between 80 N and 1300 N.
A circuit was designed to supply variable power levels to the solenoid, corresponding
to soft, medium and hard shots. This is discussed further in Section ??.
The car starter motor that was used in the design process was from a Holden VN
Commodore. It was designed to be run from a 12 V, 20 A car battery. It was not
possible to conduct quantitative force tests on the starter motor solenoid due to the nature of its
construction and the task for which it was originally designed.
The purpose of a car starter motor is to provide a mechanical action through a
pivot pushing a fly wheel onto the gears. When the solenoid is not actuated, a
spring inside the rotor pushes the rotor out of the stator. When a current is applied
to the solenoid from the battery, the rotor is pulled into the solenoid, with active
resistance from the spring extending the length of the stroke. This action causes the
lever arm of the fly wheel to pivot about a fulcrum, initiating the movement of the
flywheel.
Starter motor solenoids are low voltage, high current devices that operate for extended periods of time. A starter motor solenoid may be activated for a number
of seconds, compared to that of a washing machine that may only be active for
up to half a second. As a result there is no requirement for rapid response, which
manifests as a lower acceleration applied to the solenoid.
In a starter motor, the rapidity of its response is less important than whether the
solenoid can remain fixed in place during the ignition process. For this reason, the
use of a solenoid from a car was deemed to be inappropriate for use in this project, as
it would not be able to supply the required force and, more importantly, acceleration
to the cue tip.
Brief experiments were conducted with the starter motor solenoid, investigating
its general performance. These experiments involved attaching a variable voltage
and current supply to the solenoid, and qualitatively noting the response to high
current. The starter motor solenoid showed a noticeably slower response and lower
acceleration than the washing machine solenoid. It was deemed unnecessary to
carry out a quantitative experiment, owing to the obvious deficiencies present in the
performance of the starter motor solenoid.
6.6.2
Drive mechanism and traverse mechanism
This section will look at the design structure of the drive and traverse mechanism.
As a game of pool is essentially planar, the choice of a SCARA design seemed
initially obvious. However, a SCARA has not in fact been used, due to the size of
the robot required to reach the entire pool table. Using a robot of this type would
inconvenience a human player making a shot in the same corner that the robot is
mounted. The final design is actually a modified Cartesian robot with two prismatic
joints corresponding to the x and y axes and a third revolute joint to control the z
angle of the cue stick. Cartesian robots have a high level of rigidity, purely due to
their configuration. The final design will use only three stepper motors, whilst still
allowing a good range of shots to be played. The robot will be able to hit the cue
ball at any (x, y) location in any direction requested by the logic algorithm. Although
this robot is more intrusive over the whole table than a SCARA, it has been designed
to minimise the impact on a human player’s game. The use of high-powered stepper
motors provides the potential for swift movement so that no considerable delays will
occur during a game. The robot was designed for ease of assembly, whilst being
moderately cheap to manufacture.
As outlined in Section ??, there are numerous solutions available for the type of
drive and traverse mechanisms. Amongst these possible solutions, two appeared
likely to meet the design criteria more effectively than the others. These were the
C-section rail and the bearing/shaft combination.
C-section rails are commonly found in drafting boards and other low load applications.
C- or I-section rails have been selected over a shaft/bearing design primarily because they can be supported mid-span without impairing the motion of the carriage
supporting the cross traverse. This will increase the stiffness of the whole structure
markedly. The same carriage will be mounted inside or around the rail for better
support and rigidity, as then the carriage will not be able to twist around the rail
as could occur with the shaft/bearing configuration.
Small wheels inside or around the rail allow for smooth movement, decreasing the
effects of friction on the motion of the carriage supporting the cross traverse. C- or
I- section rails also allow for the installation of brakes, if required, at a later stage.
As these rails are easily supported from underneath, the carriage (and whole cross
traverse) can easily be removed from the end of the rail if required for modifications
and adjustments.
This configuration has a high level of ease of manufacture and lower cost than the
shaft/bearing configuration considered. A diagram of the configuration is shown in
Figure ??.
The rail/bearing configuration does not allow for mid-span support, as the bearing
surrounds the rail. This could cause the shaft to bow in the middle, since the cross
traverse would potentially be quite heavy. The shaft would have to be supported
at each end, making it difficult to remove the carriage, if required, for modifications
and adjustments of the design.
Figure 6.7: Wheels locating in modified channel section.
The effects of friction need to be considered when choosing a bearing/shaft combination, as the bearing would slide along the shaft. It would be difficult to avoid
damaging the bearing when attaching the platform. This design has a low ease of
manufacture and a considerable fabrication cost.
It was not feasible to purchase a bearing/rail assembly, as these systems cost up to
thousands of dollars per metre. Commissioning a custom aluminium extrusion was
also considered, but discounted due to the cost involved and lead-time. For these
reasons, a ready made aluminium channel was modified by gluing 8 mm mild steel
rods into the four corners to allow wheels to locate in the middle of the channel.
This avoided large friction forces being generated when the wheels moved axially
and rubbed on the inner walls of the channel. Super strength Araldite was used
to adhere the steel to the aluminium. Figure ?? shows the wheels locating in the
modified channel.
Once the traverse mechanism was selected it was then necessary to select an appropriate belt/gear system to translate the carriage of the actuator around the table.
Common belts used in robotic applications include V-belts, flat belts with guide
pulleys, and tooth belts.
The major design consideration for our project was the ability of the traverse to
accurately locate on the rails. The reliability of the whole system depends upon the
belts’ ability to accurately locate.
The tooth belt is most accurate due to the tight tolerances of the tooth and gear
system so that slippage of the belt under load conditions is unlikely, or at worst
minimal.
V-belts are prone to distortions and slippage under impact loads that could cause
errors in the response of the system. For example, when the shot is played, there is
a sudden high impulse load applied to the frame and drive system. It is necessary
that this force be absorbed by the system, rather than translated into a motion or
backlash. The carriage and rail system should be able to resist the majority of this
force. However, if the belt system offers minimal resistance, the likelihood of errors
becoming present in the system will increase. Tooth belts, by their nature, add a
level of resistance to motion that cannot be achieved through other belt systems.
A photo of the tooth belt and gear system used in the robot is shown in Figure ??.
Figure 6.8: Tooth belt and pulley.
Pulleys were chosen in such a way that the belt could be clamped to the middle
of the carriage inside the channel and return outside (either above or below). The
basic dimensions of the channel are 25.4 × 76.2 mm; thus the pulley ought to be at
least 45 mm in diameter to satisfy the condition described above. The size of the
six pulleys selected for the final design was a diameter of 50 mm. This provided
linear steps in the x and y directions that were sufficiently small to provide the
required position accuracy and also to ensure the torque provided by the motors
would impart a linear force great enough to overcome the large inertia of the robot.
The belts selected for the final design have a T5 profile and are 10 mm wide; they
have an allowable tensile load of 350 N and a maximum linear speed of 80 m/s (?).
6.6.3
Control of actuation system
Open loop control was used in the final design with hard stops used to calibrate
each of the stepper motors. Limit switches were not used due to the necessity of an
additional control interface (for example, a PLC).
The control system for the motors and actuators was dependent firstly on the type
of actuator, and then on the programs used to calculate the required shot.
As stepper motors were chosen, the most appropriate method of control for these
motors was to use a pre-existing control driver and to interface this with the PC. A
black box approach to the actual mechanics of the control system was taken. All that
was needed to ensure that the motors were controlled accurately was a knowledge
of the appropriate form of input to the control box for a desired outcome. In this
case, the input is a series of pulses that tell the driver when and how many steps
to increment in the stepper motors. This type of control is readily integrated into
higher level programming languages such as Matlab and allows simple control
programs.
6.7
Summary
The mechanical design phase of the project included a comprehensive design process
to create a mechatronic system that would meet the physical requirements of a game
of pool. The emphasis of the design process was placed on the accessibility and
flexibility of the system. This design philosophy was centred on using industrial
processes to achieve the overall goal.
The chosen robust mechanical design allowed the inevitable modifications that were
required during the commissioning phase to be achieved with a relatively low level
of impact to the overall progress of the project.
7 Interface
7.1
Introduction
The goal of the hardware interface is to take the position and force requirements
from the shot selection algorithms and convert these to a series of commands to the
actuators.
The actuation of the motors and solenoid was achieved through direct parallel port
addressing that was decoded by a series of driver boards controlling the stepper
motors and solenoid.
Different options for controlling the output to the actuators in both software and
hardware were investigated. This section discusses the different solutions that were
considered as well as the final method of control used.
7.2
Software development
The final design was chosen to provide an acceptable degree of accuracy whilst still
being feasible in the limited time frame available.
The hardware interface software development process involved four stages:
• identification of the control requirements of the actuators,
• development of a command structure,
• pulse train generation,
• interface code algorithms.
7.2.1
Actuator control requirements
Before it was possible to design software to control the motors and solenoid it was
necessary to determine the control requirements of each of the actuators.
The stepper motors used in the project, as discussed in Section ??, were attached
to a dedicated driver board. This board could then be connected further to another
control card, or else could be directly driven. Irrespective of the connections to
the driver boards, the motors have two requirements in order to achieve adequate
control: a signal indicating which direction to turn, and a step pulse of certain
duration.
The solenoid merely requires that a DC signal of a variable voltage be applied for a
given amount of time.
7.2.2
Command structure development
Once the requirements of the actuators were determined, it was possible to develop
a software command structure that would allow the motors and actuators to be
accurately controlled.
The command structure was also dependent on the output port used to connect
with the dedicated hardware. There were two ports available for output: the serial
(COM) or parallel port. As mentioned earlier, control code was developed using
both ports as potential final choices in order to determine the simplest and most
effective method of control.
A third factor influencing the design of the command structure was the programming
language used for the coding of the hardware interface software. Certain languages
are better suited to parallel port access as opposed to serial port, and vice versa.
Visual Basic was the original language used for developing the interface software,
as it has a very simple method of input and output via either a serial or parallel
port. It was realised that, since the remainder of the software had been written in
Matlab, the use of a second programming language posed further difficulties to
the overall integration of the code. However, at the time that the coding was begun,
there were no better solutions. Matlab 6.1 only allows I/O through a parallel and
serial port to specific types of dedicated hardware. This made it impossible to use
for the purposes of this project, owing to the nature of the hardware developed.
However, when the Visual Basic code was trialed on the dedicated computer for
the robot, it caused fatal exception errors in the operating system. This was a
result of the properties of the dynamic link library (DLL) that was used to access
the parallel port. The DLL was not compatible with the operating system. An
alternate operating system was unable to be used, due to the requirements of the
camera software. Additionally, no other DLL could be found that provided the same
access.
The release of Matlab 6.5, which allows generic parallel and serial port addressing,
meant that there was an alternative available, and it was this version that was
eventually chosen to control the signals sent to the actuators.
Serial port control was discarded on the basis that, firstly, Visual Basic was unable to
directly bit address the serial port, thereby creating complications for the hardware,
but more importantly, the serial port did not have enough pins to allow all of the
actuator control requirements to be met.
The major distinction between the parallel and serial ports is the number of pins that
can be used to transfer data. At least 8 pins needed to be available in order to control
the motion of the motors and solenoid, due to the command input requirements of
these actuators. Serial ports do not have enough available pins. A parallel port has
8 free pins to be used for data transfer and hence was chosen to provide the output
desired.
The command structure developed is shown in Table ??. Bits 0, 2 and 4 control the
direction of the steps in the three stepper motors; setting the bit high corresponds to
clockwise stepping, and setting the bit low corresponds to anti-clockwise stepping.
Bits 1, 3 and 5 control the timing of the steps; a step occurs when the bit is set from
low to high. By controlling the lower six bits appropriately, simultaneous motor
control is possible.
Bits 6 and 7 control the actuation of the solenoid. This is discussed in more detail
in Section ??.
Table 7.1: Actuators command structure.

bit 0        x-position motor direction
bit 1        x-position motor step
bit 2        y-position motor direction
bit 3        y-position motor step
bit 4        angular position motor direction
bit 5        angular position motor step
bits 6 & 7   solenoid power level

Once the command structure was determined, the algorithms for generating appropriate command sequences for the motors and solenoid could then be determined.
This was conducted in conjunction with the development of the control circuit discussed in Section ??.
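A sketch of how the command byte of Table 7.1 might be assembled is given below; the function name and arguments are illustrative. Direction and step inputs are 0 or 1, and the solenoid power level occupies the top two bits.

    % Sketch: packing the parallel port command byte (illustrative names)
    function cmd = buildCommand(dirX, stepX, dirY, stepY, dirA, stepA, solenoidLevel)
        cmd = dirX * 1  + stepX * 2 ...        % bits 0 and 1: x motor
            + dirY * 4  + stepY * 8 ...        % bits 2 and 3: y motor
            + dirA * 16 + stepA * 32 ...       % bits 4 and 5: angular motor
            + solenoidLevel * 64;              % bits 6 and 7: solenoid power level
    end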
7.2.3
Pulse train generation
The overall function of the hardware interface software was to send sequences of
appropriate commands (as developed in the previous section) to the driver box for
the stepper motors and solenoid, so that the motors would be driven as fast as
possible, without any steps being skipped or any other errors in actuation.
A stepper motor cannot go from being stationary to its maximum speed instantly.
If the code does not specify ramp-up time when the motors start, then the combined
inertia of the load being driven and the internal inertia of the motors would cause
the motor to “slip”. In other words, the currents would be changing through the
phases of the motor, and from the perspective of the software the motors would be
operating correctly. Physically, the rotor would be missing steps and the motor would be providing
zero torque to the load during this time. The flow-on effect would be to cause
errors in the final location of the carriage and solenoid. Therefore, if a stepper motor
is to be driven at high speeds, ramp-up and ramp-down of speed are required.
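A simple way to provide such ramping is to lengthen the pauses between the first and last few steps of each move, as sketched below with assumed values (the final software simply doubled the delay for a number of steps at each end of the pulse train, as described in Section 7.2.3.3):

    % Sketch: delay profile with longer pauses at the start and end of a move
    nSteps = 400; nRamp = 20;                  % assumed values
    baseDelay = 0.016;                         % minimum usable pause (s)
    delays = baseDelay * ones(1, nSteps);
    delays(1:nRamp) = 2 * baseDelay;           % ramp-up
    delays(end-nRamp+1:end) = 2 * baseDelay;   % ramp-down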
Command signal generation was achieved using several methods.
7.2.3.1
Visual Basic
Two versions of Visual Basic code were developed. The code provided in Appendix ?? uses the interrupt timer method. The alternate code using system pauses
is very similar to that used in the final Matlab code, and hence has not been
included in this report. While these programs worked on Windows 98 machines,
the DLL used by Visual Basic to write to the parallel port was incompatible with
Windows 2000, which is the reason these were not used. If necessary this could be
reworked if a different DLL could be found. It is possible to make the step command
duration much shorter using this method, simply by changing the initial timer values in the code. This code uses sequential, rather than simultaneous, motor control
to allow for simplified testing in the initial phases. The same algorithms as used in
the Matlab code for simultaneous driving could be used.
7.2.3.2
Matlab timer interrupts
Timer interrupts were based on the inbuilt timer in Matlab, which allows variable
interrupt times (allowing for ramp-up and -down). The benefit of using timer interrupts is that it allows other programs to be run while the motors are driving the
actuators. Therefore, the robot was able to move the solenoid over the white ball
as soon as it had located it, and could still simultaneously run the remaining image
processing and shot selection software while controlling the motors. This reduced
the total time taken to execute a shot significantly. Theoretically, this method of
generating pulses should have been the most efficient.
There were two methods of achieving timer interrupts. The first method was simply
to start a timer, and instruct it to call a specified function periodically, with a
specified period. This function would then send a command signal via the digital
I/O object defined in Matlab. The second method was to set the periodic function
and the timer period as a property of the digital I/O object. Both methods were
essentially equivalent, and gave identical results.
There were a number of problems encountered in implementing this method. The
main problem was that the timers used by Matlab had a maximum resolution of
0.016 s. Therefore, the timer periods were generally rounded to the nearest multiple
of 0.016 s. Any specified timer period less than 0.016 s would simply be rounded up
to this minimum value. Generating a pulse train with a period of 0.032 s (0.016 s high
and 0.016 s low) results in 1875 steps per minute, which corresponds to a stepper
motor speed of 4.69 rpm (since there were 400 steps per revolution). The maximum
operating speed for the motors used was specified as approximately 50 rpm. Clearly,
the maximum speed at which Matlab was able to drive the stepper motors was far
less than the maximum speed of the stepper motors.
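The speed limit quoted above follows directly from the timer resolution:

    % Worked check of the speed limit imposed by the 0.016 s timer resolution
    stepPeriod = 2 * 0.016;               % one step: 0.016 s high plus 0.016 s low
    stepsPerMinute = 60 / stepPeriod;     % 1875 steps per minute
    rpm = stepsPerMinute / 400;           % 4.69 rpm, with 400 steps per revolution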
Another problem with the timer interrupts method was that the period between
interrupts was not regular. The period would jump between 0.016 s and 0.032 s,
resulting in very irregular bumpy control of the stepper motors. If the system processor were occupied by further tasks (for example, running the main program of
the robot) then this problem was further exacerbated, with the period between interrupts becoming much more erratic. This behaviour sometimes caused the stepper
motors to skip steps or stall completely.
Therefore, this method could not be used to generate the command signals.
7.2.3.3
Matlab pauses
The other method for generating pulses involved the use of pauses. After each
command signal is sent, the entire system pauses for a specified duration, then
the next command signal is sent, and so on. This method suffered from the same
problem as the timer interrupts method, namely that a time period of less than
0.016 s could not be achieved.
The obvious drawback of this method was that the system could not perform other
tasks whilst the motors were moving. However, the benefit over the timer interrupt
method was that the pulse train was much more regular and contained none of the
random distortions resulting from the previous method. Because the pulse train was
completely smooth and regular, the average speed of this method was faster than
the timer interrupts method.
Ramp-up and -down of speed was achieved by varying the length of the system
pauses during operation. For a certain number of steps at the beginning and end of
every pulse train, the pause delay was doubled to ensure that the motors would not
slip. It was not considered necessary to generate a smooth ramp system based on
a lookup table because of the low speed with which the motors were being driven.
As before, the motors were driven at only 4.69 rpm compared to their maximum
speed of approximately 50 rpm. For this reason the ramping did not need to be
as long or as finely calibrated as it would have needed to be, had the motors been
run at higher speeds. In fact, because the speed was so low, it was found that no
ramping at all was necessary, so it was not used in the final version of the interface
software.
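A minimal sketch of the pauses method is shown below, assuming that bit 0 of the parallel port is the step line of one motor; the step count, ramp length and pin assignment are illustrative only.

dio = digitalio('parallel','LPT1');       % legacy DAQ toolbox, as in Appendix A
addline(dio,0:7,'out');

nsteps    = 400;        % half-steps to move (assumed)
nramp     = 20;         % steps at each end driven with the doubled pause
basepause = 0.016;      % s, limited by Matlab's timing resolution

for k = 1:nsteps
    if k <= nramp || k > nsteps - nramp
        delay = 2*basepause;              % ramp-up/-down by doubling the pause
    else
        delay = basepause;
    end
    putvalue(dio,1);  pause(delay);       % step line high
    putvalue(dio,0);  pause(delay);       % step line low
end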
It was clear that the sole benefit of being able to run simultaneously with other
software was not sufficient to warrant the use of the timer interrupts method. The pauses method was much smoother and more reliable, and ultimately the difference
in the total time needed to take a shot was not very large.
The final code as incorporated into the robot program can be found in Appendix ??.
7.2.3.4 dSPACE
An alternative to controlling the timing of the command signals from a computer
was to use an external system, such as a microcontroller or a controller board. The
use of the dSPACE board DS1102 to send the command signals to the hardware
driver box was investigated. This board was the only suitable controller available
for use.
The program downloaded to the DS1102 runs continuously. If a sequence of, say, 700
command signals needs to be sent to the driver box, then Matlab sends an array
of 700 eight-bit commands to the board. Matlab can then trigger an interrupt to
the program running on the board, which responds by sending the 700 commands
sequentially with a specified period.
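The array of commands is simply the pulse train written out ahead of time. The sketch below shows one way such an array could be assembled in Matlab before being transferred to the board; the bit layout (bit 1 for direction, bit 0 for the step pulse) is an assumption for illustration and is not the actual command format.

nsteps   = 350;                            % desired half-steps (gives 700 commands)
dirbit   = 1;                              % assumed direction value
commands = zeros(2*nsteps,1);              % two entries (high, low) per step
for k = 1:nsteps
    commands(2*k-1) = 2*dirbit + 1;        % direction held, step line high
    commands(2*k)   = 2*dirbit + 0;        % direction held, step line low
end
% 'commands' would then be downloaded to the DS1102, which clocks the
% values out at the specified period when the interrupt is triggered.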
The C program code written to run on the DS1102 can be found in Appendix ??.
The code currently only writes to two digital I/O pins, but it would be very simple
to modify the code to write to eight.
A more elegant solution than the one presented here would be to have the command
signal generation algorithms in the C code, so that Matlab would only need to
send four integers (rather than an array of commands) to the board, corresponding
to numbers of steps for the x-direction, y-direction and angular stepper motors, as
well as the solenoid power level.
The system was successful in outputting signals at the specified rate. Unfortunately, the dSPACE board was unable to interface with the computer dedicated to the project: the board requires an ISA bus, which has since been superseded by the PCI bus, the only bus type available on the project computer. However, modifying the code for a compatible controller would be
a simple task, and this would allow fast and smooth control of the stepper motors.
7.2.3.5 Conclusion
While using an external controller to send the command signals to the driver box
would be the best solution to drive the stepper motors at a reasonable speed, this
was not achieved within the time frame of the project. Instead, the Matlab pauses
method was used, since this produced smooth and reliable motion of the robot.
7.2.4 Code algorithms
All three stepper motors have 400 half-steps per revolution, so that one half-step
corresponds to an angular rotation of 0.9°.
The diameter of the pulleys used in the design is d = 50.93 mm. One half-step therefore corresponds to a linear distance of πd/400 = 0.400 mm.
The Matlab shot selection algorithm outputs position and angle coordinates relative to a datum position. These three values are converted to a number of steps
using the ratios above. Then a series of commands is written to the motor driver
boards to invoke the correct number of steps in each motor.
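A short sketch of this conversion is given below; the example coordinates are hypothetical, and the ratios are those derived above.

d          = 50.93;                 % mm, pulley diameter
mmPerStep  = pi*d/400;              % = 0.400 mm per half-step
degPerStep = 360/400;               % = 0.9 degrees per half-step

x_mm = 850;  y_mm = 420;  angle_deg = 36;   % hypothetical shot selection output

xsteps = round(x_mm / mmPerStep);           % steps for the x-direction motor
ysteps = round(y_mm / mmPerStep);           % steps for the y-direction motor
asteps = round(angle_deg / degPerStep);     % steps for the angular motor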
The interface uses the Matlab Data Acquisition Toolbox (DAQ) to convert the
number of steps into a series of commands and hence pulses to the stepper motor
driver boards and solenoid control card.
This is achieved by passing the desired 8-bit output command, represented in the format shown in Table ??, as a variable to a MEX file, which writes it directly
to the output memory location of LPT1. The MEX file acts similarly to a dynamic
link library in that it writes directly to the chosen output port. The dedicated
hardware in the driver box then decodes the instructions written to the parallel
port and controls the stepper motors appropriately.
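In outline, sending one command therefore reduces to forming the 8-bit word and handing it to the MEX gateway. The function name lptwrite, the port address and the example command below are hypothetical; they only indicate the role played by the project's MEX file.

LPT1ADDR = hex2dec('378');                 % conventional base address of LPT1 (assumed)
command  = bin2dec('01000101');            % example 8-bit command word (hypothetical)
lptwrite(LPT1ADDR, command);               % hypothetical MEX call writing the byte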
7.3 Hardware
The overall control card circuit can be seen in Appendix ??. The purpose of the
control card is to decode the instructions written to the parallel port memory address.
7.3.1 Motors
As mentioned above, the stepper motors have pre-existing driver boards that require
two input commands:
1. the direction of rotation,
2. a step pulse command of some duration.
The parallel port/driver board interface must be able to link directly with the pins in
the parallel port. This was achieved through the creation of a linking card that takes
in the values from the parallel port and translates them into signals that the driver
board can interpret. This was necessary because the driver board was designed to
interface with a PCI card that would have been inserted into the computer. However,
this type of card is no longer compatible with more recent computer systems and
hence a hybrid interface card was needed.
As the pins’ command structure was designed to correspond to these requirements, it
was fairly simple to then hardwire these pins to the driver boards. Optical isolators
were added to ensure that the computer would be protected if electrical problems
were encountered in the motors.
7.3.2 Solenoid
As mentioned in Section ?? above, two of the parallel port pins controlled by the
Matlab interface software commanded the solenoid.
Two different concept solutions were considered for the digital control of the solenoid.
The initial suggestion was to convert the power supply from AC (mains) to 150 V
DC. The two pins could be used to determine the level of power supplied to the
solenoid. The first pin is set high when a timer in software is initialised. When
this pin goes high, a variable capacitor is charged for the duration that the timer is
running. When the timer expires, the second pin is set high to discharge the variable
capacitor, causing the solenoid to fire.
This method is the most effective and versatile form of control for the solenoid, as it
makes possible a continuous range of force output. However, insufficient time and resources were available to realise this design.
The alternative solution was simply to use the four possible output combinations of the two pins (00, 01, 10, 11), each corresponding to a different preset power level supplied to the solenoid. This allows the power supply to remain AC (thus circumventing the need for a transformer). Table ?? shows the power levels indicated
by each output combination. The solenoid remains charged until a 00 command is
received.
Table 7.2: Solenoid power levels and commands.

Pin 6   Pin 7   Power level
0       0       no power (0 V)
0       1       low power (60 V)
1       0       medium power (120 V)
1       1       high power (180 V)
This is a much coarser version of control, but it is simpler to implement in hardware,
as the use of a variable capacitor is not required. A timer is still required in order
to determine when to reset the pins and turn the solenoid off.
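A sketch of commanding a power level under this scheme is given below, following the pin values in Table 7.2. The mapping of the two pins onto lines of the digitalio object and the firing duration are assumptions for illustration.

dio = digitalio('parallel','LPT1');
addline(dio,0:7,'out');

level = 2;                                  % 0 = off, 1 = low, 2 = medium, 3 = high
pins  = [bitget(level,2) bitget(level,1)];  % [pin 6, pin 7] as in Table 7.2
putvalue(dio.Line(7:8), pins);              % assumed: lines 7 and 8 carry pins 6 and 7
pause(0.5);                                 % assumed firing time before resetting
putvalue(dio.Line(7:8), [0 0]);             % 00 command: remove power from the solenoid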
This second method was the solution chosen for our hardware. The signals from
the two pins allocated to the solenoid were passed through a decoder which, using
a series of relays, created the required power level in the solenoid.
Interlocking relays were used to ensure that no two power levels could be applied
to the solenoid at the same time. The solenoid circuit was also optically isolated to
prevent damage to the computer. A transformer was incorporated into the design
to supply power to the solenoid and motors, and also to the optical isolators, relays
and the decoder.
7.4 User Interface
A GUI was designed to be displayed on a monitor at the beginning of each of the
robot’s shots. This GUI is shown in Figure ??. The main purpose of this GUI was to
provide a trigger which a user could use to tell the robot to take its shot. However,
it also allowed a user to set a number of options.
The robot can be set to choose and play a shot completely independently. Alternatively, the robot can be asked to display its chosen shot on the screen for verification.
Additionally, the robot can present a list of several alternative shots, and a user can
choose a shot from this list for the robot to play.
A user can also decide the level of complexity of the shots which are to be considered
by the robot. The robot can be ordered to consider only “simple (direct) shots”, that is, shots of type BjPi. If the robot is asked to also consider “medium complexity shots”, then the robot will also consider shots of types [Ca]BjPi, Bj[Ca]Pi and BjBkPi. The robot can further consider “very complex shots” of types [Ca]BjBkPi, Bj[Ca]BkPi, BjBk[Ca]Pi and BjBkBlPi. Finally, the robot may be set to a “trick shot mode”, so that it only considers shots from this last set of shot types.
The GUI was programmed so that it would remember the options selected for the
previous shot in the game, and present these as the default options for the next
shot.
The robot may be triggered to take its shot by pressing one of the three buttons,
“Table is still open”, “Go for red” and “Go for yellow”. The first of these buttons
Figure 7.1: A GUI allowing a user to set various options and trigger the robot to
take its shot.
causes the robot to assume that no balls have yet been sunk, so that it may attempt
to sink any coloured balls. Otherwise, the robot will play to sink only red or only yellow balls. For the first turn of every game, an additional button, “Play the break”, is also presented.
There is an option for the robot to play forward only. This option should be checked
when the opponent has played a shot resulting in the cue ball being sunk in a pocket.
The rules of 8-Ball then dictate that the cue ball must be struck in a forward direction
in order for the shot to not be deemed a foul.
Finally, there is a button to allow a user to halt the robot software and end the
game.
A second GUI was created to display the shot chosen by the robot software on the
screen, and also to allow a user to choose a shot for the robot to play from a list of
alternatives. This GUI is shown in Figure ??.
Figure 7.2: A GUI displaying the chosen shot and allowing a user to choose from
other alternatives.
Initially, the GUI will display the best shot found by the shot selection software for
the robot to play. A pull-down menu is also provided so that a user can examine
a number of alternative shots. When a shot is selected, an image of the shot is
displayed, and the score (related to the merit function) of the shot is shown.
While the robot was carrying out various tasks, an update window was displayed on
the screen, containing information about the current activity of the robot.
Additionally, a text-to-speech converter was used to add an element of personality to
the robot. The robot was programmed to randomly speak various statements, some
of which related to its operation, and others which were provided for entertainment.
7.5 Summary
The interface between the software and the dedicated hardware was designed to
meet the requirements of the actuators and the output commands from the shot
determination algorithms. Numerous methods of motor control were developed and
tested, but due to hardware conflicts, only one was able to be integrated with the
system. The success of the interface was evinced by the lack of errors present in the
conversion between the software instructions and the resulting physical actuation.
A graphical user interface was developed to allow user control of several options, and
to communicate the current process being executed by the robot. Voice synthesis
was included to add a human element to the robot.
8 Commissioning and results
8.1 Introduction
The commissioning of any prototype system is a time-consuming task. After the construction phase was completed, the integration of the different subsystems of the robot began. This process involved combining the vision system with the shot selection algorithm, and then integrating these with the dedicated hardware. This section
outlines the process that was followed during this stage, as well as highlighting the
various problems that were encountered and their respective solutions.
8.2 Software integration
Software integration was reasonably straightforward, owing to the compatibility of
the different software systems and the flexibility of Matlab.
As discussed in Section ??, a DOS console program was used to interface the digital
camera with the computer. This program was written in Visual C++. Through
this console program, Matlab was able to invoke the D30Remote program when it
was necessary to take a photo.
The other software programs were all written in Matlab. As a result, the task of
combining the different systems was relatively simple. The only modifications that
were necessary during the commissioning phase of the project were in response to
run time errors generated by exceptional ball layouts that precluded a normal shot
being played. These modifications added robustness to the system by ensuring that
the robot would always be able to play a shot. More specifically, if the robot was
unable to play a shot to hit one of its target balls, it was programmed to shoot the
cue ball towards the middle of the table on the highest solenoid power level.
As discussed in Section ??, the dedicated hardware and interface program were
developed simultaneously. Owing to this approach, the communication with the
motors and solenoid was extremely simple. The more difficult part of this process
was found to be in the calibration and testing of the system.
8.3 Position calibration
It was necessary to compare the number of steps per millimetre to the theoretical
values obtained previously. This was achieved by measuring the distance between
two known points on the table, then counting the number of steps required to drive
the motors between these points. The theoretical values and the actual values
corresponded exactly. This was a direct result of the high quality of the belts and
pulleys selected for the drives, which eliminated the possibility of slipping or lag.
The determination of datum positions was the most crucial aspect of the calibration of the system. Any errors present in the calibration of the system resulted in
consistent errors in the positioning of the robot during actuation.
As open loop control for the robot was used, there was no feedback from the motors
to check whether the positioning was correct. Hence, it was necessary to ensure that
the starting position of the robot was as accurate as possible. This was achieved to
varying degrees of success.
The datum position in the x-direction was located using a hard stop placed in the channel at the point where the cross traverse had just moved out of the field of view of the camera, allowing an uninterrupted view of the table.
The position of this datum with respect to the table was at the end from which
the break is played. The y-direction datum position was found by driving the cross
traverse carriage to the right hand side of the table with reference to the break
position. The robot in the datum position can be seen in Figure ??.
The angular datum position was the most difficult of the positions to accurately
define due to the lack of feedback in the motors. The angular datum position was
chosen to be in line with the x-axis of the table as shown in Figure ??. However,
great difficulty was experienced in determining whether the motor was in its initial
position. The angular position of the cue was also found to be the most critical
in terms of its effect on the ability of the robot to successfully complete its shot.
Figure 8.1: Robot at the three datum positions.
A significant amount of time was spent attempting to successfully calibrate this
position.
The first calibration method attempted was to turn on the motors and use the cue
as a visual guide, by sighting the cue against the rail to test whether they were
parallel. The angular motor was stepped until it was judged that the solenoid was
aligned with the x-axis. This method was not robust, and as a result, a different
method was found.
The solenoid bracket was driven to the x- and y-direction datum positions, and then
rotated until the solenoid bracket just touched the upright. The bracket was then
rotated a specific number of steps in the anti-clockwise direction to provide a datum
for the angular position. The number of steps was determined by trial and error.
The number of steps to align the cue tip with the x-axis in the datum position was
found to be between 19 and 20 steps anti-clockwise from the upright. This was the
maximum resolution obtainable. If it were possible to increase the resolution of the
motors, say from 0.9◦ per step to 0.09◦ per step, the accuracy of the datum would
be greatly improved.
The next step in the calibration process was to determine the stand-off distance between the cue tip and the cue ball. This distance was found by trial and error, judged by how well the cue struck the cue ball.
Finally, the relative distance between the datum of the cue and the zero position in the photo of the table was measured.
These calibration values were then placed into the main control program and used
throughout the commissioning process.
8.4 Cue ball location
Once the system was calibrated, the next phase of commissioning involved testing the positional accuracy of the vision system with regard to accurately locating the actuator near the cue ball. The solenoid actuator was programmed to position the shaft of the angular position motor directly over the centre of the cue ball. This provided an easily verifiable method of checking the (x,y) calibration.
The optimum position for the solenoid before actuation was determined to be the location at which the shaft would make contact with the cue ball just before maximum
extension. This allowed the maximum velocity of the solenoid and hence maximum
force to be applied as well as reducing the amount of spin transferred to the cue
ball. The number of steps required from the centre of the ball to the optimum firing position was determined empirically through trial and error. The use of a video camera that would have allowed easy measurement of the point of contact with respect to the centre of the ball was considered; however, due to time constraints, this could not be implemented.
8.5 Cue tip length
One of the problems encountered in the commissioning of the robot was finding the
optimum length of the cue tip. It was a requirement of the system that the cue
tip have the largest possible stroke length, both to improve power and to move the
point of contact closer to the center of the ball. The cue tip also had to be high
enough off the table when fully retracted, so as not to interfere with the placement
of the balls on the table.
The pool table frame was designed to allow the table surface to be adjusted to
obtain the most level playing surface. However, owing to the quality of the table
used, there was still some degree of bowing in the table. This resulted in the height
of the cue tip changing from location to location across the table. As a result, a
fixed position cue was inappropriate as it did not allow the necessary adjustments
to be made during the commissioning phase.
Initially the cue tip, which was made of wood, had a hole bored, and a 10UNF grub
screw glued into the cue. This was then attached to the solenoid. This did not
allow any adjustments of the length of the cue tip and hence was replaced. Instead,
a 5 mm threaded rod was glued into the cue tip. Two lock nuts were used to allow
variable positioning of the cue tip, by placing one 8 mm washer in between two
5 mm washers. The 8 mm washer was larger in diameter than the hole into which
the solenoid retracted. Hence, by adjusting the position of the lock nuts on the
threaded rod, it was possible to vary the distance that the solenoid retracted. A
photo of the solenoid shaft with the cue tip attached is shown in Figure ??.
Figure 8.2: The solenoid shaft with the cue tip attached.
Once this was completed, a series of height measurements were made over the table
to determine the profile of heights and deduce the worst corner. The cue was adjusted so that, in the highest corner of the table, the retracted cue tip cleared the
height of the balls in any orientation.
8.6 Carriage wheels and motor torque
The initial design used a four wheel system in all carriages to assist in locating the
carriages in the rails. When the robot was assembled the wheels running in the rails
caused too much friction and as a result the motors stalled. This was particularly
the case in the x-direction. This would not necessarily have required the changes
that were eventually made had the motors driving the carriages been more powerful.
The two potential solutions to this problem were to remove the top two wheels in
all carriages and use the weight of the uprights and traverse to locate the wheels
in the rails, or to find a way to increase the power and hence torque output of the
motors.
There were two possible solutions for boosting the power output. The first was to
use a gear box; this was not used as it was too time and cost intensive to buy and
retrofit. The second method, which was implemented, was to increase the torque
output by increasing the power supplied to the motors. The stepper motors were
designed to be operated at 2.5 A; this was increased to 3 A. Whilst this is not a
recommended solution for future problems encountered with motor power, it was
the only feasible solution available in the time frame given, and coupled with the
changes to the wheel locations, provided an acceptable, if temporary, solution to the
problem.
This solution worked extremely well for the x-direction carriages; however, problems were found in the y-direction. When the wheels were removed, the
carriage rotated about the C-section rail, as a result of the center of mass of the
structure being offset from the geometric center. This caused the cue tip to shift in
its position, depending on the orientation of the solenoid bracket. It also caused the
axles of the wheels in the lower rail to rub severely against the inner channel wall,
further causing the motor to stall. It was therefore necessary to place one wheel
back in the top rail. However, the position (rather than being adjustable as it was
initially) was made fixed to ensure that the wheel would not slide around during
operation. The final configuration of the carriage wheels can be seen in Figure ??.
Figure 8.3: The final configuration of the carriage wheels.
8.7 Cross traverse shearing
It was found that during prolonged operation, the carriages in the x-direction became
misaligned. It was determined that the problem stemmed from the grub screws used to attach the pulleys to the motor shafts and the motor shaft to the coupling. The unequal friction levels in the rails also contributed to the
problem by increasing the level of torque required to move the shaft. The grub
screws were located on flats machined into the shaft of the motors, as well as in the
shaft attached to the coupling. The main difficulty with grub screws is that if there
is any real resistance to motion, they have a tendency to grab and tear the metal
of the shafts. This caused slippage in the shaft attached to the coupling, and as a
result, the relative position of the carriage on that side would slip. The problem
was minimised by regularly monitoring the status of the grub screws and tightening
when necessary. Teflon lubricant was applied to all of the rails to reduce the effects
of friction and this was seen to have a significant effect on the frequency of the cross
traverse shear.
8.8 Cables
The power cables for the stepper motors, solenoid and camera, and the USB cable for the camera, all had to be suspended above the table. The cables needed to
be arranged in such a way so that they did not obscure the image of the table when
a photo was being taken. Also, the cables from the solenoid and angular position
stepper motor needed to be long enough to extend to the far side of the table, as
well as not interfering with the balls, when the carriage was moving over the table.
The solution was to suspend a wire from the camera mount, which followed the
camera cables and met the other cables close to where they joined the solenoid
mount. As the robot needed to be out of the frame of the photo when the camera
was operating, the cable needed to be fairly taut so as not to droop into the image.
The final configuration of the cables and wires setup can be seen in Figure ??.
Because the cable was attached to the camera mount, any significant tension applied
to the cable would cause the mount to shift position, causing errors in the captured
Figure 8.4: The cables and wires setup.
image. This problem was reduced by driving the carriages to the datum positions
to take a photo, while still leaving enough slack in the wire so as not to shift the
camera position. However, it was not possible to take a photo with the traverse off
the other end of the table, as this caused a small shift in the camera position.
8.9 Vision system
The attachment of the camera to the ceiling above the pool table is shown in Figure ??.
Figure 8.5: The attachment of the camera to the ceiling.
It was necessary to position the camera as accurately as possible such that the plane
of the camera lens was parallel to the plane of the table. This was achieved by testing
the alignment of two parallel lines marked out on the floor beneath the camera. The
orientation of the camera was adjusted until the images of the parallel lines were
parallel in the photo taken from that camera position. It was difficult to adjust the
camera mount, so the final position may not necessarily have been correct. Image
distortion made this calibration difficult because the lines that were used to judge
the position were distorted in the calibration image.
The vision system required calibration once the camera was mounted above the table
before it could be used and interfaced with the arm of the robot. This calibration
involved two variables: the horizontal movement between the table and the camera,
and the zoom of the lens. Changes in these variables result in changes in the cropping
limits of the photo, and the radius (measured in pixels) of the images of the balls
on the table respectively. This is because the camera and the robot are not linked,
so that any relative movement between the two creates a need for recalibration. If
the camera was mounted on a stiff frame that was physically attached to the pool
table, these issues would not be a problem.
Calibration of the vision system involved setting the template colours to correspond
to the approximate average colours of the balls (determined by the lighting in the
room).
In order to align the table image with the actual tabletop, a fixed point was used
to define the zero pixel of the image. The image was cropped with this point as
pixel (0,0), and the number of steps in the x- and y-directions required to drive the
solenoid bracket to this point from the datum were measured. This point could then
be referenced in both coordinate systems, allowing the ball positions in the image
to be mapped accurately to physical positions on the table.
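In its simplest form (ignoring lens distortion), this mapping is a scaling from pixels to millimetres to steps plus the measured offset of the zero pixel; a sketch is shown below with an assumed offset and example values. The actual code in Appendix A uses a piecewise linear transform built from measured control points, which also absorbs the distortion correction.

mmperpixel = 1746/2987;          % table width in mm over its width in pixels (Appendix A)
stepspermm = 2.5;                % from the ratios in Section 7.2.4
zerosteps  = [120 340];          % assumed steps from the datum to pixel (0,0)

ballpixel  = [655 1210];         % example (row, column) of a ball centre in the image
ballsteps  = round(zerosteps + ballpixel*mmperpixel*stepspermm);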
An auto-calibration program has been written to determine the cropping points of the image, but the program has not been tested. It is believed that the auto-calibration will not prove robust enough to be relied upon to yield sufficiently accurate calibration values.
8.10 Results
Quantitative results were not investigated as comprehensively as would have been
desirable. However, it has been possible to qualitatively comment on the performance of the robot.
This section discusses in more detail the overall performance of the three different
systems of the robot.
8.10.1 The eye
The vision system of the robot proved successful in locating the balls in the image of the table. With minor adjustments, and assuming that there were no obstructions in the photo, the balls were located in the image and their colour determined 100% of the time.
Small errors arose in reconciling the positions of the balls in the image (with reference to the zero pixel) with the positions of the balls on the table (with reference to the robot datum). These issues were resolved by trial-and-error adjustment of the cropping values of the image.
The only further error in the vision system related to the lens distortion. The
accuracy of the distortion filter used to correct this error was unable to be quantified; however, it must be assumed that it was reasonably accurate since the robot
was frequently able to sink balls.
8.10.2 The brain
The brain generally selected shots which were consistent with shots that would be
chosen by a human player.
A problem arose with the points used to define the positions of the pockets. The
software assumed that, in order to sink a ball in a pocket from any point on the
table, there was a fixed point towards which the ball must aim. However, there is
no such fixed point. At extreme angles, or when other balls are very close to the
pockets, the points defined in the program may not necessarily be the optimal points
at which to aim. This caused the shot selection software to sometimes ignore shots
which were quite easy.
8.10.3 The arm
The primary goal of the arm was to take the output from the shot determination
program and accurately position the robot over the desired location on the table.
As mentioned above, owing to the restrictions in time after commissioning, the
results from the arm positioning systems were extremely preliminary.
Qualitatively, the arm system achieved its goal of positioning the cue in the desired
location.
As highlighted in Section ??, there were a number of changes made to the calibration systems that attempted to improve the overall accuracy of the positioning
mechanism. This improved the accuracy of the system considerably, especially in
terms of angular positioning.
Recommendations for future modifications are contained in Section ?? and stem
from the results of the initial qualitative testing of the robot.
8.11 Summary
The commissioning process overall was relatively smooth. There were of course
errors present in the system, which have been discussed. As with any prototype
development process, it is extremely difficult to predict absolutely all of the potential
errors in a system during the design phase. Even with as comprehensive and detailed
a design process as was followed in this project, there are always errors that cannot
be foreseen. However, it is possible to predict general areas where problems could
be encountered, and this was achieved in this project. The effect of this was to allow
easy determination of errors as there was already a place to look.
Comparatively, the number and severity of the errors encountered in the commissioning phase were low. Considering that the commissioning phase was conducted
over a period of a little more than two weeks, and the errors still present in the
project at the end of this period were identified as being primarily a result of
the limitations in the hardware systems used and not as a result of the design or
construction, it can be concluded that the commissioning and design phases were
overall a success.
9 Future work
9.1 Introduction
There are several areas that could be developed and modified to increase the accuracy and level of strategy of the robot. As this was the first version of the robot, some of these improvements were considered infeasible within the time frame allowed to design and build a complete robot. Other problems encountered had more effect on the final product than first predicted. These ideas are discussed in relation to the eye, the
brain and the arm. The following discussion is intended as suggestions that could
be developed by future students to address problems and shortcomings encountered
by this year’s project team.
9.2 The eye
At present, the vision system of the robot is exemplary in its operation of locating
the balls in the image of the table. There are few aspects of its operation that could
be made more robust. However, there are two problems that could still be worked
upon to make the eye more capable of accurately locating the balls on the table.
The first of these is image distortion. Image distortion is present in almost all lenses
to a certain degree and has been investigated to a point as discussed in Section ??.
In the context of the real-time play, implementing an inverse filter on an image of the
table and balls was infeasible. This was due to the size of the original images, and
hence the filter size and processor time dedicated to this task. For this incarnation
of the robot, it was sufficient to adjust the coordinates of the ball centres, depending
on where in the image of the table the balls were found to be located. It is realistic
to say that the digital camera and lens used for this project will be used by the
Department of Mechanical Engineering in the future, possibly in an application
requiring image distortion correction. It is suggested that a more accurate method
for correcting image distortion be investigated and implemented.
The second aspect of the vision system that requires future development is detection
of the cushions and pockets of the table. This would make the robot more robust, as without edge detection it cannot be determined whether the table and camera have moved in relation to each other. It would also make the robot more tolerant of changes in location and remove the need for careful and time-consuming calibration. Edge
detection is discussed in Section ??.
The use of “bigs” and “smalls” is also suggested as an area for further research. The
use of a casino set of balls does not hinder the ability of either the robot or a human to enjoy a game of pool. This suggestion could be seen as purely academic, but would
also increase the scope of the robot to play games such as 9-ball.
9.3 The brain
Further development of the brain of the robot includes modelling of ball collisions so
as to predict the final position of the cue ball for the common strategy of leaving the
cue ball in a difficult position for the opponent to play a shot. This would require
intimate knowledge of the table’s friction coefficients and the physics of pool.
It is suggested that multiple-shot strategy be incorporated into the brain code. This could include the robot detecting when its opponent has fouled (allowing the robot to take two shots), and determining when it has potted an object ball (earning another shot). Further to this, the brain code could determine the optimum position for the
cue ball behind the foul line or within the “D” when its opponent has sunk the white
ball.
Ultimately, it is desired that the robot adopt a self learning algorithm such that the
code is modified to increase the accuracy of play based on the result of the intended
shot compared to the actual result (and possibly the outcome of shot modelling).
Once the robot has achieved sufficient accuracy to be dubbed a “pool shark”, code
should be implemented to modify the skill level of the robot so that less talented
players may be competitive with the robot.
9.4 The arm
The speed with which the Matlab data acquisition toolbox could write to the
parallel port was believed to be much faster than was actually achieved, thus greatly
increasing the time for the robot to take a shot. Methods to combat this problem
were developed, including controlling the motors in Visual Basic or with a dSPACE
board. Tests have indicated that both these methods could be successful in speeding
up play but had to be discarded due to conflicts with either software (in the case
of VB coding) or hardware (as encountered when attempting to implement the
dSPACE board).
In order for the robot to play shots with forward and back spin, two more degrees
of freedom are required: one to control the pitch of the cue and another to control
the height the cue sits above the table. This would require the redesign of the cue
assembly and the use of up to two more motors. The effects of spin would need to
be investigated further and the shot selection algorithm modified accordingly.
Using a different electric circuit could increase the resolution of the force supplied by
the solenoid. This idea was investigated and not pursued only due to the timeframe
of this project. Alternatively, other methods of cue actuation, pneumatics in particular, could be investigated to see if another method provided a similar resolution
and maximum force to that of the washing machine solenoid currently used in the
robot.
Position and angular feedback from the motors is recommended to determine if
slippage of the motors has occurred and for calibration purposes. Problems with
calibration have been discussed in Section ??. Feedback control could then be
implemented to increase the repeatability and accuracy of the mechanical design.
An error of up to 0.45 degrees exists due to the step size of the motor controlling the angular location of the cue. The use of a 10:1 reduction gearbox is recommended to
reduce this error by 90 percent. This modification could be incorporated into the
possible redesign of the cue assembly.
Presently, as discussed in Section ??, the system commands the stepper motors and
solenoid by changing commands to the dedicated driver boards through the parallel
port. In other words, the software tells the stepper motors and solenoid when to
pulse and when to fire. Unfortunately, the software was unable to achieve the command rate required to drive the motors at a faster speed. In the future, as an alternative to or in combination with the dSPACE board, a more complex piece of dedicated hardware could be constructed. The
solution would still involve the use of either the parallel port or the converted PCI
card, but the method of output to the motors would be different. More specifically,
instead of the software determining the step duration, the hardware would interpret a value passed to it from the software (for example, the total numbers of x, y and angular steps and the solenoid power setting), which it would decode and use to implement the actuation. This has the obvious benefit of circumventing the limited command rate of the software, consequently allowing finer step timing and thereby increasing the speed of the motors. This could be achieved by purchasing an off-the-shelf microcontroller or, if it were deemed more efficient, by constructing a dedicated one.
10 Conclusion
The aim of this project was to design and build a robot capable of playing pool
with speed and accuracy comparable to that of a human player. This aim has
been achieved to a certain degree, with a robot capable of independently playing
a game of pool, although the speed and accuracy of the robot are below initial
expectations. The changes necessary to significantly improve the performance of the
robot are outside the scope of what was achievable in the time frame.
A comprehensive literature review was conducted, covering previous published attempts at constructing pool playing robots, the game of pool itself, and other technical areas relevant to the design. None of the previous published attempts have
been successful in creating an autonomous robot, mostly due to the limitations of
the technology at the time.
An industrial process design approach was used to create the most efficient mechanisms for achieving the goal of the project. The component subsystems of the robot
were designed concurrently. The robot goes through the same overall processes as
a human: locating the balls, selecting a shot to play and executing the shot. The
method of implementation of each task differs from that of a human. When designing the robot the most efficient and accurate methods available were used rather
than the most “human-like”.
The robot comprises a vision system, shot selection algorithms, a hardware interface
and a mechatronic actuation system. The integration, commissioning and preliminary testing of the robot has been completed. The areas of weakness have been
investigated and documented for future reference.
Appendix A Robot code
A.1 Main program code
%PLAY Play pool.
%   Invoke a virtual virtuoso pool player.
clear
close all
% set up parallel port
parport = digitalio(’parallel’, ’LPT1’);
hwlines = addline(parport,0:7,’out’);
motorsbusy = 0;
timedelay = 0.001;
% define the colours, as previously sampled from photos
feltblue = [0.21 0.39 0.76];
ballwhite = [0.95 0.94 0.87];
ballblack = [0.02 0.04 0.05];
ballyellow = [0.98 0.76 0.10];
ballred = [0.96 0.21 0.20];
% game identification
gamedatetime = datestr(now);
gamedatetime(find(gamedatetime == ’:’)) = ’.’;
logfiledir = [pwd filesep ’gamelogs’ filesep gamedatetime];
pwdnow = ’’;
turn = 1;
wantexit = 0;
thoughtsoutput = 1; % default output thoughts to file
tgt = 1; % default first target is a break shot
shotcomplexity = [1 1 0]; % default consider all shot complexities
hmshots = 2; % default automatic shot selection by robot head
hmshotsvalues = [3 6 10 20 100];
hmshotsstring = num2str(hmshotsvalues(1));
for ii = 2:length(hmshotsvalues)
hmshotsstring = [hmshotsstring ’|’ num2str(hmshotsvalues(ii))];
end
hmshotslast = 1; % default first number on list
% boundaries of the table in the photo
tabletop = 211;
tablebottom = 1760;
tableleft = 24;
tableright = 3011;
mmperpixel = 1746/(tableright - tableleft);
% distortion correction
load(’cp.mat’) % load control points as ’cpphoto’ and ’cpsteps’
tform = cp2tform(cpsteps,cpphoto,’piecewise linear’);
% position/angle conversion
stepspermm = 2.5;
stepsperradian = 400/(2*pi);
zeropos = [-233 345]; % location of position (0,0)
balltosolenoid = 117*stepspermm; % in steps, how many
% initialise positions
Xpos = 0;
Ypos = 2952;
Apos = -100;
doXstep = 0;
doYstep = 0;
doAstep = 0;
% speech files
speech.on = 1; % if we want the robot to talk
speech.sync = ’async’;
speech.ad = wavread(’ad.wav’);
speech.allday = wavread(’allday.wav’);
speech.benisgreat = wavread(’benisgreat.wav’);
speech.bill = wavread(’bill.wav’);
speech.boyfriend = wavread(’boyfriend.wav’);
speech.chalk = wavread(’chalk.wav’);
speech.daddy = wavread(’daddy.wav’);
speech.doozy = wavread(’doozy.wav’);
speech.drben = wavread(’drben.wav’);
speech.fact19 = wavread(’fact19.wav’);
speech.fact24 = wavread(’fact24.wav’);
speech.fact58 = wavread(’fact58.wav’);
speech.fact62 = wavread(’fact62.wav’);
speech.fact73 = wavread(’fact73.wav’);
speech.faster = wavread(’faster.wav’);
speech.hello = wavread(’hello.wav’);
speech.icantalk = wavread(’icantalk.wav’);
speech.letsplay = wavread(’letsplay.wav’);
speech.look = wavread(’look.wav’);
speech.pints = wavread(’pints.wav’);
speech.playthisone = wavread(’playthisone.wav’);
speech.red = wavread(’red.wav’);
speech.shotlooksgood = wavread(’shotlooksgood.wav’);
speech.silvioetal = wavread(’silvioetal.wav’);
speech.thinking = wavread(’thinking.wav’);
speech.yellow = wavread(’yellow.wav’);
% set up GUI sizes
screensize = [1 1 1024 768]; % get(0,’screensize’);
% fonts
uifont.name = ’Tahoma Bold’;
uifont.smallsize = 10;
uifont.bigsize = 14;
% colours
backcolour = [0.4 0.4 0.4];
buttoncolour = [0.3 0.3 0.6];
frontcolour = [0.4 0.4 0.7];
popupcolour = [1 1 1];
textcolour = [1 1 1];
% for the trigger GUI
window.width = 450;
frame.spacing = 12;
frame.left = frame.spacing + 1;
frame.width = window.width - 2*frame.spacing;
button.spacing = 12;
button.left = frame.spacing + button.spacing + 1;
button.width = window.width - 2*(frame.spacing + button.spacing);
pushheight = 36;
radioheight = 24;
popupheight = 20;
popupwidth = 50;
editheight = 20;
editwidth = 200;
putfilewidth = 20;
titleheight = 40;
% for the choose shot GUI
popupheight2 = 30;
popupwidth2 = 130;
pushwidth2 = 60;
textwidth2 = 120;
% for the update GUI
updatewindow.width = 600;
updatewindow.textheight = 30;
updatewindow.bigtextheight = 120;
updatewindow.height = 4*updatewindow.textheight + updatewindow.bigtextheight;
updatewindow.left = screensize(1) + (screensize(3) - updatewindow.width)/2;
updatewindow.bottom = screensize(2) + (screensize(4) - updatewindow.height)/2;
while 1
logfilenamedefault = [’turn’ num2str(turn) ’thoughts.txt’];
logfilename = logfilenamedefault;
% display the GUI, wait for a button to be pressed
triggergui
if turn == 1
if speech.on == 1
if rand < 1/5
wavplay(speech.hello,speech.sync)
elseif rand < 1/4
wavplay(speech.allday,speech.sync)
elseif rand < 1/3
wavplay(speech.chalk,speech.sync)
elseif rand < 1/2
wavplay(speech.letsplay,speech.sync)
else
wavplay(speech.pints,speech.sync)
end
end
end
uiwait
close all
% exit the program
if wantexit == 1
break
end
if thoughtsoutput == 0
% do not output
fid = 0;
elseif thoughtsoutput == 1
% output to screen
fid = 1;
elseif thoughtsoutput == 2
% open file to write thoughts to
if not(exist(logfiledir,’dir’))
filesepindices = find(logfiledir == filesep);
mkdir(logfiledir(1:filesepindices(1)),logfiledir(filesepindices(1)+1:end));
end
fid = fopen(fullfile(logfiledir,logfilename),’w’);
end
fprintf(fid,[’Turn #’ num2str(turn) ’.\r\n\r\n’]);
tic
updatewindowhandle = figure(’Name’,’Progress update’,...
’Position’,[updatewindow.left updatewindow.bottom ...
updatewindow.width updatewindow.height],...
’Color’,backcolour,...
’Resize’,’off’,...
’MenuBar’,’none’,...
’NumberTitle’,’off’);
messagehandle = uicontrol(’Style’,’text’,...
’Position’,[1 2*updatewindow.textheight+updatewindow.bigtextheight+1 ...
updatewindow.width updatewindow.textheight],...
’BackgroundColor’,backcolour,...
’String’,’Taking and downloading photo...’,...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,textcolour);
adhandle = uicontrol(’Style’,’text’,...
’Position’,[1 updatewindow.textheight+1 ...
updatewindow.width updatewindow.bigtextheight],...
’BackgroundColor’,backcolour,...
’String’,{’ORDER ROBOPOOLPLAYER NOW!’,’Only $14,999!’,...
’Ring 1900-ROBO-POOL’,’While stocks last.’},...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,textcolour);
% tell the camera to take a photo
if speech.on == 1
if rand < 1/2
wavplay(speech.look,speech.sync)
end
end
dos(’TakePhoto’);
% read in the image
set(messagehandle,’String’,’Reading image file...’)
rawphoto = imread(’photo.jpg’,’jpeg’);
rawphoto = rawphoto(tabletop+1:tablebottom,tableleft+1:tableright,:);
% compress it by decimating
decimation = 8;
decphoto = (1/256)*double(rawphoto(decimation:decimation:end,decimation:decimation:end,:));
% ball radii
rb = 25.32/mmperpixel; % radius of the other balls
rw = 25.32/mmperpixel; % radius of the white ball
% what number is the black ball ("eight ball")?
bb = 8;
% choose a method of interpolation by uncommenting one line
% interpmethod = ’none’;
% interpmethod = ’spline’;
interpmethod = ’bandlimited’;
balls = zeros(2*bb-1,2);
% find the white ball
set(messagehandle,’String’,’Finding white ball...’)
[whi,dummy] = findball(rawphoto,decphoto,decimation,rw,ballwhite,feltblue,1,1,interpmethod);
fprintf(fid,[’The white ball is at ’ coord2str(whi) ’.\r\n’]);
% convert to steps
whi = fliplr(tforminv([tableleft+whi(2) tabletop+whi(1)],tform));
if tgt == 1
% tell the arm to play the break
set(messagehandle,’String’,’Playing break...’)
while motorsbusy
pause(1)
end
Xposnew = round(zeropos(1) - whi(2) + balltosolenoid);
Yposnew = round(zeropos(2) + whi(1));
Aposnew = -100;
solenoidpower = 3;
TasksExecuted = 0;
while (Xpos ~= Xposnew) | (Ypos ~= Yposnew) | (Apos ~= Aposnew)
movearm
pause(timedelay)
end
else
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
% for testing only
% tell the arm to move over the white ball
set(messagehandle,’String’,’Moving arm into position...’)
Xposnew = round(zeropos(1) - whi(2));
Yposnew = round(zeropos(2) + whi(1));
Aposnew = Apos;
solenoidpower = 0;
TasksExecuted = 0;
while (Xpos ~= Xposnew) | (Ypos ~= Yposnew) | (Apos ~= Aposnew)
movearm
pause(timedelay)
end
pause
% break % for testing
% find the black ball
set(messagehandle,’String’,’Finding black ball...’)
[balls(bb,:),dummy] = ...
findball(rawphoto,decphoto,decimation,rb,ballblack,feltblue,1,1,interpmethod);
b = bb:bb; % black ball number
fprintf(fid,[’The black ball is at ’ coord2str(balls(b,:)) ’.\r\n’]);
clear dummy
% determine how many red balls and find them
set(messagehandle,’String’,’Finding red balls...’)
[balls(1:bb-1,:),nr] = ...
findball(rawphoto,decphoto,decimation,rb,ballred,feltblue,-1,bb-1,interpmethod);
r = 1:nr; % red ball numbers
if nr == 0
fprintf(fid,’There are no red balls.\r\n’);
elseif nr == 1
fprintf(fid,[’The red ball is at ’ coord2str(balls(r,:)) ’.\r\n’]);
else
fprintf(fid,[’The red balls are at ’ coord2str(balls(r,:)) ’.\r\n’]);
end
% determine how many yellow balls and find them
set(messagehandle,’String’,’Finding yellow balls...’)
[balls(bb+1:2*bb-1,:),ny] = ...
findball(rawphoto,decphoto,decimation,rb,ballyellow,feltblue,-1,bb-1,interpmethod);
y = bb+1:bb+ny; % yellow ball numbers
if ny == 0
fprintf(fid,’There are no yellow balls.\r\n’);
elseif ny == 1
fprintf(fid,[’The yellow ball is at ’ coord2str(balls(y,:)) ’.\r\n’]);
else
fprintf(fid,[’The yellow balls are at ’ coord2str(balls(y,:)) ’.\r\n’]);
end
% convert to steps
balls = fliplr(tforminv([tableleft+balls(:,2) tabletop+balls(:,1)],tform));
ry = [r y];
ryb = [r b y];
% table dimensions
Xtotal = cpsteps(end,2);
Ytotal = cpsteps(end,1);
bndry = 48*stepspermm;
X1 = bndry;
X2 = Xtotal + 1 - bndry;
Xmid = (X1 + X2) / 2;
Y1 = bndry;
Y2 = Ytotal + 1 - bndry;
Ymid = (Y1 + Y2) / 2;
% ball radii
rb = 25.32*stepspermm; % radius of the other balls
rw = 25.32*stepspermm; % radius of the white ball
% pocket locations
rp = 1.4*rb; % pocket radius
epd = 0.3*rb; % edge pocket displacement
% these are where the robot aims to sink a ball
pockets = [X1+rb Y1+rb;
X1-epd Ymid;
X1+rb Y2-rb;
X2-rb Y1+rb;
X2+epd Ymid;
X2-rb Y2-rb];
% these are where the holes are physically
pocketcentres = [X1-sqrt(0.5)*rp Y1-sqrt(0.5)*rp;
X1-(rp+epd) Ymid;
X1-sqrt(0.5)*rp Y2+sqrt(0.5)*rp;
X2+sqrt(0.5)*rp Y1-sqrt(0.5)*rp;
X2+(rp+epd) Ymid;
X2+sqrt(0.5)*rp Y2+sqrt(0.5)*rp];
np = size(pockets,1);
fprintf(fid,[’The pockets are at ’ coord2str(pockets) ’.\r\n\r\n’]);
% which colour(s) are we trying to sink?
if speech.on == 1
if rand < 1/3
if tgt == 3
wavplay(speech.red,speech.sync)
elseif tgt == 4
wavplay(speech.yellow,speech.sync)
end
elseif rand < 1/2
wavplay(speech.thinking,speech.sync)
end
end
if tgt == 2
targets = ry;
fprintf(fid,’I’’m trying to sink red or yellow.\r\n’);
elseif tgt == 3
targets = r;
fprintf(fid,’I’’m trying to sink red.\r\n’);
elseif tgt == 4
targets = y;
fprintf(fid,’I’’m trying to sink yellow.\r\n’);
end
if length(targets) == 0
targets = b;
fprintf(fid,’Since there are none left, I’’ll try to sink black.\r\n’);
end
fprintf(fid,’\r\n’);
% create structure ’shotlist’ with fields
%    pathtype, pathnos, coords, goodness
makeshotlist
% try to find the ’hmshots’ best shots
%    put their indices into vector ’bestshots’
findagoodshot
if length(bestshots) == 0
fprintf(fid,’Shit, I’’m screwed.\r\n\r\n’);
bestshots = [1];
end
fprintf(fid,’Total thinking time was %.2f seconds.\r\n\r\n’,toc);
% display the GUI, wait for a button to be pressed
close all
chooseshotgui
if speech.on == 1
if shotlist(bestshots(1)).goodness > 0.003
if rand > 1/2
wavplay(speech.playthisone,speech.sync)
else
wavplay(speech.shotlooksgood,speech.sync)
end
else
wavplay(speech.doozy,speech.sync)
end
end
uiwait
close all
fprintf(fid,’Looks like I’’m playing shot #%g.\r\n\r\n’,bestshots(playshot));
updatewindowhandle = figure(’Name’,’Progress update’,...
’Position’,[updatewindow.left updatewindow.bottom ...
updatewindow.width updatewindow.height],...
’Color’,backcolour,...
’Resize’,’off’,...
’MenuBar’,’none’,...
’NumberTitle’,’off’);
messagehandle = uicontrol(’Style’,’text’,...
’Position’,[1 2*updatewindow.textheight+updatewindow.bigtextheight+1 ...
updatewindow.width updatewindow.textheight],...
’BackgroundColor’,backcolour,...
’String’,’Playing the shot...’,...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,textcolour);
adhandle = uicontrol(’Style’,’text’,...
’Position’,[1 updatewindow.textheight+1 updatewindow.width ...
updatewindow.bigtextheight],...
’BackgroundColor’,backcolour,...
’String’,{’ORDER ROBOPOOLPLAYER NOW!’,’Only $14,999!’,...
’Ring 1900-ROBO-POOL’,’While stocks last.’},...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,textcolour);
% tell the arm to play the shot
if speech.on == 1
if rand < 1/6
wavplay(speech.faster,speech.sync)
elseif rand < 1/5
wavplay(speech.fact19,speech.sync)
elseif rand < 1/4
wavplay(speech.fact24,speech.sync)
elseif rand < 1/3
wavplay(speech.fact58,speech.sync)
elseif rand < 1/2
wavplay(speech.fact62,speech.sync)
else
wavplay(speech.fact73,speech.sync)
end
end
if turn == 1
Xposnew = -250;
Yposnew = 2700;
Aposnew = -100;
solenoidpower = 0;
TasksExecuted = 0;
while (Xpos ~= Xposnew) | (Ypos ~= Yposnew) | (Apos ~= Aposnew)
movearm
pause(timedelay)
end
end
shot = shotlist(bestshots(playshot));
shootangle = - atan2(shot.coords(2,2)-shot.coords(1,2),shot.coords(2,1)-shot.coords(1,1));
Xposnew = round(zeropos(1) - whi(2) - balltosolenoid*sin(shootangle));
Yposnew = round(zeropos(2) + whi(1) - balltosolenoid*cos(shootangle));
Aposnew = round(stepsperradian*shootangle);
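% choose the solenoid power level from the total ball travel: the cue-ball
% leg plus the object-ball leg, weighted up for oblique cuts (which transfer
% less energy); multi-stage paths always get the highest power setting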
if length(shot.pathtype) > 2
solenoidpower = 3;
elseif length(shot.pathtype) == 2
vec1 = shot.coords(2,:) - shot.coords(1,:);
vec2 = shot.coords(4,:) - shot.coords(3,:);
if norm(vec1) + (1.87 / (cosang(vec1,vec2))^2) * norm(vec2) < 1000*stepspermm
solenoidpower = 1;
elseif norm(vec1) + (1.87 / (cosang(vec1,vec2))^2) * norm(vec2) < 2000*stepspermm
solenoidpower = 2;
else
solenoidpower = 3;
end
else
solenoidpower = 1;
end
TasksExecuted = 0;
while (Xpos ~= Xposnew) | (Ypos ~= Yposnew) | (Apos ~= Aposnew)
movearm
pause(timedelay)
end
end
% tell the arm to move to the end of the table
set(messagehandle,’String’,’Returning arm to home position...’)
if speech.on == 1
if rand < 1/6
wavplay(speech.ad,speech.sync)
elseif rand < 1/5
wavplay(speech.icantalk,speech.sync)
elseif rand < 1/4
wavplay(speech.benisgreat,speech.sync)
elseif rand < 1/3
wavplay(speech.drben,speech.sync)
elseif rand < 1/2
wavplay(speech.silvioetal,speech.sync)
else
wavplay(speech.bill,speech.sync)
end
end
Xposnew = 0;
Yposnew = 2700;
Aposnew = -100;
solenoidpower = 0;
TasksExecuted = 0;
while (Xpos ~= Xposnew) | (Ypos ~= Yposnew) | (Apos ~= Aposnew)
movearm
pause(timedelay)
end
close all
fprintf(fid,’Well folks, that’’s the end of turn #%g.\r\n\r\n’,turn);
if thoughtsoutput == 2
fclose(fid);
save([logfiledir ’\wkspace.mat’],...
’X1’,’X2’,’Xtotal’,’Y1’,’Ymid’,’Y2’,’Ytotal’,...
’bb’,’r’,’b’,’y’,’ry’,’ryb’,’targets’,’nr’,’ny’,’np’,...
’balls’,’whi’,’pockets’,...
’shotlist’,’hmshots’,’bestshots’,...
’gamedatetime’,’turn’)
end
turn = turn + 1;
end
A.2 Eye software code
function [pos,nb] = findball(rawphoto,decphoto,decimation,r,ballcolour,backcolour,nb,maxnb,method)
%FINDBALL Find a ball using a coarse-fine search.
%   The image table is decimated and the ball located
%   approximately. The region around the determined location
%   is then examined at the original resolution.
%
%   If nb = -1 then the number of balls is determined.
rup = ceil(r);
[posapprox,nb] = findballxcorr(decphoto,r/decimation,ballcolour,backcolour,nb,maxnb,’none’);
pos = zeros(maxnb,2);
for ii = 1:nb
Xcoords = (posapprox(ii,1)-1)*decimation-rup:(posapprox(ii,1)+1)*decimation+rup;
Xcoords = intersect(Xcoords,1:size(rawphoto,1));
Ycoords = (posapprox(ii,2)-1)*decimation-rup:(posapprox(ii,2)+1)*decimation+rup;
Ycoords = intersect(Ycoords,1:size(rawphoto,2));
photosection = (1/256)*double(rawphoto(Xcoords,Ycoords,:));
postemp = findballxcorr(photosection,r,ballcolour,backcolour,1,maxnb,method);
pos(ii,1) = interp1(Xcoords,postemp(1));
pos(ii,2) = interp1(Ycoords,postemp(2));
end
function [pos,nb] = findballxcorr(table,r,ballcolour,backcolour,nb,maxnb,method)
%FINDBALLXCORR Find a ball using cross-correlation.
%   The table image is cross-correlated with a template of a
%   ball of radius r of colour ballcolour against a backcolour
%   background, and nb positions of the ball are returned.
%
%   If nb = -1 then the number of balls is determined.
if method(1) == ’n’ % using no interpolation
rup = ceil(r);
% create a template of the ball
template = ballimage(2*rup+1,2*rup+1,rup+1,rup+1,r,ballcolour,backcolour);
% cross correlate the image with the template
XCR = normxcorr2(template(:,:,1),table(:,:,1));
XCR = XCR(rup+1:size(XCR,1)-rup,rup+1:size(XCR,2)-rup,:);
XCG = normxcorr2(template(:,:,2),table(:,:,2));
XCG = XCG(rup+1:size(XCG,1)-rup,rup+1:size(XCG,2)-rup,:);
XCB = normxcorr2(template(:,:,3),table(:,:,3));
XCB = XCB(rup+1:size(XCB,1)-rup,rup+1:size(XCB,2)-rup,:);
% average the red, green and blue correlations
XCtemp = (XCR + XCG + XCB)/3;
% get rid of the edges
XC = zeros(size(XCtemp));
edgerid = 0; % round(0.06*size(XC,1));
XC(edgerid+1:size(XC,1)-edgerid,edgerid+1:size(XC,2)-edgerid) = ...
XCtemp(edgerid+1:size(XC,1)-edgerid,edgerid+1:size(XC,2)-edgerid);
% % have a look at the cross correlation results
% clf
% subplot(2,2,1)
% surface(flipdim(XCR,1),flipdim(table,1),...
%     'FaceColor','texturemap','EdgeColor','none','CDataMapping','direct')
% view(-35,45)
% subplot(2,2,2)
% surface(flipdim(XCG,1),flipdim(table,1),...
%     'FaceColor','texturemap','EdgeColor','none','CDataMapping','direct')
% view(-35,45)
% subplot(2,2,3)
% surface(flipdim(XCB,1),flipdim(table,1),...
%     'FaceColor','texturemap','EdgeColor','none','CDataMapping','direct')
% view(-35,45)
% subplot(2,2,4)
% surface(flipdim(XC,1),flipdim(table,1),...
%     'FaceColor','texturemap','EdgeColor','none','CDataMapping','direct')
% view(-35,45)
% pause
% clf
% surface(flipdim(XC,1),flipdim(table,1),...
%     'FaceColor','texturemap','EdgeColor','none','CDataMapping','direct')
% view(-35,45)
% pause
if nb == -1 % don’t know how many balls there are
firstmax = max(max(XC));
pos = [];
if firstmax > 0.65 % otherwise we can assume that there are no balls
while (max(max(XC)) > 0.68*firstmax) & (size(pos,1) < maxnb)
% (otherwise we can assume that there are no more balls)
% locate the ball approximately by finding a maximum
% disp(num2str(max(max(XC))))
[xmax ymax] = find(XC == max(max(XC)));
pos = [pos; xmax(1) ymax(1)];
% delete the surrounding area
XC = drawcircle(XC,pos(end,:),r,0,1);
end
% disp(num2str(max(max(XC))))
end
nb = size(pos,1);
else % do know how many balls there are
pos = zeros(nb,2);
for ii = 1:nb
% locate the ball approximately by finding a maximum
% disp(num2str(max(max(XC))))
[xmax ymax] = find(XC == max(max(XC)));
pos(ii,:) = [xmax(1) ymax(1)];
% delete the surrounding area
% if ii < nb
XC = drawcircle(XC,pos(ii,:),r,0,1);
% end
end
% disp(num2str(max(max(XC))))
end
elseif method(1) == ’s’ % using spline interpolation
rup = ceil(r);
% create a template of the ball
template = ballimage(2*rup+1,2*rup+1,rup+1,rup+1,r,ballcolour,backcolour);
% how big a region do we want to interpolate? (2a+1 by 2a+1)
a = 1;
% how many times are we interpolating (by 2)?
k = 4;
xm = a*(2^k) + 1;
ym = a*(2^k) + 1;
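% the peak of the interpolated patch gives a sub-pixel correction to the
% coarse maximum, resolved to 2^(-k) of a pixel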
% cross correlate the image with the template
XCR = normxcorr2(template(:,:,1),table(:,:,1));
XCR = XCR(rup+1:size(XCR,1)-rup,rup+1:size(XCR,2)-rup,:);
XCG = normxcorr2(template(:,:,2),table(:,:,2));
XCG = XCG(rup+1:size(XCG,1)-rup,rup+1:size(XCG,2)-rup,:);
XCB = normxcorr2(template(:,:,3),table(:,:,3));
XCB = XCB(rup+1:size(XCB,1)-rup,rup+1:size(XCB,2)-rup,:);
% average the red, green and blue correlations
XC = (XCR + XCG + XCB)/3;
pos = zeros(nb,2);
for ii = 1:nb
% locate the ball approximately by finding a maximum
[xmax ymax] = find(XC == max(max(XC)));
posest = [xmax(1) ymax(1)];
% let’s look more closely at the region around ’posest1’
XCzoom = XC(posest(1)-a:posest(1)+a,posest(2)-a:posest(2)+a);
% interpolate!
XCzoomint = interp2(XCzoom,k,’spline’);
[xx yy] = find(XCzoomint == max(max(XCzoomint)));
dx = (mean(xx)-xm) * 2^(-k);
dy = (mean(yy)-ym) * 2^(-k);
pos(ii,:) = posest + [dx dy];
% delete the surrounding area
if ii < nb
XC = drawcircle(XC,pos(ii,:),r,0,1);
end
end
elseif method(1) == ’b’ % using bandlimited interpolation
rup = ceil(r);
% create a template of the ball
template = ballimage(2*rup+1,2*rup+1,rup+1,rup+1,r,ballcolour,backcolour);
% how big a region do we want to interpolate? (2a by 2a)
a = 4;
% by how much are we interpolating?
k = 32;
xm = a*k + 1;
ym = a*k + 1;
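% the peak of the zero-padded-FFT interpolation gives a sub-pixel correction
% to the coarse maximum, resolved to 1/k of a pixel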
% cross correlate the image with the template
XCR = normxcorr2(template(:,:,1),table(:,:,1));
XCR = XCR(rup+1:size(XCR,1)-rup,rup+1:size(XCR,2)-rup,:);
XCG = normxcorr2(template(:,:,2),table(:,:,2));
XCG = XCG(rup+1:size(XCG,1)-rup,rup+1:size(XCG,2)-rup,:);
XCB = normxcorr2(template(:,:,3),table(:,:,3));
XCB = XCB(rup+1:size(XCB,1)-rup,rup+1:size(XCB,2)-rup,:);
% average the red, green and blue correlations
XC = (XCR + XCG + XCB)/3;
pos = zeros(nb,2);
for ii = 1:nb
% locate the ball approximately by finding a maximum
[xmax ymax] = find(XC == max(max(XC)));
posest = [xmax(1) ymax(1)];
% let’s look more closely at the region around ’posest1’
XCzoom = XC(posest(1)-a:posest(1)+a-1,posest(2)-a:posest(2)+a-1);
% interpolate!
XCzoomint = bandltdinterp2(XCzoom,k);
[xx yy] = find(XCzoomint == max(max(XCzoomint)));
dx = (mean(xx)-xm) / k;
dy = (mean(yy)-ym) / k;
pos(ii,:) = posest + [dx dy];
% delete the surrounding area
if ii < nb
XC = drawcircle(XC,pos(ii,:),r,0,1);
end
end
else
error([method ’ is an invalid method.’])
end
function ballimage = ballimage(Lx,Ly,x,y,r,ballcolour,backcolour)
%BALLIMAGE Create an image of a ball.
%   ballimage(Lx,Ly,x,y,r,ballcolour,backcolour) is a colour
%   image of size Lx by Ly, with a ball centred at (x,y) with
%   radius r, of colour ballcolour against a backcolour
%   background.
ballimage = zeros(Lx,Ly,3);
% background
ballimage(:,:,1) = backcolour(1)*ones(size(ballimage(:,:,1)));
ballimage(:,:,2) = backcolour(2)*ones(size(ballimage(:,:,2)));
ballimage(:,:,3) = backcolour(3)*ones(size(ballimage(:,:,3)));
% ball
for xx = floor(x-r):ceil(x+r)
for yy = floor(y-r):ceil(y+r)
if (xx-x)^2 + (yy-y)^2 <= r^2
ballimage(xx,yy,:) = ballcolour;
end
end
end
function AA = bandltdinterp2(A,k)
%BANDLTDINTERP2 Two-dimensional bandlimited interpolation.
%   The matrix A is expanded to a matrix AA by zero-padding in
%   the frequency domain. AA is k times bigger than A in both
%   directions. Both dimensions of A must be even.
halfX = size(A,1)/2;
halfY = size(A,2)/2;
B = fftshift(fft2(A));
BB = zeros(2*halfX*k,2*halfY*k);
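% copy the interior of the centred spectrum into the middle of the padded
% array; the Nyquist row/column and corner are split between both sides
% (the /2 and /4 factors below) so that the interpolated result stays real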
BB((k-1)*halfX+2:(k+1)*halfX,(k-1)*halfY+2:(k+1)*halfY) = B(2:2*halfX,2:2*halfY);
BB((k-1)*halfX+1,(k-1)*halfY+2:(k+1)*halfY) = B(1,2:2*halfY)/2;
BB((k+1)*halfX+1,(k-1)*halfY+2:(k+1)*halfY) = B(1,2:2*halfY)/2;
BB((k-1)*halfX+2:(k+1)*halfX,(k-1)*halfY+1) = B(2:2*halfX,1)/2;
BB((k-1)*halfX+2:(k+1)*halfX,(k+1)*halfY+1) = B(2:2*halfX,1)/2;
BB((k-1)*halfX+1,(k-1)*halfY+1) = B(1,1)/4;
BB((k-1)*halfX+1,(k+1)*halfY+1) = B(1,1)/4;
BB((k+1)*halfX+1,(k-1)*halfY+1) = B(1,1)/4;
BB((k+1)*halfX+1,(k+1)*halfY+1) = B(1,1)/4;
AA = ifft2(fftshift(BB))*k^2;
A.3 Brain software code
%MAKESHOTLIST Construct a list of shots and rank them.
% goodness penalty for cushion rebounds
penalty = 0.5;
% which cushions can a ball bounce off to get into pocket i?
pock(1).cush = [2 4];
pock(2).cush = [2 3 4];
pock(3).cush = [2 3];
pock(4).cush = [1 4];
pock(5).cush = [1 3 4];
pock(6).cush = [1 3];
% reinitialise shotlist
shotlist = [];
% create an emergency shot in case nothing else is found
shotlist(1).pathtype = ’’;
shotlist(1).pathnos = [];
shotlist(1).coords(1,:) = whi;
shotlist(1).coords(2,:) = [Xmid Ymid];
shotlist(1).goodness = -1;
if shotcomplexity(1)
% examine shots of the type ’BjPi’
set(messagehandle,’String’,’Considering ball-pocket shots...’)
fprintf(fid,’Considering ball-pocket shots...\r\n’);
for ii = 1:np
for jj = targets
shotlist(end+1).pathtype = ’BP’;
shotlist(end).pathnos = [jj ii];
shotlist(end).coords(4,:) = pockets(ii,:);
shotlist(end).coords(3,:) = balls(jj,:);
shotlist(end).coords(2,:) = balls(jj,:) - (rw+rb) * ...
(pockets(ii,:)-balls(jj,:))/norm(pockets(ii,:)-balls(jj,:));
shotlist(end).coords(1,:) = whi;
vec1 = shotlist(end).coords(2,:) - shotlist(end).coords(1,:);
vec1d = shotlist(end).coords(3,:) - shotlist(end).coords(1,:);
vec2 = shotlist(end).coords(4,:) - shotlist(end).coords(3,:);
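% goodness approximates the chance of sinking the ball: it grows with the
% effective target window (2*rb) and shrinks with the two path lengths and
% with the cut angle between the cue-ball and object-ball directions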
if cosang(vec1,vec2) > 0
shotlist(end).goodness = (2*rb)^2 * cosang(vec1d,vec2) / ...
(norm(vec1) * norm(vec2) * cosang(vec1,vec1d));
else
shotlist(end).goodness = 0;
end
end
end
end
if shotcomplexity(2)
% examine shots of the type ’CaBjPi’
set(messagehandle,’String’,’Considering cushion-ball-pocket shots...’)
fprintf(fid,’Considering cushion-ball-pocket shots...\r\n’);
for ii = 1:np
for jj = targets
for aa = 1:4
shotlist(end+1).pathtype = ’CBP’;
shotlist(end).pathnos = [aa jj ii];
shotlist(end).coords(6,:) = pockets(ii,:);
shotlist(end).coords(5,:) = balls(jj,:);
shotlist(end).coords(4,:) = balls(jj,:) - (rw+rb) * ...
(pockets(ii,:)-balls(jj,:))/norm(pockets(ii,:)-balls(jj,:));
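% the cushion contact point is found by mirroring the white ball in the
% cushion line and intersecting the mirrored straight path with that line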
if aa == 1 % top cushion
% reflect whi through x = X1+rw to get [2*(X1+rw)-whi(1),whi(2)]
shotlist(end).coords(3,:) = ...
[X1+rw interp1([2*(X1+rw)-whi(1) shotlist(end).coords(4,1)],...
[whi(2) shotlist(end).coords(4,2)],X1+rw)];
elseif aa == 2 % bottom cushion
% reflect whi through x = X2-rw to get [2*(X2-rw)-whi(1),whi(2)]
shotlist(end).coords(3,:) = ...
[X2-rw interp1([2*(X2-rw)-whi(1) shotlist(end).coords(4,1)],...
[whi(2) shotlist(end).coords(4,2)],X2-rw)];
elseif aa == 3 % left cushion
% reflect whi through y = Y1+rw to get [whi(1),2*(Y1+rw)-whi(2)]
shotlist(end).coords(3,:) = ...
[interp1([2*(Y1+rw)-whi(2) shotlist(end).coords(4,2)],...
[whi(1) shotlist(end).coords(4,1)],Y1+rw) Y1+rw];
elseif aa == 4 % right cushion
% reflect whi through y = Y2-rw to get [whi(1),2*(Y2-rw)-whi(2)]
shotlist(end).coords(3,:) = ...
[interp1([2*(Y2-rw)-whi(2) shotlist(end).coords(4,2)],...
[whi(1) shotlist(end).coords(4,1)],Y2-rw) Y2-rw];
end
shotlist(end).coords(2,:) = shotlist(end).coords(3,:);
shotlist(end).coords(1,:) = whi;
vec1 = shotlist(end).coords(2,:) - shotlist(end).coords(1,:);
vec2 = shotlist(end).coords(4,:) - shotlist(end).coords(3,:);
vec3 = shotlist(end).coords(6,:) - shotlist(end).coords(5,:);
if cosang(vec2,vec3) > 0
shotlist(end).goodness = penalty * (2*rb)^2 * cosang(vec2,vec3) / ...
((norm(vec1)+norm(vec2)) * norm(vec3));
else
shotlist(end).goodness = 0;
end
end
end
end
% examine shots of the type ’BjCaPi’
set(messagehandle,’String’,’Considering ball-cushion-pocket shots...’)
fprintf(fid,’Considering ball-cushion-pocket shots...\r\n’);
for ii = 1:np
for jj = targets
for aa = pock(ii).cush
shotlist(end+1).pathtype = ’BCP’;
shotlist(end).pathnos = [jj aa ii];
shotlist(end).coords(6,:) = pockets(ii,:);
if aa == 1 % top cushion
% reflect balls(jj,:) through x = X1+rb to get [2*(X1+rb)-balls(jj,1),balls(jj,2)]
shotlist(end).coords(5,:) = ...
[X1+rb interp1([2*(X1+rb)-balls(jj,1) pockets(ii,1)],...
[balls(jj,2) pockets(ii,2)],X1+rb)];
elseif aa == 2 % bottom cushion
% reflect balls(jj,:) through x = X2-rb to get [2*(X2-rb)-balls(jj,1),balls(jj,2)]
shotlist(end).coords(5,:) = ...
[X2-rb interp1([2*(X2-rb)-balls(jj,1) pockets(ii,1)],...
[balls(jj,2) pockets(ii,2)],X2-rb)];
elseif aa == 3 % left cushion
% reflect balls(jj,:) through y = Y1+rb to get [balls(jj,1),2*(Y1+rb)-balls(jj,2)]
shotlist(end).coords(5,:) = ...
[interp1([2*(Y1+rb)-balls(jj,2) pockets(ii,2)],...
[balls(jj,1) pockets(ii,1)],Y1+rb) Y1+rb];
elseif aa == 4 % right cushion
% reflect balls(jj,:) through y = Y2-rb to get [balls(jj,1),2*(Y2-rb)-balls(jj,2)]
shotlist(end).coords(5,:) = ...
[interp1([2*(Y2-rb)-balls(jj,2) pockets(ii,2)],...
[balls(jj,1) pockets(ii,1)],Y2-rb) Y2-rb];
end
shotlist(end).coords(4,:) = shotlist(end).coords(5,:);
shotlist(end).coords(3,:) = balls(jj,:);
shotlist(end).coords(2,:) = balls(jj,:) - (rw+rb) * ...
(shotlist(end).coords(4,:)-balls(jj,:))/norm(shotlist(end).coords(4,:)-balls(jj,:));
shotlist(end).coords(1,:) = whi;
vec1 = shotlist(end).coords(2,:) - shotlist(end).coords(1,:);
vec1d = shotlist(end).coords(3,:) - shotlist(end).coords(1,:);
vec2 = shotlist(end).coords(4,:) - shotlist(end).coords(3,:);
vec3 = shotlist(end).coords(6,:) - shotlist(end).coords(5,:);
if cosang(vec1,vec2) > 0
shotlist(end).goodness = penalty *(2*rb)^2 * cosang(vec1d,vec2) / ...
(norm(vec1) * (norm(vec2)+norm(vec3)) * cosang(vec1,vec1d));
else
shotlist(end).goodness = 0;
end
end
end
end
% examine shots of the type ’BjBkPi’
set(messagehandle,’String’,’Considering ball-ball-pocket shots...’)
fprintf(fid,’Considering ball-ball-pocket shots...\r\n’);
for ii = 1:np
for kk = targets
for jj = targets
if jj ~= kk
shotlist(end+1).pathtype = ’BBP’;
shotlist(end).pathnos = [jj kk ii];
shotlist(end).coords(6,:) = pockets(ii,:);
shotlist(end).coords(5,:) = balls(kk,:);
shotlist(end).coords(4,:) = balls(kk,:) - (2*rb) * ...
(pockets(ii,:)-balls(kk,:))/norm(pockets(ii,:)-balls(kk,:));
shotlist(end).coords(3,:) = balls(jj,:);
shotlist(end).coords(2,:) = balls(jj,:) - (rw+rb) * ...
(shotlist(end).coords(4,:)-balls(jj,:))/norm(shotlist(end).coords(4,:)-balls(jj,:));
shotlist(end).coords(1,:) = whi;
vec1 = shotlist(end).coords(2,:) - shotlist(end).coords(1,:);
vec1d = shotlist(end).coords(3,:) - shotlist(end).coords(1,:);
vec2 = shotlist(end).coords(4,:) - shotlist(end).coords(3,:);
vec2d = shotlist(end).coords(5,:) - shotlist(end).coords(3,:);
vec3 = shotlist(end).coords(6,:) - shotlist(end).coords(5,:);
if (cosang(vec1,vec2) > 0 & cosang(vec2,vec3) > 0)
shotlist(end).goodness = (2*rb)^3 * cosang(vec1d,vec2) * cosang(vec2d,vec3) / ...
(norm(vec1) * norm(vec2) * norm(vec3) * cosang(vec1,vec1d) * cosang(vec2,vec2d));
else
shotlist(end).goodness = 0;
end
end
end
end
end
end
if shotcomplexity(3)
% examine shots of the type ’CaBjBkPi’
set(messagehandle,’String’,’Considering cushion-ball-ball-pocket shots...’)
fprintf(fid,’Considering cushion-ball-ball-pocket shots...\r\n’);
for ii = 1:np
for kk = targets
for jj = targets
if jj ~= kk
for aa = 1:4
shotlist(end+1).pathtype = ’CBBP’;
shotlist(end).pathnos = [aa jj kk ii];
shotlist(end).coords(8,:) = pockets(ii,:);
shotlist(end).coords(7,:) = balls(kk,:);
shotlist(end).coords(6,:) = balls(kk,:) - (2*rb) * ...
(pockets(ii,:)-balls(kk,:))/norm(pockets(ii,:)-balls(kk,:));
shotlist(end).coords(5,:) = balls(jj,:);
shotlist(end).coords(4,:) = balls(jj,:) - (rw+rb) * ...
(shotlist(end).coords(6,:)-balls(jj,:))/norm(shotlist(end).coords(6,:)-balls(jj,:));
if aa == 1 % top cushion
% reflect whi through x = X1+rw to get [2*(X1+rw)-whi(1),whi(2)]
shotlist(end).coords(3,:) = ...
[X1+rw interp1([2*(X1+rw)-whi(1) shotlist(end).coords(4,1)],...
[whi(2) shotlist(end).coords(4,2)],X1+rw)];
elseif aa == 2 % bottom cushion
% reflect whi through x = X2-rw to get [2*(X2-rw)-whi(1),whi(2)]
shotlist(end).coords(3,:) = ...
[X2-rw interp1([2*(X2-rw)-whi(1) shotlist(end).coords(4,1)],...
[whi(2) shotlist(end).coords(4,2)],X2-rw)];
elseif aa == 3 % left cushion
% reflect whi through y = Y1+rw to get [whi(1),2*(Y1+rw)-whi(2)]
shotlist(end).coords(3,:) = ...
[interp1([2*(Y1+rw)-whi(2) shotlist(end).coords(4,2)],...
[whi(1) shotlist(end).coords(4,1)],Y1+rw) Y1+rw];
elseif aa == 4 % right cushion
% reflect whi through y = Y2-rw to get [whi(1),2*(Y2-rw)-whi(2)]
shotlist(end).coords(3,:) = ...
[interp1([2*(Y2-rw)-whi(2) shotlist(end).coords(4,2)],...
[whi(1) shotlist(end).coords(4,1)],Y2-rw) Y2-rw];
end
shotlist(end).coords(2,:) = shotlist(end).coords(3,:);
shotlist(end).coords(1,:) = whi;
vec1 = shotlist(end).coords(2,:) - shotlist(end).coords(1,:);
vec2 = shotlist(end).coords(4,:) - shotlist(end).coords(3,:);
vec3 = shotlist(end).coords(6,:) - shotlist(end).coords(5,:);
vec3d = shotlist(end).coords(7,:) - shotlist(end).coords(5,:);
vec4 = shotlist(end).coords(8,:) - shotlist(end).coords(7,:);
if (cosang(vec2,vec3) > 0 & cosang(vec3,vec4) > 0)
shotlist(end).goodness = penalty * (2*rb)^3 * cosang(vec2,vec3) * cosang(vec3d,vec4) / ...
((norm(vec1)+norm(vec2)) * norm(vec3) * norm(vec4) * cosang(vec3,vec3d));
else
shotlist(end).goodness = 0;
end
end
end
end
end
end
% examine shots of the type ’BjCaBkPi’
set(messagehandle,’String’,’Considering ball-cushion-ball-pocket shots...’)
fprintf(fid,’Considering ball-cushion-ball-pocket shots...\r\n’);
for ii = 1:np
for kk = targets
for jj = targets
if jj ~= kk
for aa = 1:4
shotlist(end+1).pathtype = ’BCBP’;
shotlist(end).pathnos = [jj aa kk ii];
shotlist(end).coords(8,:) = pockets(ii,:);
shotlist(end).coords(7,:) = balls(kk,:);
shotlist(end).coords(6,:) = balls(kk,:) - ...
(2*rb)*(pockets(ii,:)-balls(kk,:))/norm(pockets(ii,:)-balls(kk,:));
if aa == 1 % top cushion
% reflect balls(jj,:) through x = X1+rb to get [2*(X1+rb)-balls(jj,1),balls(jj,2)]
shotlist(end).coords(5,:) = ...
[X1+rb interp1([2*(X1+rb)-balls(jj,1) shotlist(end).coords(6,1)],...
[balls(jj,2) shotlist(end).coords(6,2)],X1+rb)];
elseif aa == 2 % bottom cushion
% reflect balls(jj,:) through x = X2-rb to get [2*(X2-rb)-balls(jj,1),balls(jj,2)]
shotlist(end).coords(5,:) = ...
[X2-rb interp1([2*(X2-rb)-balls(jj,1) shotlist(end).coords(6,1)],...
[balls(jj,2) shotlist(end).coords(6,2)],X2-rb)];
elseif aa == 3 % left cushion
% reflect balls(jj,:) through y = Y1+rb to get [balls(jj,1),2*(Y1+rb)-balls(jj,2)]
shotlist(end).coords(5,:) = ...
[interp1([2*(Y1+rb)-balls(jj,2) shotlist(end).coords(6,2)],...
[balls(jj,1) shotlist(end).coords(6,1)],Y1+rb) Y1+rb];
elseif aa == 4 % right cushion
% reflect balls(jj,:) through y = Y2-rb to get [balls(jj,1),2*(Y2-rb)-balls(jj,2)]
shotlist(end).coords(5,:) = ...
[interp1([2*(Y2-rb)-balls(jj,2) shotlist(end).coords(6,2)],...
[balls(jj,1) shotlist(end).coords(6,1)],Y2-rb) Y2-rb];
end
shotlist(end).coords(4,:) = shotlist(end).coords(5,:);
shotlist(end).coords(3,:) = balls(jj,:);
shotlist(end).coords(2,:) = balls(jj,:) - (rw+rb) * ...
(shotlist(end).coords(4,:)-balls(jj,:))/norm(shotlist(end).coords(4,:)-balls(jj,:));
shotlist(end).coords(1,:) = whi;
vec1 = shotlist(end).coords(2,:) - shotlist(end).coords(1,:);
vec1d = shotlist(end).coords(3,:) - shotlist(end).coords(1,:);
vec2 = shotlist(end).coords(4,:) - shotlist(end).coords(3,:);
vec3 = shotlist(end).coords(6,:) - shotlist(end).coords(5,:);
vec4 = shotlist(end).coords(8,:) - shotlist(end).coords(7,:);
if (cosang(vec1,vec2) > 0 & cosang(vec3,vec4) > 0)
shotlist(end).goodness = penalty * (2*rb)^3 * cosang(vec1d,vec2) * cosang(vec3,vec4) / ...
(norm(vec1) * (norm(vec2)+norm(vec3)) * norm(vec4) * cosang(vec1,vec1d));
else
shotlist(end).goodness = 0;
end
end
end
end
end
end
% examine shots of the type ’BjBkCaPi’
set(messagehandle,’String’,’Considering ball-ball-cushion-pocket shots...’)
fprintf(fid,’Considering ball-ball-cushion-pocket shots...\r\n’);
for ii = 1:np
for kk = targets
for jj = targets
if jj ~= kk
for aa = pock(ii).cush
shotlist(end+1).pathtype = ’BBCP’;
shotlist(end).pathnos = [jj kk aa ii];
shotlist(end).coords(8,:) = pockets(ii,:);
if aa == 1 % top cushion
% reflect balls(kk,:) through x = X1+rb to get [2*(X1+rb)-balls(kk,1),balls(kk,2)]
shotlist(end).coords(7,:) = ...
[X1+rb interp1([2*(X1+rb)-balls(kk,1) pockets(ii,1)],...
[balls(kk,2) pockets(ii,2)],X1+rb)];
elseif aa == 2 % bottom cushion
% reflect balls(kk,:) through x = X2-rb to get [2*(X2-rb)-balls(kk,1),balls(kk,2)]
shotlist(end).coords(7,:) = ...
[X2-rb interp1([2*(X2-rb)-balls(kk,1) pockets(ii,1)],...
[balls(kk,2) pockets(ii,2)],X2-rb)];
elseif aa == 3 % left cushion
% reflect balls(kk,:) through y = Y1+rb to get [balls(kk,1),2*(Y1+rb)-balls(kk,2)]
shotlist(end).coords(7,:) = ...
[interp1([2*(Y1+rb)-balls(kk,2) pockets(ii,2)],...
[balls(kk,1) pockets(ii,1)],Y1+rb) Y1+rb];
elseif aa == 4 % right cushion
% reflect balls(kk,:) through y = Y2-rb to get [balls(kk,1),2*(Y2-rb)-balls(kk,2)]
shotlist(end).coords(7,:) = ...
[interp1([2*(Y2-rb)-balls(kk,2) pockets(ii,2)],...
[balls(kk,1) pockets(ii,1)],Y2-rb) Y2-rb];
end
shotlist(end).coords(6,:) = shotlist(end).coords(7,:);
shotlist(end).coords(5,:) = balls(kk,:);
shotlist(end).coords(4,:) = balls(kk,:) - (2*rb) * ...
(shotlist(end).coords(6,:)-balls(kk,:))/norm(shotlist(end).coords(6,:)-balls(kk,:));
shotlist(end).coords(3,:) = balls(jj,:);
shotlist(end).coords(2,:) = balls(jj,:) - (rw+rb) * ...
(shotlist(end).coords(4,:)-balls(jj,:))/norm(shotlist(end).coords(4,:)-balls(jj,:));
shotlist(end).coords(1,:) = whi;
vec1 = shotlist(end).coords(2,:) - shotlist(end).coords(1,:);
vec1d = shotlist(end).coords(3,:) - shotlist(end).coords(1,:);
vec2 = shotlist(end).coords(4,:) - shotlist(end).coords(3,:);
vec2d = shotlist(end).coords(5,:) - shotlist(end).coords(3,:);
vec3 = shotlist(end).coords(6,:) - shotlist(end).coords(5,:);
vec4 = shotlist(end).coords(8,:) - shotlist(end).coords(7,:);
if (cosang(vec1,vec2) > 0 & cosang(vec2,vec3) > 0)
shotlist(end).goodness = penalty * (2*rb)^3 * cosang(vec1d,vec2) * cosang(vec2d,vec3) / ...
(norm(vec1) * norm(vec2) * (norm(vec3)+norm(vec4)) * ...
cosang(vec1,vec1d) * cosang(vec2,vec2d));
else
shotlist(end).goodness = 0;
end
end
end
end
end
end
% examine shots of the type ’BjBkBlPi’
set(messagehandle,’String’,’Considering ball-ball-ball-pocket shots...’)
fprintf(fid,’Considering ball-ball-ball-pocket shots...\r\n’);
for ii = 1:np
for ll = targets
for kk = ryb
if kk ~= ll
for jj = targets
if (jj ~= ll & jj ~= kk)
shotlist(end+1).pathtype = ’BBBP’;
shotlist(end).pathnos = [jj kk ll ii];
shotlist(end).coords(8,:) = pockets(ii,:);
shotlist(end).coords(7,:) = balls(ll,:);
shotlist(end).coords(6,:) = balls(ll,:) - (2*rb) * ...
(pockets(ii,:)-balls(ll,:))/norm(pockets(ii,:)-balls(ll,:));
shotlist(end).coords(5,:) = balls(kk,:);
shotlist(end).coords(4,:) = balls(kk,:) - (2*rb) * ...
(shotlist(end).coords(6,:)-balls(kk,:))/norm(shotlist(end).coords(6,:)-balls(kk,:));
shotlist(end).coords(3,:) = balls(jj,:);
shotlist(end).coords(2,:) = balls(jj,:) - (rw+rb) * ...
(shotlist(end).coords(4,:)-balls(jj,:))/norm(shotlist(end).coords(4,:)-balls(jj,:));
shotlist(end).coords(1,:) = whi;
vec1 = shotlist(end).coords(2,:) - shotlist(end).coords(1,:);
vec1d = shotlist(end).coords(3,:) - shotlist(end).coords(1,:);
vec2 = shotlist(end).coords(4,:) - shotlist(end).coords(3,:);
vec2d = shotlist(end).coords(5,:) - shotlist(end).coords(3,:);
vec3 = shotlist(end).coords(6,:) - shotlist(end).coords(5,:);
vec3d = shotlist(end).coords(7,:) - shotlist(end).coords(5,:);
vec4 = shotlist(end).coords(8,:) - shotlist(end).coords(7,:);
if (cosang(vec1,vec2) > 0 & cosang(vec2,vec3) > 0 & cosang(vec3,vec4) > 0)
shotlist(end).goodness = (2*rb)^4 * ...
cosang(vec1d,vec2) * cosang(vec2d,vec3) * cosang(vec3d,vec4) / ...
(norm(vec1) * norm(vec2) * norm(vec3) * norm(vec4) * ...
cosang(vec1,vec1d) * cosang(vec2,vec2d) * cosang(vec3,vec3d));
else
shotlist(end).goodness = 0;
end
end
end
end
end
end
end
end
fprintf(fid,’\r\n’);
clear pock
clear vec1 vec1d vec2 vec2d vec3 vec3d vec4
%FINDAGOODSHOT Find a good shot.
%   Looks through the shot list in order of descending
%   goodness and finds a possible shot.
set(messagehandle,’String’,’Checking and ranking shots...’)
% create an index of the shots in order of goodness
[dummy,best] = sort(-[shotlist.goodness]);
clear dummy
% tolerance in pixels
eps = rb/10;
for ii = best(1:end-1)
shot = shotlist(ii);
if shot.goodness == 0
break
end
if ii == best(1)
fprintf(fid,’The best shot is shot #%g: white ball’,ii);
else
fprintf(fid,’The next best shot is shot #%g: white ball’,ii);
end
for jj = 1:length(shot.pathtype)
if shot.pathtype(jj) == ’B’
if shot.pathnos(jj) < bb
fprintf(fid,’ to red ball %g’,shot.pathnos(jj));
elseif shot.pathnos(jj) > bb
fprintf(fid,’ to yellow ball %g’,shot.pathnos(jj));
else
fprintf(fid,’ to black ball’);
end
elseif shot.pathtype(jj) == ’C’
if shot.pathnos(jj) == 1
fprintf(fid,’ off the top cushion’);
elseif shot.pathnos(jj) == 2
fprintf(fid,’ off the bottom cushion’);
elseif shot.pathnos(jj) == 3
fprintf(fid,’ off the left cushion’);
elseif shot.pathnos(jj) == 4
fprintf(fid,’ off the right cushion’);
end
elseif shot.pathtype(jj) == ’P’
fprintf(fid,’ into pocket %g.\r\n’,shot.pathnos(jj));
end
end
thisshotworks = 1; % prove me wrong
rr = rw; % radius of the moving ball
ballsmoved = []; % coloured balls which have moved from their original position
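% walk along each leg of the proposed path; the shot is rejected if another
% ball or a cushion obstructs the moving ball, or if the later geometric
% checks (middle-pocket approach angle, forwards-only rule) fail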
for jj = 1:length(shot.pathtype)
% is a ball in the way?
for ib = ryb
% has this ball already moved?
if ismember(ib,ballsmoved)
% then don’t need to check if this ball is in the way
continue
end
% is it within (rr+rb+eps) of the path line?
cond1a = (abs((balls(ib,1)-shot.coords(2*jj,1)) * ...
(shot.coords(2*jj-1,2)-shot.coords(2*jj,2)) - ...
(shot.coords(2*jj-1,1)-shot.coords(2*jj,1)) * ...
(balls(ib,2)-shot.coords(2*jj,2))) < ...
(rr+rb+eps) * norm(shot.coords(2*jj,:)-shot.coords(2*jj-1,:)));
% is it after the beginning of the path?
cond1b = (dot(balls(ib,:)-shot.coords(2*jj-1,:),...
shot.coords(2*jj,:)-shot.coords(2*jj-1,:)) > 0);
% is it before the end of the path?
cond1c = (dot(balls(ib,:)-shot.coords(2*jj,:),...
shot.coords(2*jj-1,:)-shot.coords(2*jj,:)) > 0);
% is the ball not itself part of the shot path?
cond2a = not(ismember(’B’,shot.pathtype(find(shot.pathnos == ib))));
% is it within (rr+rb+eps) of the end of the path?
cond2b = (norm(shot.coords(2*jj,:)-balls(ib,:)) < rr+rb+eps);
if (cond1a & cond1b & cond1c) | (cond2a & cond2b)
fprintf(fid,’Ball %g is in the way.\r\n’,ib);
thisshotworks = 0;
end
end
% is a cushion in the way?
if shot.pathtype(jj) == ’B’
if shot.coords(2*jj,1) < X1+rr
fprintf(fid,'The top cushion is in the way.\r\n');
thisshotworks = 0;
elseif shot.coords(2*jj,1) > X2-rr
fprintf(fid,'The bottom cushion is in the way.\r\n');
thisshotworks = 0;
elseif shot.coords(2*jj,2) < Y1+rr
fprintf(fid,'The left cushion is in the way.\r\n');
thisshotworks = 0;
elseif shot.coords(2*jj,2) > Y2-rr
fprintf(fid,'The right cushion is in the way.\r\n');
thisshotworks = 0;
end
end
% is the ball bouncing off the gap in the top or bottom cushion (where pocket 2/5 is)?
if shot.pathtype(jj) == ’C’ & (shot.pathnos(jj) == 1 | shot.pathnos(jj) == 2)
if shot.coords(2*jj,2) > Ymid-rp-sqrt(0.5)*(rp+epd)-eps & shot.coords(2*jj,2) < ...
Ymid+rp+sqrt(0.5)*(rp+epd)+eps
fprintf(fid,’Can’’t bounce off a pocket, ya doof.\r\n’);
thisshotworks = 0;
end
end
% update the list of balls which have moved
if shot.pathtype(jj) == ’B’
rr = rb;
ballsmoved = [ballsmoved shot.pathnos(jj)];
end
end
if playforwards == 1
% is the shot going forwards?
if shot.coords(1,2) <= shot.coords(2,2)
fprintf(fid,’You’’re not allowed to play backwards.\r\n’);
thisshotworks = 0;
end
end
if shot.pathnos(end) == 2 | shot.pathnos(end) == 5
% is the target ball at too wide an angle to make it into the middle pockets?
% (cannot approach from an angle less than pi/6)
if abs(shot.coords(end-1,2)-shot.coords(end,2))*tan(pi/6) > ...
abs(shot.coords(end-1,1)-shot.coords(end,1))
fprintf(fid,’That ball’’s not going to make it into the pocket at that angle.\r\n’);
thisshotworks = 0;
end
end
if thisshotworks == 1
fprintf(fid,’What a fantasterrific shot!\r\n’);
bestshots = [bestshots ii];
if length(bestshots) >= hmshots
fprintf(fid,’\r\n’);
break
end
end
fprintf(fid,’\r\n’);
end
clear shot thisshotworks
clear cond1a cond1b cond1c cond2a cond2b
function table = ...
tablepic(Xtotal,Ytotal,X1,X2,Y1,Ymid,Y2,np,rp,epd,whi,rw,bb,ryb,balls,rb,pocketcentres,dd)
%TABLEPIC Create and display a table image.
%   Dimensions of the table are determined from
%   Xtotal, Ytotal, X1, X2, Y1, Ymid, Y2.
%   Pocket positions are determined from
%   np, rp, epd.
%   Ball positions are determined from
%   whi, rw, bb, ryb, balls, rb.
Xtotal = round(Xtotal/dd);
Ytotal = round(Ytotal/dd);
X1 = round(X1/dd);
X2 = round(X2/dd);
Y1 = round(Y1/dd);
Ymid = round(Ymid/dd);
Y2 = round(Y2/dd);
rp = rp/dd;
epd = epd/dd;
whi = whi/dd;
rw = rw/dd;
balls = balls/dd;
rb = rb/dd;
pocketcentres = pocketcentres/dd;
table = zeros(Xtotal,Ytotal,3);
% make a polygon mask for the edge shape
edgepolyX = [X1+sqrt(2)*rp X1 X1-sqrt(2)*rp X1 X1 X1-(rp+epd)];
edgepolyX = [edgepolyX fliplr(edgepolyX)];
edgepolyX = [edgepolyX Xtotal+1-edgepolyX];
edgepolyY = [Y1 Y1-sqrt(2)*rp Y1 Y1+sqrt(2)*rp Ymid-rp-(rp+epd) Ymid-rp];
edgepolyY = [edgepolyY Ytotal+1-fliplr(edgepolyY)];
edgepolyY = [edgepolyY fliplr(edgepolyY)];
edgemask = double(roipoly(table(:,:,1),edgepolyY,edgepolyX));
% table outline (brown and blue)
table(:,:,1) = 0; % 0.5 - 0.5*edgemask;
table(:,:,2) = 0; % 0.2 - 0.2*edgemask;
table(:,:,3) = 0.3 + 0.2*edgemask;
% black pockets
for ip = 1:np
table = drawcircle(table,pocketcentres(ip,:),rp,[0 0 0],1);
end
% white ball
table = drawcircle(table,whi,rw,[1 1 1],1);
% red/yellow/black balls
for ib = ryb
if ib < bb
table = drawcircle(table,balls(ib,:),rb,[1 0 0],1);
elseif ib > bb
table = drawcircle(table,balls(ib,:),rb,[1 1 0],1);
else
table = drawcircle(table,balls(ib,:),rb,[0 0 0],1);
end
end
function [] = shotpic(tableonly,np,whi,rw,bb,ryb,balls,rb,pocketcentres,dd,shot)
%SHOTPIC Create and display an image of a shot, with labels.
whi = whi/dd;
rw = rw/dd;
balls = balls/dd;
rb = rb/dd;
pocketcentres = pocketcentres/dd;
shot.coords = shot.coords/dd;
table = tableonly;
howopaque = 0.3;
ballno = 0;
for ii = 1:length(shot.pathtype)-1
if ballno == 0
table = drawcircle(table,shot.coords(2*ii,:),rw,[1 1 1],howopaque);
elseif ballno < bb
table = drawcircle(table,shot.coords(2*ii,:),rb,[1 0 0],howopaque);
elseif ballno > bb
table = drawcircle(table,shot.coords(2*ii,:),rb,[1 1 0],howopaque);
else
table = drawcircle(table,shot.coords(2*ii,:),rb,[0 0 0],howopaque);
end
if shot.pathtype(ii) == ’B’
ballno = shot.pathnos(ii);
end
end
imshow(table)
% path lines
ballno = 0;
thickness = 2;
for ii = 1:length(shot.pathtype)
if ballno == 0
line(’YData’,shot.coords(2*ii-1:2*ii,1),...
’XData’,shot.coords(2*ii-1:2*ii,2),...
’Color’,[0.99 0.99 0.99],...
’LineWidth’,thickness)
elseif ballno < bb
line(’YData’,shot.coords(2*ii-1:2*ii,1),...
’XData’,shot.coords(2*ii-1:2*ii,2),...
’Color’,[1 0 0],...
’LineWidth’,thickness)
elseif ballno > bb
line(’YData’,shot.coords(2*ii-1:2*ii,1),...
’XData’,shot.coords(2*ii-1:2*ii,2),...
’Color’,[1 1 0],...
’LineWidth’,thickness)
else
line(’YData’,shot.coords(2*ii-1:2*ii,1),...
’XData’,shot.coords(2*ii-1:2*ii,2),...
’Color’,[0 0 0],...
’LineWidth’,thickness)
end
if shot.pathtype(ii) == ’B’
ballno = shot.pathnos(ii);
end
end
pocketcentres(2,1) = pocketcentres(1,1);
pocketcentres(5,1) = pocketcentres(4,1);
% pocket labels
for ip = 1:np
text(round(pocketcentres(ip,2)),round(pocketcentres(ip,1)),num2str(ip),...
’HorizontalAlignment’,’center’,...
’FontSize’,9,...
’Color’,[0.99 0.99 0.99])
end
% red/yellow/black ball labels
for ib = ryb
if ib < bb
text(round(balls(ib,2)),round(balls(ib,1)),num2str(ib),...
’HorizontalAlignment’,’center’,...
’FontSize’,8,...
’Color’,’k’)
elseif ib > bb
text(round(balls(ib,2)),round(balls(ib,1)),num2str(ib),...
’HorizontalAlignment’,’center’,...
’FontSize’,8,...
’Color’,’k’)
else
text(round(balls(ib,2)),round(balls(ib,1)),[num2str(ib)],...
’HorizontalAlignment’,’center’,...
’FontSize’,8,...
’FontWeight’,’Bold’,...
’Color’,[0.99 0.99 0.99])
end
end
function cosalpha = cosang(vec1,vec2)
%COSANG Return the cosine of the angle between two vectors.
cosalpha = dot(vec1,vec2)/(norm(vec1)*norm(vec2));
A.4 Interface software code
%MOVEARM Send commands to a driver box to control stepper motors.
command = zeros(1,8);
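% command bit layout: bits 1/3/5 set the X/Y/angle direction, bits 2/4/6
% toggle the corresponding step lines (each axis advances one step every
% two calls), and bits 7-8 select the solenoid power once the target
% position has been reached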
if TasksExecuted == 0
if Xpos < Xposnew
command(1) = 1;
elseif Xpos > Xposnew
command(1) = 0;
end
if Ypos < Yposnew
command(3) = 1;
elseif Ypos > Yposnew
command(3) = 0;
end
if Apos < Aposnew
command(5) = 1;
elseif Apos > Aposnew
command(5) = 0;
end
else
if Xpos < Xposnew
command(1) = 1;
Xpos = Xpos + doXstep;
doXstep = not(doXstep);
command(2) = doXstep;
elseif Xpos > Xposnew
command(1) = 0;
Xpos = Xpos - doXstep;
doXstep = not(doXstep);
command(2) = doXstep;
end
if Ypos < Yposnew
command(3) = 1;
Ypos = Ypos + doYstep;
doYstep = not(doYstep);
command(4) = doYstep;
elseif Ypos > Yposnew
command(3) = 0;
Ypos = Ypos - doYstep;
doYstep = not(doYstep);
command(4) = doYstep;
end
if Apos < Aposnew
command(5) = 1;
Apos = Apos + doAstep;
doAstep = not(doAstep);
command(6) = doAstep;
elseif Apos > Aposnew
command(5) = 0;
Apos = Apos - doAstep;
doAstep = not(doAstep);
command(6) = doAstep;
end
end
putvalue(parport,command)
TasksExecuted = TasksExecuted + 1;
if [Xpos Ypos Apos] == [Xposnew Yposnew Aposnew]
pause(0.5);
if solenoidpower == 1
putvalue(parport,[0 0 0 0 0 0 0 1])
elseif solenoidpower == 2
putvalue(parport,[0 0 0 0 0 0 1 0])
elseif solenoidpower == 3
putvalue(parport,[0 0 0 0 0 0 1 1])
end
pause(0.2);
putvalue(parport,[0 0 0 0 0 0 0 0])
motorsbusy = 0;
end
A.5 Miscellaneous code
%TRIGGERGUI Create a GUI to trigger the robot’s shot.
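% tgt encodes the robot's target: 1 = play the break, 2 = table still open
% (red or yellow), 3 = red, 4 = yellow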
triggerwindow = figure(’Name’,[’Robot turn #’ num2str(turn)],...
’Position’,[1 1 5 5],...
’Color’,backcolour,...
’Resize’,’off’,...
’MenuBar’,’none’,...
’NumberTitle’,’off’,...
’Visible’,’off’);
% -------------------------------------------------------------------------
% exit button
fpos = frame.spacing + 1;
bpos = fpos + button.spacing;
frame5 = uicontrol(’Style’,’frame’,...
’Position’,[frame.left fpos frame.width pushheight+2*button.spacing],...
’BackgroundColor’,frontcolour);
fpos = fpos + pushheight + 2*button.spacing + frame.spacing;
uicontrol(’Style’,’pushbutton’,...
’Position’,[button.left bpos button.width pushheight],...
’BackgroundColor’,buttoncolour,...
’String’,’Exit program’,...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,textcolour,...
’Callback’,’wantexit = 1; uiresume’);
% -------------------------------------------------------------------------
% choose targets
bpos = fpos + button.spacing;
if turn == 1
frame4 = uicontrol(’Style’,’frame’,...
’Position’,[frame.left fpos frame.width 3*pushheight+radioheight+5*button.spacing],...
’BackgroundColor’,frontcolour);
fpos = fpos + 3*pushheight + radioheight + 5*button.spacing + frame.spacing;
elseif tgt == 1 | tgt == 2
frame4 = uicontrol(’Style’,’frame’,...
’Position’,[frame.left fpos frame.width 2*pushheight+radioheight+4*button.spacing],...
’BackgroundColor’,frontcolour);
fpos = fpos + 2*pushheight + radioheight + 4*button.spacing + frame.spacing;
else
frame4 = uicontrol(’Style’,’frame’,...
’Position’,[frame.left fpos frame.width pushheight+radioheight+3*button.spacing],...
’BackgroundColor’,frontcolour);
fpos = fpos + pushheight + radioheight + 3*button.spacing + frame.spacing;
end
pf = uicontrol(’Style’,’checkbox’,...
’Position’,[button.left bpos button.width radioheight],...
’BackgroundColor’,frontcolour,...
’String’,’Cue ball replaced on table; play forward only.’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour);
bpos = bpos + radioheight + button.spacing;
if tgt == 1 | tgt == 2
uicontrol(’Style’,’pushbutton’,...
’Position’,[button.left bpos (button.width-button.spacing)/2 pushheight],...
’BackgroundColor’,buttoncolour,...
’String’,’Go for red’,...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,[0.9 0.1 0.1],...
’Callback’,’tgt=3; playforwards = get(pf,’’Value’’); uiresume’);
uicontrol(’Style’,’pushbutton’,...
’Position’,[button.left+(button.width+button.spacing)/2 bpos ...
(button.width-button.spacing)/2 pushheight],...
’BackgroundColor’,buttoncolour,...
’String’,’Go for yellow’,...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,[1 1 0],...
’Callback’,’tgt=4; playforwards = get(pf,’’Value’’); uiresume’);
bpos = bpos + pushheight + button.spacing;
uicontrol(’Style’,’pushbutton’,...
’Position’,[button.left bpos button.width pushheight],...
’BackgroundColor’,buttoncolour,...
’String’,’Table is still open’,...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,textcolour,...
’Callback’,’tgt=2; playforwards = get(pf,’’Value’’); uiresume’);
bpos = bpos + pushheight + button.spacing;
if turn == 1
uicontrol(’Style’,’pushbutton’,...
’Position’,[button.left bpos button.width pushheight],...
’BackgroundColor’,buttoncolour,...
’String’,’Play the break’,...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,textcolour,...
’Callback’,’tgt=1; playforwards = get(pf,’’Value’’); uiresume’);
end
end
if tgt == 3
uicontrol(’Style’,’pushbutton’,...
’Position’,[button.left bpos button.width pushheight],...
’BackgroundColor’,buttoncolour,...
’String’,’Go for red’,...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,[1 0 0],...
’Callback’,’tgt=3; playforwards = get(pf,’’Value’’); uiresume’);
end
if tgt == 4
uicontrol(’Style’,’pushbutton’,...
’Position’,[button.left bpos button.width pushheight],...
’BackgroundColor’,buttoncolour,...
’String’,’Go for yellow’,...
’FontName’,uifont.name,...
’FontSize’,uifont.bigsize,...
’ForegroundColor’,[1 1 0],...
’Callback’,’tgt=4; playforwards = get(pf,’’Value’’); uiresume’);
end
% -------------------------------------------------------------------------
% shot complexity
bpos = fpos + button.spacing;
frame3 = uicontrol(’Style’,’frame’,...
’Position’,[frame.left fpos frame.width 4*radioheight+2*button.spacing],...
’BackgroundColor’,frontcolour);
fpos = fpos + 4*radioheight + 2*button.spacing + frame.spacing;
complexitytrick = uicontrol(’Style’,’radiobutton’,...
’Position’,[button.left bpos button.width radioheight],...
’BackgroundColor’,frontcolour,...
’String’,’Trick shot mode: only consider very complex shots’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’Value’,0,...
’Callback’,[’set([complexitylow complexitymedium complexityhigh],’’Value’’,0);’...
’shotcomplexity = [0 0 1];’]);
bpos = bpos + radioheight;
complexityhigh = uicontrol(’Style’,’radiobutton’,...
’Position’,[button.left bpos button.width radioheight],...
’BackgroundColor’,frontcolour,...
’String’,’Also consider very complex shots’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’Value’,0,...
’Callback’,[’set([complexitylow complexitymedium complexitytrick],’’Value’’,0);’...
’shotcomplexity = [1 1 1];’]);
bpos = bpos + radioheight;
complexitymedium = uicontrol(’Style’,’radiobutton’,...
’Position’,[button.left bpos button.width radioheight],...
’BackgroundColor’,frontcolour,...
’String’,’Also consider medium complexity shots’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’Value’,0,...
’Callback’,[’set([complexitylow complexityhigh complexitytrick],’’Value’’,0);’...
’shotcomplexity = [1 1 0];’]);
bpos = bpos + radioheight;
complexitylow = uicontrol(’Style’,’radiobutton’,...
’Position’,[button.left bpos button.width radioheight],...
’BackgroundColor’,frontcolour,...
’String’,’Consider only simple (direct) shots’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’Value’,0,...
’Callback’,[’set([complexitymedium complexityhigh complexitytrick],’’Value’’,0);’...
’shotcomplexity = [1 0 0];’]);
if shotcomplexity == [1 0 0]
set(complexitylow,’Value’,1)
elseif shotcomplexity == [1 1 0]
set(complexitymedium,’Value’,1)
elseif shotcomplexity == [1 1 1]
set(complexityhigh,’Value’,1)
elseif shotcomplexity == [0 0 1]
set(complexitytrick,’Value’,1)
end
% -------------------------------------------------------------------------
% how many shots to find
% hmshots = 0 means that the robot automatically chooses a shot
% otherwise, the robot presents the user with hmshots choices
bpos = fpos + button.spacing;
frame2 = uicontrol(’Style’,’frame’,...
’Position’,[frame.left fpos frame.width 2*radioheight+popupheight+2*button.spacing],...
’BackgroundColor’,frontcolour);
fpos = fpos + 2*radioheight + popupheight + 2*button.spacing + frame.spacing;
uicontrol(’Style’,’text’,...
’Position’,[button.left+1 bpos button.width-popupwidth-button.spacing popupheight-4],...
’BackgroundColor’,frontcolour,...
’String’,’Number of alternatives:’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’HorizontalAlignment’,’right’);
% The -4 is so that the text aligns with the number in the popupmenu
hmshotslist = uicontrol(’Style’,’popupmenu’,...
’Position’,[window.width-frame.spacing-button.spacing-popupwidth+1 bpos ...
popupwidth popupheight],...
’BackgroundColor’,popupcolour,...
’Enable’,’off’,...
’String’,hmshotsstring,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’Value’,hmshotslast,...
’Callback’,[’hmshots = hmshotsvalues(get(hmshotslist,’’Value’’));’...
’hmshotslast = get(hmshotslist,’’Value’’);’]);
bpos = bpos + popupheight;
manualchoose = uicontrol(’Style’,’radiobutton’,...
’Position’,[button.left bpos button.width radioheight],...
’BackgroundColor’,frontcolour,...
’String’,’Manually choose a shot from given alternatives’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’Value’,0,...
’Callback’,[’set(autochoose,’’Value’’,0);’...
’set(hmshotslist,’’Enable’’,’’on’’);’...
’hmshots = hmshotsvalues(get(hmshotslist,’’Value’’));’]);
bpos = bpos + radioheight;
autochoose = uicontrol(’Style’,’radiobutton’,...
’Position’,[button.left bpos button.width radioheight],...
’BackgroundColor’,frontcolour,...
’String’,’Let the robot decide upon a shot automatically’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’Value’,0,...
’Callback’,[’set(manualchoose,’’Value’’,0);’...
’set(hmshotslist,’’Enable’’,’’off’’);’...
’hmshots = 0;’]);
if hmshots == 0
set(autochoose,’Value’,1);
else
set(manualchoose,’Value’,1);
set(hmshotslist,’Enable’,’on’);
end
% -------------------------------------------------------------------------
% where to output thoughts
% thoughtsoutput = 0 means thoughts are not output
% thoughtsoutput = 1 means thoughts are output to screen
% thoughtsoutput = 2 means thoughts are output to fullfile(logfiledir,logfilename)
bpos = fpos + button.spacing;
frame1 = uicontrol(’Style’,’frame’,...
’Position’,[frame.left fpos frame.width 3*radioheight+editheight+2*button.spacing],...
’BackgroundColor’,frontcolour);
window.height = fpos + 3*radioheight + editheight + 2*button.spacing + frame.spacing;
uicontrol(’Style’,’text’,...
’Position’,[button.left+1 bpos ...
button.width-editwidth-putfilewidth-2*button.spacing editheight-3],...
’BackgroundColor’,frontcolour,...
’String’,’Log file:’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’HorizontalAlignment’,’right’);
% The -3 is so that the text aligns with the number in the edit box
logfiledisp = uicontrol(’Style’,’edit’,...
’Position’,[window.width-frame.spacing-2*button.spacing-putfilewidth-editwidth+1 bpos ...
editwidth editheight],...
’BackgroundColor’,’white’,...
’Enable’,’off’,...
’String’,fullfile(logfiledir,logfilenamedefault),...
’FontSize’,uifont.smallsize,...
’HorizontalAlignment’,’left’,...
’Callback’,[’[logfiledir logfilename ext] = fileparts(get(logfiledisp,’’String’’));’...
’logfilename = [logfilename ’’.’’ ext];’]);
logfilegetdir = uicontrol(’Style’,’pushbutton’,...
’Position’,[window.width-frame.spacing-button.spacing-putfilewidth+1 bpos ...
putfilewidth editheight],...
’Enable’,’off’,...
’String’,’...’,...
’Callback’,[’pwdnow = pwd;’...
’if exist(logfiledir,’’dir’’),’...
’cd(logfiledir);’...
’[tempname,tempdir] = uiputfile(logfilename,’’Logfile name’’);’...
’cd(pwdnow);’...
’else,’...
’[tempname,tempdir] = uiputfile(logfilename,’’Logfile name’’);’...
’end,’...
’if not(tempname == 0),’...
’logfiledir = tempdir;’...
’logfilename = tempname;’...
’set(logfiledisp,’’String’’,fullfile(logfiledir,logfilename));’...
’end’]);
bpos = bpos + popupheight;
texttofile = uicontrol(’Style’,’radiobutton’,...
’Position’,[button.left bpos button.width radioheight],...
’BackgroundColor’,frontcolour,...
’String’,’Save thoughts in log file’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’Value’,0,...
’Callback’,[’set([textnowhere texttoscreen],’’Value’’,0);’...
’set([logfiledisp logfilegetdir],’’Enable’’,’’on’’);’...
’thoughtsoutput = 2;’]);
bpos = bpos + radioheight;
texttoscreen = uicontrol(’Style’,’radiobutton’,...
’Position’,[button.left bpos button.width radioheight],...
’BackgroundColor’,frontcolour,...
’String’,’Output thoughts to screen’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’Value’,0,...
’Callback’,[’set([textnowhere texttofile],’’Value’’,0);’...
’set([logfiledisp logfilegetdir],’’Enable’’,’’off’’);’...
’thoughtsoutput = 1;’]);
bpos = bpos + radioheight;
textnowhere = uicontrol(’Style’,’radiobutton’,...
’Position’,[button.left bpos button.width radioheight],...
’BackgroundColor’,frontcolour,...
’String’,’Do not output thoughts anywhere’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’Value’,0,...
’Callback’,[’set([texttoscreen texttofile],’’Value’’,0);’...
’set([logfiledisp logfilegetdir],’’Enable’’,’’off’’);’...
’thoughtsoutput = 0;’]);
if thoughtsoutput == 0
set(textnowhere,’Value’,1);
elseif thoughtsoutput == 1
set(texttoscreen,’Value’,1);
else
set(texttofile,’Value’,1);
set([logfiledisp logfilegetdir],’Enable’,’on’);
end
% -------------------------------------------------------------------------
% title
bpos = fpos + button.spacing;
frame0 = uicontrol(’Style’,’frame’,...
’Position’,[frame.left fpos frame.width titleheight+2*button.spacing],...
’BackgroundColor’,frontcolour);
window.height = fpos + titleheight + 2*button.spacing + frame.spacing;
uicontrol(’Style’,’text’,...
’Position’,[button.left+1 bpos button.width titleheight],...
’BackgroundColor’,frontcolour,...
’String’,’ROBOPOOLPLAYER’,...
’FontName’,uifont.name,...
’FontSize’,titleheight/2,...
’ForegroundColor’,textcolour);
% -------------------------------------------------------------------------
window.left = screensize(1) + (screensize(3) - window.width)/2;
window.bottom = screensize(2) + (screensize(4) - window.height)/2;
set(triggerwindow,’Position’,[window.left window.bottom window.width window.height]);
%CHOOSESHOTGUI Create a GUI to choose the robot’s shot.
dd = 4; % decimation of displayed images
choosewindow = figure(’Name’,[’Robot turn #’ num2str(turn)],...
’Position’,[1 1 5 5],...
’Color’,backcolour,...
’Resize’,’off’,...
’MenuBar’,’none’,...
’NumberTitle’,’off’,...
’Visible’,’off’);
imwidth = length([dd:dd:size(rawphoto,2)]);
imheight = length([dd:dd:size(rawphoto,1)]);
imaxes = axes(’Units’,’pixels’,...
’Position’,[frame.spacing+1 popupheight2+frame.spacing+button.spacing imwidth imheight]);
tableonly = ...
tablepic(Xtotal,Ytotal,X1,X2,Y1,Ymid,Y2,np,rp,epd,whi,rw,bb,ryb,balls,rb,pocketcentres,dd);
shotpic(tableonly,np,whi,rw,bb,ryb,balls,rb,pocketcentres,dd,shotlist(bestshots(1)))
bestshotspopvalues = ’’;
for ii = bestshots
bestshotspopvalues = [bestshotspopvalues ’Shot #’ num2str(ii) ’|’];
end
bestshotspopvalues = [bestshotspopvalues ’Table photo’];
playshot = 1;
bestshotspop = uicontrol(’Style’,’popupmenu’,...
’Position’,[frame.spacing+floor(imwidth/2)-popupwidth2+1 frame.spacing+1 ...
popupwidth2 popupheight2],...
’BackgroundColor’,popupcolour,...
’String’,bestshotspopvalues,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’Callback’,[’playshot = get(bestshotspop,’’Value’’);’...
’if playshot <= length(bestshots),’...
’shotpic(tableonly,np,whi,rw,bb,ryb,balls,rb,pocketcentres,’...
’dd,shotlist(bestshots(playshot))),’...
’set(chooseshot,’’Enable’’,’’on’’);’...
’set(showgoodness,’’String’’,[’’Goodness: ’’ ’...
’num2str(round(100+10*log([shotlist(bestshots(playshot)).goodness])))]);’...
’else,’...
’imshow(rawphoto(dd:dd:end,dd:dd:end,:)),’...
’set(chooseshot,’’Enable’’,’’off’’);’...
’set(showgoodness,’’String’’,’’’’);’...
’end’]);
chooseshot = uicontrol(’Style’,’pushbutton’,...
’Position’,[frame.spacing+floor(imwidth/2)+button.spacing+1 frame.spacing+1 ...
pushwidth2 popupheight2],...
’BackgroundColor’,buttoncolour,...
’String’,’Play’,...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour,...
’Callback’,’uiresume’);
showgoodness = uicontrol(’Style’,’text’,...
’Position’,[frame.spacing+floor(imwidth/2)+pushwidth2+2*button.spacing+1 frame.spacing+1 ...
textwidth2 popupheight2-4],...
’BackgroundColor’,backcolour,...
’String’,[’Shot score: ’ num2str(round(100+10*log([shotlist(bestshots(1)).goodness])))],...
’FontName’,uifont.name,...
’FontSize’,uifont.smallsize,...
’ForegroundColor’,textcolour);
% The -4 is so that the text aligns with the number in the popupmenu
uicontrol(’Style’,’text’,...
’Position’,[frame.spacing+1 popupheight2+imheight+2*frame.spacing+button.spacing ...
imwidth titleheight],...
’BackgroundColor’,backcolour,...
’String’,’ROBOPOOLPLAYER’’S SHOT!’,...
’FontName’,uifont.name,...
’FontSize’,titleheight/2,...
’ForegroundColor’,textcolour);
window.width2 = imwidth + 2*frame.spacing;
window.height = imheight + popupheight2 + titleheight + 3*frame.spacing + button.spacing;
window.left = screensize(1) + (screensize(3) - window.width2)/2;
window.bottom = screensize(2) + (screensize(4) - window.height)/2;
set(choosewindow,’Position’,[window.left window.bottom window.width2 window.height]);
function pic = drawcircle(pic,coord,r,colour,howopaque)
%DRAWCIRCLE Draw a circle on a given image.
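% pixels inside the circle are alpha-blended towards colour with weight
% howopaque (howopaque = 1 draws a solid circle)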
for xx = intersect(floor(coord(1)-r):ceil(coord(1)+r),1:size(pic,1))
for yy = intersect(floor(coord(2)-r):ceil(coord(2)+r),1:size(pic,2))
if (xx-coord(1))^2 + (yy-coord(2))^2 <= r^2
pic(xx,yy,:) = howopaque*colour’ + (1-howopaque)*squeeze(pic(xx,yy,:));
% the colour vector is transposed and the existing pixel values squeezed so
% that both operands are 3-by-1 columns and add element-wise
end
end
end
function str = coord2str(vecs,ls)
%COORD2STR Convert coordinate pairs to a string.
x = vecs(:,1);
y = vecs(:,2);
numofvecs = size(vecs,1);
str = [’’];
for ii = 1:numofvecs
str = [str ’(’ num2str(round(x(ii))) ’,’ num2str(round(y(ii))) ’)’];
if ii < numofvecs
str = [str ’,’];
end
end
Appendix B Visual Basic code
Dim a As Boolean
Dim Xsteps As Integer
Dim Ysteps As Integer
Dim AngSteps As Integer
Dim Solenoid_Power As Integer
Dim Port_Location As Integer
Dim Command_1 As Long
Dim command_1a As Long
Dim Command_2 As Long
Dim Command_3 As Long
Dim command_3a As Long
Dim Command_4 As Long
Dim Command_5 As Long
Dim Command_6 As Long
Dim Command_7 As Long
Dim Command_8 As Long
Dim Command_9 As Long
Dim command_10 As Long
Private Sub Form_Load()
’ This section will load the values outputted from the MATLAB program
’ and convert them to steps and output them to the motors
Open App.Path & "xdir.txt" For Input As #1 ' Reads in inputs
Open App.Path & "ydir.txt" For Input As #2 ' from vision
Open App.Path & "ang.txt" For Input As #3  ' system program
Open App.Path & "sol.txt" For Input As #4
Dim OutputArray(4) As Integer
Dim i As Integer
Input #1, OutputArray(1)
Input #2, OutputArray(2)
Input #3, OutputArray(3)
Input #4, OutputArray(4)

' Diagnostic Check: Directly Input Steps if necessary
' OutputArray(1) = 100 ' X direction
' OutputArray(2) = 0 ' Y direction
' OutputArray(3) = 0 ' Angle Direction
' OutputArray(4) = 0 ' Solenoid Power Level
For i = 1 To 4
If OutputArray(i) = vbEmpty Then
' If the files do not exist then exit the subroutine and input manually
intPress = MsgBox("File Not Found", vbCritical)
Exit Sub
End If
Next i
Xsteps = OutputArray(1)
Ysteps = OutputArray(2)
AngSteps = OutputArray(3)
Solenoid_Power = OutputArray(4)
’ Move to the initial position:
Call Motor_Output(Xsteps, Ysteps, AngSteps, Solenoid_Power)
Xsteps = -Xsteps ’ Invert all values.
Ysteps = -Ysteps
AngSteps = 0
Solenoid_Power = 0
’ Now return carriage to original position:
Call Motor_Output(Xsteps, Ysteps, AngSteps, Solenoid_Power)
’ Once returned from the output subroutine, delete the confirm file
’ to allow the program to indicate that it is finished:
Kill "C:\My Documents\tom\tom\Uni\Project\vbstuff\File Loader\Done.txt"
End
End Sub
Private Sub Motor_Output(Xsteps, Ysteps, AngSteps, Solenoid_Power)
’ System Initialisation:
Timer1.Interval = 20 ’ 20 mS
Timer2.Interval = 20 ’ 10 mS
Timer3.Interval = 500 ’ 0.5 S
Timer1.Enabled = False
Timer2.Enabled = False
Timer3.Enabled = False
Command_1 = &H1     ' Binary: 0000 0001  Positive x-direction bit
command_1a = &H2    ' Binary: 0000 0010  Step in negative x-direction
Command_2 = &H3     ' Binary: 0000 0011  Step in positive x-direction
Command_3 = &H4     ' Binary: 0000 0100  Positive y-direction bit
command_3a = &H8    ' Binary: 0000 1000  Step in negative y-direction
Command_4 = &HC     ' Binary: 0000 1100  Step in positive y-direction
Command_5 = &H10    ' Binary: 0001 0000
Command_6 = &H30    ' Binary: 0011 0000
Command_7 = &H80    ' Binary: 1000 0000
Command_8 = &H40    ' Binary: 0100 0000
Command_9 = &HC0    ' Binary: 1100 0000
command_10 = &H0    ' Binary: 0000 0000  negative and y-direction bit
a = False
Port_Location = &H378
’ Sets the initial location of the Parallel Port in Memory
’ This may differ between computers
’ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
’ %%%% X - DIRECTION CONTROL %%%%
’ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
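' The first and last quarter of each move run on the slower timer (Timer1)
' and the middle half on the faster timer (Timer2), giving a simple
' acceleration/deceleration ramp on the step pulses.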
If Xsteps > 0 Then
Timer1.Enabled = True
Do While a = False
DoEvents
Out Port_Location, Command_1 ’ Set Pin 1 High (Binary: 0000 0001)
Loop
a = False
For Pulses = 0 To Xsteps
If Pulses < Xsteps / 4 Or Pulses > (3 * Xsteps) / 4 Then
' Set timer to half max power timer and make system wait (set initially to 20ms)
Timer1.Enabled = True ’ Delay for 20mS then send new command
Do While a = False
’ Send step pulse and then wait for timer to expire
DoEvents
Out Port_Location, Command_2
’ Sets step Pin high and Sets the Direction as Positive (Binary: 0000 0011)
Loop
a = False
Else
Timer2.Enabled = True ’Delay for 10mS then send new command
Do While a = False
DoEvents
Out Port_Location, Command_2
Loop
a = False
End If
Timer2.Enabled = True
Do While a = False
DoEvents
Out Port_Location, Command_1 ’ Set Pin 1 High (Binary: 0000 0001)
Loop
a = False
Next Pulses
End If
If Xsteps < 0 Then
a = False
Timer1.Enabled = True
Do While a = False
DoEvents
Out Port_Location, command_10 ' Set initial 20 ms delay
Loop
a = False
Pulses = Xsteps
’ Sets bit 1 Low but still allows stepping in negative direction
For Pulses = Xsteps To 0
If Pulses > Xsteps / 4 Or Pulses < (3 * Xsteps) / 4 Then
' Set timer to the half-speed (ramp) timer and make system wait (set initially to 20 ms)
Timer1.Enabled = True
Do While a = False
DoEvents
Out Port_Location, command_1a ’ Delay for 20mS then send new command
Loop
a = False
Else
’ Set timer to full speed timer (set initially to 10ms)
Timer2.Enabled = True
Do While a = False
' Set negative x-direction pin
DoEvents
Out Port_Location, command_1a
Loop
a = False
End If
Timer1.Enabled = True
Do While a = False
DoEvents
Out Port_Location, command_10
Loop
a = False
Next Pulses
End If
If Xsteps = 0 Then
Out Port_Location, command_10
End If
’ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
’ %%%% Y-DIRECTION CONTROL %%%%
’ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
If Ysteps > 0 Then
Timer1.Enabled = True
a = False
Do While a = False
DoEvents
Out Port_Location, Command_3 ’ Set Pin 1 High (Binary: 0000 1000)
Loop
a = False
For Pulses = 0 To Ysteps
If Pulses < Ysteps / 4 Or Pulses > (3 * Ysteps) / 4 Then
' Set timer to the half-speed (ramp) timer and make system wait (set initially to 20 ms)
Timer1.Enabled = True ’ Delay for 20mS then send new command
Do While a = False
’ Send step pulse and then wait for timer to expire
DoEvents
Out Port_Location, Command_4
’ Sets step Pin high and Sets the Direction as Positive (Binary: 0000 1100)
Loop
a = False
Else
Timer2.Enabled = True ’ Delay for 10mS then send new command
Do While a = False
DoEvents
Out Port_Location, Command_4
Loop
a = False
End If
Timer2.Enabled = True
Do While a = False
DoEvents
Out Port_Location, Command_3 ’ Set Pin 1 High (Binary: 0000 1000)
Loop
a = False
Next Pulses
End If
If Ysteps < 0 Then
a = False
Timer1.Enabled = True
Do While a = False
DoEvents
Out Port_Location, command_10 ' Set initial 20 ms delay
Loop
a = False
Pulses = Ysteps
’ Sets bit 1 Low but still allows stepping in negative direction
For Pulses = Ysteps To 0
If Pulses > Ysteps / 4 Or Pulses < (3 * Ysteps) / 4 Then
' Set timer to the half-speed (ramp) timer and make system wait (set initially to 20 ms)
Timer1.Enabled = True
Do While a = False
DoEvents
Out Port_Location, command_3a
’ Delay for 20mS then send new command
Loop
a = False
Else
’ Set timer to full speed timer (set initially to 10ms)
Timer2.Enabled = True
Do While a = False
' Set negative y-direction step pin
DoEvents
Out Port_Location, command_3a
Loop
a = False
End If
Timer1.Enabled = True
Do While a = False
DoEvents
Out Port_Location, command_10 ’ wait a further 20ms and step again
Loop
a = False
Next Pulses
End If
If Ysteps = 0 Then
Out Port_Location, command_10
End If
’ %%%%%%%%%%%%%%%%%%%%%%%
’ %%%% ANGLE CONTROL %%%%
’ %%%%%%%%%%%%%%%%%%%%%%%
If AngSteps > 0 Then
Timer1.Enabled = True
Do While a = False
DoEvents
Out Port_Location, Command_5 ’ Set Pin 1 High Binary: 0001 0000
Loop
a = False
For Pulses = 0 To AngSteps
If Pulses < AngSteps / 4 Or Pulses > (3 * AngSteps) / 4 Then
' Set timer to the half-speed (ramp) timer and make system wait (set initially to 20 ms)
Timer1.Enabled = True ’Delay for 20mS then send new command
Do While a = False
’ Send step pulse and then wait for timer to expire
DoEvents
Out Port_Location, Command_6
’ Sets step Pin high and Sets the Direction as Positive Binary: 0011 0000
Loop
a = False
Else
Timer2.Enabled = True ’ Delay for 10mS then send new command
Do While a = False
DoEvents
Out Port_Location, Command_6
Loop
a = False
End If
Timer2.Enabled = True
Do While a = False
DoEvents
Out Port_Location, Command_5 ’ Set Pin 1 High Binary: 0001 0000
Loop
a = False
Next Pulses
End If
’ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
’ %%%% SOLENOID POWER LEVELS %%%%
’ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
If Solenoid_Power = 0 Or Solenoid_Power = vbEmpty Then
’ Enable the 0.5s timer
a = False
Timer3.Enabled = True
Do While a = False
DoEvents
Out Port_Location, command_10 ’ If no force required do nothing
Loop
’ Timer just to allow the hardware to register the signal
End If
If Solenoid_Power = 1 Then
a = False
Timer3.Enabled = True
Do While a = False
DoEvents
Out Port_Location, Command_7 ’ Low force setting
Loop
Out Port_Location, command_10 ’ Reset the solenoid
End If
If Solenoid_Power = 2 Then
a = False
Timer3.Enabled = True
Do While a = False
DoEvents
Out Port_Location, Command_8 ’ Med Force Setting
Loop
Out Port_Location, command_10 ’ Reset the solenoid
End If
If Solenoid_Power = 3 Then
a = False
Timer3.Enabled = True
Do While a = False
DoEvents
Out Port_Location, Command_9 ’ High Force Setting
Loop
Out Port_Location, command_10 ’ Reset the solenoid
End If
End Sub
Private Sub Timer1_Timer() ’ 20 mS
a = True
txtAck.Text = txtAck.Text & "1 "
Timer1.Enabled = False ’ reset the timer
End Sub
Private Sub Timer2_Timer() ’ 10 mS
a = True
txtAck2.Text = txtAck2.Text & "2 "
Timer2.Enabled = False
End Sub
Private Sub Timer3_Timer() ’ 0.5 seconds
a = True
txtAck3.Text = txtAck3.Text & "3 "
Timer3.Enabled = False
End Sub
Appendix C dSPACE code
C.1 C code for DS1102
#include <brtenv.h>            /* basic real-time environment */

#define INT_TABLE (0x000000)   /* address of interrupt table */
#define INT_ACK   (0x800000)   /* acknowledge interrupts */
/* global variables for the communication with the MATLAB workspace */
float mydata[2][1000];
int motorsbusy = 0;
float timeinterval = 0.01;
long *int_tbl=(long*)INT_TABLE;
volatile long *ioctl=(long*)IOCTL;
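/* Interrupt service routine (installed by int_enable below): streams the 1000
   samples in mydata to the XF0 and XF1 output pins, one sample per timeinterval. */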
void c_int04(void)
{
int i;
float timeprev;
tic0_init();
motorsbusy = 1;
for(i=1; i<=1000; i++)
{
timeprev = tic0_read();
while(tic0_read() < timeprev + timeinterval)
{
master_cmd_server();
host_service(0,0);
}
ds1102_xf0_out(mydata[1][i]);
ds1102_xf1_out(mydata[2][i]);
}
motorsbusy = 0;
*ioctl = (*ioctl) & (0x8000) | INT_ACK;   /* acknowledge */
asm(" nop"); asm(" nop");                 /* nop's are necessary! */
asm(" andn 0008h,IF");                    /* clear bit in interrupt flag register */
}
void int_enable(void)
{
int_tbl[4]=(long)c_int04;
asm(" or 0008h,IE");   /* set bit in interrupt enable register */
asm(" or 2000h,ST");   /* and in status register */
}
main()
{
ds1102_init_xf0(1);   /* configure XF0 for output */
ds1102_init_xf1(1);   /* configure XF1 for output */
int_enable();         /* enable DSP interrupts */
while(1)
{
while(msg_last_error_number() == DS1102_NO_ERROR)
{
master_cmd_server();
host_service(0,0);
};
msg_info_set(MSG_SM_USER, msg_last_error_no, "Error occurred.");
while(msg_last_error_number() != DS1102_NO_ERROR)
{
master_cmd_server();
host_service(0,0);
};
msg_info_set(MSG_SM_USER, 0, "Error released.");
};
}
C.2 Matlab code
mlib('SelectBoard','ds1102')
channel_desc = mlib('GetMapVar','mydata','Type','FloatDSP32','Length',length(mydata));
mlib('Write',channel_desc,'Data',mydata);
mlib('Intrpt')
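The mydata variable written to the board above corresponds to the float mydata[2][1000]
buffer declared in the C code: one row per XF output pin and one column per time step.
A minimal sketch of how such an array might be assembled in MATLAB before the mlib calls
is given below; the pulse pattern and variable names are illustrative assumptions only,
not taken from the project code.

% Illustrative sketch only: build a 2-by-1000 command array for the DSP.
% Row 1 is sent out on XF0 and row 2 on XF1; each column is one time step.
nSamples = 1000;
mydata = zeros(2, nSamples);
% Hypothetical example: a burst of 50 step pulses on XF0, with XF1 held low.
mydata(1, 1:2:100) = 1;    % alternating high/low samples form the pulse train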
Appendix D Control card circuit
This page is to be substituted with the control card circuit.
Appendix E Cost analysis
A summary of the costs of the project is shown in Table E.1. The theoretical cost
indicates what the items would cost to buy. The actual cost indicates the actual cost
to the project budget. Where the actual cost is less than the theoretical cost, the
items have been generously supplied by the Department of Mechanical Engineering
for use in the project, with the exception of the pool balls, which were donated by
The Ballroom.
The extras refer to costs incurred by the project budget for items that were not
necessary for the actual construction of the robot. (All figures are in Australian
dollars.)
Table E.1: Cost analysis.
Item                                     Price      Quantity   Theoretical cost   Actual cost
Actuation
Stepper motor: high powered              $300       2          $600               $0
Stepper motor: regular                   $100       1          $100               $0
Stepper motor control box                $3000      1          $3000              $0
Washing machine solenoid                 $80        1          $80                $0
Transformer (for solenoid)               $100       1          $100               $60
Table base
Frame: mild steel 35 × 18 RHS            $5/m       18 m       $90                $90
Channel section (for rails)              $55        1          $55                $55
Vision system
Camera                                   $4500      1          $4500              $0
Camera lens                              $480       1          $480               $0
D30Remote software                       $100       1          $100               $100
Miscellaneous
Electrical (plugs, boxes, loom, etc.)    $200       —          $200               $200
Mechanical (belts, pulleys, clamps)      $830       —          $830               $830
Pool table                               $325       1          $325               $325
Pool balls                               $90        1 set      $90                $0
Computer                                 $4000      1          $4000              $0
Matlab license (incl. toolboxes)         $2660      1          $2660              $0
Labour                                   $50/hour   10 hours   $500               $0
                                                    Total:     $17710             $1660
Extras
Car starter motor solenoid               $88        1          $88                $88
MIT thesis                               $120       1          $120               $120
                                                    Total:     $17918             $1868
Appendix F Solenoid force experiment
F.1 Introduction
This experiment was conducted to inform the design of a robotic mechanism that can play
a range of billiard shots comparable to that of a human, and that can withstand the
forces exerted by the actuation system. During a game of billiards, shots range from
short, soft shots to breaks and what is commonly known as the “six pocket theory” shot.
The experiment therefore measured the range of forces used in a typical pool game on
the table acquired for this project.
F.2 Aim
To determine the range of forces used during a typical game of pool.
F.3 Apparatus
• Tektronix Digital CRO
• Brüel & Kjær charge amplifier
• Graphite insulated cables (to reduce noise)
• 8200 Force Transducer (reference sensitivity at 159.0 Hz, 23 °C is 3.80 pC/N)
The apparatus is set up as shown in Figure F.1.
F.4 Method
The voltage outputs corresponding to the forces used for various pool shots (from
short, soft shots to breaks to “six pocket theory” shots) were recorded for each
member of the group. This experiment was conducted on the table acquired for the
project as well as on other pool tables, so as to leave scope for transferring the robot
to a different or larger table.

Figure F.1: Force experiment setup.
F.5 Results and analysis
The voltage output read from the digital CRO was converted into Newtons using
the conversion factor 100 N/V. The results are shown in Table F.1 below.
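For example, a reading of 13.0 V corresponds to 13.0 V × 100 N/V = 1300 N. A minimal
MATLAB sketch of this conversion, applied to a few readings from Table F.1 (the variable
names are illustrative only), is:

% Convert CRO voltage readings to force using the 100 N/V calibration factor.
voltages_V = [0.8 12.0 13.0];    % example readings from Table F.1
forces_N = 100 * voltages_V;     % gives 80 N, 1200 N and 1300 N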
F.6 Conclusion
It was determined from the results that an amateur player uses forces ranging
approximately from 80 N to 1300 N during the course of a game. The actuation system
must therefore be able to produce forces over a similar range.
Table F.1: Force experiment results.
Voltage   Force                      Voltage   Force
12.0 V    1200 N                     1.8 V     180 N
12.0 V    1200 N                     2.2 V     220 N
11.0 V    1100 N                     3.1 V     310 N
13.0 V    1300 N  ← Largest force    8.0 V     800 N
7.0 V     700 N                      1.6 V     160 N
7.0 V     700 N                      5.5 V     550 N
10.0 V    1000 N                     5.0 V     500 N
9.0 V     900 N                      9.5 V     950 N
3.0 V     300 N                      2.4 V     240 N
2.0 V     200 N                      2.4 V     240 N
5.5 V     550 N                      5.0 V     500 N
0.8 V     80 N    ← Smallest force   7.6 V     760 N
13.0 V    1300 N                     7.0 V     700 N
1.7 V     170 N                      8.2 V     820 N
1.8 V     180 N                      4.2 V     420 N
1.0 V     100 N                      10.0 V    1000 N
Appendix G Technical drawings
This page is to be substituted with drawings.