SI 486L Lab 4: A Slice of Pi + Traveling in Circles
This lab is due before the next lab period. All the files should be zipped or tar'd into a single file and emailed (subject should contain the phrase “lab 4”) as an attachment to the instructor. One of the files
should be a makefile with a default target that builds two executables, one called “pi” and the other
called “ring”. A README file is always a welcome addition, especially to explain the contents of the
collection of files and to explain any procedures for their use.
Introduction
This is a two-part lab; the first uses Monte-Carlo techniques to approximate pi. The second
demonstrates message passing in a ring configuration.
Task #1
Your objective is to write an MPI program to approximate the value of pi. An example invocation
might look like this:
$ mpirun -n 24 ./pi 12547638
After 12547638 times on 24 ranks the value of pi is 3.14159265858
How many ranks can you run? How close to pi can you get?
Design
Here's a simple design (in pseudo code) for the main() function:
main
    seed the random number generator
    for ranks other than 0:
        receive the number of iterations from rank 0
        for the number of iterations specified:
            pick 2 random numbers between 0 and 1 as an (x,y) point
            if that point lies within the unit circle, count it
        report the count back to rank 0
    otherwise, for rank 0:
        read a number of iterations from the user (exit on EOF)
        send a message to all (non-0) ranks telling them the iterations
        receive a message from each rank
        sum the counts
        compute the approximation for pi
        display it
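As a rough illustration (not the required structure), the hit-counting part of the worker could look like this in C; the names here are only placeholders:

#include <stdlib.h>    /* rand(), RAND_MAX */

/* Count how many of `iters` random points fall inside the unit circle.
   A point (x, y) with 0 <= x, y <= 1 is inside when x*x + y*y <= 1.0. */
long count_hits(long iters)
{
    long hits = 0;
    for (long i = 0; i < iters; i++) {
        double x = rand() / (double)RAND_MAX;
        double y = rand() / (double)RAND_MAX;
        if (x * x + y * y <= 1.0)
            hits++;
    }
    return hits;
}

Since the points are uniform over the unit square, the fraction landing inside the quarter circle approaches pi/4, so rank 0 can report 4.0 * total_hits / total_points as its approximation.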
To send a single number via MPI we can use a “buffer” of a single long variable. The count parameter
in the MPI_Send and MPI_Recv calls will be just 1 as we are only sending one long.
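For example (a sketch only; it assumes `rank` and `size` come from MPI_Comm_rank and MPI_Comm_size, and the tag value 0 is an arbitrary choice):

long iters = 0;   /* on rank 0, set this from the user's input first */
if (rank == 0) {
    /* rank 0 sends the single long to every worker */
    for (int dest = 1; dest < size; dest++)
        MPI_Send(&iters, 1, MPI_LONG, dest, 0, MPI_COMM_WORLD);
} else {
    /* each worker receives the single long from rank 0 */
    MPI_Recv(&iters, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}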
You need to generate a random number between 0.0 and 1.0 from a function that returns an integer.
How can you do that? See the man page on rand ( man 3 rand ) for more information on this and
on how to seed the random number generator.
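One common approach is sketched below; the `rank` parameter is assumed to come from MPI_Comm_rank, and the helper names are made up for illustration:

#include <stdlib.h>   /* rand(), srand(), RAND_MAX */
#include <time.h>     /* time() */

/* Seed each rank differently so they do not all generate the
   same sequence of points (call once, early in main). */
void seed_rng(int rank)
{
    srand((unsigned)time(NULL) + (unsigned)rank);
}

/* rand() returns an int in [0, RAND_MAX]; dividing by RAND_MAX
   (as a double) scales it into [0.0, 1.0]. */
double random01(void)
{
    return rand() / (double)RAND_MAX;
}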
Task #2 – Running in Circles
With the previous task you have an MPI program that passes messages, but they are few and far between. This task will have you passing messages back and forth in circles.
[Figure: from the MPI_DuckDuckGoose PDF on sc-education.org]

The use of the “MPI_ANY_TAG” constant allows an MPI_Recv() call to accept a message sent with any tag. The receiver can then examine the actual tag received to determine what to do with the message. Similarly, the “MPI_ANY_SOURCE” constant allows the MPI_Recv() call to accept a message from any rank, not just a specified, pre-determined rank.
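A sketch of such a receive, using a single int as the message body:

MPI_Status status;
int msg;   /* the payload: the rank the message is ultimately for */

/* Accept a message from any rank, sent with any tag, then inspect
   the status structure to see who sent it and which tag was used. */
MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
int sender    = status.MPI_SOURCE;
int direction = status.MPI_TAG;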
Write an MPI program to receive a message of any tag from any source.

Write your MPI program so that rank 0 starts off not by receiving a message but by sending one. The “message” should be the rank number of the destination (an integer). Rather than send the message directly to that rank (which would be the most direct way to send it), have your program send it to an adjacent rank (e.g., rank 0 should only send to rank 1 or rank n-1; rank 1 should only send to rank 0 or rank 2). It should send the initial message in one direction or the other (chosen randomly). Thereafter, it should also be prepared to receive a message.
On receipt of a message, the rank which receives the message should do the following:
• print a message acknowledging the receipt of the message and its source
• if it is not the intended recipient it should send it on (in the same direction around the ring)
• if the message is intended for this rank, it should print that fact, then select a new “target” rank, print out its intent, and send that new message back the way it came.
Use the TAG field to determine the direction. Define two values like this:
#define CLOCKWISE 1
#define COUNTERCLK 2
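For example, a receiving rank might forward the message like this (a sketch; it reuses `msg` and `direction` from the receive sketch above, assumes `rank` and `size` come from MPI_Comm_rank and MPI_Comm_size, and assumes CLOCKWISE means moving toward higher rank numbers, which is your choice to make):

/* Forward the message to the neighbor in the same direction,
   reusing the tag so the direction travels with the message. */
int next;
if (direction == CLOCKWISE)
    next = (rank + 1) % size;
else
    next = (rank - 1 + size) % size;
MPI_Send(&msg, 1, MPI_INT, next, direction, MPI_COMM_WORLD);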
The output messages should look something like this:
$ mpirun -n 12 ./ring
0: Sending a message clockwise to Rank 3
1: Recvd msg from 0; sending clockwise to Rank 2
2: Recvd msg from 1; sending clockwise to Rank 3
3: Recvd msg from 2; it is a message for me!!!
3: Sending a message counterclockwise to Rank 9
2: Recvd msg from 3; sending counterclockwise to Rank 1
...
Of course the numbers may vary because they're being chosen randomly.
Why
Why are we doing this? This is an exercise in MPI message passing. The first task is a relatively
“easy” parallelization, since there is little communication. It fits in the category of “embarrassingly
parallel”. The second uses MPI to construct ring-style communication, showing the flexibility of the
MPI API to model various communication patterns.
Clean Code Counts
These projects will be graded on two criteria: 1) does it work? and 2) is it well written? That means clean, readable code, well structured and well commented/documented. If you need help with “C”, come and see me ASAP; waiting will only make it worse.