
 CFMPI.ox

An Object-Oriented Interface to the MPI Library.

CFMPI includes a library of external routines that interface with the MPI library. See MPIinterface for a description.

On top of the MPI interface, CFMPI includes the base MPI class for an object-oriented approach to message passing. Derived from MPI are the point-to-point (P2P) and peer (Peer) classes. These classes help implement standard message-passing paradigms.

Also see How to Install and Use CFMPI in your code.

CFMPI P2P

A program that uses P2P for Client-Server interactions has a simple overall structure, as seen in the template file
Source: niqlow/templates/CFMPI/ClientServerTemplate1.ox.

Include
Your program should include useMPI.ox, which is located in the niqlow/include directory.
#include "useMPI.ox"
In turn it will use preprocessor macros to determine if real MPI message passing is available (linked in) or if fake (simulated) message passing on a single instance of the program should occur. See also How to ....
You then create your own derived Client and Server classes that will handle the tasks you want to perform.
Earlier versions of CFMPI relied heavily on static members and methods, but this is no longer true.
In the current version you can have more than one client or server class in order to parallelize two different parts of your code.
Your P2P object
Your main code creates a new P2P object. Its constructor takes three arguments: a flag (DONOTUSECLIENT) controlling whether the client node also acts as a server, a new object of your derived Client class, and a new object of your derived Server class.
The P2P constructor calls MPI_Init() to initialize the MPI environment. Then, if it is executing on the client (ID=0) node, it will delete the server object it was sent and keep the client object.
If P2P is executing on a server node, it will delete the client object and keep the server object.
Under two conditions P2P will keep both the client and server objects (on the same node). First, if there is only one node (Nodes=1), then that node is both client and server. Second, if there is more than one node but the first argument to P2P() is FALSE, then the user is asking the client node to use itself as a server in addition to the other nodes. In that case the client node maintains both the client and server objects.
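Putting this together, a minimal main program looks like the following sketch, based on the template above (MyClient and MyServer stand in for your derived classes):
#include "useMPI.ox"
main() {
    decl p2p = new P2P(TRUE,new MyClient(),new MyServer());  // TRUE: do not use the client node as a server
    p2p->Execute();   // the client node runs MyClient::Execute(); server nodes enter Loop()
    }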
Begin a Client-Server Cycle
When the code calls Execute(), the node goes into client or server mode as dictated by its role. Execute() is very simple:
P2P::Execute() {
    if (IamClient) client->Execute(); else  server->Loop(Server::iml);
    }
Client Execute
Your client class must provide an Execute() method. This does everything the client must do to get the job done. It can use other methods to call on the servers for help, especially ToDoList().
Server Execute
Your server class must provide an Execute() method. This carries out whatever task the servers must perform for the client. It is called from the built-in Loop() routine, which waits for messages and stops once STOP_TAG is received from the client.
ToDoList()
The client tasks are put in a separate function, which can use ToDoList() to send out messages to the servers. Often, a large number of tasks can be done, each with a different message, such as the vector of parameters to operate on.
ToDoList() takes a matrix (or array) of messages organized as columns and sends them out to all the servers. If there are more messages than servers, it gets them all busy and then waits until one is finished; it then sends the next message to the reporting server, repeating until all the messages have been sent. Finally it waits until all the servers report back.
The results are stored and returned to the user's program as a matrix, one column for each input message. The mxlength argument is the maximum length of the return messages.

P2P Example

For example, given a multidimensional function f(const theta), where theta is an N×1 vector and f() returns an M×1 output, the Jacobian can be computed in parallel with the following code:
MyClient::Execute() {
  N = rows(theta);
  ToDoList( (theta+epsmatrix) ~ (theta-epsmatrix) ,&Jmat,M,1);
  Jmat = (Jmat[][:N-1] - Jmat[][N:])/Jeps;
  }

MyServer::Execute() { N = rows(Buffer); Buffer = f(Buffer); return N; }

The client code creates an N×(2N) matrix of parameter vectors centered on theta. Each column is either a step forward or backward in one of the parameters. (This code is a bit crude, because a proportional step size should be used, with an additive step only if the element of theta is very close to 0.)
ToDoList() is then sent the matrix of messages. The server code receives, in Buffer, the parameter vector at which it should evaluate f().
The server's Execute() sends the buffer to f(), which returns the output to be placed back in the buffer for return to the client. Execute() must always return the maximum size of the next expected message so that Loop() can initialize storage for it.
If this code is run without MPI defined on the command line, then ToDoList() reduces to a loop that calls Execute() on the same node in serial, sending one column at a time. There is a small amount of overhead from the intermediate function calls, and in serial only one column needs to be stored at a time rather than 2N columns. In most cases this overhead is not very large, especially when f() is not trivial, and the same code can be used whether MPI is available or not.

CFMPI Peer (or Group) Communication

MPI Group communication elements are available in the Peer class.
Documentation to be completed …
Author:
© 2011-2015 Christopher Ferrall

Documentation of Items Defined in CFMPI.ox

 Global variables

Variables
 fakeP2P static

 Client : P2P : MPI

Act as the Client in Client/Server P2P communication.
Public fields
 MaxSubReturn
 me_as_server If the client node should also work as a server, this holds its Server object.
 NSubProblems
Public methods
 Announce Announce a message to everyone (and perhaps get answers).
 Execute virtual The default simply announces today's date to all nodes.
 Recv Receive buffer from a source node.
 Send Point-to-Point: Sends buffer to a destination node.
 Stop Send STOP_TAG to all servers, do not wait for answers.
 ToDoList Distribute parallel tasks to all servers and return the results.
Inherited methods from P2P:
P2P
Inherited methods from MPI:
Barrier, Initialize
Inherited fields from P2P:
ANY_SOURCE, ANY_TAG, Buffer, client, MaxSimJobs, server, Source, STOP_TAG, Tag
Inherited fields from MPI:
called, CLIENT, Error, fake, IamClient, ID, Nodes, Volume

 MPI

Base MPI class. All members of the base class are static.
Public fields
 called static Initialize already called.
 CLIENT static const ID of Client Node (=0)
 Error static Error code from last Recv
 fake static faking message passing.
 IamClient static ID==CLIENT; node is client
 ID static My id (MPI rank).
 Nodes static Count of nodes available.
 Volume static print out info about messages.
Public methods
 Barrier static Set an MPI Barrier to coordinate nodes.
 Initialize static Initialize the MPI environment.

 P2P : MPI

Point-to-point communication object.

Point-to-point is communication from one node to another node through MPI.

Usually these messages are between the client node and a server node.

Messages are vectors, and are tagged with an integer code so that the receiver of the message knows how to interpret the message.
Public fields
 ANY_SOURCE static Receive from any node
 ANY_TAG static Receive any tag
 Buffer Place for MPI message (in/out)
 client Client object.
 MaxSimJobs Nodes or Nodes-1.
 server Server object.
 Source Node that sent the last message
 STOP_TAG static const Tag that ends Loop
 Tag Tag of last message
Public methods
 Execute virtual Begin Client-Server execution.
 P2P Initialize Point-to-Point Communication.

Inherited methods from MPI:
Barrier, Initialize
Inherited fields from MPI:
called, CLIENT, Error, fake, IamClient, ID, Nodes, Volume

 Peer : MPI

A peer in Group (peer-to-peer) communication.
Public fields
 Buffer Place for MPI message (in/out)
 Offset Vector of offsets into Buffer used in Gatherv; the extra last element (Offset[Nodes]) is the total buffer size.
 SegSize My segment size in Gathers
Public methods
 Allgather Gather and share vectors to/from all nodes.
 Allgatherv Gather variable sized segments on all nodes.
 Allsum Compute and share the sum of vectors to/from all nodes.
 Bcast Broadcast buffer of size iCount from CLIENT (ROOT) to all nodes.
 Gather Gather vectors from all nodes at Client.
 Gatherv Gather vectors from all nodes at Client with variable segment sizes.
 Peer
 Setdisplace Set the displacement for each node in gathers.
 Sum Compute the sum of vectors from all nodes at Client.
Inherited methods from MPI:
Barrier, Initialize
Inherited fields from MPI:
called, CLIENT, Error, fake, IamClient, ID, Nodes, Volume

 Server : P2P : MPI

Act as a server in Client/Server P2P communication.
Public fields
 iml static Initial message length for the first call to Loop.
Public methods
 Execute virtual The default server code.
 Loop virtual A Server loop that calls a virtual Execute() method.
 Recv Receive buffer from CLIENT.
 Send Server sends buffer to the CLIENT.
Inherited methods from P2P:
P2P
Inherited methods from MPI:
Barrier, Initialize
Inherited fields from P2P:
ANY_SOURCE, ANY_TAG, Buffer, client, MaxSimJobs, server, Source, STOP_TAG, Tag
Inherited fields from MPI:
called, CLIENT, Error, fake, IamClient, ID, Nodes, Volume

 Global

 fakeP2P

static decl fakeP2P

 Client

 Announce

Client :: Announce ( Msg , Pmode , aResults , mxlength )
Announce a message to everyone (and perhaps get answers).
Parameters:
Msg arithmetic type. Buffer is set to vec(Msg).
Pmode index into parallel modes, see ParallelExecutionModes (default=MultiParamVectors); the base tag of the MPI messages.
If aResults is an address, the actual tags sent equal BaseTag[Pmode]+n, n=0...(Nodes-1).
aResults integer (default), no results reported
an address, returned as an mxlength x nsends matrix: the answers of all the nodes.
mxlength integer (default=1), size to set Buffer before each Recv.
Comments:
If DONOTUSECLIENT was sent as TRUE to P2P() and MPI::Nodes > 0 then the CLIENT does not call itself. Otherwise the CLIENT will announce to itself (after everyone else).
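A minimal sketch of a call from inside your client's Execute() (theta and M are hypothetical members; assumes the MultiParamVectors mode from ParallelExecutionModes is in scope):
decl replies;
Announce(theta,MultiParamVectors,&replies,M);  // send vec(theta) to every node; collect answers of length M
// each column of replies holds one node's answer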

 Execute

virtual Client :: Execute ( )
The default simply announces today's date to all nodes.

 MaxSubReturn

decl MaxSubReturn [public]

 me_as_server

decl me_as_server [public]
If the client node should also work as a server, this holds its Server object.

 NSubProblems

decl NSubProblems [public]

 Recv

Client :: Recv ( iSource , iTag )
Receive buffer from a source node.
Parameters:
iSource integer id of the source node to receive from
ANY_SOURCE, receive a message from any node.
iTag tag to receive
ANY_TAG, receive any tag
Example:
p2p->Recv(P2P::ANY_SOURCE,P2P::ANY_TAG);
println("Message Received from ",P2P::Source," with Tag ",P2P::Tag," is ",p2p.Buffer);
Comments:
Actual Source, Tag and Error are stored on exit in Source, Tag, and Error

 Send

Client :: Send ( iCount , iDest , iTag )
Point-to-Point: Sends buffer to a destination node.
Parameters:
iCount integer 0, send the whole Buffer
> 0, number of elements of Buffer to send.
iDest integer id of target/destination node.
iTag integer non-negative. User-controlled MPI Tag to accompany message.
Example:
p2p.Buffer = results;
p2p->Send(0,2,3);  //send all results to node 2 with tag 3

 Stop

Client :: Stop ( )
Send STOP_TAG to all servers, do not wait for answers.

 ToDoList

Client :: ToDoList ( mode , Inputs , aResults , mxlength , Pmode )
Distribute parallel tasks to all servers and return the results.

Exits if run by a Server node.

Parameters:
mode 0: Inputs is a matrix or array of messages
nsends: Inputs is a single vector message; nsends is how many subtasks to call
Inputs an array of length nsends or an M x nsends matrix
The inputs to send for each task.
aResults an address, returned as an mxlength x nsends matrix
The output of all the tasks.
mxlength integer, size to set Buffer before each Recv
Pmode index into base tags. See ParallelExecutionModes. Actual tags sent equal BaseTag[Pmode]+n, n=0...(nsends-1).
Comments:
If DONOTUSECLIENT was sent as TRUE to P2P() and MPI::Nodes > 0 then the CLIENT does not call itself. Otherwise the CLIENT will call itself exactly once after getting all Servers busy.

 MPI

 Barrier

static MPI :: Barrier ( )
Set an MPI Barrier to coordinate nodes. An MPI Barrier is a rendezvous point: each node waits until all nodes reach the barrier, and once all have reached it, execution continues.
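For example, a sketch that keeps any node from starting the next phase before every node has finished its setup:
MPI::Initialize();   // safe to repeat; MPI_Init() runs only on the first call
// ... each node does its own setup work here ...
MPI::Barrier();      // no node passes this point until all nodes have arrived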

 called

static decl called [public]
Initialize already called.

 CLIENT

static const decl CLIENT [public]
ID of Client Node (=0)

 Error

static decl Error [public]
Error code from last Recv

 fake

static decl fake [public]
faking message passing.

 IamClient

static decl IamClient [public]
ID==CLIENT; node is client

 ID

static decl ID [public]
My id (MPI rank).

 Initialize

static MPI :: Initialize ( )
Initialize the MPI environment. Retrieves the number of Nodes and my ID (rank).
Comments:
MPI_Init() is called only the first time Initialize() is called.

 Nodes

static decl Nodes [public]
Count of nodes available.

 Volume

static decl Volume [public]
print out info about messages.

 P2P

 ANY_SOURCE

static decl ANY_SOURCE [public]
Receive from any node

 ANY_TAG

static decl ANY_TAG [public]
Receive any tag

 Buffer

decl Buffer [public]
Place for MPI message (in/out)

 client

decl client [public]
Client object.

 Execute

virtual P2P :: Execute ( )
Begin Client-Server execution. If IamClient, call the client's (virtual) Execute(). Otherwise, enter the server's (virtual) Loop().

 MaxSimJobs

decl MaxSimJobs [public]
Nodes or Nodes-1.

 P2P

P2P :: P2P ( DONOTUSECLIENT , client , server )
Initialize Point-to-Point Communication.
Parameters:
DONOTUSECLIENT TRUE the client (node 0) will not be used as a server in ToDoList()
FALSE it will be used ONCE after all other nodes are busy
client either Client object or FALSE, only used if IamClient
If not IamClient and this is an object, it is deleted.
server either Server object or FALSE. If IamClient and DONOTUSECLIENT then this is deleted.

 server

decl server [public]
Server object.

 Source

decl Source [public]
Node that sent the last message

 STOP_TAG

static const decl STOP_TAG [public]
Tag that ends Loop

 Tag

decl Tag [public]
Tag of last message

 Peer

 Allgather

Peer :: Allgather ( iCount )
Gather and share vectors to/from all nodes.
Parameters:
iCount 0, Gather the whole Buffer
> 0, number of elements of Buffer to share.

 Allgatherv

Peer :: Allgatherv ( )
Gather variable-sized segments on all nodes. This requires that Setdisplace() be called first. The gather is "in place", so Buffer on each node must be large enough for all segments and must contain the current node's contribution in the proper location before the call.

 Allsum

Peer :: Allsum ( iCount )
Compute and share the sum of vectors to/from all nodes.

 Bcast

Peer :: Bcast ( iCount )
Broadcast buffer of size iCount from CLIENT (ROOT) to all nodes.
Parameters:
iCount 0, Broadcast the whole Buffer
> 0, number of elements of Buffer to send.
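A minimal sketch (startvals is a hypothetical k x 1 vector held only by the CLIENT; assumes every node knows the message length k):
decl peer = new Peer(), k = 10;
if (MPI::IamClient)
    peer.Buffer = startvals;          // the CLIENT supplies the values to broadcast
else
    peer.Buffer = zeros(k,1);         // receivers size Buffer to hold the incoming message
peer->Bcast(k);                       // afterwards every node's Buffer holds the CLIENT's k values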

 Buffer

decl Buffer [public]
Place for MPI message (in/out)

 Gather

Peer :: Gather ( iCount )
Gather vectors from all nodes at Client.
Parameters:
iCount 0, Gather the whole Buffer
> 0, number of elements of Buffer to share.

 Gatherv

Peer :: Gatherv ( )
Gather vectors from all nodes at the Client with variable segment sizes. This requires that Setdisplace() is called first. The gather is "in place" on CLIENT, so Buffer on CLIENT must be large enough for all segments and must contain the CLIENT's contribution at the start.
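A minimal sketch based on the description above (myseg is hypothetical, each node's own column vector; Offset is set by Setdisplace() and its extra last element holds the total size):
decl peer = new Peer(), n = rows(myseg);
peer->Setdisplace(n);                                // every node reports its own segment size
if (MPI::IamClient) {
    peer.Buffer = zeros(peer.Offset[MPI::Nodes],1);  // room for every node's segment
    peer.Buffer[:n-1] = myseg;                       // CLIENT's contribution goes at the start
    }
else
    peer.Buffer = myseg;
peer->Gatherv();   // on the CLIENT, Buffer now holds all segments stacked according to Offset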

 Offset

decl Offset [public]
Vector of offsets into Buffer used in Gatherv; the extra last element (Offset[Nodes]) is the total buffer size.

 Peer

Peer :: Peer ( )

 SegSize

decl SegSize [public]
My segment size in Gathers

 Setdisplace

Peer :: Setdisplace ( SegSize )
Set the displacement for each node in gathers. Calls Initialize() first, which will set the MPI environment if not done already.
Parameters:
SegSize the size of my segment. Must be equal across all nodes for non-variable gathers.
Comments:
The vector of displacements into the Buffer is stored in Offset, along with the total size of the Buffer (as an extra last element of Offset).

 Sum

Peer :: Sum ( iCount )
Compute the sum of vectors from all nodes at Client.
Parameters:
iCount The size of the vector to sum
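A minimal sketch (mymoments is hypothetical, a k x 1 vector of each node's partial results):
decl peer = new Peer();
peer.Buffer = mymoments;         // each node supplies its own contribution
peer->Sum(rows(mymoments));      // on the CLIENT, Buffer now holds the element-wise total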

 Server

 Execute

virtual Server :: Execute ( )
The default server code. It simply reports who I am and the tag and message received, then adds ID to Buffer.

 iml

static decl iml [public]
Initial message length for the first call to Loop.

 Loop

virtual Server :: Loop ( nxtmsgsize , calledby )
A Server loop that calls a virtual Execute() method.
Parameters:
nxtmsgsize integer. The size of Buffer expected on the first message received. It is updated by Execute() on each call.
calledby string. Name or description of the routine Loop() was called from.
Returns:
the number of trips through the loop.
The program goes into server mode (unless I am the CLIENT: if the current ID equals CLIENT then simply return). Otherwise it enters a do loop that:
Receives a message from the Client.
Calls Execute().
If Tag does NOT equal STOP_TAG, sends Buffer back to the Client.
If Tag is STOP_TAG, exits the loop and returns.
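For example, a minimal custom server task that fits this protocol (g() is a hypothetical function defined elsewhere), following the same pattern as the Jacobian example above:
MyServer::Execute() {
    decl insize = rows(Buffer);   // size of the message just received
    Buffer = g(Buffer);           // transform the message in place
    return insize;                // maximum size of the next expected message
    }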

 Recv

Server :: Recv ( iTag )
Receive buffer from CLIENT.
Parameters:
iTag tag to receive
ANY_TAG, receive any tag
Example:
p2p->Recv(ANY_TAG);
Comments:
Source and Tag are stored on exit in Source and Tag

 Send

Server :: Send ( iCount , iTag )
Server sends buffer to the CLIENT.
Parameters:
iCount 0, send the whole Buffer
> 0, number of elements of Buffer to send.
iTag integer (Non-Negative). User-controlled MPI Tag to accompany message.
Example:
p2p.Buffer = results;
p2p->Send(0,3);  //send all results to node 0 with tag 3