Source: niqlow/templates/CFMPI/ClientServerTemplate1.ox.

Your program must include useMPI.ox, which is located in the niqlow/include directory:

    #include "useMPI.ox"

In turn, useMPI.ox uses preprocessor macros to determine whether real MPI message passing is available (linked in) or whether fake (simulated) message passing on a single instance of the program should occur. See also How to ....
On the client (ID=0) node, P2P() will delete the server object it was sent and keep the client object. There are two exceptions. First, if there is only one node (Nodes=1), then that node is both client and server. Second, if there is more than one node but the first argument to P2P() is FALSE, then the user is asking the client node to act as a server in addition to the other nodes. In either case the client node will maintain both the client and server objects.

    P2P::Execute() {
        if (IamClient) client->Execute(); else server->Loop(Server::iml);
        }
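A minimal sketch of a main program in the spirit of the template (MyClient and MyServer are the user-derived classes described next; passing TRUE asks that the client node not double as a server):

    #include "useMPI.ox"
    main() {
        decl p2p = new P2P(TRUE, new MyClient(), new MyServer());
        p2p->Execute();   // client runs MyClient::Execute(); other nodes enter the server Loop()
        }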
The user provides a class derived from Client with its own Execute() method. This does everything the client must do to get the job done, and it can use other methods to call on the servers for help, especially ToDoList(). The user also provides a class derived from Server with its own Execute() method. This carries out whatever task the servers must perform for the client. Server Execute() is called from the built-in Loop() routine, which waits for messages and stops once the STOP_TAG is received from the client.
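A sketch of the user-side declarations under those conventions (the data members anticipate the Jacobian example below and are otherwise illustrative):

    struct MyClient : Client {
        decl N, theta, epsmatrix, Jmat, Jeps;
        Execute();
        }
    struct MyServer : Server {
        decl N;
        Execute();
        }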
For example, given a function f(const theta), where theta is a N×1 vector and f() returns a M×1 output, the Jacobian can be computed in parallel with the following code:

    MyClient::Execute() {
        N = rows(theta);
        ToDoList( (theta+epsmatrix) ~ (theta-epsmatrix), &Jmat, M, 1);
        Jmat = (Jmat[][:N-1] - Jmat[][N:])/Jeps;
        }
    MyServer::Execute() {
        N = rows(Buffer);
        Buffer = f(Buffer);
        return N;
        }
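The fragment leaves epsmatrix undefined. A minimal way to construct it, assuming a single additive step of size Jeps per parameter (so the forward and backward columns differ by exactly Jeps in one element), would be:

    epsmatrix = unit(N)*(Jeps/2);   // assumption: additive half-steps on the identity matrix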
The client builds a N×(2N) matrix of parameter vectors centered on theta. Each column is either a step forward or backward in one of the parameters. (This code is a bit crude, because a proportional step size should be used, with an additive step only if the element of theta is very close to 0.) The server receives the vector to evaluate f() at in Buffer. It then calls f(), which returns the output to be put back in the buffer for return to the client. Execute() must always return the maximum size of the next expected message so that Loop() can initialize storage for it.

If the program is run without MPI defined on the command line, then ToDoList() reduces to a loop that calls Execute() on the same node in serial, sending one column at a time. There is a small amount of overhead in terms of intermediate function calls, and in serial only one column would have to be stored rather than 2N columns. In most cases this overhead is not very large, especially when f() is not trivial. And the same code can be used whether MPI is available or not.
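What that serial fallback amounts to can be sketched as follows (an illustration of the logic only, not the actual library code; the function name is hypothetical):

    // Illustration: ToDoList() without MPI, one column "sent" at a time
    SerialToDoList(Inputs, aResults, mxlength) {
        decl n, nsends = columns(Inputs), results = zeros(mxlength, nsends);
        for (n = 0; n < nsends; ++n) {
            server.Buffer = Inputs[][n];    // deliver one task to the in-process server
            server->Execute();              // the same node plays the server role
            results[][n] = server.Buffer;   // collect the reply
            }
        aResults[0] = results;
        }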
Global variables

| Name | Modifiers | Description |
|---|---|---|
| fakeP2P | static | |

Client class

Public fields

| Name | Modifiers | Description |
|---|---|---|
| MaxSubReturn | | |
| me_as_server | | If the client node should work, this holds its Server object. |
| NSubProblems | | |

Public methods

| Name | Modifiers | Description |
|---|---|---|
| Announce | | Announce a message to everyone (and perhaps get answers). |
| Execute | virtual | The default simply announces today's date to all nodes. |
| Recv | | Receive a buffer from a source node. |
| Send | | Point-to-point: send the buffer to a destination node. |
| Stop | | Send STOP_TAG to all servers; do not wait for answers. |
| ToDoList | | Distribute parallel tasks to all servers and return the results. |
MPI class. All of its members are static.
Public fields

| Name | Modifiers | Description |
|---|---|---|
| called | static | Initialize has already been called. |
| CLIENT | static const | ID of the client node (=0). |
| Error | static | Error code from the last Recv. |
| fake | static | Faking message passing. |
| IamClient | static | ID==CLIENT; this node is the client. |
| ID | static | My ID (MPI rank). |
| Nodes | static | Count of nodes available. |
| Volume | static | Print out information about messages. |

Public methods

| Name | Modifiers | Description |
|---|---|---|
| Barrier | static | Set an MPI barrier to coordinate nodes. |
| Initialize | static | Initialize the MPI environment. |
P2P class

Public fields

| Name | Modifiers | Description |
|---|---|---|
| ANY_SOURCE | static | Receive from any node. |
| ANY_TAG | static | Receive any tag. |
| Buffer | | Place for the MPI message (in/out). |
| client | | Client object. |
| MaxSimJobs | | Nodes or Nodes-1. |
| server | | Server object. |
| Source | | Node that sent the last message. |
| STOP_TAG | static const | Tag that ends Loop. |
| Tag | | Tag of the last message. |

Public methods

| Name | Modifiers | Description |
|---|---|---|
| Execute | virtual | Begin Client-Server execution. |
| P2P | | Initialize point-to-point communication. |
Peer class

Public fields

| Name | Modifiers | Description |
|---|---|---|
| Buffer | | Place for the MPI message (in/out). |
| Offset | | Vector of offsets in the buffer for Gatherv; Offset[Node] is the total buffer size. |
| SegSize | | My segment size in gathers. |

Public methods

| Name | Modifiers | Description |
|---|---|---|
| Allgather | | Gather and share vectors to/from all nodes. |
| Allgatherv | | Gather variable-sized segments on all nodes. |
| Allsum | | Compute and share the sum of vectors to/from all nodes. |
| Bcast | | Broadcast a buffer of size iCount from CLIENT (ROOT) to all nodes. |
| Gather | | Gather vectors from all nodes at the client. |
| Gatherv | | Gather vectors from all nodes at the client, with variable segment sizes. |
| Peer | | |
| Setdisplace | | Set the displacement for each node in gathers. |
| Sum | | Compute the sum of vectors from all nodes at the client. |
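A hedged sketch of how the group routines are used (peer and myvec are illustrative names; each node loads its own vector before the call):

    peer.Buffer = myvec;          // every node contributes its own vector
    peer->Allsum(rows(myvec));    // afterwards each node's Buffer holds the elementwise sum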
Server class

Public fields

| Name | Modifiers | Description |
|---|---|---|
| iml | static | Initial message length for the first call to Loop. |

Public methods

| Name | Modifiers | Description |
|---|---|---|
| Execute | virtual | The default server code. |
| Loop | virtual | A server loop that calls the virtual Execute() method. |
| Recv | | Receive a buffer from CLIENT. |
| Send | | Send a buffer to the CLIENT. |
Client::Announce() parameters

| Parameter | Description |
|---|---|
| Msg | Arithmetic type; Buffer is set to vec(Msg). |
| Pmode | Index into the parallel modes (see ParallelExecutionModes; default=MultiParamVectors), the base tag of the MPI messages. If aResults is an address, the actual tags sent equal BaseTag[Pmode]+n, n=0...(Nodes-1). |
| aResults | Integer (default): no results reported. An address: returned as a mxlength × nsends matrix holding the answers of all the nodes. |
| mxlength | Integer (default=1); size to set Buffer before Recv. |

Note: if DONOTUSECLIENT was sent as TRUE to P2P() and MPI::Nodes > 0, then the CLIENT does not call itself. Otherwise the CLIENT will announce to itself (after everyone else).
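A minimal usage sketch relying on the defaults (theta as the message is illustrative):

    p2p.client->Announce(theta);   // broadcast the current parameter vector; no replies collected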
Client::Recv() parameters

| Parameter | Description |
|---|---|
| iSource | ID of the source node, or ANY_SOURCE to receive a message from any node. |
| iTag | Tag to receive, or ANY_TAG to receive any tag. |

    p2p->Recv(P2P::ANY_SOURCE, P2P::ANY_TAG);
    println("Message received from ", P2P::Source, " with tag ", P2P::Tag, " is ", p2p.Buffer);
Client::Send() parameters

| Parameter | Description |
|---|---|
| iCount | Integer. 0: send the whole Buffer; > 0: the number of elements of Buffer to send. |
| iDest | Integer; ID of the target/destination node. |
| iTag | Integer, non-negative; user-controlled MPI tag to accompany the message. |

    p2p.Buffer = results;
    p2p->Send(0, 2, 3);   // send all results to node 2 with tag 3
Client::ToDoList() parameters

| Parameter | Description |
|---|---|
| mode | 0: Inputs is a matrix or array of messages. nsends: Inputs is a single vector message, and nsends is how many subtasks to call. |
| Inputs | An array of length nsends or a M × nsends matrix; the inputs to send for each task. |
| aResults | An address; returned as a mxlength × nsends matrix holding the output of all the tasks. |
| mxlength | Integer; size to set Buffer before Recv. |
| Pmode | Index into the base tags (see ParallelExecutionModes). The actual tags sent equal BaseTag[Pmode]+n, n=0...(nsends-1). |
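A hedged sketch of the single-message form (mode = nsends): the same vector is dispatched as ten subtasks, distinguished on the server side by their tags.

    decl results;
    p2p.client->ToDoList(10, theta, &results, M, 1);  // one message, ten subtasks, Pmode = 1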
P2P() constructor parameters

| Parameter | Description |
|---|---|
| DONOTUSECLIENT | TRUE: the client (node 0) will not be used as a server in ToDoList(). FALSE: it will be used once, after all other nodes are busy. |
| client | Either a Client object or FALSE; only used if IamClient. If not IamClient and this is a class, it is deleted. |
| server | Either a Server object or FALSE. If IamClient and DONOTUSECLIENT, then this is deleted. |
Peer::Allgather() parameter

| Parameter | Description |
|---|---|
| iCount | 0: gather the whole Buffer; > 0: the number of elements of Buffer to share. |
Peer::Bcast() parameter

| Parameter | Description |
|---|---|
| iCount | 0: broadcast the whole Buffer; > 0: the number of elements of Buffer to send. |
Peer::Gather() parameter

| Parameter | Description |
|---|---|
| iCount | 0: gather the whole Buffer; > 0: the number of elements of Buffer to share. |
Peer::Setdisplace() parameter

| Parameter | Description |
|---|---|
| SegSize | The size of my segment. Must be equal across all nodes for non-variable gathers. |
Peer::Sum() parameter

| Parameter | Description |
|---|---|
| iCount | The size of the vector to sum. |
Server::Loop() parameters

| Parameter | Description |
|---|---|
| nxtmsgsize | Integer; the size of Buffer expected on the first message received. It is updated by the return value of Execute() on each call. |
| calledby | String; the name or description of the routine Loop was called from. |
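As seen in P2P::Execute() above, the framework starts the loop with the initial message length; a hedged sketch that also labels the caller (assuming the calledby argument is optional):

    server->Loop(Server::iml, "P2P::Execute");   // wait for tasks until STOP_TAG arrives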
Server::Recv() parameter

| Parameter | Description |
|---|---|
| iTag | Tag to receive, or ANY_TAG to receive any tag. |

    p2p->Recv(ANY_TAG);
Server::Send() parameters

| Parameter | Description |
|---|---|
| iCount | 0: send the whole Buffer; > 0: the number of elements of Buffer to send. |
| iTag | Integer (non-negative); user-controlled MPI tag to accompany the message. |

    p2p.Buffer = results;
    p2p->Send(0, 3);   // send all results to the client (node 0) with tag 3