
 InstallAndUse

Install and Use CFMPI.

One-time Initialization on a Cluster with MPI

Install Ox and niqlow.
Find MPI
Locate the MPI lib and include directories on the cluster. You need to find the files mpi.h and libmpi.so (or .a or .la) and note their locations. You also need to choose a C compiler (such as gcc).
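For example, commands like the following will often locate the needed files on a Linux cluster; with OpenMPI, the mpicc wrapper can also report the flags it uses. The paths and tools shown here are illustrative and may not match your cluster.
$ find /usr /opt -name mpi.h 2>/dev/null          # include directory
$ find /usr /opt -name 'libmpi.*' 2>/dev/null     # library directory
$ mpicc -showme:compile                           # OpenMPI only: compile flags (include path)
$ mpicc -showme:link                              # OpenMPI only: link flags (library path)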
If your cluster uses OpenMPI then you should be able to use CFMPI without changing the Ox and C code it uses. Further, CFMPI obtains some special integer codes used by MPI dynamically, so implementations of MPI other than OpenMPI should not require any code changes. However, there is no guarantee that CFMPI is fully compatible with the MPI on your cluster. If any errors occur, please contact the author. The amount of tweaking needed to work with MPI variants is probably very small.
Make the CFMPI Shared Object file
Edit the file niqlow/source/CFMPI/Makefile with the information for your C compiler and MPI library. Then make the CFMPI shared library file:
$ make -C $niqlowHOME/source/CFMPI 
If successful, this installs CFMPI.so in niqlow/include.
Include MPI and niqlow in the LD_LIBRARY_PATH when running Ox.
If the oxl script was edited properly when niqlow was installed, it adds the niqlow include directory to LD_LIBRARY_PATH. If not, you must add that directory to the path yourself before running Ox with CFMPI.
The directory that contains libmpi.so (or .a or .la) must also be on LD_LIBRARY_PATH when Ox runs.
This can be done by modifying the oxl script so that it happens every time Ox runs.
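For example, in a job script or shell startup file (the paths shown are illustrative; use the MPI library directory found earlier and the location of your niqlow installation):
# Illustrative only: both the MPI library directory and niqlow/include
# must be on LD_LIBRARY_PATH when Ox runs with CFMPI.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/openmpi/lib:$niqlowHOME/include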

Use CFMPI with or without MPI

To include CFMPI in your program, add these lines at the top of your file:
#include "CFMPI.ox"
#ifdef CFMPIfakeDEFINED
#include "fakeCFMPI.ox"
#endif
These four lines allow you to use MPI for parallel communication when the MPI library is linked in; if it is not, fake serial versions of the CFMPI routines are used instead.
Note that you should not use import with CFMPI. When MPI is undefined, a set of non-parallel dummy functions replaces the parallel routines that call the MPI library. In this way, the same code works whether MPI is present or not. This substitution must occur when your program is run, so CFMPI should not be compiled to an .oxo file. When the .ox file is included, the conditional definition of either the dummy routines or the real interface routines happens on your machine.
To link the MPI interface routines
If MPI is available and is to be used, the Ox macro MPI must be defined when your program is run.
There are two ways to do this. The preferred method is to define it on the command line:
$ oxl  -DMPI  MyOxProgram
Done this way, the user code does not need to change to switch between running with and without MPI.
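For example, the same program (the program name here is illustrative) can then be launched either way:
$ oxl MyOxProgram            # serial: MPI undefined, fake routines are used
$ oxl -DMPI MyOxProgram      # parallel: MPI defined (run within the MPI environment; see below)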
Alternatively, define MPI before including the code:
#define MPI
#include "CFMPI.ox"
#ifdef CFMPIfakeDEFINED
#include "fakeCFMPI.ox"
#endif
The problem with the second method is that it generates a linking error if the program is run where MPI is not available. By defining MPI on the command line, the same user code will work either in parallel or in serial.

Run oxl within the MPI environment.

A program that uses MPI must be run under the MPI runtime environment.
It is not sufficient to link in the library, because the environment must assign nodes (processors) to your program and give each one an MPI rank (an integer ID).
How to run a program in the MPI environment is specific to your cluster. Usually the cluster provides a script that submits your job to a queue and allocates nodes to it.
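As a rough sketch only (the launcher, queue directives, and process count below are assumptions; your cluster's documentation is authoritative), a job script for a SLURM-based cluster might look like this:
#!/bin/bash
#SBATCH --ntasks=8                   # request 8 MPI processes (illustrative)
#SBATCH --time=01:00:00
# Ensure the MPI library and niqlow/include directories are on LD_LIBRARY_PATH here
# if the oxl script does not already add them.
mpirun oxl -DMPI MyOxProgram         # each process is started with its own MPI rank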