lib and include directories on the cluster. You need to find the files mpi.h and libmpi.o (or .a, .la). Note their locations. You also need to choose a C compiler (such as gcc). If your cluster uses OpenMPI then you should be able to use CFMPI without changing the Ox and C code it uses. Further, CFMPI retrieves some special integer codes used by MPI dynamically, so implementations of MPI other than OpenMPI should not require any code changes. However, there is no guarantee that CFMPI is fully compatible with the MPI installed on your cluster. If any errors occur please contact the author. The amount of tweaking needed to work with MPI variants is probably very small.
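As a hedged illustration, standard system tools can be used to locate these files; the search paths below are only examples and will differ across clusters.
$ find /usr /opt -name mpi.h 2>/dev/null        # locate the MPI header
$ find /usr /opt -name 'libmpi.*' 2>/dev/null   # locate libmpi.o, .a, or .la
$ which gcc                                     # confirm the C compiler you plan to use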
Edit niqlow/source/CFMPI/Makefile with the information for your C compiler and MPI library. Then make the shared CFMPI library file:
$ make -C $niqlowHOME/source/CFMPI
If successful this installs CFMPI.so in niqlow/include.
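As a quick, hedged check that the build and install step succeeded (using the $niqlowHOME variable from the make command above):
$ ls $niqlowHOME/include/CFMPI.so    # the shared library should now be in niqlow/include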
CFMPI.so must be on LD_LIBRARY_PATH when running Ox. If oxl was edited properly when niqlow was installed, it will add the niqlow include directory to LD_LIBRARY_PATH. If not, you must add it to the path before running Ox with CFMPI. libmpi.a (or .o or .la) must also be on LD_LIBRARY_PATH when Ox runs. You can set the path in your shell start-up file or in the oxl script so that it happens anytime Ox runs.
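For example, on a typical Linux cluster the path could be extended like this before invoking oxl; the MPI library directory shown is only a guess and must be replaced with the location you noted earlier:
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$niqlowHOME/include      # directory holding CFMPI.so
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/openmpi/lib     # example directory holding libmpi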
#include "CFMPI.ox" #ifdef CFMPIfakeDEFINED #include "fakeCFMPI.ox" #endif
Use #include, not import, with CFMPI. When MPI is undefined, a set of non-parallel dummy functions replaces the parallel routines that call the MPI library. In this way, the same code works whether MPI is present or not. This substitution must occur at run time, so CFMPI should not be compiled to an .oxo file. If the .ox file is included, the conditional definition of either the dummy routines or the real interface routines happens on your machine. MPI must be defined when your program is run:
$ oxl -DMPI MyOxProgram
By doing this the user code does not need to change to switch between running with or without MPI.
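A hedged illustration of the two invocations this implies (MyOxProgram is the placeholder program name used above):
$ oxl MyOxProgram          # MPI undefined: the dummy serial routines are used
$ oxl -DMPI MyOxProgram    # MPI defined: the real CFMPI interface routines are used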
Alternatively, you can define MPI in your program before including or importing the code:
#define MPI
#ifdef CFMPIfakeDEFINED
#include "fakeCFMPI.ox"
#endif
However, by defining MPI only on the command line, the same user code will work in either parallel or serial mode. To actually run in parallel, oxl must be executed within the MPI environment.
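For instance, with OpenMPI the launcher is typically mpirun; the launcher name and process count depend on your cluster's setup, so the line below is only a sketch:
$ mpirun -np 4 oxl -DMPI MyOxProgram    # launch 4 MPI processes, each running the Ox program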