Classroom Cluster - Using MPI

Parallelism on hpc-class is obtained by using MPI. All accounts are set up so that MPI uses the high-performance InfiniBand communication network. To use MPI:

* To compile Fortran 77, Fortran 90, or Fortran 95 MPI programs, use mpiifort. 
     To compile C MPI programs use mpiicc; for C++ MPI programs use mpiicpc. 
* Use the hpc-class PBS script_writer to write a script to submit to the batch queue. 
* In the script use "mpirun -np 8 ./a.out" to start 8 MPI processes.
* Make sure that the executable (a.out in the example above) resides in either 
     /home/user (where 'user' is your user name) or /ptmp . Both these locations 
     are mounted on each of the compute nodes.
     Don't place the executable in the local file system (/tmp), as each node 
     has its own /tmp. Files placed in /tmp on the front-end node won't be 
     available on the compute nodes, so mpirun won't be able to start 
     processes there.
* One can use the local disk on each compute node by reading from and 
     writing to $TMPDIR. This is temporary storage that exists only during 
     the execution of your program. Only the processes running on a node 
     have access to that node's disk. Since 16 processors share this same 
     storage, you must include the rank of the executing MPI process in the 
     names of any files you read or write in $TMPDIR. The size of $TMPDIR 
     is about 3 TB.
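The per-rank naming convention above can be sketched in shell. Here RANK stands in for the value a real program would obtain from MPI_Comm_rank, and the file name is purely illustrative:

```shell
# Each MPI rank writes to its own file under $TMPDIR, so the 16 processes
# sharing a node's local disk never collide on a file name.
TMPDIR=${TMPDIR:-/tmp}   # fall back to /tmp when not inside a PBS job
RANK=3                   # in a real program, this comes from MPI_Comm_rank
scratch="$TMPDIR/scratch_rank${RANK}.dat"
echo "data from rank $RANK" > "$scratch"
cat "$scratch"           # prints: data from rank 3
```

In a Fortran or C program the same idea applies: build the file name from the rank before opening the file.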

* The -e and -o PBS files are not available until the PBS job finishes, so 
     you may want to use "mpirun -np 12 ./a.out >& output_file". You can 
     then watch the output on hpc-class while the job is running. 
     Alternatively, you can use the qpeek command:
       "qpeek    job#" shows STDOUT while the job is running.
       "qpeek -e job#" shows STDERR while the job is running.