An MPI job can be configured with the standard #PBS resource specification:

#PBS -lselect=N:ncpus=Y:mem=Z

and then run with:

module load mpi
mpiexec a.out

This will start NxY MPI ranks, with Y ranks on each of N distinct compute nodes.
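
For example, with purely illustrative values, the resource request and launch lines of a job script might read:

#PBS -lselect=4:ncpus=32:mem=64gb

module load mpi
mpiexec a.out

This would start 4x32 = 128 MPI ranks, 32 on each of 4 nodes.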

Occasionally you may wish to run a hybrid program: one which combines MPI with OpenMP or another threading scheme. In this case, you can control the number of MPI ranks and threads placed on each node with the additional directives mpiprocs and ompthreads. For example:

#PBS -lselect=N:ncpus=Y:mem=Z:mpiprocs=P:ompthreads=W

This will cause N nodes, each with Y CPUs, to be allocated to the job. On each node there will be P MPI ranks, each configured to run W threads. You should ensure that PxW <= Y, or the job may be aborted by PBS for exceeding its stated requirements.

When omitted, these parameters take the default values mpiprocs=ncpus and ompthreads=1.
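
As an illustration, using example figures only, a hybrid request such as:

#PBS -lselect=2:ncpus=32:mem=64gb:mpiprocs=4:ompthreads=8

allocates 2 nodes of 32 CPUs each, places 4 MPI ranks on each node (8 ranks in total) and configures each rank to run 8 OpenMP threads. Since 4x8 = 32, the stated ncpus is not exceeded. The program is then launched with mpiexec exactly as before.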

Running a program with mpiexec 

mpiexec does not require any arguments other than the name of the program to run. For example:

mpiexec a.out [program arguments]

Note in particular:

  • it is not necessary to add "-n" or any other flag to specify the number of ranks
  • it is not necessary to specify the full path to the program: mpiexec will search PATH for it
  • mpiexec must always be used, never mpirun or any other alternative launcher
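
For instance, a program that takes its own command-line options (the program name and options below are purely hypothetical) is launched simply as:

module load mpi
mpiexec my_solver -in input.dat

There is no -n flag and no path prefix: the number of ranks follows from the PBS resource request, and my_solver is located via PATH.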

If you wish to use pbsexec to ensure early termination of the job as it approaches its walltime limit, place it before mpiexec, e.g.:

pbsexec mpiexec a.out
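
Putting this together, a complete job script using pbsexec might look as follows (the resource values and program name are illustrative only):

#!/bin/bash
#PBS -lselect=2:ncpus=32:mem=64gb
#PBS -lwalltime=08:00:00

# Move to the directory the job was submitted from
cd $PBS_O_WORKDIR

module load mpi
pbsexec mpiexec a.out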