Introduction

MPICH2 is a portable implementation of MPI, a standard for message passing in distributed-memory applications used in parallel computing. It efficiently supports a range of computation and communication platforms, including commodity clusters, high-speed networks, and proprietary high-end computing systems. MPICH2 is free software and is available for most flavours of Unix and for Microsoft Windows.

MPICH2 separates process management from communication. The default runtime environment consists of a set of daemons, called mpd, that establish communication among the machines to be used before the application processes start. This gives a clearer picture of what is wrong when communication cannot be established, and provides a fast, scalable startup mechanism for parallel jobs.

MPICH2 on the fermi cluster

The fermi cluster has mpich2-1.4.1p1 installed on all four nodes. MPICH2 is an all-new implementation of the MPI standard, designed to support all of the MPI-2 additions to MPI, such as dynamic process management, one-sided operations, and parallel I/O, making it more robust, efficient, and convenient to use.

How to Use

Log in to fermi1.serc.iisc.ernet.in with your computational login id. On the fermi cluster, MPICH2 is installed in /opt/mpich2-1.4.1p1. Users must set the PATH environment variable in order to use MPICH2.

MPICH2 compiled with gcc:
For tcsh and csh:
    setenv PATH /opt/mpich2-1.4.1p1/gcc/bin:$PATH
For bash and sh:
    export PATH=/opt/mpich2-1.4.1p1/gcc/bin:$PATH

MPICH2 compiled with Intel compilers:

For tcsh and csh:
    setenv PATH /opt/mpich2-1.4.1p1/intel/bin:$PATH
For bash and sh:
    export PATH=/opt/mpich2-1.4.1p1/intel/bin:$PATH

Set the appropriate PATH before compiling and linking.

Compiling and Linking

Use the mpicc, mpicxx, mpif77, and mpif90 commands for C, C++, Fortran 77, and Fortran 90 programs, respectively.

Running Programs

After the program has compiled successfully and the executable has been created, use mpiexec to run the job.

Documentation:

Report Problems to:
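As a quick check that the environment is set up correctly, a minimal MPI program can be compiled with mpicc and launched with mpiexec. The program below is a generic hello-world sketch for illustration; the file name hello.c and the process count used later are assumptions, not part of the cluster configuration:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down MPI */
    return 0;
}
```

With the PATH set as described above, it can be compiled and run with, for example:

    mpicc -o hello hello.c
    mpiexec -n 4 ./hello

Each of the 4 processes (the count is illustrative) prints one line with its rank.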
If you encounter any problem using this software, please report it to the SERC helpdesk at the email address helpdesk_serc, or contact the System Administrators in room #109 (SERC).