

3. Building Parallel Applications

The LSF Parallel system provides tools to help build parallel applications that take full advantage of the LSF Batch system. Most existing parallel applications can be reused simply by re-linking with the PAM-aware MPI library; in some cases there is no need even to re-compile.

This chapter describes the basic steps in building a parallel application: the overall structure of the application and how it is compiled and linked.

Note

This chapter focuses on building a parallel application to make optimal use of the LSF Batch system. It assumes familiarity with the LSF Suite of products and standard MPI. Therefore it does not discuss writing MPI programs.

This chapter contains the following topics:

Including the Header File

Compiling and Linking

Building a Heterogeneous Parallel Application

Including the Header File

A set of PAM-aware header files is included with the LSF Parallel system installation. They are typically located in the LSF_INCLUDEDIR/lsf/mpi/ directory. The header files contain the MPI definitions, macros, and function prototypes needed to use the LSF Parallel system.

Include Syntax

An include statement must be placed at the top of any source file that calls MPI routines. The include statement looks like this:

In C applications:

   #include <mpi.h> 

In Fortran 77 applications:

   INCLUDE 'mpif.h' 

Note

If the header files are not located in the LSF_INCLUDEDIR/lsf/mpi/ directory, check with your system administrator.
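
For reference, the sketch below shows a minimal MPI C program of the kind compiled in the next section. The program body is illustrative only and is not part of the LSF Parallel distribution; any C program that includes mpi.h and brackets its MPI calls with MPI_Init and MPI_Finalize follows the same pattern.

   #include <stdio.h>
   #include <mpi.h>

   int main(int argc, char *argv[])
   {
       int rank, size;

       MPI_Init(&argc, &argv);                /* start up the MPI environment */
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this task's rank in the job  */
       MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of tasks        */
       printf("Hello from task %d of %d\n", rank, size);
       MPI_Finalize();                        /* shut down MPI                */
       return 0;
   } 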

Compiling and Linking

The LSF Parallel system provides a set of scripts to help build executables: mpicc for C programs and mpif77 for Fortran 77 programs. These scripts supply the options and special libraries needed to compile and link MPI programs for use with the LSF Parallel system; applications are linked against the system-dependent libraries and the appropriate MPI library.

C Programs

The LSF Parallel C compiler, mpicc, is used to compile MPI C source files. It is used in a similar manner to other UNIX-based C compilers.

For example, to compile the sample program contained in the file myjob.c, enter:

   % mpicc -c myjob.c 

This command produces the file myjob.o, which contains the object code for the source file.

To link the myjob.o object file with the LSF Parallel libraries to create an executable, enter:

   % mpicc -o myjob myjob.o 

As with most C compilers, the -o flag specifies that the name of the executable produced by the linker is to be myjob.

The C source file can be compiled and linked in one step using the following command:

   % mpicc myjob.c -o myjob 

Fortran 77 Programs

The LSF Parallel Fortran 77 compiler, mpif77, is used to compile MPI Fortran 77 source files. It is used in a similar manner to other UNIX-based Fortran 77 compilers.

For example, to compile the sample program contained in the file myjob.f, enter:

   % mpif77 -c myjob.f 

This command produces the file myjob.o, which contains the object code for the source file.

To link the myjob.o object file with the LSF Parallel libraries to create an executable, enter:

   % mpif77 -o myjob myjob.o 

As with most Fortran 77 compilers, the -o flag specifies that the name of the executable produced by the linker is to be myjob.

The Fortran 77 source file can be compiled and linked in one step using the following command:

   % mpif77 myjob.f -o myjob 
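
In general, the mpicc and mpif77 scripts pass options they do not recognize through to the underlying system compiler, so ordinary compiler flags such as optimization or debugging options can usually be supplied on the same command line. This pass-through behaviour is typical of MPI compiler wrapper scripts; check your installation's documentation to confirm which options are supported. For example, assuming the underlying C compiler accepts the -O flag:

   % mpicc -O -o myjob myjob.c 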

Building a Heterogeneous Parallel Application

The LSF Parallel system provides a host type substitution facility that allows a heterogeneous, multiple-architecture distributed application to be submitted to the LSF Batch system. The following steps outline how to build and deploy such an application:

1. Design the parallel application.

2. Compile the application on all LSF host-type architectures that will be used to support this application.

Note

The binaries must either be named with valid LSF host type extensions or placed in directories named after valid LSF host types.

3. Place the binaries on an appropriate shared file system, or distribute them to the execution hosts.

4. Use the %a notation to submit the parallel application to the LSF Batch system, as shown in the sketch following this list.
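
As a sketch of steps 2 through 4 using host type extensions, the commands below build and submit a two-architecture version of myjob. The host type strings SUNSOL and RS6K are taken from the lshosts output shown later in this section; substitute the host types reported by lshosts on your own cluster.

On a Sun Solaris host:

   % mpicc -o myjob.SUNSOL myjob.c 

On an RS6000 host:

   % mpicc -o myjob.RS6K myjob.c 

Then, from a shared directory containing both binaries:

   % pam -n 2 myjob.%a 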

LSF Host Type Naming Convention

Binaries must be compiled on the target host type architectures. Each binary must be named using a valid LSF host type string, either as the extension to its name or as the name of a directory in its path (lshosts displays a list of valid LSF host types). When the %a notation is used to submit a parallel application to the LSF Batch system, the target host type string is substituted for %a.

All binaries for a specific application must be named using the same host type substitution format (i.e., binary extension or path name).

Example: The following binaries are named with host type extensions that identify the target platform on which each is to run, in this case Sun Solaris and RS6000 machines:

   myjob.SUNSOL
   myjob.RS6K 

Example: The following binaries are placed in directories named after the target host type, again for Sun Solaris and RS6000 machines:

   SUNSOL/myjob
   RS6K/myjob 

%a Notation

After a parallel application is submitted to the LSF Batch system, the Parallel Application Manager (PAM) replaces the %a notation with the appropriate LSF host type string. PAM then launches the individual tasks of the application on the remote hosts using the correct binaries.

Note

Use the lshosts command to display the available LSF hosts and their host types. For example:

   % lshosts
HOST_NAME  type    model     cpuf  ncpus maxmem maxswp server      RESOURCES
host1      SUNSOL  SunSparc  6.0   1     64M    112M   Yes (solaris cserver)
host2      RS6K    IBM350    7.0   1     64M    124M   Yes     (cserver aix)
 

Example: To submit the myjob application from the directory containing the binaries, using LSF host type extensions, enter:

   % pam -n 2 myjob.%a 

PAM makes the following substitutions for the %a notation:

   On host1 (host type SUNSOL):  myjob.%a becomes myjob.SUNSOL
   On host2 (host type RS6K):    myjob.%a becomes myjob.RS6K





doc@platform.com

Copyright © 1994-1998 Platform Computing Corporation.
All rights reserved.