petsc-dev

The procedure for building petsc-dev varies from system to system.

On matrix.westgrid.ca

./config/configure.py  --with-mpi-dir=/opt/hpmpi/lib/linux_amd64 --without-fortran --with-debugging=no  --LIBS=-L/usr/apps/PGI-7.2-1/linux86-64/7.2-1/lib/ --with-clanguage=cxx --with-sieve=1 --download-boost=1 --download-chaco   --with-opt-sieve=1 --download-c-blas-lapack=1
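
After configure finishes, the build itself is the usual PETSc make step. A minimal sketch (the PETSC_ARCH value below is only a guess; use the arch name that configure actually reports):

export PETSC_DIR=$PWD                  # run from the top of the petsc-dev tree
export PETSC_ARCH=linux-gnu-cxx-opt    # replace with the arch name printed by configure
make all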

On terminus.ucalgary.ca

web: http://hpc.ucalgary.ca/

Get the petsc-dev code

[seki@h3 ~]$ cd software/src
[seki@h3 src]$ wget ftp://info.mcs.anl.gov/pub/petsc/petsc-dev.tar.gz
[seki@h3 src]$ tar -xzf petsc-dev.tar.gz

Alternatively, and preferably, install the Mercurial (hg) source control tool first and use it to get the code; it is exactly the tool the PETSc developers themselves use.

[seki@h3 ~]$ cd software/src/
[seki@h3 src]$ wget http://www.selenic.com/mercurial/release/mercurial-1.1.1.tar.gz
[seki@h3 src]$ tar -xzf mercurial-1.1.1.tar.gz
[seki@h3 src]$ cd mercurial-1.1.1
[seki@h3 mercurial-1.1.1]$ make install-home

This will install Mercurial into ~/bin and ~/lib64 (or ~/lib on 32-bit systems).
Now edit ~/.bashrc to set the following variables:

export PATH=$PATH:~/bin
export PYTHONPATH=${HOME}/lib64/python

After ". ~/.bashrc", you should be able to run the hg command.

Now retrieve the latest petsc-dev code.

[seki@h3 ~]$ cd software/src/
[seki@h3 src]$ hg clone http://petsc.cs.iit.edu/petsc/petsc-dev
[seki@h3 src]$ cd petsc-dev/config
[seki@h3 config]$ hg clone http://petsc.cs.iit.edu/petsc/BuildSystem BuildSystem

Set up the Intel compilers and MPI

Run the following two commands

module add intel
module add mpi

It is a good idea to put these two lines in ~/.bashrc.
Now check the MPI compiler wrappers:

[seki@h3 src]$ mpiCC -show
/usr/apps/intel/cce/10.1.015/bin/icc -L/opt/hpmpi/lib/linux_amd64 -I/opt/hpmpi/include -lhpmpio -lhpmpi -ldl -lmpiCC -Kc++
[seki@h3 ~]$ mpicc -show
/usr/apps/intel/cce/10.1.015/bin/icc -L/opt/hpmpi/lib/linux_amd64 -I/opt/hpmpi/include -lhpmpio -lhpmpi -ldl
[seki@h3 ~]$ mpif90 -show
/usr/apps/intel/fce/10.1.015/bin/ifort -L/opt/hpmpi/lib/linux_amd64 -I/opt/hpmpi/include/64 -lhpmpio -lhpmpi -ldl

Make sure your ~/.bashrc has the following:

module add intel
module add mpi
export LD_LIBRARY_PATH=/opt/hpmpi/lib/linux_amd64/:/usr/apps/intel/cce/10.1.015/lib/:/usr/apps/intel/fce/10.1.015/lib/
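
A quick sanity check that these settings take effect in a fresh login shell (output omitted; the paths should match the wrapper output shown above):

[seki@h3 ~]$ . ~/.bashrc
[seki@h3 ~]$ which mpicc mpiCC mpif90
[seki@h3 ~]$ icc -V
[seki@h3 ~]$ echo $LD_LIBRARY_PATH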

Build petsc-dev

[seki@h3 ~]$ cd software/src/petsc-dev
[seki@h2 petsc-dev]$ config/configure.py --with-clanguage=cxx --with-sieve=1 --download-chaco --with-shared=0 --with-opt-sieve=1 --with-debugging=no --with-mpi-shared=1 --download-f-blas-lapack=1 --download-boost=1 --with-batch

Note that --with-batch is required due to the scheduler used on terminus, i.e., mpirun is required to run any MPI code, even with 1 process.


After config/configure.py is finished, the following output is shown:
Since your compute nodes require use of a batch system or mpiexec you must: 
 1) Submit ./conftest to 1 processor of your batch system or system you are     
    cross-compiling for; this will generate the file reconfigure.py             
 2) Run ./reconfigure.py (to complete the configure process).

You should find the file conftest in the current directory.

[seki@h2 petsc-dev]$ ./conftest
[seki@h2 petsc-dev]$ ./reconfigure.py

Edit ~/.bashrc to have

export PETSC_DIR=/home/users/seki/software/src/petsc-dev
export PETSC_ARCH=linux-gnu-cxx-opt

and then

[seki@h2 petsc-dev]$ . ~/.bashrc
[seki@h2 petsc-dev]$ make

A lot of build output will then be shown.
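
Optionally, the build can be sanity-checked with PETSc's test target. Note that on terminus the MPI test runs may need to go through the scheduler, since MPI programs have to be launched with mpirun:

[seki@h2 petsc-dev]$ make test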

On an Amazon EC2 Virtual Machine

1. sudo apt-get install mercurial
2. Obtain the petsc-dev code (see the clone commands just after this list)
3. sudo apt-get install gfortran
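
For step 2, the same Mercurial clone as on terminus works; a minimal sketch (the ~/software/src location is arbitrary):

mkdir -p ~/software/src && cd ~/software/src
hg clone http://petsc.cs.iit.edu/petsc/petsc-dev
cd petsc-dev/config
hg clone http://petsc.cs.iit.edu/petsc/BuildSystem BuildSystem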

For shared memory only

4.

 config/configure.py --with-clanguage=cxx --with-sieve=1 --download-chaco --with-shared=0 --with-opt-sieve=1 --with-debugging=no --with-mpi-shared=1 --download-f-blas-lapack=1 --download-boost=1 --with-batch --download-mpich

Simply passing --download-mpich assumes the following options when building MPICH2

--without-mpe --with-pm=gforker

by default. This works for a single shared-memory machine, but for a cluster the non-default options shown below must be used.
5.

./conftest
./reconfigure.py
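
With the default gforker process manager, mpiexec simply forks processes on the local machine, so no daemon setup is needed. A quick check might look like this (assuming --download-mpich installed MPICH into the PETSc arch directory, and that the paths below match your build):

export PETSC_DIR=$HOME/software/src/petsc-dev   # adjust to the actual clone location
export PETSC_ARCH=linux-gnu-cxx-opt             # the arch name reported by configure may differ
$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -n 2 hostname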

For a cluster

4.

 config/configure.py --with-clanguage=cxx --with-sieve=1 --download-chaco --with-shared=0 --with-opt-sieve=1 --with-debugging=no --with-mpi-shared=0 --download-f-blas-lapack=1 --download-boost=1 --download-mpich --download-mpich-pm=mpd

I should have taken out --with-batch earlier; it is omitted here. Also, --with-mpi-shared is set to 0 since we are doing cluster computing.
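
With the mpd process manager, an mpd ring has to be started across the cluster nodes before mpiexec will work. A rough sketch, assuming the MPICH built by --download-mpich ended up under the PETSc arch directory and that mpd.hosts lists the node names (node counts and names here are placeholders):

echo "MPD_SECRETWORD=changeme" > ~/.mpd.conf   # mpd refuses to start without this file
chmod 600 ~/.mpd.conf
$PETSC_DIR/$PETSC_ARCH/bin/mpdboot -n 4 -f mpd.hosts   # start mpds on 4 nodes
$PETSC_DIR/$PETSC_ARCH/bin/mpdtrace                    # verify the ring
$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -n 8 hostname       # run across the ring
$PETSC_DIR/$PETSC_ARCH/bin/mpdallexit                  # shut the ring down when done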

Build FCPU code

Now that petsc-dev is successfully built, the FCPU code can be built. Please refer to the AICT cluster tutorial for how to build FCPU and run CFD simulations.
