HOWTO use AmpTools on the JLab farm GPUs

Access through SLURM

JLab currently provides NVIDIA Titan RTX or T4 cards on the sciml19 and sciml21 nodes. The nodes can be accessed through SLURM, where N below is the number of requested cards (1-4):

>salloc --gres gpu:TitanRTX:N --partition gpu --nodes 1

or

>salloc --gres gpu:T4:N --partition gpu --nodes 1

An interactive shell (e.g. bash) on the allocated node can be opened with srun:

>srun --pty bash
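
For example, a complete interactive request for two T4 cards followed by a shell on the allocated node might look like this (the card type and count are only illustrative):

>salloc --gres gpu:T4:2 --partition gpu --nodes 1
>srun --pty bash

Typing exit in the srun shell returns to the salloc shell, and exiting that shell releases the GPU allocation.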

Information about the cards, the CUDA version, and current usage is displayed with this command:

>nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01    Driver Version: 418.87.01    CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN RTX           Off  | 00000000:3E:00.0 Off |                  N/A |
| 41%   27C    P8     2W / 280W |      0MiB / 24190MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

AmpTools Compilation with CUDA

This example was done in csh for the Titan RTX cards on sciml1902.

1) Download the latest AmpTools release

git clone git@github.com:mashephe/AmpTools.git

2) Set the AMPTOOLS_HOME directory to the top of the AmpTools release

setenv AMPTOOLS_HOME $PWD/AmpTools/

3) Load the CUDA environment module

module add cuda
setenv CUDA_INSTALL_PATH /usr/local/cuda
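
Optionally, verify that the CUDA compiler is now in your path:

which nvcc
nvcc --version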

4) Set the AMPTOOLS directory to the AmpTools subdirectory of the release

setenv AMPTOOLS $AMPTOOLS_HOME/AmpTools/

5) Put root-config in your path

setenv PATH $ROOTSYS/bin:$PATH

6) Edit the AmpTools Makefile to pass the appropriate GPU architecture to the CUDA compiler (sm_75 corresponds to compute capability 7.5 of the Turing-generation Titan RTX and T4 cards)

CUDA_FLAGS := -m64 -arch=sm_75

7) Build the main AmpTools library with GPU support

cd $AMPTOOLS_HOME
make gpu
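
The later halld_sim build and the fit jobs rely on this same environment, so it can be convenient to collect the settings in a small csh script and source it in each new shell or batch job. The sketch below is only an example; the file name setup_amptools_gpu.csh and the installation path are placeholders to be adapted to your setup.

# setup_amptools_gpu.csh -- example environment setup for GPU fits
module add cuda
setenv CUDA_INSTALL_PATH /usr/local/cuda
setenv AMPTOOLS_HOME /path/to/your/AmpTools    # top of the AmpTools release
setenv AMPTOOLS $AMPTOOLS_HOME/AmpTools
setenv PATH ${ROOTSYS}/bin:${PATH}

A new shell then only needs

source setup_amptools_gpu.csh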

halld_sim Compilation with GPU

The GPU-dependent part of halld_sim is libraries/AMPTOOLS_AMPS/, where the GPU kernels are located. With the environment set up as above, compile the full halld_sim; the build will recognize the AmpTools GPU flag and build the libraries and executables needed to run on the GPU.

cd $HALLD_SIM_HOME/src/
scons -u install -j8

Performing Fits Interactively

With the environment set up as above, the fit executable is run the same way as on a CPU:

fit -c YOURCONFIG.cfg

where YOURCONFIG.cfg is your usual config file. Note: additional command line parameters can be used as well, as needed.
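
Putting the pieces together, a complete interactive GPU fit might look like the following sketch, assuming the example setup script from above; the card type and the config file name are placeholders.

>salloc --gres gpu:TitanRTX:1 --partition gpu --nodes 1
>srun --pty bash
>source setup_amptools_gpu.csh
>fit -c YOURCONFIG.cfg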

Combining GPU and MPI

AmpTools

Build the main AmpTools library with GPU and MPI support (note the "mpigpu" make target):

cd $AMPTOOLS_HOME
make mpigpu

halld_sim

With the environment set up as above, the fitMPI executable is the only thing that needs to be recompiled; the build will recognize the AmpTools GPU and MPI flags and build the libraries and executables needed to run on the GPU with MPI.

cd $HALLD_SIM_HOME/src/programs/AmplitudeAnalysis/fitMPI/
scons -u install

Performing Fits Interactively

The fitMPI executable is run with mpirun in the same way as on a CPU:

mpirun -np N fitMPI -c YOURCONFIG.cfg

where N is the number of parallel processes to use in the fit and YOURCONFIG.cfg is your usual config file. Note: additional command line parameters can be used as well, as needed.

Submitting Batch Jobs
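
For non-interactive running, the same commands can be wrapped in a SLURM batch script and submitted with sbatch. The sketch below is only an example: it assumes the setup script from above, an allocation of four Titan RTX cards, and the usual AmpTools leader/follower layout in which rank 0 only coordinates, so the number of MPI processes is the number of GPUs plus one. Resource requests and file names should be adapted to your fit.

#!/bin/csh
#SBATCH --job-name=amptools_gpu_fit
#SBATCH --partition=gpu
#SBATCH --gres=gpu:TitanRTX:4
#SBATCH --nodes=1
#SBATCH --time=04:00:00

# set up the CUDA/AmpTools environment (example script from above)
source setup_amptools_gpu.csh

# one worker process per GPU plus one leader process
mpirun -np 5 fitMPI -c YOURCONFIG.cfg

The script is submitted from the login node with, e.g., sbatch fit_gpu.csh, and its progress can be checked with squeue.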