HOWTO use AmpTools on the JLab farm GPUs

Access through SLURM

JLab currently provides NVIDIA Titan RTX and T4 cards on the sciml19 and sciml21 nodes. The nodes can be accessed through SLURM, where N is the number of requested cards (1-4):

>salloc --gres gpu:TitanRTX:N --partition gpu --nodes 1

or

>salloc --gres gpu:T4:N --partition gpu --nodes 1

An interactive shell (e.g. bash) on the node with requested allocation can be opened with srun:

>srun --pty bash
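
For example, to request a single Titan RTX card and open a bash shell on the allocated node, the two commands can be combined (a minimal sketch using the options above):

>salloc --gres gpu:TitanRTX:1 --partition gpu --nodes 1
>srun --pty bash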

Information about the cards, the CUDA version, and the current usage is displayed with this command:

>nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01    Driver Version: 418.87.01    CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN RTX           Off  | 00000000:3E:00.0 Off |                  N/A |
| 41%   27C    P8     2W / 280W |      0MiB / 24190MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

AmpTools Compilation with CUDA

This example was done in csh for the Titan RTX cards on sciml1902.

1) Download the latest AmpTools source from GitHub

git clone git@github.com:mashephe/AmpTools.git

2) Set AMPTOOLS directory

setenv AMPTOOLS_HOME $PWD/AmpTools/
setenv AMPTOOLS $AMPTOOLS_HOME/AmpTools/
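
The repository keeps the library code in an inner AmpTools/ directory, which is why AMPTOOLS points one level below AMPTOOLS_HOME; the Makefile used in the steps below should now be visible:

ls $AMPTOOLS/Makefile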

3) Load cuda environment module

module add cuda
setenv CUDA_INSTALL_PATH /usr/local/cuda
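
If the module loaded correctly, the CUDA compiler nvcc should now be in the path; a quick check of its version:

which nvcc
nvcc --version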

4) Put root-config in your path

setenv PATH $ROOTSYS/bin:$PATH
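
To verify, root-config should now resolve and report the ROOT version:

which root-config
root-config --version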

5) Edit the AmpTools Makefile to pass the appropriate GPU architecture to the CUDA compiler (sm_75 matches the compute capability 7.5 of the Turing-based Titan RTX and T4 cards; see the NVIDIA CUDA documentation for other cards)

CUDA_FLAGS := -m64 -arch=sm_75
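
After the edit, the architecture flag can be double-checked with a simple grep:

grep CUDA_FLAGS $AMPTOOLS/Makefile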

6) Build main AmpTools library with GPU support

cd $AMPTOOLS
make gpu
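
For convenience, the steps above can be collected into a single csh snippet (a sketch assuming the same paths and cards as above; the Makefile edit from step 5 still has to be done by hand before running make gpu):

# clone AmpTools and point the environment at it
git clone git@github.com:mashephe/AmpTools.git
setenv AMPTOOLS_HOME $PWD/AmpTools/
setenv AMPTOOLS $AMPTOOLS_HOME/AmpTools/
# CUDA and ROOT setup
module add cuda
setenv CUDA_INSTALL_PATH /usr/local/cuda
setenv PATH $ROOTSYS/bin:$PATH
# edit $AMPTOOLS/Makefile so that CUDA_FLAGS := -m64 -arch=sm_75, then build
cd $AMPTOOLS
make gpu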

halld_sim Compilation with GPU

The GPU-dependent part of halld_sim is libraries/AMPTOOLS_AMPS/, where the GPU kernels are located. With the environment set up above, the full halld_sim should be compiled; the build will recognize the AMPTOOLS GPU flag and build the necessary libraries and executables to run on the GPU:

cd $HALLD_SIM_HOME/src/
scons -u install -j8
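
If only the amplitude library needs to be rebuilt after changing the GPU kernels, the same command can be run from within that subdirectory (assuming the usual scons -u behavior of the halld_sim build system):

cd $HALLD_SIM_HOME/src/libraries/AMPTOOLS_AMPS/
scons -u install -j8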

Performing Fits Interactively

With the environment set up above, the fit executable is run the same way as on a CPU:

fit -c YOURCONFIG.cfg

where YOURCONFIG.cfg is your usual config file. Note: additional command-line parameters can be passed as needed.
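
To confirm that the fit is actually using the card, the GPU utilization and the fit process can be monitored from a second shell on the same node (e.g. another srun --pty bash into the allocation):

watch -n 5 nvidia-smi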

Submitting Batch Jobs