HOWTO use AmpTools on the JLab farm GPUs

=== Access through SLURM ===

JLab currently provides 4 NVIDIA Titan RTX cards per node. The nodes can be accessed through SLURM, where N is the number of requested cards (1-4):

 >salloc --gres gpu:TitanRTX:N --partition gpu --nodes 1

An interactive shell (e.g. bash) on the allocated node can then be opened with srun:

 >srun --pty bash
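
For example, a single-card session can be requested and checked as follows (a minimal sketch; SLURM exports CUDA_VISIBLE_DEVICES listing the granted cards, so the exact value shown depends on the allocation):

 >salloc --gres gpu:TitanRTX:1 --partition gpu --nodes 1
 >srun --pty bash
 >echo $CUDA_VISIBLE_DEVICES
 0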

Information about the cards, the CUDA version, and current usage is displayed with this command:

 >nvidia-smi

 +-----------------------------------------------------------------------------+
 | NVIDIA-SMI 418.87.01    Driver Version: 418.87.01    CUDA Version: 10.1     |
 |-------------------------------+----------------------+----------------------+
 | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
 |===============================+======================+======================|
 |   0  TITAN RTX           Off  | 00000000:3E:00.0 Off |                  N/A |
 | 41%   27C    P8     2W / 280W |      0MiB / 24190MiB |      0%      Default |
 +-------------------------------+----------------------+----------------------+
 
 +-----------------------------------------------------------------------------+
 | Processes:                                                       GPU Memory |
 |  GPU       PID   Type   Process name                             Usage      |
 |=============================================================================|
 |  No running processes found                                                 |
 +-----------------------------------------------------------------------------+
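
For a compact, scriptable summary of card usage (e.g. while monitoring a running fit), nvidia-smi's query mode can be used instead of the full table:

 >nvidia-smi --query-gpu=index,name,memory.used,memory.total,utilization.gpu --format=csv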

=== AmpTools Compilation with CUDA ===

This example was done in csh for the Titan RTX cards on sciml1902.

'''1)''' Download the latest AmpTools release

 wget https://github.com/mashephe/AmpTools/archive/refs/tags/v0.12.2.tar.gz

'''2)''' Extract the files

 tar -xvf v0.12.2.tar.gz
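
Note: the GitHub tarball should unpack into a versioned directory (presumably AmpTools-0.12.2/) that contains the AmpTools source tree referenced in step 4, so change into it first:

 cd AmpTools-0.12.2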

'''3)''' Load the CUDA environment module

 module add cuda
 setenv CUDA_INSTALL_PATH /usr/local/cuda
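
To verify that the module made the toolkit visible (the exact module name and version on the sciml nodes may differ):

 module list
 nvcc --version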

'''4)''' Set the AMPTOOLS directory

 setenv AMPTOOLS $PWD/AmpTools

'''5)''' Put root-config in your PATH

 setenv PATH $ROOTSYS/bin:$PATH
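
This assumes a ROOT installation with ROOTSYS already set in the environment. A quick sanity check:

 which root-config
 root-config --version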

'''6)''' Edit the AmpTools Makefile to pass the appropriate GPU architecture to the CUDA compiler (info e.g. [https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/ here])

 CUDA_FLAGS := -m64 -arch=sm_75
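
The Titan RTX is a Turing-generation card with compute capability 7.5, hence sm_75. When in doubt about the cards in a given node, the deviceQuery utility shipped with the CUDA toolkit reports the capability (the path below assumes a standard toolkit layout):

 $CUDA_INSTALL_PATH/extras/demo_suite/deviceQuery | grep Capability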

'''7)''' Build the main AmpTools library with GPU support

 cd $AMPTOOLS
 make GPU=1
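
If the build succeeds, the GPU-enabled static library should appear in the lib directory (the library name here is an assumption for v0.12.2; check the make output for the actual target):

 ls $AMPTOOLS/lib/libAmpTools_GPU.a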

=== Performing Fits Interactively ===
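
A sketch of an interactive GPU fit, assuming a fit executable linked against the GPU library (fitAmplitudes and fit.cfg here are hypothetical placeholders for your own fitter and configuration file):

 >salloc --gres gpu:TitanRTX:1 --partition gpu --nodes 1
 >srun --pty bash
 >fitAmplitudes fit.cfg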

=== Submitting Batch Jobs ===
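
A minimal batch-script sketch using the same resources as the interactive example; the time limit, job name, and fit executable (again the hypothetical fitAmplitudes with fit.cfg) are placeholders to adapt:

 #!/bin/bash
 #SBATCH --gres=gpu:TitanRTX:1
 #SBATCH --partition=gpu
 #SBATCH --nodes=1
 #SBATCH --time=01:00:00
 #SBATCH --job-name=amptools-gpu-fit
 
 # set up the same environment used for the build
 module add cuda
 
 # hypothetical GPU-enabled fit executable and configuration
 fitAmplitudes fit.cfg

Submit it with sbatch:

 >sbatch fit_gpu.sh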