Computer Farms


Several computer farms exist that can potentially be used by GlueX collaborators. This page attempts to list the farms and some rough parameters that can be used to gauge their capacity. Each farm has a contact person with whom you will need to coordinate in order to get access; the exception is the JLab farm, which can be accessed through the CUE system.


For each farm: institution, contact, nodes, cores, CPU, memory, OS, and notes.

JLab (contact: Sandy Philpott)
  Nodes/Cores/CPU: see the Scientific Computing Resources page
  Notes: This is the "Scientific Computing" farm only (there is another HPC farm for lattice calculations). It is available to anyone with a JLab CUE computer account; however, it is often busy processing experimental data.

Indiana Univ. (contact: Matt Shepherd)
  Nodes: 55; Cores: 110; CPU: 1.6 GHz

Indiana Univ. (contact: Matt Shepherd)
  Nodes: 768; Cores: 1536; CPU: 2.5 GHz
  Notes: This is a university-level farm that we can get access to if really needed. From Matt's e-mail: "If we need mass simulation work, we can also try to tap into the university research computing machines (Big Red has 768 dual 2.5 GHz nodes), but these might be best reserved for very large simulation jobs like Pythia background for high-stats analyses."

Univ. of Edinburgh (contact: Dan Watts)
  Nodes: 1456; Cores: 1456; OS: Linux
  Notes: This is a large, high-performance farm from which you can buy time. From Dan's e-mail: "We have access to a very large farm here at Edinburgh. We can apply to purchase priority time on the farm or have a current base subscription which schedules the jobs with lower priority (but seems to run fine for the current usage). It is a high-performance cluster of servers (1456 processors) and storage (over 275 TB of disk)."

Glasgow Univ. (contact: Ken Livingston)
  Nodes: 32 / 9 / 26; Cores: 64 / 72 / 52
  CPU: 2 GHz Opteron / 1.8 GHz Opteron / 1 GHz PIII
  Memory: 1 GB / 16 GB / 0.5 GB
  OS: Fedora 8

Carnegie Mellon Univ. (contact: Curtis Meyer)
  Nodes: 47; Cores: 32x8 + 15x2 = 286
  CPU: 32 nodes AMD Barcelona, 15 nodes older Xeon
  Memory: 1 GB/core; OS: RHEL5

Univ. of Connecticut (contact: Richard Jones)
  Nodes: 91; Cores: 786
  CPU: 248 cores AMD 2 GHz, 146 cores AMD 3.4 GHz, 384 cores 2 GHz i7
  Memory: 1-2 GB/core; OS: CentOS 6
  Notes: Scheduling is by Condor; accepts GlueX jobs from OSG; local users have priority over grid jobs.

Florida State Univ. (contact: Paul Eugenio)
  Nodes: 60; Cores: 118
  CPU: 88 cores Intel Core 2 Quad Q6600 2.4 GHz, 30 cores AMD MP 2600
  Memory: 1-2 GB/core
  OS: upgrading to Rocks 5.1 (CentOS 5 based); currently CentOS 4.5 (Rocks 4.3)
  Notes: FSU Nuclear Physics Group cluster

Florida State Univ. (contact: Paul Eugenio)
  Nodes: 400; Cores: 2788
  CPU: dual-core Opteron 2220 2.8 GHz, quad-core Opteron 2356 2.3 GHz, quad-core AMD 2382 (Shanghai) 2.6 GHz
  Memory: 2 GB/core
  OS: x86_64 CentOS 5 based Rocks 5.0
  Notes: FSU HPC university cluster

Northwestern Univ. (contact: Sean Dobbs)
  Nodes: 22; Cores: 328
  CPU: various Intel Xeons, 4-10 cores, 1.9-2.4 GHz
  Memory: 1-2 GB/core; OS: CentOS 6
  Notes: NUMEP group cluster; also accepts OSG jobs

Univ. of Regina (contact: Zisis Papandreou)
  Nodes: 10; Cores: 20
  CPU: Intel Xeon 2.80 GHz
  Memory: 1 GB/node; OS: Red Hat 9
  Notes: Batch system: Condor queue, access through head node. NFS disk handling; close to 0.75 TB.
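Several of the farms above schedule work through Condor (HTCondor), so jobs are submitted via a submit description file rather than run interactively. As a rough sketch only (the executable name, file names, and resource request here are hypothetical and will differ per site; check with the farm contact for local conventions):

```
# minimal HTCondor submit description (all names below are hypothetical)
universe       = vanilla
executable     = run_gluex_sim.sh    # hypothetical wrapper script for the job
output         = job.out             # stdout of the job
error          = job.err             # stderr of the job
log            = job.log             # HTCondor's own event log
request_memory = 2GB                 # most farms above provide 1-2 GB/core
queue 1
```

The file would be submitted with `condor_submit` and monitored with `condor_q`; on farms that accept OSG jobs (UConn, Northwestern), remember that local users may have priority over grid submissions.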