2012 Software Review Planning Meeting: Mar. 16, 2012

From GlueXWiki
Revision as of 14:11, 16 March 2012 by Cmeyer (Talk | contribs) (Introductory talk)


Main Hall-D Software Review Page



  • +1-866-740-1260 : US+Canada
  • +1-303-248-0285 : International
  • then enter participant code: 3421244# (remember the "#").
  • or www.readytalk.com (and code without the #)


Dear all,

    Thanks for the work in coming to concise summaries of our
previously self-assigned actions. I think this has been a very
useful communication and collaboration between the four Halls
and other interested parties. Very good!

    Please find attached an example charge as given to me by
Chip Watson, and once more the earlier document on planning for
the software review.

    Here's our further tentative planning, assuming a software
review June 7 + 8:
    Between March 29 and April 4:
        Followup preparatory meeting to discuss
    1) few slides intro on scientific computing outlook by 2015
    2) each Hall one slide on anticipated weaknesses/tight spots
        and one slide on planned management structure

    Week of April 23-27:
        Initial Dry Run: Talk Outlines
            (what you plan to present in each slide)

    May 14 or May 15:
        First Dry Run: Near-complete presentations

    Period of May 24, 25, 29, 30 (dates to be determined):
        Final Dry Run: Timed presentations

                    Best regards,    Rolf 
  • Time-line for Hall-D
    • April 23-27 - talk outlines.
    • April 30 - Written material for the review (CDR?).
    • May 14/15 - First draft of presentations for dry run.
    • May 24/25 or 29/30 - Final talks with timed presentations.
  • June 7/8 - Software Review at JLab.
  • Cross-Hall Collaboration
  • Organization of Hall-D preparations for Review


Attendees: Mark Ito, David Lawrence and Curtis Meyer

Status of Review Organization

Cross-Hall Collaboration

Organization of Hall-D preparations for Review

Introductory talk

  • We went over the introductory talk, which we felt should be fairly graphically oriented.

Our current feeling is that the talk should include the following:

  1. Overview of Collaboration/Hall D software management.
  2. Overview (graphically) of what the software needs to do with rough data volumes and possibly estimates of compute resources.
  3. An "event display" showing a b1pi event and discussing what we need to have to do amplitude analysis.
  4. An overview of the 3pi analysis done by Jake, showing that we have carried out a full analysis chain: event and background generation, GEANT simulation on the grid, actual reconstruction of the events, physics cuts, and finally an amplitude analysis that could have been run on a GPU.
  5. A list of what we feel still needs to be done to have us ready for data analysis in 2015.

Software Manpower within collaboration

Offline Computing Schedule

Software Milestones/Challenges

  • Curtis has written a one-page summary document describing the experience in CLAS of moving large data sets from Jefferson Lab to a remote site for analysis.



  • Curtis will contact members of the collaboration to try to get an estimate of the current computing resources available outside of Jefferson Lab for GlueX. The example from CMU is:
At CMU: These resources are shared with our lattice QCD group, with whom
we jointly operate the cluster. All nodes use roughly 2.4 GHz CPUs and
all are under three years old at the moment.

32   dual quad-core AMD machines   ->  256 compute cores, 1 GB RAM per core
12   quad eight-core AMD machines  ->  384 compute cores, 2 GB RAM per core
180 TB of RAID storage spread across several servers
10 Gb/s backbone from the RAID servers to the switches and 2 Gb/s connections
to each compute node.
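The core and memory figures above follow directly from the machine counts; a minimal sketch (hypothetical helper, just making the quoted arithmetic explicit) tallies them:

```python
# Tally of the CMU cluster figures quoted above.
# Each entry: (boxes, sockets, cores_per_socket, GB_RAM_per_core)
machines = [
    (32, 2, 4, 1),   # dual quad-core AMD machines, 1 GB per core
    (12, 4, 8, 2),   # quad eight-core AMD machines, 2 GB per core
]

total_cores = sum(n * s * c for n, s, c, _ in machines)
total_ram_gb = sum(n * s * c * gb for n, s, c, gb in machines)

print(total_cores)    # 32*2*4 + 12*4*8 = 256 + 384 = 640 cores
print(total_ram_gb)   # 256 GB + 768 GB = 1024 GB
```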

Over the next few years, we anticipate purchasing around 16 boxes of the next
generation of CPUs (quad sixteen- or thirty-two-core boxes). We also expect to
double our disk storage capability as we start to retire some of our very old few-TB servers.

Action Items