GlueX Offline Meeting, March 2, 2016

From GlueXWiki

GlueX Offline Software Meeting
Wednesday, March 2, 2016
1:30 pm EST
JLab: CEBAF Center F326/327


  1. Announcements
    1. Deleting legacy builds
    2. Lustre upgrade
    3. Software Help Forum delayed
    4. Paper Review Status
    5. SWIF Analysis Jobs (Paul)
  2. Review of minutes from February 3 (all)
  3. Offline Monitoring (Paul): Monitoring Plan
  4. Calibration Challenge/Processing (Sean) - slides
  5. Geant4 Update (Richard, David)
  6. C++ version upgrade discussion (all)
  7. Review of recent pull requests (all)
  8. Photon beam polarization: Note on Beam Polarization
  9. Sim1 (Sean)
  10. Building and testing pull requests in a VM
  11. Action Item Review

Communication Information

Remote Connection


Talks can be deposited in the directory /group/halld/www/halldweb/html/talks/2016 on the JLab CUE. This directory is accessible from the web.


There is a recording of this meeting on the BlueJeans site.


Attendees

  • CMU: Naomi Jarvis, Curtis Meyer
  • FIU: Mahmoud Kamel
  • FSU: Brad Cannon
  • JLab: Sergey Furletov, Mark Ito (chair), Paul Mattione, Dmitry Romanov, Nathan Sparks
  • NU: Sean Dobbs
  • UConn: Richard Jones
Software Meeting


  1. Deleting legacy builds. They were backed up and deleted.
  2. Lustre upgrade. The upgrade had to be abandoned for now. SciComp will continue to study the upgrade path.
  3. Software Help Forum delayed. Mark had a concern about whether we should be mixing scientific paper discussions and software questions and answers on the same site. Computer and Network Infrastructure (CNI) is putting together a Drupal-based forum for us to test drive.
  4. Paper Review Status. The requested materials have been submitted to Graham Heyes:
    1. Responses to recommendations from past software reviews.
    2. Progress in GlueX software since the last review (Feb. 2015)
    3. A revised computing resource need estimate.
      • Mark and Graham are still going back and forth on the spreadsheet format.
  5. SWIF Analysis Jobs. Paul put together a wiki page on how to get started with the Scientific Workflow Indefatigable Factotum (SWIF). He provides a simple set of scripts. The system is oriented toward using REST data to do physics analysis.
    • Dmitry mentioned that he is interested in a system that starts from data in the RCDB and generates a template for batch analysis of chosen files. He will talk to Paul about whether Paul's system might help with this.

Review of minutes from February 3

  • Mark commented that many users are taking advantage of the one-node/multi-thread job queue that Scientific Computing (SciComp) has put up recently.
  • Dmitry reminded us that the best way to make requests for bug fixes or new features is to create an issue on the RCDB GitHub site. That way it will not get lost in the email shuffle.
    • Note that the RCDB code is still being kept in our Subversion repository. Migration of the code to GitHub will not occur until after the current run.

Offline Monitoring

Paul pointed out some features of the Monitoring Plan. He commented on the three steps during experimental running:

  1. Incoming. This is underway on all new data as it hits the tape library.
  2. Monitoring Launches. He is waiting on calibration constants from the Calibration Train to show up before starting.
  3. Initial Reconstruction Launch. We are not there yet.

We decided not to wait for the final calibration constants and to go ahead with a Monitoring Launch on Friday, after Sean has made some touch-ups to the timing constants. Also, since there is not a lot of data yet, that launch could process all of the data collected so far, effectively making it a preliminary Initial Reconstruction Launch.

Mahmoud mentioned that the monitoring plugin for the Start Counter is not seeing any TDC hits. He and Paul will get together to track down the problem.

Calibration Challenge/Processing

Sean described details of the newly initiated calibration passes on recently taken data. See his two slides for all of the bullets. In addition to the calibration steps, he discussed doing π0 skims and a couple of items left to do.

The last of the to-do items prompted a discussion of how we should feed back our experience with farm performance to SciComp and what metrics should be included in that feedback. We decided to discuss this next week, after we see how the weekend's launch goes.

Geant4 Update

Richard has been busy with diamond production lately, a good thing for all of us, and will return to work on Geant4 when the run is over.

C++ version upgrade discussion

After lengthy discussion we settled on June 1, 2016 as the date for sim-recon conversion to GCC 4.9. See the recording for details. More precisely, after that date code that relies on language features present in that version of GCC will be accepted on the sim-recon master branch. When this happens collaborators will be required to upgrade their compilers if they have not done so.

Nathan agreed to write a wiki page describing the Software Collections system, which allows installation of binaries for recent versions of GCC (among other things) on RedHat and CentOS systems. This has the potential to greatly simplify the compiler upgrade process and the subsequent environment set-up.

Photon beam polarization

Curtis called our attention to a recent note from Ken Livingston where Ken gave his thoughts on how polarization information should be recorded and how that information should be used in data analysis. We agreed that this was required reading for those who would like to contribute to the discussion going forward.

Despite this admonition, we launched into a discussion of dealing with shifts in the location of the coherent edge during a run (i.e., beyond the photon energy dependence of the polarization for a fixed coherent edge position). This led to the question of whether it should be handled by event ranges in the CCDB, which would require significant CCDB development, or by a fixed table of event-dependent parameters, one table per run, which could itself be kept in the CCDB.

Mark pointed out that the need for this approach has not been demonstrated. Richard thought that it was likely that a thin diamond would not be stable. Mark conceded that in the end we would need event ranges in the CCDB, if not to handle this problem, then for others that might arise. That feature, however, will not be there tomorrow.

In the end we agreed that the first step is to assess the need, and if something has to be done, we will do it.