GlueX Offline Meeting, October 30, 2013

From GlueXWiki

GlueX Offline Software Meeting
Wednesday, October 30, 2013
1:30 pm EDT
JLab: CEBAF Center F326/327


  1. Announcements
    1. Mirror database server outage
    2. Encouraging trends in JLab Farm Utilization by GlueX collaborators
  2. Review of minutes from the last meeting: all
  3. Software Review Planning
    1. GlueX/Hall-D Computing
    2. JLab Farm CPU request, timeline (Mark)
  4. IU Mini Data Challenge (Kei)
  5. Mantis Bug Tracker Review
  6. Review of recent repository activity: all

Communication Information

Video Conferencing

Desktop Sharing

You can view the computer desktop in the meeting room at JLab via the web.

  1. Go to
  2. In the "join a meeting" box enter the Hall D code: 1833622
  3. Fill in the participant registration form.


To connect by telephone:

  1. dial:
  2. enter access code followed by the # sign: 1833622#


Talks can be deposited in the directory /group/halld/www/halldweb/html/talks/2013-4Q on the JLab CUE. This directory is accessible from the web at .



Attendees

  • CMU: Paul Mattione, Curtis Meyer
  • IU: Kei Moriya, Matt Shepherd
  • JLab: Mark Dalton, Mark Ito (chair), David Lawrence, Nathan Sparks, Simon Taylor, Elliott Wolin, Beni Zihlmann
  • Northwestern: Sean Dobbs
  • UConn: Alex Barnes


Announcements

  1. Elliott reported that the counting-room-RAID-to-tape-library transfer is working. A directory structure for the raw data is in place. Elliott will give us a description at a future meeting.
  2. David has an SCons system that builds the entire sim-recon tree, motivated by the desire to run different parts of the build in parallel. He will give us a presentation describing the new system at a future meeting.
  3. Mark announced that the Lab has entered an agreement with GitHub to host git repositories. The site is already being used actively by Halls A and C. Mark succeeded in uploading our Subversion repository to the GitHub site. He will report to the group when he has a better understanding of what he is doing.
  4. Mark also announced that he is about to release sim-recon-2013-10-17. This version builds on three platforms and can be checked out from the SVN repository.

Long-Term Code Versioning

Paul raised the issue of how we should manage future versions of the code when we have significant data sets taken and reconstructed. Simulated data related to the real data will also exist. Analysis tools may evolve, but the reconstruction and the simulation will have to be tied to the data. One version of the software will likely not fulfill all of our needs.

Paul will give a presentation sometime in December on ideas he and Matt have been discussing.

Review of Minutes from the Last Meeting

We looked over the minutes of the October 16th meeting.

  • The single-track reconstruction test now runs to completion, but the efficiency is lower than it used to be. Tracks in the backward direction (polar angle greater than 70 degrees) show the biggest problem. Simon is investigating.
  • David has been making progress on the vertex-smearing issue raised at the last meeting by Kei. We will get a report when he is finished.

Software Review Planning

Curtis led us through the list of slides for each of the talks planned for the upcoming review. Draft talks should be ready next week. Rolf will hold another division-wide meeting on November 11th, at which we are supposed to show the draft slides.

David is working on the manpower assessment. He has received responses from some, but not all, institutions. This time we hope to account for analysis manpower as well.

Mini Data Challenge at IU

Kei presented slides summarizing a mini data challenge he performed on the IU cluster. See his slides for details.

He generated 50 million bggen events at a rate of about 20 million per day. He saw failures (no REST file, or an unusable REST file) at about the same rate we have seen in previous data challenges; this time, however, he was able to report some details of the failures. He also examined the output sample in a few hyperon channels. He is exploring ways to increase his statistics.