GlueX Offline Meeting, July 9, 2014
GlueX Offline Software Meeting
Wednesday, July 9, 2014
1:30 pm EDT
JLab: CEBAF Center F326/327
Agenda
- Review of minutes from June 25 (all)
- Tagger Reconstruction (Richard)
- Removing Access to Truth Information in mcsmear (Richard)
- Data Challenge
- Status of reconstruction from EVIO
- Staging EVIO files to tape
- The BlueJeans meeting number is 968592007.
- Join the Meeting via BlueJeans
- Detailed Information for BlueJeans for the Offline Software Meeting
Talks can be deposited in the directory
/group/halld/www/halldweb/html/talks/2014-3Q on the JLab CUE. This directory is accessible from the web at https://halldweb.jlab.org/talks/2014-3Q/.
Attendees
- CMU: Paul Mattione, Curtis Meyer
- FSU: Aristeidis Tsaris
- IU: Kei Moriya, Matt Shepherd
- JLab: Alex Barnes, Mark Ito (chair), David Lawrence, Mike Staib, Simon Taylor, Beni Zihlmann
- MIT: Justin Stevens
- NU: Sean Dobbs
- UConn: Richard Jones
Announcements
- CCDB 1.02 has been released. It fixes a problem with the use of non-default variations.
- sim-recon-2014-06-30 has been released. Matt and Paul asked about version compatibility for this release. On the farm machines at JLab, the following versions were used to build the tag:
- Xerces 3.1.1
- JANA 0.7.1p3
- ROOT 5.34.01
- CERNLIB 2005
- gcc/g++/gfortran : 4.4.6 20110731 (Red Hat 4.4.6-3)
- HDDS 2.1
- CCDB 1.02
This information is also contained in the release notes. Matt suggested that in the future it be posted on a web page, like the corresponding information for Data Challenge 2.
- Both of the automatic tests, single-track and b1pi, have been failing recently. Mark and Simon are looking into this.
Review of minutes from June 25
We reviewed the minutes.
Mark has succeeded in building EVIO using the source code from the Data Acquisition Group's webpage. Its use still needs to be incorporated into the build system.
No new news.
Data Challenge 3
We confirmed the sense of the last meeting: we will limit our goals to analyzing data from tape at JLab and will not try to generate a large data set suitable for physics studies. This reduces the amount of preparation needed and allows the challenge to start by the middle of August.
Generation and reconstruction of EVIO-formatted simulated data is very close. David is working on this actively.
Richard led us through his proposal for introducing a random global time offset to all events, consistent with the 500 MHz RF time structure. This would simulate the real-life uncertainty due to event-to-event variations in trigger latency and the intrinsic jitter of a fully pipelined data acquisition system driven by a 250 MHz clock. At present, all events are analyzed as if the true RF bucket were known a priori; the true beam photon energy is assumed in the analysis as well.
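The offset could look something like the following minimal sketch (the function name and the bucket range are illustrative, not taken from the actual HDGeant/mcsmear code): the whole event is shifted by a random integer number of 2 ns RF buckets, so every hit time moves together while the true bucket becomes unknown to the analysis.

```cpp
#include <random>

// Illustrative sketch only, not the actual simulation code.
// A 500 MHz RF structure means beam buckets arrive every 2 ns; an
// unknown trigger latency is modeled by shifting the whole event by
// a random integer number of buckets.
constexpr double kRFPeriodNs = 2.0;  // 500 MHz -> 2 ns bucket spacing

double randomGlobalTimeOffset(std::mt19937& rng, int maxBuckets) {
    // Uniform choice of bucket shift in [-maxBuckets, +maxBuckets];
    // the same offset would be added to every hit time in the event.
    std::uniform_int_distribution<int> pick(-maxBuckets, maxBuckets);
    return pick(rng) * kRFPeriodNs;
}
```

The key property is that the offset is always an exact multiple of the RF period, so in-time hits remain aligned with some RF bucket, just not the true one.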
Tagger hits, including out-of-time accidentals, are already present in the simulated data as long as the electromagnetic background is turned on. This change means that we would have to use detector information to determine both the time and energy of the beam photon of interest, as we will have to do for real data.
This change should not break the current reconstruction, in particular since each charged track is reconstructed with its own independent starting time. It will require changes to the analysis library, but Paul already has a scheme implemented in his analysis library for dealing with multiple photon tag candidates; it has just not been enabled for GlueX analysis. The scheme includes a parameter for setting the time window to use for tag candidates.
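As a rough sketch of the time-window idea (the types and names here are hypothetical, not Paul's actual analysis-library interface), beam photon candidates would be kept only if their tagger time falls within a configurable window around the reconstructed event start time:

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch of tag-candidate selection; not the actual
// analysis-library code. Each tagger hit carries a time and a tagged
// photon energy; only hits within +/- window of the event start time
// t0 survive as beam-photon candidates.
struct TagCandidate {
    double t_ns;   // tagger hit time
    double E_GeV;  // tagged photon energy
};

std::vector<TagCandidate> selectTagCandidates(
        const std::vector<TagCandidate>& hits,
        double t0_ns, double window_ns) {
    std::vector<TagCandidate> selected;
    for (const auto& hit : hits)
        if (std::fabs(hit.t_ns - t0_ns) < window_ns)
            selected.push_back(hit);  // in-time candidate
    return selected;
}
```

Out-of-time accidentals that survive the window would then be handled statistically in the analysis, as with real data.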
We endorsed the proposal. We thought that it should be made the default scheme, but that it should be possible to disable it with an FFREAD card in HDGeant. In addition, the time-smearing parameter should be under user control.
Paul and Richard discussed division of labor. Richard will provide a set of tagger hit objects and Paul will produce the DBeamPhoton object needed for the analysis library. Richard will code up an example of how the latter step might go.
Mark will propose a Subversion strategy for managing the changes from both Richard and Paul without impacting others during development.
Reinstating the Separation between Truth and Hit Information in HDDM
Richard described the current HDDM structure, in which Monte Carlo truth information is recorded in parallel with the "hit" (detected) information produced by mcsmear. He is implementing this scheme for some additional detectors, including the start counter and the tagger.
In the process, he is modernizing the HDDM parsing code to use the C++ API rather than the original C routines. This yields a major reduction in lines of code. In addition, compression will be applied to the HDDM output.