GlueX Offline Meeting, March 4, 2015


Revision as of 10:45, 19 March 2015

GlueX Offline Software Meeting
Wednesday, March 4, 2015
1:30 pm EST
JLab: CEBAF Center F326/327

Agenda

  1. Announcements
    1. FADC125 upsampling algorithm implemented in emulation mode (David)
    2. CentOS 6.5 added to b1pi test (Mark)
  2. Review of minutes from February 4 (all)
  3. Data Challenge 3
  4. Commissioning Run Review:
    1. Offline Monitoring Report (Kei)
    2. Commissioning-branch-to-trunk migration (Simon/Mark)
  5. EM background mix-ins (all)
  6. Action Item Review

Communication Information

Remote Connection

Slides

Talks can be deposited in the directory /group/halld/www/halldweb/html/talks/2015 on the JLab CUE. This directory is accessible from the web at https://halldweb.jlab.org/talks/2015/ .

Minutes

Present:

  • CMU: Curtis Meyer
  • FIU: Mahmoud Kamel
  • JLab: Mark Ito (chair), David Lawrence, Paul Mattione, Kei Moriya, Eric Pooser, Nathan Sparks, Justin Stevens, Simon Taylor
  • MEPhI: Dmitry Romanov
  • NU: Sean Dobbs

Announcements

  • FADC125 upsampling algorithm implemented in emulation mode. David led us through two recent emails to the group (see below). FADC time and charge information can now be taken from either (a) firmware-supplied or (b) emulated quantities. There was also a change in how pedestal information is reported; an average is now reported.
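As a rough illustration of what emulating pulse quantities from raw samples involves (this is not the actual fADC125 firmware algorithm; the pedestal window size and threshold below are made-up parameters):

```python
# Hypothetical sketch of FADC pulse emulation. The real fADC125 firmware
# parameters (pedestal window, threshold, upsampling) differ; values here
# are illustrative only.

def emulate_pulse(samples, n_ped=4, threshold=100):
    """Return (pedestal, charge, time) from a list of raw ADC samples.

    pedestal : average of the first n_ped samples (reported as an
               average, as in the change described above)
    charge   : pedestal-subtracted sum over all samples
    time     : index of the first sample exceeding pedestal + threshold,
               or None if the pulse never fires
    """
    pedestal = sum(samples[:n_ped]) / n_ped
    charge = sum(s - pedestal for s in samples)
    time = next((i for i, s in enumerate(samples)
                 if s > pedestal + threshold), None)
    return pedestal, charge, time
```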
  • CentOS 6.5 added to b1pi test.
  • Code analysis. David reported that Mike Staib has been using some Intel-provided tools to analyze our reconstruction code. Mike found a race condition in the CCDB package that has been causing crashes when running multi-threaded. This has been reported to Dmitry and he is working on a fix. The High-Performance Computing group at JLab has a license for this software (although Mike did not access the tool using that license). We will ask Mike to document his experience with the package so others can try it out.
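The minutes do not record the details of the CCDB race, but crashes of the kind described typically arise when multiple threads fill a shared constants cache without locking. A minimal sketch of the guarded-cache pattern (hypothetical names, not CCDB's actual code):

```python
import threading

class ConstantsCache:
    """Toy calibration-constants cache safe for multi-threaded access."""

    def __init__(self, loader):
        self._loader = loader          # function: key -> constants
        self._cache = {}
        self._lock = threading.Lock()  # serializes cache fills

    def get(self, key):
        # Without the lock, two threads can both see the key missing,
        # both call the loader, and interleave writes to the dict.
        with self._lock:
            if key not in self._cache:
                self._cache[key] = self._loader(key)
            return self._cache[key]
```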
  • Why we upgraded. Mark commented on the schedule for our recent upgrade of the web and database servers. The idea was to wait until after the collaboration meeting, but also to switch as far in advance of the Spring run as possible. That put things at the Monday before last.

Review of minutes from February 4

We looked at the minutes.

Dmitry has been working on the Run Control Database (RCDB). He is importing information from Sean's data monitoring database and refreshing information from re-parsed CODA log files. This work is on-going. He has also released documentation for the system.

Offline Monitoring Report

Kei described the most recent launch of the offline monitoring jobs. Please see his [[Media:2015-03-04-offline monitoring.pdf|slides]] for details. Some take-aways:

  • CPU time differed from the previous launch: CPU time was much higher relative to wall time
  • versions 10 and 09 show good correlation of CPU times
  • version 11 has much lower CPU time than version 10 (David thought this might be due to improvements in CDC plug-in efficiency)

Commissioning-branch-to-trunk migration

Justin discovered the cause of the commissioning branch's failure to reconstruct simulated data: a default zero B-field. Assigning bggen a run number from the commissioning period solved the problem; the CCDB then responds with the correct field map.
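The fix amounts to handing bggen a run number that CCDB actually covers, so the lookup returns a genuine field map instead of the zero-field default. Schematically (a toy lookup standing in for CCDB; the run range and map name below are invented for illustration):

```python
# Toy stand-in for a CCDB field-map lookup; real GlueX code queries CCDB
# through the reconstruction framework. Run ranges and names are invented.
FIELD_MAPS = {
    (2607, 3180): "commissioning_solenoid_map",  # hypothetical run range
}

def field_map_for_run(run):
    """Return the field map covering `run`, or the zero-field default."""
    for (lo, hi), name in FIELD_MAPS.items():
        if lo <= run <= hi:
            return name
    return "no_field"  # the zero-B-field default that broke reconstruction
```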

This led to a discussion of how we should handle the calibration constants for simulated data. There are two cases:

  1. simulations intended to mimic conditions of already-taken real data
  2. simulations to explore conditions beyond those already achieved

At present there are two degrees of freedom that we have to play with: run number and CCDB variation. Ideas discussed included, but were not limited to:

  • negative run numbers (i.e., (-1) × the real run number)
  • run numbers greater than 10^6 designated as simulation
  • run-period specific reserved run numbers (run ranges designated as simulation only, data taking would then avoid these run numbers, run keep-out zones in correspondence to run periods)
  • run numbers with year encoded in the higher-order digits

In the end we formed a consensus on the following scheme for the two cases:

  1. "mc" variation of CCDB: run numbers indicate the real-data run numbers that are being simulated. This variation already exists.
  2. user-named variations in CCDB: run numbers have a user-defined meaning, with the variation name reflecting the speculative conditions being explored, e.g., "high-intensity", "upgrade-study-5".
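The consensus amounts to a simple decision rule. The sketch below is illustrative only: the function name is invented, the case-2 variation names and run numbers are user-chosen, and nothing here is actual CCDB API:

```python
def mc_context(mimic_run=None, study=None):
    """Pick (ccdb_variation, run_number) for a simulation job.

    Case 1: mimic already-taken data -> "mc" variation, real run number.
    Case 2: speculative study        -> user-named variation, run number
                                        whose meaning the user defines.
    """
    if mimic_run is not None:
        return ("mc", mimic_run)   # case 1: reuse the real run number
    if study is not None:
        variation, run = study     # case 2: both chosen by the user
        return (variation, run)
    raise ValueError("specify either mimic_run or study")
```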

EM background mix-ins

We agreed to emphasize the importance of having a random trigger for study and/or inclusion of electromagnetic background.

mcsmear execution time

David did some measurements and found that the BCAL simulation is using most of the CPU time in mcsmear. This is due to the detailed simulation of hits implemented when studying different segmentation schemes for the BCAL read-out. For other studies we should be able to get away with a less detailed but less CPU-intensive approach.
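Measurements like this can be made with per-subsystem timers around each smearing step. The sketch below shows the generic pattern only; the step names and workloads are placeholders, not mcsmear internals:

```python
import time
from collections import defaultdict

class StepTimer:
    """Accumulate CPU time per named step across many events."""

    def __init__(self):
        self.totals = defaultdict(float)

    def run(self, name, func, *args):
        # time.process_time() counts CPU time, not wall time, so sleeps
        # and I/O waits do not inflate the totals.
        t0 = time.process_time()
        result = func(*args)
        self.totals[name] += time.process_time() - t0
        return result

    def dominant(self):
        """Name of the step using the most CPU time so far."""
        return max(self.totals, key=self.totals.get)
```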