March 9, 2016 Calibration

From GlueXWiki
Revision as of 13:05, 21 March 2016 by Sdobbs (Talk | contribs) (Minutes)


GlueX Calibration Meeting
Wednesday, March 9, 2016
11:00 am, EST
JLab: CEBAF Center, F326

Communication Information

Remote Connection

You can connect via BlueJeans using meeting number 630 804 895:

  1. Make sure you have created a BlueJeans account via your JLab CUE account using this link:

  2. Meeting ID: 630804895
    • (you may need to type this in, depending on how you connect)

  3. If connecting via Web Browser: click this link (no passcode is needed):

  4. If connecting via iOS or Android App:
    • Use your JLab e-mail address to log in and then enter the meeting ID given above to join the meeting

  5. If connecting via Phone: Dial one of the following numbers and then enter the meeting ID above and hit "#" or "##"

  6. If connecting via Polycom unit:
    • Dial 199.48.152.152 or bjn.vc
    • Enter meeting ID above
    • Use *4 to unmute

Slides

Talks can be deposited in the directory /group/halld/www/halldweb/html/talks/2016 on the JLab CUE. This directory is accessible from the web at https://halldweb.jlab.org/talks/2016/ .

Agenda

  1. Announcements
  2. Calibration Tasks
  3. Subdetector Reports
  4. Sim1 progress
  5. AOB

Minutes

Attending: Sean (NU); Simon, Mark D., Paul M., Will M., Justin, Nathan, Adesh, Mark I., Eugene (JLab); Matt S. (IU); Curtis, Naomi (CMU); Mahmoud (FIU); Cristiano (MIT)

  • Announcements
    • There was a discussion of farm usage based on this email from Mark Ito yesterday.
      • Over the weekend, we "stress tested" the farm with calibration and monitoring/reconstruction jobs. This led to us using more than our "share" of farm resources for an extended period of time (~50% of the farm, compared to an allocated share of 30%). A large portion of the farm had been converted to "exclusive" nodes based on our expected usage, and there was talk of converting those nodes back.
      • We had previously let SciComp know that we expected our usage this spring to be high, based on the data we are collecting. Mark I. is planning to discuss raising our allocated share of farm resources.
      • Paul reported a scaling factor of 35 on the exclusive nodes. Each node has 24 physical cores + 24 hyperthreads, so that corresponds to each hyperthread contributing ~45% of a physical core for these jobs. Sean will work on extracting the corresponding numbers for calibration jobs.
      • There was talk of adding a "12 core" job queue, based on utilization numbers that are not well understood. No one knows how many machines are currently configured in each queue; this information should be on the SciComp website. Mark I. also pointed out that the usage numbers in the email he forwarded should be available to everyone, which could help us monitor our own usage.
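      The hyperthread figure quoted above follows from a quick back-of-the-envelope calculation (a sketch using only the numbers reported in the discussion: a scaling factor of 35 per node, with 24 physical cores and 24 hyperthreads):

```python
# Back-of-the-envelope check of the quoted exclusive-node scaling numbers.
physical_cores = 24   # physical cores per exclusive node (quoted above)
hyperthreads = 24     # additional hyperthreads per node
scaling_factor = 35   # effective "core equivalents" per node, as reported

# Solve physical_cores + hyperthreads * x = scaling_factor for x,
# the throughput of one hyperthread relative to a physical core.
hyperthread_fraction = (scaling_factor - physical_cores) / hyperthreads
print(f"Each hyperthread ~ {hyperthread_fraction:.0%} of a physical core")
```

      This gives 11/24, i.e. ~45-46% of a physical core, consistent with the figure quoted in the meeting.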
    • Sean asked if anyone had yet looked at the mode 8 data that we've taken (planned as ~5-10% of the total production data). This data was to be used to check our reconstruction. No one had, and there is some confusion about which runs are mode 8 production data and which are used for trigger studies (the latter often have detectors taken out of the readout).
    • Mark I. pointed out that on the offline FAQ, there is a SWIF section, which contains some useful information (including undocumented commands!). He encouraged people (especially Paul and Sean) to add to this.
    • Mark I. has tagged a new version of sim-recon.
  • Calibration Train
    • Sean reported that he has started calibration jobs on all of the production runs taken up to last Friday. The jobs are slowly making their way through the farm due to slow farm throughput (as discussed above) and the need for more manual intervention than expected. He is working on processing and improving the results.
    • Sean and Paul have sent Chris Larrieu (the author of SWIF) a long list of suggested features over the past few months. Chris has said that he has not been able to work on SWIF during this time, and it is not clear when the next major release of SWIF can be expected.
    • He has also started to update the wiki page to give some useful information for people looking at the results, i.e., what plugins are run, what outputs are generated and where to find them.
    • Another run over all the data is planned. This will include TOF calibrations and EVIO skims for pi0 calibration. Sean is working on updating the EVIO output code to support these skims and fixing some other bugs along the way, notably in determining the run number for EVIO files.
    • Beni mentioned that he asked for the TOF calibrations to be added to the calibration train because his own calibration jobs couldn't keep up with the data. Sean agreed that running in the train maximizes the efficiency of passes over the raw data and suggested that others do the same.
  • Spring 2016 Recon Checklist
    • Simon reported that he and Mike S. have been working on improving the FDC reconstruction, including improving timing cuts to pick up more hits and better handling of edge cases.
    • Sean reported that according to Cristiano, who is looking at eta -> e+ e- gamma, the electron reconstruction fixes implemented by Simon are working well.
    • Justin reported that he is finding that the edge of the coherent peak seems stable in some runs and unstable in others. He is still working to understand whether we need to track the variation of the beam polarization calibration within a run.
    • There has been some work to correctly handle the quality factor in CDC hits.
  • Mcsmear updates
    • Mahmoud showed some results from a first comparison of Start Counter efficiencies in data and simulation. See his slides for details. There is general agreement within ~1%, except in the very forward and very backward directions. It's not clear what is going on in the backward direction, but the tracks involved in this measurement need to be studied. In the forward direction, the efficiencies are very sensitive to the alignment of the detector; it is known that this alignment needs to be carefully measured and updated. We may also be sensitive to the different sizes of the paddles.
  • Data Validation
    • Sean is taking a first pass through the production runs to mark good runs, to at least eliminate the obviously unsuitable ones. Naomi offered to help cross-check these with information from the elog. This check is needed because the current production DAQ settings do not clearly differentiate production runs from various test runs, and until recently end-of-run comments were not being regularly left.
  • Subdetector Reports
    • FCAL - Adesh is working on preparing some more high voltage updates. Eugene asked about the gain dependence on magnetic field. The dependence is non-linear: roughly 6-7% from 0 to 1200 A, but only ~1% from 800 to 1200 A.
    • BCAL - Will M. and Mark D. are working on improving the clustering algorithm. Will M. is also working on analyses of pi0 Dalitz decays.
    • CDC - Some of the HV channels are not great. This should be addressed next access.
    • TOF - HV scans have been taken, and Sasha O. is analyzing the data to determine the best set point for the next running period. Beni installed the new TDC bin size calibrations; no major differences or improvements were seen (as expected).
    • TAGH - Some counters may be turned off to extend their lifetime; this is under discussion.
    • TAGM - At the RC meeting, it was reported that Alex B. and Richard are analyzing the data taken to update the calibrations.
    • PS - Alex S. is planning to work on the energy calibrations soon.
    • TPOL - Nathan reported that the analysis of polarimeter data is going well. A first look at the data from the 20 µm diamond runs shows polarization similar to the 50 µm diamond runs, but there are substantial systematics that need to be understood.
  • Sim1 progress
    • Mark I. reported that 5700 out of 15000 jobs have been completed. Progress is generally going well. There was an incident on the farm this morning that caused a large number of jobs to fail.