Calibration Train

From GlueXWiki
''Latest revision as of 23:53, 16 January 2018''

__TOC__

= Processing Overview =

* [[Old Calibration Train]]

== Job Structure ==

Currently, two passes are planned: one automated pass, and one that produces outputs for calibration procedures that are still manual.

The calibrations/plugins that are run on each pass are:

* '''Pass 1'''
** Run as many calibrations as possible on one file:
*# Step 1: RF_online
*# Step 2: HLDetectorTiming, TOF_TDC_shift
*# Step 3: st_tw_corr_auto [don't commit]
*# Step 4: HLDetectorTiming, CDC_amp, BCAL_TDC_Timing [time offsets, need to update]
* '''Pass 2'''
** Process/skim the full run
** Calibrations: BCAL_attenlength_gainratio, BCAL_LEDonline, CDC_amp, CDC_TimeToDistance, FCALpedestals, FCALpulsepeak, FCAL_TimingOffsets, HLDetectorTiming, imaging, PSC_TW, PS_timing, pedestals, ST_Propagation_Time
** EVIO skims: FCAL pi0, BCAL pi0, BCAL-LED, FCAL-LED, random, sync
** ROOT skims: TOF_calib
** Other [Monitoring]: BCAL_LED, BCAL_inv_mass, imaging, p2pi_hists, p3pi_hists
* '''Incoming'''
** Tagger/PS workflow
** BCAL LED monitoring
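As a sketch, the pass structure above amounts to building one hd_root invocation per step, each loading a list of calibration plugins. The wrapper below is hypothetical (the real train uses the scripts described under Procedures); the plugin lists are those given in the text.

```python
# Sketch: assemble hd_root command lines for the two calibration passes.
# The driver itself is illustrative; plugin lists come from the list above.
PASS1_STEPS = [
    ["RF_online"],
    ["HLDetectorTiming", "TOF_TDC_shift"],
    ["st_tw_corr_auto"],  # results are not committed
    ["HLDetectorTiming", "CDC_amp", "BCAL_TDC_Timing"],
]

PASS2_PLUGINS = [
    "BCAL_attenlength_gainratio", "BCAL_LEDonline", "CDC_amp",
    "CDC_TimeToDistance", "FCALpedestals", "FCALpulsepeak",
    "FCAL_TimingOffsets", "HLDetectorTiming", "imaging", "PSC_TW",
    "PS_timing", "pedestals", "ST_Propagation_Time",
]

def hd_root_cmd(plugins, evio_file):
    """Build an hd_root command line loading the given plugins."""
    return "hd_root -PPLUGINS={} {}".format(",".join(plugins), evio_file)

# Pass 1: sequential steps over a single file of the run
for step, plugins in enumerate(PASS1_STEPS, start=1):
    print("pass1 step {}: {}".format(step, hd_root_cmd(plugins, "file_000.evio")))

# Pass 2: all full-run calibrations over each file of the run
print("pass2: " + hd_root_cmd(PASS2_PLUGINS, "file_000.evio"))
```

Each pass 1 step runs over the same file, since some plugins (e.g. track-based timing) depend on constants produced by the previous step.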

== Calibration run plan ==

The two priority items are to recalibrate the TOF with its new running conditions and to verify calibrations with the new fADC250 firmware. In any case, all calibrations should be checked.

# Standalone calibrations (no forward tracking)
#* ✓ RF time (Paul M.)
#* Pedestals (System owners)
#* BCAL Attenuation length/gain ratio (Mark D.)
#* CDC time to distance (Mike S.)
#* CDC gain (Naomi)
#* ✓ SC timewalk (Mahmoud)
#* TOF timing (offsets/timewalks) (Beni)
#* ✓ TAGM timing (Alex B.)
#* ✓ TAGH timing (Nathan)
#* ✓ PS timing (Nathan)
#* ✓ Overall timing (rough) (Mike S./Sean)
# Full tracking calibrations
#* ✓ BCAL effective velocities (George)
#* ✓ Overall timing (Mike S./Sean)
#* ✓ SC propagation time (Mahmoud)
# ✓ BCAL/FCAL pi0 calibrations (Adesh/Will M.)

All calibrations except the pi0 calibration should require no more than 1-3 two-hour runs of data.

=== Frequency ===

* Timing will be checked for each run. The known variations are:
** TOF (run-to-run)
** Tagger (varies on a timescale of roughly less than a day; corrected run-to-run in the spring)
* CDC gains vary with temperature and pressure and can be averaged over a 1-2 hour run.

All other calibrations have so far been observed to be stable on a timescale of several weeks.

=== To-dos ===

# <strike>Finish skim improvements</strike>
# Automate the constants -> CCDB pipeline
# Implement tracking database
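The second item could start as a thin script that turns a pass's constants files into ccdb add commands. A rough sketch follows; the file-to-table mapping, the example table path, and the exact ccdb CLI flags are all assumptions here, so check the CCDB documentation before relying on them.

```python
# Sketch: build "ccdb add" commands for a directory of constants files.
# The table paths and CLI syntax are illustrative, not a tested recipe.
import os

def ccdb_add_cmds(constants_dir, run_min, run_max, table_map):
    """Map constants files to ccdb add commands for a run range.

    table_map maps a constants file name to its CCDB table path.
    """
    cmds = []
    for fname, table in sorted(table_map.items()):
        path = os.path.join(constants_dir, fname)
        cmds.append("ccdb add {} -r {}-{} {}".format(table, run_min, run_max, path))
    return cmds

# Hypothetical example: one RF offsets file for the ver01 run range
cmds = ccdb_add_cmds("pass1", 30274, 30621,
                     {"rf_offsets.txt": "/PHOTON_BEAM/RF/time_offset"})
print(cmds[0])
```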

= Procedures =

=== How to start a new run period ===

# Edit the configuration file, e.g. $CALIBRATION_TRAIN/configs/data.config
#* Change the job name to the current run period and set resource limits
# Edit $CALIBRATION_TRAIN/template/job_wrapper.sh
#* Set the run period and version number
# Set up the files and directories
#* e.g. python setup_run.py configs/data.config
# Create the SWIF workflows
#* swif create -workflow GXCalib-2017-01-pass1
#* swif create -workflow GXCalib-2017-01-pass2
# Create the SQLite CCDB
#* $CCDB_HOME/scripts/mysql2sqlite/mysql2sqlite.sh -hhallddb.jlab.org -uccdb_user ccdb | sqlite3 ccdb.sqlite
#* mv ccdb.sqlite somewhere
# Launch the pass 1 jobs
#* python run_jobs_p1.py 2017-01 run_lists/f17.test
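Steps 4-6 reduce to a few shell commands; as a sketch, they can be generated for any run period. Nothing below is executed, the command strings are just assembled for review, and the GXCalib-&lt;period&gt;-passN workflow naming simply follows the examples above.

```python
def new_run_period_cmds(run_period, run_list):
    """Assemble the shell commands for steps 4-6 of the procedure above."""
    cmds = []
    # Step 4: one SWIF workflow per pass
    for p in (1, 2):
        cmds.append("swif create -workflow GXCalib-{}-pass{}".format(run_period, p))
    # Step 5: snapshot the MySQL CCDB into a local SQLite file
    cmds.append("$CCDB_HOME/scripts/mysql2sqlite/mysql2sqlite.sh "
                "-hhallddb.jlab.org -uccdb_user ccdb | sqlite3 ccdb.sqlite")
    # Step 6: launch the pass 1 jobs over the run list
    cmds.append("python run_jobs_p1.py {} {}".format(run_period, run_list))
    return cmds

for c in new_run_period_cmds("2017-01", "run_lists/f17.test"):
    print(c)
```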

= Calibration Outputs =

== RunPeriod-2017-01 ==

{| class="wikitable"
! Run Range
! Version tag
! Total # Jobs
! Notes
! Total Skim Size (TB)
! BCAL pi0 (GB)
! BCAL LED (GB)
! FCAL pi0 (GB)
! FCAL LED (GB)
! PS Skim (TB)
! TOF Skim (TB)
|-
| 30274 - 30621 || ver01 || 13362 || || 11.2 || 338 || 74 || 862 || 147 || 6.36 || 1.63
|-
| 30622 - 30959 || ver02 || 13783 || looser BCAL pi0 cuts || 23.3 || 2379 || 131 || 1703 || 297 || 12.7 || 3.36
|-
| 30960 - || ver03 || 14869 || looser BCAL pi0 cuts || 13.4 || 1368 || 71 || 1000 || 142 || 7.47 || 1.99
|}
* Skim files can be found in the following directory: /cache/halld/RunPeriod-2017-01/calib/ver01
** BCAL-LED - BCAL LED triggered events
** BCAL_pi0 - BCAL pi0 candidates
** FCAL-LED - FCAL LED triggered events
** FCAL_pi0 - FCAL pi0 candidates
** PS - PS triggered events
** random - random (out-of-time) triggered events
** sync - TS sync events
** TOF - TOF calibration ROOT skim
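A small helper can locate skims of a given type under that directory. The per-run file naming pattern below (run numbers zero-padded to six digits) is an assumption, not something documented above.

```python
# Sketch: look up skim files by type, optionally filtered by run number.
import glob
import os

CALIB_DIR = "/cache/halld/RunPeriod-2017-01/calib/ver01"
SKIM_TYPES = ("BCAL-LED", "BCAL_pi0", "FCAL-LED", "FCAL_pi0",
              "PS", "random", "sync", "TOF")

def skim_files(skim_type, run=None, calib_dir=CALIB_DIR):
    """Return the skim files of one type, optionally for a single run.

    Assumes run numbers appear zero-padded to six digits in the file
    names; adjust the pattern if the actual naming differs.
    """
    if skim_type not in SKIM_TYPES:
        raise ValueError("unknown skim type: {}".format(skim_type))
    pattern = "*{:06d}*".format(run) if run is not None else "*"
    return sorted(glob.glob(os.path.join(calib_dir, skim_type, pattern)))
```

For example, skim_files("BCAL_pi0", run=30274) would glob /cache/halld/RunPeriod-2017-01/calib/ver01/BCAL_pi0/*030274*.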