Offline Analysis Commissioning

From Hall D Ops Wiki

__TOC__

==Introduction==
* This document describes the goals & plans for the data monitoring that will be carried out during the fall 2014 commissioning run of Hall D.
== Commissioning Tests ==

=== Raw Data ===
* Are the detectors working?
** TAGH, TAGM, CDC, FDC, SC, TOF, BCAL, FCAL, PS (Pair Spectrometer)
** Do all of the channels have hits?
** What are the hit counts/rates per channel?
** Are the energies & times OK, garbage, or out of range? (by channel)
* Can we read data from tape?
* Can we reproduce the online histograms with offline data?
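One way to check whether the offline histograms reproduce the online ones is a bin-by-bin comparison. A minimal sketch, assuming the histogram contents have already been extracted as plain bin-count lists (in practice both would be read from the online and offline ROOT files):

```python
# Sketch: bin-by-bin comparison of an online histogram with its offline
# reproduction.  The bin-content lists here are hypothetical stand-ins for
# histograms read from the online and offline ROOT files.

def compare_histograms(online, offline, rel_tol=0.0):
    """Return the indices of bins where the two histograms disagree."""
    if len(online) != len(offline):
        raise ValueError("binning mismatch")
    bad = []
    for i, (a, b) in enumerate(zip(online, offline)):
        scale = max(abs(a), abs(b), 1.0)
        if abs(a - b) > rel_tol * scale:
            bad.append(i)
    return bad

online_hits  = [120, 340, 560, 80]   # e.g. hits per channel, from RootSpy
offline_hits = [120, 340, 561, 80]   # same plugin run on the EVIO file from tape
print(compare_histograms(online_hits, offline_hits))                 # -> [2]
print(compare_histograms(online_hits, offline_hits, rel_tol=0.01))   # -> []
```

With `rel_tol=0.0` the check demands exact agreement; a small tolerance allows for differences such as events dropped at run boundaries.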
  
=== Reconstruction Quality Monitoring ===
* What is the calibration quality of each system?
** TAGH, TAGM, CDC, FDC, SC, TOF, BCAL, FCAL, PS
* Can we perform reconstruction for each system? (Tracks, showers, etc.)
* Are there any regions of the detector where reconstruction is inefficient?
* Are tracks being properly matched to hits in the other detectors?
* What is the quality of the particle ID?
  
=== Analysis Quality Monitoring ===
* Can we see &pi;<sup>0</sup> peaks?
* Can we see simple final states?
** &gamma; p &rarr; p &pi;<sup>+</sup> &pi;<sup>-</sup>
** &gamma; p &rarr; p &pi;<sup>+</sup> &pi;<sup>-</sup> &pi;<sup>0</sup>
** &gamma; p &rarr; p &pi;<sup>+</sup> &pi;<sup>-</sup> &eta;
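The &pi;<sup>0</sup> check amounts to histogramming the two-photon invariant mass and looking for a peak near 0.135 GeV. A sketch of the mass calculation, using hypothetical photon four-momenta (in practice these come from the reconstructed FCAL/BCAL showers):

```python
import math

def gg_invariant_mass(p1, p2):
    """Invariant mass of a photon pair; each p = (E, px, py, pz) in GeV."""
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    m2 = E * E - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))   # guard against tiny negative round-off

# Two hypothetical photons from a symmetric pi0 decay: E1 = E2 = 0.5 GeV,
# opening angle chosen so that m = sqrt(2*E1*E2*(1 - cos(theta))) ~ 0.135 GeV.
g1 = (0.5, 0.0,     0.0, 0.5)
g2 = (0.5, 0.13376, 0.0, 0.481775)
print(round(gg_invariant_mass(g1, g2), 3))   # -> 0.135
```

Filling this mass for all photon pairs in each event, run by run, gives a quick figure of merit for the calorimeter energy calibrations: a shifted or absent peak points to a calibration problem.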
  
== Offline Data Monitoring Plans ==

=== Raw Data ===
* Pull key data/histograms from the online histograms, put the data in a database, and make the history & plots viewable on a webpage.
* Periodically test that we can reproduce the online histograms using on-tape EVIO data.

=== Calibration/Reconstruction Quality ===
* Periodically submit jobs to monitor the calibration & reconstruction quality of the data on tape.
** Just a file or two from each run.
** Submit either at some fixed time interval (every 2 weeks?) or perhaps after big changes.
** Save key data in the database, and make the history & key plots viewable on a webpage.
* When ready to do full reconstruction, run the calibration & reconstruction quality plugins on all files, save key data to the database, and make them viewable on the webpage.

=== Analysis Quality ===
* Periodically submit analysis jobs to study &pi;<sup>0</sup>'s and simple final states, show results at meetings.
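The "file or two from each run" selection for the periodic jobs could be scripted along these lines. The EVIO file-naming pattern below is purely illustrative, not the real raw-data naming scheme:

```python
import re
from collections import defaultdict

# Hypothetical file-name pattern "run<RUN>_<FILE>.evio"; the actual raw-data
# naming convention on tape may differ.
NAME_RE = re.compile(r"run(\d+)_(\d+)\.evio$")

def select_monitoring_files(evio_files, files_per_run=2):
    """Pick the first few files of each run for the periodic monitoring jobs."""
    by_run = defaultdict(list)
    for name in evio_files:
        m = NAME_RE.search(name)
        if m:
            by_run[int(m.group(1))].append((int(m.group(2)), name))
    picked = []
    for run in sorted(by_run):
        picked.extend(name for _, name in sorted(by_run[run])[:files_per_run])
    return picked

files = ["run1001_003.evio", "run1001_000.evio",
         "run1001_001.evio", "run1002_000.evio"]
print(select_monitoring_files(files))
# -> ['run1001_000.evio', 'run1001_001.evio', 'run1002_000.evio']
```

Taking the first files of each run keeps the job list small while still sampling every run; the same list-building step can later be switched to all files when full reconstruction is ready.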
== Action Items ==

=== David/Sergey/Online ===
* David and the online group will manage the online monitoring environment (RootSpy, hdview2, etc.).
* Make sure the online monitoring histograms are stored on the ifarm work disk somewhere for quick offline access, and are archived to tape.
* Make sure that the run conditions information stored in the online database is query-able.

=== Detector groups ===
* Writing the raw data monitoring plugins for their systems.
** [https://halldsvn.jlab.org/repos/trunk/online/packages/monitoring/src/plugins/ SVN Plugins]
** [https://halldweb1.jlab.org/wiki/index.php/Online_Monitoring_plugins Instructions/Documentation]
* Determining which raw data plots are the primary plots (most important for shift-takers) and which are diagnostic plots.
* Integrating their raw data histograms into RootSpy.
* Writing the calibration scripts/programs/plugins for their systems, and updating the reconstruction software as needed.

=== Paul ===
* Will set up and maintain the offline reconstruction software build for monitoring (help/direction from Simon/Mark?).
* Will integrate the monitoring_hists plugin (reconstruction) plots into RootSpy.

=== Kei ===
* Will write and (periodically) submit jobs to the farm to:
** Test the raw data to see whether we can reproduce the online histograms offline.
** Produce updated calibration & reconstruction quality histograms (a few files per run).
** Study &pi;<sup>0</sup> reconstruction and simple final states, and will show the results at meetings.
* Will maintain/organize the histogram files on the GlueX work disk.

=== Sean ===
* Will build an (sqlite) database for storing key data monitoring information (run meta info, and entries for each EVIO file).
* Will write and launch scripts that pull key data from the online monitoring histograms (after each run ends) and store them in the database.
* Will write and launch scripts that pull key data from the calibration/reconstruction quality histograms (that Kei makes) and store them in the database.
** All of these scripts should also grab key plots for the webpage and save the png(s) to disk.

=== Justin ===
* Will build webpage(s) for viewing the primary plots of:
** Raw data: for all runs, and the past run history (trends of key data from the database).
** Reconstruction & calibration quality: for each run, and trends of key data for all runs.
* Will work with Sean to write scripts that query the database for data and grab the histogram png(s) to update the webpages.
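A minimal sketch of the sqlite schema the monitoring database might use. The table and column names here are hypothetical; the plan above only specifies run meta info plus an entry per EVIO file:

```python
import sqlite3

def create_monitoring_db(path=":memory:"):
    """Create a hypothetical schema: run metadata plus one row per EVIO file."""
    conn = sqlite3.connect(path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS runs (
            run_number   INTEGER PRIMARY KEY,
            start_time   TEXT,
            num_events   INTEGER
        );
        CREATE TABLE IF NOT EXISTS file_hists (
            run_number     INTEGER REFERENCES runs(run_number),
            file_number    INTEGER,
            detector       TEXT,    -- e.g. 'CDC', 'BCAL'
            hits_per_event REAL,    -- key value pulled from the histogram
            PRIMARY KEY (run_number, file_number, detector)
        );
    """)
    return conn

# The pull-key-data scripts would insert one row per file/detector...
conn = create_monitoring_db()
conn.execute("INSERT INTO runs VALUES (1001, '2014-10-01', 500000)")
conn.execute("INSERT INTO file_hists VALUES (1001, 0, 'CDC', 12.5)")

# ...and the webpage scripts would query trends, e.g. hits/event vs run number.
for row in conn.execute("""SELECT run_number, hits_per_event FROM file_hists
                           WHERE detector = 'CDC' ORDER BY run_number"""):
    print(row)   # -> (1001, 12.5)
```

Keying `file_hists` on (run, file, detector) lets the periodic jobs re-run a file and overwrite its entry without duplicating rows.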
  
 
Latest revision as of 11:29, 24 September 2014