Offline Analysis Commissioning
From Hall D Ops Wiki
Introduction

This document describes the goals & plans for the data monitoring that will be carried out during the fall 2014 commissioning run of Hall D.
Commissioning Tests

Raw Data Monitoring
- Are the detectors working?
- TAGH, TAGM, CDC, FDC, SC, TOF, BCAL, FCAL, PS (Pair Spectrometer)
- Do all of the channels have hits?
- What are the hit counts/rates per channel?
- Are the energies & times OK, garbage, or out of range? (by channel)
- Can we read data from tape?
- Can we reproduce the online histograms with offline data?
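Checking that the offline software reproduces the online histograms amounts to a bin-by-bin comparison. A minimal sketch of that check, assuming the bin contents have already been extracted from the ROOT files into plain lists (so this has no ROOT dependency; function and tolerance names are illustrative):

```python
def histograms_match(online_bins, offline_bins, rel_tol=1e-6):
    """Compare two histograms bin by bin.

    online_bins / offline_bins are lists of bin contents, assumed to have
    been extracted beforehand (e.g. via TH1::GetBinContent in a ROOT macro).
    Returns (True, []) on a match, or (False, [mismatched bin indices]).
    """
    if len(online_bins) != len(offline_bins):
        return False, list(range(max(len(online_bins), len(offline_bins))))
    bad = []
    for i, (a, b) in enumerate(zip(online_bins, offline_bins)):
        # relative comparison, with an absolute floor for (near-)empty bins
        if abs(a - b) > rel_tol * max(abs(a), abs(b), 1.0):
            bad.append(i)
    return (not bad), bad

# Example: identical hit-count distributions should match with no bad bins
ok, bad = histograms_match([0, 12, 340, 87, 3], [0, 12, 340, 87, 3])
```

For event-by-event identical processing the match should be exact; a looser tolerance would be needed if, say, random seeds or calibration constants differ between the online and offline passes.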
Reconstruction Quality Monitoring
- What is the calibration quality of each system?
- TAGH, TAGM, CDC, FDC, SC, TOF, BCAL, FCAL, PS
- Can we perform reconstruction for each system? (Tracks, showers, etc.)
- Are there any regions of the detector where reconstruction is inefficient?
- Are tracks being properly matched to hits in the other detectors?
- What is the quality of the particle ID?
Analysis Quality Monitoring
- Can we see π0 peaks?
- Can we see simple final states?
- γ p → p π+ π-
- γ p → p π+ π- π0
- γ p → p π+ π- η
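The π0 check above amounts to histogramming the two-photon invariant mass and looking for a peak near 135 MeV. For massless photons the invariant mass reduces to m² = 2·E1·E2·(1 − cosθ); a small illustrative sketch (the photon energies below are made up, not real FCAL/BCAL data):

```python
import math

def gg_invariant_mass(e1, e2, opening_angle):
    """Invariant mass of a photon pair (same units as the energies),
    using m^2 = 2*E1*E2*(1 - cos(theta)) for massless photons."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

# Back-to-back 67.5 MeV photons reconstruct to 135 MeV, i.e. the pi0 mass
m = gg_invariant_mass(67.5, 67.5, math.pi)  # -> 135.0
```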
Offline Data Monitoring Plans
- Pull key data from the online monitoring histograms, store it in a database, and make the history & plots viewable on a webpage
- Periodically test that we can reproduce online histograms using on-tape EVIO data.
- Periodically submit jobs to monitor calibration & reconstruction quality of data on tape.
- Just a file or two from each run.
- Submit either at some fixed-time interval (every 2 weeks?) or perhaps after big changes.
- Save key data in the database, and make the history & key plots viewable on a webpage
- When ready to do full reconstruction, run calibration & reconstruction quality plugins on all files, save key data to database & make viewable on webpage.
- Periodically submit analysis jobs to study π0's and simple final states, show results at meetings.
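Selecting "a file or two from each run" for the periodic monitoring jobs is a grouping-and-slicing step over the EVIO file list. A minimal sketch; the file-name pattern hd_rawdata_RUN_FILE.evio is an assumption here, not necessarily the real Hall D naming convention:

```python
from collections import defaultdict

def select_monitoring_files(filenames, per_run=2):
    """Group EVIO files by run number and keep the first few per run.

    Assumes names like 'hd_rawdata_001234_005.evio' (run number, then
    file index); adjust the parsing for the actual naming convention.
    """
    by_run = defaultdict(list)
    for name in filenames:
        parts = name.replace(".evio", "").split("_")
        run, idx = int(parts[-2]), int(parts[-1])
        by_run[run].append((idx, name))
    selected = []
    for run in sorted(by_run):
        selected += [name for _, name in sorted(by_run[run])[:per_run]]
    return selected

files = ["hd_rawdata_001234_001.evio", "hd_rawdata_001234_000.evio",
         "hd_rawdata_001235_000.evio"]
chosen = select_monitoring_files(files)
```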
Action Items
- David and the online group will manage the online monitoring environment (RootSpy, hdview2, etc.), and will:
- Make sure online monitoring histograms are stored on the ifarm work disk somewhere for quick offline access, and archived to tape.
- Make sure that the run conditions information stored in the online database is queryable.
- Each detector group is responsible for:
- Writing the raw data monitoring plugins for their systems.
- Determining which raw data plots are the primary plots (most important for shift-takers) and which are diagnostic plots.
- Integrating their raw data histograms into RootSpy.
- Writing the calibration scripts/programs/plugins for their systems, and updating the reconstruction software as needed.
- Will set up and maintain the offline reconstruction software build for monitoring (help/direction from Simon/Mark?)
- Will integrate the monitoring_hists plugin (reconstruction) plots into RootSpy.
- Will write and (periodically) submit jobs to the farm to:
- Test the raw data to see whether we can reproduce the online histograms offline
- Produce updated calibration & reconstruction quality histograms (a few files per run)
- Study π0 reconstruction and simple final states, and will show results at meetings.
- Will maintain/organize the histogram files on the GlueX work disk.
- Will build (sqlite) database for storing key data monitoring information (run meta info, and entries for each EVIO file).
- Will write and launch scripts that pull key data from the online monitoring histograms (after each run ends) and store it in the database.
- Will write and launch scripts that pull key data from the calibration/reconstruction quality histograms (that Kei makes) and store it in the database.
- All of these scripts should also grab key plots for the webpage and save the png(s) to disk.
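The database piece above could start from a schema with one table of run metadata and one of per-EVIO-file entries. A minimal sqlite sketch; the table and column names below are placeholders, not an agreed-upon schema:

```python
import sqlite3

def init_db(path):
    """Create the monitoring tables if they don't exist yet."""
    con = sqlite3.connect(path)
    con.executescript("""
        CREATE TABLE IF NOT EXISTS runs (
            run         INTEGER PRIMARY KEY,
            start_time  TEXT,
            beam_energy REAL              -- GeV; placeholder metadata column
        );
        CREATE TABLE IF NOT EXISTS evio_files (
            run           INTEGER REFERENCES runs(run),
            file_index    INTEGER,
            n_events      INTEGER,
            mean_hit_rate REAL,           -- example 'key data' pulled from histograms
            PRIMARY KEY (run, file_index)
        );
    """)
    con.commit()
    return con

# Example with made-up numbers, using an in-memory database
con = init_db(":memory:")
con.execute("INSERT INTO runs VALUES (1234, '2014-10-01T12:00:00', 11.0)")
con.execute("INSERT INTO evio_files VALUES (1234, 0, 500000, 42.5)")
con.commit()
```

A flat sqlite file like this is easy to copy around and query from the webpage scripts; migrating to MySQL later would mostly mean swapping the connection call.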
- Will build webpage(s) for viewing primary plots of:
- Raw data for: All runs, past run history (trends of key data from database)
- Reconstruction & calibration quality for: Each run, & trends of key data for all runs.
- Will work with Sean to write scripts that query the database and grab histogram png(s) to update the webpages.
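The webpage-update scripts could query the database for the latest key values and emit a static HTML fragment that links to the saved png files. A hedged sketch, assuming the sqlite table and png file layout used here (plots/run_<run>.png) rather than any agreed-upon convention:

```python
import sqlite3

def render_run_table(con):
    """Return an HTML table of per-run key values, linking each run
    to its pre-saved histogram png (assumed path: plots/run_<run>.png)."""
    rows = con.execute(
        "SELECT run, n_events, mean_hit_rate FROM evio_files ORDER BY run"
    ).fetchall()
    html = ["<table>",
            "<tr><th>Run</th><th>Events</th><th>Mean hit rate</th></tr>"]
    for run, n_events, rate in rows:
        html.append(
            f"<tr><td><a href='plots/run_{run}.png'>{run}</a></td>"
            f"<td>{n_events}</td><td>{rate:.1f}</td></tr>"
        )
    html.append("</table>")
    return "\n".join(html)

# Example with made-up numbers, using an in-memory database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE evio_files (run INTEGER, n_events INTEGER, mean_hit_rate REAL)")
con.execute("INSERT INTO evio_files VALUES (1234, 500000, 42.5)")
page = render_run_table(con)
```

Regenerating static HTML each time a run ends keeps the webpage itself free of database dependencies.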