Offline Monitoring Post Processing

From GlueXWiki
Revision as of 10:14, 3 February 2016 by Pmatt (Talk | contribs)


Overview

To visualize the monitoring data, we save images of selected histograms and store time series of selected quantities in a database, which are then displayed on the monitoring web pages. The results from the different raw data files in a run are also combined into single ROOT and REST files. This section describes how to generate the monitoring images and database information.

The post-processing scripts generally perform the following steps for each run:

  1. Summarize monitoring information from each EVIO file, store this information in a database
  2. Merge the monitoring ROOT files into a single file for the run
  3. Generate summary monitoring information for the run and store it in a database
  4. Generate summary monitoring plots and store these in a web-accessible location
  5. Merge the REST files generated by the monitoring jobs into a single file for each run
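Steps 2 and 5 above can be sketched as shell commands for a single run. This is only an illustration: the run number and file-name pattern are hypothetical, `hadd` is ROOT's standard histogram-file merger, and the REST (HDDM) merge is performed by a GlueX-specific tool, shown here only as a placeholder comment.

```shell
# Merge the per-EVIO-file monitoring ROOT files into one file for the run.
# Run number and naming scheme are illustrative, not the launch's real layout.
RUN=003180
hadd -f hd_monitoring_${RUN}.root hd_monitoring_${RUN}_*.root

# REST files are HDDM, not ROOT, so hadd does not apply; a GlueX-specific
# merge tool is used instead (placeholder, not a real command line):
# <rest-merge-tool> dana_rest_${RUN}.hddm dana_rest_${RUN}_*.hddm
```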

The scripts used to generate this summary data are primarily run from /home/gxprojN/monitoring/process, i.e., the same account from which the monitoring launch was performed. If you need a fresh copy of the scripts, e.g., for a new monitoring run, check them out from SVN:

svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/process

Note that these scripts depend on standard GlueX environment definitions to load the python modules needed to access MySQL databases.

Online Monitoring

There are two primary scripts for running over the monitoring data generated by the online system. The online script can be run with either of the following commands:

/home/gluex/halld/monitoring/process/check_new_runs.py
 
OR 
 
/home/gluex/halld/monitoring/process/check_new_runs.csh

The shell script is appropriate for use in a cron job. The cron job is currently run under the "gluex" account.
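A crontab entry for this purpose might look like the following. The script path is the one given above; the schedule and log location are assumptions, not the actual configuration of the "gluex" account.

```shell
# Hypothetical crontab entry: check for new online monitoring files every
# 10 minutes, appending all output to a log file for later inspection.
*/10 * * * * /home/gluex/halld/monitoring/process/check_new_runs.csh >> /home/gluex/halld/monitoring/process/log/check_new_runs.log 2>&1
```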

The online monitoring system copies a ROOT file containing the results of the online monitoring, along with other configuration files, into a directory accessible outside the counting house. The python script automatically checks for new ROOT files and processes them. It contains several configuration variables that must be set correctly, specifying the locations of the input/output directories, etc. Currently it loads run meta-information from the run conditions text file, which is also copied by the online system; this may change when the RCDB is fully online.
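The exact format of the run conditions text file is not documented here; assuming it consists of simple "key : value" lines (an assumption, not the verified format), the meta-information lookup can be sketched as:

```shell
#!/bin/sh
# Sketch: pull run metadata out of a "key : value" run conditions file.
# The file name, its location, and its format are assumptions; the real
# file copied by the online system may differ.
CONDFILE=${CONDFILE:-/tmp/run_conditions.txt}

# Illustrative file contents (hypothetical keys and values):
cat > "$CONDFILE" <<'EOF'
RUN_NUMBER : 3180
BEAM_CURRENT : 150 nA
SOLENOID : 1200 A
EOF

# Look up one key, printing its value.
get_condition() {
    awk -F' : ' -v key="$1" '$1 == key { print $2 }' "$CONDFILE"
}

get_condition RUN_NUMBER      # prints 3180
```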

IMPORTANT - When a new run period is started, a new data version must be created and the scripts updated to reflect the new run period. You may also want to update the range of run numbers to scan.

Offline Monitoring

After the monitoring jobs have run over the data, the results should be processed so that summary data is entered into the monitoring database and plots are made for the monitoring web pages. Currently, this processing is controlled by a cron job that runs the following script:

/home/gxproj1/halld/monitoring/process/check_monitoring_data.csh

By default, this script checks for new ROOT files and runs only over those it has not yet processed. Since one monitoring ROOT file is produced for each EVIO file, whenever a new file appears, the plots for the corresponding run are recreated and all of the ROOT and REST files for that run are combined into single files. Information is stored in the database both per file and per run.
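The "only run over new files" bookkeeping can be illustrated with a minimal shell sketch. The directory and list-file paths are hypothetical stand-ins, and the real script does database inserts and plot generation where this sketch just echoes; only the incremental logic is the point.

```shell
#!/bin/sh
# Sketch of the incremental logic: process only ROOT files that are not
# yet recorded in a plain-text "done" list. INDIR and DONELIST are
# hypothetical paths, not the launch's real configuration.
INDIR=${INDIR:-/tmp/gluex_mon_rootfiles}
DONELIST=${DONELIST:-/tmp/gluex_mon_processed.txt}

process_new_files() {
    mkdir -p "$INDIR"
    touch "$DONELIST"
    for f in "$INDIR"/*.root; do
        [ -e "$f" ] || continue                 # directory may be empty
        if ! grep -qxF "$f" "$DONELIST"; then
            echo "processing $f"                # real script fills the DB, remakes plots
            echo "$f" >> "$DONELIST"
        fi
    done
}

process_new_files
```

Running it a second time produces no output for files already in the done list, which is what lets the cron job fire frequently without redoing work.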

This procedure has many options, and many of these steps can be toggled on and off; see the output of "process_new_offline_data.py -h" for more information.

Plots for the monitoring web pages can be made from single histograms or from multiple histograms using RootSpy macros. To change the list of plots that are made, modify one of the following files:

  • histograms_to_monitor - specify either the name of the histogram or its full ROOT path
  • macros_to_monitor - specify the full path to the RootSpy macro .C file
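As an illustration, entries in the two files above might look like this. The histogram names, ROOT paths, and macro path are invented examples, not the production lists.

```shell
# histograms_to_monitor: one entry per line, either the bare histogram
# name or its full ROOT path (entries here are illustrative only)
cdc_occupancy
/FCAL/fcal_num_events

# macros_to_monitor: full path to each RootSpy macro .C file
# (path is a hypothetical example)
/home/gxproj1/monitoring/macros/occupancy_cdc.C
```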

Note that the most time-consuming parts of this process are merging the ROOT and REST files.

Step-by-Step Instructions For Processing a New Offline Monitoring Run

The monitoring launches are currently run out of the gxproj1 and gxproj5 accounts. After an offline monitoring launch has been successfully started on the batch farm, the following steps should be followed to set up the post-processing for these runs.

  1. The post-processing scripts are stored in $HOME/monitoring/process and are automatically run by cron.
  2. Run "svn update" to bring in any changes. Be sure that the lists of histograms and macros to plot are current.
  3. Add a new data version [as described below]
  4. Edit check_monitoring_data.csh to point to the current revisions/directories
    • RUNPERIOD
    • VERSION
    • ARGS
    • Note that the environment depends on a standard script - $HOME/setup_jlab.csh or $HOME/env_monitoring_launch
  5. Update files in the web directory, so that the results are displayed on the web pages: /group/halld/www/halldweb/html/data_monitoring/textdata
  6. The current policy is to keep the REST files on the volatile disk and let them be deleted according to that disk's cleanup policy; the latest version of the files should always be available. The REST files can also be copied to a more permanent location:
    • cp -a /volatile/halld/offline_monitoring/RunPeriod-YYYY-MM/verVV/REST /cache/halld/RunPeriod-YYYY-MM/REST/verVV [under testing]
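The edits in step 4 might look like the following. The variable names come from the list above, but the values and the argument string are placeholders: check the actual syntax in check_monitoring_data.csh and the output of "process_new_offline_data.py -h" before copying anything.

```shell
# Hypothetical values near the top of check_monitoring_data.csh for a new
# launch (csh syntax assumed; the real script may differ):
setenv RUNPERIOD "RunPeriod-2016-02"
setenv VERSION   "01"
# ARGS is passed through to process_new_offline_data.py; the flags shown
# here are placeholders, not its documented options.
setenv ARGS      "<options from process_new_offline_data.py -h>"
```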

Check the log files in $HOME/monitoring/process/log for more information on how each run went. If there are problems, inspect the log files and modify check_monitoring_data.csh to vary the verbosity of the output.