==Post-Processing Procedures==
 
 
To visualize the monitoring data, we save images of selected histograms and store time series of selected quantities in a database, which are then displayed on the monitoring web pages. The results from different raw data files in a run are also combined into single ROOT and REST files. This section describes how to generate the monitoring images and database information.
 
 
The post-processing scripts generally perform the following steps for each run:
 
 
# Summarize monitoring information from each EVIO file, store this information in a database
 
# Merge the monitoring ROOT files into a single file for the run
 
# Generate summary monitoring information for the run and store it in a database
 
# Generate summary monitoring plots and store these in a web-accessible location
 
# Merge the REST files generated by the monitoring jobs into a single file for each run
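
As a rough illustration of the merging in steps 2 and 5, the combination can be reproduced by hand with ROOT's hadd utility along the following lines. The run number, version, and directory layout below are placeholders; the scripts construct the real paths themselves.

<syntaxhighlight>
# Minimal sketch: merge the per-file monitoring ROOT files for one run into a single file.
# The run number (003180), version (ver15), and subdirectory names are placeholders.
hadd -f hd_monitoring_003180_merged.root \
    /volatile/halld/offline_monitoring/RunPeriod-2015-03/ver15/hists/003180/*.root

# The per-run REST files (which are ROOT trees) can be combined the same way.
hadd -f dana_rest_003180_merged.root \
    /volatile/halld/offline_monitoring/RunPeriod-2015-03/ver15/REST/003180/*.root
</syntaxhighlight>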
 
 
The scripts used to generate this summary data are primarily run from /home/gxprojN/monitoring/process, i.e., the same account from which the monitoring launch was performed.  If you want a fresh copy of the scripts, e.g., for a new monitoring run, you should check the scripts out from SVN:
 
<syntaxhighlight>
 
svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/process
 
</syntaxhighlight>
 
 
Note that these scripts depend on standard GlueX environment definitions to load the python modules needed to access MySQL databases.
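
For example, before running any of the processing scripts interactively, one would first load that environment. The setup file names below are the ones referenced in the step-by-step instructions further down; use whichever applies to your account.

<syntaxhighlight>
# Load the standard GlueX environment so the MySQL python modules are available,
# then invoke the offline post-processing driver script.
source $HOME/env_monitoring_launch
$HOME/monitoring/process/check_monitoring_data.csh
</syntaxhighlight>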
 
 
===Online Monitoring===
 
 
There are two primary scripts for running over the monitoring data generated by the online system. The online script can be run with either of the following commands:
 
<syntaxhighlight>
 
/home/gluex/halld/monitoring/process/check_new_runs.py
 
 
OR
 
 
/home/gluex/halld/monitoring/process/check_new_runs.csh
 
</syntaxhighlight>
 
The shell script is the one appropriate for use in a cron job.  The cron job is currently run under the "gluex" account.
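
A corresponding crontab entry might look like the following (the schedule shown is illustrative, not necessarily the one currently installed under the "gluex" account):

<syntaxhighlight>
# Illustrative crontab entry: check for new online monitoring ROOT files every 10 minutes.
*/10 * * * * /home/gluex/halld/monitoring/process/check_new_runs.csh
</syntaxhighlight>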
 
 
The online monitoring system copies a ROOT file containing the results of the online monitoring, along with other configuration files, into a directory accessible outside the counting house.  This Python script checks for new ROOT files and automatically processes them.  It contains several configuration variables that must be set correctly, specifying the locations of the input/output directories, etc.  Currently it loads run meta-information from the run conditions text file that is also copied out by the online system; this may change when the RCDB is fully online.
 
 
'''IMPORTANT''' - When a new run period is started, a new data version must be created, and the scripts updated to reflect the new run period.  You may want to update the run number range to scan as well.
 
 
===Offline Monitoring===
 
 
After the data has been run over, the results should be processed so that summary data is entered into the monitoring database and plots are made for the monitoring web pages.  Currently, this processing is controlled by a cron job that runs the following script:
 
<syntaxhighlight>
 
/home/gxproj1/halld/monitoring/process/check_monitoring_data.csh 
 
</syntaxhighlight>
 
The default behavior of this script is as follows: it checks for new ROOT files and only runs over those it has not yet processed.  Since one monitoring ROOT file is produced for each EVIO file, whenever a new file appears the plots for the corresponding run are recreated and all the ROOT and REST files for that run are combined into single files.  Information is stored in the database both on a per-file basis and for the whole run.
 
 
This procedure has many options, and many of these steps can be toggled on and off.  Look at the output of "process_new_offline_data.py -h" for more information.
 
 
Plots for the monitoring web page can be made from single histograms or multiple histograms using RootSpy macros.  If you want to change the list of plots made, you must modify one of the following files:
 
* histograms_to_monitor - specify either the name of the histogram or its full ROOT path
 
* macros_to_monitor - specify the full path to the RootSpy macro .C file
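
For illustration, the entries in these files are simple one-per-line lists; the histogram and macro names below are placeholders, not necessarily ones that exist in the monitoring output.

<syntaxhighlight>
# Placeholder examples only -- the histogram and macro names are illustrative.
cd $HOME/monitoring/process
echo "FCAL_occupancy" >> histograms_to_monitor                       # bare histogram name
echo "/CDC/cdc_num_events" >> histograms_to_monitor                  # full ROOT path
echo "$HOME/monitoring/macros/cdc_occupancy.C" >> macros_to_monitor  # full path to a RootSpy macro .C file
</syntaxhighlight>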
 
 
Note that the most time-consuming parts of this process are the merging of the ROOT and REST files.
 
 
===Step-by-Step Instructions For Processing a New Offline Monitoring Run===
 
 
The monitoring launches are currently run out of the gxproj1 and gxproj5 accounts.  After an offline monitoring launch has been successfully started on the batch farm, the following steps should be followed to set up the post-processing for these runs.
 
 
# The post-processing scripts are stored in $HOME/monitoring/process and are automatically run by cron.
 
# Run "svn update" to bring any changes in.  Be sure that the list of histograms and macros to plot are current.
 
# Add a new data version [as described below]
 
# Edit check_monitoring_data.csh to point to the current revisions/directories (see the example after this list):
 
#* RUNPERIOD
 
#* VERSION
 
#* ARGS
 
#* Note that the environment depends on a standard script - $HOME/setup_jlab.csh or $HOME/env_monitoring_launch
 
# Update files in the web directory, so that the results are displayed on the web pages:  /group/halld/www/halldweb/html/data_monitoring/textdata
 
# The current policy is to keep the REST files on the volatile disk and allow them to be deleted according to that disk's cleanup policy; the latest version of the files should always be available.  The REST files can also be copied to more permanent locations:
 
#* cp -a /volatile/halld/offline_monitoring/RunPeriod-YYYY-MM/verVV/REST /cache/halld/RunPeriod-YYYY-MM/REST/verVV  [under testing]
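
The variables that typically need editing in check_monitoring_data.csh look roughly like this; the values are examples for an imaginary launch, so check the script itself for the exact variable names and the options it passes to process_new_offline_data.py.

<syntaxhighlight>
# Illustrative csh settings for a new monitoring launch -- values are examples only.
set RUNPERIOD = "RunPeriod-2015-03"
set VERSION   = "15"
set ARGS      = ""    # extra options forwarded to process_new_offline_data.py
</syntaxhighlight>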
 
 
Check the log files in $HOME/monitoring/process/log for more information on how each run went.  If there are problems, modify check_monitoring_data.csh to vary the verbosity of the output.
 
  
 

==Master List of File / Database / Webpage Locations==

===Run Conditions===

* Online run-by-run condition files (B-field, current, etc.): /work/halld/online_monitoring/conditions/
* Offline monitoring run conditions (software versions, JANA config): /group/halld/data_monitoring/run_conditions/
* Run Info vers. 1
* Run Info vers. 2
* RCDB

===Monitoring Output Files===

* Run period 201Y-MM is, for example, 2015-03; launch version verVV is, for example, ver15
* Online monitoring histograms: /work/halld/online_monitoring/root/
* Offline monitoring histogram ROOT files (merged): /work/halld/data_monitoring/RunPeriod-201Y-MM/verVV/rootfiles
* Individual files for each job (ROOT, REST, log, etc.): /volatile/halld/offline_monitoring/RunPeriod-201Y-MM/verVV/

===Monitoring Database===

* Accessing the monitoring database (on ifarm): mysql -u datmon -h hallddb.jlab.org data_monitoring
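
For a quick check that the connection works, a single statement can be run directly, e.g.:

<syntaxhighlight>
# List the tables in the monitoring database from an ifarm node.
mysql -u datmon -h hallddb.jlab.org data_monitoring -e "SHOW TABLES;"
</syntaxhighlight>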

===Monitoring Webpages===

===SciComp Job Links===

* Main
* Documentation
* Job Tracking

==Procedures==

===Saving Online Monitoring Data===

The procedure for writing the data out is given in, e.g., Raid-to-Silo Transfer Strategy.

Once the DAQ writes out the data to the raid disk, cron jobs will copy the file to tape, and within ~20 min., we will have access to the file on tape at /mss/halld/$RUN_PERIOD/rawdata/RunXXXXXX.

All online monitoring plugins will be run as data is taken. The resulting histograms will be accessible within the counting house via RootSpy, and for each run and file a ROOT file containing the histograms will be saved in a per-run subdirectory.

For immediate access to these files, the raid-disk copies may be read directly from the counting house; the copies on tape become available within ~20 min. of each file being written out.
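
For example, to verify that the raw data files for a given run have migrated to tape (the run period and run number here are placeholders):

<syntaxhighlight>
# Check the tape-library stub files for one run; run period and run number are illustrative.
ls /mss/halld/RunPeriod-2015-03/rawdata/Run003180/
</syntaxhighlight>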

===More Procedure Links===

===Running Over Data As It Comes In===

A special user gxproj1 will have a cron job set up to run the plugins as new data appears on /mss. During the week, gxproj1 will submit offline plugin jobs with the same setup as the weekly jobs run the previous Friday. The procedure for this is shown below.


====Running the cron job====

'''IMPORTANT''': The cron job should not be running while you are manually submitting jobs using the jproj.pl script for the same project, or else you will probably submit some jobs multiple times.

* Go to the cron job directory:
<syntaxhighlight>
cd /u/home/gxproj1/halld/monitoring/newruns
</syntaxhighlight>
* The cron_plugins file is the cron job that will be executed. During execution, it runs the exec.sh command in the same folder. This command takes two arguments: the project name and the maximum file number for each run. These fields should be updated in the cron_plugins file before running (see the example after this list).
* The exec.sh command updates the job management database table with any data that has arrived on tape since it was last updated, ignoring file numbers greater than the maximum file number. It then submits jobs for these files.
* To start the cron job, run:
<syntaxhighlight>
crontab cron_plugins
</syntaxhighlight>
* To check whether the cron job is running, do:
<syntaxhighlight>
crontab -l
</syntaxhighlight>
* To remove the cron job, do:
<syntaxhighlight>
crontab -r
</syntaxhighlight>
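
Putting the pieces together, the call that cron_plugins ends up making has the following shape; the project name and file limit below are placeholders, and the warning above applies before running anything by hand.

<syntaxhighlight>
# Shape of the exec.sh invocation made by cron_plugins -- both argument values are placeholders.
cd /u/home/gxproj1/halld/monitoring/newruns
./exec.sh my_monitoring_project 20
</syntaxhighlight>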


==Data Versions==

To document the conditions of the monitoring data that is created, we save several pieces of information for the sake of reproducibility and further analysis. The format is intended to be comprehensive enough to document not just monitoring data but also versions of raw and reconstructed data, so that this database table can be used for the event database as well.

We store one record per pass through one run period, with the following structure:

{| class="wikitable"
! Field !! Description
|-
| data_type || The level of data being processed; for the purposes of monitoring, "rawdata" is the online monitoring and "recon" is the offline monitoring
|-
| run_period || The run period of the data
|-
| revision || An integer specifying which pass through the run period this data corresponds to
|-
| software_version || The name of the XML file that specifies the different software versions used
|-
| jana_config || The name of the text file that specifies which JANA options were passed to the reconstruction program
|-
| ccdb_context || The value of JANA_CALIB_CONTEXT, which specifies the version of calibration constants that were used
|-
| production_time || The date at which monitoring/reconstruction began
|-
| dataVersionString || A convenient string for identifying this version of the data
|}


An example file used as input to ./register_new_version.py is:

<syntaxhighlight>
data_type           = recon
run_period          = RunPeriod-2014-10
revision            = 1
software_version    = soft_comm_2014_11_06.xml
jana_config         = jana_rawdata_comm_2014_11_06.conf
ccdb_context        = calibtime=2014-11-10
production_time     = 2014-11-10
dataVersionString   = recon_RunPeriod-2014-10_20141110_ver01
</syntaxhighlight>
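
Assuming register_new_version.py takes such a file as its argument (check the script's usage for the exact calling convention), registering the version would then look something like:

<syntaxhighlight>
# Register the new data version; the input file name is only an example.
cd $HOME/monitoring/process
./register_new_version.py recon_RunPeriod-2014-10_ver01.txt
</syntaxhighlight>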