Data Monitoring Procedures


Master List of File / Database / Webpage Locations

Run Conditions

  • Online Run-by-run condition files (B-field, current, etc.): /work/halld/online_monitoring/conditions/
  • Offline monitoring run conditions (software versions, jana config): /work/halld/data_monitoring/run_conditions/
  • Run Info vers. 1
  • Run Info vers. 2

Monitoring ROOT Files

  • Online monitoring histograms: /work/halld/online_monitoring/root/
  • Offline monitoring histograms: /work/halld/data_monitoring/

Monitoring Database

  • Accessing monitoring database (on ifarm): mysql -u datmon -h hallddb.jlab.org data_monitoring
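    For example, to verify the connection and list the available tables (assuming the datmon account has read access), one can run:
    mysql -u datmon -h hallddb.jlab.org data_monitoring -e "show tables"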

Monitoring Webpages

Job Monitoring Links

Saving Online Monitoring Data

The procedure for writing the data out is given in, e.g., Raid-to-Silo Transfer Strategy.

Once the DAQ writes the data out to the raid disk, cron jobs copy each file to tape, and within ~20 min. the file will be accessible on tape at /mss/halld/$RUN_PERIOD/rawdata/RunXXXXXX.
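For example (the run number here is purely illustrative), the raw data files for a fall 2014 run could be listed on the tape stub area with:

ls /mss/halld/RunPeriod-2014-10/rawdata/Run003180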

All online monitoring plugins will be run as data is taken. The histograms will be accessible within the counting house via RootSpy, and for each run and file a ROOT file containing them will be saved in a subdirectory for that run.

For immediate access to these files, the raid disk copies may be read directly from the counting house; otherwise, the tape copies become available within ~20 min. of each file being written out.

Offline Monitoring: Running Over Archived Data

Once the files are written to tape, we can run the online plugins on them to confirm what we saw in the online monitoring. Manual scripts and cron jobs are set up to look for new data and run the plugins over a sample of files.

The gxproj1 account should be used for official offline monitoring jobs, including independent builds of HDDS, sim-recon, and the monitoring plugins. Every Friday, jobs are started that run the newest software on all previous runs. This allows everybody to see the improvements in each detector over the week.


Procedures

This section is mostly just for documentation and is intended for the person who will run the jobs periodically (currently Kei).

  1. Stop the incoming-data cron job by removing it with crontab -r. Also, delete all submitted jobs that have not started yet; if these jobs are left running, it will be unclear which software version they were run with.
  2. Set up the GlueX environment:
    source /home/gxproj1/setup_jlab.csh
  3. svn update & rebuild HDDS with
    cd $HDDS_HOME
     svn up
     scons install
  4. svn update & rebuild sim-recon with
    cd $HALLD_HOME
     svn up
    cd src
    scons install
  5. svn update & rebuild monitoring hists with
    cd /home/gxproj1/builds/online/packages/monitoring/src/plugins
     svn up
    scons -u install
  6. Prepare the latest sqlite file with:
    cp /group/halld/www/halldweb1/html/dist/ccdb.sqlite ~/
  7. Create the appropriate project(s) and submit the jobs using the Hall-D Job Management System, as detailed in the section below.
  8. Create an xml file containing the versions of the software used for the project. This should be stored in /work/halld/data_monitoring/run_conditions/ (where examples can also be found).
  9. Create a JANA config file containing the arguments passed to JANA for the project. This should be stored in /work/halld/data_monitoring/run_conditions/ (where examples can also be found); an illustrative sketch is given after this list.
  10. Restart cron jobs for immediate processing of runs coming in.
  11. Contact Sean to notify that there are new runs available in /volatile for copying over to the /work disk. He will merge the output ROOT files and update the monitoring webpages as the data comes in.
  12. Do not archive anything. Sean will also manage backing up the merged histograms to tape.
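For step 9, the JANA config file is typically a plain text file of parameter/value pairs, one per line. The sketch below is an illustration only: the plugin list and the timeout value are assumptions, and the actual settings for each launch are the files already kept in /work/halld/data_monitoring/run_conditions/ (e.g. jana_rawdata_comm_2014_11_06.conf).

# illustrative sketch only; real launch settings live in /work/halld/data_monitoring/run_conditions/
PLUGINS monitoring_hists       # assumed plugin list for the monitoring launch
THREAD_TIMEOUT 300             # assumed value, shown only to illustrate the key/value format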

Hall-D Job Management System

  • This section details how to create and launch a set of jobs using the Hall-D Job Management System developed by Mark Ito. These instructions are generic: the system is used for the weekly monitoring jobs, but can also be used for other sets of job launches.

Database Table Overview

  • Job management database table (<project_name>): For each input file, keeps track of whether or not a job for it has been submitted, along with other optional fields.
  • Job status database table (called <project_name>Job, with no space): For each job, keeps track of the job id, the job status, memory used, cpu & wall time, time taken to complete various stages (e.g. pending, dependency, active), and other fields.
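As an example of how these tables are used (the query itself is illustrative; the mysql connection details are the ones given in the Project Management section below), the number of jobs in each state can be summarized from the job status table with:

mysql -hhallddb -ufarmer farming -e "select status,count(*) from <project_name>Job group by status"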

Initialize Project Management

  • Log into the ifarm machine with one of the gxproj accounts
ssh gxproj1@ifarm -Y
  • Add the perl script directory to the current $PATH environment variable:
source ~/halld/jproj/scripts/setup.csh
  • Come up with a name for your job submission project. It will be a unique identifier for the current set of job submissions. For example, for the 10th pass over the 10/2014 data for the offline monitoring:
offline_monitoring_RunPeriod-2014_10_ver10

The name has been chosen to be as consistent as possible with other directory structures. However, mysql requires that "-" be escaped in table names, so unfortunately run periods will be given as 2YYY_MM instead of 2YYY-MM.

  • Note also that the output file name format changed during the 10/2014 commissioning run (hd_raw_* --> hd_rawdata_*). Since these scripts assume a fixed file name format, an additional identifier should be used for these runs, e.g.:
offline_monitoring_RunPeriod-2014-10_ver10_hd_rawdata, offline_monitoring_RunPeriod-2014-10_ver10_hd_raw
  • Create a new project directory and associated files. This can be done with
cd ~/halld/jproj/projects/
./create_project.sh [project name] [file type]

To get the script to work, the project name must contain strings of the form

    • RunPeriod-2YYY-MM
    • verXX

so that these variables can be set properly. The variable [file type] is either hd_rawdata or hd_raw (this is for fall 2014 data only) and specifies which files should be run over.
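For example, to set up a project for the fall 2014 data using the hd_rawdata file name format (the project name follows the example given above):

cd ~/halld/jproj/projects/
./create_project.sh offline_monitoring_RunPeriod-2014-10_ver10_hd_rawdata hd_rawdata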

  • For each project, descend into the new directory, and make changes to each file so that it will work for your project. These changes typically include:
    • Changing the project name in both the .jproj and .jsub file names, and in the contents of each file.
    • If the project version number has changed, update it in the contents of the .jsub file.
    • If the run period has changed, update it in the contents of each file (e.g. RunPeriod-2014-10 --> RunPeriod-2015-01).
    • If the path or file name format for the input files have changed, update them in the .jproj and .jsub files.
    • Any other changes to the execution script, environment variables, or job submission instructions can be made in the appropriate files.

Project File Overview

An overview of each project file:

  • clear.sh: For the current project, deletes the job status and management database tables (if any), and creates new, empty ones.
  • <project_name>.jproj: Contains the path and file name format for the input files for the jobs.
  • <project_name>.jsub: The xml job submission script. The run number and file number variables are set during job submission for each input file.
  • script.sh: The script that is executed during the job. If output job directories are not pre-created manually, they should be created in this script with the proper permissions:
mkdir -p -m 775 my_directory
  • setup_jlab.csh: The environment that is sourced at the beginning of the job execution.
  • status.sh: Updates the job status database table, and prints some of its columns to screen.

Project Management

  • Delete (if any) and create the database table(s) for the current set of job submissions:
./clear.sh
  • Search for input files matching the string in the .jproj file, and create a row for each in the job management database table (called <project_name>). You can test by adding an optional argument at the end, which only selects files with a specific file number:
jproj.pl <project_name> update <optional_file_number>
  • Confirm that the job management database is accurate by printing its contents to screen:
mysql -hhallddb -ufarmer farming -e "select * from <project_name>"
  • ONLY if a mistake was made, delete the tables from the database and recreate new, empty ones by running:
./clear.sh
  • Submit the unsubmitted jobs in the job management database, and add their job ids to the job status database:
jproj.pl <project_name> submit
  • To look at the status of the submitted jobs, first query auger and update the job status database:
fill_in_job_details.pl <project_name>
  • The job status can then be viewed by submitting a query to the job status database (called <project_name>Job, with no space in between):
mysql -hhallddb -ufarmer farming -e "select id,run,file,jobId,hostname,status,timeSubmitted,timeActive,walltime,cput,timeComplete,result,error from <project_name>Job"
  • These last two commands can instead be executed simultaneously by running:
./status.sh
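Putting these steps together, a typical launch of the fall 2014 example project would look like the following (substitute your own project name):

./clear.sh                                                              # create empty database tables
jproj.pl offline_monitoring_RunPeriod-2014-10_ver10_hd_rawdata update   # register the input files found on tape
jproj.pl offline_monitoring_RunPeriod-2014-10_ver10_hd_rawdata submit   # submit one job per unsubmitted file
./status.sh                                                             # query auger and print the job status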

Handy mysql Instructions

  • Some handy commands for working with the farming database:
mysql -hhallddb -ufarmer farming # Enter the "farming" mysql database on "hallddb" as user "farmer"
quit; # Exit mysql
show tables; # Show a list of the tables in the current database
show columns from <project_name>; # show all of the columns for the given table
select * from <project_name>; # show the contents of all rows from the given table

Running Over Data As It Comes In

A special user gxproj1 will have a cron job set up to run the plugins as new data appears on /mss. During the week, gxproj1 will submit offline plugin jobs with the same setup as the weekly jobs run the previous Friday. The procedure for this is shown below.


Running the cron job

IMPORTANT: The cron job should not be running while you are manually submitting jobs using the jproj.pl script for the same project, or else jobs will likely be submitted more than once.

  • Go to the cron job directory:
cd /u/home/gxproj1/halld/monitoring/newruns
  • The cron_plugins file is the crontab file that will be installed. During execution, it runs the exec.sh command in the same folder. This command takes two arguments: the project name and the maximum file number to process for each run. These fields should be updated in cron_plugins before installing it (see the illustrative entry after this list).
  • The exec.sh command updates the job management database table with any data that has arrived on tape since it was last updated, ignoring file numbers greater than the maximum file number. It then submits jobs for these files.
  • To start the cron job, run:
crontab cron_plugins
  • To check whether the cron job is running, do
crontab -l
  • To remove the cron job do
crontab -r
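As an illustration (the schedule, project name, and maximum file number below are assumptions for this sketch, not the official settings), an entry in cron_plugins might look like:

# run exec.sh at the top of every hour; arguments are the project name and the maximum file number
0 * * * * /u/home/gxproj1/halld/monitoring/newruns/exec.sh offline_monitoring_RunPeriod-2014-10_ver10_hd_rawdata 4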


Extracting Summary Data

For high-level monitoring, we save images of selected histograms and store time series of selected quantities in a database, which are then displayed on a web page. This section describes how to generate the monitoring images and database information.

The scripts used to generate this summary data are currently kept in /u/home/gluex/halld/monitoring/process. Note that these scripts currently have some parameters which must be periodically set by hand.

The default python version on most JLab machines does not have the modules that allow these scripts to connect to the MySQL database. To run these scripts, load the environment with the following command:

source /home/gxproj1/halld/monitoring/process/monitoring_env.csh

Online Monitoring

There are two scripts for running over the monitoring data generated by the online system and offline reconstruction. The online script can be run with either of the following commands:

/home/gxproj1/halld/monitoring/process/check_new_runs.py
 
OR 
 
/home/gxproj1/halld/monitoring/process/check_new_runs.csh

The shell script sets up the environment properly and then runs the python script. To connect to the monitoring database on the JLab CUE, modules contained in a python >= 2.7 installation are needed. The shell script is appropriate to use in a cron job.

The online monitoring system copies a ROOT file containing the results of the online monitoring, along with other configuration files, into a directory accessible outside the counting house. The python script automatically checks for new ROOT files and processes them. It contains several configuration variables that must be correctly set, which specify the locations of the input/output directories, etc.

Note that while this script is currently run as a cronjob, the processing of online ROOT files is disabled, so its only function is to update the run_info database.

Offline Monitoring

After the data is run over, the results should be processed, so that summary data is entered into the monitoring database and plots are made for the monitoring webpages. Currently, this processing is controlled by a cronjob that runs the following script:

/home/gxproj1/halld/monitoring/process/check_monitoring_data.csh

This script checks for new ROOT files, and only runs over those it hasn't processed yet. Since one monitoring ROOT file is produced for each EVIO file, whenever a new file is produced, the plots for the corresponding run are recreated and all the ROOT files for a run are combined into one file. Information is stored in the database on a per-file basis.

Plots for the monitoring web page can be made from single histograms or multiple histograms using RootSpy macros. If you want to change the list of plots made, you must modify one of the following files:

  • histograms_to_monitor - specify either the name of the histogram or its full ROOT path
  • macros_to_monitor - specify the full path to the RootSpy macro .C file

When a new monitoring run is started, or the conditions are changed, the following steps should be taken to process the new files:

  1. Add a new data version, as described in the Data Versions section below.
  2. Change the following parameters in check_monitoring_data.csh:
    1. JOBDATE should correspond to the output date used by the job submission script
    2. OUTPUTDIR should point to the directory for the run period and revision of the new data version you just created. Presumably, this directory will be empty at the beginning.
    3. Once you create a new data version as defined below, you should pass the needed information as a command line option. Currently this is done via the ARGS variable. For example, the argument "-v RunPeriod-2014-10,8" tells the monitoring scripts to look up the version corresponding to revision 8 of RunPeriod-2014-10 in the monitoring DB and to use it when storing the results.
Example configuration parameters:
set JOBDATE=2015-01-09
set INPUTDIR=/volatile/halld/RunPeriod-2014-10/offline_monitoring
set OUTPUTDIR=/w/halld-scifs1a/data_monitoring/RunPeriod-2014-10/ver08
set ARGS=" -v RunPeriod-2014-10,8 "

If you want to process the results manually, the data is processed using the following script:

./process_new_offline_data.py <job date> <input directory> <output directory>
 
EXAMPLE:
 
./process_new_offline_data.py 2014-11-14 /volatile/halld/RunPeriod-2014-10/offline_monitoring/ /w/halld-scifs1a/data_monitoring/RunPeriod-2014-10/ver02

The python script takes several options to enable/disable various steps in the processing. Of interest is the "--force" option, which will run over all monitoring ROOT files, whether or not they've been previously identified.

Every time a new reconstruction pass is performed, a new version number must be generated. To do this, prepare a version file as described below, then run the register_new_version.py script to store the information in the database. The script will return a version number, which should then be set by hand in process_new_offline_data.py; future versions of the script will streamline this part of the procedure. An example of how to generate a new version is:

./register_new_version.py add /home/gxproj1/halld/monitoring/process/versions/vers_RunPeriod-2014-10_pass1.txt

Run Conditions

Currently the run_info database is being updated by Sean by hand. Note that this must be done inside the counting house. If you want to do this yourself, check out the monitoring scripts on a gluon machine:

svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/process/

In the process/get_conds directory, run the process_runlog_files.py script with the minimum and maximum run numbers that you want to process, e.g.

./process_runlog_files.py -b 2200 -e 2260

Data Versions

To document the conditions under which the monitoring data is created, we save several pieces of information for the sake of reproducibility and further analysis. The format is intended to be comprehensive enough to document not just monitoring data, but also versions of raw and reconstructed data, so that this database table can be used for the event database as well.

We store one record per pass through one run period, with the following structure:

  • data_type: The level of data we are processing. For the purposes of monitoring, "rawdata" is the online monitoring and "recon" is the offline monitoring.
  • run_period: The run period of the data.
  • revision: An integer specifying which pass through the run period this data corresponds to.
  • software_version: The name of the XML file that specifies the different software versions used.
  • jana_config: The name of the text file that specifies which JANA options were passed to the reconstruction program.
  • ccdb_context: The value of JANA_CALIB_CONTEXT, which specifies the version of calibration constants that were used.
  • production_time: The date at which monitoring/reconstruction began.
  • dataVersionString: A convenient string for identifying this version of the data.


An example file used as input to ./register_new_version.py is:

data_type           = recon
run_period          = RunPeriod-2014-10
revision            = 1
software_version    = soft_comm_2014_11_06.xml
jana_config         = jana_rawdata_comm_2014_11_06.conf
ccdb_context        = calibtime=2014-11-10
production_time     = 2014-11-10
dataVersionString   = recon_RunPeriod-2014-10_20141110_ver01