Data Monitoring Procedures


Master List of File / Database / Webpage Locations

Run Conditions

  • Online Run-by-run condition files (B-field, current, etc.): /work/halld/online_monitoring/conditions/
  • Offline monitoring run conditions (software versions, jana config): /group/halld/data_monitoring/run_conditions/
  • Run Info vers. 1
  • Run Info vers. 2

Monitoring Output Files

  • Run periods are written as 201Y-MM (for example, 2015-03) and launch versions as verVV (for example, ver15) in the paths below
  • Online monitoring histograms: /work/halld/online_monitoring/root/
  • Offline monitoring histogram ROOT files: /work/halld/data_monitoring/RunPeriod-201Y-MM/verVV/rootfiles
  • REST files (most recent launch only): /work/halld/data_monitoring/RunPeriod-201Y-MM/REST/verVV
  • individual files for each job (ROOT, log, etc.): /volatile/halld/offline_monitoring/RunPeriod-201Y-MM/verVV/
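As a concrete example of the placeholder substitution (assuming run period 2015-03 and launch ver15; adjust for the launch of interest), the offline histogram and REST locations resolve to:

ls /work/halld/data_monitoring/RunPeriod-2015-03/ver15/rootfiles    # offline monitoring histogram ROOT files
ls /work/halld/data_monitoring/RunPeriod-2015-03/REST/ver15         # REST files from the most recent launch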

Monitoring Database

  • Accessing monitoring database (on ifarm): mysql -u datmon -h hallddb.jlab.org data_monitoring
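As a quick check that the connection works, a minimal session (standard MySQL commands only) looks like:

mysql -u datmon -h hallddb.jlab.org data_monitoring
mysql> show tables;              # list the monitoring tables
mysql> describe <table_name>;    # inspect the columns of one table
mysql> quit;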

Monitoring Webpages

Job Monitoring Links

Saving Online Monitoring Data

The procedure for writing the data out is given in, e.g., Raid-to-Silo Transfer Strategy.

Once the DAQ writes out the data to the raid disk, cron jobs will copy the file to tape, and within ~20 min., we will have access to the file on tape at /mss/halld/$RUN_PERIOD/rawdata/RunXXXXXX.

All online monitoring plugins will be run as data is taken. They will be accessible within the counting house via RootSpy, and for each run and file, a ROOT file containing the histograms will be saved within a subdirectory for each run.

For immediate access, the files on the raid disk may be read directly from the counting house; otherwise the copies on tape become available within ~20 min. of each file being written out.
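For example, to see what has been written to tape and stage a file back to the cache disk, one might do the following (a minimal sketch; the file name is illustrative, and the jcache invocation is the usual JLab SciComp usage, which should be confirmed against the current SciComp documentation):

# Stub files under /mss mirror what has been written to tape
ls /mss/halld/RunPeriod-2015-03/rawdata/

# Request that a file be staged from tape to the cache disk (illustrative file name)
jcache get /mss/halld/RunPeriod-2015-03/rawdata/Run003180/hd_rawdata_003180_000.evio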

Offline Monitoring: Running Over Archived Data

Once files are written to tape, we run the online plugins on them to confirm what was seen in the online monitoring and to update the results with the latest calibration and software. Manual scripts and cron jobs are set up to look for new data and run the plugins over a sample of files.

Every other Friday (usually the Friday before the offline meetings), jobs will be started to run the newest software on all previous runs, allowing everybody to see improvements in each detector. For each launch, independent builds of hdds, sim-recon, and the monitoring plugins are made, and an sqlite file is generated.

Below the procedures are described for

  1. Preparing the software for the launch
  2. Starting the launch (using hdswif)
  3. Post-analysis of statistics of the launch

Processing the results and making them available to the collaboration is handled in the section Post-Processing Procedures below.


General Information on Procedures

This section explains how the offline monitoring should be run. Since we may want to simultaneously run offline monitoring for different run periods that require different environment variables, the scripts are set up so that a generic user can download the scripts and run them from anywhere. Most output directories for offline monitoring are created with group read/write permissions so that any Hall D group user has access to the contents, but there are some cases where use of the account that created the launch is necessary.

The accounts used for offline monitoring are the gxprojN accounts created and maintained by Mark Ito (see here for how each account is used). As of October 2015, the following are used:

  • gxproj1 for running over Fall 2014 data (deprecated since June 2015)
  • gxproj5 for running over Spring 2015 data

Since the summer of 2015 we have transitioned from a system using Mark Ito's jproj scripts to integrating the swif system that Chris Larrieu (SciComp) has been developing. For offline monitoring, the hdswif system that Kei developed is used for launching the jobs, and the jproj system is used for meta-analysis of launch statistics.

Both hdswif and jproj are maintained in svn.

To run the offline monitoring, each package should be checked out; all necessary scripts are included.

Preparing the software for the launch

To begin a new launch, the software must be built at the latest versions. For the gxprojN user accounts, all software builds are contained in the directory ~/builds (a soft link to /work/halld/home/gxprojN/builds). When logging into these accounts, the setup file ~/setup_jlab-2015-03.csh (or a similar file) should be sourced.

Note that Mark Ito does not want you to change the contents of each .cshrc file. You should consult him if you feel the need.

NOTE: FOR BUILDING SOFTWARE IT IS A GOOD IDEA TO DO A COMPLETE WIPEOUT/CHECKOUT EACH TIME TO AVOID STALE HEADER FILES.

  1. Building hdds: Go to ~/builds/hdds. The directory hdds is the one from git. Delete the contents, then download the newest version from git and build:
    cd ~/builds/hdds/
    rm -frv hdds
    git clone https://github.com/JeffersonLab/hdds
    cd hdds
    scons install
  2. Building sim-recon: Go to ~/builds/sim-recon. The directory sim-recon is the one from git. Delete the contents, then download the newest version from git and build:
    cd ~/builds/sim-recon
    rm -frv sim-recon
    git clone https://github.com/JeffersonLab/sim-recon
    cd sim-recon/src
    scons install -j8
  3. Prepare the latest sqlite file: The sqlite file is set in the ~/setup_jlab-2015-03.csh script as sqlite:////home/gxproj5/ccdb.sqlite through the environment variables JANA_CALIB_URL and CCDB_CONNECTION. Therefore, go to this directory and create a new sqlite file. We create the sqlite file in a temporary directory, since creating it in a directory where the output file already exists causes errors. Original documentation on creating sqlite files is here.
    NOTE: SQLITE FILES DO NOT WORK ON THE NEW /work DISK INSTALLED IN OCTOBER 2015
    cd ~/tmp
    $CCDB_HOME/scripts/mysql2sqlite/mysql2sqlite.sh -hhallddb.jlab.org -uccdb_user ccdb | sqlite3 ccdb.sqlite
    mv ccdb.sqlite ../
  4. Note that the above steps must be done BEFORE launch project creation. This is because the revisions of the libraries used are tracked by extracting the version control information in each build directory. Also note that the system assumes the topmost build directory (usually called GLUEX_TOP) to be $HOME/builds ; this assumption is needed to extract the library locations automatically. A quick check of the resulting environment is sketched below.
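Before creating the launch project, it is worth checking that the environment actually picks up the new builds and the fresh sqlite file. A minimal sanity check (assuming the gxproj5 paths used above):

source ~/setup_jlab-2015-03.csh

# Both variables should point at the newly created sqlite file
echo $CCDB_CONNECTION
echo $JANA_CALIB_URL

# The sqlite file should open cleanly and list the CCDB tables
sqlite3 ~/ccdb.sqlite ".tables"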

Create the appropriate project(s) and submit the jobs using hdswif, as detailed in the section below.

Starting the Launch and Submitting Jobs

Until the summer of 2015 we relied solely on Mark Ito's jproj system for submitting and keeping track of jobs. We have since moved to the swif system and use the hdswif wrapper for this. Below are instructions for how to use these.

  1. Downloading hdswif: Download the hdswif directory from svn. For the gxprojN accounts, use the directory ~/halld/hdswif.
    cd ~/halld 
     svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/hdswif
    cd hdswif
  2. Creating the workflow: Within SWIF, jobs are registered into workflows, so first create the workflow. For offline monitoring, the workflow names are of the form offline_monitoring_RunPeriod201Y_MM_verVV_hd_rawdata, with suitable replacements for the run period and version number. The command "swif list" will list all existing workflows, and hdswif also provides a wrapper for most simple SWIF commands.
    swif list
    For creation of workflows for offline monitoring the command
    hdswif.py create [workflow] -c [config file] 
    should be used. As an example config file, see the input.config file in the folder (and update it as needed). When a config file is passed in, hdswif will automatically create files that record the configuration of the current launch. These files are stored as, for example,
    • /group/halld/data_monitoring/run_conditions/jana_rawdata_comm_2015_03_ver15.conf
    • /group/halld/data_monitoring/run_conditions/soft_comm_2015_03_ver15.xml
    • The software packages stored in git (sim-recon and hdds) can have git tags applied to them, which makes it easier to identify software versions than a SHA-1 hash does. hdswif will ask whether you would like to create a tag and execute the following sequence:
      git tag -a offmon-201Y_MM-verVV -m "Used for offline monitoring 201Y-MM verVV started on 201y/mm/dd"
      git push origin offmon-201Y_MM-verVV
      This will only be invoked when the user name is gxprojN, and for the configuration files, the output directory will be /group/halld/data_monitoring/run_conditions/ for gxprojN accounts while it will be the current directory for other users.
    • To use the git-tagged software versions do for example
      cd $HALLD_HOME
      git checkout offmon-2015_03-ver15
  3. Registering jobs in the workflow: To register jobs within the workflow, hdswif provides the use of config files. Jobs can be registered by specifying the workflow, config file (-c), and run (-r) and file (-f) numbers if necessary. A typical config file will look like this:
PROJECT                       gluex
TRACK                         reconstruction
OS                            centos65
NCORES                        6
DISK                          40
RAM                           8
TIMELIMIT                     8
JOBNAMEBASE                   offmon_
RUNPERIOD                     2015-03
VERSION                       15
OUTPUT_TOPDIR                 /volatile/halld/offline_monitoring/RunPeriod-[RUNPERIOD]/ver[VERSION] # Example of other variables being referenced inside a value
SCRIPTFILE                    /home/gxproj5/halld/hdswif/script.sh                                  # Must specify full path
ENVFILE                       /home/gxproj5/halld/hdswif/setup_jlab-2015-03.csh                     # Must specify full path

The config file contains configuration parameters for each of the jobs.
Note: Job configuration parameters can be set differently for jobs within the same workflow if necessary.

Edit the config file and save as a new file if necessary. Once the configuration is set, jobs can be added via
hdswif.py add [workflow] -c input.config

By default, hdswif will add all files found within the directory /mss/halld/RunPeriod-201Y-MM/rawdata/ where 201Y-MM is specified by the RUNPERIOD parameter in the config file. If only some of the runs or files are needed, these can be specified for example with

hdswif.py add [workflow] -c input.config -r 3180 -f '00[0-4]'

to register only run 3180, files 000 - 004 (Unix-style brackets and wildcards can be used).

Running the workflow: To run the workflow, simply use swif run:
swif run -workflow [workflow]
or equivalently, using the hdswif wrapper,
hdswif.py run [workflow]

It is recommended that a few jobs be tested first to make sure that everything is working, rather than failing thousands of jobs.
For this purpose, hdswif will take an additional parameter to run which limits the number of jobs to submit:
hdswif.py run [workflow] 10
in which case only 10 jobs will be submitted. To submit all jobs after checking the results, do
hdswif.py run [workflow]
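Putting the above together, a typical launch session (using the 2015-03 ver15 workflow name as an example) might look like:

cd ~/halld/hdswif

# Create the workflow and record the launch conditions
hdswif.py create offline_monitoring_RunPeriod2015_03_ver15_hd_rawdata -c input.config

# Register all rawdata files for the run period specified in the config file
hdswif.py add offline_monitoring_RunPeriod2015_03_ver15_hd_rawdata -c input.config

# Submit a few test jobs first, then the remainder once they look healthy
hdswif.py run offline_monitoring_RunPeriod2015_03_ver15_hd_rawdata 10
hdswif.py run offline_monitoring_RunPeriod2015_03_ver15_hd_rawdata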

Checking the Status and Resubmitting

  1. The status of jobs can be checked on the terminal with
    jobstat -u gxprojN
    on Auger and for SWIF with
    swif list
    or for more information,
    swif status [workflow] -summary
    Also see the Auger job website.
  2. For failed jobs, SWIF can resubmit jobs based on the problem. For resubmission for failed jobs with the same resources,
    swif retry-jobs [workflow] -problems [problem name]
    can be used, and for jobs to be submitted with more resources, e.g., use
    swif modify-jobs -ram add 2gb -problems AUGER-OVER_RLIMIT
    hdswif has a wrapper for both of these:
    hdswif.py resubmit [workflow] [problem]
    In this case [problem] can be one of SYSTEM, TIMEOUT, RLIMIT. If SYSTEM is specified, the jobs will be retried. For TIMEOUT and RLIMIT, the jobs will by default be modified with 2 additional hours or GB of RAM, respectively. If an additional number is given as an option, then that many hours or GB of RAM will be added, e.g.,
    hdswif.py resubmit [workflow] TIMEOUT 5
    will add 5 hours of processing time. You can wait until almost all jobs finish before resubmitting failed jobs, since the number should be relatively small. Note that even if jobs are resubmitted for one type of failure, jobs that later fail with that failure will not be automatically resubmitted. A typical resubmission session is sketched after this list.
  3. For information on swif, use the "swif help" commands; for hdswif, see the documentation at https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/hdswif/manual_hdswif.pdf
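A typical resubmission session, once most jobs have finished, might look like the following sketch (using the commands described above and the 2015-03 ver15 workflow name as an example):

# Overall status of the workflow
swif status offline_monitoring_RunPeriod2015_03_ver15_hd_rawdata -summary

# Retry system failures with the same resources
hdswif.py resubmit offline_monitoring_RunPeriod2015_03_ver15_hd_rawdata SYSTEM

# Give timed-out jobs 5 additional hours, and RLIMIT failures the default 2 additional GB of RAM
hdswif.py resubmit offline_monitoring_RunPeriod2015_03_ver15_hd_rawdata TIMEOUT 5
hdswif.py resubmit offline_monitoring_RunPeriod2015_03_ver15_hd_rawdata RLIMIT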

Post-analysis of statistics of the launch

  1. After jobs have been submitted, it will usually take a few days for all of the jobs to be processed.
  2. The status and results of jobs are saved within the SWIF internal server, and are available via the command
    swif status [workflow] -summary -runs
    where the arguments -summary and -runs show summary statistics and statistics for individual jobs, respectively. hdswif has a command that takes this XML output and creates an HTML webpage showing the results of the launch. To do this, do
    hdswif.py summary [workflow]
    This will create an XML file swif_output_[workflow].xml that contains all information from SWIF. If the file already exists, hdswif will ask whether to overwrite the existing file.
  3. At this stage the html output and figure files are created and ready to be put online. For this step and other steps involving analysis of the statistics of the launch results, it is convenient to change to the jproj system.
  4. The jproj scripts for offline monitoring are maintained in the svn directory https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/jproj . Check them out with
     svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/jproj 
    For the gxprojN accounts used for offline monitoring the directory should be ~/halld/jproj
  5. The jproj directory contains two subdirectories, scripts and projects. The scripts directory contains useful scripts for processing the jobs registered in the jproj system, and each of the offline monitoring launches will be handled in the projects directory.
  6. Go to the projects directory
    cd ~/halld/jproj/projects
    and use the script create_project.sh to create a new directory that contains the processing scripts for the current launch
    ./create_project.sh [workflow]
    This should create a directory such as offline_monitoring_RunPeriod2015_03_ver15_hd_rawdata (same as the workflow name). The script uses the template files in the directory templates and by substitution creates script files for the current launch. Now go to the newly created analysis directory:
    cd [workflow]/analysis

which for the gxprojN accounts should have the full path /home/gxprojN/halld/jproj/projects/[workflow]/analysis.

All of the analysis commands including arguments are contained in run_analysis.sh, but it is strongly recommended that all commands are run manually to check for errors.

  1. The first thing to do is to make the html output from hdswif public. Copy the html file and related figures that were created from hdswif to the appropriate space within /group/halld/www/halldweb/html/data_monitoring/ :
    python publish_offmon_results.py [run period] [version]
    Note that the command with appropriate substitutions for arguments can be found within run_analysis.sh (same for all commands below).
  2. Next, we need to create a few MySQL tables for the current launch. The MySQL tables are useful for comparing run/file combinations between different launches. For the SWIF launches, two tables are needed, named
    • [workflow]Job
    • [workflow]_aux
  3. This naming scheme and the roles of the tables are the same as for the jproj-only launches. The [workflow]Job table will contain information gathered from SWIF about each job (which node it ran on, the start time of each stage, memory usage, etc.). The [workflow]_aux table will contain information gathered from the stdout files of each job. First, create the Job table using
    python create_jproj_job_table.py [run period] [version]
    Note that this script uses the XML output from hdswif summary and inserts the contents into the MySQL table, so the XML output file must exist.
  4. To check the contents of this MySQL table, do
    mysql -hhallddb -ufarmer farming
    mysql> describe offline_monitoring_RunPeriod2015_03_ver11_hd_rawdataJob;
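Beyond describe, a simple sanity check is to count jobs by result (a hypothetical query; the result column is among those listed for the Job table in the Project Management section below):

mysql> select result, count(*) from offline_monitoring_RunPeriod2015_03_ver11_hd_rawdataJob group by result;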

Hall D Job Management System

This section details instructions on how to create and launch a set of jobs using the Hall-D Job Management System developed by Mark Ito. These instructions are generic: this system can be used for the weekly monitoring jobs, but also for other sets of job launches.

Database Table Overview

  • Job management database table (<project_name>): For each input file, keeps track of whether or not a job for it has been submitted, along with other optional fields.
  • Job status database table (<project_name>Job (no space)): For each job, keeps track of the job-id, the job status, memory used, cpu & wall time, time taken to complete various stages (e.g. pending, dependency, active), and others.
  • Job metrics database table (<project_name>_aux (no space)): For each job, keeps track of the job-id, how many events were processed, the time it took to copy the cache file, and the time it took to run the plugin. This information is culled from the log files of each job, and is done within the analysis directory of each launch.

Initialize Project Management

  • Log into the ifarm machine with one of the gxproj accounts. For this example we will use gxproj1.
ssh gxproj1@ifarm -Y
  • Go to a directory to do the launch. In principle, any directory will work, but for gxproj1 this is usually done in /home/gxproj1/halld/
  • Check out the necessary scripts
    svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/jproj
    This will get all necessary scripts for launching. Once checked out,
    cd projects
  • The script create_project.sh can be used to create a new project. It takes a single argument, the project name. It is assumed that the project name is of the form offline_monitoring_RunPeriod20YY_MM_verVV_hd_rawdata, where this string will be parsed to give the run period 20YY_MM and version number VV. One thing to do BEFORE creation of a new project is to edit the conditions of the launch (plugins to run over, memory requested, disk space requested) within templates/template.jsub . This information is saved automatically at project creation time into files in /group/halld/data_monitoring/run_conditions .

To create a project do
./create_project.sh offline_monitoring_RunPeriod20YY_MM_verVV_hd_rawdata

The name has been chosen to be as consistent as possible with other directory structures. However, mysql requires that "-" be escaped in table names, so unfortunately run periods are written as 20YY_MM instead of 20YY-MM.

  • A new directory with the project name (offline_monitoring_RunPeriod20YY_MM_verVV_hd_rawdata) will be created, and files will be copied and modified from the template directory to reflect the run period, the user, the directory it was created in, the project name, etc.
  • For each project, cd into the new directory
  • The script clear.sh will remove any existing tables for the current project name, then recreate them. Do
./clear.sh
  • To use the jproj.pl script that was checked in, add the directory to your path with
source ../../scripts/setup.csh

or always specify the full path

../../scripts/jproj.pl
  • Now update the table of runs with
jproj.pl <project name> update

This will fill the table with all files within /mss that are of the same form as what is in <project name>.jproj . If you want to register only a subset of all such files, you can edit this file directly.

  • Once you have registered all of the files you would like to run over, do
jproj.pl <project name> submit [max # of jobs] [run number]

where the additional options specify how many jobs to submit and which run number to run on. Without these options all files that are registered and have not been submitted yet will be submitted.

At this stage you are ready to submit all files. It is a good idea to submit a few test jobs first to check that all scripts are working and that the plugins do not crash. Once you are sure of this, you can send in all jobs. The remaining steps are then the monitoring post-processing, which will (among other things) put the results on the webpage for the collaboration to view, and the analysis of the launch.
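Putting these steps together, a typical project setup and test submission might look like the following sketch (names follow the conventions above; the run number and job count are only examples):

cd ~/halld/jproj/projects
./create_project.sh offline_monitoring_RunPeriod20YY_MM_verVV_hd_rawdata
cd offline_monitoring_RunPeriod20YY_MM_verVV_hd_rawdata

# Create fresh, empty database tables for this project
./clear.sh

# Make jproj.pl available, register all matching files from /mss, and submit a few test jobs
source ../../scripts/setup.csh
jproj.pl offline_monitoring_RunPeriod20YY_MM_verVV_hd_rawdata update
jproj.pl offline_monitoring_RunPeriod20YY_MM_verVV_hd_rawdata submit 5 3180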

Project File Overview

An overview of each project file:

  • clear.sh: For the current project, deletes the job status and management database tables (if any), and creates new, empty ones.
  • <project_name>.jproj: Contains the path and file name format for the input files for the jobs.
  • <project_name>.jsub: The xml job submission script. The run number and file number variables are set during job submission for each input file.
  • script.sh: The script that is executed during the job. If output job directories are not pre-created manually, they should be created in this script with the proper permissions:
mkdir -p -m 775 my_directory
  • setup_jlab-[run period].csh: The environment that is sourced at the beginning of the job execution.
  • status.sh: Updates the job status database table, and prints some of its columns to screen.

Project Management

  • Delete (if any) and create the database table(s) for the current set of job submissions:
./clear.sh

Also, if testing was done with jobs, it is best to delete the output directory and the configuration files:

rm -frv /volatile/halld/offline_monitoring/RunPeriod-20YY-MM/verVV /group/halld/data_monitoring/run_conditions/soft_comm_20YY_MM_verVV.xml /group/halld/data_monitoring/run_conditions/jana_rawdata_comm_20YY_MM_verVV.conf
  • Search for input files matching the string in the .jproj file, and create a row for each in the job management database table (called <project_name>). You can test by adding an optional argument at the end, which only selects files with a specific file number:
jproj.pl <project_name> update <optional_file_number>
  • Confirm that the job management database is accurate by printing its contents to screen:
mysql -hhallddb -ufarmer farming -e "select * from <project_name>"
  • ONLY if a mistake was made, to delete the tables from the database and recreate new, empty ones, run:
./clear.sh
  • To look at the status of the submitted jobs, first query auger and update the job status database:
fill_in_job_details.pl <project_name>
  • The job status can then be viewed by submitting a query to the job status database (called <project_name>Job (no space in between)):
mysql -hhallddb -ufarmer farming -e "select id,run,file,jobId,hostname,status,timeSubmitted,timeActive,walltime,cput,timeComplete,result,error from <project_name>Job"
  • These last two commands can instead be executed simultaneously by running:
./status.sh

Handy mysql Instructions

mysql -hhallddb -ufarmer farming # Enter the "farming" mysql database on "hallddb" as user "farmer"
quit; # Exit mysql
show tables; # Show a list of the tables in the current database
show columns from <project_name>; # show all of the columns for the given table
select * from <project_name>; # show the contents of all rows from the given table

Backing Up Offline Monitoring Tables

Tables created for offline monitoring can be backed up using the script backup_tables.sh which can be checked out with the other files from https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/jproj/projects

The script uses the command mysqldump to produce a file that can be executed to recreate the tables. Since executing this output file will drop an existing table before recreating it, caution is advised. Example usage to back up all three tables created for run period 2014_10 ver 17:

backup_tables.sh 2014_10 17

Running Over Data As It Comes In

A special user gxproj1 will have a cron job set up to run the plugins as new data appears on /mss. During the week, gxproj1 will submit offline plugin jobs with the same setup as the weekly jobs run the previous Friday. The procedure for this is shown below.


Running the cron job

IMPORTANT: The cron job should not be running while you are manually submitting jobs using the jproj.pl script for the same project, or else you will probably multiply-submit a job.

  • Go to the cron job directory:
cd /u/home/gxproj1/halld/monitoring/newruns
  • The cron_plugins file contains the crontab entry that will be installed. During execution, it runs the exec.sh command in the same folder. This command takes two arguments: the project name and the maximum file number for each run. These fields should be updated in the cron_plugins file before running (a hypothetical example entry is sketched after this list).
  • The exec.sh command updates the job management database table with any data that has arrived on tape since it was last updated, ignoring file numbers greater than the maximum file number. It then submits jobs for these files.
  • To start the cron job, run:
crontab cron_plugins
  • To check whether the cron job is running, do
crontab -l
  • To remove the cron job do
crontab -r
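The cron_plugins file itself is an ordinary crontab; a hypothetical entry (the schedule and arguments shown are illustrative only) might look like:

# Run exec.sh hourly with the project name and maximum file number as arguments
0 * * * * /u/home/gxproj1/halld/monitoring/newruns/exec.sh offline_monitoring_RunPeriod20YY_MM_verVV_hd_rawdata 014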


Post-Processing Procedures

To visualize the monitoring data, we save images of selected histograms and store time series of selected quantities in a database, which are then displayed on a web page. This section describes how to generate the monitoring images and database information.

The scripts used to generate this summary data are primarily run from /home/gxprojN/halld/monitoring/process . If you want a new copy of the scripts, e.g., for a new monitoring run, you should check the scripts out from SVN:

svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/process

Note that these scripts currently have some parameters which must be periodically set by hand.

The default python version on most JLab machines does not have the modules that allow these scripts to connect to the MySQL database. To run these scripts, load the environment with the following command:

source /home/gxproj1/halld/monitoring/process/monitoring_env.csh

Online Monitoring

There are two primary scripts for running over the monitoring data generated by the online system and offline reconstruction. The online script can be run with either of the following commands:

/home/gluex/halld/monitoring/process/check_new_runs.py
 
OR 
 
/home/gluex/halld/monitoring/process/check_new_runs.csh

The shell script sets up the environment properly to run the python script. To connect to the monitoring database on the JLab CUE, modules included in the local installation of python >= 2.7 are needed. The shell script is appropriate to use in a cron job. The cronjob is currently run under the "gluex" account.

The online monitoring system copies a ROOT file containing the results of the online monitoring, along with other configuration files, into a directory accessible outside the counting house. This python script automatically checks for new ROOT files and processes them. It contains several configuration variables that must be correctly set, which specify the locations of the input/output directories, etc.

Offline Monitoring

After the data is run over, the results should be processed, so that summary data is entered into the monitoring database and plots are made for the monitoring webpages. Currently, this processing is controlled by a cronjob that runs the following script:

/home/gxproj1/halld/monitoring/process/check_monitoring_data.csh

This script checks for new ROOT files, and only runs over those it hasn't processed yet. Since one monitoring ROOT file is produced for each EVIO file, whenever a new file is produced, the plots for the corresponding run are recreated and all the ROOT files for a run are combined into one file. Information is stored in the database on a per-file basis.

Plots for the monitoring web page can be made from single histograms or multiple histograms using RootSpy macros. If you want to change the list of plots made, you must modify one of the following files:

  • histograms_to_monitor - specify either the name of the histogram or its full ROOT path
  • macros_to_monitor - specify the full path to the RootSpy macro .C file
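Hypothetical entries (the histogram and macro names below are purely illustrative, not actual GlueX names) might look like:

histograms_to_monitor:
    fcal_occupancy
    /BCAL/bcal_occupancy

macros_to_monitor:
    /home/gxprojN/halld/monitoring/process/macros/occupancy.C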

When a new monitoring run is started, or the conditions are changed, the following steps should be taken to process the new files:

  1. Add a new data version, as described in the Data Versions section below.
  2. Change the following parameters in check_monitoring_data.csh:
    1. JOBDATE should correspond to the output date used by the job submission script
    2. OUTPUTDIR should point to the directory for the run period and revision of the new data version you just submitted. Presumably, this directory will be empty at the beginning.
    3. Once you create a new data version as defined below, you should pass the needed information as a command line option. Currently this is done via the ARGS variable. For example, the argument "-v RunPeriod-2014-10,8" tells the monitoring scripts to look up the version corresponding to revision 8 of RunPeriod-2014-10 in the monitoring DB and to use it to store the results.
Example configuration parameters:
set JOBDATE=2015-01-09
set INPUTDIR=/volatile/halld/RunPeriod-2014-10/offline_monitoring
set OUTPUTDIR=/w/halld-scifs1a/data_monitoring/RunPeriod-2014-10/ver08
set ARGS=" -v RunPeriod-2014-10,8 "

If you want to process the results manually, the data is processed using the following script:

./process_new_offline_data.py <date> <input directory> <output directory>
 
EXAMPLE:
 
./process_new_offline_data.py 2014-11-14 /volatile/halld/RunPeriod-2014-10/offline_monitoring/ /w/halld-scifs1a/data_monitoring/RunPeriod-2014-10/ver02

The python script takes several options to enable/disable various steps in the processing. Of particular interest is the "--force" option, which will run over all monitoring ROOT files, whether or not they have been processed before.

Every time a new reconstruction pass is performed, a new version number must be generated. To do this, prepare a version file as described below. Then run the register_new_version.py script to store the information in the database. The script will return a version number, which then should be set by hand in process_new_offline_data.py - future versions of the script will streamline this part of the procedure. An example of how to generate a new version is:

./register_new_version.py add /home/gxproj1/halld/monitoring/process/versions/vers_RunPeriod-2014-10_pass1.txt

If you are running the offline monitoring by checking out the files in trunk/scripts/monitoring/jproj/projects/ of the svn repository and have created a project with

create_project.sh [project name] hd_rawdata

Then go to the directory [project name]/processing/ and execute

./run_processing.sh

which will run register_new_version.py as well as check_monitoring_data.csh for that project.

Step-by-Step Instructions For Processing a New Monitoring Run

The monitoring runs are currently run out of the gxproj1 and gxproj5 accounts. After an offline monitoring run has been successfully started on the batch farm, the following steps should be followed to set up the post-processing for these runs.

  1. The post-processing scripts are stored in $HOME/halld/monitoring/process and are automatically run by cron.
  2. Run "svn update" to bring any changes in. Be sure that the list of histograms and macros to plot are current.
  3. Edit check_monitoring_data.csh to point to the current revisions/directories
    • VERSION
    • ARGS
    • Note that the environment depends on a standard script - $HOME/setup_jlab.csh
  4. Update files in the web directory, so that the results are displayed on the web pages: /group/halld/www/halldweb/html/data_monitoring/textdata
  5. Copy the REST files to more permanent locations:
    • cp -a /volatile/halld/offline_monitoring/RunPeriod-YYYY-MM/verVV/REST /work/halld/data_monitoring/RunPeriod-YYYY-MM/REST/verVV
    • cp -a /volatile/halld/offline_monitoring/RunPeriod-YYYY-MM/verVV/REST /cache/halld/RunPeriod-YYYY-MM/REST/verVV [under testing]

Check log files in $HOME/halld/monitoring/process/log for more information on how each run went. If there are problems, check log files, and modify check_monitoring_data.csh to vary the verbosity of the output.

Data Versions

To document the conditions under which the monitoring data is created, for the sake of reproducibility and further analysis we save several pieces of information. The format is intended to be comprehensive enough to document not just monitoring data, but also versions of raw and reconstructed data, so that this database table can be used for the event database as well.

We store one record per pass through one run period, with the following structure:

Field               Description
data_type           The level of data we are processing. For the purposes of monitoring, "rawdata" is the online monitoring and "recon" is the offline monitoring.
run_period          The run period of the data.
revision            An integer specifying which pass through the run period this data corresponds to.
software_version    The name of the XML file that specifies the different software versions used.
jana_config         The name of the text file that specifies which JANA options were passed to the reconstruction program.
ccdb_context        The value of JANA_CALIB_CONTEXT, which specifies the version of calibration constants that were used.
production_time     The date at which monitoring/reconstruction began.
dataVersionString   A convenient string for identifying this version of the data.


An example file used as input to ./register_new_version.py is:

data_type           = recon
run_period          = RunPeriod-2014-10
revision            = 1
software_version    = soft_comm_2014_11_06.xml
jana_config         = jana_rawdata_comm_2014_11_06.conf
ccdb_context        = calibtime=2014-11-10
production_time     = 2014-11-10
dataVersionString   = recon_RunPeriod-2014-10_20141110_ver01