Data Monitoring Procedures
- 1 Master List of File / Database / Webpage Locations
- 2 Job Monitoring Links
- 3 Saving Online Monitoring Data
- 4 Offline Monitoring: Running Over Archived Data
- 5 Hall D Job Management System
- 6 Running Over Data As It Comes In
- 7 Post-Processing Procedures
- 8 Data Versions
Master List of File / Database / Webpage Locations
- Online Run-by-run condition files (B-field, current, etc.): /work/halld/online_monitoring/conditions/
- Offline monitoring run conditions (software versions, jana config): /group/halld/data_monitoring/run_conditions/
- Run Info vers. 1
- Run Info vers. 2
Monitoring Output Files
- Run periods are written as 201Y-MM (for example, 2015-03); launch versions as verVV (for example, ver15)
- Online monitoring histograms: /work/halld/online_monitoring/root/
- Offline monitoring histogram ROOT files: /work/halld/data_monitoring/RunPeriod-201Y-MM/verVV/rootfiles
- REST files (most recent launch only): /work/halld/data_monitoring/RunPeriod-201Y-MM/REST/verVV
- individual files for each job (ROOT, log, etc.): /volatile/halld/offline_monitoring/RunPeriod-201Y-MM/verVV/
- Accessing monitoring database (on ifarm): mysql -u datmon -h hallddb.jlab.org data_monitoring
Job Monitoring Links
Saving Online Monitoring Data
The procedure for writing the data out is given in, e.g., Raid-to-Silo Transfer Strategy.
Once the DAQ writes out the data to the raid disk, cron jobs will copy the file to tape, and within ~20 min., we will have access to the file on tape at /mss/halld/$RUN_PERIOD/rawdata/RunXXXXXX.
All online monitoring plugins will be run as data is taken. They will be accessible within the counting house via RootSpy, and for each run and file, a ROOT file containing the histograms will be saved within a subdirectory for each run.
For immediate access to these files, the raid disk files may be accessed directly from the counting house, or the tape files will be available within ~20 min. of the file being written out.
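For reference, the tape location of a given run's raw data can be built programmatically. A minimal sketch (the helper name and the six-digit zero-padding are assumptions based on the RunXXXXXX pattern above):

```python
def rawdata_tape_dir(run_period, run_number):
    """Build the tape path for a run's raw data.

    Assumption: the RunXXXXXX pattern means the run number is
    zero-padded to six digits, e.g. Run003180.
    """
    return "/mss/halld/%s/rawdata/Run%06d" % (run_period, run_number)
```

For example, rawdata_tape_dir("RunPeriod-2015-03", 3180) gives /mss/halld/RunPeriod-2015-03/rawdata/Run003180.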
Offline Monitoring: Running Over Archived Data
Once files are written to tape we run the online plugins on these files to confirm what we were seeing in the online monitoring, and also to update the results from the latest calibration and software. Manual scripts and cron jobs are set up to look for new data and run the plugins over a sample of files.
Every other Friday (usually the Friday before the offline meetings) jobs will be started to run the newest software on all previous runs, allowing everybody to see improvements in each detector. For each launch, independent builds of hdds, sim-recon, the monitoring plugins, and an sqlite file will be generated.
Below the procedures are described for
- Preparing the software for the launch
- Starting the launch (using hdswif)
- Post-analysis of statistics of the launch
Processing the results and making them available to the collaboration is handled in the section Post-Processing Procedures below.
General Information on Procedures
This section explains how the offline monitoring should be run. Since we may want to simultaneously run offline monitoring for different run periods that require different environment variables, the scripts are set up so that a generic user can download the scripts and run them from anywhere. Most output directories for offline monitoring are created with group read/write permissions so that any Hall D group user has access to the contents, but there are some cases where use of the account that created the launch is necessary.
The accounts used for offline monitoring are the gxprojN accounts created and maintained by Mark Ito (see here for how each account is used). As of October 2015, the following are used:
- gxproj1 for running over Fall 2014 data (deprecated since June 2015)
- gxproj5 for running over Spring 2015 data
Since the summer of 2015 we have transitioned from a system using Mark Ito's jproj scripts to integrating the swif system that Chris Larrieu (SciComp) has been developing. For offline monitoring, the hdswif system that Kei developed is used for launching the jobs, and the jproj system is used for meta-analysis of launch statistics.
Both hdswif and jproj are maintained in svn:
- hdswif: https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/hdswif
- jproj : https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/jproj
To run the offline monitoring, check out each package; all necessary scripts are included.
Preparing the software for the launch
To begin a new launch, the software must be rebuilt at the latest versions. For the gxprojN user accounts, all software builds are contained in the directory ~/builds (a soft link to /work/halld/home/gxprojN/builds). When logging into these accounts, the setup files ~/setup_jlab-2015-03.csh or similar files should be sourced.
Note that Mark Ito does not want you to change the contents of each .cshrc file. You should consult him if you feel the need.
1. Set up the environment:
2. Building hdds:
cd ~/builds/hdds/hdds
git pull           # Get latest software
scons -c install   # Clean out the old install: EXTREMELY IMPORTANT for cleaning out stale headers
scons install -j4  # Rebuild and re-install with 4 threads
3. Building sim-recon:
cd ~/builds/sim-recon/sim-recon/
git pull
cd src
scons -c install   # Clean out the old install: EXTREMELY IMPORTANT for cleaning out stale headers
scons install -j4  # Rebuild and re-install with 4 threads
4. Prepare the latest sqlite file: The sqlite file is set in the ~/setup_jlab-2015-03.csh script as sqlite:////home/gxproj5/ccdb.sqlite through the environment variables JANA_CALIB_URL and CCDB_CONNECTION. Therefore, go to this directory and create a new sqlite file. We create the sqlite file in a temporary directory, since creating it in a directory where the output file already exists causes errors. Original documentation on creating sqlite files is here.
NOTE: SQLITE FILES DO NOT WORK ON THE NEW /work DISK INSTALLED IN OCTOBER 2015
cd ~/tmp
$CCDB_HOME/scripts/mysql2sqlite/mysql2sqlite.sh -hhallddb.jlab.org -uccdb_user ccdb | sqlite3 ccdb.sqlite
mv ccdb.sqlite ../
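A quick way to sanity-check the freshly built sqlite file before a launch is to list its tables. A small illustrative sketch (not part of the launch scripts):

```python
import sqlite3

def sqlite_tables(path):
    """List the table names in an sqlite file, e.g. the freshly
    created ccdb.sqlite -- an empty list means the dump failed."""
    conn = sqlite3.connect(path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
        return [row[0] for row in rows]
    finally:
        conn.close()
```

Running sqlite_tables on the new ccdb.sqlite should return a non-empty list of CCDB tables.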
5. Note that the above steps must be done BEFORE launch project creation. This is because we will track the revisions of the libraries used, and this is done by extracting the svn information in each directory. Also, note that the system assumes that we have the topmost build directory (usually called GLUEX_TOP) to be $HOME/builds . Such an assumption is necessary to be able to extract information about the library locations automatically.
Create the appropriate project(s) and submit the jobs using hdswif, as detailed in the section below.
Starting the Launch and Submitting Jobs
Until the summer of 2015 we relied solely on Mark Ito's jproj system for submitting and keeping track of jobs. We have since moved to the swif system and use the hdswif wrapper for this. Below are instructions for how to use these.
- Downloading hdswif: Download the hdswif directory from svn. For the gxprojN accounts, use the directory ~/halld/hdswif.
svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/hdswif
- Creating the workflow: Within SWIF jobs are registered into workflows. First create the workflow. For offline monitoring, the workflow names are of the form offline_monitoring_RunPeriod201Y_MM_verVV_hd_rawdata with suitable replacements for the run period and version number. The command "swif list" will list all existing workflows. Also, for most simple SWIF commands hdswif also provides a wrapper.
swif list
For creation of workflows for offline monitoring, the command
hdswif.py create [workflow] -c [config file]
should be used. As an example config file, see the input.config file in the folder (and update it). When a config file is passed in, hdswif will automatically create files that record the configuration of the current launch.
- The software packages stored in git (sim-recon and hdds) can have git tags applied to them, which makes it easier to find versions of the software than a SHA-1 hash. hdswif will ask if you would like to create a tag, and execute the following sequence:
git tag -a offmon-201Y_MM-verVV -m "Used for offline monitoring 201Y-MM verVV started on 201y/mm/dd"
git push origin offmon-201Y_MM-verVV
This will only be invoked when the user name is gxprojN. For the configuration files, the output directory will be /group/halld/data_monitoring/run_conditions/ for gxprojN accounts, while it will be the current directory for other users.
- To use the git-tagged software versions do for example
git checkout offmon-2015_03-ver15
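The tag name and message above follow a fixed pattern, so they can be generated rather than typed by hand. A small sketch (the helper names are ours, not part of hdswif):

```python
import datetime

def offmon_tag(run_period, version):
    """Build the offline-monitoring git tag name, e.g.
    offmon-2015_03-ver15 for run period 2015-03, launch ver15."""
    return "offmon-%s-ver%02d" % (run_period.replace("-", "_"), version)

def offmon_tag_message(run_period, version, date=None):
    """Build the matching annotated-tag message."""
    date = date or datetime.date.today()
    return ("Used for offline monitoring %s ver%02d started on %s"
            % (run_period, version, date.strftime("%Y/%m/%d")))
```

For example, offmon_tag("2015-03", 15) returns "offmon-2015_03-ver15", matching the checkout example above.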
- Registering jobs in the workflow: To register jobs within the workflow, hdswif provides the use of config files. Jobs can be registered by specifying the workflow, config file (-c), run (-r) and file (-f) numbers if necessary. A typical config file will look like this:
PROJECT gluex
TRACK reconstruction
OS centos65
NCORES 6
DISK 40
RAM 8
TIMELIMIT 8
JOBNAMEBASE offmon_
RUNPERIOD 2015-03
VERSION 15
OUTPUT_TOPDIR /volatile/halld/offline_monitoring/RunPeriod-[RUNPERIOD]/ver[VERSION]
# Examples of other variables that can be included
SCRIPTFILE /home/gxproj5/halld/hdswif/script.sh            # Must specify full path
ENVFILE /home/gxproj5/halld/hdswif/setup_jlab-2015-03.csh  # Must specify full path
The config file contains configuration parameters for each of the jobs.
Note: Job configuration parameters can be set differently for jobs within the same workflow if necessary.
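The KEY VALUE format above, including the [RUNPERIOD]/[VERSION] placeholders in OUTPUT_TOPDIR, can be parsed in a few lines. An illustrative re-implementation (not hdswif's actual parser):

```python
def parse_hdswif_config(text):
    """Parse the simple 'KEY VALUE' config format shown above.

    Illustrative sketch: '#' starts a comment, and [KEY] placeholders
    in values are expanded from keys parsed earlier in the file
    (e.g. [RUNPERIOD], [VERSION]).
    """
    config = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments/blanks
        if not line:
            continue
        key, _, value = line.partition(" ")
        value = value.strip()
        # expand [PLACEHOLDER] references to already-parsed keys
        for k, v in config.items():
            value = value.replace("[%s]" % k, v)
        config[key] = value
    return config
```

With the example above, OUTPUT_TOPDIR expands to /volatile/halld/offline_monitoring/RunPeriod-2015-03/ver15.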
hdswif.py add [workflow] -c input.config
By default, hdswif will add all files found within the directory /mss/halld/RunPeriod-201Y-MM/rawdata/ where 201Y-MM is specified by the RUNPERIOD parameter in the config file. If only some of the runs or files are needed, these can be specified for example with
hdswif.py add [workflow] -c input.config -r 3180 -f '00[0-4]'
to register only run 3180, files 000-004 (Unix-style brackets and wildcards can be used).
- Running the workflow: To run the workflow, simply use swif run:
swif run -workflow [workflow] -errorlimit none
MAKE SURE THE ERRORLIMIT IS SET TO NONE, OR THE WORKFLOW WILL BE STOPPED AFTER ANY JOB FAILS. Or equivalently, using the hdswif wrapper (which sets the errorlimit by default):
hdswif.py run [workflow]
It is recommended to test a few jobs first to make sure that everything is working, rather than have thousands of jobs fail.
For this purpose, hdswif will take an additional parameter to run which limits the number of jobs to submit:
hdswif.py run [workflow] 10
in which case only 10 jobs will be submitted.
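The file-number patterns accepted by the -f option are ordinary Unix-style globs, so Python's fnmatch can be used to preview which file numbers a pattern such as '00[0-4]' will select (illustrative helper, not part of hdswif):

```python
import fnmatch

def select_files(file_numbers, pattern):
    """Return the file numbers matched by a Unix-style glob pattern,
    e.g. '00[0-4]' keeps files 000 through 004."""
    return [f for f in file_numbers if fnmatch.fnmatch(f, pattern)]
```

For example, select_files(["000", "003", "004", "005", "012"], "00[0-4]") returns ['000', '003', '004'].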
Checklist to make sure jobs are running correctly:
- Check stderr files. Are they unusually large (more than a few kB)?
- Check stdout files. Are they unusually large (more than a few MB)?
- Check output ROOT files. Are they larger than several MB?
- Check output REST files. Are they larger than several tens of MB?
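The checklist above can be automated over a test batch. A sketch with rough thresholds distilled from the checklist (the exact numbers are judgment calls, not official cuts):

```python
KB = 1024
MB = 1024 * 1024

def check_job_output_sizes(stderr_bytes, stdout_bytes, root_bytes, rest_bytes):
    """Flag suspicious output-file sizes for one monitoring job,
    following the checklist above."""
    warnings = []
    if stderr_bytes > 1 * KB:
        warnings.append("stderr unusually large")
    if stdout_bytes > 1 * MB:
        warnings.append("stdout unusually large")
    if root_bytes < 3 * MB:
        warnings.append("ROOT file suspiciously small")
    if rest_bytes < 30 * MB:
        warnings.append("REST file suspiciously small")
    return warnings
```

An empty returned list means the job's outputs pass all four checks.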
Checking the Status and Resubmitting
1. The status of jobs can be checked on the terminal with
jobstat -u gxprojN
on Auger, and for SWIF with
swif list
or, for more information,
swif status [workflow] -summary
Note that "swif status" tends to be out of date sometimes, so don't panic if your workflow/jobs aren't showing up right away. Also see the Auger job website.
2. For failed jobs, SWIF can resubmit jobs based on the problem. To resubmit failed jobs with the same resources,
swif retry-jobs [workflow] -problems [problem name]
can be used, and for jobs to be resubmitted with more resources, use, e.g.,
swif modify-jobs -ram add 2gb -problems AUGER-OVER_RLIMIT
This only re-stages the jobs; be sure to resubmit them with:
swif run -workflow [workflow] -errorlimit none
hdswif has a wrapper for both of these:
hdswif.py resubmit [workflow] [problem]
In this case [problem] can be one of SYSTEM, TIMEOUT, RLIMIT. If SYSTEM is specified, the jobs will be retried. For TIMEOUT and RLIMIT, the jobs will be modified by default with 2 additional hours or 2 additional GB of RAM. If one more number is added as an option, then that many hours or GB of RAM will be added, e.g.,
hdswif.py resubmit [workflow] TIMEOUT 5
will add 5 hours of processing time. You can wait until almost all jobs finish before resubmitting failed jobs, since the number should be relatively small. Note that even if jobs are resubmitted for one type of failure, jobs that later fail with that failure will not be automatically resubmitted.
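The SYSTEM/TIMEOUT/RLIMIT options map onto the underlying swif commands described above. A sketch of that mapping (our illustration, not hdswif's actual code; the -time flag and the AUGER-TIMEOUT problem name are assumptions, and only AUGER-OVER_RLIMIT appears in the text above):

```python
def resubmit_command(workflow, problem, extra=None):
    """Sketch the swif command that hdswif's resubmit wrapper maps to.

    SYSTEM  -> retry the jobs unchanged
    TIMEOUT -> add wall time (default 2 hours)
    RLIMIT  -> add RAM (default 2 GB)
    """
    if problem == "SYSTEM":
        return "swif retry-jobs %s -problems SYSTEM" % workflow
    amount = 2 if extra is None else extra
    if problem == "TIMEOUT":
        # assumed flag/problem name, by analogy with the RLIMIT case
        return "swif modify-jobs -time add %dh -problems AUGER-TIMEOUT" % amount
    if problem == "RLIMIT":
        return "swif modify-jobs -ram add %dgb -problems AUGER-OVER_RLIMIT" % amount
    raise ValueError("unknown problem: " + problem)
```

Remember that modify-jobs only re-stages the jobs; swif run must be called again afterwards.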
3. For information on swif, use the "swif help" commands; for hdswif, see the attached documentation at https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/hdswif/manual_hdswif.pdf
4. Below is a table describing the various errors that can occur.
|Description||Resolution||hdswif command|
|SWIF's attempt to submit jobs to Auger failed. Includes server-side problems as well as the user failing to provide valid job parameters (e.g. incorrect project name, too many resources, etc.)||If the requested resources are known to be correct, resubmit. Otherwise modify the job resources using swif directly.||hdswif.py resubmit [workflow] SYSTEM|
|Auger reports the job FAILED with no specific details.||Resubmit jobs. If the problem persists, contact Chris Larrieu or SciComp.||hdswif.py resubmit [workflow] SYSTEM|
|Failure to copy one or more output files. Can be due to a permission problem, quota problem, system error, etc.||Check that the output files will exist after job execution and that the output directory exists, then resubmit jobs. If the problem persists, contact Chris Larrieu or SciComp.||hdswif.py resubmit [workflow] SYSTEM|
|Auger failed to copy one or more of the requested input files, similar to output failures. Can also happen if a tape file is unavailable (e.g. missing/damaged tape).||Check that the input file exists, then resubmit jobs. If the problem persists, contact Chris Larrieu or SciComp.||hdswif.py resubmit [workflow] SYSTEM|
|Job timed out.||If more time is needed, add more resources (the default is to add 2 hrs of processing time). Also check whether the code is hanging.||hdswif.py resubmit [workflow] TIMEOUT|
|Not enough resources (RAM or disk space).||Add more resources for the job.||hdswif.py resubmit [workflow] RLIMIT|
|Output file specified by the user was not found.||Check that the output file exists at the end of the job.|||
|User script exited with a non-zero status code.||Check the code you are running.|||
|Job failed owing to a problem with swif (e.g. network connection timeout).||Resubmit jobs. If the problem persists, contact Chris Larrieu or SciComp.||hdswif.py resubmit [workflow] SYSTEM|
Post-analysis of statistics of the launch
- After jobs have been submitted, it will usually take a few days for all of the jobs to be processed.
- Status of Auger: http://scicomp.jlab.org/scicomp/#/auger/jobs (see also links above)
- Status of user jobs:
jobstat -u [user name]
- The status and results of jobs are saved within the SWIF internal server, and are available via the command
swif status [workflow] -summary -runs
where the arguments -summary and -runs show summary statistics and statistics for individual jobs, respectively. hdswif has a command that takes this output in XML format and creates an HTML webpage showing the results of the launch. To do this, do
hdswif.py summary [workflow]
This will create an XML file swif_output_[workflow].xml that contains all information from SWIF. If the file already exists, hdswif will ask whether to overwrite the existing file.
- At this stage the html output and figure files are created and ready to be put online. For this step and other steps involving analysis of the statistics of the launch results, it is convenient to change to the jproj system.
- The jproj scripts for offline monitoring are maintained in the svn directory https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/jproj . Do
svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/jproj
For the gxprojN accounts used for offline monitoring, the directory should be ~/halld/jproj
- The jproj directory contains two subdirectories, scripts and projects. The scripts directory contains useful scripts for processing the jobs registered in the jproj system, and each of the offline monitoring launches will be handled in the projects directory.
- Go to the projects directory
cd ~/halld/jproj/projects
and use the script create_project.sh to create a new directory that contains the processing scripts for the current launch:
./create_project.sh [workflow]
This should create a directory such as offline_monitoring_RunPeriod2015_03_ver15_hd_rawdata (the same as the workflow name). The script uses the template files in the templates directory and, by substitution, creates script files for the current launch. Now go to the newly created analysis directory,
which for the gxprojN accounts should have the full path /home/gxprojN/halld/jproj/projects/[workflow]/analysis.
All of the analysis commands including arguments are contained in run_analysis.sh, but it is strongly recommended that all commands are run manually to check for errors.
- The first thing to do is to make the html output from hdswif public. Copy the html file and related figures that were created from hdswif to the appropriate space within /group/halld/www/halldweb/html/data_monitoring/ :
python publish_offmon_results.py [run period] [version]
Note that the command with appropriate substitutions for its arguments can be found within run_analysis.sh (the same holds for all commands below).
- Next, we need to create a few MySQL tables for the current launch. The MySQL tables are useful for comparing run/file combinations across different launches. For the SWIF launches, two tables are needed, named [workflow]Job and [workflow]_aux.
- This naming scheme and the tables' roles are the same as in the jproj-only launches. The [workflow]Job table will contain information gathered from SWIF about each job (which node it ran on, start time of each stage, memory usage, etc.). The [workflow]_aux table will contain information gathered from the stdout files of each job. First, create the Job table using
python create_jproj_job_table.py [run period] [version]
Note that this script uses the XML output from hdswif summary and inserts its contents into the MySQL table, so the XML output file must exist.
- To check the contents of this MySQL table, do
mysql -hhallddb -ufarmer farming
mysql> describe offline_monitoring_RunPeriod2015_03_ver11_hd_rawdataJob;
Hall D Job Management System
This section details instructions on how to create and launch a set of jobs using the Hall-D Job Management System developed by Mark Ito. These instructions are generic: this system can be used for the weekly monitoring jobs, but can also be used for other sets of job launches as well.
Database Table Overview
- Job management database table (<project_name>): For each input file, keeps track of whether or not a job for it has been submitted, along with other optional fields.
- Job status database table (<project_name>Job (no space)): For each job, keeps track of the job-id, the job status, memory used, cpu & wall time, time taken to complete various stages (e.g. pending, dependency, active), and others.
- Job metrics database table (<project_name>_aux (no space)): For each job, keeps track of the job-id, how many events were processed, the time it took to copy the cache file, and the time it took to run the plugin. This information is culled from the log files of each job, and is done within the analysis directory of each launch.
Initialize Project Management
- Log into the ifarm machine with one of the gxproj accounts. For this example we will use gxproj1.
ssh gxproj1@ifarm -Y
- Go to a directory to do the launch. In principle, any directory will work, but for gxproj1 this is usually done in /home/gxproj1/halld/
- Check out the necessary scripts
svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/jproj
This will get all necessary scripts for launching. Once checked out,
- The script create_project.sh can be used to create a new project. It takes a single argument, the project name. It is assumed that the project name is of the form offline_monitoring_RunPeriod20YY_MM_verVV_hd_rawdata, where this string will be parsed to give the run period 20YY_MM and version number VV. One thing to do BEFORE creating a new project is to edit the conditions of the launch (plugins to run over, memory requested, disk space requested) within templates/template.jsub . This information is saved automatically at project creation time into files in /group/halld/data_monitoring/run_conditions . To create a project, run create_project.sh with the project name as its argument.
The name has been chosen to be as consistent as possible with other directory structures. However, mysql requires that "-" be escaped in table names, so unfortunately run periods will be given as 2YYY_MM instead of 2YYY-MM.
- A new directory with the project name (offline_monitoring_RunPeriod20YY_MM_verVV_hd_rawdata) and files will be copied and modified from the template directory to reflect the run period, the user, the directory that it was created in, project name, etc.
- For each project, cd into the new directory
- The script clear.sh will remove any existing tables for the current project name, then recreate them.
- To use the jproj.pl script that was checked out, add its directory to your path, or always specify the full path.
- Now update the table of runs with
jproj.pl <project name> update
This will fill the table with all files within /mss that match the form given in <project name>.jproj . If you want to register only a subset of these files, you can edit this file directly.
- Once you have registered all of the files you would like to run over, do
jproj.pl <project name> submit [max # of jobs] [run number]
where the additional options specify how many jobs to submit and which run number to run on. Without these options all files that are registered and have not been submitted yet will be submitted.
At this stage you are ready to submit all files. It is a good idea to submit a few test jobs first to check that all scripts are working and that the plugins do not crash. Once you are sure of this, you can send in all jobs. The remaining steps are then the post-processing, which will (among other things) put the results on the online webpage for the collaboration to view, and the analysis of the launch statistics.
Project File Overview
An overview of each project file:
- clear.sh: For the current project, deletes the job status and management database tables (if any), and creates new, empty ones.
- <project_name>.jproj: Contains the path and file name format for the input files for the jobs.
- <project_name>.jsub: The xml job submission script. The run number and file number variables are set during job submission for each input file.
- script.sh: The script that is executed during the job. If output job directories are not pre-created manually, they should be created in this script with the proper permissions:
mkdir -p -m 775 my_directory
- setup_jlab-[run period].csh: The environment that is sourced at the beginning of the job execution.
- status.sh: Updates the job status database table, and prints some of its columns to screen.
- Delete (if any) and create the database table(s) for the current set of job submissions:
Also, if test jobs were run, it is best to delete the output directory and the configuration files:
rm -frv /volatile/halld/offline_monitoring/RunPeriod-20YY-MM/verVV /group/halld/data_monitoring/run_conditions/soft_comm_20YY_MM_verVV.xml /group/halld/data_monitoring/run_conditions/jana_rawdata_comm_20YY_MM_verVV.conf
- Search for input files matching the string in the .jproj file, and create a row for each in the job management database table (called <project_name>). You can test by adding an optional argument at the end, which only selects files with a specific file number:
jproj.pl <project_name> update <optional_file_number>
- Confirm that the job management database is accurate by printing its contents to screen:
mysql -hhallddb -ufarmer farming -e "select * from <project_name>"
- ONLY if a mistake was made, to delete the tables from the database and recreate new, empty ones, run:
- To look at the status of the submitted jobs, first query Auger and update the job status database:
- The job status can then be viewed by submitting a query to the job status database (called <project_name>Job (no space in between)):
mysql -hhallddb -ufarmer farming -e "select id,run,file,jobId,hostname,status,timeSubmitted,timeActive,walltime,cput,timeComplete,result,error from <project_name>Job"
- These last two commands can instead be executed simultaneously by running:
Handy mysql Instructions
mysql -hhallddb -ufarmer farming   # Enter the "farming" mysql database on "hallddb" as user "farmer"
quit;                              # Exit mysql
show tables;                       # Show a list of the tables in the current database
show columns from <project_name>;  # Show all of the columns for the given table
select * from <project_name>;      # Show the contents of all rows from the given table
Backing Up Offline Monitoring Tables
Tables created for offline monitoring can be backed up using the script backup_tables.sh which can be checked out with the other files from https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/jproj/projects
The script uses the command mysqldump to print out a file that can be executed to recreate the tables. Since executing this output file will drop the table if it exists, caution is advised. Example usage, backing up all three tables created for run period 2014_10 ver 17:
backup_tables.sh 2014_10 17
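Given the table-naming scheme described above, the dump command for a launch can be assembled programmatically. A sketch of what backup_tables.sh presumably does (the exact script may differ; the helper name is ours):

```python
def backup_mysqldump_command(run_period, version):
    """Build a mysqldump command for the three launch tables:
    <project>, <project>Job, and <project>_aux (naming scheme per
    the offline-monitoring conventions above)."""
    project = ("offline_monitoring_RunPeriod%s_ver%02d_hd_rawdata"
               % (run_period, version))
    tables = [project, project + "Job", project + "_aux"]
    return "mysqldump -hhallddb -ufarmer farming " + " ".join(tables)
```

For the example above, backup_mysqldump_command("2014_10", 17) names all three ver17 tables in one command.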
Running Over Data As It Comes In
A special user gxproj1 will have a cron job set up to run the plugins as new data appears on /mss. During the week, gxproj1 will submit offline plugin jobs with the same setup as the weekly jobs run the previous Friday. The procedure for this is shown below.
Running the cron job
IMPORTANT: The cron job should not be running while you are manually submitting jobs using the jproj.pl script for the same project, or else you will probably multiply-submit a job.
- Go to the cron job directory:
- The cron_plugins file is the cronjob that will be executed. During execution, it runs the exec.sh command in the same folder. This command takes two arguments: the project name, and the maximum file number for each run. These fields should be updated in the cron_plugins file before running.
- The exec.sh command updates the job management database table with any data that has arrived on tape since it was last updated, ignoring file numbers greater than the maximum file number. It then submits jobs for these files.
- To start the cron job, run:
- To check whether the cron job is running, do
- To remove the cron job do
Post-Processing Procedures
To visualize the monitoring data, we save images of selected histograms and store time series of selected quantities in a database, which are then displayed on a web page. This section describes how to generate the monitoring images and database information.
The scripts used to generate this summary data are primarily run from /home/gxprojN/halld/monitoring/process . If you want a new copy of the scripts, e.g., for a new monitoring run, you should check the scripts out from SVN:
svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/process
Note that these scripts currently have some parameters which must be periodically set by hand.
The default python version on most JLab machines does not have the modules needed for these scripts to connect to the MySQL database. To run these scripts, load the environment with the following command:
There are two primary scripts for running over the monitoring data generated by the online system and offline reconstruction. The online script can be run with either of the following commands:
/home/gluex/halld/monitoring/process/check_new_runs.py
OR
/home/gluex/halld/monitoring/process/check_new_runs.csh
The shell script sets up the environment properly to run the python script. To connect to the monitoring database on the JLab CUE, modules included in the local installation of python >= 2.7 are needed. The shell script is appropriate to use in a cron job. The cronjob is currently run under the "gluex" account.
The online monitoring system copies a ROOT file containing the results of the online monitoring, along with other configuration files, into a directory accessible outside the counting house. This python script automatically checks for new ROOT files and processes them. It contains several configuration variables that must be correctly set, including the locations of the input/output directories.
After the data is run over, the results should be processed, so that summary data is entered into the monitoring database and plots are made for the monitoring webpages. Currently, this processing is controlled by a cronjob that runs the following script:
This script checks for new ROOT files, and only runs over those it hasn't processed yet. Since one monitoring ROOT file is produced for each EVIO file, whenever a new file is produced, the plots for the corresponding run are recreated and all the ROOT files for a run are combined into one file. Information is stored in the database on a per-file basis.
Plots for the monitoring web page can be made from single histograms or multiple histograms using RootSpy macros. If you want to change the list of plots made, you must modify one of the following files:
- histograms_to_monitor - specify either the name of the histogram or its full ROOT path
- macros_to_monitor - specify the full path to the RootSpy macro .C file
When a new monitoring run is started, or the conditions are changed, the following steps should be taken to process the new files:
- Add a new data version, as described below:
- Change the following parameters in check_monitoring_data.csh:
- JOBDATE should correspond to the output date used by the job submission script
- OUTPUTDIR should point to the directory for the run period and revision of the new version you just submitted. Presumably, this directory will be empty at the beginning.
- Once you create a new data version as defined below, you should pass the needed information as a command line option. Currently this is done by the ARGS variable. For example, the argument "-v RunPeriod-2014-10,8" tells the monitoring scripts to look up the version corresponding to revision 8 of RunPeriod-2014-10 in the monitoring DB and to use it to store the results.
Example configuration parameters:
set JOBDATE=2015-01-09
set INPUTDIR=/volatile/halld/RunPeriod-2014-10/offline_monitoring
set OUTPUTDIR=/w/halld-scifs1a/data_monitoring/RunPeriod-2014-10/ver08
set ARGS=" -v RunPeriod-2014-10,8 "
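The "-v" argument packs the run period and revision into one comma-separated string. A tiny illustrative parser (our sketch, not the monitoring scripts' own code):

```python
def parse_version_arg(arg):
    """Split the '-v' value described above, e.g.
    'RunPeriod-2014-10,8' -> ('RunPeriod-2014-10', 8)."""
    run_period, _, revision = arg.partition(",")
    return run_period, int(revision)
```

For example, parse_version_arg("RunPeriod-2014-10,8") returns ('RunPeriod-2014-10', 8).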
If you want to process the results manually, the data is processed using the following script:
./process_new_offline_data.py <input directory> <output directory>
EXAMPLE:
./process_new_offline_data.py 2014-11-14 /volatile/halld/RunPeriod-2014-10/offline_monitoring/ /w/halld-scifs1a/data_monitoring/RunPeriod-2014-10/ver02
The python script takes several options to enable/disable various steps in the processing. Of interest is the "--force" option, which will run over all monitoring ROOT files, whether or not they've been previously processed.
Every time a new reconstruction pass is performed, a new version number must be generated. To do this, prepare a version file as described below. Then run the register_new_version.py script to store the information in the database. The script will return a version number, which then should be set by hand in process_new_offline_data.py - future versions of the script will streamline this part of the procedure. An example of how to generate a new version is:
./register_new_version.py add /home/gxproj1/halld/monitoring/process/versions/vers_RunPeriod-2014-10_pass1.txt
If you are running the offline monitoring by checking out the files in trunk/scripts/monitoring/jproj/projects/ of the svn repository, and created a project with
create_project.sh [project name] hd_rawdata
Then go to the directory [project name]/processing/ and execute
which will run register_new_version.py as well as check_monitoring_data.csh for that project.
Step-by-Step Instructions For Processing a New Monitoring Run
The monitoring runs are currently run out of the gxproj1 and gxproj5 accounts. After an offline monitoring run has been successfully started on the batch farm, the following steps should be followed to set up the post-processing for these runs.
- The post-processing scripts are stored in $HOME/halld/monitoring/process and are automatically run by cron.
- Run "svn update" to bring in any changes. Make sure the lists of histograms and macros to plot are current.
- Add a new data version
- Edit check_monitoring_data.csh to point to the current revisions/directories
- Note that the environment depends on a standard script - $HOME/setup_jlab.csh
- Update files in the web directory, so that the results are displayed on the web pages: /group/halld/www/halldweb/html/data_monitoring/textdata
- Copy the REST files to more permanent locations:
- cp -a /volatile/halld/offline_monitoring/RunPeriod-YYYY-MM/verVV/REST /work/halld/data_monitoring/RunPeriod-YYYY-MM/REST/verVV
- cp -a /volatile/halld/offline_monitoring/RunPeriod-YYYY-MM/verVV/REST /cache/halld/RunPeriod-YYYY-MM/REST/verVV [under testing]
Check log files in $HOME/halld/monitoring/process/log for more information on how each run went. If there are problems, check log files, and modify check_monitoring_data.csh to vary the verbosity of the output.
To document the conditions of the monitoring data that is created, we save several pieces of information for the sake of reproducibility and further analysis. The format is intended to be comprehensive enough to document not just monitoring data, but also versions of raw and reconstructed data, so that this database table can be used for the event database as well.
We store one record per pass through one run period, with the following structure:
|data_type||The level of data we are processing. For the purposes of monitoring, "rawdata" is the online monitoring, "recon" is the offline monitoring|
|run_period||The run period of the data|
|revision||An integer specifying which pass through the run period this data corresponds to|
|software_version||The name of the XML file that specifies the different software versions used|
|jana_config||The name of the text file that specifies which JANA options were passed to the reconstruction program|
|ccdb_context||The value of JANA_CALIB_CONTEXT, which specifies the version of calibration constants that were used|
|production_time||The date at which monitoring/reconstruction began|
|dataVersionString||A convenient string for identifying this version of the data|
An example file used as input to ./register_new_version.py is:
data_type = recon
run_period = RunPeriod-2014-10
revision = 1
software_version = soft_comm_2014_11_06.xml
jana_config = jana_rawdata_comm_2014_11_06.conf
ccdb_context = calibtime=2014-11-10
production_time = 2014-11-10
dataVersionString = recon_RunPeriod-2014-10_20141110_ver01
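The version file is a simple "key = value" format, so it is easy to parse and validate before registering. An illustrative sketch (register_new_version.py has its own parser):

```python
def parse_version_file(text):
    """Parse the 'key = value' version-file format shown above into
    a dict; only the first '=' separates key from value, so values
    like 'calibtime=2014-11-10' survive intact."""
    record = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        record[key.strip()] = value.strip()
    return record
```

Checking that all eight fields listed above are present before running register_new_version.py catches typos early.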