Data Monitoring Procedures


Saving Online Monitoring Data

The procedure for writing out the data is described in, e.g., Raid-to-Silo Transfer Strategy.

Once the DAQ writes the data to the raid disk, cron jobs will copy each file to tape, and within ~20 min. the file will be accessible on tape at /mss/halld/$RUN_PERIOD/rawdata/RunXXXXXX.
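For example, once a run has migrated to tape, its files can be checked with a simple listing (the run period and run number below are hypothetical):

 ls /mss/halld/RunPeriod-2014-10/rawdata/Run001234/

Note that the stub files under /mss only index the tape library; the data themselves must be staged to the cache disk before they can be read.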

All online monitoring plugins will be run as data are taken. The resulting histograms will be accessible within the counting house via RootSpy, and for each run and file, a ROOT file containing the histograms will be saved within a subdirectory for that run.
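As an illustration, the per-run layout of the saved ROOT files might look like the following (the directory and file names here are assumptions, not the actual convention):

 Run001234/
  monitoring_Run001234_000.root
  monitoring_Run001234_001.root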

For immediate access to these files, the copies on the raid disk may be read directly from the counting house; otherwise, the copies on tape become available within ~20 min. of each file being written out.

Running Over Archived Data

Once the files are written to tape, we can run the online plugins on these files to confirm what we were seeing in the online monitoring. Manual scripts, and eventually cron jobs, can be set up to look for new run numbers and run the plugins over a sample of files.
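As a sketch, running the monitoring plugins over a single cached file by hand might look like the following; the plugin list and file path are placeholders, not the production configuration:

 hd_root -PPLUGINS=occupancy_online,monitoring_hists \
   /cache/halld/RunPeriod-2014-10/rawdata/Run001234/Run001234_000.evio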

Details of Offline Monitoring

Below are the procedures to

  • run a single offline plugin job manually
  • run a cron job to automate the process for new files

In principle these scripts should work as-is, but they should be modified if the directory structure for the rawdata files changes, or if the memory or disk space needed by the jobs increases significantly.

Generating an offline plugin job

Within /home/gluex/halld/monitoring/batch/ there are scripts to run the online monitoring plugins over files on tape. The main script is generatejobs_plugins_rawdata.sh, which is invoked as generatejobs_plugins_rawdata.sh XXX, where XXX is the run number.

This will generate a script run_rawdata_XXXXXX.sh, where the run number has been zero-padded to 6 digits. Executing this script will submit the monitoring-plugins job to the Auger batch system.
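For concreteness, a hypothetical session for run 1234 (the run number is chosen only for illustration):

 cd /home/gluex/halld/monitoring/batch/
 ./generatejobs_plugins_rawdata.sh 1234    # writes run_rawdata_001234.sh
 ./run_rawdata_001234.sh                   # submits the job to Auger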

Internally, the xml file used to submit the job will be created, and the commands to run will be given within script.sh. All run parameters should be specified at the beginning of generatejobs_plugins_rawdata.sh.
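For reference, below is a minimal sketch of the kind of xml file that gets submitted to Auger; the project, track, memory, and path values are assumptions, not necessarily what the script produces:

 <Request>
   <Project name="gluex"/>
   <Track name="reconstruction"/>
   <Name name="rawdata_001234"/>
   <Memory space="2" unit="GB"/>
   <Command>/home/gluex/halld/monitoring/batch/script.sh</Command>
   <Job>
     <!-- an input declared with the mss: prefix is staged from tape to cache automatically -->
     <Input src="mss:/mss/halld/RunPeriod-2014-10/rawdata/Run001234/Run001234_000.evio"
            dest="Run001234_000.evio"/>
   </Job>
 </Request>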

Since the raw data files reside on tape, each tape file will first be copied over to the cache disk, and the job will run over this cached copy.
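If a file needs to be staged by hand instead, e.g. for a quick interactive check, the JLab jcache utility can be used (the path is hypothetical, and the exact jcache invocation may differ between versions):

 jcache /mss/halld/RunPeriod-2014-10/rawdata/Run001234/Run001234_000.evio

When the job is submitted through Auger, declaring the input with the mss: prefix, as in the sketch above, performs this staging automatically.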

Using cron to run automatically

Within /home/gluex/halld/monitoring/cron/ there is a file cron_plugins that can be installed via crontab cron_plugins. This will set up a cron job that calls the script scan_for_jobs.sh, which checks the rawdata directory and calls generatejobs_plugins_rawdata.sh for any run that is more than 5 min. old. The cron job is set up to run every 10 min.
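Below is a minimal sketch of what the crontab entry and the scan logic might look like; apart from the paths and script names quoted above, everything here (in particular the bookkeeping of already-submitted runs) is an assumption:

 # cron_plugins: run the scan every 10 min.
 */10 * * * * /home/gluex/halld/monitoring/cron/scan_for_jobs.sh

 #!/bin/bash
 # scan_for_jobs.sh (sketch): find run directories last modified more
 # than 5 min. ago and submit a monitoring job for each new one.
 RAWDATA=/mss/halld/$RUN_PERIOD/rawdata
 BATCH=/home/gluex/halld/monitoring/batch
 DONE=/home/gluex/halld/monitoring/cron/submitted   # hypothetical bookkeeping dir
 mkdir -p $DONE
 for dir in $(find $RAWDATA -maxdepth 1 -type d -name 'Run*' -mmin +5); do
     run=${dir##*/Run}                 # 6-digit run number
     [ -e $DONE/$run ] && continue     # skip runs already submitted
     $BATCH/generatejobs_plugins_rawdata.sh $run
     $BATCH/run_rawdata_$run.sh
     touch $DONE/$run
 done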

Extracting Monitoring Data