EventStore Administration

This page describes procedures used in administering an EventStore installation.

Configuration

Each admin user should have a configuration file in $HOME/.esdb.conf, which contains user:password entries and several other settings, as illustrated below:

# sample .esdb.conf file
eventstore:eventstoreiscool
sdobbs:thisisnotapassword
ESMASTER=EventStore@hallddb:3306:/var/log/mysql

The users and passwords are used to control access to the master MySQL databases; management of SQLite databases is not access-controlled. Note that the authentication is performed by the EventStore scripts themselves.

NOTE: authentication will change in upcoming versions

Skims and Event Lists

One of the inputs to EventStore is the set of event lists for the various skims. Each file stores the event list for one skim. The file format used is called the IDXA format; it consists of a header line "IDXA" followed by (run, event, uid) triples, as in the following:

# example IDXA file
IDXA
2438 36173 1
2438 1040276 1
2438 1248036 1
2438 1780075 1
2438 1799820 1
[...]

The mappings of event lists to skims are kept in files with entries of the form "skimname::eventlist".

# example skim mapping
2track::/volatile/halld/offline_monitoring/RunPeriod-2014-10/ver10/idxa/002438/EventStore/2track_skim.idxa
2track1pi0::/volatile/halld/offline_monitoring/RunPeriod-2014-10/ver10/idxa/002438/EventStore/2track1pi0_skim.idxa
3track::/volatile/halld/offline_monitoring/RunPeriod-2014-10/ver10/idxa/002438/EventStore/3track_skim.idxa
3track1pi0::/volatile/halld/offline_monitoring/RunPeriod-2014-10/ver10/idxa/002438/EventStore/3track1pi0_skim.idxa
4track::/volatile/halld/offline_monitoring/RunPeriod-2014-10/ver10/idxa/002438/EventStore/4track_skim.idxa
4track1pi0::/volatile/halld/offline_monitoring/RunPeriod-2014-10/ver10/idxa/002438/EventStore/4track1pi0_skim.idxa
5track::/volatile/halld/offline_monitoring/RunPeriod-2014-10/ver10/idxa/002438/EventStore/5track_skim.idxa
5track1pi0::/volatile/halld/offline_monitoring/RunPeriod-2014-10/ver10/idxa/002438/EventStore/5track1pi0_skim.idxa

These mappings are used to build the corresponding skims in the database.

Adding new data to EventStore

The scripts for EventStore data management in GlueX are located in the following directory (assuming the root EventStore directory is given by ESBASEDIR):

$ESBASEDIR/src/AdminScripts

Clean out bad REST files

Build input files and directories

The batch scripts that drive the indexing and cataloging of data use several text files as inputs:

  • data_location - this specifies the full path to the data files. The glob wildcard "*" can be used to match multiple files. [describe how we use the cache disk]
  • eventstore_location - this directory is where the EventStore files (indices, SQLite DBs, log files) are stored. Nothing else should be stored in these directories, since their contents are deleted whenever the injection script is run.
  • idxa_location - this specifies the mapping between skim name and event list for the given run.

We generate versions of these files for both EVIO and REST data on disk.
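
As a rough illustration, the entries in these three files might look like the following for a single REST run. All paths here are hypothetical; the real entries are produced by the script described below:

# data_location: full path to the data files (glob wildcards allowed; hypothetical path)
/cache/halld/RunPeriod-2014-10/recon/ver10/REST/002438/dana_rest_*.hddm

# eventstore_location: where the EventStore files for this run are kept (hypothetical path)
/work/halld/EventStore/RunPeriod-2014-10/ver10/rest_index/002438

# idxa_location: skim-name-to-event-list mapping for this run (hypothetical path)
/volatile/halld/offline_monitoring/RunPeriod-2014-10/ver10/idxa/002438/EventStore/skim_mapping.txt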

The main script for generating these files at JLab is misc/build_eventstore_inputs.py. Before you run the script, make sure that the run period and data revision are properly set in the script itself, e.g.:

RUNPERIOD = "RunPeriod-2014-10"
DATAREVISION = "ver10"

By default, the script generates these files for all available runs and overwrites any existing files. The script also supports running over a user-defined set of runs. For instance, to process new runs 3500-3510, the following command line could be used:

./build_eventstore_inputs.py -b 3500 -e 3510

Indexing Runs

The next step is to build the indexes for each skim and the metadata used by EventStore. The script that performs this is inject.csh. It takes one argument, the run number to be processed. Several variables need to be set for proper injection (here and in the following, REST file processing will be used as an example):

# example inject.csh settings
setenv EVENTSTORE_OUTPUT_GRADE "recon-unchecked"
setenv EVENTSTORE_WRITE_TIMESTAMP "20150212"
setenv DATA_VERSION_NAME "recon_RunPeriod-2014-10_20150206_ver10"
#
setenv EVENTSTORE_BASE_DIR "/work/halld/EventStore/RunPeriod-2014-10/ver10"

Notes:

  • EVENTSTORE_OUTPUT_GRADE gives the grade that this run's data is being injected into. More discussion of the grades used in GlueX is given in EventStore Table Definitions#Grades. A writable grade must be specified.
  • EVENTSTORE_WRITE_TIMESTAMP is an arbitrary timestamp associated with the data, of the form "YYYYMMDD". Conventionally it is the date when injection of a data set started, but this can change depending on the particulars of what you are doing.
  • DATA_VERSION_NAME is the specific version name of the data set, as described in Data Monitoring Procedures#Data Versions.
  • EVENTSTORE_BASE_DIR is the location of the directory you prepared in the previous step.
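
With these variables set, a single run can be injected by hand by passing its run number to the script (the run number below is illustrative):

# index and catalog one run
./inject.csh 2438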


The processing of larger runs can easily take ~1 hour, so to process a large number of runs, we want to run the injection jobs on the batch queues. To do this, create a file containing the list of runs to process, and pass this as an argument to subjobs.pl, e.g.:

./subjobs.pl runlist.txt
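
Here runlist.txt is a plain text file of run numbers. The one-run-per-line format below is an assumption, mirroring the run-list format that merge.sh accepts:

3500
3501
3502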

subjobs.pl has several parameters, which do not need to be changed in normal operation.

A tool for building run lists is the script misc/build_runlist.py.


Note that the scripts for processing EVIO files were written with the JLab batch farm in mind, where the EVIO files are necessarily processed one at a time. The procedure for the administrator is the same as described above, but changes might be needed when running at a different institution.

Merging Runs

Once the EventStore information for individual runs has been created, we can merge the information for the processed runs into the master DB. The script that performs this is merge.sh.

# example merge.sh settings
export MyWorkDir=/work/halld/EventStore/RunPeriod-2014-10/ver10/merge
export MyESDir=/work/halld/EventStore/RunPeriod-2014-10/ver10/rest_index
 
export MasterDB=EventStore@hallddb.jlab.org:3306

Notes:

  • MyESDir points to the directory where the sqlite files are, which is conventionally the same directory as the index files. The script uses find to build a list of the sqlite files. A gzipped tar archive of the sqlite files is made in that same directory, as a backup in case merging fails.
  • MyWorkDir is where several files related to the merging are kept. The run numbers of any failed runs are written to a text file in this directory named failed.lst
  • MasterDB points to the master database. A MySQL DB can be specified, as in the example above, or a SQLite master DB can be used by specifying a file name.
  • By default the script searches MyESDir for sqlite files and merges in all the files it finds. If you only want to merge a specific list of runs, you can put the list into a text file, one run per line, and pass that as an argument to the merge script, e.g.: "merge.sh goodruns.txt"

Merging procedure:

  1. Run merge.sh
  2. Check $MyWorkDir/failed.lst for any runs that failed to merge
  3. Fix the problems and repeat the merge for the failed runs, iterating until failed.lst comes up empty (see the sketch below)
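
A minimal sketch of this loop; feeding failed.lst back to merge.sh assumes it is written in the same one-run-per-line format that the script accepts:

# merge everything found under $MyESDir
./merge.sh
# list any runs that failed to merge
cat $MyWorkDir/failed.lst
# after fixing the underlying problems, re-merge only the failed runs
./merge.sh $MyWorkDir/failed.lst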

Making Data Accessible

Once all the data has been checked and the EventStore metadata has been created, injected, and merged into the main DB, the data version can then be moved to a readable grade for general use. The script that performs this action is moveGrade.sh. These variables must be properly set:

# example moveGrade.sh settings
export MyDB=EventStore@hallddb.jlab.org:3306
export OldGrade=recon-unchecked
export NewGrade=recon
export MyDataVersionName=recon_RunPeriod-2014-10_20150206_ver10
export OldTime=20150212
export NewTime=20150206
 
export MyLogDir=/work/halld/EventStore/RunPeriod-2014-10/ver10/logs

Notes:

  • MyDB should point to the master database that you merged into in the previous step.
  • OldGrade is the grade you injected the data with; NewGrade is the final grade. For a more detailed discussion of the grades used by GlueX, see EventStore Table Definitions#Grades.
  • OldTime is the timestamp you injected the data with; NewTime is the timestamp that users will access the data with. Note that there does not have to be any particular relation between these times; NewTime can even be before OldTime, if you want. A classic trick when processing a dataset incrementally (say, during data taking) is to inject each group of runs into an -unchecked grade with its own timestamp, and then move every group to the same timestamp as the rest of the runs, as sketched below.
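
A minimal sketch of that trick for two batches of runs, with illustrative timestamps; it assumes the moveGrade.sh settings are adjusted between invocations (by editing the script or, as below, exporting the variables):

# both batches end up under the same user-visible timestamp in the recon grade
export OldGrade=recon-unchecked
export NewGrade=recon
export NewTime=20150206

# first batch, injected with timestamp 20150212
export OldTime=20150212
./moveGrade.sh

# second batch, injected a day later
export OldTime=20150213
./moveGrade.sh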