Translation Table

Introduction

The translation table is responsible for converting digitized values indexed by Crate, Slot, and Channel into detector-specific indexing (for the BCAL, for example: Module, Layer, Sector, End). Nearly 23k channels are defined for the GlueX detector, with each subsystem having its own natural indexing system. The BCAL, for instance, requires four indices (Module, Layer, Sector, End) while the Start Counter requires only one (Sector). This page provides information on the Translation Table, including how to access it and how to change it.
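
Conceptually, the translation is a lookup keyed on the DAQ coordinates. Below is a minimal sketch in Python; the index values are invented for illustration, and the real table holds nearly 23k entries:

  # Hypothetical illustration of the translation: DAQ coordinates
  # (crate, slot, channel) map to detector-specific indices. The
  # specific numbers below are made up, not real channel data.
  from collections import namedtuple

  BCALIndex = namedtuple('BCALIndex', ['module', 'layer', 'sector', 'end'])

  tt = {
      (12, 3, 0): BCALIndex(module=1, layer=1, sector=1, end='U'),
      (12, 3, 1): BCALIndex(module=1, layer=1, sector=2, end='U'),
  }

  print(tt[(12, 3, 0)])  # BCALIndex(module=1, layer=1, sector=1, end='U')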

History

The origin of the TT data was a large Excel spreadsheet maintained by Fernando Barbosa in DocDB with document ID=2293. Scripts were written to read this spreadsheet and write the information into a SQLite DB file. The SQLite DB file was then modified to include information formatted in a way convenient for use in the control system applications. This single SQLite file now serves as the definitive source of translation table information for both the controls and DAQ electronics. The file is kept in the subversion repository here:

https://halldsvn.jlab.org/repos/trunk/controls/epics/app/hvCaenApp/src/tt.db

This, however, is not the end of the story. Offline software requires a history mechanism for the TT so that if values are changed (e.g. a cable is swapped during a maintenance period), a new version can be used without preventing us from processing data taken with the old TT. Thus, for the purposes of offline software, the TT is converted into an XML file and stored in the CCDB. This also makes it easily accessible to offline software using the existing CCDB interface. To put the TT into the CCDB, it must first be converted into XML using one script and then inserted using another. It is important to emphasize that the above-mentioned SQLite file should be maintained as the definitive source of TT information, even for the offline software. If a change needs to be made, it should be made to that file and the tt.xml file regenerated from it. Don't just modify a tt.xml file downloaded from the CCDB and reinsert it!

HV and LV channel information was later added that supplemented what was found in the spreadsheet, so it is no longer practical to use the spreadsheet as the definitive source and regenerate the SQLite DB file from it. As such, the SQLite file is now considered the definitive source, and modifications are generally applied via scripts as described below.

Implementation

Modifying

The current procedure for modifying the TT is not terribly convenient, and changes should be made with care. There are nearly 23k channels defined in the TT, and mistakes are very hard to track down.

At this point, there has been enough evolution that it is not possible to recreate the DB from the original Excel spreadsheet. Modifications should be made to the SQLite DB file itself and a log of the changes recorded in svn with the file. To make changes in the SQLite DB available to the offline software, a new tt.xml file must be generated from it and uploaded to the CCDB with appropriate comments that also indicate the changes.

Changes to the TT may follow a procedure such as this:


1. Check out the TT package from the repository:

> svn co https://halldsvn.jlab.org/repos/trunk/online/packages/TranslationTable
> setenv TTT $PWD/TranslationTable


2. Check out a copy of the current SQLite file from the repository. (We must check out the entire source directory so that the modified file can be committed back to the repository later.)

> svn co https://halldsvn.jlab.org/repos/trunk/controls/epics/app/hvCaenApp
> setenv TTDB $PWD/hvCaenApp/src/tt.db


3. Modify the SQLite file "tt.db" (pointed to by the TTDB environment variable set in the previous step). In most cases, it is recommended that this be done via a new Python script; a minimal sketch of what such a script might look like is given after the list below. There are numerous examples of such scripts in the $TTT directory, with names like "tt_fix_tagger.py" and "tt_fix_fdc_strip_no2.py". Scripts are used because they:

  1. provide a very clear record of exactly what commands were used to apply the fix
  2. allow for extended comments
  3. can check if the fix has already been applied
  4. can perform integrity checks on the resulting tables
  5. can be stored in svn, so other users can back out previous changes by checking out earlier versions of tt.db and then reapply your fix by running your script
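
The sketch below shows the general shape such a fix script might take, assuming a hypothetical "Channel" table and made-up index values; the real TT schema and any real fix will differ:

  #!/usr/bin/env python
  #
  # tt_fix_example.py -- hypothetical sketch of a TT fix script.
  # The table name ("Channel"), its columns, and all index values
  # here are illustrative assumptions, not the real TT schema or a
  # real fix. The structure follows the points above: check whether
  # the fix was already applied, apply it, then verify the result.

  import sys
  import sqlite3

  if len(sys.argv) != 2:
      sys.exit('Usage: tt_fix_example.py tt.db')

  conn = sqlite3.connect(sys.argv[1])
  cur = conn.cursor()

  # Has the fix already been applied? In this made-up example, channel 7
  # of crate 12, slot 3 maps to sector 2 once the fix is in place.
  cur.execute('SELECT sector FROM Channel WHERE crate=12 AND slot=3 AND channel=7')
  row = cur.fetchone()
  if row and row[0] == 2:
      sys.exit('Fix already applied -- nothing to do')

  # The fix itself: channels 6 and 7 were cabled backwards, so swap
  # them. A temporary value (-1) avoids a collision mid-swap.
  cur.execute('UPDATE Channel SET channel=-1 WHERE crate=12 AND slot=3 AND channel=6')
  cur.execute('UPDATE Channel SET channel=6  WHERE crate=12 AND slot=3 AND channel=7')
  cur.execute('UPDATE Channel SET channel=7  WHERE crate=12 AND slot=3 AND channel=-1')

  # Integrity check: nothing should be left holding the temporary value.
  cur.execute('SELECT COUNT(*) FROM Channel WHERE channel=-1')
  if cur.fetchone()[0] != 0:
      conn.rollback()
      sys.exit('Integrity check failed -- no changes committed')

  conn.commit()
  conn.close()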

Most of the scripts in the repository require the name of the SQLite file as the only argument. This is convenient for testing the script using temporary copies of the DB. For example:

> cp $TTDB tt_test.db
> tt_fix_detectorX.py tt_test.db

Make sure to add your script to the https://halldsvn.jlab.org/repos/trunk/online/packages/TranslationTable directory of the svn repository.


4. Use the tt_db2xml.py script to generate an XML file for use in the offline software:

> $TTT/tt_db2xml.py $TTDB

This will create a file tt.xml in the local directory containing the TT in a form that can be used by the offline software. You can actually use this file to test that everything works before committing it to the CCDB. See the Offline Analysis of DAQ Data section below.
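
As a quick first check, one can at least verify that the generated file is well-formed XML. This assumes nothing about the tt.xml schema itself:

  # Well-formedness check on the generated tt.xml. This makes no
  # assumption about the schema; it only confirms the file parses and
  # reports the root tag and the number of top-level elements.
  import xml.etree.ElementTree as ET

  root = ET.parse('tt.xml').getroot()
  print(root.tag, len(root))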


5. Add the XML file to the CCDB using the $TTT/add_custom_data.py script.
WARNING: Without modification, this script will write the file "tt.xml" into the MySQL-backed CCDB regardless of what your environment is set to! It will also write it to cover all run numbers (0 to infinity). You will need to MODIFY THE SCRIPT BY HAND to change these defaults. There are at least two changes you should make to the add_custom_data.py file (sketched after this list):

  • Change the "provider.authentication.current_user_name" from anonymous to your username
  • Change the comment at the bottom of the "provider.create_assignment()" call to something reasonable.
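
A minimal sketch of those two edits is below. Only the two names are taken from the script itself; the username and the surrounding code are illustrative assumptions:

  # Sketch of the hand edits to add_custom_data.py described above.

  # 1. Authenticate as yourself rather than anonymously:
  provider.authentication.current_user_name = "your_username"   # was "anonymous"

  # 2. At the bottom of the script, replace the comment argument of the
  #    provider.create_assignment(...) call with a description of your
  #    change, e.g. "Swapped BCAL crate 12 channels 6 and 7".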

Once you've made the changes, you can commit the tt.xml file by just running the script:

> $TTT/add_custom_data.py

N.B. you should generally not commit your modifications to the add_custom_data.py script itself back to the repository.


6. Commit the new SQLite file to the subversion repository. Make sure to include a detailed description of what changes were made in the subversion commit log.

> cd hvCaenApp/src
> svn commit tt.db


7. You should notify the appropriate groups to make sure people are aware of the change. Consider e-mailing the halld-controls@jlab.org and halld-offline@jlab.org mailing lists.


Offline Analysis of DAQ Data

Use of the TT in sim-recon is handled automatically by the TTab plugin. Generally, one needs to use the DAQ plugin to read in the EVIO data and generate low-level objects with crate, slot, channel indexing, and then use the TTab plugin to apply the translation table. For example:

> hd_ana -PPLUGINS=DAQ,TTab,danarest file000.evio

By default, the tt.xml file is obtained from the CCDB using the standard mechanism (i.e. whatever your JANA_CALIB_URL environment variable is pointing to). You can, however, tell the DANA program to use a local tt.xml file via a configuration parameter:

> hd_ana -PPLUGINS=DAQ,TTab,danarest -PTT:NO_CCDB=1 file000.evio

Alternatively, you can specify a different XML file. Note that if your file is named "tt.xml", you should use the form above; only use the following if your XML file is named something else:

> hd_ana -PPLUGINS=DAQ,TTab,danarest -PTT:XML_FILENAME=tt_test.xml file000.evio


Controls

The Controls Group implemented a table named Detector_Hierarchy and added two columns to the Crate table ("host" and "IP") and one column to the Channel table ("enable"). In addition, some other modifications were needed, including additional rows in the Channel table and adjustment of slot values in the Module table. The slot adjustment mainly consists of shifting the values down by one, since most crates start numbering their slots at 0.
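
A minimal sketch of those schema changes in Python follows. The table and column names come from the paragraph above; the column types, the Detector_Hierarchy layout, and the unconditional slot shift are assumptions:

  # Sketch of the Controls Group schema changes described above.
  # Table/column names are from the text; everything else (types,
  # the Detector_Hierarchy columns, the blanket slot shift) is an
  # illustrative assumption.
  import sqlite3

  conn = sqlite3.connect('tt.db')
  cur = conn.cursor()

  cur.execute('CREATE TABLE IF NOT EXISTS Detector_Hierarchy (id INTEGER PRIMARY KEY)')
  cur.execute('ALTER TABLE Crate ADD COLUMN host TEXT')
  cur.execute('ALTER TABLE Crate ADD COLUMN IP TEXT')
  cur.execute('ALTER TABLE Channel ADD COLUMN enable INTEGER')

  # Shift slot numbers down by one (most crates number slots from 0).
  cur.execute('UPDATE Module SET slot = slot - 1')

  conn.commit()
  conn.close()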


Useful Links

The following are some useful links:

Visual translation table browser