Difference between revisions of "Goals for Spring 2014 Data Challenge"


Revision as of 09:58, 5 December 2013

This page is currently undergoing a major revision and will become a series of milestones leading to a Spring 2014 Online Data Challenge


Misc things to possibly be included in milestones (needs work):

  • stable coda3 release under Hall D control
  • multi-stage event building
  • farm manager and farm processes
  • test ROLs compiled using the scons build system
  • prototype scripts including DB bookkeeping and auto elog entries
  • new run control GUI features (script sets, RO vs RW COOL areas, etc.)
  • test new COOL development/archive strategy
  • test ROL archive/recovery strategy using svn (or git)
  • test disentangling - where?
  • test compression - where?
  • test secondary ROLs
  • test injecting MC information into ROC
  • test loading front-end boards with playback info
  • full crate readout including all modules
  • full TS/TI system
  • full CTP/SSP/GTP system
  • monitor all detectors pre- and post-L3
  • L3 in pass-through and with rejection, L3 event marking
  • L3 monitoring and RootSpy
  • test new RootSpy features: archiving, etc.
  • scaler readout
  • coda start/stop scripting
  • measure and histogram ROL times
  • use mini-HBook/RootSpy system to collect and transmit histograms
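
Two of the bullets above ("measure and histogram ROL times" and the mini-HBook/RootSpy item) boil down to accumulating fixed-bin histograms inside the readout code. A minimal Python sketch of such an accumulator (hypothetical class and names, not the actual mini-HBook API):

```python
class TimingHist:
    """Fixed-bin histogram, e.g. for readout-list (ROL) execution times."""

    def __init__(self, nbins, lo, hi):
        self.nbins, self.lo, self.hi = nbins, lo, hi
        self.bins = [0] * nbins
        self.under = self.over = 0  # out-of-range entries

    def fill(self, t):
        if t < self.lo:
            self.under += 1
        elif t >= self.hi:
            self.over += 1
        else:
            # linear bin index within [lo, hi)
            self.bins[int((t - self.lo) / (self.hi - self.lo) * self.nbins)] += 1

# Example: histogram ROL times in microseconds, 100 bins over [0, 1000)
h = TimingHist(100, 0.0, 1000.0)
for t in (12.5, 480.0, 990.0, 1500.0):
    h.fill(t)
```

In the real system the filled histograms would be serialized and shipped to RootSpy for display and archiving rather than kept in memory.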



Goals for the Dec GlueX online Data Challenge 2013

11-Oct-2013 E. Wolin


The primary goal of the Dec 2013 Online Data Challenge is to test the entire trigger/DAQ/monitoring chain, from the front-end ROCs to the tape silo, using production computing and networking systems, at relatively low rates and with both simulated and triggered event data.

We will use the full TS/TD/TI/SD system unless the trigger fibers are not ready, in which case polling on the TIs will be used instead. Triggers will come from a random pulser in the TS, i.e. the CTP/SSP/GTP system will not be used. CODA3 will be used to collect data from the ROCs (as many as are available, minimum 12), build them into full events in two stages, and pass them to the event recorder via the L3 farm. A preliminary version of the farm manager CODA component will manage L3 and monitoring farm processes. The CODA3 run control facility will be used to manage the DAQ system. We will further use the production RAID-to-silo transfer mechanism, initiated manually, since the CC has not yet developed an automated system.

Various L3 rejection algorithms, including no rejection, will be employed. All detectors will be monitored via plugins containing, at minimum, occupancy histograms for all detector elements. RootSpy will be used to aggregate and display the monitoring histograms.
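
Conceptually, the L3 stage is a filter that either passes every event (pass-through mode) or applies a rejection predicate, marking each event with its decision either way. A minimal sketch with a hypothetical event representation and predicate (not the actual JANA/L3 interface):

```python
def l3_filter(events, reject=None):
    """Yield accepted events; reject=None means pass-through mode.
    Every event is marked with its L3 decision before filtering."""
    for ev in events:
        accepted = reject is None or not reject(ev)
        ev["l3_accepted"] = accepted  # L3 event marking
        if accepted:
            yield ev

events = [{"ntracks": 0}, {"ntracks": 3}]

# Pass-through mode: everything survives
kept = list(l3_filter(events))

# With rejection: drop events with no reconstructed tracks
kept2 = list(l3_filter(events, reject=lambda ev: ev["ntracks"] == 0))
```

Because rejected events are still marked before being dropped, downstream monitoring can compare pre- and post-L3 samples of the same stream.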

Input to the ROCs will come from two sources: simulated data written to EVIO files using the MC2CODA package, and actual triggered data read out of the front-end modules. In the former case the ROCs will treat the data as if they had read it out of the front-end digitizer boards. For runs using simulated data, since only a subset of all ROCs will participate, additional simulated event data will be added at a separate stage downstream of the ROCs in order to create complete events suitable for monitoring and L3 analysis.
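
The event-completion step described above (adding simulated data for the non-participating ROCs downstream of the builders) can be sketched as merging, per event, the banks actually read out with pre-generated simulated banks for the missing crates. Plain dictionaries stand in for EVIO banks here; all names are hypothetical:

```python
def complete_event(event, simulated, all_rocs):
    """Fill in banks for ROCs that did not participate in the run,
    using simulated data keyed by (event number, ROC id)."""
    for roc in all_rocs:
        if roc not in event["banks"]:
            event["banks"][roc] = simulated[(event["evnum"], roc)]
    return event

all_rocs = {1, 2, 3}                      # full detector: three crates
sim = {(7, 3): b"sim-roc3"}               # pre-generated data for ROC 3
ev = {"evnum": 7, "banks": {1: b"data1", 2: b"data2"}}
complete_event(ev, sim, all_rocs)         # event now carries all three banks
```

Keying the simulated banks by event number keeps the merge deterministic, so monitoring and L3 see the same "complete" event on every pass.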

Note: for the next data challenge after this one, perhaps in Spring 2014 or whenever the hardware is ready, we envision a complete system test using all ROCs, triggering from the CTP/SSP/GTP system, and full event monitoring with production-quality monitoring plugins. If possible it should further include use of the production trigger configuration system, the DAQ and run bookkeeping system, the slow-controls detector monitoring system, and the alarm collection and display system.


The goals of the Dec 2013 ODC are:

  1. Test the DAQ stream from the ROCs to the silo using both real and simulated data, with triggers generated by the TS/TD/TI/SD system:
    • generate simulated data with L1 rejection applied, create simulated EVIO data files via MC2CODA package
    • program the front-end modules, TS, TDs, TIs and SDs as required
    • create appropriate COOL configuration files, use CODA run control to start and configure DAQ system
    • use a low-rate random pulser in the TS as the trigger, interrupting the ROCs at the rate expected during low-intensity production running
    • upon interrupt, read data out of the front-end modules or read MC2CODA data from files, and forward the data to the event builders
    • build events using two event builder stages
    • add missing ROC data after the final event builder stage using a special program written for this purpose
    • send data to L3 farm, implement and evaluate various L3 rejection algorithms including no rejection (e.g. for real data, which will be mostly noise or pedestals)
    • forward accepted data to ER to write to RAID storage disks
    • transfer data files to the silo using the production mechanism and a dedicated tape unit
    • measure data rates, cpu performance and other relevant parameters at all stages
  2. Test complete RootSpy detector monitoring system:
    • implement plugins for all detectors and for L3 monitoring
    • deploy L3 and monitoring farm processes, ET systems and ET transfer facilities
    • use RootSpy to collect, aggregate and display histograms on multiple monitors in the counting house
    • compare histograms to archived reference set
    • archive histograms at the end of each run
    • automatically produce web pages showing L3 and monitoring histograms
  3. Test farm manager:
    • implement codaObject communications in JANA-based L3 farm process
    • develop farm manager CODA component
    • use farm manager to start/stop/monitor L3 farm processes at run start/stop
    • cripple the farm so it fails to meet minimum requirements; ensure the farm manager takes appropriate actions
  4. Monitor system performance:
    • use Ganglia to monitor all aspects of system performance
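
The rate measurements called for in goal 1 amount to periodically sampling monotonically increasing event and byte counters at each stage and differencing them. A minimal sketch (hypothetical counter names; timestamps are passed in explicitly so the arithmetic is clear):

```python
class RateMonitor:
    """Compute event and data rates from monotonically increasing counters."""

    def __init__(self, t0=0.0):
        self.t, self.events, self.nbytes = t0, 0, 0

    def sample(self, t, events, nbytes):
        """Return (events/s, bytes/s) averaged since the previous sample."""
        dt = t - self.t
        rates = ((events - self.events) / dt, (nbytes - self.nbytes) / dt)
        self.t, self.events, self.nbytes = t, events, nbytes
        return rates

m = RateMonitor()
m.sample(10.0, 2000, 4_000_000)  # (200.0, 400000.0): 200 Hz, 400 kB/s
```

In practice such numbers would be published per stage (ROC, event builder, L3, ER) and collected alongside the Ganglia host metrics.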