Goals for Spring 2014 Data Challenge

Currently undergoing major revision; this will turn into a series of milestones leading to a Spring 2014 Online Data Challenge.

Milestones for May 2014 Data Challenge:

  • develop system to manage and compile ROLs using svn and the SCons build system (see the SConstruct sketch after this list)
  • install CODA 3.0 release, place under Hall D control
  • test CODA 3.0 release
  • implement farm manager agent
  • test farm processes connected to farm manager
  • set up COOL_HOME, develop and implement code management strategy
  • test COOL runscripts and configscripts
  • implement program to prepare mc2coda data for injection into data stream in a ROC
  • inject MC information into ROC for arbitrary block level
  • implement program to re-entangle mc2coda data for an arbitrary block level (see the re-blocking sketch after this list)
  • develop DAQ start/stop scripts
  • test single full-crate readout at various block levels
  • test multi-crate readout
  • test L3 in pass-through and with rejection
  • test monitoring of all detectors pre- and post-L3
  • implement L3 event monitoring
  • implement L3 event marking
  • test new RootSpy features: archiving, etc.
  • measure, histogram and display ROL times and other front-end information
  • use mini-HBook/RootSpy system to collect and transmit histograms
  • test multi-stage event building
  • test secondary ROLs
  • test multiple output streams using varied criteria
  • test multi-crate readout with full TI/TS system using TS pulser
  • test multi-crate readout using trigger generated by CTP/SSP/GTP system
  • test various detector-generated triggers
  • test crate readout using front-end board playback mode
  • run tests similar to those above, but at full rate
  • get monitoring histograms from detector groups
  • test DAQ with network connection to CC disabled
  • test ROL archive/recovery strategy using svn (or git)
  • test disentangling
  • test compression
  • test scaler readout
  • create prototype scripts including DB bookkeeping and auto elog entries
  • integrate slow controls (HV, LV, etc.) into run control
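
For the ROL build milestone above, a minimal SConstruct sketch might look like the following. The rols/ directory layout, compiler flags, and CODA install path are assumptions, not the actual Hall D setup:

    # SConstruct -- minimal sketch for building each ROL source under
    # rols/ (kept in svn) into its own shared object
    import glob
    import os

    coda = os.environ.get('CODA', '/site/coda/3.0')   # hypothetical install path

    env = Environment(CC='gcc',
                      CCFLAGS=['-O2', '-fPIC'],
                      CPPPATH=[os.path.join(coda, 'common', 'include')],
                      SHLIBPREFIX='')   # produce rolname.so, not librolname.so

    for src in glob.glob('rols/*.c'):
        env.SharedLibrary(target=os.path.splitext(src)[0], source=src)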

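The re-entangling milestone is, at its core, a regrouping of events at a new block level. The toy sketch below shows the idea on plain Python lists; real mc2coda/EVIO data carries bank headers and per-ROC structure omitted here:

    def reblock(blocks, block_level):
        """Flatten existing blocks into single events, then regroup
        them into new blocks of block_level events each."""
        events = [ev for blk in blocks for ev in blk]
        return [events[i:i + block_level]
                for i in range(0, len(events), block_level)]

    # data blocked two at a time, re-entangled at block level 4:
    print(reblock([[1, 2], [3, 4], [5, 6], [7, 8]], 4))
    # -> [[1, 2, 3, 4], [5, 6, 7, 8]]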


Goals for the Dec 2013 GlueX Online Data Challenge

11-Oct-2013 E. Wolin


The primary goal of the Dec 2013 Online Data Challenge is to test the entire Trigger/DAQ/monitoring chain, from the front-end ROCs to the tape silo, using production computing and networking systems at relatively low rates, with both simulated and triggered event data.

We will use the full TS/TD/TI/SD system unless the trigger fibers are not ready, in which case polling on the TIs will be used instead. Triggers will come from a random pulser in the TS, i.e. the CTP/SSP/GTP system will not be used. CODA3 will be used to collect data from the ROCs (as many as are available, with a minimum of 12), build them into full events in two stages, and pass them to the event recorder via the L3 farm. A preliminary version of the farm manager CODA component will manage the L3 and monitoring farm processes. The CODA3 run control facility will be used to manage the DAQ system. We will further use the production RAID-to-silo transfer mechanism, initiated manually since the CC has not yet developed an automated system.
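
As a rough illustration of the two-stage event build, the sketch below has first-stage builders assemble fragments from assumed groups of ROCs and a second stage merge the partial events; the grouping and names are hypothetical, and the real CODA components are far more involved:

    FIRST_STAGE_GROUPS = [["roc1", "roc2"], ["roc3", "roc4"]]  # assumed crate grouping

    def build_stage(fragments, rocs):
        """Stage 1: collect the fragment from each ROC in this group."""
        return {roc: fragments[roc] for roc in rocs}

    def build_event(fragments):
        """Stage 2: merge the per-group partial events into one full event."""
        partials = [build_stage(fragments, g) for g in FIRST_STAGE_GROUPS]
        full = {}
        for p in partials:
            full.update(p)
        return full

    print(build_event({f"roc{n}": f"bank{n}" for n in range(1, 5)}))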

Various L3 rejection algorithms, including no rejection, will be employed. All detectors will be monitored via plugins containing, at minimum, occupancy histograms for all detector elements. RootSpy will be used to aggregate and display the monitoring histograms.
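
The sketch below shows, in PyROOT rather than in the actual C++ JANA plugins, the kind of per-element occupancy histogram each detector plugin would fill; the element count and hit format are placeholders:

    import ROOT

    N_ELEMENTS = 3522   # hypothetical element count for one detector

    h_occ = ROOT.TH1D("occupancy",
                      "Detector occupancy;element number;hits",
                      N_ELEMENTS, 0.5, N_ELEMENTS + 0.5)

    def process_event(hit_elements):
        """Fill one entry per hit element; RootSpy would then collect,
        aggregate and display this histogram across farm processes."""
        for element in hit_elements:
            h_occ.Fill(element)

    process_event([12, 12, 507, 3100])   # toy event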

Input to the ROCs will come from two sources: simulated data written to EVIO files using the MC2CODA package, and actual triggered data read out of the front-end modules. In the former case the ROCs will treat the data as if they had read it out of the front-end digitizer boards. For runs using simulated data, since only a subset of all ROCs will participate, additional simulated event data will be added at a separate stage downstream of the ROCs in order to create complete events suitable for monitoring and L3 analysis.
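
Schematically, that downstream completion step might look like the following, with events built from the live ROCs topped up using simulated banks for the ROCs that did not participate; the ROC names and dict-based event structure are stand-ins for real EVIO banks:

    ALL_ROCS = {f"roc{n}" for n in range(1, 13)}   # assume 12 ROCs in total

    def complete_event(built_event, simulated_event):
        """built_event and simulated_event map ROC name -> data bank for
        the same event number; copy in simulated banks for any ROC
        missing from the built event."""
        merged = dict(built_event)
        for roc in ALL_ROCS - set(built_event):
            merged[roc] = simulated_event[roc]
        return merged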

Note: for the next data challenge after this one, perhaps in Spring 2014 or whenever the hardware is ready, we envision a complete system test using all ROCs, triggering from the CTP/SSP/GTP system, and full event monitoring with production-quality monitoring plugins. If possible it should further include use of the production trigger configuration system, the DAQ and run bookkeeping system, the slow-controls detector monitoring system, and the alarm collection and display system.


The goals of the Dec 2013 ODC are:

  1. Test DAQ stream from ROCs to silo using both real and simulated data, with triggers generated by the TS/TD/TI/SD system:
    • generate simulated data with L1 rejection applied, create simulated EVIO data files via the MC2CODA package
    • program front-end modules, TS, TDs, TIs and SDs as required
    • create appropriate COOL configuration files, use CODA run control to start and configure the DAQ system
    • use low-rate random pulser in the TS as trigger, interrupt ROCs at the rate expected during low-intensity production running
    • upon interrupt, read data out of front-end modules or read MC2CODA data from files, forward data to event builders
    • build events using two event builder stages
    • add missing ROC data after the final event builder stage using a special program written for this purpose
    • send data to L3 farm, implement and evaluate various L3 rejection algorithms including no rejection (e.g. for real data, which will be mostly noise or pedestals); a schematic of the filter stage appears after this list
    • forward accepted data to the ER to write to RAID storage disks
    • transfer data files to silo using the production mechanism and a dedicated tape unit
    • measure data rates, CPU performance and other relevant parameters at all stages
  2. Test complete RootSpy detector monitoring system:
    • implement plugins for all detectors and for L3 monitoring
    • deploy L3 and monitoring farm processes, ET systems and ET transfer facilities
    • use RootSpy to collect, aggregate and display histograms on multiple monitors in the counting house
    • compare histograms to archived reference set
    • archive histograms at the end of each run
    • automatically produce web pages showing L3 and monitoring histograms
  3. Test farm manager:
    • implement codaObject communications in JANA-based L3 farm process
    • develop farm manager CODA component
    • use farm manager to start/stop/monitor L3 farm processes at run start/stop
    • cripple the farm so that it fails to meet minimum requirements, ensure the farm manager takes appropriate action
  4. Monitor system performance:
    • use Ganglia to monitor all aspects of system performance
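
As referenced in goal 1, the L3 filter stage with pluggable rejection algorithms might be organized as in the sketch below; the algorithms and event structure are illustrative only, with accept_all standing in for the "no rejection" pass-through mode:

    def accept_all(event):
        """Pass-through mode: every event is forwarded to the ER."""
        return True

    def min_track_cut(event, min_tracks=1):
        """A hypothetical physics cut: keep events with reconstructed tracks."""
        return event.get("ntracks", 0) >= min_tracks

    def run_l3(events, algorithm=accept_all):
        """Apply the chosen rejection algorithm and report the kept fraction."""
        accepted = [ev for ev in events if algorithm(ev)]
        print(f"L3 kept {len(accepted)}/{len(events)} events")
        return accepted

    run_l3([{"ntracks": 0}, {"ntracks": 2}], algorithm=min_track_cut)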