Difference between revisions of "Online Design Goals"

From GlueXWiki
Revision as of 16:42, 17 August 2007

Overview

Below I list the overall specifications, performance requirements, and design goals of the Hall D DAQ/Online/Controls systems. All groups working on the project, e.g. the JLab DAQ group, the JLab Electronics group, etc., must design to meet them.

I propose specifying and planning the entire project in three documents:

The first sets the overall parameters of the project. The second adds the time element to the first and specifies major deliverables without going into great detail. The third is a fine breakdown that goes into detail and includes assignment of responsibilities.

Other JLab groups will develop similar documents; these will then be reconciled, and additional performance milestones, etc., will be developed.


Basic Requirements from Hall D Design Report

Important: High luminosity capability is NOT a CD-4 deliverable, and will only be achieved if additional funds can be procured. Thus at CD-4 all systems need only be capable of upgrade for high luminosity. An example of this is the prototype L3 farm, which at turn-on will only be used for online monitoring, but which needs to be expandable to implement the full L3 trigger.

The Hall D DAQ system will be composed of:

  • trigger system
  • approximately 80 front-end synchronous crates
  • timing distribution system
  • a dozen or so asynchronous data sources
  • a few dozen additional software components that do not generate high-speed data, but need to be integrated into the run control system
  • all the associated computers and software needed to:
    • configure the system
    • take data
    • build events
    • store events on local disk
    • transfer event files to permanent storage


NOTE...event size is currently being reevaluated in light of recent background studies


At turn-on Hall D will accept 10^7 photons/sec, with an expected L1 trigger rate of 18 kHz. At high luminosity the beam rate will be ten times higher, 10^8 photons/sec, giving an expected trigger rate of 180 kHz assuming the same L1 rejection. With an average event size of 4 kByte, the data rate off the detector will be 72 MByte/sec at low luminosity and 720 MByte/sec at high luminosity. At low luminosity there will be no L3 rejection, and all events will be written to disk (at 72 MByte/sec). At high luminosity we expect an L3 rejection factor of 10, so the rate to disk will also be 72 MByte/sec.
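
As a back-of-the-envelope cross-check of these numbers, here is a minimal sketch in Python (illustrative only; the function and variable names are not part of any Hall D software):

  # Back-of-the-envelope DAQ rate estimates, using the numbers quoted above.
  # Illustrative sketch only; names are not part of any Hall D software.

  def data_rate_mb_per_s(l1_rate_hz, event_size_kb):
      """Data rate in MByte/sec for a given L1 trigger rate and average event size."""
      return l1_rate_hz * event_size_kb / 1000.0

  EVENT_SIZE_KB = 4  # average event size (currently being reevaluated)

  # Low luminosity: 10^7 photons/sec -> 18 kHz L1 rate, no L3 rejection
  low_off_detector = data_rate_mb_per_s(18_000, EVENT_SIZE_KB)     # 72 MByte/sec
  low_to_disk = low_off_detector                                   # no L3 rejection

  # High luminosity: 10^8 photons/sec -> 180 kHz L1 rate, L3 rejection factor of 10
  high_off_detector = data_rate_mb_per_s(180_000, EVENT_SIZE_KB)   # 720 MByte/sec
  high_to_disk = high_off_detector / 10                            # 72 MByte/sec

  print(low_off_detector, low_to_disk)     # 72.0 72.0
  print(high_off_detector, high_to_disk)   # 720.0 72.0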

Notes: All front-end DAQ boards will be pipelined to handle the high trigger rate without deadtime. Triggers and backplane interrupts must be distributed to all front-end crates. Timing distribution must be appropriate for F1TDCs, 250 MHz FADCs, 100 MHz FADCs, and perhaps a few other miscellaneous modules.

The online and controls effort consists of developing, configuring, controlling, and/or monitoring the following:

  • approximately 80 front-end crates and associated detector electronics
  • a few dozen compute servers, a file server, a RAID system, and associated computer equipment
  • a prototype L3 farm consisting of a small number of nodes (upgradable to as many as 200 nodes)
  • a GBit wired and wireless networking system (eventually we'll need a 10 GBit system)
  • a few hundred detector control points, where e.g. an HV control point may include hundreds of actual channels (see the sketch after this list)
  • at least one PLC, controlling the solenoid magnet and other devices
  • many hundreds of alarm channels
  • interface to JLab accelerator controls system
  • event display
  • data quality monitoring system
  • archive system for monitoring and controls data
  • run bookkeeping system
  • electronic operator log
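
To make the control-point and alarm-channel bookkeeping above concrete, here is a minimal sketch (illustrative Python; the class and field names are assumptions, not part of any Hall D controls design):

  # Minimal sketch of the control-point / channel hierarchy described above.
  # Class and field names are illustrative assumptions only.
  from dataclasses import dataclass, field

  @dataclass
  class Channel:
      name: str            # one physical channel, e.g. a single HV channel
      setpoint: float
      readback: float = 0.0
      in_alarm: bool = False

  @dataclass
  class ControlPoint:
      """A logical control point (e.g. a detector HV group) aggregating many channels."""
      name: str
      channels: list = field(default_factory=list)

      def alarm_channels(self):
          return [c for c in self.channels if c.in_alarm]

  # A few hundred control points, each with possibly hundreds of channels,
  # is where the 'many hundreds of alarm channels' above come from.
  hv = ControlPoint("example-hv", [Channel(f"hv_{i:03d}", 1500.0) for i in range(200)])
  print(len(hv.channels), len(hv.alarm_channels()))   # 200 0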


DAQ Design Goals

The initial design must satisfy the low-luminosity CD-4 deliverables, although systems should be capable of upgrade to high luminosity. Note that at high luminosity events will be written from the L3 farm to disk, while at low luminosity they will be written from an earlier stage.

The DAQ design must include some headroom above the expected rates. Thus I propose the following design goals and parameters for the Hall D DAQ (numbers in parentheses are for high luminosity; a small consistency check of these numbers follows the list):


NOTE...event size is currently being reevaluated in light of recent background studies


  • Accepted L1 trigger rate - 20 kHz (200 kHz)
  • Average event size - 5 kByte (5 kByte)
  • Data rate off detector - 100 MByte/sec (1 GByte/sec)
  • Rate to prototype L3 farm - 20 MByte/sec (1 GByte/sec to full L3 farm)
  • L3 rejection - no rejection (factor of 10)
  • Rate to local RAID disk - 100 MByte/sec from event builder stage (100 MByte/sec from L3 farm)
  • Rate to silo - 100 MByte/sec (100 MByte/sec)
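
The consistency check promised above: a minimal Python sketch (illustrative only) that derives the off-detector and to-disk rates from the trigger rate, event size, and L3 rejection:

  # Consistency check of the proposed design goals (illustrative sketch only).

  def rates(l1_rate_hz, event_size_kb, l3_rejection):
      """Return (rate off detector, rate to disk) in MByte/sec."""
      off_detector = l1_rate_hz * event_size_kb / 1000.0
      return off_detector, off_detector / l3_rejection

  # Low luminosity: 20 kHz L1, 5 kByte events, no L3 rejection
  print(rates(20_000, 5, 1))     # (100.0, 100.0) -> 100 MByte/sec to RAID and silo

  # High luminosity: 200 kHz L1, 5 kByte events, L3 rejection factor of 10
  print(rates(200_000, 5, 10))   # (1000.0, 100.0) -> 1 GByte/sec off detector, 100 MByte/sec to disk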


Although not a requirement or design goal, a feature useful during installation and testing would be the ability to support multiple simultaneous runs, allowing detector groups to check out their hardware in parallel.


Online/Controls Design Goals

  • Experiment controls
  • Unified alarm system
  • Controls system must be compatible with EPICS, which will be used by the accelerator (a minimal channel-access sketch follows this list)
  • Controls system must accommodate the Allen-Bradley PLC used to control the solenoid magnet
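
The channel-access sketch referenced above: a minimal example of reading and monitoring a single process variable with the pyepics Python bindings (the PV name is a placeholder, not an actual Hall D channel):

  # Minimal EPICS Channel Access sketch using the pyepics bindings.
  # The PV name below is a placeholder, not an actual Hall D channel.
  from epics import PV

  def on_change(pvname=None, value=None, **kw):
      """Callback invoked by pyepics whenever the monitored PV updates."""
      print(f"{pvname} -> {value}")

  pv = PV("HALLD:EXAMPLE:HV:VOLTAGE")   # placeholder PV name
  print("current value:", pv.get())     # one-shot read over Channel Access
  pv.add_callback(on_change)            # monitor: on_change fires on every update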