OWG meeting 29-Aug-2007

This is a special one-topic meeting whose goal is to understand the event size in light of recent simulation results on occupancies, especially in the FDC. Note that the 2002 Hall D design document lists a 4 kB average event size, although the FDC alone may account for 4 kB (or even more). The average event size affects the design of the DAQ system in a number of ways, so it needs to be understood sooner rather than later. Fortunately the DAQ architecture is highly scalable, so accommodating higher event sizes and event rates is not a big problem.

The goal of this meeting is to:

  1. Understand what we know and do not know about event size, occupancies, etc.
  2. Devise a plan to learn what we need to learn



1:00pm Wed 29-Aug-2007, CEBAF Center F224

To phone in, send me your telephone number and I'll call you.


Next Meeting

  • TBD (some of us will be out of town the first week of Sep, and OECM review is second week of Sep)

New Action Items from this Meeting

An ad-hoc committee has been formed to lead the effort to understand the event size. Members are Elliott W, Dave L, and Fernando B.


In attendance: Simon T, Fernando B, Chris C, Dave A, Elke A, Eugene C, Elton S, David L, Graham H, Elliott W, Gerard V (by phone).

A summary and follow-up strategy follows a brief description and notes from each presentation (see the links above to the full presentations).

Fernando gave an overview of the latest electronics channel counts, some of which need to be checked by the detector experts.

  • maximum card counts are 18 for VME64X and 16 for VXS
  • channel count table not optimized yet for detector locations, cable routing, etc.

David L presented the latest Monte Carlo results on event size. These studies are still in progress. Results presented were from work done by Dave, Matt S. (from this past spring, based on 100M gammas), and Richard J.

The approach was basically to count the number of hits in each detector system originating from 3 different sources:

  1. Electromagnetic backgrounds
  2. The hadronic event that caused the level-1 trigger
  3. Accidental hadronic background (not included)

Accidental hadronic background events were not yet included in the hit counts below.

For the EM background, results from Matt Shepherd's study last spring were used. He estimated backgrounds in the major detector systems (except the BCAL) for both the "original" and "new" FDC geometries. The "new" geometry is the latest, but is known to have less material overall than the final design will have. Average detector rates were eyeballed from the plots made for the two geometries. The results did include an estimate for the new CDC design, with inner layers extending in to less than R = 11 cm from the beamline.

For the L1 triggered event data, events generated by Pythia were used. The events were filtered so that only events with at least 2 charged particles were included, as a loose level-1 trigger cut.

  • FDC cathode rates are higher than anode rates
  • 8 MHz worst case per FDC plane
  • 4-5 FDC cathode hits per anode hit using fixed thresholds
  • for 9 GeV photons, the average number of hits per event per detector for a level-1 hadronic event with no EM background is:
    • 452 FDC cathodes
    • 54 FDC anodes
    • 70 CDC
    • 4 SC
    • 100 BCAL
    • 12 FCAL
    • 14 TOF
    giving approx. 700 hits per hadronic event
  • number of hits per event depends on time window.
  • at a photon rate of 10^8/s the number of hits per event is:
    • 544 FDC cathode and 61 anodes at 400 ns
    • 26 TOF at 100 ns
    • 20 FCAL at 100 ns
    • >100 BCAL
    for a total of about 750 hits/event accounting for EM but not hadronic backgrounds
  • byte count depends on number of bytes per hit, a topic of much discussion!
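Since the byte count per hit is still an open question, here is a quick sketch of how the hit totals above translate into event size. The ~750 hits/event is from the Monte Carlo numbers; the bytes-per-hit values are purely illustrative assumptions, not figures from the meeting.

```python
# Rough event-size arithmetic from the hit counts above.
# Bytes-per-hit values below are placeholders, not measured numbers.

def event_size_bytes(n_hits, bytes_per_hit):
    """Average event size for a given hit count and per-hit payload."""
    return n_hits * bytes_per_hit

n_hits = 750  # EM background + triggered hadronic event, at 10^8 photons/s

for bytes_per_hit in (4, 8, 12):  # candidate payloads, purely illustrative
    size_kb = event_size_bytes(n_hits, bytes_per_hit) / 1024
    print(f"{bytes_per_hit:2d} B/hit -> {size_kb:.1f} kB/event")
```

Under these assumptions, payloads of roughly 8-12 bytes/hit would land in the 5-10 kB range discussed at the meeting.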

Things that still need to be done (or redone)

  • Burst Rate
  • Hadronic background
  • Better level1 trigger simulation
  • Simulate hadronic events for lower incident photon energies
  • Complete, current geometry (Version 4)

Dave Abbott discussed transfer rates across VME backplanes:

  • theoretical max backplane rate is 320 MB/sec
  • all boards we plan to use will support all VME transfer modes
  • best result so far for a commercial board (SIS3320) is 132 MB/sec (using slave-terminated DMA, I think...ejw)
  • Dave says to expect the practical maximum for us to be 200 MB/sec using CBLT w/token passing
  • if the interrupt rate is 1 kHz, then about 200 events/block are needed to sustain a 200 kHz trigger
  • a padding scheme is needed to accommodate VME transfers, which require data to fall on 4-byte boundaries
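The block-readout arithmetic above can be checked with a short sketch; the helper names and the 13-byte payload are illustrative, with only the trigger, interrupt, and boundary numbers taken from the notes.

```python
# Back-of-the-envelope check of the block-readout numbers above.

def events_per_block(trigger_rate_hz, interrupt_rate_hz):
    """Events grouped per block so that one interrupt services a block."""
    return trigger_rate_hz // interrupt_rate_hz

def pad_to_boundary(n_bytes, boundary=4):
    """Round a payload up to the next boundary, as VME DMA requires."""
    return (n_bytes + boundary - 1) // boundary * boundary

# a 200 kHz trigger with 1 kHz interrupts needs 200 events per block
print(events_per_block(200_000, 1_000))  # 200

# a 13-byte payload is padded out to 16 bytes
print(pad_to_boundary(13))  # 16
```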

Simon discussed time extraction from ADC data:

  • FDC cathodes are simple, leading edge algorithm does ok at about 5 ns precision
  • CDC needs work as 2 ns precision may be needed
  • concerns include whether to use shaping or not, effect of multiple clusters or peaks in signal, etc.
  • best results so far 3.7 ns using exponential fit, which still has problems
  • seem to need 5 points to get the time
  • 3.7 ns may decrease to 2 ns after shaper is fixed

Gerard discussed the FADC 125:

  • design uses 36 + 36 channels and one FPGA
  • 1 MB output FIFO for 1 kHz interrupt rate
  • FPGA can do pretty sophisticated data analysis
  • peak event size 100 kB
  • what size can we tolerate for empty events?
  • can the FPGA fit cathode clusters? account for calibrations?


Summary

  • it seems likely that the event size may be as much as two times larger than previously thought (10 kB vs. 5 kB)
  • the scalability of the DAQ design means that no change in architecture is required; at most a few more crates and CPUs will be needed
  • might need a more expensive RAID system than the one specified in the cost sheets, depending on the state of the art at the time of purchase (late!)
  • need to tune Pythia for our energy range
  • need to redo some studies with latest geometry
  • need results on hit multiplicities vs photon energy
  • need number of hits/event vs photon energy
  • need to account for all expected backgrounds (EM, hadronic)
  • hadronic events at low photon energy still need work
  • VME and network bandwidths are understood and seem adequate
  • need to understand data generated by modules when they have no hits
  • need to understand what data reduction can be done in the FPGAs and front-end CPUs
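As a rough sanity check on the bandwidth point above, here is a crude per-crate rate estimate. The crate count and event size used are guesses for illustration only, not numbers from the channel-count table.

```python
# Per-crate VME bandwidth under stated assumptions: event fragments
# spread evenly across crates, 200 MB/s practical backplane limit.

def per_crate_mb_per_s(event_size_kb, trigger_rate_hz, n_crates):
    """Average readout rate per crate (MB/s)."""
    total_bytes_per_s = event_size_kb * 1024 * trigger_rate_hz
    return total_bytes_per_s / n_crates / 1e6

# 10 kB events at a 200 kHz trigger shared over ~50 crates (crate count is a guess)
rate = per_crate_mb_per_s(10, 200_000, 50)
print(f"~{rate:.0f} MB/s per crate")  # comfortably below 200 MB/s
```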


Follow-up Strategy

  • goal is to understand the event size and data rate off the detector
  • much work needs to be done
  • need to form a small committee to keep the effort focused and moving
  • committee needs representation from online, offline, and electronics
  • committee members don't necessarily do the work, but make sure someone does
  • Elliott W, David L, and Fernando B have agreed to be on this committee
  • First meeting planned for week of 24-Oct-2007