OWG meeting 29-Aug-2007


Overview

This is a special one-topic meeting whose goal is to understand the event size in light of the recent simulation results on occupancies, especially in the FDC. Note that the 2002 Hall D design document lists a 4 kB average event size, although the FDC alone may take 4 kB (or even more). The average event size affects the design of the DAQ system in a number of ways, so it needs to be understood sooner rather than later. Fortunately the DAQ architecture is highly scalable, so accommodating larger event sizes and higher event rates is not a big problem.

The goal of this meeting is to:

  1. Understand what we know and do not know about event size, occupancies, etc.
  2. Devise a plan to learn what we need to learn


Agenda


Time/Location

1:00pm Wed 29-Aug-2007, CEBAF Center F224

To phone in, send me your telephone number and I'll call you.


Announcements

Next Meeting

  • TBD (some of us will be out of town the first week of Sep, and the OECM review is the second week of Sep)


New Action Items from this Meeting

An ad-hoc committee has been formed to lead the effort to understand the event size. Members are Elliott W, David L, and Fernando B.


Minutes

In attendance: Simon T, Fernando B, Dave A, Elke A, Eugene C, Elton S, David L, Graham H, Elliott W, Gerard V (by phone).

Brief notes on each presentation follow, ending with a summary and a follow-up strategy (see the links above for the full presentations).


Fernando gave an overview of the latest electronics channel counts, some of which need to be checked by the detector experts.

  • maximum card counts are 18 for VME64X and 16 for VXS
  • channel count table not optimized yet for detector locations, cable routing, etc.


(Dave: please check this section, my notes are a bit sketchy.)

David L presented the latest Monte Carlo results on event size. These studies are still in progress. The results presented were from work done by Dave, Matt (from this past spring, based on 100M gammas), and others. Some results were based on older FDC designs and must be redone with the latest geometry. Some results included only electromagnetic processes, not photon-induced hadronic processes. Some notes from his presentation:

  • results below account for backgrounds from xxx and yyy processes
  • FDC cathode rates are higher than anode rates
  • 8 MHz worst case per FDC plane
  • 4-5 FDC cathode hits per anode hit using fixed thresholds
  • main contribution to TOF backgrounds is from the 2 micron copper strips in the FDC
  • CDC results are from both MC and from calculation
  • for 9 GeV photons the average numbers of hits per event per detector (we need worst case, too) are: 452 FDC cathode, 54 FDC anode, 70 CDC, 4 SC, 100 BCAL, 12 FCAL, and 14 TOF, giving approx. 700 hits per hadronic event
  • number of hits per event depends on time window.
  • at a photon rate of 10^8/s the numbers of hits per event are 544 FDC cathode and 61 FDC anode at 400 ns, 26 TOF at 100 ns, 20 FCAL at 100 ns, and >100 BCAL, for a total of about 750 hits/event, accounting for EM but not hadronic backgrounds
  • the byte count depends on the number of bytes per hit, a topic of much discussion! (a rough sizing sketch follows this list)
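
To make the bytes-per-hit question concrete, here is a rough sizing sketch (Python) using the hit counts quoted above. The bytes-per-hit scenarios are placeholder assumptions for illustration only; the actual bytes/hit was explicitly left open at the meeting.

 # Rough event-size estimate from the hit counts quoted above.
 # The bytes-per-hit scenarios are placeholder assumptions only;
 # bytes/hit was explicitly left open at the meeting.
 hits_per_event = {          # ~9 GeV photon, hadronic event
     "FDC cathodes": 452,
     "FDC anodes":   54,
     "CDC":          70,
     "SC":           4,
     "BCAL":         100,
     "FCAL":         12,
     "TOF":          14,
 }
 
 def event_size_bytes(bytes_per_hit):
     """Total event size for a uniform bytes/hit assumption."""
     return sum(hits_per_event.values()) * bytes_per_hit
 
 for bph in (4, 8, 16):      # assumed bytes/hit scenarios
     print(f"{bph:2d} bytes/hit -> {event_size_bytes(bph)/1024:5.1f} kB/event")

With roughly 700 hits/event, 8 bytes/hit gives about 5.5 kB and 16 bytes/hit about 11 kB, which brackets the 5 kB vs 10 kB range discussed in the summary below.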


Dave Abbott discussed transfer rates across VME backplanes:

  • theoretical max backplane rate is 320 MB/sec
  • all boards we plan to use will support all VME transfer modes
  • best result so far for a commercial board (SIS3320) is 132 MB/sec (using slave-terminated DMA, I think...ejw)
  • Dave says to expect the practical maximum for us to be 200 MB/sec using CBLT w/token passing
  • if the interrupt rate is 1 kHz then we need about 200 events/block to sustain a 200 kHz trigger rate (see the sketch after this list)
  • a padding scheme is needed to accommodate VME transfers, which require data to fall on 4-byte boundaries
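
The two bookkeeping calculations in this list, as a minimal sketch (Python); the function names are illustrative, not from the actual DAQ code.

 # Events-per-block needed to sustain a trigger rate at a given
 # interrupt rate, and padding a payload to a 4-byte boundary.
 # Function names are illustrative, not from the DAQ code.
 def events_per_block(trigger_rate_hz, interrupt_rate_hz):
     """Events grouped per block so one interrupt reads one block."""
     return trigger_rate_hz / interrupt_rate_hz
 
 def pad_to_4_bytes(nbytes):
     """Round a payload size up to the next 4-byte boundary."""
     return (nbytes + 3) & ~3
 
 print(events_per_block(200_000, 1_000))   # 200.0 events/block
 print(pad_to_4_bytes(13))                 # 16

At a 200 kHz trigger rate and 1 kHz interrupts this gives the 200 events/block quoted above.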


Simon discussed time extraction from ADC data:

  • FDC cathodes are simple; a leading-edge algorithm does OK at about 5 ns precision (a minimal sketch follows this list)
  • CDC needs work as 2 ns precision may be needed
  • concerns include whether to use shaping or not, effect of multiple clusters or peaks in signal, etc.
  • best result so far is 3.7 ns using an exponential fit, which still has problems
  • about 5 sample points seem to be needed to extract the time
  • the 3.7 ns may decrease to 2 ns after the shaper is fixed
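
As a concrete illustration of the leading-edge approach, here is a minimal threshold-crossing sketch (Python) with linear interpolation between samples. The 8 ns spacing corresponds to a 125 MHz FADC; the threshold and example pulse are made-up values.

 # Minimal leading-edge time extraction: find the first sample at
 # or above threshold and interpolate the crossing time linearly.
 # Threshold and example pulse are made-up values.
 SAMPLE_NS = 8.0   # sample spacing for a 125 MHz FADC
 
 def leading_edge_time(samples, threshold):
     """Return the threshold-crossing time in ns, or None."""
     for i in range(1, len(samples)):
         if samples[i - 1] < threshold <= samples[i]:
             frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
             return (i - 1 + frac) * SAMPLE_NS
     return None
 
 pulse = [100, 101, 99, 120, 400, 900, 1100, 1000, 700]   # ADC counts
 print(leading_edge_time(pulse, 250))   # ~27.7 ns

A real implementation would also subtract a pedestal and handle multiple clusters or peaks in the signal, which is exactly where the concerns above come in.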


Gerard discussed the FADC 125:

  • design uses 36 + 36 channels and one FPGA
  • 1 MB output FIFO for a 1 kHz interrupt rate (a sizing sanity check follows this list)
  • FPGA can do pretty sophisticated data analysis
  • peak event size 100 kB
  • what size can we tolerate for empty events?
  • can the FPGA fit cathode clusters? account for calibrations?
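
A small sanity check of the FIFO sizing (Python). The FIFO size and interrupt rate are the values quoted above and the 200 events/block comes from the VME discussion; the per-event module contribution is a placeholder assumption.

 # Sanity check of the FADC 125 output FIFO sizing. The per-event
 # module contribution is a placeholder assumption.
 FIFO_BYTES       = 1_000_000   # 1 MB output FIFO
 INTERRUPT_HZ     = 1_000       # 1 kHz -> one block per ms
 EVENTS_PER_BLOCK = 200         # from the VME discussion above
 EVENT_BYTES      = 500         # assumed bytes/event from one module
 
 block_bytes = EVENTS_PER_BLOCK * EVENT_BYTES
 blocks_buffered = FIFO_BYTES // block_bytes
 print(f"block size: {block_bytes / 1000:.0f} kB")
 print(f"FIFO buffers {blocks_buffered} blocks "
       f"(~{blocks_buffered * 1000 // INTERRUPT_HZ} ms of headroom)")

Under these assumed numbers a block is 100 kB and the FIFO can absorb about 10 ms of readout stalls before overflowing.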


Summary:

  • it seems likely that the event size may be as much as two times larger than previously thought (10 kB vs 5 kB)
  • the scalability of the DAQ design means no change in architecture is required; perhaps just a few more crates and CPUs will be needed
  • might need a more expensive RAID system than the one specified in the cost sheets, depending on the state of the art at the time of purchase (late in the project!)
  • need to tune Pythia for our energy range
  • need to redo some studies with latest geometry
  • need results on hit multiplicities vs photon energy
  • need number of hits/event vs photon energy
  • need to account for all expected backgrounds (EM, hadronic)
  • hadronic events at low photon energy still need work
  • VME and network bandwidths are understood and seem adequate
  • need to understand data generated by modules when they have no hits
  • need to understand what data reduction can be done in FPGAs and front-end CPUs


Follow-up:

  • goal is to understand the event size and data rate off the detector
  • much work needs to be done
  • need to form a small committee to keep the effort focused and moving
  • committee needs representation from online, offline, and electronics
  • committee members don't necessarily do the work, but make sure someone does
  • Elliott W, David L, and Fernando B have agreed to be on this committee
  • First meeting planned for week of 24-Oct-2007