OWG Meeting 28-Jan-2015


Latest revision as of 05:58, 1 April 2015

Location and Time

Room: CC F326-327

Time: 1:30pm-2:30pm


(if problems, call phone in conference room: 757-269-6460)

  1. To join via a Polycom room system, connect to bjn.vc and enter the meeting ID: 120390084.
  2. To join via a Web Browser, go to the page https://bluejeans.com/120390084.
  3. To join via phone, use one of the following numbers and the Conference ID: 120390084
    • US or Canada: +1 408 740 7256 or
    • US or Canada: +1 888 240 2560
  4. More information on connecting to BlueJeans is available.

Note: you'll be muted when first connected to the meeting; press *4 to unmute.


Agenda

  1. Announcements
  2. Major systems reports
    • Trigger
    • DAQ
    • Monitoring
    • Controls
  3. L3 trigger implementation (Justin/David)
    • hdops online monitoring page (data flow): https://halldweb.jlab.org/hdops/wiki/index.php/Online_Monitoring_Shift
  4. Selectable "Run Type"s (Paul M.)
    • offline mailing list post: https://mailman.jlab.org/pipermail/halld-offline/2015-January/001924.html
  5. E-log and Manuals DB
    • https://halldweb.jlab.org/elog-halld/Manuals/
  6. RAID disk use during active run periods
  7. Mantis Task Tracker (http://halldweb.jlab.org/mantisbt)

Previous Meeting


Attendees: David L.(chair), Simon T., Eric P., Alex B., Sergey F., Sean D., Adesh, Kei M., Carl T., Serguei P., Bryan M., Curtis M., William G., Vardan G., Alex S., Eugene C.

DAQ & Trigger

  • FADC125 firmware has been installed and is currently being tested
  • CODA v3.03 was installed this morning (by Dave A.)
    - Requires front-end DAQ hardware to have updated firmware
    - Alex S. has already updated modules for BCAL to latest and is using it in his testing
    - New firmware may not be completely compatible with CODA 3.02 so we'll have to test whether we can switch back easily without downgrading firmware
    - Sergey F. will start testing CODA 3.03 next week. If he runs into issues, we will schedule time with the CODA group present to help with ironing out the wrinkles with running the new software in a large scale deployment
  • ROC buffer size: Sergey F. asked about the possibility of increasing the buffer size on the ROC to something larger than 1MB since it may be needed for fADC125 in block mode
    - Dave A. said it was possible, but it would require changes in other places downstream. He noted that this may not provide any benefit, though, since buffers larger than 1MB are likely already data-rate limited, so larger blocks won't gain much (if anything).
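Dave's point can be illustrated with a toy calculation (the bandwidth figure below is made up for illustration and is not a measured Hall-D number): once a ROC's output link is saturated, the sustained byte rate is fixed by the link, so buffers larger than 1MB change per-buffer latency but not throughput.

```python
# Toy illustration of why larger ROC buffers may not help once the output
# link is rate-limited. LINK_BW is a made-up placeholder, not a measured
# Hall-D number.
LINK_BW = 100e6  # bytes/s, hypothetical ROC output bandwidth

def sustained_rate(buffer_bytes: int, link_bw: float = LINK_BW) -> tuple:
    """Return (buffers drained per second, total bytes per second)."""
    buffers_per_s = link_bw / buffer_bytes
    return buffers_per_s, buffers_per_s * buffer_bytes

for size_mb in (1, 4, 16):
    n, byte_rate = sustained_rate(size_mb << 20)
    print(f"{size_mb:2d} MB buffers: {n:7.1f} buf/s, {byte_rate / 1e6:.0f} MB/s total")
```

The total byte rate is identical in every row; only the number of buffers per second (and hence latency per buffer) changes.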
  • Serguei P. is in the process of updating jumpers on SD cards to remove a few nanoseconds of digital delay that is no longer needed
  • Alex S. is still studying triggers that can be used in coincidence with the FCAL to cut down the rate from FCAL noise triggers
    - Dave mentioned that the Charged Pion Polarizability experiment was expecting to use the MIP FCAL trigger but will probably need to include the TOF given the noise issue.

L3 Trigger

  • We briefly discussed what additional work is needed in order to implement the L3 trigger for testing (not necessarily rejection)
  • David showed the existing plan for setting up ET systems with farm nodes to implement the L3 system, along with pre- and post-L3 monitoring nodes
  • Full L3 needs the following work:
    - Complete the testing and implementation of farm manager component (already >90% done by Vardan, but not yet used by Hall-D)
    - Implement multi-ET system in CODA configuration that includes L3 nodes as part of data stream
    - Reconstitution of events passing L3 for writing to disk. Initial testing in single-block mode may just write the original EVIO buffer, as was implemented in the 2013 Online Data Challenge
    - Implementation of L3 algorithm

Run Types

  • We looked at the Offline mailing list thread Paul M. started, asking about flags that shift workers could select when starting/ending a run to help with filtering runs of interest in offline analysis
  • There was a consensus that these selections should probably be handled in the rcm GUI when starting the DAQ, rather than by modifying the CODA rcgui to implement new options.
  • Sergey has an idea of what is wanted and needed. He will continue to coordinate this with Paul and may include it in his report at the collaboration meeting

E-log Manuals

  • Serguei P. brought up that the Midas e-log has been used for a few years now to maintain a local database of hardware manuals.
  • The manuals are linked from the existing JInventory database, so the Midas e-log must be maintained for those links to remain valid
  • David volunteered to take responsibility for maintaining this resource now that Yi and Elliott are both gone.

RAID disk usage

  • David brought up that a collaborator recently mentioned in an e-mail thread that they had set up a system to automatically copy files from the Hall-D RAID disk offsite to their home institution during the 2014 commissioning run
  • The bandwidth used for this was small and likely did not impact operations in any way.
  • After some discussion, there was a consensus that a better system would be to have the offline monitoring machinery automatically copy these files to the sci-comp volatile disk on the CUE and have offsite institutions pull copies from there.
  • We agreed to ask collaborators wishing to set up such systems that reach into the counting house from outside to present their plans to the online group for review prior to implementation.
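The agreed two-hop flow (monitoring machinery stages files onto the volatile disk; offsite sites pull from there rather than reaching into the counting house) can be sketched in a few lines. All paths below are hypothetical placeholders, not the actual Hall-D RAID or sci-comp volatile locations, and the demo file stands in for real raw data.

```python
# Sketch of the proposed two-hop copy. Paths are hypothetical placeholders.
import shutil
from pathlib import Path

RAID_DIR = Path("/tmp/demo_raid")          # stands in for the Hall-D RAID disk
VOLATILE_DIR = Path("/tmp/demo_volatile")  # stands in for the sci-comp volatile disk

def stage_new_files(src: Path, dst: Path, pattern: str = "*.evio") -> list:
    """Copy files matching pattern from src to dst if not already staged."""
    dst.mkdir(parents=True, exist_ok=True)
    staged = []
    for f in sorted(src.glob(pattern)):
        target = dst / f.name
        if not target.exists():
            shutil.copy2(f, target)   # preserve timestamps along with contents
            staged.append(f.name)
    return staged

RAID_DIR.mkdir(parents=True, exist_ok=True)
(RAID_DIR / "run001.evio").touch()    # demo file so the sketch has work to do
staged = stage_new_files(RAID_DIR, VOLATILE_DIR)
print("staged:", staged)
```

An offsite institution would then pull from the staging area (e.g. with rsync against the volatile disk) instead of connecting to counting-house machines directly.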


Mantis Task Tracker

  • A plea was made by David encouraging folks to use the Mantis system for recording issues, feature requests, etc. that should eventually be addressed for the online. It is particularly useful for recording things that are not needed immediately but should be done at some point later.