Run Coordinator report: Fall 2021 w17

The RC week started with production running on the C target. We took production data on C until Tuesday, December 14, when the target was changed from C to LD2. Special tasks during the week included a Compton run on C, low-luminosity runs with the C target, a PS background check, and the survey and fiducialization of the carbon target. The week had several issues involving the accelerator and the hall; the main events are summarized below:

Wednesday December 8

    • The Hall D beam stopper caused almost four hours of downtime [1]. The most likely cause is an air pressure issue that triggered a "beam stopper stuck" fault within the PLC.
    • Solenoid dump due to lost communication with both VCL flows for over 1 minute [2][3]. Nick replaced the module, and Tim and Beni supervised ramping the magnet back up. This incident caused about 9 hours of downtime.

Thursday December 9

    • Downtime incident: the RF separators tripped off and, simultaneously, an MBD5C10V mismatch occurred from 7:30 to 9:30 am [4][5].

Friday December 10

    • Downtime incident: Hall D and the Hall D Tagger B chain dropped to restricted access (11:00 - 13:00). SSG investigated the cause of the drop. According to the PLC diagnostic buffer, the Hall D Tagger Sys B network switch and downstream hardware lost communication with MCC for 8 seconds. They tested the fiber connection between the Tagger Service Building and MCC with a fiber tester; both fibers tested OK at 8 dB loss. They power-cycled both network switches at the MCC and the Tagger Service Building and re-seated the fiber interface cards at both switches. [6][7]

    • Power glitches in the accelerator [8][9] starting at about 6:20 am. The sweeping magnet tripped at the same time, and Hovanes helped us recover it [10].

Sunday December 12

    • LV4 sector issue [11]: The shift workers found an empty block corresponding to the LV4 sector. After some investigation we found that the LV was off for that sector, but no alarm had been activated, so they turned it back on manually. Mark Dalton checked briefly after the channels were turned back on and found that the scalers and the BCAL RootSpy occupancy were within expectations. He called the counting house and agreed with the shift takers that we were good to go. However, some questions remained. Hovanes looked into the problem and found that an LV channel (the negative one) tripped at 18:24 and within a few seconds also turned off the coupled positive channel [https://logbooks.jlab.org/entry/3961134]. The LV channels were then turned back on individually rather than with the button that turns them on together (as one can see from the GUI in the log entry and in MYA), which is not the right procedure for the coupled BCAL LV channels. In such cases the LV channels can get turned off as if an operator had deliberately turned them off, and no alarm is generated. The button to turn these coupled channels back on was pushed at around 23:19.

During the owl and early morning shifts, the LV4 alarms and trips continued [12].

Monday December 13

    • Several beam downtimes: the RF separators tripped off (~5 hours) [13], a macropulse chassis fault [14], and multiple beam trips.

Tuesday December 14

During the access to the hall, several items were addressed:

    • BCAL LV module replaced (Nick) [15]
    • Survey and Alignment performed an "as-found" survey of the carbon ladder target, measuring the z position of the upstream and downstream faces of all 8 discs.
    • The target was changed from C to LD2 [16]
    • Parasitic setups in the hall were upgraded [17][18]
    • The solenoid started ramping up at about 18:00 [19]