== The Online Monitoring System ==
 
[[Image:20160915_MonitoringArchitecture.jpg| thumb | 400px | Fig. 1. Online Monitoring System Architecture.]]

[[Image:20170126_rootspy.png| thumb | 400px | Fig. 2. ''RootSpy'' screen. Start this from the hdops account by typing ''start_rootspy'' in a terminal.]]
The Online Monitoring System is a software system that couples with the Data Acquisition System to monitor the quality of the data as it is read in. The system is responsible for ensuring that the detector systems are producing data of sufficient quality that a successful offline analysis capable of producing a physics result is likely. The system itself does not contain alarms or automated checks on the data. Rather, it supplies histograms and relies on shift takers to periodically inspect them to ensure all detectors are functioning properly.
  
Events are transported across the network via the ET (Event Transfer) system, developed and used as part of the DAQ architecture. The configuration of the processes and nodes is shown in Fig. 1.
== Routine Operation ==
  
=== Viewing Monitoring Histograms ===

Live histograms may be viewed using the ''RootSpy'' program. Start it from the ''hdops'' account on any gluon node via the ''start_rootspy'' wrapper script. It will communicate with all histogram producer programs on the network and start cycling through a subset of them for shift workers to monitor. Users can turn off the automatic cycling and select different histograms to display using the GUI itself. An example of the main RootSpy GUI window is shown in Fig. 2.
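A typical way to bring this up from a terminal is sketched below (assuming you are logged into a gluon node as ''hdops''; the comment text is illustrative):

<pre>
# From the hdops account on any gluon node:
start_rootspy    # launches the RootSpy GUI, which begins cycling through monitoring histograms
</pre>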
  
{|class="wikitable" | width=600px
! width=200px | Program
! Action
|-
| '''start_rootspy''' || Starts RootSpy GUI for viewing live monitoring histograms
|-
|}

'''If RootSpy histograms do not show''', follow the procedure documented [https://logbooks.jlab.org/entry/3514839 here].

'''Viewing Reference Plots''': Under normal operations, the pages displayed by RootSpy are generated by macros and may contain multiple histograms. Shift workers should monitor these by comparing them to a reference plot. To see the reference plot, press the "Show Reference Plot" button at the bottom of the RootSpy main window. This opens a separate window displaying a static image of the reference plot against which to compare (see Fig. 3). The window updates automatically as RootSpy rotates through its displays, so the reference plot window can (and should) be left open near the RootSpy window for shift workers to monitor. For information on updating a specific reference plot, see [[#Reference Plots|the section on Reference Plots]] below.
[[Image:20170126_rootspy_reference.png| thumb | 300px | Fig. 3. Example Reference Plot: This is an example of the window showing the current reference plot for the live plot being displayed in the main RootSpy GUI window.]]

'''Resetting Histograms''': The RootSpy GUI has a pair of buttons labeled ''Reset'' and ''Restore''. The first resets the local copies of all histograms displayed in all pads of the current canvas. This does ''not'' affect the histograms in the monitoring processes and therefore has no effect on the archive ROOT file. What it actually does is save an in-memory copy of the existing histogram(s) and subtract that copy from what it receives from the producers before displaying, as the run progresses. This allows one to periodically reset any display without stopping the program or disrupting the archive. ''Restore'' simply deletes the copies, returning the display to the full statistics.

'''''IMPORTANT:''''' Shift workers should either reset the histograms at the beginning of a new run or simply restart the RootSpy GUI when a new run starts. Otherwise the RootSpy GUI will retain copies of histograms from the previous run. This does not affect the archiving utility, which is a separate program automatically started and stopped by the DAQ system, but it may obscure developing issues from the shift workers.

'''E-log entries''': Shift workers should have RootSpy make an e-log entry once per run. This is done by pushing the "Make e-log Entry" button in the bottom right corner of the main RootSpy GUI window. Before pressing it, one should cycle through and examine all plots on all tabs. These entries are sent to the [https://logbooks.jlab.org/book/hdrun HDRUN e-log].

'''Grafana Time Histories''': The Grafana website used by the RootSpy online monitoring system can be accessed at [https://halldweb.jlab.org/grafana/dashboard/db/gluex-online-monitoring Grafana Online Monitoring].

=== Event Viewer ===
The single event viewer, ''hdview2'', can be used by shift workers to monitor individual events in the detector directly from the data stream. To start the event viewer and have it automatically connect to the live data stream, just type ''start_hdview2''. Figure 4 shows an example of ''hdview2''.
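A minimal terminal session is sketched below (again assuming the ''hdops'' account on a gluon node):

<pre>
# From the hdops account on any gluon node:
start_hdview2    # opens hdview2, automatically attached to the live data stream
</pre>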

[[Image:20170126_hdview2.png| thumb | 400px | Fig. 4. ''hdview2'' screen. Start this from the hdops account by typing ''start_hdview2'' in a terminal.]]

{|class="wikitable" | width=600px
! width=200px | Program
! Action
|-
| '''start_hdview2''' || Starts graphical event viewer with the correct parameters to connect to the current run
|-
|}
  
=== Starting and stopping the Monitoring System ===

The monitoring system should be started and stopped automatically by the DAQ system whenever a run is started or ended (see [[Data Acquisition Shift | Data Acquisition]] for details). Shift workers will usually only need to start the RootSpy interface [[#Viewing Monitoring Histograms|described in the previous section]].

Shift workers may also start or stop the monitoring system itself by hand if needed. This should be done from the ''hdops'' account by running either the ''start_monitoring'' or ''stop_monitoring'' script. One can also do it via buttons on the ''hdmongui.py'' program (see Fig. 5).

[[Image:20141021_hdmongui_py.png| thumb | 400px | Fig. 5. ''hdmongui.py'' screen. Start this from the hdops account by typing ''hdmongui.py'' in a terminal.]]
  
These scripts may be run from any gluon computer since they automatically launch the necessary programs on the appropriate nodes. If processes are already running on a node, new ones are not started, so it is safe to run ''start_monitoring'' multiple times. To check the status of the monitoring system, run the ''hdmongui.py'' program shown in Fig. 5. A summary is given in the following table:
  
 
{|class="wikitable" | width=600px
! width=200px | Program
! Action
|-
| '''start_monitoring''' || Starts all programs required for the online monitoring system
|-
| '''stop_monitoring''' || Stops all monitoring processes
|-
| '''hdmongui.py''' || Starts graphical interface for monitoring the Online Monitoring system
|-
|}
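For example, a by-hand restart might look like the sketch below (since ''start_monitoring'' skips processes that are already running, the explicit stop ensures a clean relaunch):

<pre>
# From the hdops account on any gluon node:
stop_monitoring      # stop all monitoring processes
start_monitoring     # relaunch them on the appropriate nodes
hdmongui.py          # open the status GUI to verify everything restarted
</pre>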
  
  
=== Reference Plots ===
Detector experts are ultimately responsible for ensuring that the currently installed reference plots are correct. The plots are stored in the directory pointed to by the ROOTSPY_REF_DIR environment variable, which is set in the /gluex/etc/hdonline.cshrc file (normally to ''/gluex/data/REFERENCE_PLOTS''). The full path to the currently displayed reference plot is always shown at the top of the ''RootSpy Reference Plot'' window, along with the modification date/time of the file.
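To quickly verify which directory and files are in use, one can inspect the environment from the ''hdops'' account (a sketch using standard shell commands; the path shown is the normal setting quoted above):

<pre>
# From the hdops account on a gluon node:
echo $ROOTSPY_REF_DIR       # normally /gluex/data/REFERENCE_PLOTS
ls -lt $ROOTSPY_REF_DIR     # list installed reference plots, newest first
</pre>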
  
The recommended way to update a reference plot is via the RootSpy main window itself. At the bottom of the window is a button labeled "Make this new Reference Plot". This captures the current canvas as an image and saves it to the appropriate directory with the appropriate name. It also first moves any existing reference plot to an archive directory, prefixed with the current date and time, providing a record of when each reference plot was retired.
  
=== Health of the gluon cluster ===
The resource usage of the gluon cluster, which includes not only the monitoring farm but also the DAQ (i.e. EMU) computers and the controls computers, is monitored using Ganglia. The Ganglia web page is served by the gluonweb computer and is only accessible from the counting house network. Here is the link:
  
{|class="wikitable" | width=600px
|-
! Ganglia website
|-
| [https://gluonweb/ganglia https://gluonweb/ganglia]
|-
|}
  
  
 
== Expert personnel ==
Expert details on the Online Monitoring system can be found [[Online Monitoring Expert|here]].

The individuals responsible for the Online Monitoring system are shown in the following table. Problems with normal operation of the Online Monitoring should be referred to those individuals, and any changes to its settings must be approved by them. Additional experts may be trained by the system owner and their name and date added to this table.
  
 
{| border=1
|+ Table: Expert personnel for the Online Monitoring system
! Name !! Extension !! Date of qualification
|-
| David Lawrence || 269-5567 || May 28, 2014
|}
