GlueX Software Meeting, April 28, 2021


GlueX Software Meeting
Wednesday, April 28, 2021
2:00 pm EDT
BlueJeans: 968 592 007

Agenda

  1. Announcements
    1. version set 4.37.1 and mcwrapper v2.5.2 (Thomas)
    2. New: HOWTO copy a file from the ifarm to home (Mark)
    3. To come: HOWTO use AmpTools on the JLab farm GPUs (Alex)
    4. work disk full again (Mark)
    5. RedHat-6-era builds on group disk slated for deletion (Mark)
    6. Bug fix release of halld_recon: restore the REST version number (Mark)
  2. Review of Minutes from the Last Software Meeting (all)
  3. Minutes from the Last HDGeant4 Meeting (all)
  4. Report from the April 20th SciComp Meeting (Mark)
  5. [Illustrative Slides goes here] (Naomi)
  6. ROOTWriter_and_DSelectorUpdates2021 (Jon)
  7. Review of recent issues and pull requests:
    1. halld_recon
    2. halld_sim
    3. CCDB
    4. RCDB
    5. MCwrapper
  8. Review of recent discussion on the GlueX Software Help List (all)
  9. Action Item Review (all)

Minutes

Present: Alexander Austregesilo, Edmundo Barriga, Thomas Britton, Sean Dobbs, Mark Ito (chair), Igal Jaegle, Naomi Jarvis, Simon Taylor, Nilanga Wickramaarachchi, Jon Zarling, Beni Zihlmann

There is a recording of this meeting. Log into the BlueJeans site first to gain access (use your JLab credentials).

Announcements

  1. version set 4.37.1 and mcwrapper v2.5.2. Thomas described the changes in the latest version of MCwrapper. Luminosity is now used to normalize the number of events to produce for each run number requested (see the short sketch after this list).
  2. New: HOWTO copy a file from the ifarm to home. Mark pointed us to the new HOWTO. Sean told us that one could do the same thing from ftp.jlab.org without having to set up an ssh tunnel. Mark will make the appropriate adjustments to the documentation.
  3. To come: HOWTO use AmpTools on the JLab farm GPUs. Alex described his HOWTO (still under construction).
  4. work disk full again. Mark described the current work disk crunch, including plots of recent usage history. More clean-up will be needed until the arrival of new work disk servers this summer.
  5. RedHat-6-era builds on group disk slated for deletion. Mark reminded us that the deletion of these builds has been carried out.
  6. Bug fix release of halld_recon: restore the REST version number. Mark reviewed the reason for the new version sets.
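
To illustrate the luminosity-based normalization mentioned in the first announcement, here is a minimal Python sketch. It is not MCwrapper's actual code; the function name, run numbers, and luminosity values are made up for the example. The point is only that each run's share of the requested events is proportional to its integrated luminosity.

  # Minimal sketch, not MCwrapper code: split a requested number of events
  # across runs in proportion to each run's integrated luminosity.

  def events_per_run(total_events, luminosity_by_run):
      """Return a dict mapping run number to the number of events to generate."""
      total_lumi = sum(luminosity_by_run.values())
      return {
          run: int(round(total_events * lumi / total_lumi))
          for run, lumi in luminosity_by_run.items()
      }

  if __name__ == "__main__":
      # hypothetical runs with integrated luminosities in arbitrary units
      lumi = {30274: 12.5, 30275: 3.1, 30276: 8.4}
      print(events_per_run(100000, lumi))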

Review of Minutes from the Last Software Meeting

We went over the minutes from the meeting on March 30th.

  • It turns out that there is no pull-request-triggered test for HDGeant4. Mark has volunteered to set one up à la the method Sean instituted for halld_recon and halld_sim.
  • Some significant progress has been made on releasing CCDB 2.0.
    • The unit tests for CCDB 1.0 have been broken for some time. Mark and Dmitry Romanov found and fixed a problem with the fetch of constants in the form map<string, string> having to do with cache access. This problem is likely present in the CCDB 2.0 branch as well.
    • Dmitry has started on reviving the MySQL interface for CCDB 2.0.
    • Dmitry has moved us to a new workflow for CCDB pull requests.
      • Developers will fork the JeffersonLab/ccdb repository to their personal accounts and work on branches created there as they see fit.
      • When a change is ready, they will submit a pull request back to the JeffersonLab/ccdb repository for merging.
      • This workflow is common outside Hall D. For example, Hall C uses it, as do many groups outside the Lab. We may consider using it within Hall D as well. It makes it easier to put up safeguards against spurious errors from inadvertent or faulty commits and to support any code review mechanism we may want to have. It also solves the problem of the confusing proliferation of branches in the main repository that we have seen. We could move to it with no structural changes to the repositories themselves.
      • Sean pointed out that such a workflow might require minor changes to the automatic-pull-request-triggered tests.

Minutes from the Last HDGeant4 Meeting

We went over the minutes from the HDGeant4 meeting on April 6th. Sean noted that the overall focus of the HDGeant4 group is to compare Monte Carlo with data and, using the two simulation engines at our disposal, G3 and G4, to drill down to see where differences arise at a basic physical level in HDGeant4, and then to adjust the model to get agreement with data. This approach is preferred over one where empirical correction factors are imposed as an afterburner on the simulation.

Report from the April 20th SciComp Meeting

Mark presented slides, the first two reproducing Bryan Hess's agenda for the meeting and the third summarizing some of the discussion. Please see his slides for the details.

Sean asked if we could prioritize recovery of certain files over others. Mark will ask.

Handling of Recon Launch Output from Off-site

Alex raised the issue of disk use when bringing results of reconstruction launches, performed off-site, back to JLab. All data land on volatile, and after reprocessing, get written to cache and from there to tape. He is worried about this procedure for two reasons:

  1. Data on volatile is subject to deletion (oldest files get deleted first) and we do not want to lose launch output to the disk cleaner.
  2. The array of problems we have always seen with Lustre disks. Both volatile and cache are Lustre systems.
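
As a rough illustration of the first worry, the sketch below (not an official tool; the directory path is a placeholder) lists the files in a volatile directory from oldest to newest, since the disk cleaner removes the oldest files first.

  # Rough sketch: report files in a directory from oldest to newest
  # modification time; the oldest are the first candidates for deletion.

  import os
  import time

  def files_oldest_first(directory):
      """Return (age_in_days, filename) pairs, oldest first."""
      now = time.time()
      ages = []
      for name in os.listdir(directory):
          path = os.path.join(directory, name)
          if os.path.isfile(path):
              ages.append(((now - os.path.getmtime(path)) / 86400.0, name))
      return sorted(ages, reverse=True)

  if __name__ == "__main__":
      for age, name in files_oldest_first("/volatile/halld/some_launch"):  # placeholder path
          print(f"{age:8.1f} days  {name}")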

Mark showed a plot indicating that the amount of data we have on volatile has been well under the deletion level for months now. His claim was that premature deletion from volatile has not been a problem for quite a while. Alex did not think that the graph was accurate; it showed too little variation in usage level given that Alex knows there has been significant activity on the disk, an argument that Mark found convincing. Mark will have to check the source of his data. That aside, disk usage in this context should be reviewed.

Consolidation of Skim Files on to Fewer Tapes

Sean has noticed that at times reprocessing skimmed data can take a long time due to the retrieval of files from tape. He suspects that this is because the files are scattered across many tapes, so a large number of tape mounts and file skips are needed to get all of the data. He proposed a project where, for certain skims, we rewrite the data onto a smaller number of tapes.

Mark had some comments:

  • We should only start such a project on skims for which there is some reasonable expectation that retrieval will be done repeatedly in the future. The consolidation step itself involves reading and writing all of the files of interest and so reading those files has to happen at least a couple of times after consolidation before the exercise shows a net gain.
  • The way we write data to tape now, putting skim files on the write-through cache over several weeks, guarantees that the files will be scattered across different tapes. Rather than relying on the write-through cache, we would do better to buffer data on disk until a significant fraction of one tape's worth has been accumulated and then trigger the write to tape manually (a rough sketch of this idea appears at the end of this section).
  • It is possible to set up tape "volume sets" (sets of specific physical tapes) in advance in the tape library and then direct selected data types to specific volume sets. The tapes in a volume set will then be dense in the data types so directed. This is already done for raw data, and there is no structural impediment to doing it for other types of data. This approach has the advantage that there is no need to develop software to make it happen.

Something does have to be done on this front. Sean and Mark will discuss the issue further.
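
A rough Python sketch of the buffer-then-flush idea from the second bullet above. The capacity and threshold numbers are made up and the staging path is a placeholder; the point is just to accumulate skim files until most of one tape's worth is staged and only then trigger the write to tape.

  # Illustrative only: decide when enough data has been staged on disk to
  # fill most of one tape, so the files end up close together on tape.

  import os

  TAPE_CAPACITY_BYTES = 12 * 10**12   # hypothetical tape capacity
  FLUSH_FRACTION = 0.9                # flush once 90% of a tape is staged

  def staged_bytes(staging_dir):
      """Total size of all files under the staging directory."""
      return sum(
          os.path.getsize(os.path.join(root, name))
          for root, _, files in os.walk(staging_dir)
          for name in files
      )

  def ready_to_flush(staging_dir):
      return staged_bytes(staging_dir) >= FLUSH_FRACTION * TAPE_CAPACITY_BYTES

  if __name__ == "__main__":
      if ready_to_flush("/path/to/skim_staging"):   # placeholder path
          print("Enough data staged: trigger the write to tape.")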

ROOTWriter and DSelector Updates

Jon presented a list of ideas and improvements for our data analysis software. See his wiki page for the complete list.

The items and subsequent discussion fell into two broad classes:

  • How we use the ROOT toolkit: Are there more efficient practices? Are there features we don't exploit but should?
  • How we analyze the data: Are there new features in the Analysis Library that we should develop? Should the contents of the REST format be expanded? Are there things we do in Analysis that should be done in reconstruction or vice-versa?

One thing that came up was our use of TLorentzVector. Jon has seen others use a smaller (member-data-wise) class. Alex pointed out that the current ROOT documentation has marked this class as deprecated. Yet our use of TLorentzVector is ubiquitous. Several expressed interest in looking into this more closely.
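
As a concrete point of comparison, here is a short PyROOT sketch, assuming a working ROOT installation with Python bindings. It puts the legacy TLorentzVector next to one of the ROOT::Math::LorentzVector classes (PxPyPzEVector) that the ROOT documentation suggests as a replacement; the kinematic values are arbitrary.

  # Sketch comparing the legacy TLorentzVector with a ROOT::Math 4-vector.
  # Requires ROOT with PyROOT; the numbers are arbitrary.

  import ROOT

  px, py, pz, E = 0.3, -0.1, 2.5, 2.6

  legacy = ROOT.TLorentzVector(px, py, pz, E)
  modern = ROOT.Math.PxPyPzEVector(px, py, pz, E)

  # Both classes provide the usual kinematic accessors.
  print(legacy.M(), legacy.Pt(), legacy.Eta())
  print(modern.M(), modern.Pt(), modern.Eta())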

Jon encouraged us to think about where we might want to expend effort. This will likely come up again at a future meeting.

Production of Missing Random Trigger Files

Sean reported that he and Peter Pauli are very close to filling in all of the gaps in the random trigger file coverage for Fall 2018. Peter may give a presentation on this work at a future meeting.

Action Item Review

  1. Set up pull-request-triggered tests for HDGeant4. (Mark)
  2. Modify the documentation to feature ftp.jlab.org. (Mark)
  3. Ask about prioritizing specific tapes to be recovered. (Mark)
  4. Review disk usage when repatriating recon launch data. (Alex, Mark)
  5. Check input data for volatile usage plot. (Mark)
  6. Make a plan for structuring tape writing for efficient file retrieval. (Sean, Mark)
  7. Look into how we use TLorentzVector (Alex, Simon, Jon)
  8. Think about Jon's list of improvements. (all)