Validation
Introduction
Validation answers three questions: is our code functional, what physics is the code producing, and is it performing the same way in two contexts? It operates primarily by producing standard numbers and histograms and, when needed, comparing the histograms between contexts. Some examples are:
- Every night on the Jenkins platform we run conversion electron generation, simulation, reconstruction, and standard histogramming. We save the histograms and compare them to the previous night's result. If the histograms have changed, alerts are sent by mail.
- If we gain access to a new platform, such as a supercomputer center, we can run simulation and reco on a known platform and the new platform, and compare results.
- If we release a new code version, we can run standard tests and histograms, and compare to previous releases.
- If we install a new version of a critical package like Geant4, we can run standard code and histograms, and compare the results from before and after the change.
- In a simulation or reco production run, we can produce a standard set of histograms. This can serve as a check on the results, and can also be archived as a record of what was produced.
Below we list the automated procedures, the individual procedures we run occasionally, and the validation tools. The tools include code to make a standard set of validation histograms, and code to compare histograms.
Automatic procedures
The automatic procedures run on the Jenkins build platform; more information is available on those pages.
- Every time code is committed to our git repository, a procedure on the Jenkins platform is triggered to check that the code still builds and runs simple executables. If anything fails, alerts are sent by mail.
- Every night on the Jenkins platform we run conversion electron generation, simulation, reconstruction, and standard histogramming. We save the histograms and compare them to the previous night's result. If the histograms have changed, alerts are sent by mail.
- Every night on the Jenkins platform, we run the simple overlaps check and several other standard executables.
- Every night we check that the code builds with an advanced Geant4 version.
- When a new tagged release is built, we run conversion electrons, make standard histograms and archive the results on cvmfs in the release directory.
Individual Procedures
Validation module
The module can be run by itself on a file:
mu2e -c Validation/fcl/val.fcl -s inputfile
This will produce validation.root (or whatever you specify with -T). The file will contain a Validation directory, and inside that one directory for each art product the module could find and analyze, named by the product name. If there are multiple instances of a product, such as StepPointMCs, then each instance gets its own directory and set of histograms (a quick way to browse this structure is shown after the product list below). The module can also be run as part of a path:
services : {
  # request standard geometry and conditions services
  # request TFileService
}

# setup module
physics : {
  analyzers : {
    Validation : {
      module_type      : Validation
      validation_level : 1
    }
  }
}
# put Validation in a path..
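In art, analyzer modules run in an end path; a minimal sketch of that last step (the path name e1 is an arbitrary choice, not something the module requires):

physics.e1        : [ Validation ]
physics.end_paths : [ e1 ]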
Using the module as part of your path will let you see all the products in the event, even ones that will be dropped on output. The validation level controls how in-depth the histogramming goes. So far, only level 1 is implemented; it is intended to be quick and robust, histogramming just a few variables from each product. When level 2 is implemented, it might histogram derived quantities, combine multiple products, or make cuts. The following products are analyzed:
- CaloCluster
- CaloCrystalHit
- CaloDigi
- CaloRecoDigi
- GenParticle
- SimParticle
- StepPointMC
- StrawHit
- TrackClusterMatch
- TrackSummary
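To browse the resulting directory structure interactively, you can open the output file in ROOT (validation.root from the example above; the product directories are named as in this list):

root -l
root [0] TFile* f = TFile::Open("validation.root")
root [1] f->cd("Validation")
root [2] gDirectory->ls()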
g4test03
This test produces conversion electrons, makes straw hits, and runs calorimeter reconstruction, then runs the Analyses/src/ReadBack_module.cc module, which reads these art products and makes sanity-check histograms and ntuples of basic straw hit and calorimeter quantities. The output is an art file data_03.root and a root file g4test_03.root:
mu2e -n 200 -c Mu2eG4/fcl/g4test_03.fcl
The histogram file is suitable to be used as a validation result for checks and comparisons.
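For example, if you saved the histogram file from an earlier build, the two can be compared with the valCompare tool described below (the old file name here is illustrative):

valCompare -s g4test_03_old.root g4test_03.root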
Conversion electrons
One executable can generate conversion electrons, simulate them, and reconstruct them. This does not include background frames. This sample is a standard test of tracking efficiency and momentum resolution. Generating 10,000 electrons takes about two hours:
mu2e -n 10000 -c Analyses/test/genReco.fcl
The output will be an art file genReco.art and a root file genReco.hist with histograms and the trkdiag ntuple.
You can make standard plots of the tracking efficiency and resolution by running this script, which opens genReco.hist and looks at the ntuple:
root -l
root [0] .x TrkDiag/test/ce.C
The outputs will be two pdf files: rcan_ce.pdf, with the resolution fits, and acan_ce.pdf, with the acceptance plots.
Overlaps
If the volumes we define to Geant4 overlap in a non-physical way, which is easy to do in a complex geometry, the simulation code may crash or may fail in subtle ways. After making changes to the geometry, we want to run this overlap check. The check selects random points on the surfaces of volumes and asks whether they are inside another volume. Since the points have only some chance of landing in a trouble spot, the more points checked, the better.
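This is the standard Geant4 surface check; for reference, the same test can be invoked directly on a placed volume in Geant4 code. A minimal sketch (the function name and point count are illustrative):

#include "G4PVPlacement.hh"

// Run Geant4's built-in check on one placed volume: it generates the
// given number of random points on the volume's surface and reports any
// that fall inside a sister volume or outside the mother volume.
// Returns true if an overlap was found.
bool checkOnePlacement(G4PVPlacement* physVol) {
  return physVol->CheckOverlaps(1000);  // 1000 surface points
}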
A quick check (runs in 2 hours) can be done with an interactive executable:
mu2e -c Mu2eG4/fcl/surfaceCheck.fcl >& surfaceCheck.log
To check the output, first verify that about 10K volumes were checked:
grep 'Checking overlaps for volume' surfaceCheck.log | grep -c OK
Then if the following command produces any lines of output, the check failed, and the text should indicate one of the volumes in question:
grep 'Checking overlaps for volume' surfaceCheck.log | grep -v OK
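These two greps can be combined into a small script that summarizes the result (a sketch, using the log file name from above):

#!/bin/bash
# count volumes whose overlap check came back OK (expect roughly 10K)
nok=$(grep 'Checking overlaps for volume' surfaceCheck.log | grep -c OK)
# count volumes flagged with overlaps; any nonzero count means the check failed
nbad=$(grep 'Checking overlaps for volume' surfaceCheck.log | grep -cv OK)
echo "volumes OK: ${nok}  overlaps found: ${nbad}"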
Stopped muons
Validation tools
When you set up any modern Offline release, the validation tools are in your path. There are two main tools. The first is a module that makes a standard set of histograms, explained in the section above. The second is an executable, valCompare, which compares two sets of histogram files.
valCompare
valCompare is an executable that uses the custom comparison classes to compare two files of histograms and report the results in various ways, including as a web page.
The executable is run with two histogram filespecs as arguments. The first histogram file is taken as the standard, and in plots it appears as the gray histogram. The second file is taken as the file to be tested and appears as red dots. Only histograms that are in the same directories and have the same names are compared; anything else, such as ntuples, is ignored.
Two criteria are used: a standard root KS test and a fraction test. When comparing histograms, underflow and overflow bins are included by default, but you can switch this off. The fraction test integrates through each histogram, computes at each step the difference between the two running sums, and saves the largest difference found; this maximum is divided by the total entries in the standard histogram and subtracted from one, so when the histograms are similar the result is near 1. When statistics are very small, the KS test fails but the fraction test is still useful. When statistics are very high and there is any tiny systematic variation (such as when comparing two data runs), the KS test will fail; in this case the fraction test tells you what you want to know. Very generally, if a comparison passes either of the two tests, it is "OK".
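As an illustration, the fraction test amounts to the following (a sketch in ROOT-style C++, assuming identically binned histograms; the function name is ours, not the actual TValHistH implementation):

#include "TH1.h"
#include <algorithm>
#include <cmath>

// Returns a value near 1 when the two histograms are similar.
// Underflow (bin 0) and overflow (bin nbins+1) are included, matching
// the default valCompare behavior.
double fractionTest(const TH1* standard, const TH1* test) {
  double sum1 = 0.0, sum2 = 0.0, maxDiff = 0.0;
  int nb = standard->GetNbinsX();
  for (int i = 0; i <= nb + 1; ++i) {
    sum1 += standard->GetBinContent(i);  // running integral, standard file
    sum2 += test->GetBinContent(i);      // running integral, test file
    maxDiff = std::max(maxDiff, std::abs(sum1 - sum2));
  }
  if (sum1 <= 0.0) return 0.0;  // empty standard histogram
  return 1.0 - maxDiff / sum1;  // normalized largest difference, inverted
}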
There are two comparison levels, loose and tight, and also two modes. The two most common cases are that the files are supposed to be identical (as in nightly validation) or statistically independent (as when comparing two simulation files run with different random seeds). If the files are supposed to be identical, the alarm levels for the tests are <0.999 (tight) and <0.99 (loose). If the files are independent, they are <0.01 (tight) and <0.001 (loose). The normalization of the two files can be scaled.
valCompare options:
valCompare -h   : lists all options
valCompare -s file1.root file2.root   : produces a summary of the histograms compared
valCompare -r file1.root file2.root   : produces a line for each histogram and how it compared
valCompare -w dir/result.html file1.root file2.root   : produces the named web page, with overlay plots for each histogram also in that dir
The core histogram comparison classes can also be used in root:
root [0] .L lib/libmu2e_Validation_root.so
root [1] TValCompare c
root [2] c.SetFile1("file1.root")
root [3] c.SetFile2("file2.root")
root [4] c.Analyze()
root [5] c.Summary()
root [6] c.Browse()
They can also be used in compiled code. Histograms are compared with TValHistH, TProfiles with TValHistP, and TEfficiencies with TValHistE:
#include "Validation/inc/root/TValHistH.h" TH1D* h1 = new TH1D... TH1D* h2 = new TH1D... TValHistH comph() comph.SetHist1(h1) comph.SetHist2(h2) comph.Analyze() comph.Summary() comph.Report() comph.GetKsProb() comph.GetFrProb() comph.Draw()