Running Art Tutorial

Tutorial Session Goal

In this Tutorial you will learn how to run the Mu2e 'art' framework executable (mu2e), both interactively and on the grid.

Session Prerequisites and Advance Preparation

Session Introduction

Art is a software framework for processing events with modular code and extensive run-time configurability. An art job is controlled by a script written in a dedicated configuration language called FHiCL (files with a .fcl suffix). Art uses ROOT I/O to store events.

This tutorial covers how to build and run several different kinds of art jobs, how to use the Mu2e job tools to divide large projects into many separate jobs, and how to run those jobs in parallel on Fermigrid or the OSG (Open Science Grid).

Exercises

Exercise 1: Running a simple module (Hello, Tutorial!) and basic FHiCL

In this exercise, we will run a simple module that will print a welcoming message.

  1. First set up to run Offline
  2. source /setupmu2e-art.sh
     source /Offline/v7_3_5/SLF6/prof/Offline/setup.sh
     cd $TUTORIAL_BASE/RunningArt
  3. The executable we use to run Offline is "mu2e." Use the --help option to display all the command line options
  4. mu2e --help
  5. FHiCL files (*.fcl) tell Offline what to do. We specify the fcl file we want to use every time we run Offline using the "-c" option. We will now run a simple job that prints a hello message using a premade fcl file.
  6. mu2e -c fcl/hello.fcl
     This will write "Hello, world" and the full event id for the first 3 events.
  7. We can now explore the hello.fcl file that configured this Offline job to see how it works.
  8. more fcl/hello.fcl
    1. In FHiCL, we make definitions using the syntax "variable : value". A group of definitions can be combined into a table by surrounding them with braces {}.
    2. As in C++, we can refer to FHiCL code in other files by using "#include" statements.
    3. After defining the process name, you will see three main tables: source, services, and physics.
    4. "source" configures the inputs to the job. If we are making new events from scratch, we use "EmptyEvent". If we are building on top of old files, we might use "RootInput". You can also see that this job is configured to run 3 events by default.
    5. "services" configures parts of art that are common to all modules, like the geometry, detector conditions, or the ability to print out to files.
    6. "physics" is where we configure the modules that do all the work. There are "producer" modules that create data, and "analyzer" modules that read data and might make things like analysis TTrees. The physics table has a few different sections: first we declare our producer and analyzer modules, then we define our "paths" (see below), and then we tell art which paths we want to run.
    7. Just adding a module to physics doesn't mean art will run it. It is like defining a function in C++ without calling it. To make the module run, we must tell art the list of modules and the order in which to run them. We do this by defining a variable called a path to be this list of module names. Here there are two paths, p1 (which is empty) and e1. We then tell art which paths to run using the definitions of "trigger_paths" and "end_paths". Producers (and filters) go in trigger paths; analyzers go in end paths.
  9. You can see more detail about FHiCL at https://mu2ewiki.fnal.gov/wiki/FclIntro or check out the art workbook and user guide, chapter 9 (https://art.fnal.gov/wp-content/uploads/2016/03/art-workbook-v0_91.pdf). A schematic skeleton with this layout is sketched just below.
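To make that layout concrete, here is a schematic skeleton with the same overall structure described above. It is a sketch, not the literal contents of fcl/hello.fcl: the process name, the analyzer label, the maxEvents setting, and the empty services table are assumptions for illustration.

  process_name : HelloTutorial           # the process name comes first (value assumed)

  source : {
    module_type : EmptyEvent             # make new, empty events
    maxEvents   : 3                      # assumed mechanism for the 3-event default
  }

  services : {
    # services common to all modules (message logging, geometry, conditions, ...)
  }

  physics : {
    analyzers : {
      hello : {                          # assumed module label
        module_type : HelloWorld         # the module that prints the greeting
      }
    }

    p1 : [ ]                             # trigger path for producers/filters (empty here)
    e1 : [ hello ]                       # end path for analyzers

    trigger_paths : [ p1 ]
    end_paths     : [ e1 ]
  }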

Exercise 2: Module configuration with FHiCL

We will now see how to modify FHiCL to run different modules and even configure those modules at runtime

  1. We have a new fcl file, hello2.fcl; try running that.
  2. mu2e -c fcl/hello2.fcl
     We can see we are running a different module, and it prints some magic number that we should be able to change. Looking at hello2.fcl, you should see the new module is called HelloWorld2.
  3. We can look at the source code for HelloWorld2 to see how to change the magic number.
  4. gedit $MU2E_BASE_RELEASE/HelloWorld/src/HelloWorld2_module.cc
     In the constructor (lines 29 to 37), you can see this module takes a fhicl::ParameterSet object, and the magic number is initialized with the code pset.get<int>("magicNumber",-1). The fhicl::ParameterSet is built from the table of definitions for that module under the physics table. So this line means that in the FHiCL configuration of HelloWorld2, art looks for a variable:value line where the variable name is "magicNumber" and the value is an integer.
  5. Configure the fcl to set the magic number to 5 by adding a line "magicNumber : 5" under module_type. Run the fcl again to check that it changed.
  6. You can also add this configuration to the end of the fcl file by using the full parameter location, i.e.
  7. physics.analyzers.hello2.magicNumber : 9
     Try adding this to the end of your file and see if the magic number changed.
  8. Finally, try running both this module and the original HelloWorld module by copying the module declaration from hello.fcl and adding it to your end_path. If you need help, check $TUTORIAL_BASE/solutions/hello2.fcl. A sketch combining these changes follows this list.
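Putting these pieces together, a sketch of the relevant part of the final configuration might look like the following. The hello2 label, module names, and parameter name come from the steps above; the hello label for the original module and the path layout are assumptions.

  physics : {
    analyzers : {
      hello2 : {
        module_type : HelloWorld2
        magicNumber : 5                  # configure the parameter inside the module table
      }
      hello : {
        module_type : HelloWorld         # the original module from Exercise 1
      }
    }
    e1 : [ hello2, hello ]               # run both analyzers in the end path
    end_paths : [ e1 ]                   # (trigger_paths etc. as in hello.fcl)
  }

  # ...or override a single parameter from the end of the file:
  physics.analyzers.hello2.magicNumber : 9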

Exercise 3: Using a more realistic Mu2e fcl to simulate an event

To fully simulate an event in Mu2e, we will need to run many more modules and services. Modules can become dependent on output from previous modules and may require certain services to be set up, so the final FHiCL for a functioning Mu2e Offline job ends up being somewhat complex. To help make things easier, we use a few FHiCL tricks.

  1. Let's look at an example script to produce conversion electron events
  2. gedit fcl/CeEndpoint.fcl
  3. At the top you will see a #include line. Let's look at the file it is including
  4. gedit $MU2E_BASE_RELEASE/JobConfig/fcl/prolog.fcl
  5. You can see that this file includes several more files and then starts with BEGIN_PROLOG. Prolog files are just collections of FHiCL definitions that can be referenced later. You can see, for example, that it defines a table called Primary.producers
    • Most directories in Offline have a prolog.fcl file that provides standard definitions for their modules and folders
  6. In JobConfig/fcl/prolog.fcl (and in fcl/CeEndpoint.fcl) you will see several definitions using "@local" or "@table". This is how you reference a previously defined value (for example, something defined in a prolog).
    • @local references a single standard definition (for example, on line 16, the definition of a module)
    • @table references a table of several definitions, but without the curly braces (for example, line 18 adds several more module-definition name:value pairs to the producers table)
    • @sequence references a comma-separated list of values, such as a path (for example, on line 62 each sequence adds the names of several modules to this path)
    • For more details read https://mu2ewiki.fnal.gov/wiki/FclIntro. A toy example combining these constructs appears at the end of this exercise.
  7. Back in fcl/CeEndpoint.fcl, you should see @table::Primary.producers, which we found in JobConfig/fcl/prolog.fcl. See if you can find out where the generate module in this fcl comes from and what it is running.
  8. You can debug a complicated FHiCL file with lots of #includes using the Offline option --debug-config. This will fully process the script and print the result to a file with all the @local etc. references made explicit. Let's try it with our CeEndpoint.fcl
  9. mu2e -c fcl/CeEndpoint.fcl --debug-config CeEndpoint-debug.fcl
     Look in CeEndpoint-debug.fcl for the generate module definition and check if you had it right.
  10. Finally, run fcl/CeEndpoint.fcl and generate 10 events
  11. mu2e -c fcl/CeEndpoint.fcl --nevts=10
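As a toy illustration of the prolog machinery described above (all of the names here are invented for the example; they are not the real JobConfig definitions):

  BEGIN_PROLOG
  myGun       : { module_type : SomeGenerator }   # a single (made-up) module definition
  myProducers : { generate : @local::myGun }      # a table of module definitions
  mySequence  : [ generate ]                      # a list of module labels
  END_PROLOG

  physics : {
    producers : {
      @table::myProducers                         # splices the name:value pairs into this table
    }
    someTriggerPath : [ @sequence::mySequence ]   # splices the labels into this path
  }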

Exercise 4: Exploring Offline outputs

The above exercise should produce two files, dig.owner.CeEndpoint.version.sequencer.art and nts.owner.CeEndpoint.version.sequencer.root (also located in $TUTORIAL_BASE/RunningArt/data). Both are actually ROOT files, but they contain different information. The .root files produced by Offline hold diagnostic histograms and TTrees, and analysis output like the TrkAna TTrees that can be used in a normal ROOT analysis. The .art files contain the actual C++ objects Offline uses to describe the event (both simulation and reconstructed information), and so are in general meant to be processed by other Offline jobs.

  1. Open both files in a ROOT TBrowser to see their contents (see the snippet after this list if you have not done this before).
  2. The .root file will have a few histograms describing the event generator output. The .art file will have TTrees for Event/SubRun/Run level information. If you open the Events TTree, you will see lots of branches with complicated names.
  3. We can use Offline modules to better understand the .art file contents.
  4. mu2e -c Print/fcl/dumpDataProducts.fcl --source dig.owner.CeEndpoint.version.sequencer.art
     The art "data products" are saved into .art files using the naming scheme className_moduleName_instanceName_processName. Modules are not allowed to modify existing data in art; instead, if you want to change a data product, a module creates a new, modified version. Since the saved name always includes the moduleName, future modules or analyses can refer to just this modified version.
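If you have not used a TBrowser before, one way to open the files (using the file names from above) is from the ROOT prompt:

  root -l nts.owner.CeEndpoint.version.sequencer.root dig.owner.CeEndpoint.version.sequencer.art
  root [0] new TBrowser;    // opens a graphical browser listing the contents of both files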

Exercise 5: Create your own primary production job

  1. The JobConfig directory has the base scripts for all the production jobs, and so has examples of how to correctly configure most kinds of Offline jobs. If you need to do anything different, you can usually start with a JobConfig script and modify it slightly.
    • JobConfig/primary has scripts to generate primary only (no backgrounds) events without doing reconstruction
    • JobConfig/mixing has scripts to generate primary plus background events
    • JobConfig/reco has scripts to take output of the previous two and run reconstruction
    • These scripts are designed to be used with grid production scripts, and so don't include configuration of the random seed. To run the scripts as is, you will need to add two lines:
    services.SeedService.maxUniqueEngines: 50
    services.SeedService.baseSeed: <put some number here>
  2. Let's try to make our own fcl script now. See if you can write and run a script that generates primary-only events with 100 MeV photons. Start by looking at $MU2E_BASE_RELEASE/EventGenerator/fcl/prolog.fcl, which has the default configuration for all of the event generators. There should be one called photonGun.
    • Tip: start by copying $MU2E_BASE_RELEASE/JobConfig/primary/CeEndpoint.fcl and replace the generator.
    • Don't forget to add the seed configuration!
  3. Now see if you can turn on the StrawDigi diagnostic output. Looking at $MU2E_BASE_RELEASE/TrackerMC/src/StrawDigisFromStepPointMCs_module.cc, you will see a FHiCL parameter called diagLevel. Try increasing it in your script from 0 to 2.
  4. Run your script and check the output .root file; you should see a new TDirectory with the StrawDigi diagnostics.
    • If you need help, check solutions/photongun.fcl. A rough sketch is also given just below.
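For reference, here is a rough sketch of what such a script could look like. The photonGun name comes from the hint above and the generate label from Exercise 3; the StrawDigi module label (makeSD below), the energy configuration, and the seed value are assumptions that you should check against the prologs and the module source.

  # photongun.fcl - sketch only; verify every name before using
  #include "JobConfig/primary/CeEndpoint.fcl"

  # swap the conversion-electron generator for the photon gun
  physics.producers.generate : @local::photonGun
  # (check EventGenerator/fcl/prolog.fcl for how the 100 MeV energy is configured)

  # seed configuration needed to run interactively
  services.SeedService.maxUniqueEngines : 50
  services.SeedService.baseSeed : 8

  # turn on StrawDigi diagnostics (module label assumed)
  physics.producers.makeSD.diagLevel : 2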

Exercise 6: Running event reconstruction

  • FIXME need non mixing script
  • Use output of exercise 4 or $TUTORIAL_BASE/RunningArt/data/dig.owner.CeEndpoint.version.sequencer.art
mu2e -c JobConfig/fcl/mcdigis.fcl --source $TUTORIAL_BASE/RunningArt/data/dig.owner.CeEndpoint.version.sequencer.art
  • Run dumpDataProducts.fcl on the dig.*.art and the mcs.*.art files and compare
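For the comparison, you can run the same dump fcl from Exercise 4 on each file; the mcs.* name below is a placeholder for whatever output file your reconstruction job actually wrote:

  mu2e -c Print/fcl/dumpDataProducts.fcl --source dig.owner.CeEndpoint.version.sequencer.art
  mu2e -c Print/fcl/dumpDataProducts.fcl --source mcs.owner.CeEndpoint.version.sequencer.art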

Exercise 7: Running TrkDiag to create TrkAna TTrees

mu2e -c TrkDiag/fcl/TrkAnaReco.fcl --source-list files.txt
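Here files.txt is just a plain text file listing the input art files, one per line. Assuming the reconstruction output from Exercise 6 is in the current directory (the mcs.*.art pattern is an assumption), one way to create it is:

  ls mcs.*.art > files.txt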

Exercise 8: Using generate_fcl to prepare to run jobs on the grid

When you want to simulate a large number of events, you will need to split your work over multiple Offline jobs, and you will often want to run them on the Fermilab grid. There is a series of scripts to help you set up your fcl files and to start and track your jobs. The first problem is creating the fcl files. If, for example, you want to generate 10000 conversion electron events but only want to run 500 in a single job, you will need 20 fcl files that do the same thing, except that each needs a different random seed and different output file names. The generate_fcl script takes a base fcl file and builds these 20 files for you.

  1. The generate_fcl tool is in mu2etools, so you will first need to set up that UPS product
  2. setup mu2etools
  3. In the $MU2E_BASE_RELEASE/JobConfig/examples directory, there are scripts that call generate_fcl for each of the fcl files in JobConfig/primary, JobConfig/mixing, and JobConfig/reco. Let's look at $MU2E_BASE_RELEASE/JobConfig/examples/generate_CeEndpoint.sh
    • You can see from the --include option that this script uses JobConfig/primary/CeEndpoint.fcl as the base fcl and will produce 2000000 events (200 jobs of 10000 events each)
    • The description, dsconf, and dsowner options configure the output filenames. If you are simulating a large production that will be widely used by others, it is important that the output filenames follow specific patterns, which is what generate_fcl tries to enforce. You can read more about the Mu2e filename patterns at https://mu2ewiki.fnal.gov/wiki/FileNames
    • By default, generate_fcl creates directories named 000, 001, 002, etc. and puts 1000 fcl files in each. In this script you will see that the 000 folder is automatically renamed to CeEndpoint
    • More detailed documentation for generate_fcl is located at https://mu2ewiki.fnal.gov/wiki/GenerateFcl
  4. Run this script to produce 200 jobs
  5. source $MU2E_BASE_RELEASE/JobConfig/examples/generate_CeEndpoint.sh
    • There should now be a CeEndpoint directory. If you ls inside it, you will see many .fcl files and .fcl.json files
    • The .json files are used during official productions to keep track of the provenance of every file produced; you can ignore them for now
    • If you look at one of the .fcl files, you will see it just #includes our base fcl, and the only change from file to file is a reconfiguration of the seed and output filenames (a sketch is given at the end of this exercise)
    We can do the same thing for reconstruction jobs. The configuration of generate_fcl is a bit different since we are now telling it how to read in files instead of how many events to produce.
  6. Look at scripts/generate_reco-CeEndpoint-mix.sh (this is a slightly modified version of JobConfig/examples/generate_reco-CeEndpoint-mix.sh). Instead of njobs and events-per-job, we now provide a --inputs argument, naming a file that contains the filenames of the art files to read, and a --merge-factor parameter. Normally reconstruction is fast enough that a single job can process the output of many simulation jobs; the merge-factor setting tells generate_fcl how many input art files each reco job will use.
  7. Run this script and explore the output
  8. source scripts/generate_reco-CeEndpoint-mix.sh
    • Since data/CeEndpoint-mix.txt had 60 entries and we have a merge-factor of 20, we will produce 3 output fcl files.
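To make the earlier point concrete, each generated fcl file is essentially a thin wrapper like the sketch below. All the values are illustrative, and the output module label depends on the base fcl; generate_fcl fills in a unique seed and dataset-compliant file names for every job.

  #include "JobConfig/primary/CeEndpoint.fcl"
  # per-job overrides written by generate_fcl (values and output label illustrative)
  services.SeedService.baseSeed : 1234567
  outputs.Output.fileName : "dig.owner.CeEndpoint.version.000001_00000000.art"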

Exercise 9: Adding backgrounds with event mixing

  • Compare JobConfig/primary/CeEndpoint.fcl to JobConfig/mixing/CeEndpoint.fcl
  • locate background-cat.txt files
  • aux-input
  • generate_fcl
  • run interactively, note job length

Exercise 10: Submitting grid jobs with mu2eprodsys

setup mu2egrid

  • wfproject
  • setup vs code tarball
  • fcl on pnfs vs tarball

mu2eprodsys --dry-run

Exercise 11: Running the event display

Reference Materials