MuseBuildCodeTutorial

From Mu2eWiki
Revision as of 22:12, 31 March 2021


Muse

As Mu2e approaches data-taking there will be many more ntuples, calibration procedures, and analysis code developed. Muse is a UPS product containing a set of scripts and scons commands that build Offline and other Muse-ready repos together. Currently it could be described as pre-beta, so it is not completely stable.

The following is a recipe for testing building Offline with the Tutorial repo, as an example.

 setup mu2e
 setup muse
  < cd to a working dir >
 git clone https://github.com/Mu2e/Offline
 git clone https://github.com/Mu2e/Tutorial
 muse -h
 muse setup
 muse build -j 20 >& b_00.log
 mu2e -n 10 -c Offline/Validation/fcl/ceSimReco.fcl
 mu2e -s mcs.owner.val-ceSimReco.dsconf.seq.art \
    -c Tutorial/DataExploration/fcl/Ex01.fcl

  < in the same working dir, but in a second window>
 muse setup -q debug
 muse status
 muse build -j 20 >& d_00.log
 mu2e -s mcs.owner.val-ceSimReco.dsconf.seq.art \
    -c Tutorial/DataExploration/fcl/Ex02.fcl

An example of using a backing release:

 setup mu2e
 setup muse
  < cd to a working dir that will be the backing build >
 git clone https://github.com/Mu2e/Offline
 muse setup
 muse build -j 20 >& b_00.log

  < *in a new process* cd to a working dir that will link to the backing build >

 setup mu2e
 setup muse
 git clone https://github.com/Mu2e/Tutorial
 muse link /path/backing_working_dir/Offline
 muse setup
 muse build -j 20 >& b_00.log
 mu2e -n 10 -c Offline/Validation/fcl/ceSimReco.fcl
 mu2e -s mcs.owner.val-ceSimReco.dsconf.seq.art \
    -c Tutorial/DataExploration/fcl/Ex02.fcl


Example of a partial Offline build using "mgit", the Muse version of pgit:

setup mu2e
setup muse
cd <a working dir>
  ... establish a backing build
muse link

Recent published releases:
Recent CI builds
2021-03-12 21:45 master/43ebbdfe
2021-03-12 12:44 master/c2409d93

muse link master/43ebbdfe
   ... establish partial code checkout
        include your github username to set up an origin remote (requires authentication)
mgit init rlcee
cd Offline
mgit add HelloWorld
sed -i 's/From/From '$USER'/' HelloWorld/src/HelloWorld_module.cc

git remote -v
git status

cd ..

muse setup
muse build -j 20 --mu2eCompactPrint
mu2e -c HelloWorld/test/hello.fcl


Example of jobs submission

  .. in any Muse build
muse tarball
Tarball: /mu2e/data/users/rlc/museTarball/tmp.BqvJOks3xI/Code.tar.bz2

mu2eprodsys ... \
  --code /mu2e/data/users/rlc/museTarball/tmp.BqvJOks3xI/Code.tar.bz2


Introduction

In the case of Mu2e, an event records the detector response during one microbunch, consisting of a short burst of millions of protons hitting the production target, thousands of muons stopping in the aluminum stopping target, and those muons interacting or decaying. This happens nominally every 1695 ns, and each of these bursts forms a Mu2e event.
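As a quick back-of-the-envelope check (this calculation is not in the original text), the nominal 1695 ns spacing corresponds to a microbunch rate of roughly 590 kHz:

```shell
# Microbunch rate implied by the nominal 1695 ns spacing
awk 'BEGIN { printf "%.0f kHz\n", 1 / 1695e-9 / 1000 }'
# prints: 590 kHz
```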

When we read and process events to simulate the detector, reconstruct raw data, or analyze data events, we need a framework to organize our executables, a way to control the behavior of the executable, and a format to write the events into. These functions are provided by the art software package.

When we want to operate on the data, we write C++ code. We track code files using a code management tool called git, which writes files to a commercial site, github. We have a main repository (or repo) called Offline and other smaller special-purpose repos. We compile and link the code using a set of Mu2e scripts called Muse. Inside those scripts, we use a code build system called scons. Finally, inside scons, we use linux gcc to compile and link the code.

Art

The art package provides:

  • a framework for creating executables
  • the event format (we define the contents)
  • a control language called "fhicl" or "fcl" (pronounced fickle)
  • services such as random numbers

When we write code, typically to simulate the detector, reconstruct the data, or analyze the data, we write our code in C++ modules. We then run the framework, and tell it to call the code in our module. We also tell it what input files to use, if any. The framework opens the input files, reads data, provides services, calls our modules with the data, event by event, and then handles any filtering of events, and writing output.

When we run executables in the art framework, we have to tell the framework which modules to run and how to configure the modules. A module configuration might include how to find the input data inside the file, and what parameters and cuts to use in performing its task. This function is provided by a fcl file - you always provide a fcl file when you start up an art executable.
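As a preview of what such a configuration looks like, here is a minimal, illustrative fcl fragment in the style of the both.fcl example later in this tutorial (the module label and path name are arbitrary choices for illustration):

```
analyzers: {
  hello: {
    module_type : HelloWorld   # which module class the framework should load
  }
}

e1 : [ hello ]   # the framework runs the modules listed in this path
```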

art executables read and write files in the art format. This is a highly-structured format which contains the event data, processing history, and other information. Under the covers, art uses the root package (used widely in HEP) to read and write the event files. Each event contains pieces which are referred to as products. For example, in each event, there may be a product which contains raw hits, one that contains reconstructed hits, and one that contains reconstructed tracks. In simulation, there are products with truth information.

All of these concepts will be fleshed out in subsequent tutorials.


Code

We will start with a tour of the Mu2e code base. We keep our code organized using a piece of software called git. This free, open-source package stores our code in a compact format and keeps track of lots of useful things:

  • all the history of each file as it is developed
  • coherent versions of the whole set of code (tags)
  • tracking parallel development by many people, and merging all that work back together when ready.

So let's look at what is stored in our git repository. If you are in the tutorial docker container,

mkdir -p /home/tutorial_code
cd /home/tutorial_code
source /setupmu2e-art.sh

or if you are on the interactive central servers (mu2egpvm)

mkdir -p /mu2e/app/users/$USER/tutorial_code
cd /mu2e/app/users/$USER/tutorial_code
source /cvmfs/mu2e.opensciencegrid.org/setupmu2e-art.sh 

Then retrieve the code base from the git repository:

git clone http://cdcvs.fnal.gov/projects/mu2eofflinesoftwaremu2eoffline/Offline.git

After a few minutes you should see a directory named Offline. Go into that directory

cd Offline

and look around. Note the directory .git. This is where git keeps information such as code history and tags. Most of the directories you see here contain our working code. Each directory contains a relatively small, logically-grouped set of code.

Here are a few git commands to get you started.

git status

tells you which branch you are on. When multiple people are working on code development, they will probably be working on their own branches - a version of the code they can control. When the changes are tested and approved, the personal working branch is merged back into the main code branch, called "master". See the active branches:

git branch -al

You can switch to a different active working branch:

git checkout MDC2018
git status

or to a fixed release:

git checkout v7_4_1

All that output text is saying is that this is a fixed point of the code and you shouldn't try to modify it, which is fine and makes sense.

git status

see the files which are different from the previous release:

git diff --numstat v7_4_0 | head

and then see the code that changed in that first file listed:

git diff v7_4_0 Analyses/src/KalRepsPrinter_module.cc

and then the file history:

git log Analyses/src/KalRepsPrinter_module.cc

In the git status output you should also see "working tree clean"; this means your area is still the same as the central repository you just cloned. If there were new or changed files, it would tell you. You can read much more about git and how we use it, including making your own branches and committing code modifications back to the central repo.

Let's look at the code structure. For example, under

ls -l TrkHitReco

you can see directories: fcl, inc, src, test and data. Most of the top-level directories follow this pattern. The C++ includes related to the code are kept under inc, the C++ source code in src, and the fcl scripts that configure modules in fcl; sometimes scripts are kept under test, and text files used by the code under data. Many directories don't need test or data - see the TrkReco directory for example. Take a look at what is in these directories.

Recall that we write modules, which are then run by the framework to act on the event data. Any module we write is compiled into a shared object library which is loaded by the framework if our fcl tells it to. For example,

TrkHitReco/src/StrawHitReco_module.cc

would be compiled into

lib/libmu2e_TrkHitReco_StrawHitReco_module.so

which we can then ask the framework to load and run (or not). The shared object is not there, because we haven't done the compiling, which we will get to in a minute.
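The naming convention can be sketched as follows (this is an illustration of the pattern only, not the actual build rule scons applies):

```shell
# Pattern: <Dir>/src/<Name>_module.cc  ->  lib/libmu2e_<Dir>_<Name>_module.so
src=TrkHitReco/src/StrawHitReco_module.cc
dir=${src%%/*}                       # top-level directory: TrkHitReco
name=$(basename "$src" _module.cc)   # module class name: StrawHitReco
echo "lib/libmu2e_${dir}_${name}_module.so"
# prints: lib/libmu2e_TrkHitReco_StrawHitReco_module.so
```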

Just as a quick peek, open the cc file:

emacs (or vi, cat) TrkHitReco/src/StrawHitReco_module.cc

Find the line

class StrawHitReco : public art::EDProducer

Since the module code inherits from a framework base class (EDProducer), the framework can manipulate it.

Find the line

StrawHitReco::beginJob

this method is called once at the beginning of the job. Similarly

StrawHitReco::beginRun

is called when the input events change run number.

StrawHitReco::produce

is called for every event. It is called "produce" because it typically produces data to insert into the event, in this case, the reconstructed tracker hits. Other options are "filter", if the module can select a subset of events to write out, and "analyze", if the module is not writing any new data to the event.

Find the line

event.getValidHandle(_sdtoken)

this is retrieving the raw hits from the event. Then find the line

event.put(std::move(chCol));

This adds a newly created data product to the event. All of these concepts will be covered in more detail later.

Building Code

In a typical use pattern, you would modify some of the code, or add code, and then you would want to compile it so you can run it. This is the job of the build system. In our case we use a system called scons. The build system has to look over the code, decide what needs to be compiled, do the compilation according to a recipe, then see what needs to be linked and do that linking. The first time you run scons, it will compile and link everything. After that, it should only do the steps which are affected by how you changed or added to the code.

The scons build is driven by the SConstruct file, which is written in python and tells scons how to do its job. You can take a look at it and see lots of machinery related to code building. One of the things it does is look for SConscript files in the directory tree, such as

cat TrkHitReco/src/SConscript

these files, mostly in src subdirectories, can customize the scons behavior for operating on the code in the directory where it sits.

When you build the Mu2e code, you have some options. Your primary choice is whether to build prof (optimized) or debug (not optimized - easier to debug). You can see your options with

./buildopts

You can switch between prof and debug with

./buildopts --build=debug

or the reverse. The other options are beyond the scope of this page. Once you start building you have to stick with your choices until you start over (delete all the built files and set up in a new process). There is also help available:

./buildopts --help


There are many software packages written by non-Mu2e parties (like art and root). We need to provide the build system with pointers to those packages. We do this by sourcing a setup script:

source setup.sh

You can see everything you now point to:

ups active

Observe that your environment now points to particular versions of art, root, and many other packages. (You have to finish with any buildopts choices before running setup.sh, since the latter locks in the choices.)

ups is a Fermilab software package that allows you to organize these third party packages and their versions, operating system flavors and inter-dependencies. You can see your OS flavor:

ups flavor

or see what versions of a package, like root, are available:

ups list -aK+ root


Once you're done with buildopts and setups, you can build:

> scons
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
g++ -o Analyses/src/BkgRates_module.os -c 
...

Once it starts compiling, you might as well kill it with ctrl-c, since it will take a long time to run. In practice you will want to build on the node mu2ebuild01, which has 16 cores, where you can build with 20 threads:

scons -j 20

You can now remove any files that were written by the build:

scons -c

This leaves all the source code alone. It also doesn't affect the setup choices you made with buildopts and setup.sh; those are still in effect and fixed in your process.

Releases

At appropriate times, we mark the code with a tag such as "v7_4_1". This marks a state of the code we want to save. Maybe we are recording the state of the code at a point where we make a production run for simulation, or maybe we are marking major changes in the code. Take a look at the release list and the notes for a recent release.
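To see how tagging works mechanically, here is a toy demonstration in a throwaway scratch repository (the repo, commit, and tag name here are invented for illustration; do not run this inside Offline):

```shell
# Create a scratch repo, make one commit, and tag it
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag v0_0_1    # mark this state of the code
git tag --list    # prints: v0_0_1
```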

After tagging, we build the code and save it so anyone can use it - this is a "release".

If you are in the tutorial container,

ls -l /Offline

If you are on one of the mu2egpvm machines:

ls -l /cvmfs/mu2e.opensciencegrid.org/Offline | tail

If you wanted to set up this fixed release, in the tutorial container:

source /setupmu2e-art.sh
source /Offline/v7_4_1/SLF6/prof/Offline/setup.sh

If you are on one of the mu2egpvm machines:

source /cvmfs/mu2e.opensciencegrid.org/setupmu2e-art.sh 
source /cvmfs/mu2e.opensciencegrid.org/Offline/v7_4_1/SLF6/prof/Offline/setup.sh

Since we set up the code above, we can't also set up these versions of the code. In order to do that, we need to start a new process, so either start a new tutorial container, or log into the mu2egpvm machines again.

Partial Builds

When we started building the code in the exercise above, we killed it because it would take too long. On a single node it will take about 45 minutes, and on mu2ebuild01 with its 16 cores, it will take about 10 minutes. This is a common problem, so we have developed two ways to build part of the code. Note that this is for a full build from scratch - subsequent incremental builds are usually much faster.

In both methods we need a fully built "base release" that already exists, such as the ones we just looked at. We then make a local directory with just a small piece of the code. We build this small piece and take all the rest of the compiled code from the "base release", so the local build is fast.

In both methods you must be aware of one big potential pitfall. If any header file that is compiled into your local small piece of code is different from the same header file compiled into the base release, then the code can corrupt memory and is likely to fail in unpredictable ways.

The first method is called Satellite releases. You can create one now. You will need to set up the Offline code in a different way, so you have to get away from your current setup. If you are in the container, exit and restart. If you are on mu2egpvm, start a new process.

Then, if you are in the container,

source /setupmu2e-art.sh
mkdir -p /home/sat
cd /home/sat
/Offline/v7_4_1/SLF6/prof/Offline/bin/createSatelliteRelease
cd Satellite

and if you are on mu2egpvm

source /cvmfs/mu2e.opensciencegrid.org/setupmu2e-art.sh
mkdir -p /mu2e/app/users/$USER/sat
cd /mu2e/app/users/$USER/sat
/cvmfs/mu2e.opensciencegrid.org/Offline/v7_4_1/SLF6/prof/Offline/bin/createSatelliteRelease
cd Satellite

You made your buildopts choices (prof/debug) when you chose which directory the createSatelliteRelease script is in. Take a look at what is in this directory. You should see a link to the SConstruct file in the base release and a special setup.sh file. You can now

source setup.sh

Let's add some code:

 cp -r  $MU2E_BASE_RELEASE/HelloWorld .

and you can build it - it is small

 scons

When scons completes successfully the output ends with the line

scons: done building targets.

You can see your new libraries in lib, for example

ls -l lib/libmu2e_HelloWorld_HelloProducer_module.so

which is now earlier in the load path than, and therefore replaces, the same library in the base release:

ls -l $MU2E_BASE_RELEASE/lib/libmu2e_HelloWorld_HelloProducer_module.so

You can modify code and recompile - remember the big pitfall! This method is best used for a project that has small scope and that you know you are not going to commit back to the main git repo.

The second method of compiling a small part of the code is called pgit. This also gives you a local directory with a small part of the code you can compile quickly, and it is also backed by a fully pre-built base release. It also has the same potential big pitfall - losing sync of header files. The advantage of pgit is that you still have access to the git repo when you are working on the local code, and you can pull and push code back to the main git repo while you are working in the local area. We are in the process of commissioning this method and establishing that it can fully replace satellite releases and is not confusing to users. We will not work an example here, but you can try it if you want. At the moment, it will not work in the tutorial container.

Exploring The Partial Build

Here are some suggested exercises.

  1. Look at the source code for the module HelloWorld/src/HelloWorld_module.cc. This file has a lot of structure that you can read about on the wiki page for Modules. The "payload" of the module is the lines that print out the EventID for each event:
        cerr << "Hello, world with changes.  From analyze: "
             << event.id()
             << endl;
    

    The Mu2e wiki has a short writeup about EventIDs

  2. Run the HelloWorld module:
      mu2e -c HelloWorld/test/hello.fcl 
    

    Identify the printout from the module:

    Hello, world.  From analyze: run: 1 subRun: 0 event: 1
    Hello, world.  From analyze: run: 1 subRun: 0 event: 2
    Hello, world.  From analyze: run: 1 subRun: 0 event: 3
    

    This will also print two warning messages:

    INFO: using default process_name of "DUMMY".
    %MSG-e duplicateDestination: 
    
    ============================================================================ 
    
        Duplicate name for a MessageLogger Statistics destination: "cout"
        Only original configuration instructions are used. 
    
    ============================================================================ 
    
    %MSG
    

    Ignore them. We will fix them. The next tutorial has a lot of information about using fcl files so this tutorial will not discuss them much.

  3. You can also run a larger number of events:
      mu2e -c HelloWorld/test/hello.fcl -n 20
    
  4. Edit the file HelloWorld/src/HelloWorld_module.cc and change the printout. Then rebuild it:
    scons
    mu2e -c HelloWorld/test/hello.fcl
    

    Observe that the printout made by the module has changed.

  5. Make a new module by making a copy of a module and modifying the copy:
    1. Make a copy of HelloWorld/src/HelloWorld_module.cc
      cp HelloWorld/src/HelloWorld_module.cc HelloWorld/src/Bonjour_module.cc
      cp HelloWorld/test/hello.fcl HelloWorld/test/bonjour.fcl
      
    2. Edit the new .cc file and make a global change of HelloWorld to Bonjour. Change the printout again so that you can distinguish this module from the original.
    3. Edit the .fcl file to replace
       module_type : HelloWorld
      

      with

       module_type : Bonjour
      
    4. Build and run the new module
      scons 
      mu2e -c HelloWorld/test/bonjour.fcl
      

      Observe the changed printout. If you have questions about why you did not also change the module label, you can re-read the wiki page on Modules.

  6. Make a new .fcl file that runs both modules in one job:
    cp HelloWorld/test/hello.fcl HelloWorld/test/both.fcl
    

    Edit both.fcl so it looks like:

      analyzers: {
        hello: {
          module_type : HelloWorld
        }
        hi : {
          module_type : Bonjour
        }
      }
    
      p1 : [ ]
      e1 : [hi, hello]
    

    Now run both modules in one job:

    mu2e -c HelloWorld/test/both.fcl
    

    Note that you do not need to rebuild the libraries because you did not change the source code.

  7. Rebuild the code even though there is nothing to do. You should see the following:
    scons: Reading SConscript files ...
    scons: done reading SConscript files.
    scons: Building targets ...
    scons: `.' is up to date.
    scons: done building targets.
    
  8. In both.fcl, swap the order of "hi" and "hello" in the definition of "e1". Rerun. Notice that the order of the printout does not change! art reserves the right to run analyzer modules in an arbitrary order, and we should never write code that depends on a particular order. art does respect the user-specified order for producer and filter modules.