Computing Concepts
Introduction
This page is intended for physicists who are just starting to work in the mu2e computing environment. The following is a broad overview of the major components and how they work. There are many links into the rest of the mu2e documentation, which is terser and intended as a reference once you have worked through this introductory material.
Interactive logins
Collaborators can do interactive work in several places. Probably the best place to start is the interactive machines at Fermilab. These are a set of virtual machines running SL6 (a variant of Linux). The disks that the user sees are located on specialized disk server hardware, and the same disks are mounted by all the interactive machines. There are five machines named mu2egpvm01.fnal.gov through mu2egpvm05.fnal.gov. You will have an account and a home area here (the same area is on all machines) and some space for data. We prefer using the bash shell for all purposes. You can read more about the interactive machines here.
Collaborators can also compile and run mu2e code on their Linux desktops or laptops. You can read more about that here. The code should be manageable on the Mac's Unix layer, but we have not pursued this option officially. Finally, several of the collaboration institutions have set up working areas at their home institutions. For these options, someone would need to copy the code distribution and supporting software tools to the local machine, or mount the distributed code disk. You can read more about these options here, but it is best to discuss this with your group leaders first. Generally we recommend working on the lab interactive nodes unless the networking from your institution makes that prohibitive.
Authentication
You log in to the virtual machines using Kerberos authentication. You will need a permanent ID called a Kerberos "principal", which looks like "xyz@FNAL.GOV", where xyz is your username. You will have a password associated with your principal. You will use this principal and password to log into Linux desktops located at Fermilab or to ssh into the interactive machines from your home institution. You typically refresh your Kerberos authentication every day.
The second identity you will need is the "services" principal, which looks like xyz@services.fnal.gov, or just xyz, and also has a password. You will need this identity to log into Fermilab email, the servicedesk web site and some other services based at the lab. You would typically only use this authentication as you log into the service.
Finally, you will need a CILogon certificate. This cert is the basis of authentication to the mu2e documents database, the computing farms, and other services. You will use this cert in two ways. The first way is to load it into your browser, which then gives you access to web pages and web services. The second way is to use your Kerberos authentication to access a copy of your certificate maintained in a remote database. You get this certificate once and then renew it only once a year.
Please follow the procedure on the ComputingAccounts page to setup your accounts.
Code
Events
Very briefly, the experiment is driven by a short burst of millions of protons hitting the primary target every 1695 ns. This cycle is called a microbunch. After the burst interacts, muons migrate to, and come to rest in, the stopping target. After this surge of particles dies down during the first part of the microbunch, the detector observes what happens to the stopped muons during the second part of the cycle. The data recorded during this ~900 ns is written out (if it passes the trigger) in a data structure called an event. Many events can be written in one file. Events have unique identifying numbers. Short periods of data-taking (minutes) are grouped into subruns with an ID number, and longer periods of stable running (a few hours) are grouped into a run with a unique run number.
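As a rough picture of this hierarchy, an event identifier can be thought of as a triple of run, subrun and event numbers. The sketch below is purely conceptual; the framework provides its own identifier classes.

  // Conceptual sketch only: the framework provides the real identifier classes.
  struct EventId {
    unsigned run;     // a few hours of stable running
    unsigned subRun;  // minutes of data-taking within a run
    unsigned event;   // one ~900 ns detector readout from one microbunch
  };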
Since we do not have data yet, we analyze simulations which are based on our expectation of what the data will look like. We draw randomly from known physics processes, trace the particles through the detector, and write out events. The interaction of particles in the detector materials is simulated with the geant software package. The simulated data look like the real data will, except that they also contain the truth of what happened in the interactions.
Coding
The mu2e simulation and reconstruction code is written in C++. We write modules which create data, or which read data out of the event, process it, and write the results back into the event. The modules plug into a framework called art, and this framework calls the modules to do the actual work as it reads an input file and writes an output file. The primary data format is determined by the framework, so it is called the art format, and the files have the extension .art.
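To give a flavor of what a module looks like, here is a minimal sketch of an analyzer module. It is illustrative only, not a working Mu2e module: the product type, the fcl parameter name and the include paths are assumptions, and the details of the art interface vary between releases.

  // Minimal sketch of an art analyzer module (illustrative only).
  // The product type (std::vector<int>), the fcl parameter name "hitsTag"
  // and the include paths are assumptions, not Mu2e conventions.
  #include "art/Framework/Core/EDAnalyzer.h"
  #include "art/Framework/Core/ModuleMacros.h"
  #include "art/Framework/Principal/Event.h"
  #include "fhiclcpp/ParameterSet.h"
  #include <iostream>
  #include <string>
  #include <vector>

  class HitCounter : public art::EDAnalyzer {
  public:
    explicit HitCounter(fhicl::ParameterSet const& pset)
      : art::EDAnalyzer(pset)
      , hitsTag_(pset.get<std::string>("hitsTag"))  // which product to read, set in the fcl file
    {}

    // The framework calls this once for each event it reads from the input file.
    void analyze(art::Event const& event) override
    {
      auto hits = event.getValidHandle<std::vector<int>>(hitsTag_);
      std::cout << "event " << event.id() << " has " << hits->size() << " hits\n";
    }

  private:
    std::string hitsTag_;  // label of the module that made the product we read
  };

  DEFINE_ART_MODULE(HitCounter)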
We use the git code management system to store and track our code. Currently we have one main git repository which contains all our simulation and reconstruction code. You can check out this repository, or a piece of it, and build it locally. In this local area you can make changes to code and read, write and analyze small amounts of data. We build the code with a make system called scons. The code may be built optimized (prof) or non-optimized and prepared for running a debugger (debug).
At certain times the code is tagged, built and published as a stable release. These releases are available on the /cvmfs disk area. This is a sophisticated distributed disk system with layers of servers and caches, but to us it just looks like a local disk, which can be mounted almost anywhere. We often run large projects off of these fixed releases.
Running
Which modules are run and how they are configured is determined by a control file, written in fcl (pronounced fickle). This control file can change the random seeds for the simulation, and the input and output file names. A typical run might be to create a new simulation file. For various reasons, we often do our simulation in several stages, writing out a file between each run of the executable, or stage. A second type of job would be to run one of the simulation stages with a variation of the detector design, for example. Another typical run might be to take a simulation file as input and test various reconstruction algorithms.
Products
The data in an event in a file are organized into products. Examples of products include straw tracker hits, tracks, or clusters in the calorimeter. The fcl is often used to decide which products to read, which ones to make, and which ones to write out. There are products which contain the information of what happened during the simulation, such as the main particle list, SimParticles.
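As a sketch of how a module both reads and makes products, a producer module along the following lines declares the product it will add and then puts it into the event. The product types and the "makeHits" input label are made up for illustration, and in recent art releases the base-class constructor also takes the parameter set.

  // Sketch of a producer module: read one product, write another (illustrative only;
  // the product types and the "makeHits" input label are made up, not Mu2e products).
  #include "art/Framework/Core/EDProducer.h"
  #include "art/Framework/Core/ModuleMacros.h"
  #include "art/Framework/Principal/Event.h"
  #include "fhiclcpp/ParameterSet.h"
  #include <memory>
  #include <vector>

  class HitScaler : public art::EDProducer {
  public:
    explicit HitScaler(fhicl::ParameterSet const& pset)
      : scale_(pset.get<double>("scale", 1.0))  // a configuration parameter set in fcl
    {
      produces<std::vector<double>>();  // declare the product this module will add
    }

    void produce(art::Event& event) override
    {
      // read an existing product made by the (hypothetical) module labeled "makeHits"
      auto const& in = *event.getValidHandle<std::vector<double>>("makeHits");

      // process it and write the result back into the event
      auto out = std::make_unique<std::vector<double>>();
      for (double x : in) out->push_back(scale_ * x);
      event.put(std::move(out));
    }

  private:
    double scale_;
  };

  DEFINE_ART_MODULE(HitScaler)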
Packages
The art framework and the fcl control language are provided as a package inside the UPS package management system. There are many other important packages we use. This software is distributed as packages because many experiments at the lab use these utilities. You can control which packages are available to you, but most of this is organized as defaults inside of setup scripts.
Histogramming
Once you have an art file, how do you actually make plots and histograms of the data? There are many ways to do this, so it is important to consult with the people you work with, and make sure you are working in a style that is consistent with their expertise and preferences, so you can work together effectively.
In any case, we always use the root package for making and viewing histograms. There are two main ways to approach it. The first is to insert the histogram code into a module and write out a file which contains the histograms. The second method is to use a module to write out an ntuple, also called a tree. This is a summary of the data in each event, so instead of writing out the whole track product, you might just write out the momentum and the number of hits into the ntuple. The ntuple is very compact, so you can easily open it and make histograms interactively very quickly.
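For example, a short root macro along the following lines will histogram one quantity from an ntuple. The file name, tree name and branch name are made up for illustration; use whatever names your ntuple-making module actually writes.

  // plotMom.C -- minimal sketch of interactive histogramming from an ntuple with root.
  // "ntuple.root", "events" and "mom" are illustrative names, not Mu2e standards.
  void plotMom()
  {
    TFile* f = TFile::Open("ntuple.root");   // open the ntuple file
    TTree* t = (TTree*)f->Get("events");     // get the tree (ntuple)
    TH1F*  h = new TH1F("h", "Track momentum;p [MeV/c];tracks", 100, 0., 120.);
    t->Draw("mom >> h");                     // fill the histogram from the 'mom' branch
    h->Draw();                               // display it
  }

You would run a macro like this interactively with root -l plotMom.C.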
Workflows
Your workflow
What part of the offline system you will need to be familiar with will depend on what tasks you will be doing. Let's identify three nominal levels. In all cases, you will need the accounts and authentication.
- ntuple level. This is the simplest case. You will probably be given an ntuple, or a simple recipe to make an ntuple, and then you will want to analyze the data. You will need to have a good understanding of C++ and root, but not much else.
- art level. At this level you would be running art executables, so you will also need to understand fcl and products. If you will be processing larger datasets, you will need to understand farms and workflows.
- developer level. In this case, you will be writing modules, so you need to understand the art framework, products, C++, and our coding standards in some detail.
Grid resources
mu2e has access to a compute farm at Fermilab, called GPGrid. This is several thousand cores; mu2e is allocated a portion of the nodes, and we can use more if no one else needs them. Once you have used the interactive machines to build and test your code, you can submit a large job to the compute farms using the "jobsub" system. You can typically get 1000 nodes for a day before your priority goes down and you get fewer. If the farm is not crowded, which is not uncommon, you can get several times that.
mu2e also has access to compute farms at other institutions through a collaboration called the Open Science Grid (OSG). It is easy to modify your submit command to use these resources. We do not have a quota here, so we don't really know how many nodes we can get, but it is usually at least as much as we can get on GPGrid. This system is less reliable than GPGrid, so we often see unusual failure modes, or jobs restarting.
Tutorials
- Testing the ROOT display
- Testing the Geant4 based event display
- Notes on dynamic libraries
- The First Step: the art workbook
- Running G4 within art: The first examples.
- Mu2e maintained FAQs: C++ FAQ, Unix/Linux FAQ, ROOT FAQ, Geant4 Notes