==Introduction==

This page is intended for physicists who are just starting to work in the mu2e computing environment. It gives a broad overview of the major components, their terminology, and how you might use them. Following each introductory paragraph there are links to more specific tutorials and to the rest of the mu2e documentation, which is more terse and is intended as a reference once you get through the introductory material.

You probably don't have to work through this entire page, and you can stop at any point; please talk to your advisor or mentor to see what's appropriate. The material you are most likely to need comes first, followed by more in-depth tutorials for people who will spend years on mu2e and learn to do more complex work.

==Prerequisites==

In this tutorial we assume you are familiar with the following topics:
* the [[LearnAboutMu2e|mu2e detector]], its parts, what they do, and how they work
* basic bash shell commands on a linux system
* c++ code concepts and syntax
* HEP [[ComputingStart|offline concepts]]
<span style=color:red>provide links to external references</span>

==Interactive logins==

Collaborators can do interactive computing work in several places. Probably the best place to start is the collaboration's interactive machines at Fermilab. These are a set of virtual machines running SL6 (a variant of linux). The disks that the user sees are located on specialized disk server hardware, and the same disks are mounted by all the interactive machines. There are five quad-core machines named mu2egpvm01.fnal.gov through mu2egpvm05.fnal.gov. You will have an account and a home area here (the same home area is on all machines) and some disk space for data. We prefer using the bash shell for all purposes.

Collaborators can also compile and run mu2e code on their linux desktops or laptops. The code should be manageable on a Mac's unix layer, and maybe even Windows, but we have not pursued these options yet. Finally, several of the collaboration institutions have set up working areas on their home institution's linux systems. For these options, someone would need to mount the distributed code disk or copy the code distribution and supporting software tools to the local machine. It's best to discuss this with your group leaders and mu2e mentors first. Generally we recommend working on the collaboration interactive nodes unless the networking from your institution makes that prohibitive.

When ready, try the [[LoginTutorial]] to check your access, and read more about [[ComputingLogin|logging into interactive machines]], the [[Disks|disks]], the [[Shells|bash shell]], and how to get the code [[CodeDistribution|distribution]] from the code disk, or copied to a desktop, laptop, or remote institution.

==Authentication==

You log into the mu2e interactive machines with '''kerberos''' authentication. You will need a permanent ID called a kerberos "principal", which looks like "your_username@FNAL.GOV". (You will have one username for all computing purposes at Fermilab.) You will also have a password associated with your principal. You will use this principal and password to log into the various personal linux desktops located at Fermilab, or to ssh into the collaboration interactive machines from your home institution. You typically refresh your kerberos authentication every day.

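For example, a typical day might start roughly like this (a sketch only; the username and node number are placeholders):

<pre>
# get a kerberos ticket for the day (replace your_username with your principal)
kinit your_username@FNAL.GOV

# log into one of the mu2e interactive machines
ssh your_username@mu2egpvm01.fnal.gov
</pre>
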
The second identity you will need is the '''services''' principal, which looks like your_username@services.fnal.gov, or often just your_username, and also has a password (different from your kerberos password). You will need this identity to log into Fermilab email, the servicedesk web site, and some other services based at the lab. You typically only use this authentication at the point you log into the service.

The third identity you will need is a '''CILogin certificate'''. This "cert" is the basis of authentication to the mu2e documents database, the computing farms, and a few other services. You will use this cert in two ways. The first is to load it into your browser, which then gives you access to web pages and web services. The second is to use your kerberos authentication to access a copy of your certificate maintained in a remote database. You get this certificate once, and then renew it once a year.

hypernews is an archived blog and email list - for access here, you will need both a hypernews password and your services password!

Finally, the mu2e internal web pages require a collaboration username and password; please ask your mu2e mentor for these.

===next===
* [[ComputingAccounts]] create your accounts and authentication if you are joining mu2e

==Code==

===Events===

Very briefly, the experiment is driven by a short burst of millions of protons hitting the primary target every 1695 ns. This cycle is called a '''microbunch'''. After the burst interacts, outgoing muons migrate to, and come to rest in, the stopping target. After this surge of particles dies down during the first part of the microbunch, the detector observes what happens to the stopped muons during the second part of the microbunch. The data recorded during this ~900 ns is written out (if it passes the trigger) in a data structure called an '''event'''. Many events can be written in one file. Events have unique identifying numbers. Short periods of data-taking (minutes) are grouped into subruns with a unique ID number, and longer periods of stable running (a few hours) are grouped into a run with a unique ID number.

===Sim and Reco===

Since we do not have data yet, we analyze '''simulation''' or '''sim''' events, which are based on our expectation of what data will look like. We draw randomly from expected and potential physics processes and then trace the particles through the detector, and write out events. The interaction of particles in the detector materials is simulated with the geant software package. The simulated data looks like the real data will, except that it also contains the truth of what happened in the interactions.

The output of simulation would typically be data events in the '''raw''' formats that will be produced by the detector readout. These are typically ADC values indicating energy deposited, or TDC values indicating the time of an energy deposit. In the '''reconstruction''' or '''reco''' process, we run this raw data through modules that analyze it and look for patterns that can be identified as evidence of particular particles. For example, hits in the tracker are reconstructed into individual particle paths, and energy in the calorimeter crystals is clustered into showers caused by individual electrons.

Exactly how a reconstruction module does its work is called its '''algorithm'''. A lot of the work of physicists is invested in these algorithms, because they are fundamental to the quality of the experimental results.

===Coding===

The mu2e simulation and reconstruction code is written in c++. We write modules which create simulated data, or which read data out of the event, process it, and write the results back into the event. The modules plug into a framework called '''art''': the framework reads an input file, calls the modules to do the actual work, and writes an output file. The primary data format is determined by the framework, so it is called the '''art format''', and the files have the extension .art.

We use the '''git''' code management system to store and version our code. Currently, we have one main git repository which contains all our simulation and reconstruction code. You can check out this repository, or a piece of it, and build it locally. In this local area you can make changes to code and read, write, and analyze small amounts of data. We build the code with a make system called '''scons'''. The code may be built optimized ('''prof''') or non-optimized and prepared for running a debugger ('''debug''').

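A local checkout and build might look roughly like this (a sketch only; the setup script path, repository URL, and working-area name are placeholders - see the [[CodeRecipe|code]] recipe for the real commands):

<pre>
# set up the mu2e software environment (placeholder path; see the code recipe)
source /path/to/mu2e/setup.sh

# check out the main repository into a working area (placeholder URL)
git clone REPOSITORY_URL Offline
cd Offline

# build with scons, using 4 cores
scons -j 4
</pre>
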
At certain times, the code is tagged, built, and published as a stable release. These releases are available on the /cvmfs disk area. cvmfs is a sophisticated distributed disk system with layers of servers and caches, but to us it just looks like a read-only local disk, which can be mounted almost anywhere. We often run large projects using these tagged releases. cvmfs is mounted on the interactive nodes, at remote institutions, on some desktops, and on all the many farm nodes we use.

You can read more about accessing and building [[CodeRecipe|code]], [[git]], [[scons]] and [[cvmfs]].

===Executables===

Which modules are run and how they are configured is determined by a control file, written in '''fcl''' (pronounced "fickle"). This control file can change the random seeds for the simulation and the input and output file names, for example. A typical run might be to create a new simulation file. For various reasons, we often do our simulation in several stages, writing out a file after each run of the executable, or stage, and reading it in to start the next stage. A second type of job might be to run one of the simulation stages with a variation of the detector design. Another typical run might be to take a simulation file as input, test various reconstruction algorithms, and write out the reconstruction results.

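A typical interactive run looks roughly like this (a sketch; the fcl file names are placeholders for real configuration files from the release, and the options shown are the standard art command-line options):

<pre>
# run the mu2e executable with a chosen fcl configuration,
# processing only 10 events while testing
mu2e -c myTest.fcl -n 10

# the same executable can read an existing art file as input
mu2e -c myReco.fcl -s input.art -n 10
</pre>
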
===Data products===

The data in an event in a file is organized into data products. Examples of data products include straw tracker hits, tracks, and clusters in the calorimeter. The fcl is often used to decide which data products to read, which ones to make, and which ones to write out. There are also data products which contain the information of what happened during the simulation, such as the main particle list, SimParticles.

===UPS Products===

Disambiguation of "products": please note that we have both '''data products''' and '''UPS products''', which unfortunately are both referred to as "products" at times. Please be aware of the difference, which you can usually determine from the context.

The art framework and the fcl control language are provided as '''products''' inside the '''UPS''' software release management system. There are several other important UPS products we use. This software is distributed as UPS products because many experiments at the lab use these utilities. You can control which UPS products are available to you (you can recognize this in a setup command like "setup root v6_06_08"), but most of this is organized as defaults inside of a setup script.

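For example (a sketch; the product version shown is the one quoted above and may not match the current default in your release):

<pre>
# make a specific version of a UPS product available in your shell
setup root v6_06_08

# list the UPS products currently set up in this session
ups active
</pre>
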
You can read more about how [[UPS|'''UPS''']] works.

===Histogramming===

Once you have an art file, how do you actually make plots and histograms of the data? There are many ways to do this, so it is important to consult with the people you work with, and make sure you are working in a style that is consistent with their expertise and preferences, so you can work together effectively.

In any case, we always use the '''root''' UPS product for making and viewing histograms.

There are two main ways to approach it. The first is to insert the histogram code into a module and write out a file which contains the histograms. The second is to use a module to write out an '''ntuple''', also called a '''tree'''. This is a summary of the data in each event, so instead of writing out the whole track data product, you might just write out the momentum and the number of hits in the ntuple. The ntuple is very compact, so you can easily open it and make histograms interactively very quickly.

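For example, once you have an ntuple file you might inspect it interactively like this (a sketch; the file, tree, and branch names are placeholders for whatever your ntuple-making module produced):

<pre>
# open the file with the root interpreter
root -l myntuple.root

# then, at the root prompt, something like:
#   .ls                            // see what is in the file
#   mytree->Draw("p")              // histogram the momentum branch
#   mytree->Draw("p", "nhits>20")  // the same, with a cut on the number of hits
</pre>
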
Read more about ways to histogram or [[ntuples|ntuple]] data for analysis.

==Workflows==

===Designing larger jobs===

After understanding data on a small level by running interactive jobs, you may want to run on larger datasets. If a job is working interactively, it is not too hard to take that workflow and adapt it for running on large datasets on the compute farms. First, you will need to understand the '''mu2egrid''' UPS product, which is a set of scripts to help you submit jobs and organize the output. mu2egrid will call the '''jobsub''' UPS product to start your job on the farm. Your data will be copied back using the '''ifdh''' UPS product, which is a wrapper to data transfer software. The output will go to '''dCache''', which is a high-capacity and high-throughput distributed disk system. We have hundreds of terabytes of disk space here, divided into three types (a scratch area, a persistent disk area, and a tape-backed area). Once the data is written, there are procedures to check it, and optionally concatenate the files and write them to tape. We track our files in a database that is part of the '''SAM''' UPS product. You can see the files in dCache by looking under the '''/pnfs''' filesystem. Writing and reading files in dCache can have consequences, so please understand how to use dCache, and consult with an experienced user before running a job that uses this disk space.

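For example, from the interactive nodes you can browse your dCache areas like an ordinary (if slow) filesystem. A sketch, where the exact directory layout under /pnfs is an assumption - check the dCache documentation for the current conventions:

<pre>
# your scratch area in dCache (assumed layout)
ls /pnfs/mu2e/scratch/users/$USER

# the persistent area (assumed layout)
ls /pnfs/mu2e/persistent/users/$USER
</pre>
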
===Grid resources===

mu2e has access to a compute farm at Fermilab, called '''Fermigrid'''. This farm has several thousand nodes, and mu2e is allocated a portion of them (our '''dedicated''' nodes). Once you have used the interactive machines to build and test your code, you can submit a large job to the compute farms. You can typically get 1000 nodes for a day before your priority goes down and you get fewer. If the farm is not crowded, which is not uncommon, you can get several times that by running on the idle or '''opportunistic''' nodes.

mu2e also has access to compute farms at other institutions through a collaboration called the Open Science Grid (OSG). It is easy to modify your submit command to use these resources. We do not have a quota here, and we can only access opportunistic nodes, so we don't really know how many nodes we can get, but it is usually at least as much as we can get on Fermigrid. This system is less reliable than Fermigrid, so we often see unusual failure modes or jobs restarting.

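Submission itself is normally done through the mu2egrid scripts described above, but it is useful to know how to check on your jobs. A sketch, assuming the standard FIFE jobsub tools (the options shown are assumptions about the usual mu2e usage; check the jobsub documentation for the exact form):

<pre>
# list your jobs currently on the grid
jobsub_q --group=mu2e --user=$USER
</pre>
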
==Your workflow==

Hopefully you now have a good idea of the concepts and terminology of the mu2e offline. Which parts of the offline system you will need to be familiar with depends on what tasks you will be doing. Let's identify four nominal roles. In all cases, you will need to understand the accounts and authentication.
# ntuple user. This is the simplest case. You will probably be given an ntuple, or a simple recipe to make an ntuple, and then you will want to analyze its contents. You will need a good understanding of c++ and root, but not much else.
# art user. At this level you would be running art executables, so you will also need to understand modules, fcl, and data products, and probably also how to make histograms or ntuples from the art file.
# farm user. At this level you would be running art executables on the compute farms, so you will also need to understand the farms, workflows, dCache, and possibly uploading files to tape.
# developer. In this case, you will be writing modules and algorithms, so you need to understand the art framework, data products, geometry, c++, and standards in some detail, as well as the detector itself.

==Outline (scratch)==
* Intro to physics goals
** concept, new physics, current limits, timeline
** ideas of beamline, stopping, DIO, tracker, final momentum plot
* Detector overview
** overview of why the detector is laid out this way, purpose of calorimeter, gradient field, cosmic shield
** tour individual pieces, specific points to make
* Backgrounds
** flash and time window
** DIO
** RPC
** antip
** cosmics
* Computing Stage I (getting around, using ntuples)
** Authentication, interactive machines, OS, shell
** c++, how code is organized, cvmfs, UPS products, setup procedure
** intro to art files
** geometry browser
** root ntuples and plots
** documents and getting help
* Computing Stage II (understanding sim, local builds)
** intro to modules and products
** running mu2e exe, fcl
** intro to simulation, staging and mixing
** paths, generator and geometry files
** checkout and build commands
* Computing Stage III (submit grid jobs)
** dcache
** grids
** mu2eprodsys
** monitoring
* Computing Stage IV (developer)
** committing code
** releases and tags
** how to make products
** random numbers, handles, exceptions, etc

==Random Links (scratch)==
* latest meeting
* Sarah's google doc on clickable status and intro paragraphs
* Rob's 10/26/17 talk on intro to computing plan
* clickable detector
* Overview
* Build recipe
* Rob's first geant run for new users
* art workbook
* test root
* test display
* Summer 2016 SCD workshops (includes geometry tutorial)
* Summer 2016 mu2e tutorials
* unix hints
* setup root by itself
* c++
* linux
* root
* July 2016 intro talks
* Software tutorial
* Practicalities of MC
* Hits and Mixing

==Tutorials (scratch)==
* Testing the ROOT display
* Testing the Geant4 based event display
* Notes on dynamic libraries
* The First Step: the art workbook
* Running G4 within art: The first examples.
* Mu2e maintained FAQs: C++ FAQ, Unix/Linux FAQ, ROOT FAQ, Geant4 Notes