Computing Concepts
Introduction
This page is intended for physicists who are just starting to work in the mu2e computing environment. The following is a broad overview of the major components and how they work together. It contains many links into the rest of the mu2e documentation, which is terser and intended as a reference for people with enough experience to know what they are looking for.
Overview
Hardware
Collaborators can do interactive work in several places. Probably the best place to start is the interactive machines at Fermilab. These are a set of virtual machines running SL6 (a variant of Linux). A virtual machine looks to the user like one physical computer but is actually an operating system running on hardware which may be shared by other copies of the operating system. Virtual machines are easy to copy or relocate to different hardware. The disks that the user sees are located on specialized disk server hardware, and the same disks are mounted by all the virtual machines. The mu2e virtual machines are five machines named mu2egpvm01.fnal.gov through mu2egpvm05.fnal.gov. You will have a home area here (it is the same area on all machines) and some space for data. You can read more about the interactive machines here and the disks here.
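For example, because the home areas live on the shared disk servers, the same files are visible from any of the interactive machines. A hypothetical session (the directory name is a placeholder, and it assumes your account and Kerberos login are already set up as described below):

 ssh mu2egpvm01.fnal.gov "ls ~/myproject"   # list a directory in your home area from node 01
 ssh mu2egpvm02.fnal.gov "ls ~/myproject"   # the same files appear from node 02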
Collaborators can also compile and run mu2e code on their Linux desktops or laptops. You can read more about that here. Finally, several of the collaboration institutions have set up working areas at their home institutions. For these options, generally, someone would need to copy the code distribution and supporting tools to the local area. You can read more about these options here, but it is best to discuss this with your group leaders first.
mu2e has access to a compute farm at Fermilab, called GPGrid. This is several thousand cores; mu2e is allocated a portion of the nodes, and we can use more if no one else needs them. Once you have used the interactive machines to build and test your code, you can submit a large job to the compute farms using the "jobsub" system. You can typically get 1000 nodes for a day before your priority goes down and you get fewer. If the farm is not crowded, which is not uncommon, you can get several times that.
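For orientation, a grid submission typically looks something like the sketch below. The script name and option values are placeholders for illustration; see the jobsub page for the current options and recommended settings.

 jobsub_submit -G mu2e -N 1000 \
     --resource-provides=usage_model=DEDICATED,OPPORTUNISTIC \
     --expected-lifetime=8h \
     file://myjob.sh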
mu2e also has access to compute farms at other institutions through a collaboration called the Open Science Grid (OSG). It is easy to modify your submit command to use these resources. We do not have a quota here, so we don't really know how many nodes we can get, but it is usually at least as much as we can get on GPGrid. This system is less reliable than GPGrid, so we often see unusual failure modes or jobs restarting.
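As an illustration of how small the change is, the same sketch as above can be pointed at OSG resources by changing the usage model (again a hedged example; check the jobsub documentation for the current option names):

 jobsub_submit -G mu2e -N 1000 \
     --resource-provides=usage_model=OFFSITE \
     file://myjob.sh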
Authentication
You log in to the virtual machines using Kerberos authentication. You will need a permanent ID called a Kerberos "principal", which looks like "xyz@FNAL.GOV", where xyz is your username. You will have a password associated with your principal. You will use this principal and password to log into Linux desktops located at Fermilab or to ssh into the interactive machines from your home institution. Please follow the procedure on the ComputingAccounts page to set up your accounts.
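Once your accounts are set up, a typical login from your home institution looks roughly like this (a minimal sketch, assuming a standard Kerberos client configured for the FNAL.GOV realm; replace xyz with your own username):

 kinit xyz@FNAL.GOV           # obtain a Kerberos ticket using your principal and password
 ssh xyz@mu2egpvm01.fnal.gov  # log in to one of the mu2e interactive machines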
Code
mu2e simulation, reconstruction, and analysis code is available as one git repository. The code is primarily C++, built with scons, with bash scripting and some Perl and Python. Our code depends heavily on external packages such as art and ROOT. Executables, assembled from shared objects and plug-in modules, are controlled by fcl scripts. Pre-built releases are published and available world-wide on cvmfs.
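For orientation, a session with the code might look roughly like the sketch below. The setup script name, paths, and fcl file are assumptions for illustration only; see the code documentation for the exact commands for the release you are using.

 # set up the mu2e environment from cvmfs (script name and path are assumptions, check the docs)
 source /cvmfs/mu2e.opensciencegrid.org/setupmu2e-art.sh
 # build the code with scons from the top of your working area
 scons -j 4
 # run an executable, configured by an fcl script (the file name is a placeholder)
 mu2e -c Analyses/test/myAnalysis.fcl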
Tutorials
- Testing the ROOT display
- Testing the Geant4 based event display
- Notes on dynamic libraries
- The First Step: the art workbook
- Running G4 within art: The first examples.
- Mu2e maintained FAQs: C++ FAQ, Unix/Linux FAQ, ROOT FAQ, Geant4 Notes