Spack
Introduction
Spack is a software package setup and build system. It can replace the UPS, MRB (Multi-Repo Build) and Muse "package managers". Spack was developed for the supercomputer environment and is in common use there today. The driving force that inspired spack is the need to install many packages while coordinating their dependencies.
The computing division will provide their software (art, ifdhc) within the spack framework. They have requested that we adopt spack as far as possible, so that the lab can converge on one package manager, which will simplify maintenance and support. It will also prepare us to be more efficient in our use of supercomputers in the future. The lab has decided to end support for UPS and to provide software only in the spack framework, starting with the adoption of the AlmaLinux operating system, which will be fully adopted by the hard deadline of 6/30/2024.
spack is designed to manage your code all the way from the github repo, through the build, to installation in the final location. However, for a few reasons, we are adopting spack in two phases, which for historical reasons are named phase 2 and phase 3. In phase 2, we build our Offline repos locally, driven by the muse scripts, and link to the art suite libraries which are delivered in spack format. This has the primary advantage that all muse commands and functionality stay the same.
Phase 3, which is full spack adoption, we expect will come in two forms: "bare spack" for experts who are comfortable working with spack directly, and a "wrapped" version with Muse-like scripts which will hide most details for beginners and casual code users. A basic build in bare spack is already being used in the online setting.
Mu2e computing management plans to stay at Phase 2 until the following prerequisites for advancing to Phase 3 are achieved:
- Usage of spack by both CSAID and Mu2e is stable, robust and easy enough for novice users.
- There is support for building code on our interactive machines and running it in grid jobs. This is needed for most development workflows.
- All functionality currently provided by Muse is made available in the new environment.
Phase 2 Usage
The initial setup, which works for both sl7 (with UPS) and al9 (with spack), is
source /cvmfs/mu2e.opensciencegrid.org/setupmu2e-art.sh
If you have chosen to use the recommended Mu2e login scripts (see Shells), this can be abbreviated to mu2einit.
After this, muse will be in your path and will work normally. You can "muse setup" and "muse build" as usual. You can also make tarballs, submit to the grid, or access musings, like "muse setup SimJob".
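For example, a minimal Phase 2 session in a local working area might look like this (a sketch; the exact arguments depend on your checkout):
mu2einit
muse setup      # or e.g. "muse setup SimJob" to use a pre-built musing instead
muse build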
To access the data-handling tools, both SAM and the new (metacat) tools, you can
muse setup ops
which can be run with or without setting up a Muse Offline build.
We are preparing
muse setup ana
to provide access to a python analysis tool suite, like pyana.
Minimal Commands
listing packages
spack find -lv <packagename>
or with the dependencies (things it depends on)
spack find -lvd <packagename>
just print the version
spack find --format "{version}" root
what environment is active
spack env status
locate a package
spack location -i art
include path
export ART_INC=$(spack location -i art)/include
Local Offline build
Generally on al9, we can build Offline, TrkAna and other repos using muse, which drives the build via Scons and links to the art and root provided in spack format on cvmfs. For repos which do not have Scons capability and can only be built with cmake, we must use the full spack machinery. The most common situation is when a user needs to build KinKal or artdaq-core-mu2e locally with Offline, in order to develop new code in these repos together. The following will create a spack working area, check out repos locally, and build them locally using spack driving cmake. To first order, no muse command will work here.
This procedure is collected here in one block to allow cut-and-paste, and is explained a bit more below. It assumes you are in the build area, such as your app directory.
export MYSS=kk
export MYENV=dev
mu2einit
smack subspack $MYSS
cd $MYSS
smack subenv $MYENV
source ./setup-env.sh
spack env activate $MYENV
spack rm kinkal
spack add kinkal@main %gcc@13.1.0 cxxstd=20 build_type=Release
spack develop kinkal@main
spack add Offline@main +g4 %gcc@13.1.0 cxxstd=20
spack develop Offline@main
spack add production@main
spack develop production@main
spack add mu2e-trig-config@main
spack develop mu2e-trig-config@main
spack concretize -f 2>&1 | tee c_$(date +%F-%H-%M-%S).log
spack install 2>&1 | tee i_$(date +%F-%H-%M-%S).log
spack env deactivate
spack env activate $MYENV
The "develop" repos are checked out in $MYENV
. After editing code, you can simply
spack install 2>&1 | tee i_$(date +%F-%H-%M-%S).log
The code is installed in $SPACK_LOCAL
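To check where a rebuilt package actually landed, the location command from Minimal Commands also works here (Offline as an example):
ls $(spack location -i Offline)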
This build area has a "view" which gathers your bins and includes together in one directory to simplify your paths. The view is supposed to be refreshed with every install command, but sometimes it isn't. So if the build ran but not all your changes seem to be there, you can try refreshing the view.
rm -rf $SPACK_ENV/.spack-env/view $SPACK_ENV/.spack-env/._view
spack env view regenerate $MYENV
Other non-obvious errors might be helped by a cleanup (this won't delete code in your env or installed packages)
spack clean -a
or by reinstalling the package
spack uninstall Offline
spack install Offline
When returning to the area in a new process, you should do
mu2einit
cd $MYSS
source ./setup-env.sh
spack env activate $MYENV
- smack is a script which runs some common combinations of commands; it has built-in help
- subspack creates a local spack build area backed by builds on cvmfs
- subenv creates a local environment based on the current complete set of built mu2e code, equivalent to a muse envset: art, root, KinKal, BTrk and artdaq-core-mu2e
- an environment limits you to a selected set of code packages and versions
- rm, add, and develop respectively remove the package from the env, add it to the env, and check it out locally
- @main means the head of the main branch
- +g4 means build with Geant4; ~g4 would skip it, as we do for the trigger
- concretize means find a set of packages and versions which meets all demands; you only need to run it after changing the set of packages involved
- install means compile and install as needed
- to develop artdaq-core-mu2e, replace the kinkal lines with
spack rm artdaq-core-mu2e
spack add artdaq-core-mu2e@develop
spack develop artdaq-core-mu2e@develop
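After any of these add/develop changes, you can confirm what ended up in the environment using the commands from Minimal Commands, for example:
spack env status
spack find -lv kinkal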
Spack is doing a rather large number of operations behind each command, and it is trying to figure it all out for you, so sometimes it will produce strange errors. It is probably best to post these on the slack channel help-bug.
Using root with Pre Built Dictionaries
In the pre-spack era, run-time resolution of .so library dependencies in art jobs was done using LD_LIBRARY_PATH. In the spack era, run-time resolution of library dependencies in art jobs is done using RPATH, and LD_LIBRARY_PATH is no longer needed. Therefore LD_LIBRARY_PATH is not defined in the default Mu2e AL9 environment.
However, root uses LD_LIBRARY_PATH for a second purpose: at run-time it finds root dictionary files using LD_LIBRARY_PATH. If you run root in the default Mu2e spack environment, root will not be able to find dictionaries. This will cause errors when running root on TrkAna and Stntuple format root files, and on any other files that require dictionaries.
The following is a hack that will work until a better solution is developed. Before you run root, type:
export LD_LIBRARY_PATH=${CET_PLUGIN_PATH}
You should only take this step when it is needed. There is a chance that this may sometimes interfere with subsequently running art jobs in the same shell (we have not yet fully understood this issue). You can protect against that by doing the following prior to running art jobs:
unset LD_LIBRARY_PATH
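Alternatively, you can set LD_LIBRARY_PATH only for the root process itself, so it cannot leak into later art jobs; for example, with a shell function (a hypothetical convenience, not part of the Mu2e scripts):
# run root with LD_LIBRARY_PATH set only for this one invocation
rootdict() { LD_LIBRARY_PATH=${CET_PLUGIN_PATH} root "$@"; }
Then use "rootdict myfile.root" in place of "root myfile.root".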
You can read about CET_PLUGIN_PATH at SearchPaths#CET_PLUGIN_PATH.
Geant4 names
Geant4 data packages have different names in spack. From Julia:
g4able is G4ABLA
g4emlow is G4EMLOW
g4neutron is G4NDL
g4nucleonsxs is G4SAIDDATA
g4nuclide is G4ENSDFSTATE
g4photon is PhotonEvaporation
g4pii is G4PII
g4radiative is RadioactiveDecay
g4tendl is G4TENDL
g4particlexs is G4PARTICLEXS
g4incl is G4INCL
G4NEUTRONXS appears to be obsolete.
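To see which of these data packages a given Geant4 installation actually pulls in, the dependency listing from Minimal Commands can be used (a sketch; this assumes geant4 is the spack package name):
spack find -lvd geant4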
References
- lab intro talk
- spack official docs and fora
- public packages
- lab spack wiki and tutorial
- CSAID has "spack" and "fnal-spack-team-mu2e" channels (invite only)
- githubs
  - FNALssi (spack, spack tools)
  - fermitools (metacat, jobsub, POMS)
  - art framework
  - artdaq
- recipe repos
  - mu2e-spack (ots, mdh)
  - fnal_art (art, canvas, ifdhc)
  - artdaq
  - scd_recipes (metacat)
- builtin lab spack
  - cetmodules
  - cmake
  - gfal2