==Introduction==
Spack is a software package setup and build system.  It can replace [[UPS]], MRB (Multi-Repo Build) and [[Muse]] "package managers". Spack was developed for the supercomputer environment and is in common use there today. The driving force that inspired spack is the need to install many packages while coordinating their dependencies.


The computing division will provide their software (art, ifdhc) within the spack framework. They have requested that we adopt spack as far as possible, so that the lab can converge on one package manager, which will simplify maintenance and support.  It will also prepare us to be more efficient in our use of supercomputers in the future.  The lab has decided to end support for UPS and to provide software only in the spack framework starting with the adoption of the AlmaLinux operating system, which will be fully adopted by the hard deadline of 6/30/2024.


Spack is designed to manage your code all the way from the github repo, through the build, to installation in the final location. However, for a few reasons, we are adopting spack in two phases. For historical reasons, they are called phase 2 and phase 3.  In phase 2, we build our Offline repos locally, driven by the muse scripts, and link to the art suite libraries which are delivered in spack format.  This has the primary advantage that all muse commands and functionality stay the same.


Phase 3, which is full spack adoption, we expect will come in two forms. The first is "bare spack" for experts who are comfortable working with spack directly; the second is a "wrapped" version with a Muse-like script which will hide most details for beginners and casual code users. A basic build in bare spack is being used in the [[Building_Offline_with_cmake| online setting]].


Mu2e computing management plans to stay at Phase 2 until the following prerequisites for advancing to Phase 3 are achieved:
# Usage of spack by both CSAID and Mu2e is stable, robust and easy enough for novice users.
# There is support for building code on our interactive machines and running it in grid jobs.  This is needed for most development workflows.
# All functionality currently provided by Muse is made available in the new environment.


==Phase 2 Usage==
The initial setup, which works for both sl7 (with UPS) and al9 (with spack), is
  source /cvmfs/mu2e.opensciencegrid.org/setupmu2e-art.sh
If you have chosen to use the recommended Mu2e login scripts (see [[Shells]]), this can be abbreviated to <code>mu2einit</code>.
After this, muse will be in your path and will work normally. You can "muse setup" and "muse build" [[Muse| as usual]]. You can also make tarballs and submit to the grid, or access musings, like "muse setup SimJob".


To access the data-handling tools, both SAM and the new (metacat) tools, you can
 muse setup ops
which can be run with or without setting up a Muse Offline build.


We are preparing
 muse setup ana
to provide access to a python analysis tool suite, like pyana.


== Minimal Commands ==


listing packages
 spack find -lv <packagename>
or with the dependencies (things it depends on)
 spack find -lvd <packagename>
just print the version
 spack find --format "{version}" root
what environment is active
 spack env status
locate a package
  spack location -i art
include path
 export ART_INC=$(spack location -i art)/include
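The <code>spack location</code> pattern above can be wrapped in a small shell helper. This is a sketch, not part of spack; the function name <code>spack_inc</code> is made up here for illustration.

```shell
# Hypothetical helper, not part of spack: print a package's include
# directory from its install prefix, as in the ART_INC example above.
# Assumes "spack location -i <pkg>" succeeds, i.e. the package is installed.
spack_inc() {
  local pkg=$1 prefix
  prefix=$(spack location -i "$pkg") || return 1
  printf '%s/include\n' "$prefix"
}
# Example: export ART_INC=$(spack_inc art)
```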




== Local Offline build ==
Generally on al9, we can build Offline, TrkAna and other repos using muse, which drives the build via SCons and will link to art and root provided in spack format on cvmfs. For repos which do not have SCons support and can only be built with cmake, we must use the full spack machinery.  The most common situation is when a user needs to build KinKal or artdaq-core-mu2e locally with Offline, in order to develop new code in these repos together.  The following will create a spack working area, check out repos locally, and build them locally using spack driving cmake. To first order, no muse command will work here.
 
This procedure is collected here in one block to allow cut-and-paste and will be explained a bit more below. It assumes you are in the build area, such as your app directory.
 
<pre>
export MYSS=kk
export MYENV=dev
 
mu2einit
 
smack subspack $MYSS
cd $MYSS
 
smack subenv $MYENV
source ./setup-env.sh
spack env activate $MYENV
 
spack rm kinkal
spack add kinkal@main %gcc@13.1.0 cxxstd=20 build_type=Release
spack develop kinkal@main
spack add Offline@main +g4 %gcc@13.1.0 cxxstd=20
spack develop Offline@main
spack add production@main
spack develop production@main
spack add mu2e-trig-config@main
spack develop mu2e-trig-config@main
 
spack concretize -f 2>&1 | tee c_$(date +%F-%H-%M-%S).log
spack install      2>&1 | tee i_$(date +%F-%H-%M-%S).log
 
spack env deactivate
spack env activate $MYENV
 
</pre>
The "develop" repos are checked out in <code>$MYENV</code>. After editing code, you can simply
 spack install 2>&1 | tee i_$(date +%F-%H-%M-%S).log
The code is installed in <code>$SPACK_LOCAL</code>.
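The timestamped <code>tee</code> pattern used above can be factored into a small helper, so repeated concretize/install runs keep separate logs. This is a hypothetical convenience function, not part of spack or smack:

```shell
# Hypothetical helper: build the timestamped log names used above,
# so each run writes its own log, e.g. i_2024-10-12-09-30-01.log.
tslog() { printf '%s_%s.log\n' "$1" "$(date +%F-%H-%M-%S)"; }
# Example: spack install 2>&1 | tee "$(tslog i)"
```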
 
This build area has a "view" which gathers your bins and includes together in one directory to simplify your paths.  The view is supposed to be refreshed with every install command, but sometimes it isn't.  So if the build ran and it doesn't look like all your changes are there, you can try to refresh the view.
<pre>
rm -rf $SPACK_ENV/.spack-env/view $SPACK_ENV/.spack-env/._view
spack env view regenerate $MYENV
</pre>
 
Other non-obvious errors might be helped by a cleanup (this won't delete code in your env or installed packages)
  spack clean -a
or refreshing the load
 spack uninstall Offline
 spack install Offline
 
When returning to the area in a new process, you should do
<pre>
mu2einit
cd $MYSS
source ./setup-env.sh
spack env activate $MYENV
</pre>
 
* smack is a script which simply runs some common combinations of commands; it has a help option
** subspack creates a local spack build area backed by builds on cvmfs
** subenv creates a local environment based on the current complete set of built mu2e code, equivalent to a muse envset - art, root, KinKal, BTrk and artdaq-core-mu2e
* an environment limits you to a selected set of code packages and versions
* ''rm'', ''add'', and ''develop'' remove the package from the env, add it to the env, and check it out locally, respectively
* ''@main'' means the head of the main branch
* ''+g4'' means build with Geant4 (do not skip it like we do for the trigger, which would be ''~g4'')
* concretize means find a set of packages and versions which meet all demands; you only need to run it after changing the packages involved
* install means compile and install as needed
* to develop <code>artdaq-core-mu2e</code>, replace the <code>kinkal</code> lines with
 spack rm artdaq-core-mu2e
 spack add artdaq-core-mu2e@develop
 spack develop artdaq-core-mu2e@develop
 
 
Spack performs a rather large number of operations behind each command, and it tries to figure it all out for you, so sometimes it will produce strange errors. It is probably best to post on the Slack channel help-bug.
 
==Using root with Pre Built Dictionaries==
 
In the pre-spack era, run-time resolution of .so library dependencies in art jobs was done using LD_LIBRARY_PATH.  In the spack era, run-time resolution of library dependencies in art jobs is done using RPATHs, and LD_LIBRARY_PATH is no longer needed.  Therefore LD_LIBRARY_PATH is not defined in the default Mu2e AL9 environment.
 
However, root uses LD_LIBRARY_PATH for a second purpose: at run-time it finds root dictionary files using LD_LIBRARY_PATH.  If you run root in the default Mu2e spack environment, root will not be able to find dictionaries.  This will cause errors when running root on TrkAna and Stntuple format root files and on any other files that require dictionaries.
 
The following is a hack that will work until a better solution is developed.  Before you run root, type:
 
  export LD_LIBRARY_PATH=${CET_PLUGIN_PATH}
 
You should only take this step when it is needed.  There is a chance that this may sometimes interfere with subsequently running art jobs in the same shell (we have not yet fully understood this issue).  You can protect against that by doing the following prior to running art jobs:
 
  unset LD_LIBRARY_PATH
 
You can read about CET_PLUGIN_PATH at [[SearchPaths#CET_PLUGIN_PATH]].
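One way to avoid having to remember the <code>unset</code> is a wrapper function that sets LD_LIBRARY_PATH only for the one root invocation, inside a subshell, so the parent shell stays clean for art jobs. This is a sketch; <code>rootdict</code> is a made-up name, not an official Mu2e script:

```shell
# Hypothetical wrapper, not an official Mu2e script: run root with
# LD_LIBRARY_PATH set from CET_PLUGIN_PATH for this invocation only.
# The subshell keeps the variable out of the parent shell, so art jobs
# run afterwards are unaffected.
rootdict() {
  ( export LD_LIBRARY_PATH="${CET_PLUGIN_PATH}"; root "$@" )
}
# Example: rootdict myTrkAnaFile.root
```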
 
 
 
===Geant4 names===
Geant4 data packages have different names in spack. From Julia:
<pre>
g4able is G4ABLA
g4emlow is G4EMLOW
g4neutron is G4NDL
g4nucleonsxs is G4SAIDDATA
g4nuclide is G4ENSDFSTATE
g4photon is PhotonEvaporation
g4pii is G4PII
g4radiative is RadioactiveDecay
g4tendl is G4TENDL
g4particlexs is G4PARTICLEXS
g4incl is G4INCL
 
G4NEUTRONXS appears to be obsolete.
</pre>
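When translating old UPS-based setup scripts, the table above can be expressed as a bash associative array. This is only a convenience sketch; the array name <code>g4_spack_name</code> is made up for illustration:

```shell
# The UPS-to-spack Geant4 data package name mapping from the table above,
# as a bash associative array (the array name is made up for illustration).
declare -A g4_spack_name=(
  [g4able]=G4ABLA           [g4emlow]=G4EMLOW
  [g4neutron]=G4NDL         [g4nucleonsxs]=G4SAIDDATA
  [g4nuclide]=G4ENSDFSTATE  [g4photon]=PhotonEvaporation
  [g4pii]=G4PII             [g4radiative]=RadioactiveDecay
  [g4tendl]=G4TENDL         [g4particlexs]=G4PARTICLEXS
  [g4incl]=G4INCL
)
# Example: echo "${g4_spack_name[g4emlow]}"   # G4EMLOW
```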


==References==
*[https://docs.google.com/presentation/d/16CxC40Hs0V0m7pwTQaSoUdo-gqy5UqojNZY824AiCBY/edit#slide=id.g23f597c85bd_0_0 lab intro talk]
*[https://spack.readthedocs.io/en/latest/index.html spack official docs] [https://spack.io/ fora]
*[https://packages.spack.io/ public packages]
*[https://fifewiki.fnal.gov/wiki/Spack lab spack wiki and tutorial]
* CSAID has "spack" and "fnal-spack-team-mu2e" channels (invite only)
* githubs
**[https://github.com/FNALssi FNALssi] (spack, spack tools)
**[https://github.com/fermitools fermitools] (metacat, jobsub, POMS)
**[https://github.com/art-framework-suite/art.git art framework]
**[https://github.com/art-daq/ artdaq]
* recipe repos
** [https://github.com/Mu2e/mu2e-spack mu2e-spack] (ots,mdh)
** [https://github.com/FNALssi/fnal_art fnal_art] (art,canvas,ifdhc)
** [https://github.com/art-daq/artdaq-spack artdaq]
** [https://github.com/marcmengel/scd_recipes/tree/master/packages scd_recipes] (metacat)
** builtin [https://github.com/FNALssi/spack/tree/fnal-develop/var/spack/repos/builtin lab] [https://github.com/spack/spack/tree/develop/var/spack/repos/builtin spack]
*[https://fnalssi.github.io/cetmodules/ cetmodules]
*[https://cmake.org/cmake/help/latest/ cmake]
*[https://grid-deployment.web.cern.ch/grid-deployment/dms/dmc/docs/gfal2-python-1.9.0/html/index.html gfal2]




[[Category:Computing]]
[[Category:Code]]
[[Category:CodeManagement]]

Latest revision as of 22:45, 12 October 2024
