
Running Jobs on Theta at ALCF

Login

Once you have an allocation on Theta, or are using an existing one, the Onboarding Guide answers most questions about getting started.

To log in to Theta from a terminal:

ssh <username>@theta.alcf.anl.gov

At the prompt, enter your password: your 4-digit PIN (given to you by ALCF when you receive an account) followed immediately by the 8-digit one-time cryptocard password, with no space between the two. You are then placed in your home directory, /home/<username>, on a login node running the bash shell. Login nodes run the full SUSE Enterprise Linux-based CLE OS. You can change your login shell on your account web page.

Filesystems

There are two filesystems on Theta: the GPFS filesystem, which houses the /home/<username> directories in /gpfs/mira-home, and the Lustre filesystem, which houses the /project/<projectname> directories in /lus/theta-fs0/projects. The /home directories are backed up and have a default quota of 50 GiB. The /project directories are NOT backed up and have a default quota of 1 TiB. The /project directory is visible to all members of the project, so common code and files should be placed there.
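For orientation, both areas can be reached directly by path from a login node. The sketch below only lists the directories (<username> and <projectname> are placeholders); it does not create or modify anything:

ls -d /gpfs/mira-home/<username>              # GPFS home directory (backed up, 50 GiB default)
ls -d /lus/theta-fs0/projects/<projectname>   # Lustre project directory (not backed up, 1 TiB default)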

Environment

Your environment is controlled via 'modules'. There is a default set of modules set up for all users. Run

 module list 

to see what is loaded at any given time. For the work done on Theta to date (as of May 2019), users have not needed to modify their environment. The output of the 'module list' command for a default environment is

Currently Loaded Modulefiles:
  1) modules/3.2.11.1
  2) intel/18.0.0.128
  3) craype-network-aries
  4) craype/2.5.15
  5) cray-libsci/18.07.1
  6) udreg/2.3.2-6.0.7.1_5.13__g5196236.ari
  7) ugni/6.0.14.0-6.0.7.1_3.13__gea11d3d.ari
  8) pmi/5.0.14
  9) dmapp/7.1.1-6.0.7.1_5.45__g5a674e0.ari
 10) gni-headers/5.0.12.0-6.0.7.1_3.11__g3b1768f.ari
 11) xpmem/2.2.15-6.0.7.1_5.11__g7549d06.ari
 12) job/2.2.3-6.0.7.1_5.43__g6c4e934.ari
 13) dvs/2.7_2.2.118-6.0.7.1_10.1__g58b37a2
 14) alps/6.6.43-6.0.7.1_5.45__ga796da32.ari
 15) rca/2.2.18-6.0.7.1_5.47__g2aa4f39.ari
 16) atp/2.1.3
 17) perftools-base/7.0.4
 18) PrgEnv-intel/6.0.4
 19) craype-mic-knl
 20) cray-mpich/7.7.3
 21) nompirun/nompirun
 22) darshan/3.1.5
 23) trackdeps
 24) xalt
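Should you ever need to adjust this environment, the standard module commands apply. The sketch below is illustrative only; any module name not appearing in the listing above (such as PrgEnv-gnu) is an assumption about what is installed:

module avail                          # list the modules available on the system
module load <modulename>              # add a module to your environment
module swap PrgEnv-intel PrgEnv-gnu   # switch programming environments (PrgEnv-gnu assumed available)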

Containers

The easiest way to run the Mu2e Offline code on Theta is to run it in a container. Docker is a common container platform, but because of security issues, ALCF does not allow users to run Docker containers on their systems. Singularity is another container platform that does not share Docker's security issues and can be run on Theta. Since Singularity can build containers from Docker images, the Mu2e Offline code can be packaged as a Docker image and converted to a Singularity container for use on Theta.

We built a Docker container of the Offline code and put it on Docker Hub. To pull the image to Theta and convert it into a Singularity container, run the command

singularity pull docker://username/image_name:image_version

You will then have a container named 'image_name-image_version.simg' in the current directory.

For example, for the March 2019 jobs on Theta we used 'singularity pull docker://goodenou/mu2emt:v7_2_0-7.7.6' to create a container called mu2emt-v7_2_0-7.7.6.simg. We placed the container in /projects/mu2e_CRY so that all project members can access it. For more information on using Singularity containers on Theta, see the ALCF tutorial.
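Putting this together, the sequence for that container looked roughly like the sketch below; only the pull command itself is quoted from the text above, and the working directory is assumed:

cd /projects/mu2e_CRY                                    # project directory, so all members can use the image
singularity pull docker://goodenou/mu2emt:v7_2_0-7.7.6   # produces mu2emt-v7_2_0-7.7.6.simg here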

Running Jobs

The ALCF has a detailed webpage on running jobs on Theta. Theta uses the batch scheduler Cobalt. Jobs are run using the 'aprun' command.
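For reference, a minimal aprun invocation looks like the sketch below; the rank and thread counts and the executable name are placeholders, not values taken from the Mu2e runs:

# -n total ranks, -N ranks per node, -d threads (depth) per rank, -j hardware threads per core
aprun -n 64 -N 64 -d 1 -j 1 ./my_executable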


As a first test of any code, it is good practice to run an interactive job. To get

  • one node (-n 1)
  • for 15 minutes (-t 15)
  • for interactive use (-I)
  • charged to projectname (-A <projectname>)

run the following:

qsub -A <projectname> -t 15 -q debug-cache-quad -n 1 -I

This will put you on a service node from which you can launch your interactive job.

The 'debug' queue is a good place to test your code, since there is no minimum requirement on the number of nodes that you can request. The maximum number of nodes you can request in the debug queue is 16, in either cache-quad or flat-quad mode. The maximum job time in the debug queue is 1 hour, and a user may only have one job running at a time. For more information on the available queues, see the Job Scheduling Policy page.
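Cobalt also provides the usual commands for monitoring and removing jobs; a minimal sketch (the job ID is a placeholder):

qstat          # show the state of jobs in the queue
qdel <jobid>   # remove one of your jobs from the queue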


Cache and Flat are the two memory modes: in Cache mode the high-bandwidth MCDRAM acts as a cache, while in Flat mode it is exposed as regular addressable memory. Quad is the clustering mode. 'cache-quad' is the default configuration if none is specified.
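For example, to test in flat memory mode instead, the interactive request above would simply target the flat-quad debug queue (a sketch based on the queue names mentioned above):

qsub -A <projectname> -t 15 -q debug-flat-quad -n 1 -I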

Once you have requested an interactive session, you may have to wait. Usually the wait is no more than a few minutes. The output from the batch system looks something like this during the process:

goodenou@thetalogin6:~/Mu2eMT> qsub -A Comp_Perf_Workshop -t 15 -q debug-cache-quad -n 1 -I
Connecting to thetamom2 for interactive qsub...
Job routed to queue "debug-cache-quad".
Memory mode set to cache quad for queue debug-cache-quad
Wait for job 336989 to start...
Opening interactive session to 3830
goodenou@thetamom2:/gpfs/mira-home/goodenou>

The service nodes have the names 'thetamom#', so there is no mistaking when you are on one. Note that you are placed back in your home directory on the service node, regardless of where you made the job request from.

A user can interact with a Singularity container in many ways. To run a shell from within your container, type

singularity shell <containername>

On Theta, the container resides in the project directory so that everyone can access it. The command and output look like this:

goodenou@thetamom2:/gpfs/mira-home/goodenou/Mu2eMT> singularity shell /projects/mu2e_CRY/mu2emt-v7_2_0-7.7.6.simg
Singularity: Invoking an interactive shell within container...

Singularity mu2emt-v7_2_0-7.7.6.simg:~>

An 'ls' will show the contents of your $HOME directory, which is always mounted in the container. Other directories are also mounted in the container by default (see the bind-mount example after the listing below). An 'ls /' shows the contents of the container's top-level directory. The mu2emt-v7_2_0-7.7.6.simg container was built with the following directory structure:

Singularity mu2emt-v7_2_0-7.7.6.simg:~> ls /
anaconda-post.log  DataFiles	etc	      home   media     Offline	products  sbin		    srv  usr
artexternals	   dev		etc_bashrc    lib    mnt       opt	root	  setupmu2e-art.sh  sys  var
bin		   environment	graphicslibs  lib64  mu2egrid  proc	run	  singularity	    tmp
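Beyond the default mounts, Singularity's -B (--bind) option can make an additional host directory visible inside the container; the source and destination paths below are purely illustrative:

singularity shell -B /lus/theta-fs0/projects/mu2e_CRY:/mu2e_data /projects/mu2e_CRY/mu2emt-v7_2_0-7.7.6.simg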

To run the mu2e executable over g4test_03.fcl in this container, we need to execute the following commands. The output is not shown here.

Singularity mu2emt-v7_2_0-7.7.6.simg:~> source /setupmu2e-art.sh
Singularity mu2emt-v7_2_0-7.7.6.simg:~> source /Offline/setup.sh
Singularity mu2emt-v7_2_0-7.7.6.simg:~> mu2e -c /Offline/Mu2eG4/fcl/g4test_03.fcl -n 100
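The same three steps can also be run non-interactively with 'singularity exec', which is the form you would use inside a batch script. This is a sketch using the same paths as above, not a command taken from the original workflow:

singularity exec /projects/mu2e_CRY/mu2emt-v7_2_0-7.7.6.simg bash -c "source /setupmu2e-art.sh && source /Offline/setup.sh && mu2e -c /Offline/Mu2eG4/fcl/g4test_03.fcl -n 100"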