ALCF: Theta
Running Jobs on Theta at ALCF
Login
Once you have an allocation on Theta, or if you are using an existing allocation, you can reference the Onboarding Guide for answers to most of your questions about how to get started.
To login to Theta from a terminal:
ssh <username>@theta.alcf.anl.gov
At the prompt, enter your password: your 4-digit PIN (given to you by ALCF when you get an account) followed immediately by the one-time 8-digit CRYPTOCard password, with no spaces between the two. You are then placed in your home directory, /home/<username>, on a login node running the bash shell. Login nodes run the full Cray Linux Environment (CLE) OS, which is based on SUSE Linux Enterprise. You can change your login shell on your ALCF account web page.
Filesystems
There are two filesystems on Theta: the GPFS filesystem, which houses the /home/<username> directories (in /gpfs/mira-home), and the Lustre filesystem, which houses the /project/<projectname> directories (in /lus/theta-fs0/projects). The /home directories are backed up and have a default quota of 50 GiB. The /project directories are NOT backed up and have a default quota of 1 TiB. A /project directory is viewable by all members of the project, so common code and files should be placed there.
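For orientation, the project space also appears under the shorter /projects path used elsewhere on this page (for example /projects/mu2e_CRY); a minimal sketch, with <username> and <projectname> as placeholders:
ls /gpfs/mira-home/<username>             # your home directory (GPFS, backed up)
ls /lus/theta-fs0/projects/<projectname>  # your project directory (Lustre, not backed up)
ls /projects/<projectname>                # shorter path to the same project space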
Environment
Your environment is controlled via 'modules'. There is a default set of modules set up for all users. Run
module list
to see what is loaded at any given time. For the work done on Theta to date (as of May 2019), users have not needed to modify their environment. At that time, the output of the 'module list' command for a default environment was
Currently Loaded Modulefiles:
  1) modules/3.2.11.1
  2) intel/18.0.0.128
  3) craype-network-aries
  4) craype/2.5.15
  5) cray-libsci/18.07.1
  6) udreg/2.3.2-6.0.7.1_5.13__g5196236.ari
  7) ugni/6.0.14.0-6.0.7.1_3.13__gea11d3d.ari
  8) pmi/5.0.14
  9) dmapp/7.1.1-6.0.7.1_5.45__g5a674e0.ari
 10) gni-headers/5.0.12.0-6.0.7.1_3.11__g3b1768f.ari
 11) xpmem/2.2.15-6.0.7.1_5.11__g7549d06.ari
 12) job/2.2.3-6.0.7.1_5.43__g6c4e934.ari
 13) dvs/2.7_2.2.118-6.0.7.1_10.1__g58b37a2
 14) alps/6.6.43-6.0.7.1_5.45__ga796da32.ari
 15) rca/2.2.18-6.0.7.1_5.47__g2aa4f39.ari
 16) atp/2.1.3
 17) perftools-base/7.0.4
 18) PrgEnv-intel/6.0.4
 19) craype-mic-knl
 20) cray-mpich/7.7.3
 21) nompirun/nompirun
 22) darshan/3.1.5
 23) trackdeps
 24) xalt
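If you do need to adjust the environment, the standard module commands apply; the lines below are illustrative only (the module names are examples, not something Mu2e work has required):
module avail                         # list all available modules
module load <modulename>             # add a module to your environment
module unload <modulename>           # remove a module
module swap PrgEnv-intel PrgEnv-gnu  # example: switch programming environments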
Containers
The easiest way to run the Mu2e Offline code on Theta is to run it in a container. Docker is a common container platform, but because of security issues ALCF does not allow users to run Docker containers on its systems. Singularity is another container platform that does not have the same security issues and can be run on Theta. Because Singularity can build containers from Docker images, the Mu2e Offline code can be packaged as a Docker image and converted to a Singularity container for use on Theta.
We built a Docker container of the Offline code and put it on Docker Hub. To pull a container to Theta and turn it into a Singularity container run the command
singularity pull docker://username/image_name:image_version
You will then have a container named 'image_name-image_version.simg' in the current directory.
For example, for the March 2019 jobs on Theta we used 'singularity pull docker://goodenou/mu2emt:v7_2_0-7.7.6' to create a container called mu2emt-v7_2_0-7.7.6.simg. We placed the container in /projects/mu2e_CRY so that all project members can access it. For more information on using Singularity containers on Theta, see the ALCF tutorial.
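As a quick sanity check of a freshly pulled container, you can repeat the 'ls /' shown later on this page without opening an interactive shell (assuming the singularity command is available in your login-node environment):
singularity exec /projects/mu2e_CRY/mu2emt-v7_2_0-7.7.6.simg ls /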
Running Jobs
The ALCF has a detailed webpage on running jobs on Theta. Theta uses the batch scheduler Cobalt. Jobs are run using the 'aprun' command.
Interactive Jobs
As a first test of any code, it is good practice to run an interactive job. To get
- one node (-n 1)
- for 15 minutes (-t 15)
- for interactive use (-I)
- charged to projectname (-A <projectname>)
run the following:
qsub -A <projectname> -t 15 -q debug-cache-quad -n 1 -I
This will put you on a service node from which you can launch your interactive job.
The 'debug' queue is a good place to test your code, since there is no minimum requirement on the number of nodes that you can request. The maximum number of nodes you can request in the debug queue is 16, in either cache-quad or flat-quad mode. The maximum job time in the debug queue is 1 hour, and a user may only have one job running at a time. For more information on the available queues, see the Job Scheduling Policy page.
'cache' and 'flat' refer to the memory mode: in cache mode the high-bandwidth MCDRAM acts as a cache, while in flat mode it is used as regular addressable memory. 'quad' is the clustering mode. 'cache-quad' is the default configuration if none is specified.
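For example, a debug-queue request at the queue's limits (16 nodes for one hour) with the MCDRAM in flat mode would look something like the following; the node count and time are illustrative and should be checked against the current scheduling policy:
qsub -A <projectname> -t 60 -q debug-flat-quad -n 16 -I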
Once you have requested an interactive session, you may have to wait. Usually the wait is no more than a few minutes. The output from the batch system looks something like this during the process:
goodenou@thetalogin6:~/Mu2eMT> qsub -A Comp_Perf_Workshop -t 15 -q debug-cache-quad -n 1 -I
Connecting to thetamom2 for interactive qsub...
Job routed to queue "debug-cache-quad".
Memory mode set to cache quad for queue debug-cache-quad
Wait for job 336989 to start...
Opening interactive session to 3830
goodenou@thetamom2:/gpfs/mira-home/goodenou>
The service nodes have the names 'thetamom#', so there is no mistaking when you are on one. Note that you are placed back in your home directory on the service node, regardless of where you made the job request from.
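At any point you can check the state of your jobs from a login node with Cobalt's qstat; a minimal sketch:
qstat -u <username>   # list your queued and running jobs
qstat -fl <jobid>     # full details for a single job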
Interacting with the Singularity Container via a Shell
A user can interact with a Singularity container in many ways. To run a shell from within your container, type
aprun -n 1 singularity shell <containername>
The '-n 1' argument has a different meaning than the '-n' argument of qsub. Here it indicates that we want to run one instance of our job; it has nothing to do with the number of nodes we run on. See the aprun man page for more information on the various arguments to aprun.
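For illustration, '-n' sets the total number of instances and '-N' the number of instances per node; the counts below are arbitrary examples, not Mu2e defaults:
aprun -n 64 -N 64 <command>    # 64 instances, all on one node
aprun -n 128 -N 64 <command>   # 128 instances across two nodes, 64 per node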
On Theta, the container resides in the project directory so that everyone can access it. The command and output look like this:
goodenou@thetamom2:/gpfs/mira-home/goodenou/Mu2eMT> aprun -n 1 singularity shell /projects/mu2e_CRY/mu2emt-v7_2_0-7.7.6.simg
Singularity: Invoking an interactive shell within container...
An 'ls' will show you the contents of your $HOME directory, which is always mounted into the container. Other directories are also mounted into the container by default. An 'ls /' will show the contents of the top-level directory of the container. The mu2emt-v7_2_0-7.7.6.simg container was built with the following directory structure:
ls /
anaconda-post.log  DataFiles    etc           home   media     Offline  products  sbin              srv  usr
artexternals       dev          etc_bashrc    lib    mnt       opt      root      setupmu2e-art.sh  sys  var
bin                environment  graphicslibs  lib64  mu2egrid  proc     run       singularity       tmp
To run the mu2e executable over g4test_03.fcl in this container, we need to execute the following commands. The output is not shown here.
source /setupmu2e-art.sh
source /Offline/setup.sh
mu2e -c /Offline/Mu2eG4/fcl/g4test_03.fcl -n 100
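The same three steps can also be run non-interactively in one line using the 'exec' sub-command described in the next section; a minimal sketch, assuming the container sits in the project directory as above:
aprun -n 1 singularity exec /projects/mu2e_CRY/mu2emt-v7_2_0-7.7.6.simg \
    bash -c "source /setupmu2e-art.sh && source /Offline/setup.sh && mu2e -c /Offline/Mu2eG4/fcl/g4test_03.fcl -n 100"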
Interacting with the Singularity Container via a Script
The next level of complexity is to run a shell inside the container from a script. The Singularity sub-command 'exec' allows you to spawn an arbitrary command within your container image as if it were running directly on the host system. The command
goodenou@thetamom2:/gpfs/mira-home/goodenou> singularity exec -B /projects/mu2e_CRY:/mnt /projects/mu2e_CRY/mu2emt-v7_2_0-7.7.6.simg bash -c "~/Mu2eMT/run_script.sh"
runs the script ~/Mu2eMT/run_script.sh in a shell inside the container.
The following runscript is a good example of a basic script that can be used to run a single job on Theta.
#!/bin/bash

JOB_DIR=~/Mu2eMT

### Setup the environment in the container
echo "setting up products and mu2e"
source /setupmu2e-art.sh
echo "setting up Mu2e Offline in directory /Offline"
source /Offline/setup.sh

echo "JOBDIR = $JOB_DIR"

### ALPS_APP_PE is an environment variable set up by the Cray Linux Environment utility.
### It is different for every instance of the mu2e executable
### that you start within a given job. The first instance is always '0', and the
### variable is incremented by 1 for each subsequent instance of the executable.
### When you run multiple processes within a job, each will be given a different
### value of ALPS_APP_PE, so the different processes can be identified.
echo "ALPS_APP_PE = $ALPS_APP_PE"
PROCESS_NUMBER=$(( ${ALPS_APP_PE} + 1))
echo "PROCESS_NUMBER=$PROCESS_NUMBER"

echo "cd $JOB_DIR" > $JOB_DIR/runfile_${COBALT_JOBID}_${PROCESS_NUMBER}.sh

### Make the output directory in /mnt inside the container.
### We mounted this to the /project directory through
### '-B /projects/mu2e_CRY:/mnt' in the call to singularity.
mkdir -p /mnt/000/00

echo "printenv > OUTFILE_${COBALT_JOBID}_${PROCESS_NUMBER}" >> $JOB_DIR/runfile_${COBALT_JOBID}_${PROCESS_NUMBER}.sh

echo
echo "mu2e -c FCL_FILES/cnf.goodenou.my-test-s1.v0.000001_00000000_64_1000_${PROCESS_NUMBER}.fcl -n 10000 > OUTFILE_${COBALT_JOBID}_${PROCESS_NUMBER}" \
    >> $JOB_DIR/runfile_${COBALT_JOBID}_${PROCESS_NUMBER}.sh

source $JOB_DIR/runfile_${COBALT_JOBID}_${PROCESS_NUMBER}.sh
This script does four things:
- sets up the mu2e environment within the container
- defines the PROCESS_NUMBER environment variable, which we need to label the output
- creates the output directory 000/00 in the project directory /projects/mu2e_CRY. (The project directory was bind-mounted to the container in the '-B' argument in the call to singularity shown above. You need to define the location of your output path in your FHiCL file as well.)
- creates and sources another script called runfile_${COBALT_JOBID}_${PROCESS_NUMBER}.sh containing the following commands (an example of the generated file is sketched below)
- cd $JOB_DIR
- printenv > OUTFILE_${COBALT_JOBID}_${PROCESS_NUMBER}
- mu2e -c FCL_FILES/cnf.goodenou.my-test-s1.v0.000001_00000000_64_1000_${PROCESS_NUMBER}.fcl -n 10000 > OUTFILE_${COBALT_JOBID}_${PROCESS_NUMBER}
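To make this concrete, here is roughly what the generated runfile would contain for a hypothetical job ID of 336989 and process number 1 (the variables are expanded when the echo commands write the file, and <username> is a placeholder). Note that both redirects point at the same file, so the mu2e output replaces the printenv output:
cd /gpfs/mira-home/<username>/Mu2eMT
printenv > OUTFILE_336989_1
mu2e -c FCL_FILES/cnf.goodenou.my-test-s1.v0.000001_00000000_64_1000_1.fcl -n 10000 > OUTFILE_336989_1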
Batch Jobs
For production work comprising tens or hundreds of jobs, interactive sessions are inefficient. You must be able to run batch jobs.
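As a starting point, a batch job is simply a script submitted to Cobalt without the '-I' flag; the script runs on a service node and uses aprun to launch the container on the compute nodes. The sketch below reuses the container, bind mount, and runscript from the interactive example; the file name, queue, node count, and wall time are placeholders to be checked against the current job scheduling policy:
#!/bin/bash
### submit_mu2e.sh - minimal Cobalt batch sketch (illustrative values)
#COBALT -A <projectname>
#COBALT -q debug-cache-quad
#COBALT -n 2
#COBALT -t 30

### One mu2e instance per node on 2 nodes, each running the runscript
### inside the Singularity container, as in the interactive example.
aprun -n 2 -N 1 singularity exec -B /projects/mu2e_CRY:/mnt \
    /projects/mu2e_CRY/mu2emt-v7_2_0-7.7.6.simg \
    bash -c "~/Mu2eMT/run_script.sh"
Make the script executable and submit it with 'qsub --mode script ./submit_mu2e.sh'.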