JobPlan
Introduction
When running large jobs on the grid, you must take into consideration all the physical limits on resources.
The first question is always: is there a standard dataset you can use instead of starting from scratch? Some are listed here, and you should also consult a mentor in the area of your work, who can answer this question definitively.
Total CPU resources
Currently (2017) a user may expect to get about 500-2000 slots (with spikes to 10K) on GPGrid and 2000-5000 slots (with spikes to 15K) on OSG. While the resources on OSG are substantial, there is additional operational friction, such as greater unpredictability and higher rates of jobs failing and restarting.
You can run a test job and scale up the measured time to see if the total is reasonable given the CPU you can get in a given time frame. The interactive nodes are, on average, a little faster than grid nodes.
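For illustration, a back-of-envelope estimate (all numbers here are hypothetical):

 # test job: 100 events in 30 CPU-minutes => 0.3 min/event
 # full sample of 1,000,000 events => 300,000 CPU-minutes = 5,000 CPU-hours
 # at a sustained ~1,000 slots on GPGrid => roughly 5 hours of wall time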
Job time
General considerations of per-job overhead and reliability suggest jobs longer than 15 minutes but shorter than about 8 hours.
GPGrid has a maximum time limit of about 4 days. If you are running on the OSG, each site has its own limits, and if your job is short enough, it can run on more sites.
File Sizes
Worker nodes at Fermilab have 30GB of disk space per core. OSG sites have various defaults and limits, and the less you request, the more sites you can access.
Upload size
For those output files that are intended for tape storage, size matters. It is inefficient to store small files on tape because of per-file costs. Good file sizes range from a few hundred MB to a few GB. If jobs are too slow to produce sufficiently large files, one should consider concatenating outputs before writing them to tape. There are no technical limits on how small files can be, only operational concerns.
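If you do concatenate, a minimal sketch using art's command line (Concatenate.fcl is a hypothetical configuration that simply copies events to its output module; -S reads a text file listing the input art files, and -o overrides the output file name):

 ls /path/to/stage/outputs/*.art > filelist.txt
 mu2e -c Concatenate.fcl -S filelist.txt -o sim.concat.art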
We do not recommend writing files larger than 5GB to tape, because of the increased operational risks, such as a single corrupt file affecting more data, or disk space and job management complications, when so much data is grouped in one object. The technical limits are on the scale of a TB, so a few files over 5GB are OK.
Memory
Most mu2e art jobs use around 2 GB, so most jobs can be submitted with a request for 2 GB of memory. On GPGrid, the default is 16GB. On the OSG, sites have various defaults and limits, and the less you request, the more sites you can access.
Note that a few test events may not use more than 2GB, but in the full set of jobs some rare events may use much more, for example due to physics that leads to a very long particle list.
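These time, disk, and memory requests are typically expressed at submission time. A minimal sketch, assuming the FIFE jobsub_submit client and a job script of your own (the flag values are illustrative, not recommendations):

 jobsub_submit -G mu2e \
   --expected-lifetime=8h \
   --disk=10GB \
   --memory=2000MB \
   --resource-provides=usage_model=DEDICATED,OPPORTUNISTIC \
   file:///path/to/job.sh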
Testing
Wallclock time and the output size scale linearly in the number of events for most Mu2e jobs. One can estimate the slope and the intercept by running jobs of increasing length (the -n option to mu2e). The /usr/bin/time utility (not just "time"!) is useful to measure the memory consumption:
 /usr/bin/time mu2e -c test.fcl -n 10
 ...
 Art has completed and will exit with status 0.
 28.07user 2.41system 0:43.59elapsed 69%CPU (0avgtext+0avgdata 3836928maxresident)k
 3003072inputs+112outputs (9463major+210992minor)pagefaults 0swaps
From the above output, we conclude that the test job used 3836928 / 4096 = 937 MB of memory. (SL6 ships with a buggy version of GNU time; this is why we need to divide the reported size by 4096 instead of 1024 for the purported "k" units.)
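To collect the points for such a linear estimate, a simple scan works (a sketch; test.fcl stands in for your configuration, and each run's timing lands in its log):

 for n in 10 100 1000; do
   /usr/bin/time mu2e -c test.fcl -n $n > job_$n.log 2>&1
 done
 # then fit elapsed time and output size vs. n for slope and intercept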
One caveat is that estimates can be affected by large rare events, which may not be seen in short jobs.
Following stages
If the current job writes out framework files for use by subsequent job stages, the above criteria should be applied to the whole processing chain. It is important to remember that later stages can easily read multiple input files per job, but cannot "split" an existing art file. This argues for smaller output files from the first stage, or a milder concatenation factor. For example, the configuration to produce standard g4s4 conversion electron datasets cnf.mu2e.....fcl runs only 1000 events per job, resulting in short (minutes) jobs that produce small (few MB) output files. However, digi+tracking jobs with background mixing would be too slow for significantly larger g4s4 inputs.
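As a hedged illustration of how the stage sizes interact (the numbers are hypothetical): if each g4s4 job writes a ~5 MB output file, then concatenating ~100 such files gives ~500 MB inputs, a good size for tape, while each downstream digi+tracking job still sees a manageable number of events per input.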
Prestaging
Before submitting any jobs to the grid, make sure that tape-based input datasets, if any, are pre-staged to disk.
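A minimal sketch using the SAM client, assuming the inputs form a SAM dataset definition (the name below is a placeholder):

 samweb prestage-dataset --defname=sim.mu2e.my-input-dataset.art --parallel=5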