Prestage


Introduction

In the process of uploading files to tape, they are copied to a tape-backed dCache disk area. From there they migrate automatically to tape, and after that the least recently used copies are deleted from the dCache disk when space is needed.

Files that exist only on tape are copied back from tape to the dCache disk if a user attempts to access them through their /pnfs filespec. It can take a minute or more to mount a tape and seek to a random file, and backlogs at times of high demand on the tape drives can cause hours of wait time.

Prestaging is the process of making sure that all the files in a dataset have been copied off tape and written back to disk in dCache so they are ready to be used in a grid job. All files on tape also have a SAM record and are part of a SAM dataset. We usually prestage all the files in a dataset, but any subset of files can be prestaged by operating on a SAM dataset definition.
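
If you just want to check one file by hand, dCache can report a file's locality through a special "dot command" path. This is a minimal sketch; the /pnfs directory and file name below are hypothetical placeholders for a real tape-backed file:

# prints NEARLINE (tape only), ONLINE, or ONLINE_AND_NEARLINE (tape and disk)
cat "/pnfs/mu2e/tape/some/dir/.(get)(file.art)(locality)"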

When to prestage

First, when in doubt, prestage. It is harmless, except for the delay, and you can be confident that tape response will not be a problem.

Next, you can check whether the files are already on disk by running the SAM utility script samOnDisk:

setup dhtools
samOnDisk <DATASET>

on a dataset name. This script selects files at random and checks whether each is on disk. After a few minutes it should become clear what fraction are on disk. If it is nearly 100%, you don't need to prestage.

As the script loops over random files it updates the running totals. Here is a file that is both on tape (NEARLINE) and on disk (ONLINE):

ONLINE_AND_NEARLINE
  0
 120/ 159 are on disk, 75.5 %

You can see that 120 of 159 files so far were found on disk. Here is one that is not on disk:

dc_stage fail : File not cached
System error: Resource temporarily unavailable
NEARLINE
  255
 120/ 160 are on disk, 75.0 %

so it is only on tape (NEARLINE). You can see that the total on disk is still 120, but the total checked is now 160, so the fraction went down. Since the file selection is random, the fraction should settle down to a usefully accurate answer after 100 files or so.

You do not need to prestage a dataset if it has fewer than a few hundred files; in that case the system should respond quickly enough that your grid job will succeed. Note that prestaging such a small dataset is also quick, so it is still a good idea to run it.

Prestage less than 100K files

As a practical matter, it is better to handle larger datasets by splitting them up first. Smaller datasets, with fewer than 100K files, can be prestaged in one command. You can see how many files are in your dataset with:

samweb count-files "dh.dataset=DATASET"

where DATASET is a mu2e SAM dataset name. Prestage with:

setup dhtools
samweb prestage-dataset --parallel=5 --defname=DATASET

You will need certificate authentication to run this command, since it writes to the SAM database.
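
The details are covered on the authentication wiki pages, but as a rough sketch (assuming a Fermilab Kerberos principal; yourname is a placeholder, and your setup may also require a VOMS proxy via voms-proxy-init):

kinit yourname@FNAL.GOV   # get a Kerberos ticket
kx509                     # convert the Kerberos ticket into an x509 certificate proxy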

The prestage will create a SAM project on the SAM station and create consumers that start requesting files from the project. The project on the SAM station knows about all the files it will need, so it does two things:

  • gives out first the files it thinks are most likely to already be on disk
  • looks ahead in the file list and starts prestaging upcoming files in a logical and efficient manner

The effect of the first point is that the prestaging proceeds quickly as long as it keeps finding files on disk, then slows down when it starts requesting files off tape. The --parallel switch sets how many SAM consumers run in parallel, making requests to the SAM database; 5 should always be reasonable.
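
A prestage of a large dataset can run for many hours, so it is convenient to run it detached from your login session and to spot-check progress with samOnDisk from another window. This is a sketch, assuming the same DATASET name as above and a log file name of your choosing:

# run the prestage in the background, immune to logout, with output captured in a log file
nohup samweb prestage-dataset --parallel=5 --defname=DATASET >& prestage_DATASET.log &

# from another session, spot-check what fraction of the dataset is on disk so far
setup dhtools
samOnDisk DATASET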

Prestage more than 100K files

Please take a look at the above section for some background information.

As a practical matter, it is better to handle larger datasets by splitting them up first. You can do this by creating a set of dataset definitions that contain subsets of the big dataset. There is a script in dhtools to do this.

We recommend breaking up the dataset into chunks of 100K files, so let N = Nfiles/100K (rounded up), then:

setup dhtools
samSplit DATASET TAG N

The output is N dataset definitions, each a subset of DATASET, with names of the form SUBSET_NAME = ${USER}_${TAG}_X, where X is the subset number.
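
For example (the user name and tag here are hypothetical), if user rlc runs samSplit with TAG=mix20 and N=3, the definitions would be named something like rlc_mix20_1, rlc_mix20_2 and rlc_mix20_3; check the script output for the exact names and numbering. You can confirm how many files landed in each subset with:

samweb count-files "defname: rlc_mix20_1"
samweb count-files "defname: rlc_mix20_2"
samweb count-files "defname: rlc_mix20_3"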

It takes about a day to prestage 100K files, so prestage one subset each day:

setup dhtools
samweb prestage-dataset --parallel=5 --defname=SUBSET_NAME
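
Continuing the hypothetical example above, the daily sequence would look like:

samweb prestage-dataset --parallel=5 --defname=rlc_mix20_1   # day 1
samweb prestage-dataset --parallel=5 --defname=rlc_mix20_2   # day 2
samweb prestage-dataset --parallel=5 --defname=rlc_mix20_3   # day 3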

Prestaging Multiple Small Datasets

From time to time it is necessary to prestage several datasets as input to one job. For example, MDC2020Dev mixing jobs read inputs from 14 different datasets. There are 1776 files in these datasets and the files are scattered across a small number of tape volumes. In this example, the prestage time is dominated by robot arm motion and tape seek time; if you prestage each dataset on its own you will pay most of these costs 14 times. To optimize this you can form a snapshot that merges the 14 datasets and prestage the snapshot. In the MDC2020Dev example it took 1 hour to prestage 70 files from one dataset and also 1 hour to prestage the 1706 files from the other 13 datasets. SAM mounts each tape once, copies all of the snapshot's files that are on that tape, and does so in the order in which the files are stored on the tape, so there is no wasted robot or seek motion.

If there are more than 100k files in the snapshot, split the snapshot as described in the earlier section.

The SAM command to create a snapshot that joins three datasets is:

samweb create-definition <your_user_name>_<unique_name> "dh.dataset dsname1 or dh.dataset dsname2 or dh.dataset dsname3"

Note that the full list of dh.dataset clauses must be on a single line; in a shell script the line cannot be broken with a trailing backslash (\).
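
Once the merged definition exists, it can be counted and prestaged like any other dataset definition; for example (the definition name here is hypothetical):

samweb count-files "defname: rlc_mix_inputs"
samweb prestage-dataset --parallel=5 --defname=rlc_mix_inputs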


Prestage speed

Overall the prestage speed is about 100K files per day if things are going well. If the project has to get files off tape and Enstore is very busy, it may slow down by a factor of 2 or 3. There may be periods of several minutes when the file count does not progress. You can monitor the progress of the SAM station through the SAM station links on the OfflineOps page.
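
As a rough worked example, a 250K-file dataset would be split into N = 3 subsets of roughly 83K files each; at one subset per day that is about three days of prestaging, assuming Enstore is not heavily loaded.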