== Introduction ==
Keeping all Intensity Frontier data on disk is not practical, so large datasets must be written to tape.  At the same time, the data must always be available and delivered efficiently.

mu2e has several forms of [[Disk|disk space]], and large aggregated disks are available in the [[Dcache|dCache]] system.  But we also write a large part of the files we produce to tape, which is less expensive and can hold much more data.  We usually write data to tape for one or more of the following reasons:
* to make room for new activity
* to keep the data safe
* to make a permanent record

The tape system is called [[Enstore|enstore]] and consists of several tape libraries and many tape drives with good connections to dCache.  We write to tape by copying files into tape-backed dCache, and from there they are copied to tape automatically.  If files go unused, they are deleted from disk on the scale of weeks, so that they remain only on tape.  We can copy them back from tape to disk by [[Prestage|prestaging]] them.

The upload system coordinates several subsystems:
* '''dCache''': a set of disk servers, a database of the files on those servers, and services to deliver the files with high throughput
** '''scratch dCache''' (see [[Dcache]]): a dCache where the least-used files are purged as space is needed
** '''tape-backed dCache''': a dCache where all files are also on tape and are cycled in and out of the disk cache as needed
* '''pnfs''': an nfs server behind the /pnfs/mu2e partition, which looks like a file system to users but is actually an interface to the dCache file database
* '''Enstore''': the Fermilab system of tape and tape drive management
* '''SAM''': Serial Access to Metadata, a database of file metadata and a system for managing large-scale file delivery
* '''FTS''': File Transfer Service, a process which manages the intake of files into tape-backed dCache and SAM
* '''jsonMaker''': a piece of mu2e code which helps create and check metadata when creating a SAM record of a file
* '''SFA''': Small File Aggregation; enstore can tar small files into a single large file before it goes to tape, to increase tape efficiency

'''All data written to tape must follow certain conventions.'''  Please familiarize yourself with the links in this list:
* all files are named by [[FileNames|mu2e conventions]] (sketched briefly below)
* all files will have a [[SAM|SAM record]] with [[SamMetadata|SAM metadata]], including the file location
* all files are uploaded using [[FileTools|standard tools]], see especially [[FileTools#jsonMaker|jsonMaker]]
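For orientation, here is a sketch of the naming convention; the particular file name below is invented for illustration, and the authoritative field definitions and allowed values are on the [[FileNames]] page.
<pre>
 data_tier.owner.description.configuration.sequencer.format     general form of a file name (six fields)
 sim.mu2e.cd3-pions-cs1.v563.001000_00000000.art                 an illustrative (made-up) file name
 sim.mu2e.cd3-pions-cs1.v563.art                                 the corresponding dataset name (no sequencer)
</pre>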
  
The following are the common workflow cases.

==Upload steps==
The basic procedure is for the user to run the jsonMaker on a data file to make a [http://json.org json file], then copy both the data file and the json file into an FTS area in [[Dcache|scratch dCache]] called a dropbox.  The json file is essentially a set of metadata fields with their corresponding values.  The [http://mu2esamgpvm02.fnal.gov:8787/fts/status FTS] will see the data file with its json file, copy the data file to a permanent location in tape-backed dCache, and use the json to create a metadata record in SAM.  The tape-backed dCache will migrate the file to tape quickly, and the SAM record will be updated with the tape location.  Users then read the files in tape-backed dCache through [[SAM]].
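To give a feel for what jsonMaker produces, here is a minimal sketch of a json metadata file; the field names and values shown are only indicative (the authoritative field list is on the [[SamMetadata]] page, and in practice the file is written by [[FileTools#jsonMaker|jsonMaker]], not by hand).
<pre>
> cat sim.mu2e.example-job.v0.001000_00000000.art.json
{
    "file_name": "sim.mu2e.example-job.v0.001000_00000000.art",
    "file_size": 1234567890,
    "data_tier": "sim",
    "file_format": "art",
    ...
}
</pre>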
 
 
 
 
 
Since there is some overhead in uploading, storing and retrieving each file, the ideal file size is as large as reasonable.  The size should be determined by how long an executable will typically take to read the file.  This will vary with exe settings and other factors, so use a conservative estimate.  A file should be sized so that the longest jobs reading it take about 4 to 8 hours to run, which generally gives efficient large-scale job processing.  A grid job that reads a few files in 4 hours is nearly as efficient, so you can err on the small side.  You definitely want to avoid a single job section requiring only part of a large file.

In any case, file sizes should not go over 20 GB, because such files become less convenient in several ways.  Files can be concatenated to make them larger, or split to make them smaller.  Note that we have agreed that a subrun will only appear in one file.  Until we get more experience with data handling, and see how important these effects are, we will often upload files in the sizes we make them or find them.
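As a purely illustrative back-of-the-envelope estimate (the rate and event size below are invented; measure the real numbers for your own executable and dataset):
<pre>
 processing rate   ~2 events/s   ->  a 4-8 hour job reads  ~30,000-60,000 events
 event size        ~100 KB/event
 target file size  30,000-60,000 events x 100 KB  =  roughly 3-6 GB per file
</pre>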
 
  
Once files have been moved into the FTS directories,  
+
==MC production workflow, art files==
please do not try to move or delete them since this will
+
In the standard MC [[MCProdWorkflow|workflow]], there are three times you might upload files:
confuse the FTS and require a hand cleanup.  Once files
+
* after generating the fcl, uploading the fcl files is part of [[GenerateFcl|that procedure]]
are on tape, there is an expert procedure to delete them,
+
* after producing art files (including concatenation if needed), this is described in this section
and files of the same name can then be uploaded to replace
+
* upload log files are archive, which is handled [[here]]
the bad files.
 
  
<!********************************************************>
+
After the jobs have completed, the output datasets will be below a directory like the following, where you will be working:
== Recipe==
+
  cd /pnfs/mu2e/persistent/users/mu2epro/workflow/project_name/good
 +
Below this directory, there are directories for each cluster, and below that directories for each job.
 +
Each output art file named "a.b.c.d.e.f" should have a associated json file called "a.b.c.d.e.f.json" produced as part of the grid job and containing the SAM record metadata.
  
If you are about to run some new Monte Carlo in the official framework,
+
There are two steps.  First, declare the files to the SAM database
then the upload will be built into the scripts and documented
+
<pre>
with the mu2egrid
+
  mu2eClusterFileList --dsname <dataset> --json <cluster_number>  | mu2eFileDeclare
[ Monte Carlo submission] process.
+
</pre>
<font color=red>this is under development,
+
where <code>dataset</code> is the dataset name of the files to find and upload and
please ask Andrei for the status</font>
+
the <code>cluster_directory</code> is one of the cluster subdirectories.
  
 +
The second step is to move the files to the final location in tape-backed dCache:
 +
<pre>
 +
mu2eClusterFileList --dsname <dataset> <cluster_directory>  | mu2eFileUpload --tape
 +
</pre>
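A minimal sketch of the copy step, assuming the json file has already been written alongside the data file and that the dropbox is the /pnfs/mu2e/scratch/fts area named above (the file name and any dropbox subdirectory are illustrative; see [[FileTools]] for the real procedure):
<pre>
 # copy the data file and its json file into the FTS dropbox
 ifdh cp sim.mu2e.example-job.v0.001000_00000000.art      /pnfs/mu2e/scratch/fts/sim.mu2e.example-job.v0.001000_00000000.art
 ifdh cp sim.mu2e.example-job.v0.001000_00000000.art.json /pnfs/mu2e/scratch/fts/sim.mu2e.example-job.v0.001000_00000000.art.json

 # some time later, after the FTS has processed the file, inspect its SAM record
 # (assumes a mu2e environment where samweb is set up)
 samweb get-metadata sim.mu2e.example-job.v0.001000_00000000.art
</pre>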
  
==MC production workflow, art files==
In the standard MC [[MCProdWorkflow|workflow]], there are three times you might upload files:
* after generating the fcl; uploading the fcl files is part of [[GenerateFcl|that procedure]]
* after producing the art files (including concatenation if needed); this is described in this section
* after the jobs finish, when the log files are uploaded as an archive, which is handled [[here]]

After the jobs have completed, the output datasets will be below a directory like the following, where you will be working:
  cd /pnfs/mu2e/persistent/users/mu2epro/workflow/project_name/good
Below this directory there are directories for each cluster, and below those, directories for each job.  Each output art file named "a.b.c.d.e.f" should have an associated json file called "a.b.c.d.e.f.json", produced as part of the grid job and containing the SAM record metadata.
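For orientation, the layout under the working directory typically looks something like this; the cluster and job directory names here are invented for illustration.
<pre>
> ls /pnfs/mu2e/persistent/users/mu2epro/workflow/project_name/good
1234567  1234589
> ls /pnfs/mu2e/persistent/users/mu2epro/workflow/project_name/good/1234567/00000
sim.mu2e.cd3-pions-cs1.v563.001000_00000000.art
sim.mu2e.cd3-pions-cs1.v563.001000_00000000.art.json
</pre>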
  
There are three steps.  First, declare the files to the SAM database:
<pre>
  mu2eClusterFileList --dsname <dataset> --json <cluster_directory>  | mu2eFileDeclare
</pre>
where <code>dataset</code> is the dataset name of the files to find and upload, and <code>cluster_directory</code> is one of the cluster subdirectories.

The second step is to move the files to their final location in tape-backed dCache:
<pre>
mu2eClusterFileList --dsname <dataset> <cluster_directory>  | mu2eFileUpload --tape
</pre>
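For example, with the dataset name used in the example below and a hypothetical cluster directory called 1234567, the first two steps would look like this:
<pre>
 cd /pnfs/mu2e/persistent/users/mu2epro/workflow/project_name/good
 mu2eClusterFileList --dsname sim.mu2e.cd3-pions-cs1.v563.art --json 1234567 | mu2eFileDeclare
 mu2eClusterFileList --dsname sim.mu2e.cd3-pions-cs1.v563.art        1234567 | mu2eFileUpload --tape
</pre>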
The third step is to tell SAM where the files are in the tape system, that is, to add their "location" to the SAM record:
<pre>
mu2eDatasetLocation --add=tape <dataset>
</pre>
Since it takes about a day, or sometimes more, for a file to migrate to tape and establish its tape location after being copied to tape-backed dCache, it makes sense to wait a day before running this command.

This command should be run as many times as needed in order to get the "Nothing to do" message, which means that all the files in the dataset now have their location recorded:
<pre>
> mu2eDatasetLocation --add=tape sim.mu2e.cd3-pions-cs1.v563.art
  No virtual files in dataset sim.mu2e.cd3-pions-cs1.v563.art. Nothing to do on Mon Nov 21 18:11:29 2016.
  SAMWeb times: query metadata = 0.00 s, update location = 0.00 s
  Summary1: out of 0 virtual dataset files 0 were not found on tape.
  Summary2: successfully verified 0 files, added locations for 0 files.
  Summary3: found 0 corrupted files and 0 files without tape labels.
</pre>
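Once the locations have been added, you can spot-check an individual file directly in SAM; this is only a convenience check, assuming a working samweb setup, and the file name below is illustrative.
<pre>
 # list the locations SAM has recorded for one file of the dataset
 samweb locate-file sim.mu2e.cd3-pions-cs1.v563.001000_00000000.art
</pre>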
==MC workflow, Log Files==
We usually upload the ntuple (TFileService) files along with the log files.