Dcache
Introduction
dCache is a system of many disks aggregated across dozens of Linux disk servers. The system makes all this hardware look like "one big disk" to the user and hides the details of exactly where the files are and how they are transferred. It allows load balancing and optimization, such as serving a popular file from several different machines. The entire system is designed to be high-bandwidth so it can serve data files to thousands of grid nodes simultaneously. For example, all the disk servers have special network paths to the grid nodes. Any time you transfer data files to or from a grid node, it should be to or from dCache using the dedicated tools. Executables, libraries, and UPS products are distributed by CVMFS or special code disks. So far, when we refer to data distributed by dCache, we have been referring to event data, where every grid node gets a unique file. There is another case, intermediate between code on CVMFS and event data on dCache: a single large file that needs to be distributed to every node but is too large for CVMFS. An example might be a library of fit templates that is 5 GB. In this case, the ideal solution is StashCache.
When you read or write this disk, the request goes through a server "head node", and the system decides which hardware to read from or write to. A database behind the system tracks where all the files are, logically and physically. Accessing this database adds latency to every command that touches dCache, so using dCache interactively for code building or analysis is not efficient and not recommended; please see the disk page for build and analysis areas.
You can access the files through several protocols, including an NFS server, which makes dCache look like a simple file system mounted as /pnfs. If you are moving data in and out of dCache using a few interactive processes, you can use simple unix commands: cp, mv, rm. Once you need to move data using many parallel processes, such as to or from grid nodes, please use the data transfer tools.
dCache has a home page, a lab home page, and monitoring pages.
Flavors
There are three flavors of dCache.
- scratch Anyone can write here, and you should use this area as temporary output for your grid jobs. If space is needed, files are deleted according to a least-recently-used algorithm. Your files may last for as little as 20 days since the last time you wrote or read them, and technically there is no guaranteed minimum lifetime, so plan ahead.
- persistent Files written here will stay on disk until the user deletes them, so this area can fill up. Only production files are written here - it is not for general use, though special cases might be considered. The one current exception is that users can write fcl files here instead of uploading them.
- tape-backed All files written to this area are copied to tape automatically. If space is needed, files are deleted off disk according to a least-recently-used algorithm. As files are requested, they are copied from tape to disk as needed, and a request will hang during tape access. This is the way collaborations "write to tape". Do not copy data to this area - it is carefully organized and only the production scripts can write here. Large datasets should be prestaged from tape to disk before they are read by grid jobs.
Official production datasets, and user datasets manipulated by the file tools will appear under the following designated dataset areas, corresponding to the above flavors:
- /pnfs/mu2e/scratch/datasets
- /pnfs/mu2e/persistent/datasets
- /pnfs/mu2e/tape
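A plain ls of these areas shows which datasets currently exist; the output naturally depends on what is stored at the time:

ls /pnfs/mu2e/scratch/datasets
ls /pnfs/mu2e/persistent/datasets
ls /pnfs/mu2e/tape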
Using the scratch area
On the interactive nodes, you can create your area in scratch dCache:
mkdir /pnfs/mu2e/scratch/users/$USER
If you are moving a few files using a few processes, you can use the unix commands in /pnfs: ls, rm, mv, mkdir, rmdir, chmod, cp, cat, more, less, etc.
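For example, a minimal round trip with these commands might look like the following (myfile.root is just an illustrative name):

# copy a local file into your scratch area
cp myfile.root /pnfs/mu2e/scratch/users/$USER/
# check that it arrived (a plain ls is cheap, see the notes below)
ls /pnfs/mu2e/scratch/users/$USER/
# copy it back and clean up
cp /pnfs/mu2e/scratch/users/$USER/myfile.root .
rm /pnfs/mu2e/scratch/users/$USER/myfile.root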
/pnfs is not a physical directory; it is an interface to a database and servers, implemented as an NFS server. Because there is latency to database and server access, there are restrictions on this interface that don't usually apply to local disk systems.
- You should avoid commands which make large demands on the database: "find .", "ls -lr *", and similar. A plain "ls" is a quick database access of the directory record, but "find" or "ls -l" requires a much slower access of the full file records, so a plain "ls" is always preferred (see the example after this list).
- Try to keep the number of files in a directory under 1000 to maintain good response time. Avoid excessive numbers of small files, or frequent renaming of files.
- If you are writing or reading dCache with a large number of processes, such as from a grid job, please use the data transfer tools.
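As a sketch of the first point above, compare a cheap directory listing with the expensive variants (using the scratch area created earlier):

# fast: only the directory record is read from the database
ls /pnfs/mu2e/scratch/users/$USER
# slow: these force a full file-record lookup for every entry - avoid them on large directories
ls -l /pnfs/mu2e/scratch/users/$USER
find /pnfs/mu2e/scratch/users/$USER -name '*.root'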
Other Access Protocols
While we will typically use the NFS protocol for small numbers of accesses, and the data transfer tools to read or write files from grid nodes, there are other protocols to access dCache files.
- dcap, the native protocol
dccp /pnfs/mu2e/scratch/users/$USER/filename .
dccp is installed on mu2egpvm, but it may have to be installed or set up (the product is named dcap) on other nodes.
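As a sketch of the reverse direction, assuming dccp is in your path (or has been set up from the dcap product mentioned above):

# write a local file into your scratch area using the native protocol
dccp ./filename /pnfs/mu2e/scratch/users/$USER/filename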
- root protocol
root [0] ff = TFile::Open("/pnfs/mu2e/scratch/users/$USER/file")
There are other versions of this access, through different plugins which trigger different authentication, protocols and transfer queues.
- webdav See CS-doc-5050 for reference.
curl -1 -L --cacert $X509_USER_PROXY --capath /etc/grid-security/certificates --cert /tmp/x509up_u1311 https://fndca1.fnal.gov:2880/pnfs/fnal.gov/usr/mu2e/scratch/users/rlc/s1 (not currently working)
- gridFtp
kinit
getcert
# point the grid tools at the certificate obtained from getcert
export X509_USER_CERT=/tmp/x509up_u`id -u`
export X509_USER_KEY=$X509_USER_CERT
export X509_USER_PROXY=$X509_USER_CERT
grid-proxy-init
voms-proxy-init -noregen -rfc -voms fermilab:/fermilab/mu2e/Role=Analysis
# copy a file from scratch dCache to the current directory over gridFTP
globus-url-copy gsiftp://fndca1.fnal.gov:2811/scratch/users/$USER/file-name file:///$PWD/file-name
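To write into dCache rather than read from it, globus-url-copy takes the source first and the destination second, so under the same proxy setup the last command reverses to:

# copy a local file into your scratch area over gridFTP
globus-url-copy file:///$PWD/file-name gsiftp://fndca1.fnal.gov:2811/scratch/users/$USER/file-name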