Workflows

Workflow for jobs

* [[Grids|Grids]] - overview of the computing farms and job submission
* [[HPC]] - high performance computing resources
* [[Docker]] - software containers for submitting to special farms
* [[AnalysisWorkflow| analysis workflow]] - how to read existing datasets
* [[MCProdWorkflow| production simulation workflow]] - how to run simulation, concatenate and upload the files


Data Handling

* dCache - the large aggregated data disk system
* enstore - the mass storage tape system
* Data transfer - how to move large datasets, necessary for grid jobs
* File names - how to name files for upload or for production
* File families - the logical grouping of data on tapes
* SAM - the file metadata and data handling management system
** SAM metadata - the metadata stored for each file
** SAM expert - some notes for the future, not of general interest
* File tools - tools for manipulating large file datasets and metadata
* FTS Upload - deprecated method to upload files via the FTS area

Operations