HPC
Revision as of 16:53, 18 April 2017
This page needs expert review!
Introduction
HPC stands for High Performance Computing, which is shorthand for the next generation of large-scale computing that is being developed. (This is being written in 2017.) We expect Mu2e to become more involved in these sites and advanced styles of programming, and to take full advantage of these resources by the time we take data.
As physical limits on Moore's law and other scaling laws approach, the economics of computing has evolved. Instead of many machines with large memory and few cores, the trend is toward fewer machines with limited memory and many cores. This forces users to reduce memory use where possible and to share as much of the rest as possible. It also motivates writing programs whose parts can run in parallel across the cores.
On top of this trend is another strong trend for the user to bring their operating system environment with them. Instead of the user just submitting a job to a machine with a known environment (such as SL6) coordinated with the site, the user submits their jobs as a virtual machine plus their workflow. The worker node runs the virtual machine which then starts the workflow. In this way, the site can be much more generic and host almost any type of user. The virtual machine may or may not be visible to the user. The user may simply check a box for what VM to use, or may build a custom VM for the experiment, or even specific to the workflow.
Methods
There are different levels of memory sharing.
One method in use at several sites is the "container", such as Docker or Singularity. This is a lightweight type of virtual machine that maximizes sharing between your jobs. The worker node runs a Linux kernel, but the exact OS is not important. The container is built up from the libraries and executables that the workflow will need. Contents such as the art libraries might be collected by an expert; the experiment or user would then add the particular version of Mu2e code and utilities needed. This container runs on the worker node, and one or many copies of the job workflow can then run inside it. The sharing comes from many jobs running against one set of shared libraries, instead of each job running in its own virtual machine with its own copy of the libraries.
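As a concrete illustration, such a container image might be layered roughly like this. This is a hypothetical sketch: the base image, package names, paths, and entry point below are invented for illustration, and actual Mu2e images would be maintained by experts.

```dockerfile
# Base layer: a generic Linux distribution; the exact OS matters little.
FROM scientificlinux/sl:6

# Expert-maintained layer: common framework libraries (e.g. art).
RUN yum install -y art-framework        # hypothetical package name

# Experiment layer: the particular Mu2e code version this workflow needs.
COPY Offline /opt/mu2e/Offline          # hypothetical path
ENV MU2E_BASE /opt/mu2e/Offline

# Many copies of the job workflow can then run inside one container,
# all sharing this single set of libraries.
CMD ["/opt/mu2e/Offline/bin/runJob"]    # hypothetical entry point
```

The layering is what enables the sharing: the base and framework layers are built once and reused by every job, while only the thin experiment-specific layer changes between workflows.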
There are different levels of parallelization.
- bit-level - wider buses and FPUs
- multithreading - the CPU decides, at the level of a few instructions, what can be computed in parallel
- vectorization of code - each line of code is examined by a programmer and threads are inserted where possible. For example, if the initialization of several objects is independent, each can be done in a separate parallel thread.
- function level - some tasks are naturally independent and can be done in parallel, such as tracing a particle or fitting a track. This approach is being pursued in Geant, where each particle is naturally a thread.
- module level - art's rules ensure that, for example, analysis and output modules can be run independently and in parallel
- event level - a central input module distributes whole events out to a set of threads ("multi-scheduling"). This is the focus of the art team in 2017 and will use Intel Threading Building Blocks (TBB).
Machines
See also Doc 9040. Mu2e might be able to run on Edison and CORI Phase I with little effort, and on MIRA with dedicated effort.
- NERSC (National Energy Research Scientific Computing Center)
- Edison - 2.57 petaflops, Cray XC30, Intel "Ivy Bridge" processors, 2.5 GB of memory per core, uses Docker, no special compiler
- CORI Phase I - Cray XC40, Intel Xeon Haswell processors, 4 GB of memory per core, uses Docker, requires special compilation
- CORI Phase II - second-generation Intel Xeon Phi, Knights Landing (KNL), 4 hardware threads per core, 50 MB/thread; requires function-level parallel programming and special compilation
- ALCF (Argonne Leadership Computing Facility)
- MIRA 10 petaflops IBM Blue Gene/Q, 200 MB/core, requires special compilation
- Theta - 9.65 petaflops, 4 GB/core
- Aurora - future
- SDSC (San Diego Supercomputer Center)
- COMET, similar to CORI I.
- XSEDE - a collaboration helping users with supercomputers