Docker
Introduction
Docker is a company producing an open-source "software container platform", but the word is also used to refer to the platform itself. Docker allows a user to collect all the code needed to run an application, in our case a grid simulation job, into an image. This image will contain everything from the OS-level utilities and libraries, to the lab UPS products, to the collaboration code, to our grid scripts and environment variables. The image starts with an OS image, usually provided by the OS experts and uploaded to the Docker repository. The user then adds layers of software to this base image. The computing division has added the art software layer to an OS image and uploaded this new, larger image to the repository. Mu2e can download the art image and add our own layers. This final mu2e image can be passed around and run on almost any platform with x86_64 architecture.
A Docker image can only be run on a machine with the Docker software installed and the Docker daemon running. When the user issues a docker command, the daemon starts the image. The user's command can run in the image's environment, exit, and return control to the user. The user may also run the image in an interactive mode, getting a prompt inside the image and issuing commands as on any linux command line. In effect, the user can fire up their own pseudo-virtual machine. The user's process will have a virtual working directory which can be destroyed, or saved, on exit from the container. File systems mounted on the host machine (your home directory, for example) can be mounted in the pseudo-VM, so you can read and write permanently to the file system. This pseudo-VM is called a container because it is isolated in software from other copies of the image that may be running.
The main point of the image/container model is that the user brings their entire software stack to the CPU, so the execution of the application is very reliable and reproducible. This model differs in an important way from running several full virtual machines on the host. Each virtual machine requires a separate memory space and loads its own copy of the shared libraries, quickly using up memory. The Docker daemon knows when users are running multiple containers from one image and shares as much memory as possible, such as the shared libraries, while keeping each container's data space strictly isolated. If you are running on a machine with 64 GB of memory and 64 cores, you may be able to run 64 containers of one Docker image, but you can't run 64 full virtual machines. Docker is also more CPU-efficient than a full VM.
Docker Software
The Docker software can be installed on Mac, Windows, and Linux. SL6 has an older kernel which cannot implement all the features of Docker and limits the size of a Docker image to 10 GB. Windows (>=10), MacOS (>=10.11), and SL7 are all suitable for running Docker. A few collaborators have access to the computing division machine called woof, where Docker is already available. The software needs to be installed by admin/root because it requires a higher level of access to the kernel than a typical user application, and it runs a system daemon.
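For reference, a root installation on an SL7 machine might look something like the sketch below. The package name and the docker group convention are assumptions that depend on your site's configuration, so check with your admins.
sudo yum install docker                 # assumes the docker package is in a configured repo
sudo systemctl enable docker            # start the daemon on every boot
sudo systemctl start docker             # start the daemon now
sudo usermod -aG docker <username>      # optional: let this user run docker without sudo
docker run hello-world                  # quick smoke test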
Creating a Docker image
A Docker image is a file, like a sophisticated tarball, which contains all the code needed to run our executables. This image will contain everything from the OS-level utilities and libraries, to the lab UPS products, to the collaboration code, to our grid scripts. Anything you would use to run the job interactively needs to be provided in the image. The image might be up to 15 GB in size, or it might be as small as 1 GB if it is carefully pruned.
One could start an image from scratch, but we will always start with an operating system image, such as an SL6 or CentOS image, provided by experts, and add to that base image. In fact, the computing division has provided an OS image with an added layer of the mu art software manifest. We then have to add a mu2e release and some odds and ends such as the mu2egrid product.
To download the base image, you will need an account on hub.docker.com. Searching for FNAL will show the existing images. All hub images are public by default, so they can be downloaded and shared from any account.
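For example, you can search for and pull a base image from the command line; the fnalart/mu tag shown here is the one used in the Dockerfile example below.
docker search fnal                              # list public FNAL images on the hub
docker pull fnalart/mu:v2_06_02-s47-e10-prof    # download the art base image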
Theoretically, the cvmfs client could be added to the image and run in the container to mount and read a standard cvmfs repository. In practice this has not been debugged by the experts, so we need to include all the software directly in the image.
To build a useful image, you will need to pick a base image and collect the files you want to put in the image. A control file called Dockerfile defines how to add your files to the base image to create a new image. Here is an example:
FROM fnalart/mu:v2_06_02-s47-e10-prof
ADD products /products
ADD artexternals /artexternals
ADD DataFiles /DataFiles
ADD Offline /Offline
ADD setupmu2e-art.sh /
ENV MU2E_DOCKER_IMAGE v6_2_1-0
CMD ["/bin/bash", "-l"]
The FROM command defines the base image. The ADD command moves files into the image. It takes two arguments: the first is the directory relative to the directory given in the build command, and the second is the target directory in the image. The ENV command causes an environment variable to be defined every time the image is run. CMD defines the default command for the image; this is what is run when you run it interactively. The CMD in the highest layer overrides any CMD from previous layers.
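As a quick check of the ENV behavior, you can ask a throwaway container to echo the variable; this assumes the image name used in the build example below.
docker run --rm rlcfnal/test:v6_2_1-0 bash -l -c 'echo $MU2E_DOCKER_IMAGE'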
The pattern docker expects is to put your files to be added in a directory along with the Dockerfile, then tell it to build.
docker build -t <user/repo:tag> <directory>
docker build -t rlcfnal/test:v6_2_1-0 /scratch/rlc/docker/v6_2_1-0
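Once the build finishes, you can list the local images to confirm the new image is there and see its size:
docker images    # lists repository, tag, image ID, and size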
The image itself goes into the working area which was created when docker was installed. You generally don't need to know where it is. You will push the image to hub.docker.com when it is ready to use.
If you build many images off of one base image, docker is smart enough to re-use the base image layers and not waste any disk space.
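You can see the layers with docker history; the image name here is the one built above.
docker history rlcfnal/test:v6_2_1-0    # lists the layers and the size each one adds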
To upload the image to docker hub, which is recommended whenever images need to be shared, you will need an account on hub.docker.com. Then login and push the image.
docker login    (let it prompt you; I couldn't get the switches to work, maybe special characters...)
docker push rlcfnal/test:v6_2_1-0
Running a Docker image
There are two basic ways to run a docker image. The first is to create a container, run a single command, and exit. Some examples are
docker run --rm <image> <command>
docker run --rm rlcfnal/test:v0 bash -c "ls; pwd"
docker run --rm rlcfnal/test:v0 bash -l -c "ups list -aK+"
The "--rm" says do not save the container after running the command - you want this for any simple single commands or the working disk space will fill up with old non-running containers. If the command prints anything, it will appear on your screen. There seem to be many ways to use docker. One way is to save a container, then "commit" the container as a new image.
You can also run an image interactively.
docker run -it <image>
docker run --rm -it -v /scratch/rlc/docker/v6_2_1-0:/builddir rlcfnal/test:v6_2_1-0
The "--rm" switch says do not save the container after exit. The "-it" says start the container and connect the terminal. The "-v" option is of the form /localdir:/imagedir
and causes the /localdir
to be mounted in the container as /imagedir
. There doesn't seem to be an x connection since root -l
fails to display.
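On a Linux host, a commonly used workaround is to pass the host's X display socket into the container; we have not verified this with our images, so treat it as a sketch. You may also need to allow local connections with xhost on the host.
docker run --rm -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix rlcfnal/test:v6_2_1-0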
Submitting a Docker Image
When the Docker image is run at NERSC, it is actually converted to a Shifter image. This conversion adds security, access to the aggregated disk system on the cluster (their equivalent of dCache), and access to the internet. We should be able to write output with ifdh.
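For example, copying a job's output file back to Fermilab dCache might look like the line below; the file name and destination path are hypothetical.
ifdh cp sim.art /pnfs/mu2e/scratch/users/$USER/sim.art    # hypothetical output file and destination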
Submission is via a special fifebatch URL, but otherwise it acts like a regular batch job.
NERSC has several queues.
- whole nodes: runs your job on a node with 64 cores and 128 GB of memory, so you can plan your memory sharing.
- singles: runs on one of the same nodes, but with only about 2 GB reserved for your job.
- debug: starts quickly, but runs for only 30 min, and much of that time may be wasted by the glide-in.
The queue at NERSC is FIFO; there is no scheduling based on priority or "fair share". The wall-clock time limit is 48 h, but shorter jobs are likely to start sooner, because they can fit into the limited time slots and jump ahead in the queue.
Note for Windows
On a Windows 10 laptop, the basic download only works for Windows Pro, which I didn't have, so I switched to downloading the Docker Toolbox. You need to run it as an admin. Running the first example
docker run hello-world
resulted in errors
docker: error during connect: ...
I performed a cleanup procedure found on blogs:
docker-machine rm default
docker-machine create --driver virtualbox default
docker-machine env --shell cmd default
(cut and paste the printed commands)
and then the hello-world example worked, as did running an interactive container:
docker run -it ubuntu bash