Usage of Singularity
Singularity is a container virtualization software specifically designed for HPC environments. You may imagine containers as lightweight virtual operating systems with preinstalled and preconfigured software that can be run just like any other program on a host system. You could, for example, run software in an Ubuntu environment within our CentOS compute cluster. This helps overcome portability issues with software that has very specific dependencies or was not built to run under RHEL-based distributions.
Please note: We are currently testing Singularity with selected use cases. If you need to run software on the cluster that benefits from containerization and are interested in using Singularity, please contact the IT-ServiceDesk (servicedesk@itc.rwth-aachen.de). Software that has already been containerized for Docker can often be ported to Singularity with virtually no effort.
The Container Environment
Containers usually isolate applications from the host system. This poses problems in the context of HPC because users run jobs across multiple nodes using special interconnects that need software support. Singularity, however, supports containerized multi-node MPI jobs and can leverage the Intel OmniPath fabric. While you do not have access to the host operating system, you may still do the following things within the container:
- Access all your personal directories ($HOME, $WORK, $HPCWORK). Within most containers (and all containers supplied by us) you may access these via the aforementioned variables as usual. You may thus comfortably share data between containerized and native applications.
- Access other nodes through the network and run multi-node jobs. You may also switch frontends although we do not recommend this workflow.
- Access host MPI implementations (see the example below)
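For illustration, here is a minimal sketch of the first and last points; the image name my_container.sif, the program my_mpi_app, and the process count are placeholders, and the exact MPI launcher invocation may differ on the cluster:

# List the contents of your work directory from inside the container
singularity exec my_container.sif ls $WORK

# Launch a containerized MPI program with the host's MPI launcher
# (the MPI inside the image must be compatible with the host MPI)
mpirun -np 4 singularity exec my_container.sif ./my_mpi_app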
Limitations
Notably, you will not have access to the module system within the container!
Dual Use of the $HOME Directory
Not only do you have access to your home directory within the container, but it will also, by default, serve as the container's home directory for every container that you execute. This means that configuration files stored within your home directory, such as application config files (most notably for Zsh!), will be used within the container as well. This can prove both advantageous and disadvantageous: a shared configuration may make working within the container more comfortable, but it may also introduce settings that are incompatible with the containerized environment.
Shell-based compatibility issues are mitigated by Singularity's default behavior of invoking containers with /bin/sh. You may invoke another shell by specifying its path via the "--shell" argument. The shell needs to exist within the container image, which is usually the case for bash but not for zsh.
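For example, to start an interactive Bash session (the image name is a placeholder):

singularity shell --shell /bin/bash my_container.sif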
Python software within containers should make use of virtual environments or package managers like conda to avoid hard-to-trace side effects.
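A minimal sketch of this approach, assuming the image provides python3 with the venv module; all paths and package names are placeholders:

# Create a virtual environment on $WORK using the container's Python
singularity exec my_container.sif python3 -m venv $WORK/venvs/myproject
# Install packages into that environment rather than into your $HOME
singularity exec my_container.sif $WORK/venvs/myproject/bin/pip install numpy
# Run a script with the environment's interpreter
singularity exec my_container.sif $WORK/venvs/myproject/bin/python my_script.py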
If you wish to use an empty home directory within a container instead, please add the "--contain" flag to your container invocation.
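For example (the image name is a placeholder):

singularity shell --contain my_container.sif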
Usage
We are using so-called container images, i.e. containers that consist of a single image file. Users cannot build custom images on the cluster since this requires elevated privileges. Additionally, during the testing phase users may only run images that have been pre-selected by the HPC team.
Run a container
There are three standard ways of running a Singularity container: the shell, run, and exec subcommands.
- The shell subcommand allows you to start an interactive shell within the container and is intended for use on frontends or within interactive batch jobs.
- The run subcommand triggers the execution of a pre-defined runscript within the container. Container creators may choose to provide a default workflow which can be accessed this way.
- The exec subcommand allows users to run custom shell commands in a containerized shell. This is the default way of using Singularity within batch scripts.
Using Singularity to start a container
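The following commands illustrate all three subcommands; the image name and the script path are placeholders:

# Start an interactive shell within the container
singularity shell my_container.sif

# Execute the container's predefined runscript
singularity run my_container.sif

# Run a custom command within the container
singularity exec my_container.sif $HOME/my_script.sh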
Use GPUs inside the container
Providing access to GPUs inside containers is a non-trivial task. Luckily, Singularity supports this with a simple command line argument. To use NVidia GPUs, simply add the "--nv" option after your desired subcommand like so:
singularity exec --nv tensorflow-gpu.sif $HOME/my_tensorflow_script.sh
Naturally, using the --nv flag will only work on systems that actually have a GPU installed. If run on a non-GPU host, Singularity will issue a warning but still execute the container.
CUDA
Singularity will use the host's CUDA installation where possible. You can thus change the CUDA version used by loading another version via the module system.
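For example, the following is an illustrative sketch; the exact module name and version depend on what is installed on the cluster:

# Load the desired CUDA version on the host before starting the container
module load cuda/11.0
singularity exec --nv tensorflow-gpu.sif $HOME/my_tensorflow_script.sh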
Example Batch Script
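The script below is a minimal sketch of a Slurm batch job that executes a container; all resource values, file names, and paths are placeholders that you need to adapt to your job.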
#!/usr/local_rwth/bin/zsh

### Job name
#SBATCH --job-name=my_singularity_job

### File / path where STDOUT will be written, %J is the job id
#SBATCH --output=my_singularity_job.%J.log

### Request the time you need for execution. The full format is D-HH:MM:SS
#SBATCH --time=0-01:00:00

### Request memory you need for your job in MB
#SBATCH --mem-per-cpu=2048

### Request number of hosts
#SBATCH --nodes=1

### Request number of CPUs
#SBATCH --ntasks=1

### Change to the work directory
cd $WORK/my_project

### Execute the container
singularity exec my_container.sif $WORK/my_project/my_script.sh
Converting Docker Images for Singularity
Please note: While you can download images like this during the testing phase, you will not be able to run the containers due to path constraints.
Pull Docker Image From External Resource
Singularity's pull command allows pulling arbitrary Docker containers and converting them to Singularity SIF containers in a single step. Container registries or software documentation will often explain how to retrieve a container like so:
docker pull nvcr.io/nvidia/tensorflow:20.06-tf2-py3
This tells Docker to pull the container "tensorflow" in version "20.06-tf2-py3" from the NVidia container registry. Pulling the same image via Singularity is almost as easy:
singularity pull docker://nvcr.io/nvidia/tensorflow:20.06-tf2-py3
The prefix "docker://" tells Singularity that the following URI points to a Docker image and should be treated as such.
Pull Image from NVidia Container Registry
This snippet shows the full process from pulling to executing an image from the NVCR.
# Pull Tensorflow 20.06 (the resulting file is named tensorflow_20.06-tf2-py3.sif by default)
singularity pull docker://nvcr.io/nvidia/tensorflow:20.06-tf2-py3

# Inspect a container with an interactive shell and GPU support
singularity shell --nv tensorflow_20.06-tf2-py3.sif

# Execute a predefined script in a container with GPU support, e.g. within SLURM
singularity exec --nv tensorflow_20.06-tf2-py3.sif $HOME/my_tensorflow_script.sh
Build Singularity Images on top of Docker Images
Please note: You need elevated privileges, i.e. the ability to run Singularity as root, to build containers. Therefore, users cannot build Singularity recipes on the cluster but have to resort to other machines.
Singularity supports building containers on top of Docker images via the "docker" bootstrapping option. A stub for this purpose would look like this:
Bootstrap: docker
# Use the image "ubuntu:18.04" from the Docker registry as the foundation for this container
From: ubuntu:18.04

%post
    # Commands that are run inside the container at build time, e.g. package installation

%help
    Text describing the container; it is shown by "singularity run-help"
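Assuming the definition above is saved as my_recipe.def, the image could then be built on a machine where you have root access (file names are placeholders):

sudo singularity build my_container.sif my_recipe.def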