Apptainer (formerly Singularity) is a container virtualization software specifically designed for HPC environments. You can imagine containers as lightweight virtual operating systems with preinstalled and preconfigured software that can be run just like any other program on the cluster. You could, for example, run software in an Ubuntu environment within our Rocky compute cluster. This helps overcome portability issues with software that has very specific dependencies, was not built to be run under RHEL-based distributions or needs to be ported with the exact same configuration that was used on another system.
Table of Contents
- Container Environment
- Run a container
- Apptainer Autocompletion
- Apptainer and MPI
- GPU Usage in Containers
- Example Batch Scripts
- Example for Exec Script
- Special problems with shared home directories
- Converting Docker Images for Apptainer
- Best Practices for Apptainer Usage
- Common Use Cases
- Common Errors
- Further Questions
Containers allow software developers and users to package software and its dependencies in a virtual environment that can easily be ported to completely different systems. Not only does this eliminate the need for complex software installation, it also makes results received through the containerized software reproducible. Computations are always run using the exact same software.
Usual containerization software isolates applications from the host system. This poses problems in the context of HPC because users often run jobs across multiple nodes using special interconnects that need software support. Apptainer, however, allows running containerized multi-node MPI jobs that leverage the Intel Omni-Path fabric. While you do not have access to the host operating system, within the container you may still do the following things:
- Access all your personal directories ($HOME, $WORK and $HPCWORK). Within most containers you may access these via the aforementioned variables as usual. You may thus comfortably share data between containerized and native applications.
- Access other nodes through the network and run multi-node jobs.
- Access GPUs.
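For example, because your personal directories are bind-mounted, a file written on the host is immediately visible inside the container (a sketch; my_container.sif is a placeholder image name):

```shell
# write a file on the host ...
echo "hello" > $WORK/apptainer_share_test.txt
# ... and read it from inside the container: same path, same contents
apptainer exec my_container.sif cat $WORK/apptainer_share_test.txt
```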
In the following paragraphs, container image refers to the files that are used to run a container whereas container refers to the running instance of such an image.
There are three standard ways of running an Apptainer container: the shell, exec and run subcommands.
- The shell subcommand allows you to start an interactive shell within the container and is intended for use on frontends or within interactive batch jobs.
- The exec subcommand allows users to run custom shell commands in a containerized shell. This is the default way of using Apptainer within batch scripts and can be coupled with a separate exec script.
- The run subcommand triggers the execution of a pre-defined runscript within the container. Container creators may choose to provide a default workflow which can be accessed this way.
Examples of starting a container:
```shell
apptainer shell $HOME/my_container
apptainer run $HOME/my_container
apptainer exec $HOME/my_container cat /etc/os-release
apptainer exec $HOME/my_container $HOME/my_exec_script.sh
```
Apptainer can generate a file to add command autocompletion to your shell. Autocompletion can both make typing long commands easier and give you more information on the available subcommands and arguments. To add autocompletion to your shell, run the following commands:
```shell
apptainer completion zsh > $HOME/.apptainer.autocomp.sh
# Make sure the file was created successfully and not filled with an error message
cat $HOME/.apptainer.autocomp.sh
echo "source $HOME/.apptainer.autocomp.sh" >> $HOME/.zshrc
. $HOME/.zshrc   # or restart your session
```
After that, you can use autocompletion by typing "apptainer " and then pressing TAB. Repeatedly pressing TAB cycles through the subcommands. Argument autocompletion is supported after double dashes, e.g. apptainer shell --<TAB>.
To remove autocompletion, delete the created autocompletion file and remove the source line from $HOME/.zshrc.
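Assuming the completion file and source line were created as shown above, the removal can be sketched as:

```shell
# delete the generated completion file ...
rm -f $HOME/.apptainer.autocomp.sh
# ... and drop the matching source line from the zsh startup file
sed -i '/apptainer\.autocomp\.sh/d' $HOME/.zshrc
```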
Apptainer supports two models for MPI usage. The first, which we strongly recommend, uses the host MPI implementation to handle the actual communication between processes and utilization of the Intel Omni-Path interconnect between compute nodes. The container must simply contain binary-compatible MPI libraries, which often comes down to installing a similar or even the same MPI implementation in the container. This is called the "hybrid model". Notably, it allows you to use our regular IntelMPI and OpenMPI modules with your container. Another option is the "bind model" which involves binding the host MPI implementations into the container. This requires additional effort and should not be used unless necessary.
Using the hybrid model is very easy if the containerized MPI version is supported. Instead of running the MPI wrapper $MPIEXEC inside the container, you run the container inside the wrapper, i.e. $MPIEXEC apptainer run my_container. For an example batch script, see below.
Providing access to GPUs inside containers is a non-trivial task. Luckily, Apptainer supports this with a simple command line argument. To use NVIDIA GPUs, simply add the --nv option after your desired subcommand:
```shell
apptainer exec --nv $HOME/my_container $HOME/my_tensorflow_script.sh
```
The --nv flag will only work correctly on systems that actually have a GPU installed. If run on a non-GPU host, Apptainer will issue a warning but still execute the container.
Apptainer will use the host's CUDA installation where possible. This works well for a lot of applications that support a recent CUDA version.
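A quick way to verify that GPUs are visible inside a container is to run nvidia-smi through it on a GPU node (my_container.sif is a placeholder image name):

```shell
# on the host: list GPUs, or report if the node has none
nvidia-smi -L 2>/dev/null || echo "no GPU on this node"
# inside the container (with --nv): the same GPUs should be listed
apptainer exec --nv my_container.sif nvidia-smi -L
```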
```shell
#!/usr/bin/zsh

### Job name
#SBATCH --job-name=APPTAINER_EXAMPLE

### File / path where STDOUT will be written, %J is the job id
#SBATCH --output=apptainer-job-out.%J

### Request the time you need for execution. The full format is D-HH:MM:SS
### You must at least specify minutes or days and hours and may add or
### leave out any other parameters
#SBATCH --time=30

### Request memory you need for your job in MB
#SBATCH --mem-per-cpu=2000

### Request number of hosts
#SBATCH --nodes=1

### Request number of CPUs
#SBATCH --cpus-per-task=4

### Change to the work directory
cd $HOME/jobdirectory

### Execute the container
### myexecscript.sh contains all the commands that should be run inside the container
apptainer exec /path/to/my/container $HOME/myexecscript.sh
```
```shell
#!/usr/bin/zsh

### Job name
#SBATCH --job-name=APPTAINER_MPI_EXAMPLE

### File / path where STDOUT will be written, %J is the job id
#SBATCH --output=apptainer-job-out.%J

### Request the time you need for execution. The full format is D-HH:MM:SS
### You must at least specify minutes or days and hours and may add or
### leave out any other parameters
#SBATCH --time=30

### Request memory you need for your job in MB
#SBATCH --mem-per-cpu=3800

### Request number of tasks/MPI ranks
#SBATCH --ntasks=4

### Change to the work directory
cd $HOME/jobdirectory

### Execute the container
### myexecscript.sh contains all the commands that should be run inside the container
$MPIEXEC apptainer exec /path/to/my/container $HOME/myexecscript.sh
```
```shell
#!/bin/bash
# The shell in the shebang line above has to exist in the container.
# /bin/bash is a sane default; use /bin/dash (Ubuntu) or /bin/sh if bash is not available

# Place your application calls here
python ./script.py arg1 arg2
```
Not only do you have access to your home directory within the container, but it will also, by default, serve as the container's home directory for every container that you execute. This means that configuration files stored within your home directory, such as shell and application configuration files (e.g. for zsh), will be used within the container as well. This can prove both advantageous and disadvantageous: a shared configuration may make working within the container more comfortable, but at the same time introduce settings that are incompatible with the containerized environment.
Shell-based compatibility issues are mitigated by Apptainer's default behavior of invoking containers with /bin/sh. You may invoke another shell by specifying its path via the --shell argument. The shell needs to exist within the container image, which is usually the case for bash but not for zsh.
Python software within containers should make use of virtual environments or package managers like conda to avoid hard-to-trace side effects.
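One way to follow this advice is to keep a venv on $WORK and create and use it exclusively through the container's Python; the paths, image name and script name below are examples:

```shell
# create a virtual environment on $WORK using the container's Python
apptainer exec my_container.sif python3 -m venv $WORK/venvs/myproject
# install packages into it and run your code with the venv's interpreter
apptainer exec my_container.sif $WORK/venvs/myproject/bin/pip install numpy
apptainer exec my_container.sif $WORK/venvs/myproject/bin/python my_script.py
```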
If you wish to use an empty home directory within a container instead, please add the --no-home flag to your container invocation. This requires you to start the container from a path that is not within your home directory. You can also use a different directory as your temporary home directory via the --home flag.
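Both variants might look like this (my_container.sif and the directory names are placeholders):

```shell
# start from a path outside $HOME, then run without any home directory
cd /tmp
apptainer shell --no-home my_container.sif

# or substitute a clean scratch directory as the container's home
mkdir -p $WORK/container_home
apptainer shell --home $WORK/container_home my_container.sif
```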
Pull Docker Image From External Resource
The pull command allows retrieving arbitrary Docker images and converting them to Apptainer images in a single step. Container registries or software documentation will often explain how to retrieve an image using docker pull. The following command can be used instead:
```shell
apptainer pull ubuntu-23.04.sif docker://ubuntu:23.04
```
The docker:// prefix tells Apptainer that the following URI points to a Docker image and should be treated as such.
Pull Image From NVIDIA Container Registry
This snippet shows the full process from pulling to executing an image from the NVCR.
```shell
# Pull Tensorflow 23.08
apptainer pull tensorflow_23.08-tf2-py3.sif docker://nvcr.io/nvidia/tensorflow:23.08-tf2-py3

# Inspect a container with an interactive shell and GPU support
apptainer shell --nv tensorflow_23.08-tf2-py3.sif

# Execute a predefined script in a container with GPU support, e.g. within Slurm
apptainer exec --nv tensorflow_23.08-tf2-py3.sif ./myexecscript.sh
```
Build Apptainer Images
Please refer to the official user guide for information on how to build Apptainer images yourself. The section on definition files is very exhaustive and contains multiple examples, including one to build Apptainer images based on Docker images.
Working with containers can be daunting at first. To get you started, we have compiled this list of best practices to follow when using Apptainer on our system. Not all of these may apply to your use case but we still recommend that you skim over them.
Do not save container images on $HOME
Container images are easy to regenerate and usually quite large. All files stored under $HOME are backed up regularly, so storing non-crucial data there should be avoided. We therefore strongly recommend storing container images on $WORK or $HPCWORK. For the same reason, downloaded image blobs are cached on $WORK rather than $HOME.
Stay with SIF containers
SIF files are the default container format for Apptainer and store the container image in a single file. Containers can also be stored in a directory-based format (sandbox directories). Sandbox directories tend to perform a lot worse on $HOME, $WORK and $HPCWORK and should be avoided on these filesystems.
Unload all unnecessary modules
Most modules are not supported inside containers, with MPI modules as a notable exception. Loading modules changes a shell's environment, and these changes are carried over to a container invoked within this shell. This does not necessarily break things within the container as long as the environment variables changed by the module are not used by the containerized software. In some instances, such as compiler modules, these changes may cause software to break, e.g. the C compiler variable CC being set to "icc" (the Intel compiler binary). To avoid such issues, we recommend unloading all modules that are not needed for the container before starting Apptainer. If your program does not rely on MPI, you may use ml purge. If you want to go even further, you can use the --cleanenv argument to eliminate all host environment variables.
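A typical pre-flight sequence for a job that does not need MPI could look like this; the env call lets you inspect exactly what the containerized software will see (my_container.sif is a placeholder):

```shell
ml                    # list currently loaded modules
ml purge              # unload all of them (only if MPI is not needed)
# inspect the environment that will be visible inside the container
apptainer exec my_container.sif env | sort
```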
Use compatible MPI versions
Containers run via MPI need to be provided with a compatible MPI implementation. This can usually be achieved by choosing a compatible version from our module tree, loading the module, and starting Apptainer via the proper MPI wrapper (see above for an MPI batch example). If the container has been provided by a third party, it should contain information on the MPI version against which the program was linked. It should be noted that for Open MPI versions below 3.0, compatibility is only guaranteed between exactly matching versions.
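A simple sanity check before launching a job is to compare the MPI versions on the host and inside the container (a sketch; the exact binary name depends on the MPI implementation, and my_container.sif is a placeholder):

```shell
# host MPI version, after loading the matching module
mpiexec --version
# MPI version inside the container; the two must be binary compatible
apptainer exec my_container.sif mpiexec --version
```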
- I want to run a container without $WORK and $HPCWORK

By default, both $WORK and $HPCWORK are mapped into each container run on the cluster, giving you access to all personal directories you would have access to in a native environment. When filesystems are temporarily unavailable, or you want to selectively restrict access to your personal directories, you may want to prevent binding them into the container. This can be achieved like so:
```shell
apptainer shell --no-mount $WORK,$HPCWORK my_image
```
The --no-mount flag disables bind mounts for the passed paths. In this case we have excluded $WORK and $HPCWORK. You can adjust the list of directories according to your needs.
- I want to run Python software in a container
In general, container images will provide all necessary Python modules in a default system path which normally takes precedence over any locally installed modules. In this case, potentially conflicting module installations within your $HOME directory will not cause any problems. However, some images - mainly those distributed for Docker - might store modules in a custom location that needs to be added to the PYTHONPATH environment variable in order to use the image as intended. You may find further information on this topic in our Python documentation. A properly configured Apptainer image will take care of such issues by setting the PYTHONPATH accordingly. If this is not being done, you should make sure to start the container with an empty PYTHONPATH variable.
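One portable way to start a container with an empty PYTHONPATH is to strip the variable from the environment before Apptainer runs; env -u is standard coreutils, while the image and script names are placeholders:

```shell
# env -u removes PYTHONPATH from the environment handed to apptainer,
# so the container starts with the variable unset
env -u PYTHONPATH apptainer exec my_container.sif python3 my_script.py
```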
You should abstain from any software installations to default locations while inside a container, as this can easily break existing or upcoming software installations. The pip tool should only be used if you are fully aware of the consequences; if you see the need to use it, you are probably doing something wrong. Likewise, sharing venv or conda environments between containers and the host system is almost guaranteed to lead to problems. If your containerized software behaves oddly, you should test dropping your home directory with --no-home or switching it to a test directory with --home.
- I want to pull an image from a different container registry such as GitHub or GitLab but require login credentials
Registries that require authentication cannot be used without a valid endpoint configuration. Luckily, this is supported via a special set of commands. Please follow the instructions in the official user guide.
exec: ...: a shared library is likely missing in the image
This error can be caused by numerous issues:
- You are trying to execute a script that uses an invalid shebang (any scripting language). Please make sure that the path in your shebang, e.g. #!/bin/bash, is indeed available in your container.
- You are trying to execute a Python script that relies on modules which have not been installed in your container. In this case, please see "I want to run Python software in a container" above.
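For the shebang case, you can quickly check which interpreter a script requests and whether it exists in the image (my_container.sif is a placeholder):

```shell
# show the shebang line of the script ...
head -n 1 myexecscript.sh
# ... then test for that interpreter inside the container
apptainer exec my_container.sif test -x /bin/bash && echo "/bin/bash is available"
```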
If you have any questions that were not (fully) answered above or have any suggestions for improvements, please contact us via firstname.lastname@example.org. If your questions concern Apptainer itself, you may find the official User Guide helpful. Please be aware that while we offer support for Apptainer problems that occur on our system (problems while building images, problems running images, etc.), we cannot offer support for the software included in images that were not provided by us. Please contact the image creators or software developers where possible.