
Intro to Singularity

Singularity is a container platform (like Docker) intended for use on HPC clusters. Unlike Docker, it does not need sudo rights and addresses some other security issues.

Singularity is available on the cluster as a module (see Modules).
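You can check which Singularity versions are installed with, for example:

ml avail Singularity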

You can load Singularity with the command:

ml Singularity
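To verify that the module is loaded, you can print the version:

singularity --version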

Converting from Docker to Singularity

Often your application is available as a Docker container, not a Singularity container. Now what? You can convert it. If you need to tinker with the Docker container first, you can do it on your own machine and then convert it into a Singularity image (see the Docker tutorial).
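A minimal sketch of that workflow, assuming Singularity is installed on your own machine and myapp is a placeholder name for your modified Docker image:

# build an image from the local Docker daemon (requires a recent Singularity with docker-daemon support)
sudo singularity build myapp.simg docker-daemon://myapp:latest
# copy the image to the cluster; the hostname is a placeholder
scp myapp.simg <cluster_login_node>:~/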

If you only want the original container from Docker Hub or the Nvidia GPU Cloud, you can simply convert it directly on the cluster.

Info

By default, Singularity uses the /tmp directory for converting images. On the cluster, /tmp is small and cannot hold large files, so we have to set environment variables that point to a local scratch directory instead. The example below uses COLMAP:

export SINGULARITY_CACHEDIR=/lscratch/$USER
export SINGULARITY_TMPDIR=/lscratch/$USER
mkdir -p $SINGULARITY_TMPDIR

ml Singularity
singularity build colmap.simg docker://colmap/colmap

After the conversion you will find the Singularity image colmap.simg in the directory where you ran the build command. Please note that the image is about 2 GB in size, and the intermediate Docker layers are still in the /lscratch directory. It is best to remove them right after a successful conversion:

rm -R $SINGULARITY_TMPDIR/docker
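You can then verify that nothing large is left behind in the scratch directory, e.g.:

du -sh /lscratch/$USER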

Extended conversion with authentication

In some cases, you must be authenticated to access Docker images. This is especially true for the Nvidia GPU Cloud (NGC) and some private repositories. In such cases you have to provide the authentication information via environment variables.

For Docker

export SINGULARITY_DOCKER_USERNAME=<USERNAME>
export SINGULARITY_DOCKER_PASSWORD=<YOUR_AUTH_TOKEN>

For NGC

export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
export SINGULARITY_DOCKER_PASSWORD=<YOUR_AUTH_TOKEN>

Please note that in the case of NGC, SINGULARITY_DOCKER_USERNAME is the literal string $oauthtoken, not a variable to be expanded to your username.
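Putting it together, a build from NGC might look like this (the PyTorch image and its tag are only an example):

export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
export SINGULARITY_DOCKER_PASSWORD=<YOUR_AUTH_TOKEN>
# remember to also set SINGULARITY_CACHEDIR/SINGULARITY_TMPDIR as shown above
singularity build pytorch.simg docker://nvcr.io/nvidia/pytorch:21.05-py3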

How to generate the auth tokens is beyond the scope of this documentation. If you are interested, post a request in the #cluster channel on CIIRC Slack and we will add a how-to.

Existing Singularity images

The cluster already hosts some images converted from the Nvidia GPU Cloud. If you need another one, ask in the #cluster channel on CIIRC Slack and we will convert it for you.

The images are stored in:

/opt/apps/singularity_images
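You can list the available images with:

ls /opt/apps/singularity_images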

Examples

For instance, you can run a container straight from Docker Hub without building an image first:

singularity run docker://godlovedc/lolcow

Adapted from https://docs.nvidia.com/ngc/ngc-user-guide/singularity.html

The Singularity command below mounts the present working directory on the host to /host_pwd in the container and sets the working directory of the container process to /host_pwd. With this set of flags, the <cmd> to be run is launched from the host directory Singularity was called from.

$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd <image.simg> <cmd>
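For instance, with the COLMAP image built earlier (the exact command is only an illustration):

$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd colmap.simg colmap -h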

Note: Binding to a directory which doesn't exist within the container image requires kernel and configuration support that may not be available on all systems, particularly those running older kernels such as CentOS/RHEL 6. When in doubt, contact your system administrator.

Command Line Execution with Singularity

Running the container with Singularity from the command line looks similar to the command below.

$ singularity exec --nv <app_tag>.simg <cmd>

For example, to run the NAMD executable in the container:

$ singularity exec --nv namd_2.12-171025.simg /opt/namd/namd-multicore

Interactive Shell with Singularity

To start a shell within the container, run the command below:

$ singularity exec --nv <app_tag>.simg /bin/bash

For example, to start an interactive shell in the NAMD container:

$ singularity exec --nv namd_2.12-171025.simg /bin/bash