This document describes how to use Singularity, a software tool that facilitates the movement of software applications and workflows between various computational environments, on the Savio high-performance computing cluster at the University of California, Berkeley.
Singularity allows you to bring already-built research applications and workflows from other Linux environments onto Savio and run them on the cluster, typically without reinstallation or reconfiguration. Singularity packages those applications and workflows in “containers,” and runs them within each container’s boundaries.
Singularity allows you to create containers, or find and obtain containers from others, and then run them on any Linux platform where Singularity is installed. Research software that you or others have packaged up into Singularity containers can be copied to -- and run on -- multiple clusters, cloud environments, workstations, and laptops.
Singularity thus enables “Bring Your Own Environment” computing. It is conceptually similar to Docker, a well known software containerization platform that isn’t compatible with the security models used on Savio and other traditional High Performance Computing (HPC) environments. Both Singularity and Docker, in turn, have some similarities to virtual machines.
Singularity containers that you use on Savio must be created on a different computer. Root permission is required to create Singularity containers, and users are not allowed to run as root on the cluster. Options for creating image-based Singularity containers, which can then be run on Savio under a user’s normal set of permissions, are described below.
Support for Singularity on Savio
The Berkeley Research Computing program makes Singularity available on UC Berkeley’s Savio HPC cluster without warranty. Users are responsible for a basic understanding of software containerization technology, which Singularity implements.
Users of the software are responsible for understanding that Singularity is an emerging technology, and that its future maintenance and development is provided by (and dependent upon) the Singularity open source community; BRC staff participate in this community, but the Singularity project is otherwise independent of BRC.
Documentation and user support are provided by the Singularity project and its several community resources, not by BRC; please see the project web site, Singularity documentation, Google Group, Slack channel, and/or GitHub repository.
Users may not attempt to escalate privileges on the Savio nodes that underlie Singularity containers, per the BRC User Access Agreement section on "Altering Authorized Access." Permission to use Savio will be revoked for any user who violates this policy.
BRC reserves the right to disable access to Singularity, should the Program determine that the software poses a security threat or other unacceptable risk to the Savio cluster.
Creating Singularity container images
A number of methods are available to create Singularity container images. For more information on each of these, please see the resources listed in the Support section, above.
- Install the Singularity application and create a container on a physical or virtual Linux machine where you have root privileges, as described in the Singularity documentation.
- Use (or extend) BRC's existing Singularity image creation recipes, available on GitHub.
- Use the Singularity Hub workflow, as described on the Singularity Hub website.
- Import an existing Docker image to Singularity, as described in the Singularity documentation (look for how to "download pre-built images" for the version of Singularity you are using).
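As an illustrative sketch of the Docker-import approach (image and file names below are placeholders, and the exact commands vary by Singularity version; consult the documentation for the version you have installed on a Linux machine where you hold root privileges):

```shell
# Pull a pre-built image from Docker Hub and convert it to a
# Singularity image file (placeholder image name):
singularity pull docker://ubuntu:18.04

# Or build an image from a definition file that starts from a
# Docker base image (requires root on your local machine):
sudo singularity build mycontainer.img mycontainer.def
```

The resulting image file can then be copied to Savio (e.g., with scp or Globus) and run there under your normal user permissions.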
Other methods are in the pipeline, and will be added to the list above when they become available.
Running Singularity containers on Savio
To submit a batch job that runs Singularity on Savio, create a SLURM job script file. A simple example follows; the directive values and paths are illustrative placeholders, and on Savio you will typically also specify a partition and an account:

#!/bin/bash
# Job name:
#SBATCH --job-name=my_singularity_job
# Wall clock limit:
#SBATCH --time=00:30:00
## Command(s) to run:
singularity exec /path/to/container/mycontainer.img \
  /global/home/users/mylogin/scripts/execscript.sh
Usage instructions for Singularity’s exec command can be found in the Singularity documentation (look for the "Interact with Images" section of the User Guide).
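As a quick illustration of exec outside of a batch script (container path and commands are placeholders), you can run an arbitrary program from inside the container on a compute node:

```shell
# Run a command provided by the container's own environment:
singularity exec /path/to/container/mycontainer.img cat /etc/os-release

# Run a script stored on the host; your home directory is bound
# into the container automatically, so this path resolves:
singularity exec /path/to/container/mycontainer.img ~/scripts/execscript.sh
```

Note that exec runs a single command and exits; to get an interactive prompt inside the container instead, Singularity provides a shell subcommand.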
Accessing your Savio storage from within Singularity containers
It can be useful for scripts running within Singularity to reference directories outside the Singularity container, e.g., the user’s directory in
/global/scratch on the Savio cluster. This allows processes within the Singularity container to access scripts and data that are not packaged in the container itself.
A user’s home directory is automatically bound within the container, as /global/home/users/mylogin/ (substituting your user ID for mylogin).
To reference other paths or files on Savio from inside the container, you’ll need to specify a bind point using the ‘-B’ option to the exec command. The following example, performing such a bind, could replace the last line of the batch script example given above:
singularity exec -B /global/scratch/path-on-savio/:/path-in-container/ \
  /path/to/container/mycontainer.img /global/home/users/mylogin/scripts/execscript.sh
In the above example:
- The Singularity container image path (/path/to/container/mycontainer.img) might be in a user’s home or group directory, or on the cluster's scratch file system. The choice often hinges on image file size, as larger images are usually stored on scratch due either to capacity limits in a home or group directory, or because a large image will load faster from scratch storage.
- /global/scratch/path-on-savio/ is a directory in Savio’s scratch file system, where the data to be computed over can be found, and where outputs from the computation will be stored for retrieval once the job is complete.
- References to the directory /global/scratch/path-on-savio/ on Savio’s scratch file system appear within the container as /path-in-container/, which is bound (mapped) to that directory via the ‘-B’ option to the exec command.
- The script file execscript.sh, to be run by Singularity, is located in scripts/ within the ‘mylogin’ user’s home directory. (We recommend that you maintain scripts outside your Singularity container.)
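If a job needs several host directories that are not packaged in the container, the ‘-B’ option accepts a comma-separated list of source:destination pairs. A sketch, with all paths as placeholders:

```shell
# Bind both a scratch directory and a data directory from the host
# into the container (src:dest pairs, comma-separated):
singularity exec \
  -B /global/scratch/path-on-savio/:/path-in-container/,/global/home/users/mylogin/data:/data \
  /path/to/container/mycontainer.img /global/home/users/mylogin/scripts/execscript.sh
```

Keeping data on scratch and binding it in this way avoids rebuilding the container whenever inputs change.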
The Singularity documentation provides further detail on user-defined bind points (see "Working with Files").