Singularity services are still considered to be in BETA

We at RIT do not yet have enough experience with Singularity to guarantee functionality, or to know which kinds of applications will or won't work well. We will provide reasonable-effort support as we are able.

 

Background on Singularity

Singularity is developed by Lawrence Berkeley National Laboratory, the National Institutes of Health, and Lawrence Livermore National Laboratory, and has been widely adopted in research computing, high-performance clusters, and higher education. Its primary benefit is that it enables us to host applications that fall outside our models of standard support or supportable operating systems. Because each container is not run as root - the user inside the container is the user that launched it - file permissions are preserved and enforced, and we can safely run images within that user's security context.
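
As a quick illustration of that security model: the user and group inside the container are the same as the user who launched it. The snippet below is a sketch (the image name is a placeholder), using the exec subcommand to run a single command inside a container after loading the modules described under Usage below.

    # on the host: note your numeric user id
    id -u
    # inside the container: the same id, since the container runs as the launching user
    singularity exec /opt/singularity/images/<image name>.img id -u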

Singularity is a containerization platform for use in research environments and HPC clusters. It is similar to other container systems - such as LXC and Docker - in that it allows us to build stand-alone images using otherwise unsupportable (outdated, non-standard, or non-LTS) base OS images and then configure application suites to run within that image. Images can be configured to use host hardware (GPUs), software (CUDA), and storage (research-storage, user home directories, and other storage paths).
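
As one example of host hardware access: recent Singularity releases add an --nv flag that maps the host's NVIDIA driver into the container. Whether you need it here depends on the Singularity version installed and on how a given image was built, so treat this as a sketch rather than a guaranteed recipe (the image name is a placeholder).

    # on a GPU-equipped host, expose the host NVIDIA driver inside the container
    singularity run --nv /opt/singularity/images/<image name>.img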

An issue we frequently run into with research software is guaranteeing the reproducibility of a set of code or experiments over time. As OS and application libraries and binaries are updated, we have no mechanism to guarantee (or even approximate) the state of an application environment as it was when previous research was done, or when a course lab was last conducted. Even with a fairly relaxed upgrade cycle of roughly 24 months or more, we routinely run into issues - often many months after an update - where a basic OS patch has broken something in an application; these problems are very time consuming to track down and, when possible, to develop a solution for.

At RIT, we hope to use Singularity to support applications such as TensorFlow, Multi2Sim, Torch7, and other tools we cannot normally support. At this time we provide pre-made images for you to use; eventually we plan to allow user-generated images, but for the time being all images must be generated by IT support. One general rule of thumb is that if a Docker definition is available (from Docker Hub or elsewhere), we can usually add that application to the list of supported apps very easily (see the definition-file sketch after the list below). In time, we also hope to allow images that contain many applications in one image - this would be ideal for research labs or academic courses where all users need the same environment, or where applications interact with each other.

As of this writing (7/06/2017), we have images built for the following applications:

  • Caffe
  • Multi2Sim
  • PyTorch
  • TensorFlow
  • Theano
  • base CentOS 5 - for legacy application support

We plan to offer many more applications over time. If you need an application not listed here, let us know and we can look into building an image for it.
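
To give a sense of why a Docker definition makes this easy: a Singularity definition file can bootstrap its base image directly from Docker Hub and then layer packages on top. The sketch below is illustrative only - the base image, packages, and runscript are assumptions, not one of our production definitions.

    # example.def - illustrative Singularity definition bootstrapping from Docker Hub
    Bootstrap: docker
    From: ubuntu:16.04

    %post
        # build-time commands, run once inside the image as it is created
        apt-get update && apt-get install -y python-pip
        pip install tensorflow

    %runscript
        # what "singularity run" executes for this image
        exec python "$@"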

Images

All images are currently built and maintained by IT support; eventually we plan to allow user-generated images.
Images are stored in /opt/singularity/images. If the system you wish to use does not have this directory, let us know. 
Image names describe which software or suite each image provides. We make a reasonable attempt to maintain these images, but we are not responsible for debugging your code as newer images are created.

In the images folder there are images for each application using the following naming convention: <image/app name>_<internal versioning>_<git hash for this specific build>.img
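
For example, a listing of the images directory looks like the following (the names shown are the ones referenced elsewhere on this page):

    ls /opt/singularity/images
    # multi2sim_0.0.1_c45b81225dd2e0476055d71972d8b7fb020f84cd.img
    # tensorflow_0.0.1_82f00df1c07bc4a3ad242da2a272c116a4cbede3.img
    # tensorflow_0.0.5_e9744a3d1cafc8687b86b9d5397ef4fa64d9c361.img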

Once an image has been built, it is static and will not be changed. As newer versions of applications become available, we will build additional images with those versions, leaving the older images unchanged. At some point we may develop a policy to keep only specific versions of images around long term, but at this time there is no policy regarding image retention.

Usage

Standalone usage of Singularity can be done with singularity run <image name> <command - optional>, once the appropriate modules have been loaded.

For use on a cluster, use SBATCH files to submit jobs through SLURM. If you have not already, please read the getting started page on the RC wiki.

Singularity is simply a binary that you invoke to bring you into a new environment. It can take commands and will run them inside the container; given no command, it will simply open a bash shell. It mounts your home directory inside the container, so anything in your home directory is accessible inside the container as well. Make sure to save to your home directory or another mapped folder, or, if on the cluster, use the SBATCH file to direct output to your home directory.
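
A quick sanity check of the home-directory mount (the image name is a placeholder): the two listings below should match.

    # host view of your home directory
    ls ~
    # the same listing from inside the container, since your home directory is mounted automatically
    singularity exec /opt/singularity/images/<image name>.img ls ~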

We still need to enable /opt/singularity on many hosts, so if you don't see that path on your system(s), please let us know.

The basic process for running singularity images is as follows:

  1. Load Singularity into your environment

    module load module_future #this is to enable unsupported/beta modules
    module load singularity
  2. Run your command inside a container; omitting the command will instead present a shell within the container

    #singularity run /opt/singularity/images/<image name>.img <command you would normally run>
    singularity run /opt/singularity/images/tensorflow_0.0.1_82f00df1c07bc4a3ad242da2a272c116a4cbede3.img
    singularity run /opt/singularity/images/tensorflow_0.0.1_82f00df1c07bc4a3ad242da2a272c116a4cbede3.img python /test/test_tensorflow.py cpu 1000

    Another common request is to map a local (or network) directory into a Singularity image. This can be accomplished by binding a path on the host filesystem to a path inside the container filesystem:

    #singularity run --bind /<host_filepath>:/<container_image_filepath> <image path and name> <command-optional>
    singularity run --bind /research-storage:/mnt /opt/singularity/images/multi2sim_0.0.1_c45b81225dd2e0476055d71972d8b7fb020f84cd.img

While in a Singularity shell, you should see a prompt prefix of "(singularity)USER@HOSTNAME:#" as opposed to the standard "USER@HOSTNAME:#".

 

Singularity-Hub

Singularity Hub is a Docker Hub-like repository of Singularity definition files, created by the larger Singularity community to make adding applications even easier.

https://singularity-hub.org/
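
For reference, the Singularity client can pull images published there directly via the shub:// URI scheme. The example below uses a well-known public hub image; availability on our systems depends on the client version installed, so treat it as a sketch.

    # pull a community image from Singularity Hub (public example image)
    singularity pull shub://vsoch/hello-world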

 

Example usage

Once logged in to ion or any of the other head nodes, either submit a job or start an interactive session with GPU access through the SLURM scheduler:

If running the job interactively, answer any resource questions for your reservation/allocation (e.g. 4 cores, 20480 MB RAM, 550 minutes), leave the qos as “free”, and leave the partition set to “work”.

Then, within your job file (or interactively), run the following:


sbatch --gres=gpu jobfile.sh

#sinteractive --gres=gpu 
##Answer any resource questions for your reservation/allocation ie: 4 cores, 20480 MB ram, 550minutes, leave the qos as “free” and partition set to “work”

module load module_future

module load singularity

singularity run /opt/singularity/images/tensorflow_0.0.5_e9744a3d1cafc8687b86b9d5397ef4fa64d9c361.img 

Do whatever you would like within the image, such as python -c "import tensorflow as tf", or anything else you need using the TensorFlow tools.
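
The jobfile.sh referenced above might look like the sketch below. The resource values mirror the interactive example (4 cores, 20480 MB RAM, 550 minutes, qos free, partition work) and are placeholders to adjust for your allocation; the run command reuses the test script shown earlier on this page.

    #!/bin/bash
    #SBATCH --gres=gpu
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=20480
    #SBATCH --time=550
    #SBATCH --qos=free
    #SBATCH --partition=work

    module load module_future    # enables unsupported/beta modules
    module load singularity

    # run a command inside the tensorflow image; output lands in your home directory by default
    singularity run /opt/singularity/images/tensorflow_0.0.5_e9744a3d1cafc8687b86b9d5397ef4fa64d9c361.img python /test/test_tensorflow.py cpu 1000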

Several other images, with various versions of these tools, are also available in /opt/singularity/images.

http://singularity.rit.edu:8080/

Additional information

Below are some additional pages for documentation on Singularity:

http://singularity.lbl.gov/quickstart

http://singularity.lbl.gov/user-guide 

https://hpc.nih.gov/apps/singularity.html
