SPORC - High Performance Computing Cluster
SPORC (Scheduled Processing On Research Computing) is Research Computing's High Performance Computing (HPC) cluster. An HPC cluster is a collection of connected computers coordinated to perform tasks with high efficiency. SPORC is specially made to assist researchers in expediting the process of collecting and analyzing data with its computational power. A more in-depth look at SPORC can be found on the SPORC - Overview page.
- 2304 cores (Intel® Xeon® Gold 6150 CPU @ 2.70GHz)
- 24 TB RAM
- 100 Gbit/sec RoCEv2 interconnect (Mellanox MLX5/Juniper QFX210-64c)
- 64 SuperMicro X11 systems with space for 4 GPU cards
- 16 Nvidia V100 cards
- 96 Nvidia P4 cards
Our large single-system image (SSI) compute node, affectionately referred to as "the Ocho," is a node in SPORC.
- SYS-7089P Supermicro 8-way
- 192 cores (Intel® Xeon® Platinum 8168 CPU @ 2.70GHz)
- 3.06 TB RAM
- 1 Nvidia V100 card
RC has regularly scheduled maintenance on the first Thursday of every month, starting at 0800 EST and ending at midnight. These outage windows may or may not be used at the discretion of the RC support staff, and generally will not require the entire allotted time. Some or all services may be unavailable during the window. SLURM jobs whose requested run time would overlap the outage window will be prevented from running until the window has passed.
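In practice, this means a job's requested time limit determines whether it can start before a maintenance window. A minimal batch-script sketch (the job name and program are placeholders, not SPORC-specific values):

```bash
#!/bin/bash
# Example SLURM batch script; job name and program are placeholder assumptions.
#SBATCH --job-name=short-job
#SBATCH --time=0-04:00:00   # 4-hour limit: short enough to fit before an upcoming window
#SBATCH --ntasks=1
#SBATCH --mem=4g

srun ./my_analysis          # placeholder for your own program
```

Submit with `sbatch job.sh`; `squeue -u $USER` will show the job pending if its time limit would cross the outage window.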
We currently offer numerous applications useful for gathering and analyzing data across a wide range of research topics. The following is a sample of the applications we currently provide for researchers. To see all available software, log on to SPORC and type `module avail` at the command line. More information can be found on the Using Modules page. We are currently in the process of creating documentation for the applications we provide; see Instructions.
- Ansys - complex engineering simulation
- Cadence - electronic design (chips, circuit boards, etc.)
- Comsol - interactive environment for modeling and simulating scientific and engineering problems
- MATLAB - create models and applications, develop algorithms, analyze data
- Torch - open-source machine learning framework
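Software on SPORC is managed with environment modules, so packages are loaded per session. A minimal sketch of a typical workflow (the module name `matlab` is an example assumption; `module avail` shows the real names):

```bash
module avail            # list all software available on the cluster
module load matlab      # load a package into the current session (name is an example)
module list             # confirm which modules are currently loaded
module unload matlab    # remove a module when finished with it
```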
High Throughput Computing (HTC)
High Throughput Computing (HTC) splits large jobs into many smaller, simpler tasks. These tasks run in parallel on SPORC, which allows data to be processed more efficiently and researchers to receive their results more quickly. Our HTC service gives researchers access to large amounts of computational power and high-quality resources, which can be shared among their group for collaboration. Researchers can manage their jobs, check the status of each job, and see the availability of resources.
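In SLURM, this split-into-many-small-tasks pattern is commonly expressed as a job array, where one script fans out into many independent tasks. A minimal sketch (the script and input file names are placeholder assumptions):

```bash
#!/bin/bash
# SLURM job array: 100 independent tasks, each handling one input chunk.
#SBATCH --job-name=htc-array
#SBATCH --array=1-100
#SBATCH --time=0-01:00:00
#SBATCH --ntasks=1

# SLURM_ARRAY_TASK_ID selects this task's slice of the work.
srun ./analyze_chunk input_${SLURM_ARRAY_TASK_ID}.dat
```

Each array element is scheduled as its own job, so results for finished chunks become available without waiting for the whole batch.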
Accelerator Assisted Compute
An accelerator is a hardware device or software program whose purpose is to enhance the overall performance of the computer. A hardware accelerator performs functions with its own custom logic faster than the CPU can. As a result, the time taken to perform specific jobs is cut down significantly and the CPU is freed to perform other tasks. For researchers, this means that utilizing accelerators can deliver research results faster.
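On a SLURM cluster, GPU accelerators such as SPORC's V100 and P4 cards are typically requested with a `--gres` flag. A minimal sketch (the exact GPU type strings on SPORC are an assumption, as is the program name):

```bash
#!/bin/bash
# Request one GPU for an accelerated job; "v100" is an assumed type string.
#SBATCH --job-name=gpu-job
#SBATCH --gres=gpu:v100:1
#SBATCH --time=0-02:00:00

srun ./gpu_program      # placeholder for a GPU-enabled (e.g. CUDA) program
```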
Cloud Computing
Research Computing will help researchers develop a cloud computing environment tailored to the researcher's needs. Assistance is provided to help researchers utilize cloud services and tools so that research data can be collected, stored, and analyzed efficiently.
Virtual Hosting Services
With Virtual Hosting, a physical server is divided into multiple isolated virtual environments by a software application. RIT schools, departments, researchers, or any other member of RIT can request virtualized hosting services for a new project or application, a new service, new portions of an existing environment, or a hosting platform. Guest hosting services are also available for research and administrative applications.