HII-HPC Cluster
In partnership with USF Research Computing, the Health Informatics Institute offers the HII-HPC Cluster to faculty and partners who require large-scale computational resources for bioinformatics workloads.
Overview
- Purpose
- Linux
- Connecting
- Filesystems
- Slurm
- Modules
- Getting Help
- Frequently Asked Questions
- Other Topics
Availability
(Note: this section may be out of date)
The HII-HPC Cluster reserves two maintenance windows on the same day each week:
- Every Thursday @ 10:00 AM EDT until 12:00 Noon EDT
  - Description: Updates to the Head Nodes (hii.rc.usf.edu and hii2.rc.usf.edu) - SSH logins and Screen/Tmux sessions may be terminated, but Slurm jobs already submitted will not be affected. A broadcast message will generally be sent to logged-in users on Wednesday if a downtime is expected the following day.
- Every Thursday @ 10:00 PM EDT until 06:00 AM EDT
  - Description: Updates to the Head Nodes and the Compute Nodes - SSH/Screen/Tmux sessions and scheduled Slurm jobs may be affected. A broadcast message will generally be sent to logged-in users on Wednesday if a downtime is expected the following day.
If you have long-running jobs that span the Thursday night change window, please verify that they are still running on Friday morning. Job-affecting changes are rare but occasionally necessary to maintain a healthy cluster.
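For example, the following sketch shows one way to confirm your jobs survived a maintenance window. It assumes the standard Slurm client commands squeue and sacct are available on the head node; the start time and output fields are illustrative and can be adjusted to your own workflow.

```bash
# Minimal sketch: confirm your Slurm jobs after a maintenance window.
# Assumes standard Slurm client commands (squeue, sacct) on the head node;
# adjust the start time and fields to suit your own submissions.

# List your jobs that are still pending or running
squeue -u "$USER"

# List jobs that started since Wednesday, including any that failed or
# were cancelled during the maintenance window
sacct -u "$USER" \
      --starttime "$(date -d 'last wednesday' +%F)" \
      --format=JobID,JobName,State,Elapsed,ExitCode
```

Jobs reported by sacct in a FAILED or CANCELLED state after the window are candidates for resubmission.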
News
(Note: this section may be out of date)
2017
- Please disregard the USF Research Computing message regarding quotas - HII maintains separate filesystems with enhanced quotas.
2016
- 2 Nodes (High Memory): 28 cores (Xeon E5-2650 v4 @ 2.30 GHz) with 1024 GB RAM @ 2400 MHz
- 40 Nodes: 20 cores (Xeon E5-2650 v3 @ 2.30 GHz) with 128 GB RAM @ 2133 MHz
2015
- 40 Nodes: 16 cores (Xeon E5-2650 v2 @ 2.60 GHz) with 128 GB RAM @ 1600 MHz
- DDN General Parallel File System (GPFS) Storage Cluster providing petabyte-scale capacity with scalable I/O