New pool of Big Memory compute nodes now accessible on Savio

August 17, 2016

A new pool of compute nodes is now accessible to users of Savio, the campus’s high-performance computing cluster. The Savio 2 BigMem pool currently offers four nodes with twice the memory of the nodes in the standard Savio 2 pool (128 GB versus 64 GB).

Other than having more RAM, these nodes feature hardware specifications identical to those of Savio 2’s standard compute nodes, including dual 12-core 2.3 GHz Intel Haswell Xeon CPUs, for a total of 24 cores per node. (For more details on these and other compute nodes on Savio, please visit the cluster’s System Overview.)
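To confirm these figures on the cluster itself, SLURM’s sinfo command can report per-partition node counts, CPU counts, and memory. A minimal sketch:

    # List node count, CPUs per node, and memory per node (in MB)
    # for the new Big Memory partition.
    sinfo -p savio2_bigmem -o "%P %D %c %m"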

The Savio 2 BigMem pool is a fourth pool of second-generation compute nodes based on Intel’s Haswell processor architecture, supplementing the three compute pools added to the cluster in October 2015: Savio 2 standard compute nodes, Graphics Processing Unit (GPU) nodes, and High Throughput Computing (HTC) nodes. Each of these pools offers distinct functionality, helping to support the highly diverse set of computational workflows used by campus researchers across a broad range of scientific domains.

Any Savio user can request that their job run on nodes in the new Savio 2 BigMem pool by specifying the savio2_bigmem partition when scheduling that job via SLURM sbatch or srun commands. (For Faculty Computing Allowance users, the cost in Service Units of using Savio 2 BigMem nodes is 1.2x the cost of using standard Savio 2 nodes; see the Service Units on Savio page for details.) For more information on the available scheduler partitions on Savio, and on running jobs, please see the Savio User Guide and Running Your Jobs documentation.
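As a minimal sketch of what this looks like in practice, the batch script below requests a single BigMem node; the job name, account, and walltime are illustrative placeholders, not values from this announcement:

    #!/bin/bash
    # Example SLURM batch script targeting the Savio 2 BigMem pool.
    #SBATCH --job-name=bigmem_example   # placeholder job name
    #SBATCH --account=fc_example        # placeholder; use your own allocation account
    #SBATCH --partition=savio2_bigmem   # the new Big Memory pool
    #SBATCH --nodes=1                   # one 24-core, 128 GB node
    #SBATCH --time=00:30:00             # placeholder walltime

    # Replace with your memory-intensive application.
    ./my_program

The same partition can also be requested for an interactive session, e.g. srun --partition=savio2_bigmem --account=fc_example --nodes=1 --time=00:30:00 --pty bash (again with a placeholder account name).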