
High Throughput Computing pool on Savio cluster

CSIRO HPC Cluster, via Wikimedia Commons

The campus’s high-performance computing (HPC) cluster, Savio, offers a High Throughput Computing (HTC) pool of compute nodes, enabling research computation that doesn’t fit traditional parallel computing paradigms.

If your compute jobs are loosely coupled, with few interdependencies and little or no need to communicate with one another at runtime, they may be well suited to Savio’s HTC nodes. Representative examples of such jobs, according to XSEDE, “include Monte Carlo simulations and other parameter sweep applications, where the same program is run with varying inputs, resulting in hundreds or thousands of executions” of that program.
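Parameter sweeps of this kind map naturally onto scheduler job arrays, where each array task runs the same program with a different input. The sketch below is a hypothetical illustration only: the partition name, parameter file, and `simulate` program are placeholders, not Savio-specific values; consult the current Savio documentation for actual partition and account names.

```shell
#!/bin/bash
# Hypothetical sketch: a parameter sweep as a SLURM job array.
# Partition name and program/file names are placeholders.
#SBATCH --job-name=param-sweep
#SBATCH --partition=savio2_htc   # assumed HTC partition name
#SBATCH --ntasks=1               # each array task needs just one core
#SBATCH --time=01:00:00
#SBATCH --array=1-500            # 500 independent runs of the same program

# Each array task reads one line of a parameter file and runs the
# same executable with that input, writing its own output file.
PARAMS=$(sed -n "${SLURM_ARRAY_TASK_ID}p" params.txt)
./simulate $PARAMS > "output_${SLURM_ARRAY_TASK_ID}.txt"
```

Submitted once with `sbatch sweep.sh`, this launches hundreds of independent single-core runs — exactly the loosely coupled pattern the HTC pool is designed for.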

Savio’s pool of HTC nodes was launched in October 2015, alongside a second, specialized pool of Graphics Processing Unit (GPU) nodes, and a third pool containing a new generation of standard compute nodes.

The twelve compute nodes in the HTC pool offer a total of 144 Haswell processor-based cores. Compared with Savio’s general-purpose compute nodes, they are optimized for single-core jobs, with faster CPUs (3.4 GHz vs. 2.5 or 2.3 GHz) and more memory (128 GB vs. 64 GB) per node, along with fewer cores per node (12 vs. 24 or 20 cores). You can also schedule your jobs in this pool on individual cores, rather than reserving full nodes.
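In practice, per-core scheduling means a single-core job requests one task rather than a whole node. A minimal sketch, assuming a SLURM-style scheduler (the partition name and program below are placeholders, not confirmed Savio values):

```shell
#!/bin/bash
# Minimal single-core job sketch for a per-core-scheduled pool.
# On node-scheduled partitions you would be allocated (and charged for)
# an entire node; here --ntasks=1 requests just one core.
#SBATCH --partition=savio2_htc   # assumed HTC partition name
#SBATCH --ntasks=1               # one task...
#SBATCH --cpus-per-task=1        # ...on a single core
#SBATCH --time=00:30:00

./my_analysis input.dat          # placeholder program
```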

Savio’s HTC nodes can presently be accessed in the following ways:

  • By faculty and PIs who purchase HTC nodes when making a condo contribution to the Savio cluster, and by their affiliated researchers with Savio user accounts. This is the preferred option for researchers whose jobs regularly exceed the 72-hour runtime limit on shared Savio nodes.
  • By condo owners – including those who have contributed non-HTC nodes to the cluster – and their affiliated researchers with Savio user accounts, who can run jobs on the HTC nodes by specifying that scheduler queue and a “low-priority” quality of service (QoS) option in their job scripts. (These jobs are subject to possible preemption by higher-priority uses of those nodes.)
  • By early adopters. If you’re not a condo owner (e.g., if you are using Savio via a Faculty Computing Allowance), and wish to obtain access to the HTC nodes, please let us know of your interest.
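For the second access path above, a condo user would point the job at the HTC queue and add a low-priority QoS flag. The sketch below is illustrative only: the partition, account, and QoS names are assumptions, and the actual values should be taken from Savio’s documentation or your condo agreement.

```shell
#!/bin/bash
# Sketch: a condo user running on the HTC nodes at low priority.
# Partition, account, and QoS names are all placeholders.
#SBATCH --partition=savio2_htc   # assumed HTC partition name
#SBATCH --account=co_mygroup     # hypothetical condo account
#SBATCH --qos=savio_lowprio      # assumed low-priority QoS name
#SBATCH --ntasks=1
#SBATCH --time=04:00:00

./my_job                         # placeholder program
```

Because the job runs at low priority, it can be preempted whenever a higher-priority job needs those nodes, so workloads submitted this way should checkpoint or be cheap to rerun.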

Savio’s ability to accommodate non-traditional HPC jobs — those better suited to an HTC paradigm or requiring GPUs — strengthens the Berkeley Research Computing (BRC) Program’s support for the increasing variety of computational workflows used by campus researchers across a broad range of scientific domains.

Increased support for diverse types of computation is also the goal of BRC’s ongoing flexible compute experiments, and of its services that facilitate access to emerging types of computational resources offered by national computing facilities and the commercial cloud.
