By Aron D. Roberts

GPU nodes added to campus's Savio HPC cluster

NVIDIA Tesla K80 dual-GPU accelerator board

A set of fifteen compute nodes equipped with graphics processing units (GPUs) has been added to UC Berkeley’s high performance computing (HPC) cluster, Savio.

If your research can take advantage of the large number of cores offered by GPUs, these new nodes might be a good fit for your work. Researchers using Savio plan to use the GPU nodes for such diverse applications as machine learning, image processing, brain modeling, and simulation of vision correction. Nine of the cluster’s fifteen GPU nodes represent a condo contribution by Professor Hiroshi Nikaido in Molecular and Cell Biology, who is working with Attilio V. Vargiu at the University of Cagliari in Italy to study the interaction of drugs with the multidrug transporter (a biological mechanism that protects cells from foreign substances) and to process X-ray crystallography images. Their research aims to better understand the mechanisms bacteria use to resist antibiotic drugs.

Each of Savio’s fifteen new GPU nodes is equipped with two NVIDIA Tesla K80 dual-GPU accelerator boards, for a total of four GPUs per node, and thus a cluster-wide total of sixty GPUs. Each individual GPU, in turn, provides 2,496 processor cores and 12 GB of onboard memory. Every GPU node also includes dual quad-core Intel Haswell processors and 64 GB of physical memory, to accommodate jobs requiring both GPUs and CPUs.

Programs that utilize GPUs can be written in high-level languages, such as C, C++, and Python, using either the OpenCL or CUDA frameworks. Pre-built GPU-capable applications or GPU-enhanced modules for your favorite analytics tools may also be available. (If you have questions about writing or installing software that makes use of Savio’s GPU nodes, please contact us.)
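To give a flavor of GPU programming with CUDA, here is a minimal, self-contained vector-addition sketch in CUDA C. The array size and launch configuration are arbitrary choices for illustration; on Savio you would compile a program like this with NVIDIA's nvcc compiler and run it on one of the GPU nodes.

```cuda
// Minimal CUDA vector-addition sketch (illustrative only; compile with nvcc).
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Unified (managed) memory keeps the host-side code short in this sketch;
    // it is accessible from both the CPU and the GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the kernel to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same computation could equally be expressed with OpenCL, or from Python via GPU-enabled libraries; the thread-per-element pattern shown here is the common starting point.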

At present, Savio’s GPU nodes can be accessed in the following ways:

  • By faculty and PIs who purchase GPU nodes when making a condo contribution to the Savio cluster, and by their affiliated researchers with Savio user accounts.
  • By condo owners – including those who have contributed non-GPU nodes to the cluster – and their affiliated researchers with Savio user accounts, who can run jobs on the GPU nodes by specifying the GPU scheduler queue and a “low-priority” quality of service (QoS) option in their job scripts. (These jobs are subject to possible preemption by higher-priority usage of those nodes.) See Savio’s User Guide for information on specifying these options.
  • By early adopters. If you’re using Savio via a Faculty Computing Allowance or any other form of access other than condo ownership, and wish to obtain early access to the scheduler queue that provides access to the GPU nodes, please let us know of your interest.
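As a rough illustration of the second access route above, a low-priority GPU job script might look like the following. The partition, QoS, and account names here are placeholders, not confirmed values – consult Savio’s User Guide for the actual queue and QoS names that apply to your allocation.

```shell
#!/bin/bash
# Illustrative SLURM job script for Savio's GPU nodes.
# Partition, QoS, and account names below are placeholders; see the
# Savio User Guide for the correct values for your allocation.
#SBATCH --job-name=gpu_test
#SBATCH --partition=savio2_gpu    # GPU scheduler queue (illustrative name)
#SBATCH --qos=savio_lowprio       # low-priority QoS, preemptible (illustrative name)
#SBATCH --account=your_account    # replace with your Savio account
#SBATCH --gres=gpu:1              # request one of the node's four GPUs
#SBATCH --time=00:30:00

module load cuda
./my_gpu_program                  # your GPU-enabled executable
```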

Following a period of early adopter testing, access to the scheduler queue for the GPU nodes is expected to be expanded to include all Savio users.

