Cortex User Guide

Account Requests | Logging in | Transferring Data | Storage and Backup | Hardware Configuration | Scheduler Configuration | Software Configuration | Cluster Status | Getting Help

This is the User Guide for users of Cortex, a Neurosciences high-performance computing cluster administered by Berkeley Research Computing at the University of California, Berkeley.

Account Requests

You can request new accounts on the Cortex cluster via this online form.

Logging in

Cortex uses One Time Passwords (OTPs) for login authentication. For details, please see Logging in.

Use SSH to log into:  hpc.brc.berkeley.edu
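
For example, from a terminal on your own machine, a login might look like the following (here "myusername" is a placeholder for your actual cluster username):

  # Open an SSH session to the Cortex login node.
  # "myusername" is a placeholder; substitute your own cluster username.
  ssh myusername@hpc.brc.berkeley.edu

When prompted, enter your One Time Password as described on the Logging in page.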

Transferring Data

To transfer data to and from Cortex, use the cluster's dedicated Data Transfer Node (DTN):  dtn.brc.berkeley.edu
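
For example, to push a local file to your home directory on the cluster through the DTN, you could use scp or rsync (here "myusername" and "data.tar.gz" are placeholders):

  # Copy a local file to your cluster home directory via the DTN.
  scp data.tar.gz myusername@dtn.brc.berkeley.edu:~/

  # Or use rsync, which can resume interrupted transfers and show progress.
  rsync -avP data.tar.gz myusername@dtn.brc.berkeley.edu:~/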

If you're using Globus to transfer files, the Globus endpoint is:  ucb#brc

For details about how to transfer files to and from the cluster, please see Transferring Data.

Storage and Backup

The following storage systems are available to Cortex cluster users:

Name        Location                       Quota  Backup  Allocation  Description
HOME        /global/home/users/$USER       10GB   Yes     Per User    Home directory ($HOME) for permanent data
CLUSTERFS   /clusterfs/cortex/users/$USER  none   No      Per User    Private storage
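
Because HOME has a 10GB quota and is intended for permanent, backed-up data, one common pattern is to keep large working datasets under CLUSTERFS instead. A minimal sketch (the directory name "myproject" and the file "large_dataset" are placeholders):

  # Create a working directory in the quota-free, non-backed-up CLUSTERFS space.
  mkdir -p /clusterfs/cortex/users/$USER/myproject

  # Move a large dataset out of the quota-limited home directory.
  mv $HOME/large_dataset /clusterfs/cortex/users/$USER/myproject/

Keep in mind that CLUSTERFS is not backed up, so retain copies of anything irreplaceable elsewhere.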

Hardware Configuration

Cortex is a heterogeneous cluster, with a mix of three different types of nodes, each with different hardware configurations. Two of these configurations offer Graphics Processing Units (GPUs) in addition to CPUs:

Partition  Nodes  Node List           CPU                 Cores  Memory  GPU
cortex     14     n00[00-01].cortex0  Intel Xeon E5410    8      16GB    1x Tesla K40
                  n00[02-11].cortex0  Intel Xeon E3-1220  4      16GB    -
                  n00[12-13].cortex0  Intel Xeon X5650    12     24GB    2x Tesla M2050

Please be aware of these various hardware configurations, along with their associated scheduler configurations, when specifying options for running your jobs.

Scheduler Configuration

The Cortex cluster uses the SLURM scheduler to manage jobs. For many examples of job script files that you can adapt and use for running your own jobs, please see Running Your Jobs.

When submitting your jobs via SLURM's sbatch or srun commands, use the following options (an example job script, which you can adapt, appears after the table below):

  • The "cortex" partition: --partition=cortex
  • The "cortex" account: --account=cortex
  • Optionally, a node feature, as shown in the "Node Features" column in the table below. For example, to select only nodes providing a Tesla K40 GPU: --constraint=cortex_k40
    If the node feature ("--constraint") option is not used, the default order of dispatch to nodes will be to n00[02-11].cortex0, then to n00[00-01].cortex0, and finally to n00[12-13].cortex0.
  • The "cortex_c32" QoS will be applied by default. Thus no QoS option is required when using the Cortex resources.
     
Partition  Account  Nodes  Node List           Node Features         Shared  QoS         QoS Limit
cortex     cortex   14     n00[00-01].cortex0  cortex, cortex_k40    Yes     cortex_c32  48 CPUs max per user
                           n00[02-11].cortex0  cortex, cortex_nogpu
                           n00[12-13].cortex0  cortex, cortex_fermi
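
Putting these options together, a minimal sketch of a batch job script might look like the following. Only the --partition, --account, and --constraint values come from the tables above; the job name, resource requests, time limit, output file, and workload are placeholders to adapt:

  #!/bin/bash
  #SBATCH --job-name=example_job        # Placeholder job name
  #SBATCH --partition=cortex            # The "cortex" partition
  #SBATCH --account=cortex              # The "cortex" account
  #SBATCH --constraint=cortex_k40       # Optional: run only on nodes with a Tesla K40 GPU
  #SBATCH --ntasks=1                    # Placeholder resource request
  #SBATCH --time=00:30:00               # Placeholder time limit (30 minutes)
  #SBATCH --output=example_job.%j.out   # Placeholder output file (%j expands to the job ID)

  # Placeholder workload; replace with your own commands.
  echo "Running on $(hostname)"

Save the script under a name of your choosing (for example, example_job.sh) and submit it with:  sbatch example_job.sh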

To help ensure fair access to cluster resources for all users, a standard fair-share policy, with a decay half-life of 14 days (2 weeks), is enforced.
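
If you would like to see how the fair-share policy is currently weighting your jobs, SLURM's sshare command reports your recorded usage and fair-share factor:

  # Show fair-share usage and the resulting fair-share factor for your user.
  sshare -u $USER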

Software Configuration

For details about how to find and access the software provided on the cluster, as well as on how to install your own, please see Accessing and Installing Software.
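
As a quick illustration, and assuming the cluster provides an environment-modules setup (the Accessing and Installing Software page is the authoritative reference), provided software is typically listed and loaded like this:

  # List the software modules available on the cluster.
  module avail

  # Load a module into your shell environment.
  # "python" is an illustrative module name; pick one from the "module avail" listing.
  module load python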

Cluster Status

The live status of the Cortex cluster is displayed here.
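
From a login session, you can also query the scheduler directly for a quick snapshot of node and job status:

  # Show the state of the nodes in the cortex partition.
  sinfo -p cortex

  # List the jobs currently queued or running in the cortex partition.
  squeue -p cortex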

Getting Help

For inquiries or service requests, please see BRC's Getting Help page or send email to brc-hpc-help@berkeley.edu.