This is the User Guide for users of Cortex, a Neurosciences high-performance computing cluster administered by Berkeley Research Computing at the University of California, Berkeley.
You can request new accounts on the Cortex cluster via this online form.
Cortex uses One Time Passwords (OTPs) for login authentication. For details, please see Logging in.
Use SSH to log into:
To transfer data to and from Cortex, use the cluster's dedicated Data Transfer Node (DTN):
If you're using Globus to transfer files, the Globus endpoint is:
For details about how to transfer files to and from the cluster, please see Transferring Data.
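As an illustration, a transfer through the DTN from your local machine might look like the following. The hostname `dtn.example.edu`, the username, and the paths here are placeholders, not the cluster's actual values; substitute the DTN hostname given above.

```shell
# Copy a local directory to your home directory on the cluster via the DTN
# (hostname and paths below are placeholders)
rsync -avh --progress ./my_data/ myusername@dtn.example.edu:~/my_data/

# Or copy a single file with scp
scp results.tar.gz myusername@dtn.example.edu:~/
```

Using `rsync` rather than `scp` for directories lets you resume interrupted transfers without re-copying files that already arrived intact.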
Storage and Backup
The following storage systems are available to Cortex cluster users:
|Quota||Backup||Allocation||Description|
|10GB||Yes||Per User||Home directory ($HOME) for permanent data|
|none||No||Per User||Private storage|
Cortex is a heterogeneous cluster, with a mix of three different types of nodes, each with different hardware configurations. Two of these configurations offer Graphics Processing Units (GPUs) in addition to CPUs:
|Partition||Nodes||Node List||CPU||Cores/Node||Memory||GPU|
|cortex||14||n00[00-01].cortex0||Intel Xeon E5410||8||16GB||1x Tesla K40|
|||n00[02-11].cortex0||Intel Xeon E3-1220||4||16GB||none|
|||n00[12-13].cortex0||Intel Xeon X5650||12||24GB||2x Tesla M2050|
Please be aware of these various hardware configurations, along with their associated scheduler configurations, when specifying options for running your jobs.
The Cortex cluster uses the SLURM scheduler to manage jobs. For many examples of job script files that you can adapt and use for running your own jobs, please see Running Your Jobs.
When submitting your jobs via SLURM's srun command, use the following options:
- The "cortex" partition: --partition=cortex
- The "cortex" account: --account=cortex
- Optionally, a node feature, as shown in the "Node Feature" column in the table below. For example, to select only nodes providing a Tesla K40 GPU: --constraint=cortex_k40
If the node feature ("--constraint") option is not used, the default order of dispatch to nodes will be n00[02-11].cortex0 first, then n00[00-01].cortex0, and finally n00[12-13].cortex0.
- The "cortex_c32" QoS will be applied by default. Thus no QoS option is required when using the Cortex resources.
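Putting those options together, a minimal batch script might look like the sketch below. The job name, task count, and walltime are placeholder values to adapt to your own work; the partition, account, and constraint values follow the scheduler configuration described in this section.

```shell
#!/bin/bash
#SBATCH --job-name=example_job      # placeholder job name
#SBATCH --partition=cortex          # the "cortex" partition
#SBATCH --account=cortex            # the "cortex" account
#SBATCH --constraint=cortex_k40     # optional: restrict dispatch to Tesla K40 nodes
#SBATCH --ntasks=1                  # placeholder resource request
#SBATCH --time=00:30:00             # placeholder walltime

# Your job's commands go here:
echo "Job running on $(hostname)"
```

Submit the script with `sbatch myjob.sh`. Omitting the `--constraint` line accepts the default dispatch order described above; no `--qos` option is needed, since "cortex_c32" is applied by default.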
|Partition||Account||Nodes||Node List||Node Feature||Shared||QoS||QoS Limit|
|cortex||cortex||14||n00[00-01].cortex0||cortex, cortex_k40||Yes||cortex_c32||48 CPUs max per user|
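The same partition and account options apply when requesting an interactive session with srun; for example (the task count and time limit are placeholders):

```shell
# Request an interactive shell on a cortex node for 30 minutes
srun --partition=cortex --account=cortex --ntasks=1 --time=00:30:00 --pty bash -i
```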
To help ensure fair access to cluster resources for all of its users, a standard fair-share policy, with a decay half-life of 14 days (2 weeks), is enforced.
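You can inspect your own fair-share standing with SLURM's `sshare` utility; for example (the username and account here are placeholders):

```shell
# Show fair-share usage and factors for your user under the cortex account
sshare -u myusername -A cortex
```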
For details about how to find and access the software provided on the cluster, as well as on how to install your own, please see Accessing and Installing Software.
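Software on the cluster is typically provided through environment modules. A session might look like the following; the module name `gcc` is only an example, not a statement of what Cortex actually provides.

```shell
module avail        # list the software modules available on the cluster
module load gcc     # load a module into your environment (name is an example)
module list         # show which modules are currently loaded
module unload gcc   # remove a module from your environment
```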
The live status of the Cortex cluster is displayed here.