Savio is fully back online as of 11:15 AM today. All services have been restored.
Savio back online
Update: Unscheduled downtime for BRC/Savio due to a power event in the datacenter
Savio partially online
Savio outage 8/12/2019
We experienced an unexpected power event that disrupted the power supply to all compute nodes of the Savio cluster sometime yesterday, 8/11. Our engineers have been in the datacenter since early this morning, making fixes and changes to the power layout and working to restore services.
Right now, users can log in to the cluster front-end nodes and access their data on the cluster filesystems, but no jobs are running in the Savio cluster queues.
Once we finish rebalancing power and bring the nodes back online, jobs will resume running as previously scheduled. CGRL's Vector cluster nodes are not affected by this power event.
Reach us at firstname.lastname@example.org if you have any questions or concerns.
BRC Jupyterhub service experiencing problems
Users are having trouble accessing the BRC Jupyterhub service. BRC staff are looking into the problem. As a temporary workaround, you can access Jupyter notebooks on the Savio visualization node by following the instructions in the RIT documentation.
Update: BRC cluster login returning to normal
We believe we've resolved the login issues. Please let us know if you experience problems.
Ongoing Savio login issues
Users have been reporting problems logging in, with their password not being accepted. BRC staff are looking into this.
In the meantime, waiting a minute and trying again may allow you to log in.
BRC Savio Cluster expected to be online by 5 pm August 5
Our original post about Savio being online first thing on the morning of August 5 was incorrect (and contrary to the email message that was sent out).
BRC Savio Cluster shutdown planned for the weekend of Aug 3
BRC Savio will be shut down on Friday, Aug 2, after 5 pm to accommodate electrical work in the data center. Savio will be brought back online first thing on Monday morning, Aug 5.
BRC cluster downtime planned for 8/6-8/7
BRC staff have made arrangements with the vendor to perform an upgrade of our Lustre file storage system on August 6th-7th, which could not be completed during our most recent scheduled downtime.
If you have questions or concerns, please contact us at email@example.com.
Scheduled downtime 7/24-7/25
Our next maintenance downtime for the BRC HPC Supercluster is scheduled for July 24th and 25th. It will be a two-day downtime, from 8:00 am on Tuesday until 5:00 pm on Wednesday.
We need to perform some long-pending maintenance tasks and improvements to the scratch filesystem that will help us manage it better.
All access to the cluster login nodes, the data transfer node, the scheduler queues, and data on all cluster filesystems will be blocked. This downtime affects all three clusters in the supercluster infrastructure: Savio, Cortex, and Vector. After the downtime, access will be restored as before.
We have put scheduler reservations in place so that no jobs will be running after 8:00 am on July 24th. If you submit jobs to any cluster queue before the downtime, please request a wallclock time that allows them to finish before 8:00 am on the 24th; otherwise, your jobs will wait in the queue until after the downtime.
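For example, a SLURM job submitted around noon on July 23rd could request a 20-hour wallclock limit so it completes before the 8:00 am reservation begins. This is a sketch only; the account, partition, and workload names below are placeholders, not real Savio settings:

```shell
#!/bin/bash
# Hypothetical job script: the --time limit is chosen so a job starting
# at noon on 7/23 ends before the downtime reservation at 8:00 am on 7/24.
#SBATCH --job-name=pre_downtime_run
#SBATCH --account=fc_example      # placeholder account name
#SBATCH --partition=savio_example # placeholder partition name
#SBATCH --time=20:00:00           # wallclock limit: 20 hours

./my_analysis                     # placeholder workload
```

If the requested time would overlap the reservation, the scheduler will hold the job in the queue until after the downtime rather than start it.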
(Resolved) Job submission errors on Savio
[Update 9:30 AM: This issue should be resolved. Please contact us at firstname.lastname@example.org if you continue experiencing problems.]
Since 1:30 AM on 7/17/18, users have been reporting issues with job submission on Savio. Staff are investigating the problem and hope to restore service soon.
[Resolved] Ongoing scratch and DTN issues
As of 11:30 PM on 6/4/18, the scratch and DTN issues should be resolved. Please contact email@example.com if you encounter further issues.
BRC cluster users are continuing to report issues with scratch storage and DTN access. Support staff are currently working on the issue and will post an update when a fix is in place or we have an ETA for one.
Scratch storage issue on BRC clusters
Starting Sunday afternoon (6/3/18), users have been reporting issues with scratch storage on BRC clusters. Cluster sysadmins will look into it as soon as possible.
Scratch storage issue on BRC clusters
Beginning around 1 PM today, BRC clusters began experiencing issues with scratch storage, where attempts to access the filesystem may cause it to hang. BRC staff are currently working to restore service.
(Resolved) Login problems on Savio
Update: The login problems, which were caused by a storage issue, have now been resolved. Please email firstname.lastname@example.org if the issue reoccurs for you.
Since around midnight on 5/10/18, users have been reporting problems with logging into Savio, including the DTN. The systems team is currently looking into the issue.
Jupyterhub on Savio currently unavailable
Since 4/27/18, Jupyterhub on Savio has experienced a number of outages. The systems team is investigating and will restore service as soon as possible.
Emergency downtime for BRC clusters
We are currently undergoing an emergency downtime from 9-12 on 4/17 to address recent scratch storage issues.
Users should receive a notification when the system is back online. If you have any concerns in the meantime, please email email@example.com.
Savio scratch file creation issues
Update: (3/15/18, 4:30 PM) With help from users with high file counts, we are continuing to work towards stabilizing scratch, but users may continue to experience sporadic issues through tomorrow. Deleting unused files is still helpful, if possible.
Since 10 AM on 3/15/18, we have been experiencing issues with Savio scratch in which users may be unable to create new files. BRC staff are working to resolve the problem; deleting unused files will help us restore access more quickly. We will continue to post updates, but if you have specific concerns, please email firstname.lastname@example.org.
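If you want to find candidates for deletion, a small helper like the one below can list your largest long-unaccessed files. This is a sketch assuming a GNU/Linux environment (GNU `find` with `-printf`); the scratch path in the example call is an assumption — substitute your actual scratch directory:

```shell
# list_old_files DIR DAYS: print files under DIR not accessed in the
# last DAYS days (default 90), largest first, up to 20 entries.
# Output format: size-in-bytes<TAB>path.
list_old_files() {
  local dir="$1" days="${2:-90}"
  find "$dir" -type f -atime +"$days" -printf '%s\t%p\n' \
    | sort -rn \
    | head -n 20
}

# Example (path is an assumption for illustration):
# list_old_files "/global/scratch/$USER" 90
```

Review the list before deleting anything; this only prints candidates and never removes files itself.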
Scratch filesystem returning to normal
Thanks to the quick assistance of a number of top scratch storage users, scratch should be available for use again. If you continue to experience errors, please contact us at email@example.com.