Savio back online
Earlier this afternoon, we received word that non-UPS power had been restored to the Warren Hall datacenter, so we immediately started work to bring Savio back online. At this point, Savio is back in full production and users can return to their work. We noticed that a few jobs were still running when we brought the system down before the power outage. Users should check the status of their most recent jobs and resubmit them if necessary.
Savio still down: update
While parts of the campus are running on power from the Cogen plant, we have been asked to hold off on bringing the Savio cluster back up due to concerns about overloading the generation facility. We are therefore waiting for clearance from campus facilities before beginning to restore service.
As soon as we have a green light to bring systems back up, we will send out another note with an estimated time for service availability, followed by a confirmation once service is restored. Thank you for your patience as we navigate this outage. All hands are on standby to restore service as soon as possible.
Savio down on 10/9 due to power shutoff
Based on information from campus leadership, we are expecting that PG&E power to campus will be shut off on Wednesday, October 9 at 8 am due to the Fire Weather Watch and the likelihood of high winds. We will need to shut down Savio starting at 6 am.
Users with access to other computing resources may want to copy their data over there as a precaution. Note that a PG&E outage at UC Berkeley will also affect LBNL computing resources.
Update on Savio Status: Back Online
Datacenter staff finished repairs to the transformer this morning (Monday, 9/30) and were able to switch the power source from the generator back to house power. We paused the SLURM scheduler queues around 7:00 am today to shut down all compute resources and allow the power switch from the generator to transformer power. After that, we powered all compute resources back on and released the job queues at around 12:30 pm. We would like to thank all of our users for their patience and cooperation during this unexpected outage.
Update on Savio availability from 9/24-9/30
Savio unexpectedly unavailable from 9/23-9/30
Due to an unexpected power system emergency in the Warren Hall data center, Savio will be shut down from the evening of Mon, 9/23 until the repairs are complete on the morning of Mon, 9/30.
Savio back online
Savio is fully back online as of 11:15 AM today. All services have been restored.
Update: Unscheduled downtime for BRC/Savio due to a power event in the datacenter
Savio partially online
Savio outage 8/12/2019
We experienced an unexpected power event yesterday, 8/11, that disrupted the power supply to all compute nodes of the Savio cluster. Our engineers have been in the datacenter since early this morning making fixes and changes to the power layout in order to restore service.
Right now users can log in to the cluster front-end nodes and access their data on the cluster filesystems, but no jobs are running in the Savio cluster queues.
Once we finish rebalancing power and bring the nodes back online, jobs will resume running as previously scheduled. CGRL's Vector cluster nodes are not affected by this power event.
Reach us at firstname.lastname@example.org if you have any questions or concerns.
BRC JupyterHub service experiencing problems
Users are having trouble accessing the BRC JupyterHub service. BRC staff are looking into the problem. As a workaround for the time being, you can access Jupyter notebooks on the Savio visualization node by following the instructions in the RIT documentation here
Update: BRC cluster login returning to normal
We believe we've resolved the login issues. Please let us know if you experience problems.
Ongoing Savio login issues
Users have been reporting problems logging in, with their password not being accepted. BRC staff are looking into this.
In the meantime, simply waiting for a minute and trying again may allow you to get access.
BRC Savio Cluster expected to be online by 5 pm August 5
Our original post stating that Savio would be back online first thing on the morning of August 5 was incorrect (and contrary to the email message that was sent out).
BRC Savio Cluster shutdown planned for the weekend of Aug 3
BRC Savio will be shut down on Friday, Aug 2 after 5 pm to accommodate electrical work in the data center. Savio will be brought back online first thing on Monday morning, Aug 5.
BRC cluster downtime planned for 8/6-8/7
BRC staff have made arrangements with the vendor to perform an upgrade of our Lustre file storage system on August 6th-7th; this upgrade could not take place during our most recent scheduled downtime.
If you have questions or concerns, please contact us at email@example.com.
Scheduled downtime 7/24-7/25
Our next maintenance downtime for the BRC HPC Supercluster is scheduled for July 24th and 25th. It will be a two-day downtime, starting at 8:00 am on Tuesday and ending at 5:00 pm on Wednesday.
We need to perform some long-pending maintenance tasks and improvements to the scratch filesystem, which will help us manage it better.
All access to the cluster login nodes, the data transfer node, the scheduler queues, and data on all cluster filesystems will be blocked. This downtime affects all three clusters in the supercluster infrastructure: Savio, Cortex, and Vector. After the downtime, access will be restored as before.
We have put scheduler reservations in place so that no jobs will be running after 8:00 am on July 24th. If you are submitting jobs to any cluster queue before the downtime, please make sure you request a wallclock time that allows them to finish before 8:00 am on the 24th; otherwise your jobs will wait in the queue until after the downtime.
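As a concrete illustration, with Slurm the wallclock limit is set via the `--time` directive, so a job submitted on the morning of July 23rd would need a limit of roughly a day or less to finish before the reservation takes effect. This is a minimal sketch; the job name, account, partition, and program names below are placeholders, not actual Savio settings:

```shell
#!/bin/bash
# Sketch of a batch script whose wallclock request is short enough
# for the job to finish before the reservation begins at 8:00 am on the 24th.
#SBATCH --job-name=pre_downtime_run   # placeholder job name
#SBATCH --account=fc_example          # placeholder: your allocation account
#SBATCH --partition=savio             # placeholder: your usual partition
#SBATCH --time=12:00:00               # 12-hour limit, well under the time remaining
./my_analysis                         # placeholder for your actual workload
```

A job whose requested time limit overlaps a scheduler reservation will simply sit pending (typically with reason `ReqNodeNotAvail`) until the reservation ends, which is why a shorter `--time` request matters here.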
(Resolved) Job submission errors on Savio
[Update 9:30 AM: This issue should be resolved. Please contact us at firstname.lastname@example.org if you continue experiencing problems.]
Since 1:30 AM on 7/17/18, users have been reporting issues with job submission on Savio. Staff are investigating the problem and hope to restore service soon.
[Resolved] Ongoing scratch and DTN issues
As of 11:30 PM on 6/4/18, the scratch and DTN issues should be resolved. Please contact email@example.com if you encounter further issues.
BRC cluster users are continuing to report issues with scratch storage and DTN access. Support staff are currently working on the issue and will post an update when a fix is in place or when we have an ETA for one.
Scratch storage issue on BRC clusters
Starting Sunday afternoon (6/3/18), users have been reporting issues with scratch storage on BRC clusters. Cluster sysadmins will look into it as soon as possible.