raiders of the lost awkward

I applied a patch to the RAID controller on the NAS a few days ago.

Twelve hours later, part way through a volume sync, the RAID controller got its knickers in a right knot.

As a result of its panic-fest, the RAID controller gobbled up 100% of the available RAM and then pegged the CPU at 100% as well.

These unplanned events effectively locked up everything behind my router/primary firewall.

The NAS squeaked out an email to me that it was in trouble.

I remote-accessed the NAS, intending to kill the rogue processes. Although I could authenticate, the device had so little CPU left that I couldn't actually step in and intervene.
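
For the curious, "stepping in" would have looked something like the sketch below. It assumes a Linux-based NAS with Python and the psutil package available (an assumption on my part; your box may offer neither), and it simply finds the worst memory hog and kills it if it's clearly runaway.

```python
# Hypothetical sketch: find and kill the process hogging the most memory.
# Assumes a Linux-based NAS with Python and psutil installed.
import psutil

# List every process with its PID, name and share of RAM, largest first.
procs = sorted(
    psutil.process_iter(attrs=["pid", "name", "memory_percent"]),
    key=lambda p: p.info["memory_percent"] or 0.0,
    reverse=True,
)

worst = procs[0]
print(f"Top memory hog: PID {worst.info['pid']} ({worst.info['name']}) "
      f"using {worst.info['memory_percent']:.1f}% of RAM")

# Only kill it if it is genuinely runaway (here: eating over 90% of RAM).
if worst.info["memory_percent"] and worst.info["memory_percent"] > 90:
    worst.kill()
```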

When I got home, I powered the NAS off and booted it back up, which restored everything.

After another volume sync (I have the NAS set up as a RAIDed device), I performed a controlled shutdown and restart.

That controlled restart brought everything back up correctly, following the earlier power-off/start-up.
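
The important part was waiting for the volume sync to finish before restarting. A rough sketch of how that could be checked, assuming the NAS uses Linux software RAID (md) and exposes its state via /proc/mdstat (again, an assumption; a vendor-specific controller may report this differently):

```python
# Hypothetical sketch: check whether a Linux md RAID sync is still running.
# Assumes the NAS exposes software-RAID state via /proc/mdstat.
def sync_in_progress(path="/proc/mdstat"):
    with open(path) as f:
        status = f.read()
    # While a sync/rebuild is running, mdstat reports lines like
    # "[==>........]  resync = 12.3% (...) finish=87.6min"
    return any(word in status for word in ("resync", "recover", "reshape"))

if sync_in_progress():
    print("Volume sync still running - not safe to shut down yet.")
else:
    print("No sync in progress - safe to do a controlled shutdown.")
```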

With hindsight (I have 20/20 hindsight sometimes), I should have performed a controlled shutdown/restart after applying the patch to the RAID controller.

But I didn’t.

Oh well, I’m learning.

If you’re interested in the architecture, this is how things currently look:

Hardware:
The NAS has 2x 1TB discs.
Server #1 has 4x 2TB discs.
Server #2 has 4x 2TB discs.

[Architecture diagram]
