Tearing it up

My four-bay Synology NAS chassis (4x Western Digital Red 2TB, 5400 RPM, 3.5″/8.89 cm drives) sits discreetly in a quiet corner.

It ticks along, very quietly, adding files (in many different formats) to various libraries.

Sometimes (not often) I access the NAS remotely to get a file that will help me at work, because I’m keen never to reinvent the wheel.

But by and large the NAS maintains my music library and my video projects; between them that’s about 1.75TB of data.

The NAS also maintains a record of everything my limited companies have ever done, because HMRC.

The NAS backs itself up, at 3am every morning, to a 3TB external drive (EDD) – a routine that takes an impressive five minutes.
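The backup tool itself isn't named above (on a Synology it would typically be a built-in backup app or a scheduled task), so the sketch below is only an illustration of the idea, with assumed paths and flags: an incremental mirror that copies just what changed overnight, which is why roughly 1.75TB of mostly static data finishes in minutes.

```python
#!/usr/bin/env python3
"""Minimal sketch of a nightly 3am NAS-to-external-drive backup.
Illustrative only: the paths and rsync flags are assumptions,
not the actual configuration used on this NAS."""
import subprocess

SOURCE = "/volume1/"            # hypothetical NAS volume root
TARGET = "/volumeUSB1/backup/"  # hypothetical mount point of the 3TB external drive

# -a preserves ownership and timestamps, --delete mirrors removals.
# Only files changed since the previous run are copied, so the job
# completes in minutes rather than hours.
subprocess.run(["rsync", "-a", "--delete", SOURCE, TARGET], check=True)
```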

But there’s been a recurring problem. Not with the chassis itself, but with the drives.

As I’ve already mentioned, the NAS media comprises 4x Western Digital Red 2TB, 5400 RPM, 3.5″ (8.89 cm) drives.

But in the last six months three drives have failed.

There’s no problem here from an operational standpoint. The NAS keeps on running when a drive fails.

In fact, due to the RAID I’ve used, I can afford to lose two drives yet still maintain full read/write access to my data.
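Tolerating two failed drives out of four points to RAID 6 or SHR-2 (the exact level isn't spelled out above, so treat that as an assumption); either way, two drives' worth of capacity goes to redundancy, and a quick back-of-envelope check shows there's still plenty of headroom:

```python
# Back-of-envelope capacity under two-drive redundancy
# (RAID 6 / SHR-2 assumed; the exact level isn't stated above).
drives = 4
drive_tb = 2.0
redundancy = 2          # number of drives that can fail with no data loss

usable_tb = (drives - redundancy) * drive_tb
stored_tb = 1.75        # music library + video projects mentioned earlier

print(f"Usable: {usable_tb:.1f} TB, stored: {stored_tb:.2f} TB")  # 4.0 TB vs 1.75 TB
```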

But why would three drives fail, over a period of months?

I had a spare, so the first time a drive failed I used that.

The second time a drive failed I didn’t have a spare, so I just reused the first (failed) drive.

The NAS reformatted the (previously failed) drive, then repaired and spread the Volume over the new hardware config.

Odd, I thought.

And a couple of months later the same thing occurred.

Exactly the same thing: a drive failed, I replaced it with a previously failed drive, and got back up and running.

It wasn’t even the same drive bay that failed – on either occasion.

Last week, another couple of months down the line, another drive failure occurred.

And I replaced it with a previously failed drive, which, once again, was reformatted and then accepted into service.

Most odd.

I have absolutely no reason to suspect any component in particular, but if a drive fails I would expect it to be beyond normal use ever again.

I wouldn’t expect a failed drive to be reusable.

And I have no idea why this might be.