Tearing it up

My four-bay Synology NAS (4x Western Digital Red 2TB, 5400 RPM, 8.89 cm – 3.5″) chassis sits discreetly in a quiet corner.

It ticks along, very quietly, adding files (many different formats) to various libraries.

Sometimes (not often) I access the NAS remotely to get a file that will help me at work, because I’m keen never to reinvent the wheel.

But by and large the NAS maintains my music library and my video projects; between them that’s about 1.75TB of data.

The NAS also maintains a record of everything my limited companies have ever done, because HMRC.

The NAS backs itself up, at 3am every morning, to a 3TB EDD (external disk drive) – a routine that takes an impressive five minutes.

But there’s been a recurring problem. Not with the chassis itself, but with the drives.

As I’ve already mentioned, the NAS media comprises 4x Western Digital Red 2TB, 5400 RPM, 8.89 cm – 3.5″ drives.

But in the last six months three drives have failed.

There’s no problem here from an operational perspective. The NAS keeps on running when a drive fails.

In fact, due to the RAID I’ve used, I can afford to lose two drives yet still maintain full read/write access to my data.
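Being able to lose two drives and keep full access points to a RAID 6 or Synology SHR-2 style layout (an assumption on my part, since I haven’t named the level here), where two drives’ worth of space is reserved for parity. A quick sketch of the capacity arithmetic:

```python
def usable_capacity_tb(drive_count, drive_tb, parity_drives):
    """Usable space when `parity_drives` worth of capacity is
    reserved for redundancy (RAID 6 / SHR-2 style layouts)."""
    return (drive_count - parity_drives) * drive_tb

# 4x 2TB drives with two-drive fault tolerance: 4TB usable
print(usable_capacity_tb(4, 2, 2))  # -> 4
```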

But why would three drives fail, over a period of months?

I had a spare, so the first time a drive failed I used that.

The second time a drive failed I didn’t have a spare, so I just reused the first (failed) drive.

The NAS reformatted the (previously failed) drive, then repaired and spread the Volume over the new hardware config.

Odd, I thought.

And a couple of months later the same thing occurred.

Exactly the same thing: a drive failed, I replaced the failed drive with a previously failed drive, and got back up and running.

It wasn’t even the same drive bay that failed – on either occasion.

Last week, another couple of months down the line, another drive failure occurred.

And I replaced it with a previously failed drive, which, once again, was reformatted and then accepted into service.

Most odd.

I have absolutely no reason to suspect any component in particular, but if a drive failed I would expect that failed drive to be beyond normal use ever again.

I wouldn’t expect a failed drive to be reusable.

And I have no idea why this might be.

A funny thing happened on the way to the forum

(Zero Mostel, Phil Silvers; 1966)

I have been messing about with various forum software packages for about a year.

What I wanted was an easy-to-design, easy-to-build, easy-to-configure, easy-to-administer solution.

Largely, over the last year, I’ve spent considerable amounts of time working with software that was none of those things.

Worst by far was phpBB, which is so complex that not one single aspect of it could be called user-friendly.

And yet, for some bizarre reason, phpBB is the most common forum software – software that so many forum admins spend years (yes, years!) setting up and staying on top of.


I set up a dummy domain and threw all of my prototypes up there, and pushed and pulled my ideas into each of them.

This weekend I had to bin that domain. phpBB is a high-value (and oh so easy) target for spammers.

My latest phpBB build was getting (successfully) hit by c. 50 ‘registered’ forum users (whom I never actually registered), who just pumped out the usual spam links.

So it’s goodbye to that domain, and hello to another. And also hello to another (hopefully far more usable) forum solution.

Anyway, you can find the prototype on this domain in a sub-dir.

I’m now playing with content and structure.

I’ll get around to visual customisation as soon as I can.

NAS update (2)

The Synology DS418 and 4x 2TB drives arrived very promptly from Servers Direct (I can’t recommend them highly enough for price and speedy delivery).

Slotting the disks into the DS418 chassis was extremely simple and required no screwdrivers.

I plugged the unit into the internet, powered it up and watched it run through its validation and OS and app updating process.

Then I logged in as admin, created the user profiles that my old NAS had, and then plugged in the EDD and copied all of the data into the profiles.

Then I rebooted the DS418, logged in as my main user, checked that everything was exactly as it had been on the old NAS and that was it.

Nothing else to report.

NAS update (1)

My Synology NAS started beeping at me recently.

I logged on and the control panel told me that one of the two (mirrored) hard disks had failed.

No problem. Everything was mirrored on the other hard disk.

But just to be safe I backed up the 1.75TB of data to an external USB drive and then shut the NAS down to save wear and tear.

Then I ordered a new Synology NAS chassis, but this time I’m going for a four-bay unit, a DS418.

To complete the new NAS build, I’ve ordered four 2TB drives.

As soon as the NAS and drives have arrived I’ll put them together and get the NAS up and running.

Then I’ll build the profiles that I use, then copy the backed-up data from the EDD and that should be it.

Synology NAS: unexpected ease of goodness

My (very) long-standing Synology NAS had a bit of a brainfart and died.

I will admit that I may have caused the brainfart.

I wanted to take it off cable: remove the direct connection to my router and put the NAS at the far end of the room, with no need to trail a connection between the two.

So I bought a USB plug-in WiFi device.

And when I plugged it in the NAS died.

On those drives sat all my business and personal files, all my video and writing projects, all of my audio projects, and my 8,000+ iTunes tracks.


So I did what any normal person would do; I went to eBay and bought a used but working Synology NAS chassis, to the same spec.

A couple of weeks later, when it arrived, I removed the drives from the dead NAS and placed them (in the same order) in the new NAS.

Then I powered it up and left it to run on for a couple of hours.

Then I located the IP address the NAS had assumed, logged on to the control panel, and mapped the iTunes directory to the same drive as the legacy NAS had used.

And then I carried on using the new NAS, as if nothing had happened.

I didn’t even have to resort to an archive copy of anything.

If you ever have to change from one Synology NAS to another corresponding product, for whatever reason, just swap the disks over – keeping them in the same order they were in the legacy NAS – and the Linux-based OS will simply continue to work.

This is a stunningly good use of a stable platform.

But now I’m wondering about upgrading everything.

Maybe move from a two-bay to a four-bay chassis?

Maybe double the 1.8TB storage capacity? I am, after all, using 60% of 1.8TB.

Fixing my iPod Classic

How to fix your iPod, and how to reset your iPod as a USB Disk


My iPod Classic stopped working properly a couple of weeks ago.

My trusty iPod Classic

It wouldn’t sync with iTunes.

I’d plug it into my laptop and the iPod display would read ‘connected’.

But after about 15 seconds, iTunes would freeze.

If I unplugged the iPod, iTunes would immediately become responsive.

But plug the iPod back in, and once again iTunes would lock up.

If I opened Windows Explorer I couldn’t see the iPod as a device, even if it was plugged in.

The unhelpful information on Apple’s help webpages suggested that I reset the iPod back to factory settings.

But in order to do that, you need to plug the iPod in and then access iTunes.

Except that never worked; iTunes became unresponsive every single time I tried.

I managed, on one occasion, to plug the iPod in and update the Apple USB driver, but that didn’t get me anywhere.

After playing with this problem, and a lot of googling, I came across a random piece of badly meta-tagged information that showed me how to reset my iPod into USB Disk mode.

I figured why not, I had nothing to lose.

So I performed a hard reset of the iPod, then went straight into switching it into USB Disk mode.

Stone me, that worked.

With the iPod thinking it was now a USB Disk, I plugged it into the laptop, right-clicked on it, and selected Format Disk.

That didn’t work.

Then I right-clicked on the iPod and selected Scan and Fix Problems.

The laptop thought about this for a while, showing a very slowly-moving progress bar.

After a while the progress bar sped up and then lo and behold, my iPod appeared in Windows Explorer.

And iTunes.

My iPod was in iTunes!

I did a full factory reset in iTunes (just in case), and then the iPod came back as a clean/vanilla device.

I synced my iTunes library (7,173 tracks and 400 podcasts), and within a couple of hours I had my iPod Classic back in full working order.

Now for the $64,000 question.

How do you reset your iPod into USB Disk mode?

We all know how to restart our iPods, yes?

You just hold down the middle button and the menu button like this:

How to reset an iPod to USB Disk 1

But, if you want to restart your iPod as a USB Disk, as soon as you’ve done this, and the screen has gone black, you need to hold down the middle button and the next track button, like this:

How to reset an iPod to USB Disk 2

After a few moments your iPod display will look like this:

How to reset an iPod to USB Disk 3

Then you just plug your iPod into your USB port and do the kind of disk housekeeping that I have described above.

And that’s it.

WordPress hacked on GoDaddy

During the very many changes to my tech and personal life over the last three years, I have had to let one website/domain languish, ignored but not forgotten, over in my hosting account at GoDaddy.

I still host three websites in that account:

  1. My music/podcast website, which I have hosted for myself since 2008 (content and RSS & iTunes feeds in a customised WordPress development)
  2. A third-party cookery/recipe website that I have hosted for almost four years (content and RSS feeds in a customised WordPress development)
  3. A third-party blog/website that I have hosted for a friend for just over three years (content and RSS feeds in a customised WordPress development)

While I have plenty of free time on my hands, I thought it would be a good idea to migrate the music/podcast website content and iTunes feeds away from GoDaddy and into my datacentre.

But when I viewed the domain and contents I noticed garbage text at the very foot of the front page, advertising drugs.

Damn it, I’ve been hacked!

And also, damn it, I’ve been hacked and haven’t noticed!

Curse me for taking my eye off the ball.

Curse me for being so busy.

Anyway, I am unpicking the hack and, as soon as I have finished this, I shall migrate this domain to my datacentre.
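Unpicking a hack like this mostly means finding every injected file, not just the visible spam in the footer. A minimal sketch, assuming the usual obfuscation markers (the marker strings and the scan root are placeholders, not the actual payload I found):

```python
import os

# Common injection markers seen in hacked WordPress installs (assumed list).
MARKERS = ["eval(base64_decode", "gzinflate(base64_decode"]

def find_suspect_files(root, markers=MARKERS):
    """Return paths of .php files under `root` containing any marker."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file: skip rather than crash
            if any(m in text for m in markers):
                hits.append(path)
    return hits
```

Anything this flags should be diffed against a clean copy of WordPress before deleting or restoring it.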

But I have spent some time wondering about the hack.

How come just one domain was successfully attacked, when there are other domains in the same hosting account?

So I googled the hacker’s content that had caught my eye when I first spotted it.

And it showed up, word for word, in three other WordPress websites.

All of them hosted at GoDaddy (none of them were mine).

What are the chances?

Remote monitoring

A thing has been exercising my nocturnal wakefulness.

I have strong monitoring systems on the datacentre servers.

I get all the usual goodnesses:

  • Uptime
  • CPU Load/Utilisation
  • Real Memory Utilisation
  • Virtual Memory Utilisation
  • Local Disk Space Used/Free
  • Disk IO Reads/Writes
  • CPU IO Reads/Writes
  • Swap Space Free/Used

But despite these things, I’m looking for another tool for my monitoring armoury.

What I want is an email (or SMS) sent to me if certain conditions are encountered by an external monitoring server (a monitoring server outside of my WAN).

There are a number of paid monitoring services that will do this, but I don’t want to rely on anyone else (and I don’t want to pay anyone for this when I have the capacity within my own infrastructure).

I just need to figure out the best way to make it happen.
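A minimal sketch of that external monitor, assuming a Python-capable box outside the WAN with a local SMTP relay; the target URL and email addresses below are placeholders:

```python
import smtplib
import urllib.request
from email.message import EmailMessage

# Placeholder values -- substitute real endpoints and addresses.
TARGETS = ["https://example.com/health"]
ALERT_FROM = "monitor@example.com"
ALERT_TO = "me@example.com"

def check(url, timeout=10):
    """Return True if the URL answers with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False  # DNS failure, refusal, timeout, non-HTTP error...

def alert(url):
    """Email a short failure notice via the local SMTP relay."""
    msg = EmailMessage()
    msg["Subject"] = f"DOWN: {url}"
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(f"External monitor could not reach {url}.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    for url in TARGETS:
        if not check(url):
            alert(url)
```

Run it from cron every few minutes; SMS could be layered on via an email-to-SMS gateway if needed.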

The end is in sight

The ‘to do’ list is lacking significant items.

There is no remaining tech debt.

In fact, undocumented here, a significant amount of quality has been added to the datacentre project.

I now have real time replication between hosts.

Every VM update, every physical host change, is now replicated to a redundancy environment.

I can spin up a new VM in about 25 seconds (this includes a full LAMP stack, and webmail, mobile device mail, and FTP access).

For extra redundancy, in addition to RAID and real time replication, every physical host is also backed up to another host.

This currently sits in a separate location, but within the same building.

I plan to relocate this feature to another location as soon as I can find one.

So all in all, this project has come a very long way.

And oh what a lot has been learned!

Cursing recursive permissions recursively

During a phase of system testing on server c1, in the new datacentre, an interesting problem was discovered.

Using WordPress as the template (so the results could be applied to Drupal installations, and to any other PHP-based content management system), we discovered that permissions on high-level directories were not being replicated down to low-level directories.

This meant a loss of function (where that function relied on scripts that are installed in those low-level directories by the vanilla application).

In WordPress and Drupal, for example, uploading media of any type wouldn’t work.

This is a significant barrier for a content management system.

The first workaround seemed to solve the problem, except that the uploaded files were owned by Apache (the webserver in the LAMP stack).

Unfortunately this led to another permissions-based problem, which stopped the owner (the system user) modifying those files – even through FTP.

If you think about it for a minute, it’s an interesting problem – where a vanilla software installation granted the higher-level directories one set of permissions, while the lower-level directories were granted different (and functionality-limiting) permissions.
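For reference, the conventional cleanup for the permissions half of the problem, sketched here rather than the exact commands we ran, is to re-apply the WordPress norms of 755 on directories and 644 on files all the way down the tree (ownership would also need correcting with chown, which requires root):

```python
import os

def normalise_permissions(root, dir_mode=0o755, file_mode=0o644):
    """Recursively apply conventional WordPress permissions:
    755 on directories, 644 on files, from `root` downwards."""
    os.chmod(root, dir_mode)
    for dirpath, dirs, files in os.walk(root):
        for d in dirs:
            os.chmod(os.path.join(dirpath, d), dir_mode)
        for f in files:
            os.chmod(os.path.join(dirpath, f), file_mode)
```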


The first attempt to fix the problem was to deploy suEXEC on the server (VM). Unfortunately suEXEC didn’t get us all the way out of the problem, so we needed to look for another solution.

The second attempt to make the problem go away was to use fastCGI.
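For context, a per-vhost mod_fcgid plus suEXEC setup typically looks something like this; the paths, user/group and wrapper script are placeholders, not our actual configuration:

```apacheconf
# Hypothetical vhost: run PHP under the site user via mod_fcgid + suEXEC
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com/htdocs
    SuexecUserGroup siteuser sitegroup

    <Directory /var/www/example.com/htdocs>
        Options +ExecCGI
        AddHandler fcgid-script .php
        FcgidWrapper /var/www/example.com/php-wrapper .php
        Require all granted
    </Directory>
</VirtualHost>
```

With PHP running as the site user rather than as Apache, uploaded files end up owned by the same account that FTP uses, which is what resolves the ownership clash described above.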

Yesterday afternoon Manuel, our brilliant technical resource, deployed fastCGI on the VM we have been using as a test.

I used the standard WordPress admin control panel to upload an image into a test post, and successfully published that post.

Then I created another test post, uploaded the image into that, and successfully published it.

Then I went back to both published posts and, using the standard WordPress screens, modified the second image and republished the post.

All of these tests worked.

Manuel’s next job is to add the deployment of fastCGI into the VM creation template.

This will enable the datacentre to deploy a fully-functional LAMP-stack VM for a customer within a matter of seconds.

Well done Manuel!