HP PageWide Pro MFP 477DW freezing

TL/DR; HP make decent printers, but their software updates sometimes break their own kit

This is an interesting one and it harks back to the early 90s when I was doing hands-on training for my MCSE. Yesterday afternoon I visited a person who lives in the village who had reported her printer had just stopped working.

The environment: a lovely and newish HP Elite Mini PC (about the size of a cigar box but crammed full of SSD storage and RAM), an HP widescreen monitor, the aforementioned HP PageWide Pro MFP 477DW, a BT Home Hub and a BT network device being used as a switch. Connecting the printer to the switch was an Openreach cable, with another Openreach cable connecting the switch to the PC. This is not a new setup; it is what was in use up to and including the time the printer stopped working.

First thoughts. The MFP has four connection options: USB port, printer port, network port and WiFi. I asked why they were using a wired connection to the network and was told that this works (worked), whereas WiFi kept dropping out. Nothing else had been tried.

Second thoughts: what had changed? The user said nothing. I opened a command prompt and interrogated the PC, which also said nothing. I ran IPCONFIG /ALL while I was there and that gave me a healthy return. Exit command.

The printer console (on the printer, not a remote software console) gave us a bunch of settings which included IP. It was set for ‘automatic’ (typical HP, ignoring the network standard label of ‘dynamic’ and inventing their own terminology!). Instinctively I wanted to change that to static (which HP have again ignored, and have labelled it as ‘manual’) but I didn’t have an internal IP address allocation table. I could have just guessed a .45 or .52 suffix, but there was still a risk I could impact something in the house, so I left printer IP alone (and it still makes my skin itch that I didn’t fix that).

I checked the I/O on the PC and the I/O on the printer. Both looked good and didn't display any faults. Despite this, I swapped the network cable between the switch and printer for one I'd brought with me. I changed nothing else, no config or network elements (apart from the new network cable). I fired up the HP printer wizard on the PC and set it to install the printer. It didn't find it. I checked the printer and it had frozen. Not gone to sleep: frozen. Well, that's why the PC couldn't find it.

I rebooted the printer, which came up satisfactorily, set the wizard to install the printer and once again, after five seconds, the printer had frozen and the PC couldn't find it. I checked with the user that this layout/config was what had always previously worked, and it was. So now I began to suspect the PC of fibbing. I gave it a complete, cold reboot. It soon popped back up. I changed no config or network elements. I rebooted the printer, which came up and offered me its console. I set the PC to install the printer, and after five seconds the printer froze and the PC couldn't find it. So now I started to suspect the printer.

I went on the HP support log (if you know, you know) and found that four other people had reported the identical problem with the same printer model within the last 72 hours. Now I knew we had a printer problem, and we needed an alternative way of connecting the printer to the PC. As the user's preference was for cable, I went with that. We found an old USB-to-printer-port cable, disconnected the network cable from the printer, ran the new cable between the PC and the printer, rebooted the printer and told the PC to connect to it. It did! I ran a couple of tests, which all worked. So, to break down the fault: the printer was freezing after five seconds if it was connected to the network via cable, but connected to the PC directly, it worked. I would have liked to connect it direct to the PC via a proper USB cable, but we didn't have one long enough.
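In hindsight, there's a quick way to confirm a freeze like this from the PC side without walking over to the printer's front panel: probe its network print port. A minimal sketch, assuming a hypothetical printer address (9100 is the standard raw/JetDirect print port on HP network printers):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical printer address; a frozen printer stops answering here.
# port_open("192.168.1.45", 9100)
```

Run it once after the printer boots and again a few seconds later; a reachable-then-unreachable result points straight at the device rather than the PC.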

Fault diagnosis: HP has pushed out an automatic update which has adversely impacted a growing number of users of this MFP who use a certain connection type. The fix is to use a different connection type.

So that's how I left it. Printer working via a wired connection (the user's preference). Fault resolved. However, now that everything is good I would like to revisit the user to reconfigure the printer with a static IP address, check the frequency of HP automatic updates and put an accept/deny switch on that, and set up the printer for WiFi rather than cable. But those are just my preferences. The bottom line is the user is happy with the resolution. Time involved: 60 minutes analysis/diagnostics, 15 minutes fault resolution.

I also fixed a Bluetooth mouse problem while I was there, but a) that only took five minutes and b) the user is going to continue with the cable mouse anyway.

Uncaught Error: Call to undefined function

TL/DR; Your users may get a ‘There Has Been a Critical Error on This Website’ message when they submit a comment

It probably means your webhost has disabled the PHP mail function on a shared platform for security reasons, so the mailer call WordPress relies on falls over. The fix is simple: install an SMTP plugin, point it at your website's SMTP server and one of its email accounts, and you're good to go.
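Under the hood, all those SMTP plugins do is swap PHP's local mail handoff for an authenticated SMTP submission. A rough sketch of the same idea in Python (hostnames, addresses and credentials are all placeholders):

```python
import smtplib
from email.message import EmailMessage

def build_comment_notification(site: str, author: str, comment: str) -> EmailMessage:
    """Build the 'new comment' email the site would normally send via mail()."""
    msg = EmailMessage()
    msg["Subject"] = f"New comment on {site}"
    msg["From"] = "noreply@example.com"   # placeholder sender
    msg["To"] = "admin@example.com"       # placeholder recipient
    msg.set_content(f"{author} wrote:\n\n{comment}")
    return msg

def send_via_smtp(msg: EmailMessage, host: str, user: str, password: str) -> None:
    """Submit over authenticated SMTP -- what an SMTP plugin does for you."""
    with smtplib.SMTP(host, 587) as smtp:   # 587 = standard submission port
        smtp.starttls()                     # encrypt before sending credentials
        smtp.login(user, password)
        smtp.send_message(msg)
```

The point is that the mail leaves via the host's SMTP server with proper credentials, so nothing depends on the disabled local mail function.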

NB: although your users get that scary message, their comment may still be posted on your website. The point of origin of the message is your website sending the notification email to you, not anything your user is trying to do. But best fix it, eh?

Big shout out (because that’s what us kids do) to Masher for spotting this error and to me for fixing it.

Down/up, the pop3 pipe

TL/DR; POP3 false positives can really mess you up. Sort them out the minute you get ‘can’t log in’ messages

So I got my core domain/sub-domain up and running on the new host after battling with a) a corrupt user.ini file and b) a .htaccess file with incorrect permissions. It took just short of 24 hours of downtime, and I really didn't like that much downtime. Actually, it took 23½ hours to identify the issues, 10 minutes to fix them and 10 minutes to test.

I went to bed a happy bunny. When I woke up this morning I ran checks, which all came back green. Then I tried to log in to the core domain's front end, except I couldn't even see the front end because I got a 500 error. So I tried the sub-domain and couldn't see that either. Then I tried this website and couldn't see that. All three properties returned 500 errors.

So I got onto the host's chat and asked them to look. They said all three were up and in the green, then they went off to check something. Five minutes later they came back to say my Internet node's IP address had been banned. They'd removed the ban and everything should be sorted. I checked. It was indeed all sorted.

I asked how my IP address could have been banned. They pasted a report that said it was due to repeated but failed POP3 login attempts. I checked my phone and yes, in my email client there nestled half a dozen failed login attempts on one of the new email accounts I'd set up against the core domain. Apparently the hosting infrastructure registered my IP as unfriendly and banned me from all domains in my hosting account. Oops.
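I don't know the host's exact ban policy, but it's presumably fail2ban-style: count failed logins per source IP and block the whole address once a threshold is hit, which is why every service on the account went dark at once. A sketch of that logic (the threshold value is an assumption; real hosts vary):

```python
from collections import Counter

POP3_FAIL_THRESHOLD = 5  # assumed value; real hosts vary

failures = Counter()

def record_login(ip: str, ok: bool) -> bool:
    """Record a POP3 login attempt and return True once the IP earns a ban.
    Note the ban is IP-wide: web, FTP and mail all go dark for that address."""
    if ok:
        failures[ip] = 0  # a successful login resets the counter
        return False
    failures[ip] += 1
    return failures[ip] >= POP3_FAIL_THRESHOLD
```

Which is why half a dozen stale passwords in a phone's email client are enough to lock you out of your own hosting.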

Anyway, I’ve reset the POP3 password and everything’s great now. How embarrassing though.

Fkin users!

TL/DR: user.ini file (which only your host should be able to get at) can screw you over

I started to move my core domain to a new host on Friday. I followed the usual procedure and, blow me down, what a problem I had getting /blog back up. Error messages all over the place. I deleted the database, removed the WordPress files, deleted the database user: basically I tried to reset everything. Except the problem(s) wouldn't go away. Through various tinkering I could change the error message the site was putting out, but I couldn't get past the errors. I tried everything. Every single trick in the book. Eventually, after a long time of running into brick walls, I called tech support. Once they'd worked out what the problem was, they fixed it very sharply, but it took a while to get to the issue, which was…

A corrupt user.ini file.

Moving stuff

TL/DR: moved WordPress to a new host and website didn’t work (and how to fix it)

I've just moved this domain to a new host. In the last 15 years, moving domains from host A to host B is a thing I've done for fun a couple of thousand times and a thing I've done professionally a few hundred thousand times. (The biggest challenge was moving an entire corporate dot com website plus mobile apps from a legacy datacentre to a super-dooper hybrid cloud datacentre; fitting in upgrades whilst in-flight, putting in a few thousand mapping and routing changes, and overlaying the new infrastructure with a new security model along the way was no picnic, I can tell you!) So moving this little domain from one host to another was a piece of cake, yes? Well, no.

On the new host I created the new database, created a user, assigned permissions to the new user, then created the new domain in cPanel. I flipped over to the old host and downloaded all the WordPress files and directories, then downloaded a copy of the database. While WordPress was still downloading I went back to the new host and imported the legacy database into the new one. When WordPress had finished downloading, I opened an FTP session to the new host and uploaded all the WordPress stuff. Then I opened the wp-config.php file, changed the username, database name, localhost and password, and sent that on its way too. Then I created an email address for this domain on the new host.
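For reference, the wp-config.php edit boils down to four constants that must match the new host's database. A minimal sketch of rendering them (every value here is a placeholder, not a real credential):

```python
# The four wp-config.php constants that must match the new host's database.
# Every value below is a placeholder, not a real credential.
NEW_DB = {
    "DB_NAME": "newsite_db",
    "DB_USER": "newsite_user",
    "DB_PASSWORD": "placeholder-password",
    "DB_HOST": "localhost:3306",
}

def render_defines(settings: dict) -> str:
    """Render the settings as the PHP define() lines wp-config.php expects."""
    return "\n".join(f"define('{key}', '{value}');"
                     for key, value in settings.items())
```

Get any one of the four wrong and WordPress can't reach its database on the new host.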

Then I went to the domain registrar (a third party, just to keep things complicated) and updated the nameservers and then the DNS settings, pointing everything at the new host. This should have resulted in a seamless switchover while I was catching some Zs. So when I woke up and saw the blandest of bland error messages, which said, in a nutshell, this website isn't working today (or ever, unless you fix it), I was a mite disappointed.

Ping and tracert both told me that the new DNS settings had propagated and all calls to this domain were being routed to the new host correctly. On a whim I tried the MX settings: I wrote a new email from the account I'd created against this domain on the new host and sent it off. The email rattled into my other inbox. I replied to it. The reply rattled into the new inbox. So the MX settings mirrored what ping and tracert were telling me: the new domain was up and running. Except the error message said it wasn't.

The only thing left to pick apart was the config file in WordPress. I read through every single character very carefully and on line 25 I found the gremlin. When I created the new database I'd given it a password along the lines of abcde12e4t99!!' and that was why the migration failed. The password contained the character ' (yes, a simple single quotation mark), and in wp-config.php every value sits inside single quotation marks, so PHP treats the first ' it finds as the end of the string. A normal line looks like this, for example:

define('DB_HOST', 'localhost:3306');

But because I'd used a ' after the second exclamation mark, PHP saw the password string end early and then choked on the leftover characters instead of finding the '); it expected at the end of the line. And that was the problem. The whole website was brought down by a ' in the password text. The solution was, obviously, not to use a quote mark in the password. I changed the password and Robert's your Mother's Brother, everything just clicked into place.
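If you want to guard against this when generating database passwords, a tiny check does it. A sketch (the function name is mine, not anything WordPress provides):

```python
def safe_for_wp_config(password: str) -> bool:
    """A single quote (or a backslash) inside a single-quoted PHP string
    ends the string early, so reject such passwords for wp-config.php."""
    return "'" not in password and "\\" not in password
```

Rejecting rather than escaping keeps the password usable everywhere else (control panels, email clients) without worrying about per-tool escaping rules.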

Backups broken/not broken

I did a package upgrade on my NAS last week. Not even a DSM upgrade, just a couple of packages (HyperBackup and Node.js). Unfortunately the HyperBackup upgrade broke the overnight backup process. No amount of fiddling and farting about (it's a technical term) could get the process reinstated. In the end I removed everything from HyperBackup and set up a new global all-user backup. On completing the process and the new mapping, HyperBackup asked me if I'd like to kick off a run right now. Sure, I said. I'm here for that. This is the reporting screenshot from that first backup:

I had forgotten that the first global all-user backup not only packs and transfers the target files and directories, it also has to create metadata and then map all files/directories to that metadata as it creates a mirror (which it can use as a point of reference for the next, and all future, backups). Der. Anyway, the second backup process reported this:

Much better!
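That first-run/second-run difference is just incremental backup at work: once the metadata mirror exists, only new or changed files need transferring. A toy sketch of the comparison (HyperBackup's real metadata is far more involved, of course):

```python
def files_to_back_up(current: dict, snapshot: dict) -> set:
    """Given {path: mtime} maps for the live filesystem and the last
    snapshot's metadata, return the paths that need transferring."""
    return {path for path, mtime in current.items()
            if snapshot.get(path) != mtime}
```

On the first run the snapshot is empty, so everything qualifies; from then on the transfer list shrinks to just what changed since last night.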

Is this the end?

Now that my days of geekery are officially over, is it time to pull the plug on this website? I do very little tech these days, and the tech that I do is now marine electrics, marine electronics, and marine engineering. I’m trying to see if this website has a future, but right now – 18th February 2023 – I can’t see one. I shall think on this a little longer. Mind you, there are a couple of email addresses running off this domain that I hold for emergency backdoor access to webservices.

NAS down

First geekblog entry in a while…

At 06.00 Friday (today is Sunday) my Synology NAS started making its 'something's gone wrong' beeping sound. I logged on to the admin portal and saw lots of red messages which said, basically, that the Volume was corrupt. I dug a little further and saw two 'bad sector' error messages for drive 2.

I could accept a drive failure, but the Volume going down really puzzled me, because a single drive having an issue should not have brought the whole NAS/Volume down: I have a four-drive NAS with failure/redundancy built in through RAID. About 18 months ago a drive failed and the NAS carried on. While I was waiting for a replacement drive to be delivered, a second drive failed and, unsurprisingly, the NAS just carried right on working.

Anyway, I put this line of thought behind me and got on with things. I took an emergency backup onto a 3TB USB HDD (the NAS backs itself up at 03.00 every morning anyway; I was just being super cautious). Then I replaced the defective drive, formatted it and mounted it into the storage pool. Next I deleted the defective Volume. I installed Synology Hybrid RAID (SHR) with two-drive fault tolerance. Next I created a new Volume. Then I did a consistency check on Volume Root, Volume Swap and the new storage pool.
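For anyone weighing up the capacity cost: SHR with two-drive fault tolerance behaves like RAID 6 when the drives are all the same size, so two drives' worth of space goes to redundancy. A quick sketch of the arithmetic (the function name is mine, and it only covers the equal-drive case):

```python
def shr2_usable_tb(drive_sizes_tb: list) -> int:
    """Usable capacity for SHR with two-drive fault tolerance on equal-size
    drives (RAID 6-like): two drives' worth of space goes to redundancy."""
    assert len(drive_sizes_tb) >= 4, "two-drive tolerance needs four-plus drives"
    assert len(set(drive_sizes_tb)) == 1, "sketch assumes equal-size drives"
    return (len(drive_sizes_tb) - 2) * drive_sizes_tb[0]
```

So a four-bay NAS gives up half its raw capacity for the ability to lose any two drives and keep running.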

Then I installed HyperBackup and did a full restore of system config, apps, users, user configs and data, which took overnight to complete.

Yesterday I ran a bunch of checks on hardware, system, config, and data integrity and everything has come back green. Nothing lost, no problems.

I’m a bit perplexed as to why the RAIDed two-drive fault tolerance didn’t prevent having to rely on backups, but the bottom line is the failure/recovery regime worked.


One of the biggest additions to my reading activities has undoubtedly been the Kindle. But maybe not in the way that you might think.

The discovery that I can put bulky Word or PDF documents onto the Kindle for proofing (but not editing, sadly), or just for straightforward reading for pleasure or even for work, has been a revelation.

Maybe you already know this (and how to do it), maybe I’m late to the game. But regardless of when the knowledge dropped, in my little corner of the universe this has been a game changer.

Moving house (1)

I closed down the datacentre in Arnold, Nottingham a couple of years ago.

Since then I have continued hosting various websites and webservices within a dedicated hosting account in another, somewhat larger, public cloud provider.

But in the last couple of years, in the hosting market, things have changed.

Prices have generally fallen, offerings have become more featuretastic and scalable, and uptimes have become more consistent and much more rooted in the 99.95%–99.99% range.

Plus, because I’m an undemanding sort, my personal hosting requirements haven’t become more complex.

So after a long hard look at the market I’m in the process of closing down my current hosting account, and have started up a hosting account with a different provider.

This website, as you would expect, was the first to be migrated across.

I haven’t brought over the forum that’s associated with this website yet; I will probably do that later this week.

But the bottom line is that my monthly outgoings have fallen considerably, while no functionality has altered.

That, for me, is a win.