remote SSH problems

I spent last weekend working through a real pain-in-the-arse problem: I couldn’t get remote SSH access configured on server C1.

Local access via console worked brilliantly.

And I could attach another device to the internal network and run SSH sessions to the internal IP that server C1 had been assigned.

But I couldn’t get remote SSH working consistently, in a stable, always-up kind of way.

The best I could get was for remote SSH to stay up and running for around 20 minutes.
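One way to pin down a “stays up for about 20 minutes” failure is to probe the port on a schedule from outside the network and log when it stops answering. A minimal sketch in Python (the IP shown is a placeholder, not server C1’s real address):

```python
import socket


def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example: run this once a minute (e.g. from cron) on a machine outside
# the datacentre, timestamp the results, and watch for the flip to False:
#   port_open("198.51.100.7", 22)   # placeholder public IP
```

Graphed over time, the log makes it obvious whether the port dies on a consistent idle timer or at random.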

In the end, frustrated beyond belief, I binned the router that Plusnet had supplied (a neat-looking Sagemcom device).

Then I looked out a spare Netgear router that I had at home.

I copied the config details from my home-hosted NAS into the spare Netgear router, and installed that in the datacentre.

Changed the account credentials to match those of the datacentre, obv.

And lo and behold, I had remote SSH.

But for how long?

An hour later it was still working.

This was a new record.

Three hours later it was still up and running.

Twenty-one hours later, we were still golden.

So it seems, to me at least, that the Plusnet router had a ‘go to sleep’ rule set in the firmware.

If it hadn’t seen any port 22 traffic for around 20 minutes, it shut down port 22.

And wouldn’t wake up when port 22 traffic came knocking on the door.


Except not, obv.
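For what it’s worth, if a middlebox really is dropping idle port 22 sessions, the usual workaround (short of binning the router) is client-side SSH keepalives, which generate just enough traffic to stop the connection looking idle. A sketch of an OpenSSH client config; the host alias and IP are placeholders, not the real C1 details:

```
# ~/.ssh/config -- client-side keepalives (OpenSSH)
Host c1
    HostName 198.51.100.7     # placeholder public IP
    Port 22
    ServerAliveInterval 60    # send a keepalive probe every 60 seconds
    ServerAliveCountMax 3     # give up after 3 missed replies
```

That keeps an open session alive, though it wouldn’t have helped with the router refusing fresh inbound connections after it went to sleep.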

But the Netgear router fixed that, and I had remote SSH and public port 80 access to server C1.

For 72 hours.
