reviewing datacentre servers and racks

Rack 1:

  • UPS #1
  • primary router
  • primary network switch
  • power distribution unit

Rack 2:

  • server c1
  • server c2
  • server c3

In its physical state, server c1 is a RAID 6 platform running CentOS 6.6, DenyHosts, the KVM hypervisor and iLO2.

In its virtual state, server c1 hosts the full LAMP stack, an SFTP server and email servers, and runs some security functions.

Server c2 is an extended storage platform to complement/support server c1.

Server c3 is a real-time replica of server c1, in both its physical and virtual states.

I plan on moving server c3 out of the Nottingham datacentre and into a secondary location. This would give me failover resilience in the event of something cataclysmic happening to the datacentre.

Rack 3:
When it arrives, I plan on populating rack 3 with a secondary router, a secondary network switch, a secondary power distribution unit and UPS #2. And maybe some Blades.

spitting venom

I did a controlled shutdown and restart of server c1 in the datacentre today.

This shutdown and restart of the physical server meant that the primary LAMP server – and all of the hosted VMs – were also shut down and restarted in a controlled manner.

The reason for this event was to embed the VENOM fix that has been released (and that my servers downloaded during the week), closing the latest vulnerability in the QEMU/KVM virtualisation layer.
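
For anyone wanting to do the same on a CentOS 6 KVM host, the check-and-apply steps look roughly like this – a sketch, assuming the stock qemu-kvm packages:

  # See which qemu-kvm build is installed (the VENOM fix arrived as a package update)
  rpm -q qemu-kvm

  # Pull in the patched package if it isn't already there
  yum update qemu-kvm

  # Running guests keep the old binary in memory, so they need a restart to pick up the fix
  virsh list --all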

Nothing else to report.

I love how reliable and robust the infrastructure is proving to be.

datacentre

This is what DC1 looked like the first time I saw it:

Datacentre 1 first look


Datacentre 1 first look 2


A few weeks later, this is rack 1, server c1, after I’d upped the RAM, installed all the disks, configured the disks as RAID 6 and installed the CentOS operating system; at this point I’m part-way through installing the KVM hypervisor. You can see the top of server c2 below:

Rack 1 Server c1
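
For the record, the hypervisor install itself is only a handful of packages on CentOS 6 – roughly this, assuming the stock repositories:

  # Install KVM, libvirt and the supporting tools
  yum install qemu-kvm libvirt virt-install bridge-utils

  # Start the management daemon and have it come up at boot
  service libvirtd start
  chkconfig libvirtd on

  # Sanity check that the host can talk to the hypervisor
  virsh nodeinfo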


And this is a screenshot of me starting to configure eth0:

eth0
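
For context, a static eth0 on CentOS 6 boils down to an interface file along these lines – a sketch, with placeholder addresses and gateway rather than the live config:

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=192.168.1.4
  NETMASK=255.255.255.0
  GATEWAY=192.168.1.1

…followed by a service network restart to pick up the change.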


And here’s a screenshot of me configuring iptables:

iptables
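
Without giving away the real rule set, the general shape of an iptables setup like this on CentOS 6 is roughly the following – a sketch only, with placeholder ports:

  # Keep loopback and established connections working
  iptables -A INPUT -i lo -j ACCEPT
  iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

  # Allow SSH and web traffic in
  iptables -A INPUT -p tcp --dport 22 -j ACCEPT
  iptables -A INPUT -p tcp --dport 80 -j ACCEPT

  # Drop everything else inbound, then persist the rules across reboots
  iptables -P INPUT DROP
  service iptables save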


For reasons of security, I’m not posting any other photos.


icing the climactic win

Today I learned an inelegant but very effective way to access iLO2 remotely.

From the safety and comfort of my own bed I was able to log in to the iLO2 function, access the full remote physical- and virtual-console for the server, and carry out the usual range of console-related management/admin functions.
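
For the curious, one route that gives this kind of access – not necessarily the exact one I used – is to tunnel the iLO2 web interface over SSH through a box that is already on the LAN:

  # From the remote machine: forward a local port to the iLO2 address on the LAN
  # (192.168.1.4 is the iLO2 address from the post below; c1's public address is a placeholder)
  ssh -L 8443:192.168.1.4:443 root@<public address of server c1>

  # Then browse to https://localhost:8443 and log in to iLO2 as normal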

Brilliant.

climactic win!

Meanwhile, back in the datacentre…

One of the early problems I encountered with the first DL380 out of the box (server c1, obv), was an inability to configure iLO2.

I’d racked the server up, plugged the three NICs into the router (eth0, eth1, iLO2), installed the base CentOS operating system, and configured eth0 for a static IP.

Then I spent ages trying to get to the bottom of why iLO2 wouldn’t work.

And I mean four or five weeks, off and on.

A couple of nights ago (while watching a YouTube video of someone configuring their iLO2 with annoying ease), I came across a revelation.

iLO2 won’t work outside the host network (LAN).

So even if I got iLO2 working, I couldn’t use it remotely (in the pure sense of the word – remotely), unless I built and stretched a VLAN to wherever in the world I was working remotely from.

This is a massive pain in the backside.

Obv.

Armed with this new information I dismissed not having iLO2 as an inconvenience – a non-operational inconvenience – because not having iLO2 wasn’t actually stopping anything from proceeding.

So I set this slight disappointment aside and carried on rolling out the datacentre.

Last weekend I fitted the remaining disks to the second server in the rollout, server c2.

I configured the disks as RAID 6, installed CentOS, virtualised the physical environment, configured a static ip on eth0, and brought the server online.

From server c2 I successfully pinged the internal ip address of server c1 to confirm everything was working on the LAN, and return-pinged server c2 from server c1. I left c2 unconfigured for WAN access for now, then left the datacentre feeling a bit pleased.

Over the last couple of weeks I’ve been reading and watching a lot of tutorials.

One or two have gone into detail about how both eth0 and eth1 should be configured.

I had only configured eth0 on both of the servers I have brought up so far.

Having both eth0 and eth1 configured is about redundancy and resilience rather than speed (it won’t make anything faster).
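
The usual approach on CentOS 6 is to bond the two NICs in active-backup mode – roughly this sketch, which I haven’t applied yet (addresses are placeholders):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=192.168.1.5
  NETMASK=255.255.255.0
  BONDING_OPTS="mode=active-backup miimon=100"

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same again for eth1)
  DEVICE=eth0
  ONBOOT=yes
  MASTER=bond0
  SLAVE=yes

…then a service network restart brings the bond up.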

So that’s a new task on my list of things to do.

I’ve also spent a lot of time reading up on iLO2 problems, because I don’t like an unanswered query.

I kept these things in my mental pot, and mulled them over when I had lots of time (commuting!).

Last Friday afternoon I got a notification that one of the sshd components on server c1 – the public-facing physical host – had stopped working.

c1 was still ‘there’ and pinging away to my queries, but sshd wasn’t behaving exactly as it should.

Unfortunately I couldn’t access server c1 remotely, because whatever it was that was adversely affecting sshd, was blocking my attempt to remote on to the server.

Saturday morning I rocked in to the datacentre and accessed server c1 on console.

I queried sshd (service sshd status) and got a ‘not found’ response.

Hmm, sshd stopped working and shut itself down?

What could cause that?

I checked the eth0 status (ifconfig -a) and got the responses I expected: eth0 configured to 192.168.1.4, eth1 in an unconfigured state, and lo in an unconfigured state.

So, as I already knew from my ping responses, the server was actually still online, just not 100% there.

I then checked the devices attached to the LAN via the router admin panel.

The attached devices query found:
ILOGB8724JETY on 192.168.1.4
(MAC address of c1 eth0 NIC) on 192.168.1.5
and – surprisingly – also:
ILOGB87505WE7 on 192.168.1.6
(MAC address of c2 eth0 NIC) on 192.168.1.7

This puzzled me.

All ports were forwarded to 192.168.1.4, but the router was telling me that the attached device on that ip address wasn’t (MAC address for eth0), but was the iLO2 address.

I checked the config on eth0 and it was definitely set to pick up a static address of 192.168.1.4

Puzzling!

Where had ILOGB8724JETY on 192.168.1.4 come from?

And why hadn’t the static address config on eth0 over-ruled it?

I decided to leave the iLO2 address alone for now and go for simplicity.

I powered down server c2 and unplugged it from the mains, to remove all traces of it from the network.

On server c1 I configured eth1 to a static ip of 192.168.1.6 and reconfigured eth0 to a higher static ip of 192.168.1.5

Rebooted the server.

Checked the attached devices in the router.

Sure enough, I suddenly – and for the first time – had a full house:
ILOGB8724JETY on 192.168.1.4
(MAC address for c1 eth0 NIC) on 192.168.1.5
(MAC address for c1 eth1 NIC) on 192.168.1.6

On a hunch I attempted to access and login to the iLO2 console.

Success!

And then I changed the port-forwarding rules to pick up 192.168.1.5, and saw that the sshd service was fully working.

Of course, it might have been the reboot that brought the sshd back online, but I thought there was more to it than that.

I wanted to test a hunch that was forming, so I reconfigured eth0 to 192.168.1.7 and eth1 to 192.168.1.8 and rebooted the server.

When I checked the attached devices in the router I still had a full house, but the addressing was updated:
ILOGB8724JETY on 192.168.1.6
(MAC address for c1 eth0 NIC) on 192.168.1.7
(MAC address for c1 eth1 NIC) on 192.168.1.8

Aha!

So iLO2 attaches to the LAN with an ip address one below eth0’s.

Well that was a revelation (and thank you, HP, for probably burying this information in the reams of words about iLO2 and for not making it plain and obvious).

This discovery meant that all of the I/O traffic that server c1 had been processing for the last couple of weeks on 192.168.1.4 was actually being forced to the iLO2 NIC by virtue of the port-forwarding rules, and not being passed via the eth0 NIC.

I hadn’t noticed any speed issues, despite this misrouting, but I resolved to fix this.

I reset the ip addresses, rebooted the server, checked the attached devices in the router, and saw the (now expected) full house of:
ILOGB8724JETY on 192.168.1.4
(MAC address for c1 eth0 NIC) on 192.168.1.5
(MAC address for c1 eth1 NIC) on 192.168.1.6

Then I reset the router’s port-forwarding rules to pick up the eth0 NIC on 192.168.1.5

I ran some WAN tests, just to be sure, and was pleasantly surprised by the speed responses.

The bottom line here is that it looks as though an internal IP address conflict between the iLO2 and the static 192.168.1.4 for eth0 is what stopped sshd from running.
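
For what it’s worth, this kind of clash is easy to test for from the console once you suspect it – something like:

  # Duplicate Address Detection: ask whether anything else on the LAN claims 192.168.1.4
  arping -D -I eth0 -c 2 192.168.1.4

  # Any reply means another device (in this case the iLO2 NIC) already holds that address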

So this is a good result. I now have iLO2 running and I have detected and resolved the internal IP address conflict – and sshd is running normally.

anticlimactic win!

Server c1 is done.

The OS is installed.

The environment has been virtualised.

MySQL and Postfix installed.

The environment has been pen/security tested.

Four (client-facing) VMs have been built and are being used by various people, trying to break them.

Three layers of firewall have been implemented (2x physical, 1x software).

As far as hosting BaaS data goes, the environment feels very close to being absolutely right.

And I have to say that the environment is very fast.

I would like to put some time and effort into practising building VMs for FQDN hosting.
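
Each of those will be a fairly standard virt-install job – something like this sketch, where the name, sizes, bridge and install source are all placeholders:

  virt-install \
    --name fqdn-guest-01 \
    --ram 2048 \
    --vcpus 2 \
    --disk path=/var/lib/libvirt/images/fqdn-guest-01.img,size=20 \
    --os-variant rhel6 \
    --network bridge=br0 \
    --graphics none \
    --location http://mirror.centos.org/centos/6/os/x86_64/ \
    --extra-args "console=ttyS0"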

I guess that’s what I’ll be doing this week.

cheating at hardware fixes

Somewhere around Wednesday evening, about 72 hours after I fixed the remote SSH problem by swapping the Plusnet-supplied Sagecom router for a Netgear router, all port 80 and port 22 calls to server c1 started being dropped.

There was nothing I could do, because I was down in Bristol and server C1 needed an onsite visit back at the Nottingham datacentre.

Frustrating!

Eventually the weekend rolled around and I tottered off my sickbed in to the datacentre to begin explorations.

Server c1 is an HP DL380/G5.

It had just one (500GB) disk, which contained all the CentOS 6.6 goodies that had been rolled out so far.

Which wasn’t much cop, because server c1 wouldn’t stay alive.

When I walked up to the cabinet, c1 was definitely receiving power, but was switched off.

I pushed the button and it whirred and whined, noisily, to life.

The console showed me the usual boot sequence.

Then server c1 just powered itself down.

I tried again; it booted up. This time it got as far as the CentOS login prompt.

And then powered down again.

Long story short, I removed the PSU from server c1, cleaned all the PSU and server-side contacts, and refitted it.

The server booted up and stayed up.

I logged in as root and performed some basic functions.

Everything looked fine.

Rather than leave things like that for the week, I decided I’d like to add some extra resilience to the situation.

I removed the PSU from server c2 (another HP DL380/G5), and slotted that in to the spare PSU bay in server c1 (the HP DL380/G5 servers have the capability for two independent PSUs running at the same time).

So server c1 is now running two PSUs, and I’ll keep an eye on the server logs to see if the original PSU drops out, or if there are any more power-down problems.

remote SSH problems

I spent last weekend working through a real pain-in-the-arse problem: I couldn’t get remote SSH access configured on server c1.

Local access via console worked brilliantly.

And I could attach another device to the internal network and run SSH sessions to the internal IP that server C1 had been configured with.

But I couldn’t get SSH consistently working, in a stable, always-up, kind of way, via remote.

The best I could get was for remote SSH to stay up and running for around 20 minutes.

In the end, frustrated beyond belief, I binned the router that Plusnet had supplied (a neat-looking Sagecom device).

Then I looked out a spare Netgear router that I had at home.

I copied the config details from my home-hosted NAS into the spare Netgear router, and installed that in the datacentre.

Changed the account credentials to match those of the datacentre, obv.

And lo and behold, I had remote SSH.

But for how long?

An hour later it was still working.

This was a new record.

Three hours later it was still up and running.

Twenty-one hours later, we were still golden.

So it seems, to me at least, that the Plusnet router had a ‘go to sleep’ rule set in the firmware.

If it hadn’t seen any port 22 traffic for around 20 minutes, it shut down port 22.

And wouldn’t wake up when port 22 traffic came knocking on the door.

Marvellous.

Except not, obv.
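
In hindsight, a client-side keepalive might at least have papered over an idle timeout like that – a sketch of the sort of thing, set in ~/.ssh/config on the machine I connect from (the host alias and address are placeholders):

  Host datacentre-c1
      HostName <public address of server c1>
      ServerAliveInterval 60
      ServerAliveCountMax 3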

But the Netgear router fixed that, and I had remote SSH and public port 80 access to server c1.

For 72 hours.

ups downs and ups in the datacentre

It’s been a mixed bag on the datacentre project, this weekend.

I feel that I’m about half a dozen steps further forward, and have only taken one or two steps back.

But it has been a weekend of problems.

It actually took me a while to realise exactly what the biggest obstacle to making progress was.

I had downloaded CentOS 7, as this was to be my operating system and virtualisation agent of choice for the hosting servers.

I’d set aside Saturday as the main day of installation.

I inserted the DVD media containing CentOS 7, and booted up the first server.

The system went through its normal start-up/boot sequences, and I took this opportunity to set the iLO2 config.

Then I set the system language, keyboard language, timezone and country settings.

And then the OS wouldn’t let me go any further because it said I had no storage space.

Except the server had half a terabyte of storage, live and flashing a green light at me.

I ran through the boot cycle five times, and each time the OS said I had no storage, and the flashing green light continued to contradict it.

I stepped through the server boot sequence, and sure enough the array controller said there was plenty of storage space.

So I did a google, and you know what?

It turns out that CentOS 7 has a compatibility problem with the array controller in the HP DL380/G5.

So I downloaded CentOS 6 and burned the ISO to DVD.

Some hours later I put the CentOS 6 media in the server DVD drive and booted up.

Success!

After the installation I ran the yum update command, except it wouldn’t run.

I tried several commands for online activity and none of them worked.

A bit more googling told me that by default CentOS 6.5 produces a closed server – unlike CentOS 7, which is the product I’ve been doing all my reading on.

CentOS 6.5 needs to have the eth0 and eth1 ports opened by the root administrator.

I did this, and then ran yum update, and downloaded and installed 79Mb of update packages.

I rebooted the system and then successfully pinged a FQDN or two.

Then I shutdown the server and called it a day.

I had intended to get as far as enabling remote access via SSH, but I haven’t even got into firewall rules and security hardening.

And I know that’s another solid half-day of effort.

I’m guessing another 10 hours to bring just the first server in the cluster to where I want it.

So that’s next weekend then.

freelancer required (CentOS/RHEL)

I’m looking for a very experienced, remotely-located freelancer for some ad hoc work on a small datacentre.

The skills required are:

  • CentOS
  • DNS Server
  • LAMP admin
  • Postfix
  • MariaDB/MySQL
  • PHP
  • Perl
  • Virtualisation
  • VPN

The work is in two areas:

  1. Project delivery (to assist with consultancy and advice, and, if things go wrong, to take a hands-on role in installation, setup, config)
  2. Ad Hoc support (on an ‘as required’ basis)

The pay will be an agreed hourly rate, paid by whatever means you prefer (PayPal, etc.).

I’m not too fussy where in the world you’re based – timezone parity isn’t a big deal for me.

If you’re interested in the role, please drop me a line in the comments box, and I’ll email you back, and we can take the conversation forward.