Intel SR1680MV

A while ago I picked up a couple of Intel SR1680MV 1U Servers from Kijiji for next to nothing. Each server contains two nodes that are completely standalone systems.

Server 1: 2 Nodes
Each Node:
CPU: 2x Intel X5550 Quad Core HT (8 Threads)
RAM: 48GB DDR3 1066
HD: None

Server 2: 2 Nodes
Each Node:
CPU: 2x Intel E5520 Quad Core HT (8 Threads)
RAM: 72GB DDR3 1066
HD: 2x 2.5″ 256GB SATA


From what I can gather, these systems are modified versions of the standard SR1680 server, built for a now-defunct company named Liquid Computing out of Ottawa. I've tried digging up more info on these servers as they use some kind of proprietary backplane with a 10GbE interface. There's an expandable on-board memory slot, several pin headers, RJ45 connectors, and some large processors covered by massive heatsinks which I haven't had a chance to look at yet. It'd be really interesting to find out what all this hardware was capable of.

One of the servers will end up at a datacenter for hosting purposes; the other I'll keep in my home rack. Because the backplane is of no real use to me, I'll use Intel quad-port NICs to add networking to the nodes.

I've also run into a small snag with the USB sticks I bought. They were physically too thick for both to fit into adjacent slots on the internal USB connector. I ended up removing the outer sleeve of each stick and it worked like a charm. Odd design to have a single USB header provide USB storage to two separate nodes.


Installing vSphere went without a hitch. What I found really interesting is that the 10GbE backplane interface is actually shared across the two nodes. Each vSphere instance sees 4 NICs with the exact same MACs.
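For what it's worth, the shared MACs are easy to confirm from the console of each node. Listing the physical NICs on both nodes and comparing the MAC column shows the duplication (the second command is the equivalent on older 4.x releases):

```
# List physical NICs with their MAC addresses on this host
esxcli network nic list

# Equivalent on older ESX/ESXi 4.x releases:
esxcfg-nics -l
```

Duplicate MACs on the same L2 segment would normally be a problem, so anything bridged over that backplane interface would need care.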

To Be Continued…

Boot Disks

More goodies arrived today. These will be used as internal boot drives for a couple of virtualization servers that I'm assembling.


Should be sufficiently fast to boot the OS (either vSphere or Proxmox). Can't wait to start on the servers. Still waiting for a couple of quad-port Intel NICs.

100% Super Happy Network Failure Occurrence

The D-Link DGS-3024 finally bit it. It hadn't occurred to me that there could be a problem, even though my network throughput had absolutely plummeted. I used to be able to move files across the network at an easy 100MB/s; the last time I copied an ISO, it was moving at only 11MB/s. Of course, it figures that the switch would go down while I was on-site at a client. It took the whole network down, including the site-to-site VPN tunnels between my two DCs. No email, no source control, no nothing.

Oh well, I bought that switch on Kijiji about 3-4 years ago for about $80. Can't complain, really. I wonder now if I can bring it back to life.

It’s a good thing I held on to an extra Dell PowerConnect 5224 I bought on eBay about 8 months ago when I was setting up a new half-rack at one of my DCs.

What a pain in the ass it was to replace, though. With all the patch cables between the two switches and the patch panels, I had to disconnect everything just to pull the dead switch out of the rack. Then, of course, I figured if I was going to go that far, I might as well reorganize the whole network. Pulled everything out: switches, patch cables, unplugged all the servers. It took about 4 hours to rewire everything just enough to get back up and running. This included all the necessary connections and configuring the switches for VLANs and trunking.
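Getting back up and running meant redoing the VLAN and trunk setup on the PowerConnect. As a rough sketch of the general shape (the VLAN IDs, names, and port numbers here are made up for illustration, and the exact commands vary by switch family and firmware, so check the manual for your unit):

```
! Create the VLANs (hypothetical IDs and names)
vlan database
 vlan 10 name servers
 vlan 20 name management
exit

! Tag both VLANs on the uplink port so it carries trunked traffic
interface ethernet 1/24
 switchport allowed vlan add 10 tagged
 switchport allowed vlan add 20 tagged
exit
```

The same tagging has to match on whatever is at the other end of the uplink, or traffic for those VLANs silently goes nowhere.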

Once I get my new servers racked up I’ll wire the rest.

Just noticed how filthy the server case below the switch is. I guess I've got some house cleaning to do.