SR1680MV – Conclusion

The time has come to rack up the servers at the data center. The plan is to decommission the old Dell PowerEdge 1950s and replace them with these Intel SR1680MV servers.

Out with the old. The PE1950s have been real workhorses. Sure, they top out at 32GB of RAM, but back in the Windows Server 2003 days that was enough to run a lot of instances of the OS, plus countless instances of CentOS Linux. I was originally running two vSphere nodes and one ProxMox node.
Psylocke is a purpose-built firewall. It’s made from off-the-shelf components: an Intel motherboard, an Intel Celeron 440, and 2GB of RAM. Connections are handled by an Intel Pro/1000 VT quad NIC. The OS is pfSense 2.0.1. This combination has been shown to easily push 500Mbit of inter-zone traffic. pfSense is by far the most powerful free firewall solution I’ve come across.
I’m planning to convert a Firebox firewall to pfSense at some time in the future. But that’s another project.

New servers racked up and fired up. I had to reconfigure the servers on site because some VLANs had to be moved around when I rewired some of the switch connections. The file server is another Intel server: an SR1550 running two E5160 3.0GHz Xeons with an external SAS controller, running NexentaStor 3.1. NexentaStor is a Solaris-based file server OS that uses ZFS as its file system. ZFS provides on-the-fly block-level deduplication and on-the-fly compression. NexentaStor has proven to be a rock-solid file server solution. With iSCSI MPIO support, balancing bandwidth across multiple NICs is trivial. ZFS can also use SSDs as an intermediate cache, which makes random access performance skyrocket thanks to block-level caching.
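
For reference, the ZFS features above boil down to a couple of one-liners on the storage box. Here’s a rough sketch (wrapped in Python purely for illustration); the pool name “tank”, the dataset “tank/vmstore”, and the cache device ID are placeholders, not my actual layout:

```python
# Sketch of enabling the ZFS features mentioned above.
# "tank", "tank/vmstore" and "c2t0d0" are placeholder names.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Block-level deduplication and compression are per-dataset properties,
# applied on the fly to new writes:
run(["zfs", "set", "dedup=on", "tank/vmstore"])
run(["zfs", "set", "compression=on", "tank/vmstore"])

# An SSD added as a cache device becomes a second-level read cache,
# which is what boosts random-access performance:
run(["zpool", "add", "tank", "cache", "c2t0d0"])
```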

Now I’ve got myself some spare PE1950s. Two of them will end up for sale on Kijiji. I’ll keep one of them for the lab at home, at least until I can score another deal on a server like the SR1680MV.

I’ve wasted no time racking up the servers at home. They need a bit of cleaning, since they haven’t been cleaned in over 8 months. I’ll wipe the drives and post them on Kijiji. Hopefully I can still get decent coin out of them; the 4GB modules that make up the 32GB of RAM are still pretty expensive nowadays.

Windows 8 :(

Sad to say, but I deleted Windows 8 and re-installed Windows 7 today. The Windows 8 Preview is just incomplete enough that I can’t get stuff done in a timely fashion. When the RTM version hits MSDN I’ll download it and try it in a VM to see if Microsoft resolved any of the issues I ran into during normal use. Otherwise, I will probably wait till next year to complete the transition.

SR1680MV – More Testing

I picked up a whole bunch of Western Digital VelociRaptors. These are 10,000 RPM SATA “prosumer” versions of enterprise drives: basically high-performance, RAID-ready drives priced for the computer enthusiast. I’ll be using them to provide boot and local storage to the SR1680MV nodes running ProxMox, since the servers do not support SAS drives, which is rather odd for server hardware.

Since the SR1680MVs take 2.5″ drives, I had to remove the drives from the “Ice Pack” mounts. But oh, what’s this? Western Digital uses “tamper-proof” screws to secure the drives to the Ice Pack.

It’s only tamper-proof if nobody can actually tamper with it.

A couple of minutes later and I have some nice 2.5″ 10K drives ready for installation into the servers. Additionally, the Ice Packs can be reused to mount 2.5″ SAS or SSD drives into 3.5″ hot-swap bays!

Windows 8 – The first week

So, after a solid week of seriously using Windows 8 Preview on my primary workstation, I can finally give an educated opinion on the product.

VPN
Epic. Fail. What was Microsoft thinking? What was wrong with the previous VPN dialog? Due to the way the new dialog is implemented, it is impossible to copy/paste credentials into it, because the dialog cancels out as soon as it loses focus. PPTP VPN is also broken in the Win 8 Preview: I can no longer log into one of my VPN networks; Windows reports invalid credentials even though I have no problem logging into the same VPN server from Windows 7. This is definitely a HUGE problem for me in the long run, and I hope it’s fixed in the RTM. For now I’m using a VM to VPN into a client site.
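
One workaround I may fall back on for the copy/paste problem is skipping the new dialog entirely and dialing the phonebook entry from the command line with rasdial. A rough sketch; the connection name and credentials below are placeholders:

```python
# Dial an existing Windows VPN phonebook entry without touching the
# new credentials dialog. Entry name and credentials are placeholders.
import subprocess

result = subprocess.run(
    ["rasdial", "ClientVPN", "someuser", "s3cret"],
    capture_output=True, text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("rasdial failed with code", result.returncode)
```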

Start Button
OK, I got used to not having the Start button pretty quickly. However, the implementation of the UI on multiple screens is completely broken. When I click an icon on my right screen, why does it slide out on the center screen? The “start” menu is so full of crap that it’s impossible to find the software I’m looking for; fortunately I’m used to type-searching for the program I want, so I don’t often have to hunt through that mess. Also, when running a Windows VM in full screen, attempting to click the Start button in the VM causes the host to pop up the Start menu, because it decides that it wants to pop up on hover. It does this even on screens that have no taskbar!

vSphere Client
Works for the most part, but I can’t connect to the VM console. I can’t really blame Microsoft entirely for it, but it helps. I hope VMware steps up and releases an update to the vSphere client. This is kinda important, but not critical, since VMware Workstation 8 supports connecting to vSphere hosts, and opening a console through it works fine.

Devices
Here again, most devices worked without a problem. The only issue I had was with the Creative X-Fi Titanium card. I installed the Windows 7 driver, which was fine until the next reboot, when all traces of the install disappeared. Attempting to re-install the driver was met with “You already installed the driver, must reboot before installing again”. Countless reboots and registry tweaks later, I could not get rid of the error. Even after finding Windows 8 beta drivers for it: same message. I gave up and installed an Asus Xonar card with the hacked Unified drivers, since no official or beta Windows 8 drivers are available for it either.

There’s no doubt that, without addressing some of these UI quirks, Windows 8 is going to be the new Vista. I’m hoping Windows 9 is right around the corner.

SR1680MV – Continued

The SR1680MVs will be receiving some low-profile NICs from eBay that arrived today: Intel Pro/1000 dual-port PCIe x4 cards.

I originally mounted them in the second server’s backplane. This SR1680MV was originally supposed to run ProxMox, but it occurred to me that it would be better to have one node from each server run ProxMox and the other node run vSphere, since these servers do not have redundant power supplies. That way, if one server’s power supply fails, my entire infrastructure won’t come crashing down; at worst, one node from each cluster will be affected, not the entire cluster.

I’ve added the new power LEDs to the second set of nodes too. All racked up and ready for testing. I’m hoping to have these servers racked up in the datacenter by next weekend.

Yihua 852D+SE Rework Station

Picked up this soldering station on Kijiji yesterday. I’d seen these units on eBay before and figured I might as well pick one up locally and save myself the shipping fees and the three-week lead time.

Been reading about these units online for a while. Basically, the same factory cranks these out under various brand names (Hakko, Yihua, Xytronix, etc.).

The premise is the same for all of them: the ability to solder SMD/SMT components and, of course, the ability to remove them.

The unit came nicely packaged. The guy I bought it from also threw in an additional ceramic heating element and 5 more soldering tips. Additional focus heads would be nice, but I’m sure I can find those on eBay for cheap.

The soldering iron itself does feel a bit cheap. I haven’t tried soldering with it yet, so I’ll have a more accurate opinion of it then. I wish the soldering iron base was a bit heavier, though; I don’t like it when the base slides around while I’m trying to park the iron on it. I’ll see if I can weigh it down a bit, since the inside of it is hollow.

The air gun is rather nice though. It heats up quickly and moves a fair bit of air on the high setting. Again, I haven’t tried it on an actual board yet, but I’m really looking forward to it. The air gun shuts off automatically when placed in the cradle, which is a nice feature, but considering the source, I’ll be sure to shut off the unit when I’m not using it.

Overall, this is a pretty decent unit for what it costs. Hopefully the soldering tips will last a fair bit; I used to have a cheapie soldering iron that went through tips so quickly that it was more cost-effective to buy a more expensive iron with a tip that lasted a very long time.

Awesome Logitech Speakers (X-140)

A while ago I picked up a pair of Logitech X-140 speakers, mainly to be used as background noise when working in the garage or the shop. Well, today they decided to stop working. So, I took them apart to see if I could figure out the cause of the failure.

Logitech advertises these as “Two-Driver Speakers” that let you experience “deeper bass”.

So imagine my surprise when I opened them up. So much for two drivers. The other “speaker” is just a passive diaphragm. There is only one “driver” per speaker. Talk about being cheated. Though, I can’t really expect much from $30 worth of computer speakers.

Anyways, it turned out to be a power wire that got loose due to shoddy soldering. A quick soldering job and the speakers were as good as new (i.e. not that good).

Intel SR1680MV (Continued)

I’ve added power indicator LEDs to both nodes. Is blue overdone by now?

The nodes have been running benchmark VMs for a few days with zero problems. I think these servers were definitely a kick-ass deal. I’m still waiting on NICs for the other server; I’ll repeat the procedure, and once it passes all tests, I’ll drop it off at the datacenter. I’ll retire a few PowerEdge 1950s from the rack there and will probably put those up on Kijiji as well. I’ve always been a fan of Dell PowerEdge servers; I’ve run PEs since the 1750 came out (that server was LOUD). The PE2950/1950 series were my favourite, though they sucked power like there’s no tomorrow.

Windows 8! (Initial Impressions)

I’ve been running Windows 8 virtualized on and off for a while, but because it ran in a VM, I never really forced myself to use it; it was more of a novelty than anything. So I decided to take the plunge and installed Windows 8 Release Preview on my primary workstation. Along with it I also installed Visual Studio 2012 and Office 2013 (which just came out on MSDN).

So far I must say I really like it. Sure, the UI changes take a bit of getting used to, and the lack of the Start button seemed a bit odd at first, but I got used to hitting the Start button on the keyboard pretty quickly. The OS is pretty snappy and seems stable.

I did notice that Firefox tends to choke a bit occasionally, so I’m using IE10 as the main browser for the moment. Thankfully there’s a compatible Xmarks plugin, so all my settings have moved over to IE10. I’ll give Chrome a spin under Windows 8 too.

I’m still installing software, but so far I haven’t come across anything that hasn’t worked, which is always a good thing. All devices worked straight away during the install, including the Marvell SATA III controller and the USB 3 controller.

I was surprised to find out that Windows 8 ships with Hyper-V. Too bad VMware Workstation will not install while Hyper-V is enabled. It wasn’t a tough decision which one to keep: Workstation has much better integration (Unity) than Hyper-V. I’ll try re-enabling Hyper-V after the Workstation installation and see if it’ll still work. I’m also running Oracle VirtualBox alongside VMware Workstation (not installed on Windows 8 yet). There are some features of VirtualBox that I like over Workstation, but I’ll leave that discussion for another day.
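
If I do end up wanting both on the same box, one option I’ve seen is to leave the Hyper-V role installed but keep the hypervisor from loading at boot, flipping it back on when needed. A quick sketch of the idea (elevated prompt and a reboot required); consider this an assumption on my part rather than something I’ve verified on this build:

```python
# Toggle whether the Hyper-V hypervisor loads at boot so that VMware
# Workstation can run, without removing the Hyper-V role entirely.
# Must be run from an elevated prompt; a reboot is needed to take effect.
import subprocess
import sys

def set_hypervisor(enabled: bool) -> None:
    state = "auto" if enabled else "off"
    subprocess.run(
        ["bcdedit", "/set", "hypervisorlaunchtype", state],
        check=True,
    )
    print(f"hypervisorlaunchtype = {state}; reboot to apply")

if __name__ == "__main__":
    # e.g. "python hyperv_toggle.py off" before a Workstation session
    set_hypervisor(len(sys.argv) > 1 and sys.argv[1].lower() == "on")
```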

Visual Studio 2012 is nice. I’m really digging the new color scheme. Whatever project I start next, I’ll use VS2012 and .NET 4.5 for it. I had no problems connecting to TFS, and the new TFS integration is very nice. Unfortunately (as expected), VS2012 wanted to migrate my existing project files, and I didn’t want to lose the ability to continue developing in VS2010, so I’ll leave that alone for now.

I am also looking forward to testing Windows Server 2012 with Hyper-V 3. Would love to see some real competition for vSphere.

Intel SR1680MV (Part 2)

Continued working on the server today. Installed a couple of Intel Pro/1000 VT quad-port network cards: two ports for SAN, one port for LAN, and one port for DMZ/WAN traffic on each node. The cards I had did not have low-profile brackets, so I ended up rigging them so they wouldn’t move. Not the best fix, but since this server will stay home for lab work and testing, it’s not really all that important.

Both cards are installed and ready to be plugged back into the server. The server does have rear RJ45 jacks, but they do not seem to be used for typical networking purposes, as they do not light up when hooked up to a switch. From what I’ve read, these servers required a Liquid Computing switch to operate.

vSphere had no trouble recognizing the Intel NICs.

Server racked up. The other SR1680MV server will be operated on once my low-profile network cards arrive.

Network cables hooked up…

…and patched into the switches. The Dell 5324 takes care of SAN traffic; the Dell 5224 takes care of LAN/DMZ traffic. The networks are segmented onto different VLANs too.

Noticed that a few minutes after bootup, the LEDs on the front of the servers go off. I assume that, due to the custom nature of the servers, the LEDs have some other meaning past bootup. It shouldn’t be too hard to add a power LED indicator, as I saw that the nodes have an internal Molex header I can draw 5V from to power the LED. I’ll make this my next project.
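
Before wiring anything up, a quick back-of-the-envelope check of the series resistor needed off the 5V Molex feed. The forward voltage and current below are assumptions for a typical blue LED, not datasheet values for whatever LED I end up using:

```python
# Series resistor for an LED fed from the 5V Molex rail.
# Vf and If are assumed values for a generic blue LED.
SUPPLY_V = 5.0    # 5V from the internal Molex header
LED_VF = 3.2      # assumed forward voltage of a blue LED
LED_I = 0.015     # assumed target current, 15 mA

r_ohms = (SUPPLY_V - LED_VF) / LED_I    # voltage across the resistor / current
p_watts = (SUPPLY_V - LED_VF) * LED_I   # power dissipated in the resistor

print(f"Resistor: {r_ohms:.0f} ohm (120 ohm is a standard value)")
print(f"Dissipation: {p_watts * 1000:.0f} mW (a 1/4 W part is plenty)")
```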

Adding the server to the cluster was a snap. Will create some test VMs to stress these nodes for a few days.