Yihua 852D+SE Rework Station

Picked up this soldering station on Kijiji yesterday. I'd seen these units on eBay before, but figured I might as well buy locally and save myself the shipping fees and the three-week lead time.

Been reading about these units online for a while. Basically, the same factory cranks these out under various brand names (Hakko, Yihua, Xytronix, etc.).

The premise is the same: the ability to solder SMD/SMT components and, of course, the ability to remove them.

The unit came nicely packaged. The guy I bought it from also threw in an additional ceramic heating element and 5 more soldering tips. Additional focus heads would be nice, but I'm sure I can find those on eBay for cheap.

The soldering iron itself does feel a bit cheap. I haven't tried soldering with it yet, so I'll have a more accurate opinion once I do. I wish the soldering iron base was a bit heavier, though; I don't like it when the base slides around while I'm trying to park the iron on it. I'll see if I can weigh it down a bit, since the inside of it is hollow.

The air gun is rather nice though. It heats up quickly and moves a fair bit of air on the high setting. Again, I haven't tried it on an actual board yet, but I'm really looking forward to it. The air gun shuts off automatically when placed in the cradle, which is a nice feature, but considering the source, I'll be sure to shut off the unit when I'm not using it.

Overall, this is a pretty decent unit for what it costs. Hopefully the soldering tips will last a while; I used to have a cheapie soldering iron that went through tips so quickly that it was more cost effective to buy a more expensive iron with a tip that lasted a very long time.

Awesome Logitech Speakers (X-140)

A while ago I picked up a pair of Logitech X-140 speakers, mainly to be used as background noise when working in the garage or the shop. Well, today they decided to stop working. So, I took them apart to see if I could figure out the cause of the failure.

Logitech advertises these as “Two-Driver Speakers” that let you experience “deeper bass”.

So imagine my surprise when I opened them up. So much for two drivers. The other “speaker” is just a passive diaphragm. There is only one “driver” per speaker. Talk about being cheated. Though, I can’t really expect much from $30 worth of computer speakers.

Anyways, it turned out to be a power wire that got loose due to shoddy soldering. A quick soldering job and the speakers were as good as new (i.e. not that good).

Intel SR1680MV (Continued)

I’ve added power indicator LEDs to both nodes. Is blue overdone by now?

The nodes have been running benchmark VMs for a few days with zero problems. I think these servers were definitely a kick-ass deal. Still waiting on NICs for the other server; I'll repeat the procedure on it, and once it passes all the tests, I'll drop it off at the datacenter. I'll retire a few PowerEdge 1950s from the rack there and will probably put those up on Kijiji as well. I've always been a fan of Dell PowerEdge servers. I've run PEs since the 1750 came out (that server was LOUD). The PE2950/1950 series were my favourite, though they sucked power like there's no tomorrow.

Windows 8! (Initial Impressions)

I’ve been running Windows 8 virtualized on and off for a while, but because it ran in a VM, I never really forced myself to use it; it was more of a novelty than anything. So I decided to take the plunge and installed the Windows 8 Release Preview on my primary workstation. Along with it I also installed Visual Studio 2012 and Office 2013 (which just came out on MSDN).

So far I must say I really like it. Sure, the UI changes take a bit of getting used to, and the lack of the Start button seemed a bit odd at first, but I got used to hitting the Start key on the keyboard pretty quickly. The OS is pretty snappy and seems stable.

I did notice that Firefox tends to choke a bit occasionally, so I'm using IE10 as the main browser for the moment. Thankfully there's a compatible Xmarks plugin, so all my settings have moved over to IE10. I'll give Chrome a spin under Windows 8 too.

I’m still installing software, but so far I haven't come across anything that doesn't work, which is always a good thing. All devices worked straight away during the install, including the Marvell SATA III controller and the USB 3 controller.

I was surprised to find out that Windows 8 ships with Hyper-V. Too bad VMware Workstation will not install while Hyper-V is enabled. It wasn't a tough decision which one to keep: Workstation has much better integration (Unity) than Hyper-V. I'll try re-enabling Hyper-V after the Workstation installation and see if it'll still work. I'm also running Oracle VirtualBox alongside VMware Workstation (haven't installed it on Windows 8 yet). There are some features of VirtualBox that I like over Workstation, but I'll leave that discussion for another day.

Visual Studio 2012 is nice. I'm really digging the new color scheme. Whichever project I start next, I'll use VS2012 and .NET 4.5 for it. I had no problems connecting to TFS, and the new TFS integration is very nice. Unfortunately (as expected), VS2012 wanted to migrate my existing project files to the new format, and since I didn't want to lose the ability to continue developing in VS2010, I'll leave that for now.

I am also looking forward to testing Windows Server 2012 with Hyper-V 3. Would love to see some real competition for vSphere.

Intel SR1680MV (Part 2)

Continued working on the server today. Installed a couple of Intel Pro/1000 VT quad-port network cards: two ports for SAN, one port for LAN, and one port for DMZ/WAN traffic on each node. The cards I had did not come with low-profile brackets, so I ended up rigging them in place so they wouldn't move. Not the best fix, but since this server will stay home for lab work and testing, it's not really all that important.

Both cards installed and ready to be plugged back into the server. The server does have rear RJ45 jacks, but they don't seem to be used for typical networking purposes, as they do not light up when hooked up to a switch. From what I've read, these servers required a Liquid Computing switch to operate.

vSphere had no trouble recognizing the Intel NICs.

Server racked up. The other SR1680MV server will be operated on once my low-profile network cards arrive.

Network cables hooked up…

…and patched into the switches. The Dell 5324 takes care of SAN traffic; the Dell 5224 takes care of LAN/DMZ traffic. The networks are segmented onto different VLANs too.

Noticed that a few minutes after bootup, the LEDs on the front of the servers go off. I assume that, due to the custom nature of these servers, the LEDs have some other meaning past bootup. It shouldn't be too hard to add a power indicator LED, since I saw that the nodes have an internal Molex header I can draw 5V from to power it. I'll make this my next project.
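
For sizing the series resistor, a quick back-of-the-envelope calculation like the one below should do. The forward voltage and current here are assumptions for a generic blue indicator LED, not datasheet values, so check whatever part actually gets used.

```python
# Series-resistor estimate for an indicator LED fed from the 5V Molex line.
# The LED figures below are assumptions for a generic blue LED, not datasheet values.
supply_v = 5.0       # 5V rail from the internal Molex header
led_vf = 3.2         # assumed forward voltage of a blue LED (V)
led_current = 0.015  # assumed target current: 15 mA

resistor_ohms = (supply_v - led_vf) / led_current
power_mw = (supply_v - led_vf) * led_current * 1000

print(f"{resistor_ohms:.0f} ohm resistor, dissipating about {power_mw:.0f} mW")
# ~120 ohms; rounding up to the next common value (150 ohms) just dims the LED slightly.
```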

Adding the server to the cluster was a snap. Will create some test VMs to stress these nodes for a few days.

Intel SR1680MV

A while ago I picked up a couple of Intel SR1680MV 1U Servers from Kijiji for next to nothing. Each server contains two nodes that are completely standalone systems.

Server 1: 2 Nodes
Each Node:
CPU: 2x Intel X5550 Quad Core HT (8 Threads)
RAM: 48GB DDR3 1066
HD: None

Server 2: 2 Nodes
Each Node:
CPU: 2x Intel E5520 Quad Core HT (8 Threads)
RAM: 72GB DDR3 1066
HD: 2x 2.5″ 256GB SATA

From what I can gather, these systems are modified versions of the standard SR1680 server, made for the now-defunct Liquid Computing out of Ottawa. I've tried digging up more info on these servers, as they use some kind of proprietary backplane with a 10GbE interface. There's an expandable on-board memory slot, several pin headers, RJ45 connectors, and some large processors covered by massive heatsinks which I haven't had a chance to look at yet. It would be really interesting to find out what all this hardware was capable of.

One of the servers will end up at a datacenter for hosting purposes; the other one I'll keep in the home rack. Because the backplane is of no real use to me, I'll use Intel quad-port NICs to add networking to the nodes.

I also ran into a small snag with the USB sticks I bought: they were physically too thick for both to fit into the adjacent slots of the internal USB connector. I ended up removing the outer sleeves and it worked like a charm. Odd design, to have a single USB header provide USB storage to two separate nodes.

Installing vSphere went without a hitch. What I found really interesting is that the 10GbE backplane interface is actually shared across the two nodes: each vSphere instance sees 4 NICs with the exact same MACs.

To Be Continued…

Boot Disks

More goodies arrived today. These will be used as internal boot drives for a couple of virtualization servers that I'm assembling.

Should be sufficiently fast to boot the OS (either vSphere or Proxmox). Can't wait to start on the servers. Still waiting for a couple of quad-port Intel NICs.

100% Super Happy Network Failure Occurrence

The D-Link DGS-3024 finally bit it. It hadn't occurred to me that there could be a problem, even though my network throughput had absolutely plummeted: I used to be able to move files across the network at an easy 100MB/s, yet the last time I copied an ISO it was crawling along at only 11MB/s. Of course, it figures that the switch would go down while I was on-site at a client. It took the whole network down, including the site-to-site VPN tunnels between my two DCs. No email, no source control, no nothing.
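
Just to put those numbers in perspective (rough back-of-the-envelope math, not a diagnosis of what the dying switch was actually doing): 11MB/s works out to roughly Fast Ethernet line rate, while 100MB/s needs a healthy gigabit link.

```python
# Rough conversion of file-copy rates to line rates (ignores protocol overhead).
def mb_per_s_to_mbit(mb_per_s: float) -> float:
    """Megabytes per second -> megabits per second."""
    return mb_per_s * 8

print(mb_per_s_to_mbit(100))  # 800 Mbit/s: needs a working gigabit link
print(mb_per_s_to_mbit(11))   # 88 Mbit/s: roughly what 100Mbit Ethernet delivers
```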

Oh well, I bought that switch on Kijiji about 3-4 years ago for about $80, so I can't really complain. I wonder now if I can bring it back to life.

It’s a good thing I held on to an extra Dell PowerConnect 5224 I bought on eBay about 8 months ago when I was setting up a new half-rack at one of my DCs.

What a pain in the ass it was to replace it, though. With all the patch cables between the two switches and the patch panels, I had to disconnect everything just to pull the dead switch out of the rack. Then, of course, I figured if I'm gonna go that far, I might as well reorganize the whole network. Pulled everything out: switches, patch cables, unplugged all the servers. Took about 4 hours to rewire everything just enough to get me back up and running. This included all the necessary connections and configuring the switches for the VLANs and trunking.

Once I get my new servers racked up I’ll wire the rest.

Just noticed how filthy the server case below the switch is. I guess I've got some house cleaning to do.