In reply to @CoilDomain's blog post, I have decided to embark on a little mental exercise myself: what would my home lab really look like if I had $50k to throw at it? And, of course, out of all places, I have turned to eBay as my main supplier of cheap hardware. I don't need support for a home lab, right?
First, of course, I would pick up a chassis to hold my beloved blade servers: a time-proven BladeCenter E - $1,590.93.
It already comes with a couple Cisco gigabit switches, and I'd get a couple more 2kW PSUs ($100) in order to keep both power domains happy for what comes later. Also, a BladeCenter with all 4 power supply slots filled sounds A LOT quieter than one with non-redundant power. Compare a vacuum cleaner to a Boeing 747 that is about to take off.
Then, drop in a couple IBM 7870CCU HS22 blades ($1,800 apiece), get four 4GB sticks of RAM so I end up with 10 gigs per server ($132 per stick), and add two QLogic CIOv cards for those blades, at $563 each.
So why did I get a BladeCenter for just two blades? No reason… except I want to add more servers! Since I really don't need to base my lab on the latest and greatest Nehalem line of Xeons, I can just as well play with HS21XM blades that are:
a) based on the previous-generation Xeon 5400 series, and
b) cheaper.
These look pretty good; I'll take three ($700 each). Now throw away the 1GB RAM sticks and get six 4GB kits at $100 each.
I think I'll play with iSCSI/NFS on these blades. Sounds like fun. FC is good, but I'm "open to other options".
And, just for old times' sake, get a couple bare-metal HS20s with local drives. I can always use them as shelves in my garage, since they're only $175 apiece.
So, my total on servers comes to $9,994.93.
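Just to prove the cents aren't made up, here's the tally as a trivial sanity-check script (every line item is a price quoted above):

```python
# Sanity-checking the server bill of materials (all prices quoted above).
server_parts = {
    "BladeCenter E chassis":           1590.93,
    "2 extra 2kW PSUs":                 100.00,
    "2 HS22 blades @ $1800 each":   2 * 1800.00,
    "4 4GB RAM sticks @ $132 each": 4 * 132.00,
    "2 QLogic CIOv cards @ $563":   2 * 563.00,
    "3 HS21XM blades @ $700 each":  3 * 700.00,
    "6 4GB RAM kits @ $100 each":   6 * 100.00,
    "2 HS20 blades @ $175 each":    2 * 175.00,
}
print(f"Server total: ${sum(server_parts.values()):,.2f}")  # -> $9,994.93
```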
Of course, I forgot the FC BladeCenter modules, but I'll add those later as I'm pricing out the storage portion of this exercise.
Storage… storage… so much has been argued about it over the past few years. One thing I know for sure: to stay on top of things, I can't afford not to have SSDs in my home lab. I mean, seriously, it's not like I'm building a production environment. I can always wait out the RMA on a ruined SSD (and believe me, Iometer does a consistently good job of ruining them, if you know what I mean).
And when choosing storage that uses SSDs for both read and write cache, and doesn't cost an arm and a leg, ZFS is a pretty obvious choice. To be more specific, Nexenta would fill that blank, since it actually offers a convenient way to manage the storage. Given the choice, I'd rather not have a command-line interface as my only way in.
So, what I need is write-oriented SSDs for the ZIL and read-oriented SSDs for the L2ARC. Normally the choice would be pretty clear: SLC when it comes to writes, MLC when it comes to reads. However, there is all this hype about the new SandForce controller that supposedly makes MLC flash as reliable as SLC in write-oriented workloads (10,000 random write IOPS is no joke), so I wouldn't mind playing with that as well.
So, I'll take a 50GB OWC Mercury Extreme Pro RE for write cache ($210), a 100GB drive of the same brand for read cache ($399), a 160GB Intel X25-M G2 ($405) to compare the read cache results, and a 32GB Intel X25-E ($380) for proper SLC write cache. This may seem like a waste of money, but I think that SSD is the future. Total: $1,394 on SSDs. Wait… let me also grab a 256GB Crucial RealSSD for $599 to give it a run for its money. $1,993 it is. I expect the Crucial to fail within a couple of months anyway, which means I can return it under warranty and buy new tires for my car.
Of course, no storage system is complete without spinning rust (which ultimately holds all the data; the SSDs are added for data sprints). Since I think SSDs will soon take the crown for low-latency data access, there is no reason to splurge on 15k drives: 10k's will do just as well. So, ten 600GB 10k Seagate Savvio 10K.4s it is ($471 apiece). I honestly don't want to go into details about SAS-to-SATA multiplexors and 24-disk 2U enclosures (partially because I'm under a couple of NDAs), but let's just say that $2,500 will cover that with ease, especially since redundant SAS controllers are a bit much for a home lab.
I'm getting a little tired of pricing out all the bits and pieces already… I'm going to estimate $5,000 for a decent storage controller (dual quad-core Nehalems, 32GB RAM, an FC card plus quad gigabit, a SAS controller) that should theoretically be able to make use of all my SSD cache and de-dupe everything quickly and efficiently. This takes the total storage cost to $14,203.
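Since I'm already hand-waving about the controller, here's roughly how the pool could be laid out on the Nexenta side once everything is racked. This is only a sketch: the pool name, the device paths, and the mirrored-pairs layout are placeholders I made up, not anything the shopping list dictates.

```python
import subprocess  # uncomment the run() line below to execute for real

# Hypothetical Solaris-style device paths; the real c#t#d# names depend
# on how the SAS controller enumerates the drives.
POOL = "tank"
SPINNERS = [f"c1t{i}d0" for i in range(10)]  # the ten Savvio 10K.4s
SLOG = "c2t0d0"   # write-oriented SSD (say, the X25-E) as a dedicated ZIL device
CACHE = "c2t1d0"  # read-oriented SSD (say, the 100GB OWC) as L2ARC

# Lay the spinners out as five mirrored pairs (my layout choice, not gospel).
vdevs = []
for a, b in zip(SPINNERS[0::2], SPINNERS[1::2]):
    vdevs += ["mirror", a, b]

commands = [
    ["zpool", "create", POOL] + vdevs,
    ["zpool", "add", POOL, "log", SLOG],     # sync writes land on the SLOG
    ["zpool", "add", POOL, "cache", CACHE],  # hot reads spill over into L2ARC
    ["zfs", "set", "dedup=on", POOL],        # the part the $5k controller sweats over
]

for cmd in commands:
    print(" ".join(cmd))  # dry run: just print the zpool/zfs commands
    # subprocess.run(cmd, check=True)
```

Swapping the SLOG between the X25-E and one of the SandForce drives is a two-command experiment (zpool remove the old log device, zpool add the new one), which is exactly the kind of playing around this lab is for.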
Now comes the fun part: networking - both FC and Ethernet. A Cisco ASA 5505 can be found for around $400 on eBay; a Juniper SRX100B would be about $600; a QLogic SANbox 1400 goes for $900; a 2Gb FC switch module for the BladeCenter is another $600. And, for the most fun part, a fully populated Cisco 6509 chassis for only $1,395 - that'll last me a while. :)
So, I have the switches and firewalls, but I don't yet have any routers. Hmm, D-Link? Netgear? Nah, I think I'll get a Cisco 2600 series: it's cheap, abundant, and has more than enough capacity for my home broadband connection ($100 tops). In fact, I think I'll get three of those, just to practice weird routing schemes on real hardware: Cisco emulators will work fine for everything else.
Total for the network portion: $4,195. Well, in order to be totally realistic, let's say power/network/FC wiring for everything will add another grand.
So, the total for my dream home lab is $29,393.
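And, since I've been rounding here and there, one last tally of everything (the storage line items are from the section above; the seven missing cents are the chassis's fault):

```python
# The whole bill, section by section (figures from above).
bill = {
    "servers":              9994.93,
    "SSDs":                 1993.00,
    "spinning disks":   10 * 471.00,
    "disk enclosure":       2500.00,
    "storage controller":   5000.00,
    "network gear":         4195.00,
    "wiring":               1000.00,
}
total = sum(bill.values())
print(f"Grand total:  ${total:,.2f}")          # -> $29,392.93, call it $29,393
print(f"Upgrade fund: ${50000 - total:,.2f}")  # -> $20,607.07 left of the $50k
```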
Did I leave anything important out? I don't think so. A tape library, maybe, but for a home lab that's just overkill. Packet shapers? Proxy servers? Deep packet inspection appliances and spam filters? All of that can be done either as a virtual appliance or with open-source software (and, a lot of the time, both at once).
I think this setup has plenty of hardware to simulate, on a smaller scale, any production situation I could ever encounter. Except for, maybe, 100 VDI users going to http://buffalowildwings.com all at once (courtesy of @JoeShonk for the link). I've got a rudimentary yet full-featured switched SAN that can do FC, NFS, iSCSI and CIFS; my network setup is probably way overkill; my storage kicks ass for anything I can think of throwing at it; and a total of 52 gigs of RAM shared across the "server farm" (not including the storage controller) is more than I've seen in most ~100-user production environments. Could I build this for less? Definitely.
More? Well, I could make the costs astronomical just to scare people away from IT. However, the truth is, you don't need brand new under-warranty equipment for a home lab - hell, you don't even need it for production a lot of times, if you take time to properly set everything up and make sure you have no single point of failure.
And, on top of everything, I still have over $20k to spend in case the hypothetical home lab suddenly needs an upgrade.
Oh wait, I forgot a rack. Super-conveniently, the cheapest IKEA corner tables can serve me well: http://lifehacker.com/5459719/build-a-network-rack-with-an-ikea-table
That's all.
Sunday, July 25, 2010