[Beowulf] advice for newbie

Vincent Diepeveen diep at xs4all.nl
Mon Aug 20 00:17:39 PDT 2012


On Aug 20, 2012, at 9:12 AM, Vincent Diepeveen wrote:

> Look at eBay: there are cheap machines there for $150 or less with
> 8 cores (dual Xeon L5420). I'm using those as well, and for that
> $150 you get 8 GB of RAM too.
>
> Boot them over the network.
>
> That's 16 * $150 = $2,400.
>
> Now there might be a difference between my wishes and yours.
> For me the network is important, so I bought PCIe 2.0 motherboards.
>
> In short, I bought each component separately, which is more
> expensive than $150 per node.
>
> And those rackmounts make a huge amount of noise. I'm doing it here
> with 14 cm fans, which is not so noisy, but I assume your institute
> has a spare room, because those rackmounts may make people who walk
> in there deaf before they're 30.
>
> As for you: buy 8 SATA drives of 2 TB each and a $30 RAID card from
> eBay. Those second-hand RAID cards are dirt cheap, especially the
> PCI-X ones, and with 8 drives you won't get more bandwidth anyway;
> most of those online nodes have that.

online nodes => those $150 nodes have an empty PCI-X slot

Note that PCI-X is a lot slower than PCIe, yet for a file server with
a limited number of drives you won't get more than 700 MB/s out of it
anyway, and PCI-X easily delivers that.
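To put a number on that claim, a quick sanity check of the PCI-X
ceiling (assuming a 64-bit slot at 133 MHz, the fastest common PCI-X
flavor):

```shell
# PCI-X peak bandwidth: a 64-bit parallel bus clocked at 133 MHz
bus_bytes=8        # 64 bits = 8 bytes per transfer
mhz=133
peak_mb_s=$(( bus_bytes * mhz ))
echo "${peak_mb_s} MB/s"   # 1064 MB/s theoretical, so 700 MB/s is realistic
```

Real-world throughput lands below the theoretical peak, but there is
enough headroom for the 700 MB/s figure.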

>
> Those drives are 100 euro apiece here, so that's 800 euro = $1k or
> so, plus $30 from eBay for a perfectly good RAID card. Put them in
> a ready-made rackmount that takes such drives; it's a couple of
> hundred dollars on eBay, with the same L5420s and motherboards and
> 8 GB of RAM. So say $500 for a rackmount with drive bays you can
> plug the disks into.
>
> Put in the drives, build a RAID6 array, and your file server, the
> 17th machine, is ready to serve you at around 700 MB/s read speed.
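Sketching that RAID6 step: with two drives' worth of parity, 8 x 2 TB
leaves 12 TB usable, comfortably above the 10 TB Duke asks for. The
mdadm commands in the comments are a Linux software-RAID sketch under
assumed device names (/dev/sdb through /dev/sdi); if the $30 card does
hardware RAID, you'd use its own tool instead.

```shell
# RAID6 keeps two drives' worth of parity, so usable capacity is:
drives=8
size_tb=2
usable_tb=$(( (drives - 2) * size_tb ))
echo "${usable_tb} TB usable"   # 12 TB of the 16 TB raw

# Creating the array with mdadm (illustrative only; device names
# are assumptions, check yours before running):
#   mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
#   mkfs.ext4 /dev/md0
#   mount /dev/md0 /export/data
```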
>
> Now, I don't know the latest in genome research; the last PhD
> student I helped out there was at a university using 90s software
> for his research.
>
> That really required big crunching, months per calculation on
> hundreds of CPUs, yet new commercial software finished each
> calculation within 15 minutes on a single core.
>
> That 90s software uses MPI, if I recall, but that'll depend on what
> sort of software your people want to use.
>
> Next you might want to buy the cheapest gigabit switch you can get,
> in order to boot all the nodes over the network using pci-e.

correction: PXE
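One minimal way to do the PXE side is dnsmasq in proxy-DHCP mode,
running alongside whatever DHCP server the network already has. The
subnet, TFTP root, and the use of pxelinux here are assumptions, not
the only option:

```
# /etc/dnsmasq.conf -- proxy-DHCP: answer PXE clients without handing
# out addresses (the existing DHCP server keeps doing that)
dhcp-range=192.168.1.0,proxy   # your subnet: an assumption, adjust it
enable-tftp
tftp-root=/srv/tftp            # holds pxelinux.0 plus kernel/initrd
dhcp-boot=pxelinux.0
pxe-service=x86PC,"Boot cluster node",pxelinux
```

Drop pxelinux.0 and a kernel/initrd into the TFTP root and the nodes
will pick them up at boot.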

>
> It's possible those motherboards won't boot over InfiniBand; some
> might.
>
> Then I'd really advise you to buy a cheap second-hand InfiniBand
> switch, maybe DDR, for $300 or so. Cables are $20 apiece times
> 17 = $340, plus a bunch of second-hand DDR InfiniBand cards, one
> to put in each machine.
>
> So boot over the gigabit switch, assuming the motherboards don't
> boot over InfiniBand (they might actually boot over InfiniBand, in
> which case you don't need the gigabit switch). After that,
> InfiniBand takes over and can serve you either as 10-gigabit
> network cards or for the MPI that much software in that area needs.
>
> So all that's left is to buy 17 second-hand DDR InfiniBand cards
> off eBay. Not sure about prices, maybe $80, maybe $100 apiece;
> let's say it's under $1,700.
>
> Now you're done: a 16-node cluster of 17 machines, of which one is
> the file server, for a fraction of the budget you had in mind.
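Adding up the estimates in this mail (all figures are the rough
second-hand prices quoted above, treating euros and dollars as
roughly 1:1 and taking the high end of each range):

```shell
# Rough parts tally for the 17-machine setup described above
nodes=$(( 16 * 150 ))      # 16 compute nodes off eBay
rackmount=500              # file-server chassis with drive bays
drives=1000                # 8 x 2 TB SATA, ~100 euro apiece
raid_card=30               # second-hand PCI-X RAID card
ib_switch=300              # second-hand DDR InfiniBand switch
ib_cables=$(( 17 * 20 ))   # one cable per machine
ib_cards=$(( 17 * 100 ))   # high end of the $80-$100 apiece guess
total=$(( nodes + rackmount + drives + raid_card + ib_switch + ib_cables + ib_cards ))
echo "\$${total}"          # well under the $15,000 budget
```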
>
> The only downside is that it's loud.
>
> Also, it's pretty low-power compared to the alternatives. It'll eat
> 180 watts a node or so under full load; it's 170 watts a node here
> under full load (but that's with a much better PSU).
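At those numbers the whole room's draw is easy to estimate (assuming
the file server pulls about the same as a compute node, which is an
assumption on my part):

```shell
watts_per_node=180   # worst-case full-load figure from above
machines=17          # 16 compute nodes plus the file server
total_watts=$(( watts_per_node * machines ))
echo "${total_watts} W"   # about 3.1 kW: size the circuit and cooling for it
```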
>
> As for the software to install in case you decide on InfiniBand,
> your choices are limited, as OFED doesn't give you many
> alternatives.
>
> Fedora or Scientific Linux are free and probably your only two
> options if you want free software that makes it easy to get things
> done the way you want.
>
> Then install OFED, which has Open MPI and the other InfiniBand
> stuff for free.
>
> Debian probably works as well, provided you use the exact kernel
> that OFED recommends; any other kernel won't work. So you'd have to
> download an older Debian, get the exact recommended kernel, and
> then I guess OFED will install as well.
>
> Good Luck,
> Vincent
>
> On Aug 20, 2012, at 6:55 AM, Duke Nguyen wrote:
>
>> Hi folks,
>>
>> First let me say that I am a total novice with clusters and/or
>> Beowulf. I am familiar with Unix/Linux and have a few years of
>> working in a cluster (HPC) environment, but I have never had a
>> chance to design and admin a cluster.
>>
>> Now my new institute has decided to build a (small) cluster for
>> our next research focus area: genome research. The requirements
>> are simple: expandable and capable of doing genome research. The
>> budget is low, about $15,000, and we have decided:
>>
>>   * the cluster is a box cluster, not rack-mounted (well, mainly
>> because our funding is low)
>>   * the cluster OS is Scientific Linux with Open MPI
>>   * the cluster is about 16 nodes with a master node;
>> expandability is a must
>>
>> Now the next step for us is to decide on hardware and other aspects:
>>
>>   * any recommendation for a reliable 24-port gigabit switch for
>> the cluster? I have heard of the HP ProCurve 2824, but it is a
>> little bit hard to find in my country
>>   * should our boxes be diskless, or should they have a hard disk
>> inside? I am still not very clear on the advantages of the clients
>> having an internal 80 GB hard disk, except that their OS would be
>> independent of the master node, and maybe faster (temporary) data
>> processing; but 80 GB each is too small for genome research
>>   * hard drives/data storage: we want to have about 10 TB of
>> storage, but I am not sure how to design this. Should all the hard
>> disks be in the master node, can they be spread over the nodes, or
>> should it be a NAS?
>>   * any recommendation for a mainboard (gigabit network, at least
>> 4 RAM slots) at about $200-$300 that is good for a cluster?
>>
>> I would love to hear any advice/suggestions from you, especially
>> if you have built a similar cluster for a similar purpose.
>>
>> Thank you in advance,
>>
>> Duke.
>>
>> _______________________________________________
>> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin  
>> Computing
>> To change your subscription (digest mode or unsubscribe) visit  
>> http://www.beowulf.org/mailman/listinfo/beowulf
>


