[Beowulf] advice for newbie

Vincent Diepeveen diep at xs4all.nl
Wed Aug 22 11:27:11 PDT 2012


On Aug 22, 2012, at 6:23 PM, Duke Nguyen wrote:

> Hi Vincent,
>
> Thanks so much for your detailed messages with lots of suggestions.
> I will try to understand/catch up with what you said.
>
> On 8/20/12 2:17 PM, Vincent Diepeveen wrote:
>> On Aug 20, 2012, at 9:12 AM, Vincent Diepeveen wrote:
>>
>>> Look at ebay, there are cheap machines there for $150 or less with
>>> 8-core Xeon L5420 setups; i'm using those as well. For that $150
>>> you get 8GB ram as well.
>>>
>>> Boot them over the network.
>>>
>>> That's 16 * $150 = $2.4k.
>>>
>>> Now there might be a difference between my wishes and yours.
>>> For me the network is important, so i bought pci-e 2.0 motherboards;
>>>
>>> In short, i bought each component separately, which is more
>>> expensive than $150.
>
> Not sure if our wishes are different, but our plan was to buy
> components separately as well (and assemble them as boxes - which we
> thought should be cheaper than the same-configuration boxes being
> sold out there). Based on my own experience in genome research, the
> network is very important too, so we want the best network we can
> get within our budget.




>
>>>
>>> And those rackmounts make a huge noise; i'm doing it here with
>>> 14 cm fans, which isn't so noisy, but i assume your institute has a
>>> spare room - otherwise the noise may make the people who walk in
>>> there deaf before they're 30.
>
> We do have a dedicated room for the cluster.
>

That's a huge difference compared with my setup here. HUGE.

The only 2 disadvantages of the rackmounts offered on ebay are that
they are LOUD and not pci-e 2.0.

Ready-assembled rackmounts really are cheaper in this case, unless you
intend to build a cluster of hundreds of nodes.
The guys selling on ebay move these rackmounts by the thousands. You
can't build them cheaper yourself by buying components - you'll lose
bigtime on transport costs and component prices.

>>>
>>> As for you, buy 8 SATA drives of 2 TB and a $30 raid card from
>>> ebay. Those 2nd hand raid cards are dirt cheap, especially the
>>> pci-x ones, and with 8 drives you won't get more bandwidth than
>>> that anyway; most of those online nodes have such a slot.
>> online nodes => those $150 nodes have an empty pci-x slot
>>
>> Note pci-x is a lot slower than pci-e, yet for a file server with a
>> limited number of drives you won't get more than 700 MB/s out of it
>> anyway, and pci-x easily delivers that.
>>
>>> Those drives are 100 euro a piece here, so that's 800 euro = $1k
>>> or so. And $30 from ebay for a totally superior raid card. Put them
>>> in a ready rackmount that takes such drives - it's a couple of
>>> hundred dollars on ebay, with the same L5420's, motherboards and
>>> 8GB ram. So say $500 for that rackmount with drive bays you can
>>> plug into.
>>>
>>> Put in the drives, build a raid6 array, and your fileserver, the
>>> 17th machine, is ready to serve you at around a 700MB/s read speed.
>
> My understanding of this point is that we buy a file server with 8
> or more slots for hard drives, with the same motherboard and 8 GB
> RAM. What I don't understand is how we can get a speed of 700MB/s
> out of this file server?

The raid card delivers that speed. You also need a $30 second-hand
raid card to put in this machine.

It's true that a new pci-e card doing the same job would be way
faster, yet getting more out of it would also require more drives,
and therefore a raid card that can handle more than 8 drives.
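
If you'd rather skip the hardware card, or it turns out to be a plain
HBA, Linux software raid does roughly the same job. A rough sketch
with mdadm - the /dev/sd[b-i] names and the /export/data path are just
example values, not something you have to use:

  # create a raid6 array over the 8 data drives (2 drives of parity)
  mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
  # put a filesystem on it and mount it where you'll export it from
  mkfs.ext4 /dev/md0
  mkdir -p /export/data
  mount /dev/md0 /export/data
  # record the array layout so it assembles again at boot
  mdadm --detail --scan >> /etc/mdadm.conf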

Now i don't know much yet about doing i/o over infiniband, but the
idea is that you will use the infiniband network for i/o.
It has a bandwidth that's much larger than the built-in gigabit
ethernet.

So the aggregated speed you can expect is 700MB/s.
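
i haven't set this up over infiniband myself, so take this as a sketch
only: the usual route is IPoIB plus plain NFS. On the fileserver,
roughly (the 10.0.0.x addresses and the /export/data path are made-up
examples):

  # bring up the infiniband port as an ip interface (IPoIB)
  modprobe ib_ipoib
  ifconfig ib0 10.0.0.1 netmask 255.255.255.0 up
  # export the raid6 filesystem over NFS to the cluster subnet
  echo "/export/data 10.0.0.0/24(rw,async,no_root_squash)" >> /etc/exports
  service nfs start
  exportfs -ra

and on each compute node something like:

  mount -t nfs 10.0.0.1:/export/data /data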

If you have, like me, a bigger hunger for such i/o speed, then feel
free to say so, but that will involve giving each node its own really
fast scratch disk.

Since the switch you can buy on ebay will easily have 24 ports, you
can just as easily build a 24-node cluster, as you have a separate
room.

You will need enough ventilation of course - but i'm sure you know
more about that than i do :)

>
>>>
>>> Now i don't know the latest about genome research; the last PhD
>>> student i helped out there, his university used 90s software to do
>>> his research.
>>>
>>> That really required big crunching for months per calculation, at
>>> hundreds of cpu's, yet new commercial software finished each
>>> calculation within 15 minutes on a single core.
>>>
>>> That 90s software uses MPI if i recall correctly, but that'll
>>> depend upon what sort of software your guys want to use.
>>>
>>> Next you might want to buy the cheapest gigabit switch you can
>>> get, in order to boot all the nodes over the network using pci-e.
>> correction : pxe
>>
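
For the pxe boot itself, all the head node needs is dhcp + tftp
pointing the nodes at a boot image. Roughly like this - the addresses,
the config path and the boot filename are just example values, adjust
them to whatever provisioning setup you end up with:

  # head node: yum install dhcp tftp-server syslinux
  # fragment of /etc/dhcp/dhcpd.conf
  subnet 10.0.0.0 netmask 255.255.255.0 {
      range 10.0.0.100 10.0.0.150;   # addresses handed to the nodes
      next-server 10.0.0.1;          # tftp server holding the boot files
      filename "pxelinux.0";         # boot loader from the syslinux package
  }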
>>> It's possible those motherboards won't boot over infiniband, some
>>> might.
>>>
>>> Then i'd really advise you to buy a cheap 2nd hand infiniband
>>> switch, maybe DDR, for $300 or so.
>>> Cables are $20 a piece, times 17 = $340, plus a bunch of 2nd hand
>>> DDR infiniband cards - put one in each machine.
>>>
>>> So boot over the gigabit switch, assuming the motherboards don't
>>> boot over infiniband - they might actually boot over infiniband,
>>> in which case you don't need the gigabit switch. Either way, from
>>> there infiniband takes over and can serve you either as 10 gigabit
>>> network cards or for the MPI that much software in this area needs.
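
Once the cards and the switch are in, checking that the fabric is
alive is only a couple of commands. This assumes the OFED diagnostic
tools are installed; if the switch has no built-in subnet manager you
run opensm on one of the nodes first, and "node01" below is just an
example hostname:

  service opensmd start   # subnet manager, if the switch lacks one
  ibstat                  # port state should say Active
  ibhosts                 # lists the hosts visible on the fabric
  ib_send_bw              # bandwidth test: run bare on one node...
  ib_send_bw node01       # ...and with that node's name on another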
>>>
>>> So all that's left is to buy 17 DDR infiniband cards 2nd hand off
>>> ebay. Not sure about prices, maybe $80, maybe $100 a piece. Let's
>>> say it's under $1600.
>>>
>>> Now you're done with a 16-node cluster of 17 machines, of which 1
>>> is the fileserver, for part of the budget you had in mind.
>>>
>>> It's just noisy and loud.
>
> I might have missed this point, so I will try to wrap it up:
>
>  * 16 nodes: pci-e motherboard, 2x 4-core Xeon L5420, infiniband
> DDR card, no hard drives, about 4-8GB RAM
>  * file server: rackmount with 8 SATA hard drives, raid card, 8GB
> RAM, pci-e motherboard, 2x 4-core Xeon L5420
>  * infiniband switch
>
> We will first try to see if we can afford new hardware (prices
> from ebay):
>
> pci-e motherboard: ~ 17x 100 = 1700
> 2x 4core L5420 ~ 17x 200 = 3400
> 8G DDR3 ~ 17x 100 = 1700
> infiniband card ~ 17x 100 = 1700
> infiniband cables ~ 17x 20 = 340
> 8 SATA ~ 1000
> RAID card ~ 50
> file server ~ 500?
> infiniband switch ~ 500?
> a server rack (or PC shelf) ~ ?
> 5-6kW PSU ~ ?
>
> So that will be around $11k (not including the shelf/rack and the
> PSU). It looks like we can afford this system. Am I missing
> anything else? Are the above components also available for box PCs?
> I did some quick searches on ebay and they all seem to be for
> rack-mount servers.
>
>>>
>>> Also it's pretty low power compared to the alternatives. It'll eat
>>> 180 watts a node or so under full load. It's 170 watts a node here
>>> under full load (but that's with a much better psu).
>>>
>>> As for software to install, in case you decide on infiniband your
>>> choices are limited, as OFED doesn't give you many alternatives.
>>>
>>> Fedora Core or Scientific Linux are free and probably your only
>>> 2 options if you want free software that makes it easy to get
>>> things done the way you want.
>>>
>>> Then install OFED, which has openmpi and the other infiniband
>>> stuff for free.
>>>
>>> Probably Debian works as well, provided you use the exact kernel
>>> that OFED recommends.
>>> Any other kernel won't work. So you'd have to download some older
>>> Debian, get the exact recommended kernel, and then i guess OFED
>>> will install as well.
>
> Thanks for the suggestions about software. I think we will go with
> SL6. I actually tried a 3-node cluster (2 clients and 1 master)
> with SL6 and OpenMPI, and it works fine. For the infiniband cards
> I have zero experience, but I assume they are not too hard to
> install/configure?
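
On SL6 it's not too bad - the stock packages cover most of it.
Roughly (i'm quoting the package and group names from memory, so
double-check them):

  # on every node
  yum groupinstall "Infiniband Support"
  yum install openmpi openmpi-devel infiniband-diags perftest
  # load the drivers now and at every boot
  chkconfig rdma on
  service rdma start
  # quick sanity check that the card is seen
  ibv_devinfo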
>
> Thanks,
>
> D.
>
>>>
>>> Good Luck,
>>> Vincent
>>>
>>> On Aug 20, 2012, at 6:55 AM, Duke Nguyen wrote:
>>>
>>>> Hi folks,
>>>>
>>>> First let me say that I am a total novice with clusters and/or
>>>> beowulf. I am familiar with unix/linux and have a few years of
>>>> working in a cluster (HPC) environment, but I never had a chance
>>>> to design and admin a cluster.
>>>>
>>>> Now my new institute has decided to build a (small) cluster for
>>>> our next research focus area: genome research. The requirements
>>>> are simple: expandable and capable of doing genome research. The
>>>> budget is low, about $15,000, and we have decided:
>>>>
>>>>    * the cluster is a box cluster, not rack-mounted (well, mainly
>>>> because our funding is low)
>>>>    * the cluster OS is Scientific Linux with OpenMPI
>>>>    * the cluster is about 16 nodes with a master node;
>>>> expandability is a must
>>>>
>>>> Now the next step for us is to decide on hardware and other
>>>> aspects:
>>>>
>>>>    * any recommendation for a reliable 24-port gigabit switch for
>>>> the cluster? I have heard of the HP ProCurve 2824, but it is a
>>>> little hard to find in my country
>>>>    * should our boxes be diskless or should they have a hard disk
>>>> inside? I am still not very clear on the advantages of the
>>>> clients having an 80GB internal hard disk, except that their OS
>>>> is independent of the master node, and maybe faster (temporary)
>>>> data processing - but 80GB each is too small for genome research
>>>>    * hard drives/data storage: we want to have about 10TB of
>>>> storage, but I am not sure how to design this. Should all the
>>>> hard disks be in the master node, can they be spread over each of
>>>> the nodes, or should it be a NAS?
>>>>    * any recommendation for a mainboard (gigabit network, at
>>>> least 4 RAM slots) at about $200-$300 that is good for a cluster?
>>>>
>>>> I would love to hear any advice/suggestions from you, especially
>>>> if you have built a similar cluster for a similar purpose.
>>>>
>>>> Thank you in advance,
>>>>
>>>> Duke.
>>>>
>



