<div>Speaking of nodes with disks vs. nodes without - I was thinking of equipping a small cluster (microwulf style) with each node having a single USB thumbdrive instead of a disk. I thought it might be easier than trying to get nodes to boot PXE style over the network. And it seemed to me that thumbdrives might be easier than disk-per-node to keep in sync: I'd just unplug them from the nodes, plug them into a USB hub on another computer where I build my distribution, copy files to them, then plug them back into their nodes. The USB drives would also serve any local filesystem needs, e.g., for logging or whatever. With a 1 GB key available for about $12 it seemed a pretty easy, cheap, and low-power solution. And no moving parts means the "disks" won't die for mechanical reasons (and they won't be written to enough to worry about flash wear).
<div>Does anyone have any thoughts on this? Tried it? Know why it won't work?</div>
<div>Thanks! -- David Bakin<br><br> </div>
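<div>The NFS-root layout Mark describes in the quoted reply below (read-only root over NFS, tmpfs for node-specific files, local disk for swap and /tmp) can be sketched as config fragments. All hostnames, subnets, and device names here are made up for illustration:</div>

```
# /etc/exports on the head node (hypothetical subnet and path):
/srv/nfsroot  192.168.1.0/24(ro,async,no_subtree_check)

# /etc/fstab baked into the shared image, as seen by each node:
server:/srv/nfsroot  /         nfs    ro,nolock,hard  0 0
tmpfs                /var/run  tmpfs  defaults        0 0
/dev/sda1            none      swap   sw              0 0
/dev/sda2            /tmp      ext3   defaults        0 0
```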
<div><span class="gmail_quote">On 12/21/07, <b class="gmail_sendername">Mark Hahn</b> <<a href="mailto:firstname.lastname@example.org">email@example.com</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">> 1. I'd like to go diskless. I've never done this before (the other two<br>> clusters are...diskful?). I've used Fedora on both of the previous
<br>> clusters. Is this a good choice for diskless? Any advice on where to<br>> start with diskless or operating system choices?<br><br>I prefer diskless installs:<br> - NFS root: fast, can be RO, no significant server load.
<br> - node-specific files on tmpfs: hardly any - pidfiles mostly.<br> - local disk for swap, /tmp: disks are cheap and fast, why not?<br><br>such an approach is really nicely scalable and very pleasant to<br>
maintain. a diskful cluster, by comparison, is often annoying:<br>disk failures actually matter, and it's not that hard for nodes<br>to get out of sync. systemimager does a good job of reimaging nodes,<br>but it's still not quite as "liberating" as just resetting a node,
<br>knowing it's ephemeral...<br><br>> 2. Given my budget (about 20K), I plan on going with GigE on about 24<br>> nodes. Am I right in thinking that faster network interconnects are<br>> just too expensive for this budget?
<br><br>Greg's right: buy the right interconnect, not just the cheapest.<br><br>> 3. I'll be spending most of my cluster's time diagonalizing large<br>> matrices. I plan on using ScaLAPACK eventually; currently I just use
<br>> LAPACK/ATLAS and do individual matrices on each node. The only thing<br><br>my experience with scalapack and diagonalization is with monster-sized<br>sparse matrices, which seem to be fairly latency-sensitive. if your
<br>workload is anything like that, gigabit isn't going to scale well,<br>at least with a conventional mpi+tcp stack. (I'm looking forward to<br>the OpenMX stack for this reason.)<br><br>> * Intel Core 2 Duo E6850 Conroe
3.0GHz ($280)<br>> * 8 GB (4 X 2 GB) DDR2 800 (~$200)<br><br>did you consider AMD? "large matrices" makes me think of memory balance<br>(bandwidth per flop), where AMD normally leads Intel.<br><br>> The motherboard does NOT have integrated video. Will I need video
<br>> output? Can you even build a node without it?<br><br>this is a bios issue: will the board boot without a video card?<br>I guess you can try configuring it with the card, then remove the card<br>and see if it still boots. I would make sure you can't get integrated
<br>video - these days, such boards are often cheaper.<br><br>> motherboards with adequate support for 8GB memory and 1333 FSB don't<br>> have video.<br><br>I would also consider AMD, which has lots of integrated-video options.
<br><br>> seems like a waste. From reading around, it seems like there is no<br>> advantage really to DDR3 memory...is that right? Any advice on the<br><br>power savings, probably some headroom in clock, but it's really at
<br>the early-adopter stage, I think.<br><br>regards, mark hahn.<br>_______________________________________________<br>Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a><br>To change your subscription (digest mode or unsubscribe) visit