[Beowulf] [EXTERNAL] Re: Have machine, will compute: ESXi or bare metal?

Jonathan Aquilina jaquilina at eagleeyet.net
Mon Feb 10 05:57:35 PST 2020


I might have a solution for those of you who want to sort out the spaghetti of network cables. Let me know if you're interested and I'll send you the website.

Regards,
Jonathan Aquilina
Owner managing director

Phone (356) 20330099
Mobile (356) 79957942

Email sales at eagleeyet.net
________________________________
From: Beowulf <beowulf-bounces at beowulf.org> on behalf of Lux, Jim (US 337K) via Beowulf <beowulf at beowulf.org>
Sent: Monday, February 10, 2020 2:54:50 PM
To: beowulf at beowulf.org <beowulf at beowulf.org>
Subject: Re: [Beowulf] [EXTERNAL] Re: Have machine, will compute: ESXi or bare metal?


One comment on “building a cluster with VMs”

Part of bringing up a cluster is learning how to manage the interconnects, load software onto the nodes, and find the tools to manage a bunch of different machines simultaneously, along with issues around shared network drives, boot images, etc.

I would think (but have not tried) that the multi-VM approach is a bit too unrealistically easy. I assume you can do MPI between VMs, so you could certainly practice parallel coding. But it seems that spinning up identical instances that can all see the same host resources, on the same machine with the same display and keyboard, bypasses a lot of the hard stuff.
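For what it's worth, MPI between VMs does look just like MPI between physical nodes: each VM needs an SSH-reachable hostname listed in a hostfile. A minimal sketch, assuming Open MPI and two VMs named vm1 and vm2 (both names are placeholders, substitute your own):

```shell
# Open MPI hostfile: one line per node; "slots" caps the ranks
# launched on that node (here, 4 vCPUs per VM).
cat > hosts.txt <<'EOF'
vm1 slots=4
vm2 slots=4
EOF

# With Open MPI installed in every VM and passwordless SSH between them:
#   mpirun --hostfile hosts.txt -np 8 hostname
# should print a mix of "vm1" and "vm2" lines.
```

Getting the passwordless SSH, consistent software installs, and name resolution right across the VMs is, arguably, exactly the "hard stuff" worth practicing.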



OTOH, if you want a cheap way to get the booting working, control multiple machines, learn pdsh, etc., you could just get 3 or 4 RPis or Beagles and face all the problems of a real cluster (including managing a rat's nest of wires and cables).
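As a taste of what that practice looks like, here is a minimal pdsh sketch, assuming pdsh is installed on the head node and the Pis answer to pi1 through pi4 (hypothetical names, adjust for your own wiring):

```shell
# pdsh targets can come from a plain hosts file, one hostname per line,
# pointed to by the WCOLL environment variable.
cat > pis.txt <<'EOF'
pi1
pi2
pi3
pi4
EOF
export WCOLL=$PWD/pis.txt

# With WCOLL set, no -w flag is needed:
#   pdsh uptime                # run uptime on every node in parallel
#   pdsh uname -r | dshbak -c  # dshbak -c groups nodes with identical output
```

The dshbak -c trick is handy for spotting the one node whose kernel or package set has drifted from the rest.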





From: Beowulf <beowulf-bounces at beowulf.org> on behalf of "jaquilina at eagleeyet.net" <jaquilina at eagleeyet.net>
Date: Sunday, February 9, 2020 at 10:30 PM
To: "Renfro, Michael" <Renfro at tntech.edu>, "beowulf at beowulf.org" <beowulf at beowulf.org>
Subject: [EXTERNAL] Re: [Beowulf] Have machine, will compute: ESXi or bare metal?



Hi guys, just piggybacking on this thread.



I am considering upgrading my PC to 64 GB of RAM and setting it up as a Windows 10 based Hyper-V host. Would you say this is a good way to learn how to put a cluster together without the need to invest in a small number of servers? My PC has a Ryzen 5 3600 (6 cores / 12 threads) on an MSI B450 Tomahawk Max gaming motherboard, currently with 32 GB of DDR4-3200, upgradable to 64 GB.



Let me know your thoughts.



Regards,

Jonathan Aquilina



EagleEyeT

Phone +356 20330099

Sales – sales at eagleeyet.net<mailto:sales at eagleeyet.net>

Support – support at eagleeyet.net



From: Beowulf <beowulf-bounces at beowulf.org> On Behalf Of Renfro, Michael
Sent: Monday, 10 February 2020 03:17
To: beowulf at beowulf.org
Subject: Re: [Beowulf] Have machine, will compute: ESXi or bare metal?



No reason you can’t, especially if you’re not interested in benchmark runs (there’s a chance that if you ran a lot of heavily-loaded VMs, there could be CPU contention on the host).



Any cluster development work I’ve done lately has used VMware VMs exclusively.




On Feb 9, 2020, at 7:10 PM, Mark Kosmowski <mark.kosmowski at solidstatecomputation.com> wrote:

External Email Warning

This email originated from outside the university. Please use caution when opening attachments, clicking links, or responding to requests.

________________________________

I purchased a Cisco UCS C460 M2 (4 @ 10-core Xeons, 128 GB total RAM) for $115 in my local area.  If I use ESXi (free license), I am limited to 8 vCPUs per VM.  Could I make a virtual Beowulf cluster out of some of these VMs?  I'm thinking this way I can learn cluster admin without paying the power bill for my ancient Opteron boxes, and also scratch my illumos itch while computing on Linux.

Thank you!

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf

