[Beowulf] Reusing cores was Re: Bright Cluster Manager
chris at csamuel.org
Sat May 5 21:40:45 PDT 2018
On Sunday, 6 May 2018 9:30:13 AM AEST Lux, Jim (337K) wrote:
> but compare the "value" of the computational work those otherwise unused
> cores can do versus the "cost" of a more complex system management
> environment. Isn't the whole idea that "hardware is cheap, wetware is
Ah, but the routing is transparent to our users, because jobs are filtered into
the correct Slurm partitions by our submit filter.
If they request GPUs (which they must in order to see them, as we use cgroups
to control access) then they end up in the GPU partition; if they don't, they
end up in the ordinary partition.
If they ask for too many cores per node for a non-GPU job, they get a
message telling them the maximum they can request.
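A rough sketch of that routing logic is below. Note this is purely illustrative: Slurm's real submit filters are usually job_submit Lua plugins, and the function name, partition names, and the core limit here are all assumptions, not taken from our actual filter.

```python
# Hypothetical sketch of the submit-filter routing described above.
# Slurm's actual job_submit filter is typically a Lua plugin; this Python
# version just illustrates the decision logic.

MAX_CORES_NON_GPU = 16  # assumed site limit, not from the original post


def route_job(gres, cores_per_node):
    """Return the partition a job would land in under this routing scheme."""
    if gres and "gpu" in gres:
        # Jobs that request GPUs go to the GPU partition (and must request
        # them explicitly, since cgroups hide GPUs from other jobs).
        return "gpu"
    if cores_per_node > MAX_CORES_NON_GPU:
        # Non-GPU jobs over the per-node core limit are rejected with a
        # message stating the maximum.
        raise ValueError(
            f"non-GPU jobs may request at most "
            f"{MAX_CORES_NON_GPU} cores per node")
    return "ordinary"
```

For example, `route_job("gpu:2", 4)` would return `"gpu"`, while `route_job(None, 8)` would return `"ordinary"`.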
All the best!
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC