<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
<body bgcolor="#FFFFFF" text="#000000">
Not at all. Restricting jobs or processes to a given core or core
subset (CPU affinity) is very much in mind. Memory management is also
potentially rather useful: with most schedulers, a job's memory
usage is obtained by periodic polling. <br>
To date, memory limits have been enforced either by the scheduler
wrapping jobs with ulimit at startup, or by a local daemon sending a
kill signal when it notices that a job or job component has exceeded
its initially set limits.<br>
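The ulimit-wrapping approach can be sketched as a tiny wrapper script the scheduler might generate at job launch; the script and the MEM_LIMIT_KB value are illustrative, not any particular scheduler's actual code:

```shell
#!/bin/sh
# Illustrative wrapper a scheduler could emit around a job launch.
# MEM_LIMIT_KB is an assumed per-job parameter, here 1 GiB.
MEM_LIMIT_KB=1048576

ulimit -v "$MEM_LIMIT_KB"   # caps virtual address space for this shell
exec "$@"                   # replace the shell with the real job command
```

One limitation alluded to above: ulimit is per process, so a job that forks several workers can collectively use a multiple of the limit, and on breaching it the job sees only a failed allocation rather than a clear message from the scheduler.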
Both of the above approaches have limitations which can confuse users.
The cgroup approach effectively takes on the role of ulimits
on steroids, allowing accurate memory tracking and enforcement.
This ensures that the job output includes the actual memory usage
when the job is killed, as well as ensuring that the job cannot
exceed the set limits.
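The tracking-plus-enforcement combination might look like the following cgroup-v1 memory-controller fragment (the layout current at the time of this thread); the group name job1234 and the 4 GiB limit are made up for illustration, and the commands require root with the memory controller mounted:

```shell
# Create a per-job memory cgroup and set a hard limit.
mkdir /sys/fs/cgroup/memory/job1234
echo $((4 * 1024 * 1024 * 1024)) \
    > /sys/fs/cgroup/memory/job1234/memory.limit_in_bytes

# Move the job's root process into the group; children follow.
echo "$JOB_PID" > /sys/fs/cgroup/memory/job1234/tasks

# The kernel now both enforces the limit (exceeding it triggers the
# OOM killer inside the group by default) and keeps accurate counters
# that the scheduler can copy into the job output:
cat /sys/fs/cgroup/memory/job1234/memory.max_usage_in_bytes
```

Unlike ulimit, the accounting covers the whole process tree, so the peak figure reported on a kill reflects what the job actually used.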
<div class="moz-cite-prefix">On 27/11/13 15:39, John Hearns wrote:<br>
<div>I use cpusets very successfully.</div>
<div>I rather idly wonder whether, on a cluster with manycore nodes
(such as we have these days), cgroups should be used to keep
the OS processes on the first core,</div>
<div>and, as Igor says, let the scheduler run the applications in
the remaining cores.</div>
<div>The aim being to reduce 'OS jitter'.</div>
<div>I suppose it depends on the application being run of course.</div>
<div>Apologies if I am, yet again, wittering.</div>
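The first-core idea above can be sketched with the cpuset controller (again in the cgroup-v1 layout); the set names "sys" and "jobs" are illustrative, NCPUS is assumed to hold the node's core count, and root is required:

```shell
# Carve core 0 out for OS housekeeping, leave the rest for jobs.
cd /sys/fs/cgroup/cpuset
mkdir sys jobs
echo 0               > sys/cpuset.cpus    # OS daemons pinned to core 0
echo 1-$((NCPUS-1))  > jobs/cpuset.cpus   # application cores
echo 0 > sys/cpuset.mems                  # memory nodes must be set too
echo 0 > jobs/cpuset.mems

# Migrate existing system tasks into the "sys" set; kernel threads
# that cannot be moved are skipped silently.
for pid in $(cat tasks); do
    echo "$pid" > sys/tasks 2>/dev/null
done
```

The scheduler would then place each job's processes into "jobs" (or a per-job child cpuset), so applications never contend with OS noise on core 0.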
Beowulf mailing list, <a class="moz-txt-link-abbreviated" href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a> sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit <a class="moz-txt-link-freetext" href="http://www.beowulf.org/mailman/listinfo/beowulf">http://www.beowulf.org/mailman/listinfo/beowulf</a>
The Wellcome Trust Sanger Institute is operated by Genome Research
Limited, a charity registered in England with number 1021457 and a
company registered in England with number 2742969, whose registered
office is 215 Euston Road, London, NW1 2BE.