[Beowulf] typical protocol for cleanup of /tmp: on reboot? cron job? tmpfs?
hahn at mcmaster.ca
Fri Aug 20 20:32:30 PDT 2010
> What's the typical protocol about the cleanup of /tmp folders? Do
we just leave the default (2-week) cron-driven tmpwatch in place.
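for reference, that stock two-week sweep is usually just a daily cron script along these lines (a sketch; tmpwatch's flag spellings and the exact age policy vary by distro and version):

```shell
# /etc/cron.daily/tmpwatch (sketch) -- 336 hours = 14 days.
# --mtime ages files by modification time; stock distro scripts often
# combine atime/mtime/ctime (-umc) so a recently-read file survives too.
/usr/sbin/tmpwatch --mtime 336 /tmp
```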
we don't have a lot of users who bother with /tmp, though, since
we have reasonable lustre-based storage on all our big clusters.
if we took the time to do it right, we'd probably make per-job
subdirectories in /tmp, then remove the tree after some delay (a few days).
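a minimal sketch of how the prolog/epilog side of that could look ($SLURM_JOB_ID and the 3-day delay are assumptions here -- substitute your batch system's job-id variable and whatever grace period suits you):

```shell
#!/bin/sh
# prolog: give each job its own scratch dir under /tmp, private to the
# user ($SLURM_JOB_ID is just an example scheduler variable)
JOBTMP="/tmp/job-${SLURM_JOB_ID:-demo}"
mkdir -p "$JOBTMP"
chmod 700 "$JOBTMP"

# cleanup pass (cron or epilog): remove per-job trees untouched for
# 3+ days, so users can still grab crash logs for a little while
find /tmp -maxdepth 1 -type d -name 'job-*' -mtime +3 -exec rm -rf {} +
```

freshly-created trees survive the sweep, so a job that just crashed keeps its logs until the delay expires.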
> people clean them on each reboot
on reboot doesn't make sense to me - why should the user care whether
we've rebooted a node?
> or at intervals with a cron (sounds like a bad idea).
> One other option that I've seen mentioned is mounting /tmp on a tmpfs.
> Is that a good idea? There's the risk of using up too much RAM if a
> program gets out of hand writing to /tmp.
well, tmpfs can be given a max size.
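a capped tmpfs /tmp might look like this (the 8 GiB figure is just an example; size it to your nodes' RAM):

```shell
# /etc/fstab entry capping /tmp at 8 GiB; a runaway writer then hits
# ENOSPC instead of consuming all of memory:
#
#   tmpfs  /tmp  tmpfs  size=8g,mode=1777  0 0
#
# the cap can also be changed live (needs root):
#   mount -o remount,size=8g /tmp
```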
> I suppose most programs ought to cleanup behind them on /tmp but then
> again there are bound to be bad apples.
to me, /tmp is for transient files: created during a job and normally
not expected to live beyond the job. but providing a delay so users
can grab files (say, logs after a job crash) is a little less BOFHish.
for files of more than transient value (say, checkpoints, outputs)
the user should write to another filesystem. we provide /home (but
very small, and discouraged for IO), /work (Lustre, bigger), /scratch
(Lustre, no quota, month-or-two expiry) and /tmp (disk, not managed
other than the 2-week expiry).
I'm not really sure how well we get users to go with the purpose
and tuning of these filesystems. we've never tried to do serious
profiling of user IO (strace, I suppose, not sure how much overhead
that would impose. a kernel module that hooked into VFS could be less
intrusive.) (for context, we're an academic HPC consortium, 21 institutions,
> 30 clusters, 3800 user accounts).
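as a rough first pass on the strace idea, a per-syscall summary of a single run is cheap to get (the `dd` below is just a stand-in for a user's job binary; expect real slowdown on syscall-heavy codes):

```shell
# -f follows forks, -c counts calls and time per syscall instead of
# logging each one; -o sends the summary table to a file
strace -f -c -o /tmp/io-summary.txt \
    dd if=/dev/zero of=/dev/null bs=4k count=1000
# /tmp/io-summary.txt now holds a table of calls, errors and time
# spent per syscall (read/write/openat/...), a crude IO profile
```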