john.hearns at mclaren.com
Mon Dec 3 11:04:57 PST 2012
I have also often wondered about the feasibility
of running something like a BOINC-distributed project locally
across all available personal machines in an organization
to accomplish large calculations that were perhaps
embarrassingly parallel and not as time-sensitive
as most HPC modeling and crunching seem to be.
It seems quite possible to set up and run such projects
and still keep all the work "in house."
I have always been against this, for the reason that from the outside people only think of CPU.
They think 'well then - there are plenty of powerful CPUs lying idle at night in my organisation.
I'm paying for them - why not get them doing this wonderful HPC stuff'
But it's not about CPU.
It is about the software environment - the availability and versions of scientific libraries (etc. etc.)
The heterogeneity of operating systems.
The paths to storage - the data you want to crunch is somewhere, and if you have gigabit to the desktop
you still may not have huge pipes between the routers in your organisation.
The workflow management.
I have always argued that the benefits are outweighed by the difficulties in a scenario like this.
HOWEVER I will correct myself - I think you do have a valid point.
As you say, you are considering a BOINC-type distributed process.
Maybe we could ship out virtual machines, where the software environment is controlled, which would be run on the desktop machines.
And as you say, maybe the paradigm for computation will change - we send all those units out there,
and expect some of them not to return.
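That "expect some not to return" paradigm can be sketched in a few lines. Below is a hypothetical Python toy, not BOINC's actual scheduler: each work unit is issued to several machines, we keep the first result that comes back, and re-issue anything that was lost. The names (run_unit, dispatch_round, crunch) and the loss/replication numbers are illustrative assumptions only.

```python
import random

REPLICATION = 3   # assumed: send each unit to this many machines
LOSS_RATE = 0.4   # assumed: chance a single copy never returns

def run_unit(unit, rng):
    """Simulate one desktop client: returns a result, or None if the
    machine was switched off and the unit never came back."""
    if rng.random() < LOSS_RATE:
        return None
    return unit * unit  # stand-in for the real computation

def dispatch_round(units, rng):
    """Issue every unit to REPLICATION clients; keep the first result
    that returns, and collect the units still missing."""
    results, missing = {}, []
    for u in units:
        for _ in range(REPLICATION):
            r = run_unit(u, rng)
            if r is not None:
                results[u] = r
                break
        else:
            missing.append(u)
    return results, missing

def crunch(units, seed=0):
    """Keep re-issuing lost units until every result has returned."""
    rng = random.Random(seed)
    done, pending = {}, list(units)
    while pending:
        got, pending = dispatch_round(pending, rng)
        done.update(got)
    return done
```

With these illustrative numbers, a unit is lost in a whole round only with probability 0.4^3 = 6.4%, so a couple of re-issue rounds normally mop up the stragglers - which is the point: the scheduler tolerates machines vanishing rather than requiring them all to stay up.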