[Beowulf] Re: Re: Home beowulf - NIC latencies

Rob Ross rross at mcs.anl.gov
Fri Feb 11 13:47:39 PST 2005


On Fri, 11 Feb 2005, Isaac Dooley wrote:

> >True. I always wonder what the low-CPU-usage advocates want the MPI 
> >process to do while, e.g., an MPI_Send() is executing. 
> >
> They don't want the process to do anything when they call MPI_Send(); 
> ideally, careful use of asynchronous or non-blocking messaging would 
> not burn CPU cycles at all.

Unless your code is multi-threaded, why do you care what the CPU 
utilization is during MPI_Send()?  Saving on the power bill?

When you call MPI_Send() semantically you've said "Hey, send this, and 
btw I can't do anything else until you are done."  Likewise for 
MPI_Recv().  So the implementation will be built to get things done as 
quickly as possible.
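To make the point concrete, here is a minimal sketch of those blocking semantics. It assumes a standard MPI C environment (build with mpicc, launch with mpirun -np 2); the ranks, tag, and payload are just illustrative:

```c
/* Sketch of blocking send/receive semantics. Assumes a standard
 * MPI installation; ranks, tag, and payload are illustrative. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Does not return until the send buffer is safe to reuse;
         * the implementation is free to poll hard to get there. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocks until the matching message has arrived. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Neither call gives the caller anything to do in the meantime, which is exactly why the implementation is entitled to spin.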

Often the path to the lowest latency leads to polling, which leads to
high CPU utilization.  The same trade-off applies to interrupt
mitigation, as mentioned earlier in the thread: you can save CPU by 
coalescing interrupts, or you can get better performance.

> Using MPI_Isend() allows programs to not waste CPU cycles waiting for
> the completion of a message transaction.

No, it allows the programmer to express that they want to send a message 
but not wait for it to complete right now.  The API doesn't specify the 
semantics of CPU utilization.  It cannot, because the API has no 
knowledge of the hardware that a given implementation will run on.

> This is critical for some tightly coupled fine grained applications.

What exactly is critical for tightly coupled, fine-grained applications?  

I would think that extremely low latency communication would be the most 
important factor, not whether or not we crank on the CPU to get that.

> Also it allows for overlapping computation and communication, which is
> beneficial.

Sure!
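The usual overlap pattern can be sketched as follows. This assumes a standard MPI C environment; do_local_work() is a hypothetical placeholder for computation that does not touch the buffers in flight:

```c
/* Sketch of overlapping computation with communication using
 * non-blocking MPI calls. Assumes a standard MPI installation;
 * do_local_work() is a hypothetical placeholder. */
#include <mpi.h>

void do_local_work(void);  /* hypothetical: work independent of the buffers */

void exchange(int peer, double *sendbuf, double *recvbuf, int n)
{
    MPI_Request reqs[2];

    /* Post the receive first so a matching send can land immediately. */
    MPI_Irecv(recvbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Computation that doesn't need the buffers proceeds here; how much
     * progress the transfer makes meanwhile is up to the implementation
     * and hardware, not the API. */
    do_local_work();

    /* Only now do we insist on completion. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}
```

Note that even here the standard only promises the calls return immediately; whether the transfer actually progresses during do_local_work() depends on the implementation and the NIC.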

Rob
---
Rob Ross, Mathematics and Computer Science Division, Argonne National Lab



