[Beowulf] getting a phi, dammit.

Brice Goglin brice.goglin at gmail.com
Wed Mar 6 01:00:17 PST 2013


On 06/03/2013 07:00, Mark Hahn wrote:
> and drop it into most any gen1/2/3 PCIe x16 slot and it'll work (assuming
> I provide the right power and cooling, of course.)
>> The issue here is that because we offer 8GB of memory on the cards, some
>> BIOSes are unable to map all of it through PCI, either due to bugs or a
>> failure to support that much memory. We are not the only ones suffering
>> from this.
> interesting.  but it seems like there are quite a few cards out there
> with 4-6GB (admittedly, mostly higher-end workstation/gp-gpu cards.)
> is this issue a bigger deal for Phi than for the NVIDIA family?
> is it more critical when using Phi in offload mode?

Those non-Phi cards have a lot of internal memory, but they only expose a
small part of it directly to the host through their PCI BARs (look at
lspci -vvv: you'll see only about 200 MB even for your big NVIDIA Teslas).
That small window can be mapped directly; everything else has to go
through DMA.
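
If you want to check this on your own cards without eyeballing the lspci
output, a rough sketch like the one below reads the sysfs resource file
and prints the size of each exposed region. The BDF in the path is a
placeholder; substitute the one lspci reports for your device.

/* Rough sketch: print the size of each region a PCI device exposes,
 * by parsing /sys/bus/pci/devices/<BDF>/resource on Linux.
 * The BDF below is a placeholder; use the one lspci shows for your card. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:03:00.0/resource";
    FILE *f = fopen(path, "r");
    if (!f) { perror("fopen"); return 1; }

    unsigned long long start, end, flags;
    int i = 0;
    /* One "start end flags" line per region (BARs 0-5 first, then the
     * expansion ROM); an all-zero line means the region is unused. */
    while (fscanf(f, "%llx %llx %llx", &start, &end, &flags) == 3) {
        unsigned long long size = start ? end - start + 1 : 0;
        printf("region %d: %llu MB\n", i++, size >> 20);
    }
    fclose(f);
    return 0;
}

The sum of those sizes is the ~200 MB figure above, regardless of how much
memory actually sits on the card.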

The Phi lets you map everything (at least from the kernel; I'm not sure
whether their driver lets you map it to userspace too), but the BIOS has
to do more work to set up such large mappings.
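
For what it's worth, the generic Linux way to map an exported memory BAR
into userspace is to mmap the corresponding sysfs resourceN file, roughly
as sketched below. This is not MIC-specific; the device path and BAR
number are placeholders, and whether the Phi driver exposes the whole 8GB
this way is exactly the part I'm unsure about.

/* Minimal sketch: map an exported memory BAR into userspace by
 * mmap'ing its sysfs resourceN file (generic Linux mechanism;
 * path and BAR number are placeholders). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:03:00.0/resource0";
    int fd = open(path, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* The file size is the BAR size; the mapping gives direct MMIO
     * access to the card's exported window. */
    volatile uint32_t *bar = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (bar == MAP_FAILED) { perror("mmap"); return 1; }

    printf("mapped %lld bytes, first word = 0x%08x\n",
           (long long)st.st_size, (unsigned)bar[0]);

    munmap((void *)bar, st.st_size);
    close(fd);
    return 0;
}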

Brice
