[Beowulf] New Spectre attacks - no software mitigation - what impact for HPC?

Lux, Jim (337K) james.p.lux at jpl.nasa.gov
Tue Jul 17 11:32:24 PDT 2018


Perhaps it’s more about “system management” for HPC – this sort of vulnerability only occurs when one process is able to see what’s going on in another process.  From a “security” standpoint, the answer is simple – don’t share the same hardware between processes owned by different users (presumably a given user doesn’t care about “leaking” from one of his/her processes to another).
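For context, the code shape these attacks exploit is the bounds-check-bypass gadget from the original Spectre disclosure. A minimal sketch of the vulnerable pattern is below – the array names are hypothetical and the actual cache-timing readout is omitted, so this is an illustration of the shape, not a working exploit:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical victim data layout (illustrative only). */
static uint8_t array1[16];        /* attacker-influenced index x reaches this */
static size_t  array1_size = 16;
static uint8_t array2[256 * 512]; /* probe array: one cache line per byte value */

/* The classic Spectre v1 gadget: the bounds check is architecturally
 * correct, but a mistrained branch predictor can speculatively execute
 * the body with an out-of-bounds x.  The speculative load of
 * array2[array1[x] * 512] leaves a secret-dependent cache footprint
 * that a timing side channel can later read out. */
uint8_t victim_function(size_t x) {
    if (x < array1_size) {
        return array2[array1[x] * 512];
    }
    return 0;
}
```

The point of the sketch is that the architectural result is always benign – the leak happens purely microarchitecturally, which is why no software-visible state changes and conventional testing never catches it.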

Sure, in the abstract, it would be nice to design leakproof processors – hey, it would be nice to make processors and hardware that don’t radiate EMI, which can also provide a leakage path.

I’m not sure that for the *vast majority* of HPC applications this is a problem – sure, in “the cloud” or in a heavily shared resource with very little control over who is sharing (e.g., Azure or AWS style server farming) you’ve got a potential problem.  The same goes for the desktop, where a seemingly innocuous program can theoretically read data from another.  Neither of those, though, is anything like what one would consider “secure computing”.

If the data YOU are processing is sufficiently valuable that it is worth it to try and exploit via a Spectre-type attack, maybe it’s worth it for you to have a dedicated computing resource?  Are HPC jobs sufficiently fine-grained and dispersible that you would have multiple users’ processes running on a single CPU in a 1000-node cluster?  Or would you say “User 1, you get nodes 1-500; User 2, you get nodes 501-1000”?  I find it hard to believe that we *must* distribute processes more finely than this (user 1 gets cores 1,2,3,4 on node 1 and cores 1,2 on node 2; user 2 gets cores 3,4 on node 2 and cores 1,2,3,4 on node 3).
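Coarse per-node partitioning like this is something most batch schedulers already support. A sketch of how it looks in Slurm (assuming a Slurm-managed cluster; `--exclusive` and the `OverSubscribe` partition parameter are standard Slurm options, the job script names are hypothetical):

```shell
# Request whole nodes so no other user's job shares our CPUs,
# which closes cross-user side channels like Spectre on those nodes.
sbatch --exclusive --nodes=500 user1_job.sh   # user 1: nodes are theirs alone
sbatch --exclusive --nodes=500 user2_job.sh   # user 2: a disjoint node set

# Or enforce it cluster-wide in slurm.conf, so the partition never
# co-schedules jobs from different users on the same node:
#   PartitionName=batch OverSubscribe=EXCLUSIVE
```

The cost of this policy is some lost utilization when jobs don’t fill whole nodes, which is the trade-off the paragraph above is weighing.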

And really, these Spectre-type attacks are sort of a theoretical vulnerability – there’s a long way from having iron ore and reading about its processing to having machine tools and swords in your hands.  Sure, there are adversaries with sufficient resources to figure it out (maybe it’s cheaper to steal secrets than to do the work yourself, if the original secrets cost billions), and it’s of great theoretical interest to figure out how to make processors that are immune to this.  But really, is this a significant threat?  Or, even more “conspiracy theory”: an adversary creates a potential threat which is *very* expensive to counter, so you throw all your resources at it and starve the mitigation of the other threats.

For all those millions of dollars you’d invest in “industrializing” an attack like Spectre, you could go out and bribe employees and probably achieve the same end result.

James Lux
Task Manager, DARPA High Frequency Research (DHFR) Space Testbed
Jet Propulsion Laboratory  (Mail Stop 161-213)
4800 Oak Grove Drive
Pasadena CA 91109
(818)354-2075 (office)
(818)395-2714 (cell)

From: Beowulf [mailto:beowulf-bounces at beowulf.org] On Behalf Of Scott Atchley
Sent: Tuesday, July 17, 2018 6:38 AM
To: John Hearns <hearnsj at googlemail.com>
Cc: Beowulf Mailing List <beowulf at beowulf.org>
Subject: Re: [Beowulf] New Spectre attacks - no software mitigation - what impact for HPC?

I saw that article as well. It seems they are targeting RISC-V for building an accelerator. One could argue that you do not need speculation within a GPU-like accelerator, but then you have to get your performance from very wide execution units with lots of memory requests in flight, as a GPU does today.

On Tue, Jul 17, 2018 at 8:19 AM, John Hearns via Beowulf <beowulf at beowulf.org> wrote:
This article is well worth a read, on European Exascale projects

https://www.theregister.co.uk/2018/07/17/europes_exascale_supercomputer_chips/

The automotive market seems to have got mixed in there also!
The main thrust is dual: ARM-based and RISC-V.

Also I like the plexiglass air shroud pictured at Barcelona. I saw something similar at the HPE centre in Grenoble.
Damn good idea.

On 17 July 2018 at 13:07, Scott Atchley <e.scott.atchley at gmail.com> wrote:
Hi Chris,

They say that no announced silicon is vulnerable. Your link makes it clear that no ISA is immune if the implementation performs speculative execution. I think your point about two lines of production may make sense. Vendors will have to assess vulnerabilities and the performance trade-off.

Personally, I do not see a large HPC system being built out of non-speculative hardware. You would need much more hardware to reach the same level of performance, and the lower performance per Watt could exceed the facility's power budget.
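The power argument can be made concrete with a back-of-envelope calculation – all of the numbers below are hypothetical assumptions chosen for illustration, not measurements of any real chip:

```python
# Hypothetical numbers, for illustration only.
spec_perf = 1.0        # per-node performance with speculation (normalized)
spec_power = 400.0     # watts per node with speculation (assumed)

nonspec_perf = 0.6     # assume ~40% performance loss without speculation
nonspec_power = 350.0  # assume a modest per-node power saving (assumed)

target = 1000 * spec_perf             # total performance of a 1000-node machine
nodes_needed = target / nonspec_perf  # nodes to match it without speculation

facility_spec = 1000 * spec_power / 1e6            # facility MW, speculative
facility_nonspec = nodes_needed * nonspec_power / 1e6

print(f"nodes needed: {nodes_needed:.0f}")   # ~1667 nodes instead of 1000
print(f"power: {facility_spec:.2f} MW vs {facility_nonspec:.2f} MW")
```

Under these assumed numbers the non-speculative machine needs roughly two-thirds more nodes and draws more total power despite each node being cheaper on power – which is the performance-per-Watt point above.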

Scott

On Tue, Jul 17, 2018 at 2:33 AM, Chris Samuel <chris at csamuel.org> wrote:
On Tuesday, 17 July 2018 11:08:42 AM AEST Chris Samuel wrote:

> Currently these new vulnerabilities are demonstrated on Intel & ARM, it will
> be interesting to see if AMD is also vulnerable (I would guess so).

Interestingly RISC-V claims immunity, and that looks like it'll be one of the
two CPU architectures blessed by the Europeans in their Exascale project
(along with ARM).

https://riscv.org/2018/01/more-secure-world-risc-v-isa/

All the best,
Chris
--
 Chris Samuel  :  http://www.csamuel.org/  :  Melbourne, VIC

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


