Spectre mitigation on heap access overflow checks.

This PR adds a conditional move following a heap bounds check, through
which the address to be accessed flows. This conditional move ensures
that even if the branch is mispredicted (the access is actually out of
bounds, but speculation goes down the in-bounds path), the actually
accessed address is zero (a NULL pointer) rather than the out-of-bounds
address.
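
As a rough illustration of the idea, here is a branchless sketch in
plain Rust (the helper and its names are hypothetical, not Cranelift
code; the actual change emits a conditional move such as aarch64 csel
or x86 cmov at the machine level, as the test expectations below show):

    /// Hypothetical illustration, not Cranelift code: compute the
    /// address for a heap access at `offset` into a heap whose
    /// accessible size is `bound`, with the Spectre guard applied.
    fn guarded_heap_addr(base: u64, offset: u32, bound: u32) -> u64 {
        let addr = base.wrapping_add(offset as u64);
        // All-ones mask when in bounds, all-zeros when out of bounds.
        // Computed without a branch, so nothing here can be mispredicted.
        let mask = ((offset <= bound) as u64).wrapping_neg();
        // Even if the bounds-check branch guarding the real access is
        // mispredicted, the address dereferenced under speculation is 0
        // rather than the attacker-chosen out-of-bounds address.
        addr & mask
    }

The NULL fallback is what makes this safe: the speculatively accessed
address no longer depends on the attacker-controlled offset, which is
exactly the dependence a Spectre gadget needs.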

The mitigation is controlled by a flag that is off by default but can
be set by the embedding. Note that in order to turn it on by default,
we would need to add conditional-move support to the current x86
backend, which does not appear to be present. Once the deprecated
backend is removed in favor of the new backend, IMHO we should turn
this flag on by default.
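
For embeddings that drive Cranelift directly, enabling the flag would
look roughly like this (a sketch assuming the usual cranelift-codegen
settings API; the flag name matches the "set" lines in the tests below):

    use cranelift_codegen::settings::{self, Configurable};

    fn flags_with_spectre_guard() -> settings::Flags {
        let mut builder = settings::builder();
        // Off by default; the embedding opts in explicitly.
        builder
            .set("enable_heap_access_spectre_mitigation", "true")
            .expect("flag should exist");
        settings::Flags::new(builder)
    }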

Note that the mitigation is unnecessary when we use the "huge heap"
technique on 64-bit systems, in which we allocate a range of virtual
address space large enough that no 32-bit offset from the heap base
can reach other data. Hence, this only affects small-heap
configurations.
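
To make the huge-heap argument concrete, the reservation arithmetic
looks roughly like this (constants are illustrative, not Wasmtime's
actual defaults):

    // A 32-bit wasm offset can name at most 4 GiB past the heap base.
    // If the embedding reserves that whole range (plus guard space for
    // the access width) and leaves everything past the accessible
    // length unmapped, then any address the module can form,
    // speculatively or not, stays inside memory owned by this heap,
    // and the conditional move adds nothing.
    const MAX_WASM32_OFFSET: u64 = u32::MAX as u64; // 0xFFFF_FFFF
    const ACCESS_WIDTH_GUARD: u64 = 64 * 1024;      // illustrative only
    const RESERVATION: u64 = MAX_WASM32_OFFSET + 1 + ACCESS_WIDTH_GUARD;
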
commit e694fb1312 (parent ae634417a0)
Author: Chris Fallin
Date:   2020-06-29 14:04:26 -07:00
9 changed files with 148 additions and 47 deletions

@@ -1,4 +1,5 @@
 test legalizer
+set enable_heap_access_spectre_mitigation=false
 target x86_64
 ; Test legalization for various forms of heap addresses.

@@ -1,5 +1,6 @@
 ; Test the legalization of memory objects.
 test legalizer
+set enable_heap_access_spectre_mitigation=false
 target x86_64
 ; regex: V=v\d+

@@ -1,4 +1,5 @@
 test compile
+set enable_heap_access_spectre_mitigation=true
 target aarch64
 function %dynamic_heap_check(i64 vmctx, i32) -> i64 {
@@ -11,20 +12,23 @@ block0(v0: i64, v1: i32):
 return v2
 }
-; check: stp fp, lr, [sp, #-16]!
-; nextln: mov fp, sp
-; nextln: ldur w2, [x0]
-; nextln: add w2, w2, #0
-; nextln: subs wzr, w1, w2
-; nextln: b.ls label1 ; b label2
-; nextln: Block 1:
-; check: add x0, x0, x1, UXTW
-; nextln: mov sp, fp
-; nextln: ldp fp, lr, [sp], #16
-; nextln: ret
-; nextln: Block 2:
-; check: udf
+; check: Block 0:
+; check: stp fp, lr, [sp, #-16]!
+; nextln: mov fp, sp
+; nextln: ldur w2, [x0]
+; nextln: add w2, w2, #0
+; nextln: subs wzr, w1, w2
+; nextln: b.ls label1 ; b label2
+; check: Block 1:
+; check: add x0, x0, x1, UXTW
+; nextln: subs wzr, w1, w2
+; nextln: movz x1, #0
+; nextln: csel x0, x1, x0, hi
+; nextln: mov sp, fp
+; nextln: ldp fp, lr, [sp], #16
+; nextln: ret
+; check: Block 2:
+; check: udf
 function %static_heap_check(i64 vmctx, i32) -> i64 {
 gv0 = vmctx
@@ -35,15 +39,18 @@ block0(v0: i64, v1: i32):
 return v2
 }
-; check: stp fp, lr, [sp, #-16]!
-; nextln: mov fp, sp
-; nextln: subs wzr, w1, #65536
-; nextln: b.ls label1 ; b label2
-; nextln: Block 1:
-; check: add x0, x0, x1, UXTW
-; nextln: mov sp, fp
-; nextln: ldp fp, lr, [sp], #16
-; nextln: ret
-; nextln: Block 2:
-; check: udf
+; check: Block 0:
+; check: stp fp, lr, [sp, #-16]!
+; nextln: mov fp, sp
+; nextln: subs wzr, w1, #65536
+; nextln: b.ls label1 ; b label2
+; check: Block 1:
+; check: add x0, x0, x1, UXTW
+; nextln: subs wzr, w1, #65536
+; nextln: movz x1, #0
+; nextln: csel x0, x1, x0, hi
+; nextln: mov sp, fp
+; nextln: ldp fp, lr, [sp], #16
+; nextln: ret
+; check: Block 2:
+; check: udf
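
For reference, in the updated aarch64 expectations the added
"subs wzr, w1, w2" (or "subs wzr, w1, #65536" in the static case)
re-issues the bounds comparison, "movz x1, #0" materializes the NULL
fallback, and "csel x0, x1, x0, hi" keeps the computed address only
when the offset does not exceed the bound: exactly the conditional
move described above.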