Implement Spectre mitigations for table accesses and br_tables. (#4092)
Currently, we have partial Spectre mitigation: we protect heap accesses with dynamic bounds checks. Specifically, we guard against errant accesses on the misspeculated path beyond the bounds-check conditional branch by adding a conditional move that is also dependent on the bounds-check condition. This data dependency on the condition is not speculated and thus will always pick the "safe" value (in the heap case, a NULL address) on the misspeculated path, until the pipeline flushes and recovers onto the correct path.

This PR uses the same technique both for table accesses -- used to implement Wasm tables -- and for jumptables, used to implement Wasm `br_table` instructions.

In the case of Wasm tables, the cmove picks the table base address on the misspeculated path. This is equivalent to reading the first table entry. This prevents loads of arbitrary data addresses on the misspeculated path.

In the case of `br_table`, the cmove picks index 0 on the misspeculated path. This is safer than allowing a branch to an address loaded from an index under misspeculation (i.e., it preserves control-flow integrity even under misspeculation).

The table mitigation is controlled by a Cranelift setting, on by default. The br_table mitigation is always on, because it is part of the single lowering pseudoinstruction.

In both cases, the impact should be minimal: a single extra cmove in a (relatively) rarely-used operation.

The table mitigation is architecture-independent (happens during legalization); the br_table mitigation has been implemented for both x64 and aarch64. (I don't know enough about s390x to implement this confidently there, but would happily review a PR to do the same on that platform.)
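As a rough illustration of the pattern (a minimal sketch only, not Cranelift's API or its emitted code; the function name and the masking trick standing in for the hardware cmove are hypothetical), the guarded index behaves like a branchless select whose result is data-dependent on the bounds check:

    /// Minimal sketch of the guard: the value used after the bounds check is
    /// selected *branchlessly*, so it depends on the comparison as data rather
    /// than via a (speculatable) branch. On a misspeculated path the selected
    /// value is the "safe" one: index 0 for br_table, the table base (i.e. the
    /// first entry) for table accesses, NULL for heaps.
    fn spectre_guarded_index(idx: u64, len: u64) -> u64 {
        // all-ones mask when in bounds, all-zeros when out of bounds
        let mask = ((idx < len) as u64).wrapping_neg();
        // out-of-bounds (or misspeculated) uses collapse to index 0
        idx & mask
    }

    fn main() {
        assert_eq!(spectre_guarded_index(3, 8), 3);
        assert_eq!(spectre_guarded_index(9, 8), 0);
    }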
@@ -3411,16 +3411,6 @@ impl LowerBackend for X64Backend {
                     ext_spec,
                 );
 
-                // Bounds-check (compute flags from idx - jt_size) and branch to default.
-                // We only support u32::MAX entries, but we compare the full 64 bit register
-                // when doing the bounds check.
-                let cmp_size = if ty == types::I64 {
-                    OperandSize::Size64
-                } else {
-                    OperandSize::Size32
-                };
-                ctx.emit(Inst::cmp_rmi_r(cmp_size, RegMemImm::imm(jt_size), idx));
-
                 // Emit the compound instruction that does:
                 //
                 // lea $jt, %rA
@@ -3443,6 +3433,23 @@ impl LowerBackend for X64Backend {
                 // Cranelift type is thus unused).
                 let tmp2 = ctx.alloc_tmp(types::I64).only_reg().unwrap();
 
+                // Put a zero in tmp1. This is needed for Spectre
+                // mitigations (a CMOV that zeroes the index on
+                // misspeculation).
+                let inst = Inst::imm(OperandSize::Size64, 0, tmp1);
+                ctx.emit(inst);
+
+                // Bounds-check (compute flags from idx - jt_size)
+                // and branch to default. We only support
+                // u32::MAX entries, but we compare the full 64
+                // bit register when doing the bounds check.
+                let cmp_size = if ty == types::I64 {
+                    OperandSize::Size64
+                } else {
+                    OperandSize::Size32
+                };
+                ctx.emit(Inst::cmp_rmi_r(cmp_size, RegMemImm::imm(jt_size), idx));
+
                 let targets_for_term: Vec<MachLabel> = targets.to_vec();
                 let default_target = targets[0];