Implement inline stack probes for AArch64 (#5353)
* Turn off probestack by default in Cranelift

  The probestack feature is not implemented for the aarch64 and s390x backends, and its on-by-default status currently requires those backends to carry stub implementations. Turning probestack off by default allows the s390x and aarch64 backends to panic with a clear error message rather than provide a false sense of security. Once probestack is implemented for all backends, it may be reasonable to re-enable it by default.

* aarch64: Improve codegen for AMode fallback

  Currently the final fallback for finalizing an `AMode` generates both a constant-loading instruction and an `add` of the base register into the same temporary. This commit improves the codegen by removing the `add` instruction and folding the final add into the finalized `AMode`. This changes the `extendop` used, but both registers are 64-bit, so the result is unaffected by the extending operation.

* aarch64: Implement inline stack probes

  This commit implements inline stack probes for the aarch64 backend in Cranelift. The support is modeled after the x64 backend, where unrolled probes are used up to a particular threshold, after which a loop is generated. The instructions are similar in spirit to x64, except that unlike x64 the stack pointer isn't modified during the unrolled sequence, which avoids needing to re-adjust it back up at the end.

* Enable inline probestack for AArch64 and Riscv64

  This commit enables inline probestacks for the AArch64 and Riscv64 architectures in the same manner that x86_64 has them enabled now. Some more testing was also added, since on Unix platforms we should now be guaranteed that Rust's stack overflow message is printed.

* Enable probestack for aarch64 in cranelift-fuzzgen

* Address review comments

* Remove implicit stack overflow traps from x64 backend

  This commit removes the implicit `StackOverflow` traps inserted by the x64 backend for stack-based operations. These were historically required when stack overflow was detected with page faults, but Wasmtime no longer relies on that, since it's not suitable for wasm modules which call host functions. Additionally, no other backend implements this form of implicit trap-code addition, so this change synchronizes the behavior of all the backends. It also fixes a previously added aarch64 test to properly abort the process instead of being accidentally caught by Wasmtime.

* Fix a style issue
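As a rough illustration of the "unrolled up to a threshold, then a loop" strategy described above, here is a hedged host-side sketch. The threshold constant, the enum, and the function are all hypothetical names invented for this example; they are not Cranelift's actual API, and the real backend's accounting differs in detail.

```rust
// Hypothetical sketch of the strategy choice; PROBE_MAX_UNROLL is an
// assumed threshold, not Cranelift's real value.
const PROBE_MAX_UNROLL: u32 = 3;

#[derive(Debug)]
enum ProbeStrategy {
    // Emit `probes` individual stores, one per guard-page-sized chunk.
    Unrolled { probes: u32 },
    // Emit a counted probe loop instead of straight-line stores.
    Loop,
}

fn choose_probe_strategy(frame_size: u32, guard_size: u32) -> Option<ProbeStrategy> {
    if frame_size <= guard_size {
        // The frame fits within the guard region; no probes are needed.
        return None;
    }
    let probes = frame_size / guard_size;
    if probes <= PROBE_MAX_UNROLL {
        Some(ProbeStrategy::Unrolled { probes })
    } else {
        Some(ProbeStrategy::Loop)
    }
}
```

A small frame emits nothing, a moderate one emits a handful of inline stores, and only large frames pay for the loop.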
@@ -63,21 +63,14 @@ pub fn mem_finalize(
         (smallvec![], mem)
     } else {
         let tmp = writable_spilltmp_reg();
-        let mut const_insts = Inst::load_constant(tmp, off as u64);
-        // N.B.: we must use AluRRRExtend because AluRRR uses the "shifted register" form
-        // (AluRRRShift) instead, which interprets register 31 as the zero reg, not SP. SP
-        // is a valid base (for SPOffset) which we must handle here.
-        // Also, SP needs to be the first arg, not second.
-        let add_inst = Inst::AluRRRExtend {
-            alu_op: ALUOp::Add,
-            size: OperandSize::Size64,
-            rd: tmp,
-            rn: basereg,
-            rm: tmp.to_reg(),
-            extendop: ExtendOp::UXTX,
-        };
-        const_insts.push(add_inst);
-        (const_insts, AMode::reg(tmp.to_reg()))
+        (
+            Inst::load_constant(tmp, off as u64),
+            AMode::RegExtended {
+                rn: basereg,
+                rm: tmp.to_reg(),
+                extendop: ExtendOp::SXTX,
+            },
+        )
     }
 }
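The hunk above swaps `UXTX` for `SXTX` while folding the `add` into the address mode; per the commit message, this is harmless because both registers are full 64-bit values. A hedged sketch (function names are invented for illustration) of why both extend ops compute the same address on a 64-bit index:

```rust
// Zero-extending a 64-bit value to 64 bits is the identity.
fn extend_uxtx(v: u64) -> u64 {
    v
}

// Sign-extending a 64-bit value to 64 bits is also the identity.
fn extend_sxtx(v: u64) -> u64 {
    v as i64 as u64
}

// Old lowering: tmp = base + UXTX(off) via a separate `add`, then
// the AMode addresses [tmp].
fn old_amode_addr(base: u64, off: u64) -> u64 {
    base.wrapping_add(extend_uxtx(off))
}

// New lowering: AMode::RegExtended addresses [base + SXTX(off)]
// directly, with no intermediate `add` instruction.
fn new_amode_addr(base: u64, off: u64) -> u64 {
    base.wrapping_add(extend_sxtx(off))
}
```

The extend ops only differ when the index register is narrower than 64 bits, which is not the case here.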
@@ -3424,6 +3417,72 @@ impl MachInstEmit for Inst {
             }

             &Inst::DummyUse { .. } => {}
+
+            &Inst::StackProbeLoop { start, end, step } => {
+                assert!(emit_info.0.enable_probestack());
+                let start = allocs.next_writable(start);
+                let end = allocs.next(end);
+
+                // The loop generated here uses `start` as a counter register to
+                // count backwards until negating it exceeds `end`. In other
+                // words `start` is an offset from `sp` we're testing where
+                // `end` is the max size we need to test. The loop looks like:
+                //
+                // loop_start:
+                //     sub start, start, #step
+                //     stur xzr, [sp, start]
+                //     cmn start, end
+                //     br.gt loop_start
+                // loop_end:
+                //
+                // Note that this loop cannot use the spilltmp and tmp2
+                // registers as those are currently used as the input to this
+                // loop when generating the instruction. This means that some
+                // more flavorful address modes and lowerings need to be
+                // avoided.
+                //
+                // Perhaps someone more clever than I can figure out how to use
+                // `subs` or the like and skip the `cmn`, but I can't figure it
+                // out at this time.
+
+                let loop_start = sink.get_label();
+                sink.bind_label(loop_start);
+
+                Inst::AluRRImm12 {
+                    alu_op: ALUOp::Sub,
+                    size: OperandSize::Size64,
+                    rd: start,
+                    rn: start.to_reg(),
+                    imm12: step,
+                }
+                .emit(&[], sink, emit_info, state);
+                Inst::Store32 {
+                    rd: regs::zero_reg(),
+                    mem: AMode::RegReg {
+                        rn: regs::stack_reg(),
+                        rm: start.to_reg(),
+                    },
+                    flags: MemFlags::trusted(),
+                }
+                .emit(&[], sink, emit_info, state);
+                Inst::AluRRR {
+                    alu_op: ALUOp::AddS,
+                    size: OperandSize::Size64,
+                    rd: regs::writable_zero_reg(),
+                    rn: start.to_reg(),
+                    rm: end,
+                }
+                .emit(&[], sink, emit_info, state);
+
+                let loop_end = sink.get_label();
+                Inst::CondBr {
+                    taken: BranchTarget::Label(loop_start),
+                    not_taken: BranchTarget::Label(loop_end),
+                    kind: CondBrKind::Cond(Cond::Gt),
+                }
+                .emit(&[], sink, emit_info, state);
+                sink.bind_label(loop_end);
+            }
         }

         let end_off = sink.cur_offset();
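To check the loop's bounds logic, here is a hedged host-side model of the emitted sequence, assuming `start` enters the loop holding 0 (an assumption about the instruction's input, not confirmed by this diff). It records which `sp`-relative offsets the `stur xzr, [sp, start]` would touch; `cmn start, end` plus `b.gt` keeps looping while `start + end > 0`.

```rust
// Host-side simulation (not emitted code) of the probe loop above,
// assuming `start` is initialized to 0 by the caller.
fn probe_offsets(end: i64, step: i64) -> Vec<i64> {
    let mut start: i64 = 0;
    let mut touched = Vec::new();
    loop {
        start -= step;        // sub start, start, #step
        touched.push(start);  // stur xzr, [sp, start]
        if start + end <= 0 { // cmn start, end; b.gt loop_start
            break;
        }
    }
    touched
}
```

For a 12 KiB frame with 4 KiB pages this touches offsets -4096, -8192, and -12288, i.e. one store per page with no stack-pointer adjustment, matching the unrolled-sequence design described in the commit message.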