Implement interrupting wasm code, reimplement stack overflow (#1490)
* Implement interrupting wasm code, reimplement stack overflow

This commit is a relatively large change for wasmtime with two main goals:

* Primarily this enables interrupting executing wasm code with a trap,
  preventing infinite loops in wasm code. Note that resumption of the wasm
  code is not a goal of this commit.

* Additionally this commit reimplements how we handle stack overflow to
  ensure that host functions always have a reasonable amount of stack to run
  on. This fixes an issue where we might longjmp out of a host function,
  skipping destructors.

Lots of various odds and ends ended up falling out of this commit once the
two goals above were implemented. The strategy for implementing this was also
lifted from Spidermonkey and existing functionality inside of Cranelift. I've
tried to write up thorough documentation of how this all works in
`crates/environ/src/cranelift.rs`, where the gnarly-ish bits are.

A brief summary of how this works: each function and each loop header now
checks to see if it has been interrupted. Interrupts and the stack overflow
check are actually folded into one now, where function headers check to see
if they've run out of stack, and the sentinel value used to indicate an
interrupt, checked in loop headers, tricks functions into thinking they're
out of stack. An interrupt is basically just writing a value to a location
which is read by JIT code.

When interrupts are delivered and what triggers them has been left up to
embedders of the `wasmtime` crate. The `wasmtime::Store` type has a method to
acquire an `InterruptHandle`, where `InterruptHandle` is a `Send` and `Sync`
type which can travel to other threads (or perhaps even a signal handler) to
get notified from. It's intended that this provides a good degree of
flexibility when interrupting wasm code. Note, though, that this does have a
large caveat: interrupts don't work while host code is running, so if you've
got a host import blocking for a long time an interrupt won't actually be
received until the wasm starts running again.

Some fallout included from this change is:

* Unix signal handlers are no longer registered with `SA_ONSTACK`. Instead
  they run on the native stack the thread was already using. This is possible
  since stack overflow isn't handled by hitting the guard page, but rather
  it's explicitly checked for in wasm now. Native stack overflow will
  continue to abort the process as usual.

* Unix sigaltstack management is no longer necessary since we don't use it
  any more.

* Windows no longer has any need to reset guard pages since we no longer try
  to recover from faults on guard pages.

* On all targets probestack intrinsics are disabled since we use a different
  mechanism for catching stack overflow.

* The C API has been updated with interrupt handles. An example has also been
  added which shows off how to interrupt a module.

Closes #139
Closes #860
Closes #900

* Update comment about magical interrupt value
* Store stack limit as a global value, not a closure
* Run rustfmt
* Handle review comments
* Add a comment about SA_ONSTACK
* Use `usize` for type of `INTERRUPTED`
* Parse human-readable durations
* Bring back sigaltstack handling

  Allows libstd to print out stack overflow on failure still.

* Add parsing and emission of stack limit-via-preamble
* Fix new example for new apis
* Fix host segfault test in release mode
* Fix new doc example
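For orientation, the embedder-facing flow this adds looks roughly like the
sketch below. It condenses the doc example added to `Store::interrupt_handle`
in this change and is illustrative only:

    use wasmtime::*;

    fn run_with_interrupt() -> anyhow::Result<()> {
        // Interrupts must be enabled in the `Config` before a handle can be created.
        let engine = Engine::new(Config::new().interruptable(true));
        let store = Store::new(&engine);
        let handle = store.interrupt_handle()?;

        // An infinitely looping module to interrupt.
        let module = Module::new(&store, r#"(func (export "run") (loop br 0))"#)?;
        let instance = Instance::new(&module, &[])?;
        let run = instance.get_func("run").unwrap().get0::<()>()?;

        // `InterruptHandle` is `Send + Sync`, so another thread can request the interrupt.
        std::thread::spawn(move || {
            std::thread::sleep(std::time::Duration::from_secs(1));
            handle.interrupt();
        });

        // The loop exits with a "wasm trap: interrupt" trap.
        let trap = run().unwrap_err();
        assert!(trap.message().contains("wasm trap: interrupt"));
        Ok(())
    }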
Cargo.lock (generated)
@@ -2093,6 +2093,7 @@ dependencies = [
|
||||
"faerie",
|
||||
"file-per-thread-logger",
|
||||
"filecheck",
|
||||
"humantime",
|
||||
"libc",
|
||||
"more-asserts",
|
||||
"pretty_env_logger",
|
||||
|
||||
@@ -39,6 +39,7 @@ file-per-thread-logger = "0.1.1"
|
||||
wat = "1.0.10"
|
||||
libc = "0.2.60"
|
||||
rayon = "1.2.1"
|
||||
humantime = "1.3.0"
|
||||
|
||||
[dev-dependencies]
|
||||
filecheck = "0.5.0"
|
||||
|
||||
@@ -392,6 +392,8 @@ pub enum AnyEntity {
|
||||
Heap(Heap),
|
||||
/// A table.
|
||||
Table(Table),
|
||||
/// A function's stack limit
|
||||
StackLimit,
|
||||
}
|
||||
|
||||
impl fmt::Display for AnyEntity {
|
||||
@@ -409,6 +411,7 @@ impl fmt::Display for AnyEntity {
|
||||
Self::SigRef(r) => r.fmt(f),
|
||||
Self::Heap(r) => r.fmt(f),
|
||||
Self::Table(r) => r.fmt(f),
|
||||
Self::StackLimit => write!(f, "stack_limit"),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -95,6 +95,13 @@ pub struct Function {
|
||||
///
|
||||
/// This is used for some ABIs to generate unwind information.
|
||||
pub epilogues_start: Vec<Inst>,
|
||||
|
||||
/// An optional global value which represents an expression evaluating to
|
||||
/// the stack limit for this function. This `GlobalValue` will be
|
||||
/// interpreted in the prologue, if necessary, to insert a stack check to
|
||||
/// ensure that a trap happens if the stack pointer goes below the
|
||||
/// threshold specified here.
|
||||
pub stack_limit: Option<ir::GlobalValue>,
|
||||
}
|
||||
|
||||
impl Function {
|
||||
@@ -119,6 +126,7 @@ impl Function {
|
||||
srclocs: SecondaryMap::new(),
|
||||
prologue_end: None,
|
||||
epilogues_start: Vec::new(),
|
||||
stack_limit: None,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -140,6 +148,7 @@ impl Function {
|
||||
self.srclocs.clear();
|
||||
self.prologue_end = None;
|
||||
self.epilogues_start.clear();
|
||||
self.stack_limit = None;
|
||||
}
|
||||
|
||||
/// Create a new empty, anonymous function with a Fast calling convention.
|
||||
|
||||
@@ -685,21 +685,32 @@ fn insert_common_prologue(
|
||||
fpr_slot: Option<&StackSlot>,
|
||||
isa: &dyn TargetIsa,
|
||||
) {
|
||||
if stack_size > 0 {
|
||||
// Check if there is a special stack limit parameter. If so insert stack check.
|
||||
if let Some(stack_limit_arg) = pos.func.special_param(ArgumentPurpose::StackLimit) {
|
||||
// Total stack size is the size of all stack area used by the function, including
|
||||
// pushed CSRs, frame pointer.
|
||||
// Also, the size of a return address, implicitly pushed by a x86 `call` instruction,
|
||||
// also should be accounted for.
|
||||
// If any FPR are present, count them as well as necessary alignment space.
|
||||
// TODO: Check if the function body actually contains a `call` instruction.
|
||||
let mut total_stack_size =
|
||||
(csrs.iter(GPR).len() + 1 + 1) as i64 * (isa.pointer_bytes() as isize) as i64;
|
||||
|
||||
total_stack_size += csrs.iter(FPR).len() as i64 * types::F64X2.bytes() as i64;
|
||||
|
||||
insert_stack_check(pos, total_stack_size, stack_limit_arg);
|
||||
// If this is a leaf function with zero stack, then there's no need to
|
||||
// insert a stack check since it can't overflow anything and
|
||||
// forward-progress is guaranteed so long as loops are handled anyway.
|
||||
//
|
||||
// If this has a stack size it could stack overflow, or if it isn't a leaf
|
||||
// it could be part of a long call chain which we need to check anyway.
|
||||
//
|
||||
// First we look for the stack limit as a special argument to the function,
|
||||
// and failing that we see if a custom stack limit factory has been provided
|
||||
// which will be used to likely calculate the stack limit from the arguments
|
||||
// or perhaps constants.
|
||||
if stack_size > 0 || !pos.func.is_leaf() {
|
||||
let scratch = ir::ValueLoc::Reg(RU::rax as RegUnit);
|
||||
let stack_limit_arg = match pos.func.special_param(ArgumentPurpose::StackLimit) {
|
||||
Some(arg) => {
|
||||
let copy = pos.ins().copy(arg);
|
||||
pos.func.locations[copy] = scratch;
|
||||
Some(copy)
|
||||
}
|
||||
None => pos
|
||||
.func
|
||||
.stack_limit
|
||||
.map(|gv| interpret_gv(pos, gv, scratch)),
|
||||
};
|
||||
if let Some(stack_limit_arg) = stack_limit_arg {
|
||||
insert_stack_check(pos, stack_size, stack_limit_arg);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -811,16 +822,76 @@ fn insert_common_prologue(
|
||||
);
|
||||
}
|
||||
|
||||
/// Inserts code necessary to calculate `gv`.
|
||||
///
|
||||
/// Note that this is typically done with `ins().global_value(...)` but that
|
||||
/// requires legalization to run to encode it, and we're running super late
|
||||
/// here in the backend where legalization isn't possible. To get around this
|
||||
/// we manually interpret the `gv` specified and do register allocation for
|
||||
/// intermediate values.
|
||||
///
|
||||
/// This is an incomplete implementation of loading `GlobalValue` values to get
|
||||
/// compared to the stack pointer, but currently it serves enough functionality
|
||||
/// to get this implemented in `wasmtime` itself. This'll likely get expanded a
|
||||
/// bit over time!
|
||||
fn interpret_gv(pos: &mut EncCursor, gv: ir::GlobalValue, scratch: ir::ValueLoc) -> ir::Value {
|
||||
match pos.func.global_values[gv] {
|
||||
ir::GlobalValueData::VMContext => pos
|
||||
.func
|
||||
.special_param(ir::ArgumentPurpose::VMContext)
|
||||
.expect("no vmcontext parameter found"),
|
||||
ir::GlobalValueData::Load {
|
||||
base,
|
||||
offset,
|
||||
global_type,
|
||||
readonly: _,
|
||||
} => {
|
||||
let base = interpret_gv(pos, base, scratch);
|
||||
let ret = pos
|
||||
.ins()
|
||||
.load(global_type, ir::MemFlags::trusted(), base, offset);
|
||||
pos.func.locations[ret] = scratch;
|
||||
return ret;
|
||||
}
|
||||
ref other => panic!("global value for stack limit not supported: {}", other),
|
||||
}
|
||||
}
|
||||
|
||||
/// Insert a check that generates a trap if the stack pointer goes
|
||||
/// below a value in `stack_limit_arg`.
|
||||
fn insert_stack_check(pos: &mut EncCursor, stack_size: i64, stack_limit_arg: ir::Value) {
|
||||
use crate::ir::condcodes::IntCC;
|
||||
|
||||
// Our stack pointer, after subtracting `stack_size`, must not be below
|
||||
// `stack_limit_arg`. To do this we're going to add `stack_size` to
|
||||
// `stack_limit_arg` and see if the stack pointer is below that. The
|
||||
// `stack_size + stack_limit_arg` computation might overflow, however, due
|
||||
// to how stack limits may be loaded and set externally to trigger a trap.
|
||||
//
|
||||
// To handle this we'll need an extra comparison to see if the stack
|
||||
// pointer is already below `stack_limit_arg`. Most of the time this
|
||||
// isn't necessary though since the stack limit which triggers a trap is
|
||||
// likely a sentinel somewhere around `usize::max_value()`. In that case
|
||||
// only conditionally emit this pre-flight check. That way most functions
|
||||
// only have the one comparison, but are also guaranteed that if we add
|
||||
// `stack_size` to `stack_limit_arg` it won't overflow.
|
||||
//
|
||||
// This does mean that code generators which use this stack check
|
||||
// functionality need to ensure that values stored into the stack limit
|
||||
// will never overflow if this threshold is added.
|
||||
if stack_size >= 32 * 1024 {
|
||||
let cflags = pos.ins().ifcmp_sp(stack_limit_arg);
|
||||
pos.func.locations[cflags] = ir::ValueLoc::Reg(RU::rflags as RegUnit);
|
||||
pos.ins().trapif(
|
||||
IntCC::UnsignedGreaterThanOrEqual,
|
||||
cflags,
|
||||
ir::TrapCode::StackOverflow,
|
||||
);
|
||||
}
|
||||
|
||||
// Copy `stack_limit_arg` into a %rax and use it for calculating
|
||||
// a SP threshold.
|
||||
let stack_limit_copy = pos.ins().copy(stack_limit_arg);
|
||||
pos.func.locations[stack_limit_copy] = ir::ValueLoc::Reg(RU::rax as RegUnit);
|
||||
let sp_threshold = pos.ins().iadd_imm(stack_limit_copy, stack_size);
|
||||
let sp_threshold = pos.ins().iadd_imm(stack_limit_arg, stack_size);
|
||||
pos.func.locations[sp_threshold] = ir::ValueLoc::Reg(RU::rax as RegUnit);
|
||||
|
||||
// If the stack pointer currently reaches the SP threshold or below it then after opening
|
||||
|
||||
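Roughly, the prologue check emitted by `insert_stack_check` above behaves like
the following Rust sketch (illustrative only; the real thing is generated
CLIF/machine code, and `would_trap` with its parameters is a name invented for
this sketch):

    /// `sp` is the stack pointer on entry, `stack_limit` is the value loaded
    /// from the stack-limit argument or global value, and `stack_size` is this
    /// function's frame size.
    fn would_trap(sp: usize, stack_limit: usize, stack_size: usize) -> bool {
        // Pre-flight comparison, only emitted for frames of 32k or more: it
        // guards against `stack_limit + stack_size` overflowing when the limit
        // holds a sentinel near `usize::max_value()`.
        if stack_size >= 32 * 1024 && stack_limit >= sp {
            return true; // stk_ovf trap
        }
        // Common single-comparison path: trap if allocating this frame would
        // push the stack pointer below the limit.
        stack_limit.wrapping_add(stack_size) >= sp
    }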
@@ -107,6 +107,11 @@ pub trait FuncWriter {
|
||||
self.write_entity_definition(w, func, cref.into(), cval)?;
|
||||
}
|
||||
|
||||
if let Some(limit) = func.stack_limit {
|
||||
any = true;
|
||||
self.write_entity_definition(w, func, AnyEntity::StackLimit, &limit)?;
|
||||
}
|
||||
|
||||
Ok(any)
|
||||
}
|
||||
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
test compile
|
||||
set opt_level=speed_and_size
|
||||
set is_pic
|
||||
set enable_probestack=false
|
||||
target x86_64 haswell
|
||||
|
||||
; An empty function.
|
||||
@@ -244,7 +245,7 @@ block0(v0: i64):
|
||||
; nextln:
|
||||
; nextln: block0(v0: i64 [%rdi], v4: i64 [%rbp]):
|
||||
; nextln: v1 = copy v0
|
||||
; nextln: v2 = iadd_imm v1, 16
|
||||
; nextln: v2 = iadd_imm v1, 176
|
||||
; nextln: v3 = ifcmp_sp v2
|
||||
; nextln: trapif uge v3, stk_ovf
|
||||
; nextln: x86_push v4
|
||||
@@ -254,3 +255,60 @@ block0(v0: i64):
|
||||
; nextln: v5 = x86_pop.i64
|
||||
; nextln: return v5
|
||||
; nextln: }
|
||||
|
||||
function %big_stack_limit(i64 stack_limit) {
|
||||
ss0 = explicit_slot 40000
|
||||
block0(v0: i64):
|
||||
return
|
||||
}
|
||||
|
||||
; check: function %big_stack_limit(i64 stack_limit [%rdi], i64 fp [%rbp]) -> i64 fp [%rbp] fast {
|
||||
; nextln: ss0 = explicit_slot 40000, offset -40016
|
||||
; nextln: ss1 = incoming_arg 16, offset -16
|
||||
; nextln:
|
||||
; nextln: block0(v0: i64 [%rdi], v5: i64 [%rbp]):
|
||||
; nextln: v1 = copy v0
|
||||
; nextln: v2 = ifcmp_sp v1
|
||||
; nextln: trapif uge v2, stk_ovf
|
||||
; nextln: v3 = iadd_imm v1, 0x9c40
|
||||
; nextln: v4 = ifcmp_sp v3
|
||||
; nextln: trapif uge v4, stk_ovf
|
||||
; nextln: x86_push v5
|
||||
; nextln: copy_special %rsp -> %rbp
|
||||
; nextln: adjust_sp_down_imm 0x9c40
|
||||
; nextln: adjust_sp_up_imm 0x9c40
|
||||
; nextln: v6 = x86_pop.i64
|
||||
; nextln: return v6
|
||||
; nextln: }
|
||||
|
||||
function %limit_preamble(i64 vmctx) {
|
||||
gv0 = vmctx
|
||||
gv1 = load.i64 notrap aligned gv0
|
||||
gv2 = load.i64 notrap aligned gv1+4
|
||||
stack_limit = gv2
|
||||
ss0 = explicit_slot 20
|
||||
block0(v0: i64):
|
||||
return
|
||||
}
|
||||
|
||||
; check: function %limit_preamble(i64 vmctx [%rdi], i64 fp [%rbp]) -> i64 fp [%rbp] fast {
|
||||
; nextln: ss0 = explicit_slot 20, offset -36
|
||||
; nextln: ss1 = incoming_arg 16, offset -16
|
||||
; nextln: gv0 = vmctx
|
||||
; nextln: gv1 = load.i64 notrap aligned gv0
|
||||
; nextln: gv2 = load.i64 notrap aligned gv1+4
|
||||
; nextln: stack_limit = gv2
|
||||
; nextln:
|
||||
; nextln: block0(v0: i64 [%rdi], v5: i64 [%rbp]):
|
||||
; nextln: v1 = load.i64 notrap aligned v0
|
||||
; nextln: v2 = load.i64 notrap aligned v1+4
|
||||
; nextln: v3 = iadd_imm v2, 32
|
||||
; nextln: v4 = ifcmp_sp v3
|
||||
; nextln: trapif uge v4, stk_ovf
|
||||
; nextln: x86_push v5
|
||||
; nextln: copy_special %rsp -> %rbp
|
||||
; nextln: adjust_sp_down_imm 32
|
||||
; nextln: adjust_sp_up_imm 32
|
||||
; nextln: v6 = x86_pop.i64
|
||||
; nextln: return v6
|
||||
; nextln: }
|
||||
|
||||
@@ -358,6 +358,15 @@ impl<'a> Context<'a> {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
// Configure the stack limit of the current function.
|
||||
fn add_stack_limit(&mut self, limit: GlobalValue, loc: Location) -> ParseResult<()> {
|
||||
if self.function.stack_limit.is_some() {
|
||||
return err!(loc, "stack limit defined twice");
|
||||
}
|
||||
self.function.stack_limit = Some(limit);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
// Resolve a reference to a constant.
|
||||
fn check_constant(&self, c: Constant, loc: Location) -> ParseResult<()> {
|
||||
if !self.map.contains_constant(c) {
|
||||
@@ -598,6 +607,15 @@ impl<'a> Parser<'a> {
|
||||
err!(self.loc, "expected constant number: const«n»")
|
||||
}
|
||||
|
||||
// Match and consume a stack limit token
|
||||
fn match_stack_limit(&mut self) -> ParseResult<()> {
|
||||
if let Some(Token::Identifier("stack_limit")) = self.token() {
|
||||
self.consume();
|
||||
return Ok(());
|
||||
}
|
||||
err!(self.loc, "expected identifier: stack_limit")
|
||||
}
|
||||
|
||||
// Match and consume a block reference.
|
||||
fn match_block(&mut self, err_msg: &str) -> ParseResult<Block> {
|
||||
if let Some(Token::Block(block)) = self.token() {
|
||||
@@ -1455,6 +1473,7 @@ impl<'a> Parser<'a> {
|
||||
// * function-decl
|
||||
// * signature-decl
|
||||
// * jump-table-decl
|
||||
// * stack-limit-decl
|
||||
//
|
||||
// The parsed decls are added to `ctx` rather than returned.
|
||||
fn parse_preamble(&mut self, ctx: &mut Context) -> ParseResult<()> {
|
||||
@@ -1503,6 +1522,11 @@ impl<'a> Parser<'a> {
|
||||
self.parse_constant_decl()
|
||||
.and_then(|(c, v)| ctx.add_constant(c, v, self.loc))
|
||||
}
|
||||
Some(Token::Identifier("stack_limit")) => {
|
||||
self.start_gathering_comments();
|
||||
self.parse_stack_limit_decl()
|
||||
.and_then(|gv| ctx.add_stack_limit(gv, self.loc))
|
||||
}
|
||||
// More to come..
|
||||
_ => return Ok(()),
|
||||
}?;
|
||||
@@ -1907,6 +1931,28 @@ impl<'a> Parser<'a> {
|
||||
Ok((name, data))
|
||||
}
|
||||
|
||||
// Parse a stack limit decl
|
||||
//
|
||||
// stack-limit-decl ::= * StackLimit "=" GlobalValue(gv)
|
||||
fn parse_stack_limit_decl(&mut self) -> ParseResult<GlobalValue> {
|
||||
self.match_stack_limit()?;
|
||||
self.match_token(Token::Equal, "expected '=' in stack limit decl")?;
|
||||
let limit = match self.token() {
|
||||
Some(Token::GlobalValue(base_num)) => match GlobalValue::with_number(base_num) {
|
||||
Some(gv) => gv,
|
||||
None => return err!(self.loc, "invalid global value number for stack limit"),
|
||||
},
|
||||
_ => return err!(self.loc, "expected global value"),
|
||||
};
|
||||
self.consume();
|
||||
|
||||
// Collect any trailing comments.
|
||||
self.token();
|
||||
self.claim_gathered_comments(AnyEntity::StackLimit);
|
||||
|
||||
Ok(limit)
|
||||
}
|
||||
|
||||
// Parse a function body, add contents to `ctx`.
|
||||
//
|
||||
// function-body ::= * { extended-basic-block }
|
||||
|
||||
@@ -12,7 +12,8 @@ use std::fmt::{Display, Formatter, Result};
|
||||
|
||||
/// A run command appearing in a test file.
|
||||
///
|
||||
/// For parsing, see [Parser::parse_run_command].
|
||||
/// For parsing, see
|
||||
/// [Parser::parse_run_command](crate::parser::Parser::parse_run_command).
|
||||
#[derive(PartialEq, Debug)]
|
||||
pub enum RunCommand {
|
||||
/// Invoke a function and print its result.
|
||||
@@ -66,6 +67,8 @@ impl Display for Invocation {
|
||||
|
||||
/// Represent a data value. Where [Value] is an SSA reference, [DataValue] is the type + value
|
||||
/// that would be referred to by a [Value].
|
||||
///
|
||||
/// [Value]: cranelift_codegen::ir::Value
|
||||
#[allow(missing_docs)]
|
||||
#[derive(Clone, Debug, PartialEq)]
|
||||
pub enum DataValue {
|
||||
|
||||
@@ -176,6 +176,7 @@ macro_rules! getters {
|
||||
// of the closure. Pass the export in so that we can call it.
|
||||
let instance = self.instance.clone();
|
||||
let export = self.export.clone();
|
||||
let max_wasm_stack = self.store.engine().config().max_wasm_stack;
|
||||
|
||||
// ... and then once we've passed the typechecks we can hand out our
|
||||
// object since our `transmute` below should be safe!
|
||||
@@ -191,7 +192,7 @@ macro_rules! getters {
|
||||
>(export.address);
|
||||
let mut ret = None;
|
||||
$(let $args = $args.into_abi();)*
|
||||
wasmtime_runtime::catch_traps(export.vmctx, || {
|
||||
wasmtime_runtime::catch_traps(export.vmctx, max_wasm_stack, || {
|
||||
ret = Some(fnptr(export.vmctx, ptr::null_mut(), $($args,)*));
|
||||
}).map_err(Trap::from_jit)?;
|
||||
|
||||
@@ -558,14 +559,18 @@ impl Func {
|
||||
|
||||
// Call the trampoline.
|
||||
if let Err(error) = unsafe {
|
||||
wasmtime_runtime::catch_traps(self.export.vmctx, || {
|
||||
(self.trampoline)(
|
||||
self.export.vmctx,
|
||||
ptr::null_mut(),
|
||||
self.export.address,
|
||||
values_vec.as_mut_ptr(),
|
||||
)
|
||||
})
|
||||
wasmtime_runtime::catch_traps(
|
||||
self.export.vmctx,
|
||||
self.store.engine().config().max_wasm_stack,
|
||||
|| {
|
||||
(self.trampoline)(
|
||||
self.export.vmctx,
|
||||
ptr::null_mut(),
|
||||
self.export.address,
|
||||
values_vec.as_mut_ptr(),
|
||||
)
|
||||
},
|
||||
)
|
||||
} {
|
||||
return Err(Trap::from_jit(error).into());
|
||||
}
|
||||
|
||||
@@ -35,6 +35,7 @@ fn instantiate(
|
||||
&mut resolver,
|
||||
sig_registry,
|
||||
config.memory_creator.as_ref().map(|a| a as _),
|
||||
config.max_wasm_stack,
|
||||
host,
|
||||
)
|
||||
.map_err(|e| -> Error {
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
use crate::externals::MemoryCreator;
|
||||
use crate::trampoline::MemoryCreatorProxy;
|
||||
use anyhow::Result;
|
||||
use anyhow::{bail, Result};
|
||||
use std::cell::RefCell;
|
||||
use std::cmp::min;
|
||||
use std::fmt;
|
||||
@@ -9,11 +9,10 @@ use std::rc::Rc;
|
||||
use std::sync::Arc;
|
||||
use wasmparser::{OperatorValidatorConfig, ValidatingParserConfig};
|
||||
use wasmtime_environ::settings::{self, Configurable};
|
||||
use wasmtime_environ::CacheConfig;
|
||||
use wasmtime_environ::Tunables;
|
||||
use wasmtime_environ::{CacheConfig, Tunables};
|
||||
use wasmtime_jit::{native, CompilationStrategy, Compiler};
|
||||
use wasmtime_profiling::{JitDumpAgent, NullProfilerAgent, ProfilingAgent, VTuneAgent};
|
||||
use wasmtime_runtime::{debug_builtins, RuntimeMemoryCreator};
|
||||
use wasmtime_runtime::{debug_builtins, RuntimeMemoryCreator, VMInterrupts};
|
||||
|
||||
// Runtime Environment
|
||||
|
||||
@@ -33,6 +32,7 @@ pub struct Config {
|
||||
pub(crate) cache_config: CacheConfig,
|
||||
pub(crate) profiler: Arc<dyn ProfilingAgent>,
|
||||
pub(crate) memory_creator: Option<MemoryCreatorProxy>,
|
||||
pub(crate) max_wasm_stack: usize,
|
||||
}
|
||||
|
||||
impl Config {
|
||||
@@ -66,6 +66,11 @@ impl Config {
|
||||
.set("opt_level", "speed")
|
||||
.expect("should be valid flag");
|
||||
|
||||
// We don't use probestack as a stack limit mechanism
|
||||
flags
|
||||
.set("enable_probestack", "false")
|
||||
.expect("should be valid flag");
|
||||
|
||||
Config {
|
||||
tunables,
|
||||
validating_config: ValidatingParserConfig {
|
||||
@@ -82,6 +87,7 @@ impl Config {
|
||||
cache_config: CacheConfig::new_cache_disabled(),
|
||||
profiler: Arc::new(NullProfilerAgent),
|
||||
memory_creator: None,
|
||||
max_wasm_stack: 1 << 20,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -94,6 +100,37 @@ impl Config {
|
||||
self
|
||||
}
|
||||
|
||||
/// Configures whether functions and loops will be interruptable via the
|
||||
/// [`Store::interrupt_handle`] method.
|
||||
///
|
||||
/// For more information see the documentation on
|
||||
/// [`Store::interrupt_handle`].
|
||||
///
|
||||
/// By default this option is `false`.
|
||||
pub fn interruptable(&mut self, enable: bool) -> &mut Self {
|
||||
self.tunables.interruptable = enable;
|
||||
self
|
||||
}
|
||||
|
||||
/// Configures the maximum amount of native stack space available to
|
||||
/// executing WebAssembly code.
|
||||
///
|
||||
/// WebAssembly code currently executes on the native call stack for its own
|
||||
/// call frames. WebAssembly, however, also has well-defined semantics on
|
||||
/// stack overflow. This is intended to be a knob which can help configure
|
||||
/// how much native stack space a wasm module is allowed to consume. Note
|
||||
/// that the number here is not super-precise, but rather wasm will take at
|
||||
/// most "pretty close to this much" stack space.
|
||||
///
|
||||
/// If a wasm call (or series of nested wasm calls) takes more stack space
|
||||
/// than the `size` specified then a stack overflow trap will be raised.
|
||||
///
|
||||
/// By default this option is 1 MB.
|
||||
pub fn max_wasm_stack(&mut self, size: usize) -> &mut Self {
|
||||
self.max_wasm_stack = size;
|
||||
self
|
||||
}
|
||||
|
||||
/// Configures whether the WebAssembly threads proposal will be enabled for
|
||||
/// compilation.
|
||||
///
|
||||
@@ -552,6 +589,97 @@ impl Store {
|
||||
pub fn same(a: &Store, b: &Store) -> bool {
|
||||
Rc::ptr_eq(&a.inner, &b.inner)
|
||||
}
|
||||
|
||||
/// Creates an [`InterruptHandle`] which can be used to interrupt the
|
||||
/// execution of instances within this `Store`.
|
||||
///
|
||||
/// An [`InterruptHandle`] handle is a mechanism of ensuring that guest code
|
||||
/// doesn't execute for too long. For example it's used to prevent wasm
|
||||
/// programs from executing infinitely in infinite loops or recursive call
|
||||
/// chains.
|
||||
///
|
||||
/// The [`InterruptHandle`] type is sendable to other threads so you can
|
||||
/// interact with it even while the thread with this `Store` is executing
|
||||
/// wasm code.
|
||||
///
|
||||
/// There's one method on an interrupt handle:
|
||||
/// [`InterruptHandle::interrupt`]. This method is used to generate an
|
||||
/// interrupt and cause wasm code to exit "soon".
|
||||
///
|
||||
/// ## When are interrupts delivered?
|
||||
///
|
||||
/// The term "interrupt" here refers to one of two different behaviors that
|
||||
/// are interrupted in wasm:
|
||||
///
|
||||
/// * The head of every loop in wasm has a check to see if it's interrupted.
|
||||
/// * The prologue of every function has a check to see if it's interrupted.
|
||||
///
|
||||
/// This interrupt mechanism makes no attempt to signal interrupts to
|
||||
/// native code. For example if a host function is blocked, then sending
|
||||
/// an interrupt will not interrupt that operation.
|
||||
///
|
||||
/// Interrupts are consumed as soon as possible when wasm itself starts
|
||||
/// executing. This means that if you interrupt wasm code then it basically
|
||||
/// guarantees that the next time wasm is executing on the target thread it
|
||||
/// will return quickly (either normally if it were already in the process
|
||||
/// of returning or with a trap from the interrupt). Once an interrupt
|
||||
/// trap is generated then an interrupt is consumed, and further execution
|
||||
/// will not be interrupted (unless another interrupt is set).
|
||||
///
|
||||
/// When implementing interrupts you'll want to ensure that the delivery of
|
||||
/// interrupts into wasm code is also handled in your host imports and
|
||||
/// functionality. Host functions need to either execute for bounded amounts
|
||||
/// of time or you'll need to arrange for them to be interrupted as well.
|
||||
///
|
||||
/// ## Return Value
|
||||
///
|
||||
/// This function returns a `Result` since interrupts are not always
|
||||
/// enabled. Interrupts are enabled via the [`Config::interruptable`]
|
||||
/// method, and if this store's [`Config`] hasn't been configured to enable
|
||||
/// interrupts then an error is returned.
|
||||
///
|
||||
/// ## Examples
|
||||
///
|
||||
/// ```
|
||||
/// # use anyhow::Result;
|
||||
/// # use wasmtime::*;
|
||||
/// # fn main() -> Result<()> {
|
||||
/// // Enable interruptable code via `Config` and then create an interrupt
|
||||
/// // handle which we'll use later to interrupt running code.
|
||||
/// let engine = Engine::new(Config::new().interruptable(true));
|
||||
/// let store = Store::new(&engine);
|
||||
/// let interrupt_handle = store.interrupt_handle()?;
|
||||
///
|
||||
/// // Compile and instantiate a small example with an infinite loop.
|
||||
/// let module = Module::new(&store, r#"
|
||||
/// (func (export "run") (loop br 0))
|
||||
/// "#)?;
|
||||
/// let instance = Instance::new(&module, &[])?;
|
||||
/// let run = instance
|
||||
/// .get_func("run")
|
||||
/// .ok_or(anyhow::format_err!("failed to find `run` function export"))?
|
||||
/// .get0::<()>()?;
|
||||
///
|
||||
/// // Spin up a thread to send us an interrupt in a second
|
||||
/// std::thread::spawn(move || {
|
||||
/// std::thread::sleep(std::time::Duration::from_secs(1));
|
||||
/// interrupt_handle.interrupt();
|
||||
/// });
|
||||
///
|
||||
/// let trap = run().unwrap_err();
|
||||
/// assert!(trap.message().contains("wasm trap: interrupt"));
|
||||
/// # Ok(())
|
||||
/// # }
|
||||
/// ```
|
||||
pub fn interrupt_handle(&self) -> Result<InterruptHandle> {
|
||||
if self.engine().config.tunables.interruptable {
|
||||
Ok(InterruptHandle {
|
||||
interrupts: self.compiler().interrupts().clone(),
|
||||
})
|
||||
} else {
|
||||
bail!("interrupts aren't enabled for this `Store`")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Default for Store {
|
||||
@@ -560,10 +688,32 @@ impl Default for Store {
|
||||
}
|
||||
}
|
||||
|
||||
/// A threadsafe handle used to interrupt instances executing within a
|
||||
/// particular `Store`.
|
||||
///
|
||||
/// This structure is created by the [`Store::interrupt_handle`] method.
|
||||
pub struct InterruptHandle {
|
||||
interrupts: Arc<VMInterrupts>,
|
||||
}
|
||||
|
||||
impl InterruptHandle {
|
||||
/// Flags that execution within this handle's original [`Store`] should be
|
||||
/// interrupted.
|
||||
///
|
||||
/// This will not immediately interrupt execution of wasm modules, but
|
||||
/// rather it will interrupt wasm execution of loop headers and wasm
|
||||
/// execution of function entries. For more information see
|
||||
/// [`Store::interrupt_handle`].
|
||||
pub fn interrupt(&self) {
|
||||
self.interrupts.interrupt()
|
||||
}
|
||||
}
|
||||
|
||||
fn _assert_send_sync() {
|
||||
fn _assert<T: Send + Sync>() {}
|
||||
_assert::<Engine>();
|
||||
_assert::<Config>();
|
||||
_assert::<InterruptHandle>();
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
|
||||
@@ -53,6 +53,8 @@ pub(crate) fn create_handle(
|
||||
.operator_config
|
||||
.enable_bulk_memory,
|
||||
state,
|
||||
store.compiler().interrupts().clone(),
|
||||
store.engine().config().max_wasm_stack,
|
||||
)?)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -50,11 +50,18 @@ impl Trap {
|
||||
.downcast()
|
||||
.expect("only `Trap` user errors are supported")
|
||||
}
|
||||
wasmtime_runtime::Trap::Jit { pc, backtrace } => {
|
||||
let code = info
|
||||
wasmtime_runtime::Trap::Jit {
|
||||
pc,
|
||||
backtrace,
|
||||
maybe_interrupted,
|
||||
} => {
|
||||
let mut code = info
|
||||
.lookup_trap_info(pc)
|
||||
.map(|info| info.trap_code)
|
||||
.unwrap_or(TrapCode::StackOverflow);
|
||||
if maybe_interrupted && code == TrapCode::StackOverflow {
|
||||
code = TrapCode::Interrupt;
|
||||
}
|
||||
Trap::new_wasm(&info, Some(pc), code, backtrace)
|
||||
}
|
||||
wasmtime_runtime::Trap::Wasm {
|
||||
|
||||
crates/api/tests/iloop.rs (new file)
@@ -0,0 +1,135 @@
|
||||
use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
|
||||
use wasmtime::*;
|
||||
|
||||
fn interruptable_store() -> Store {
|
||||
let engine = Engine::new(Config::new().interruptable(true));
|
||||
Store::new(&engine)
|
||||
}
|
||||
|
||||
fn hugely_recursive_module(store: &Store) -> anyhow::Result<Module> {
|
||||
let mut wat = String::new();
|
||||
wat.push_str(
|
||||
r#"
|
||||
(import "" "" (func))
|
||||
(func (export "loop") call 2 call 2)
|
||||
"#,
|
||||
);
|
||||
for i in 0..100 {
|
||||
wat.push_str(&format!("(func call {0} call {0})\n", i + 3));
|
||||
}
|
||||
wat.push_str("(func call 0)\n");
|
||||
|
||||
Module::new(&store, &wat)
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn loops_interruptable() -> anyhow::Result<()> {
|
||||
let store = interruptable_store();
|
||||
let module = Module::new(&store, r#"(func (export "loop") (loop br 0))"#)?;
|
||||
let instance = Instance::new(&module, &[])?;
|
||||
let iloop = instance.get_func("loop").unwrap().get0::<()>()?;
|
||||
store.interrupt_handle()?.interrupt();
|
||||
let trap = iloop().unwrap_err();
|
||||
assert!(trap.message().contains("wasm trap: interrupt"));
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn functions_interruptable() -> anyhow::Result<()> {
|
||||
let store = interruptable_store();
|
||||
let module = hugely_recursive_module(&store)?;
|
||||
let func = Func::wrap(&store, || {});
|
||||
let instance = Instance::new(&module, &[func.into()])?;
|
||||
let iloop = instance.get_func("loop").unwrap().get0::<()>()?;
|
||||
store.interrupt_handle()?.interrupt();
|
||||
let trap = iloop().unwrap_err();
|
||||
assert!(
|
||||
trap.message().contains("wasm trap: interrupt"),
|
||||
"{}",
|
||||
trap.message()
|
||||
);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn loop_interrupt_from_afar() -> anyhow::Result<()> {
|
||||
// Create an instance which calls an imported function on each iteration of
|
||||
// the loop so we can count the number of loop iterations we've executed so
|
||||
// far.
|
||||
static HITS: AtomicUsize = AtomicUsize::new(0);
|
||||
let store = interruptable_store();
|
||||
let module = Module::new(
|
||||
&store,
|
||||
r#"
|
||||
(import "" "" (func))
|
||||
|
||||
(func (export "loop")
|
||||
(loop
|
||||
call 0
|
||||
br 0)
|
||||
)
|
||||
"#,
|
||||
)?;
|
||||
let func = Func::wrap(&store, || {
|
||||
HITS.fetch_add(1, SeqCst);
|
||||
});
|
||||
let instance = Instance::new(&module, &[func.into()])?;
|
||||
|
||||
// Use the instance's interrupt handle to wait for it to enter the loop long
|
||||
// enough, and then signal an interrupt.
|
||||
let handle = store.interrupt_handle()?;
|
||||
let thread = std::thread::spawn(move || {
|
||||
while HITS.load(SeqCst) <= 100_000 {
|
||||
// continue ...
|
||||
}
|
||||
handle.interrupt();
|
||||
});
|
||||
|
||||
// Enter the infinitely looping function and assert that our interrupt
|
||||
// handle does indeed actually interrupt the function.
|
||||
let iloop = instance.get_func("loop").unwrap().get0::<()>()?;
|
||||
let trap = iloop().unwrap_err();
|
||||
thread.join().unwrap();
|
||||
assert!(
|
||||
trap.message().contains("wasm trap: interrupt"),
|
||||
"bad message: {}",
|
||||
trap.message()
|
||||
);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn function_interrupt_from_afar() -> anyhow::Result<()> {
|
||||
// Create an instance which calls an imported function on each iteration of
|
||||
// the loop so we can count the number of loop iterations we've executed so
|
||||
// far.
|
||||
static HITS: AtomicUsize = AtomicUsize::new(0);
|
||||
let store = interruptable_store();
|
||||
let module = hugely_recursive_module(&store)?;
|
||||
let func = Func::wrap(&store, || {
|
||||
HITS.fetch_add(1, SeqCst);
|
||||
});
|
||||
let instance = Instance::new(&module, &[func.into()])?;
|
||||
|
||||
// Use the instance's interrupt handle to wait for it to enter the loop long
|
||||
// enough, and then signal an interrupt.
|
||||
let handle = store.interrupt_handle()?;
|
||||
let thread = std::thread::spawn(move || {
|
||||
while HITS.load(SeqCst) <= 100_000 {
|
||||
// continue ...
|
||||
}
|
||||
handle.interrupt();
|
||||
});
|
||||
|
||||
// Enter the infinitely looping function and assert that our interrupt
|
||||
// handle does indeed actually interrupt the function.
|
||||
let iloop = instance.get_func("loop").unwrap().get0::<()>()?;
|
||||
let trap = iloop().unwrap_err();
|
||||
thread.join().unwrap();
|
||||
assert!(
|
||||
trap.message().contains("wasm trap: interrupt"),
|
||||
"bad message: {}",
|
||||
trap.message()
|
||||
);
|
||||
Ok(())
|
||||
}
|
||||
crates/api/tests/stack-overflow.rs (new file)
@@ -0,0 +1,60 @@
|
||||
use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
|
||||
use wasmtime::*;
|
||||
|
||||
#[test]
|
||||
fn host_always_has_some_stack() -> anyhow::Result<()> {
|
||||
static HITS: AtomicUsize = AtomicUsize::new(0);
|
||||
// assume hosts always have at least 512k of stack
|
||||
const HOST_STACK: usize = 512 * 1024;
|
||||
|
||||
let store = Store::default();
|
||||
|
||||
// Create a module that's infinitely recursive, but calls the host on each
|
||||
// level of wasm stack to always test how much host stack we have left.
|
||||
let module = Module::new(
|
||||
&store,
|
||||
r#"
|
||||
(module
|
||||
(import "" "" (func $host))
|
||||
(func $recursive (export "foo")
|
||||
call $host
|
||||
call $recursive)
|
||||
)
|
||||
"#,
|
||||
)?;
|
||||
let func = Func::wrap(&store, test_host_stack);
|
||||
let instance = Instance::new(&module, &[func.into()])?;
|
||||
let foo = instance.get_func("foo").unwrap().get0::<()>()?;
|
||||
|
||||
// Make sure that our function traps and the trap says that the call stack
|
||||
// has been exhausted.
|
||||
let trap = foo().unwrap_err();
|
||||
assert!(
|
||||
trap.message().contains("call stack exhausted"),
|
||||
"{}",
|
||||
trap.message()
|
||||
);
|
||||
|
||||
// Additionally, however, and this is the crucial test, make sure that the
|
||||
// host function actually completed. If HITS is 1 then we entered but didn't
|
||||
// exit meaning we segfaulted while executing the host, yet still tried to
|
||||
// recover from it with longjmp.
|
||||
assert_eq!(HITS.load(SeqCst), 0);
|
||||
|
||||
return Ok(());
|
||||
|
||||
fn test_host_stack() {
|
||||
HITS.fetch_add(1, SeqCst);
|
||||
assert!(consume_some_stack(0, HOST_STACK) > 0);
|
||||
HITS.fetch_sub(1, SeqCst);
|
||||
}
|
||||
|
||||
#[inline(never)]
|
||||
fn consume_some_stack(ptr: usize, stack: usize) -> usize {
|
||||
if stack == 0 {
|
||||
return ptr;
|
||||
}
|
||||
let mut space = [0u8; 1024];
|
||||
consume_some_stack(space.as_mut_ptr() as usize, stack.saturating_sub(1024))
|
||||
}
|
||||
}
|
||||
@@ -49,6 +49,8 @@ enum wasmtime_profiling_strategy_t { // ProfilingStrategy
|
||||
WASM_API_EXTERN ret wasmtime_config_##name##_set(wasm_config_t*, ty);
|
||||
|
||||
WASMTIME_CONFIG_PROP(void, debug_info, bool)
|
||||
WASMTIME_CONFIG_PROP(void, interruptable, bool)
|
||||
WASMTIME_CONFIG_PROP(void, max_wasm_stack, size_t)
|
||||
WASMTIME_CONFIG_PROP(void, wasm_threads, bool)
|
||||
WASMTIME_CONFIG_PROP(void, wasm_reference_types, bool)
|
||||
WASMTIME_CONFIG_PROP(void, wasm_simd, bool)
|
||||
@@ -131,6 +133,23 @@ WASM_API_EXTERN own wasm_func_t* wasmtime_func_new_with_env(
|
||||
|
||||
WASM_API_EXTERN own wasm_extern_t* wasmtime_caller_export_get(const wasmtime_caller_t* caller, const wasm_name_t* name);
|
||||
|
||||
///////////////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// wasmtime_interrupt_handle_t extension, allowing interruption of running wasm
|
||||
// modules.
|
||||
//
|
||||
// Note that `wasmtime_interrupt_handle_t` is safe to send to other threads and
|
||||
// interrupt/delete.
|
||||
//
|
||||
// Also note that `wasmtime_interrupt_handle_new` may return NULL if interrupts
|
||||
// are not enabled in `wasm_config_t`.
|
||||
|
||||
WASMTIME_DECLARE_OWN(interrupt_handle)
|
||||
|
||||
WASM_API_EXTERN own wasmtime_interrupt_handle_t *wasmtime_interrupt_handle_new(wasm_store_t *store);
|
||||
|
||||
WASM_API_EXTERN void wasmtime_interrupt_handle_interrupt(wasmtime_interrupt_handle_t *handle);
|
||||
|
||||
///////////////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// Extensions to `wasm_frame_t`
|
||||
|
||||
@@ -44,6 +44,16 @@ pub extern "C" fn wasmtime_config_debug_info_set(c: &mut wasm_config_t, enable:
|
||||
c.config.debug_info(enable);
|
||||
}
|
||||
|
||||
#[no_mangle]
|
||||
pub extern "C" fn wasmtime_config_interruptable_set(c: &mut wasm_config_t, enable: bool) {
|
||||
c.config.interruptable(enable);
|
||||
}
|
||||
|
||||
#[no_mangle]
|
||||
pub extern "C" fn wasmtime_config_max_wasm_stack_set(c: &mut wasm_config_t, size: usize) {
|
||||
c.config.max_wasm_stack(size);
|
||||
}
|
||||
|
||||
#[no_mangle]
|
||||
pub extern "C" fn wasmtime_config_wasm_threads_set(c: &mut wasm_config_t, enable: bool) {
|
||||
c.config.wasm_threads(enable);
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
use crate::wasm_engine_t;
|
||||
use wasmtime::{HostRef, Store};
|
||||
use wasmtime::{HostRef, InterruptHandle, Store};
|
||||
|
||||
#[repr(C)]
|
||||
#[derive(Clone)]
|
||||
@@ -16,3 +16,24 @@ pub extern "C" fn wasm_store_new(engine: &wasm_engine_t) -> Box<wasm_store_t> {
|
||||
store: HostRef::new(Store::new(&engine.borrow())),
|
||||
})
|
||||
}
|
||||
|
||||
#[repr(C)]
|
||||
pub struct wasmtime_interrupt_handle_t {
|
||||
handle: InterruptHandle,
|
||||
}
|
||||
|
||||
wasmtime_c_api_macros::declare_own!(wasmtime_interrupt_handle_t);
|
||||
|
||||
#[no_mangle]
|
||||
pub extern "C" fn wasmtime_interrupt_handle_new(
|
||||
store: &wasm_store_t,
|
||||
) -> Option<Box<wasmtime_interrupt_handle_t>> {
|
||||
Some(Box::new(wasmtime_interrupt_handle_t {
|
||||
handle: store.store.borrow().interrupt_handle().ok()?,
|
||||
}))
|
||||
}
|
||||
|
||||
#[no_mangle]
|
||||
pub extern "C" fn wasmtime_interrupt_handle_interrupt(handle: &wasmtime_interrupt_handle_t) {
|
||||
handle.handle.interrupt();
|
||||
}
|
||||
|
||||
@@ -1,5 +1,90 @@
|
||||
//! Support for compiling with Cranelift.
|
||||
|
||||
// # How does Wasmtime prevent stack overflow?
|
||||
//
|
||||
// A few locations throughout the codebase link to this file to explain
|
||||
// interrupts and stack overflow. To start off, let's take a look at stack
|
||||
// overflow. Wasm code is well-defined to have stack overflow being recoverable
|
||||
// and raising a trap, so we need to handle this somehow! There's also an added
|
||||
// constraint where as an embedder you frequently are running host-provided
|
||||
// code called from wasm. WebAssembly and native code currently share the same
|
||||
// call stack, so you want to make sure that your host-provided code will have
|
||||
// enough call-stack available to it.
|
||||
//
|
||||
// Given all that, the way that stack overflow is handled is by adding a
|
||||
// prologue check to all JIT functions for how much native stack is remaining.
|
||||
// The `VMContext` pointer is the first argument to all functions, and the first
|
||||
// field of this structure is `*const VMInterrupts` and the first field of that
|
||||
// is the stack limit. Note that the stack limit in this case means "if the
|
||||
// stack pointer goes below this, trap". Each JIT function which consumes stack
|
||||
// space or isn't a leaf function starts off by loading the stack limit,
|
||||
// checking it against the stack pointer, and trapping if necessary.
|
||||
//
|
||||
// This manual check allows the embedder (us) to give wasm a relatively precise
|
||||
// amount of stack allocation. Using this scheme we reserve a chunk of stack
|
||||
// for wasm code relative to where wasm code was called. This ensures that
|
||||
// native code called by wasm should have native stack space to run, and the
|
||||
// numbers of stack spaces here should all be configurable for various
|
||||
// embeddings.
|
||||
//
|
||||
// Note that we do not consider each thread's stack guard page here. It's
|
||||
// considered that if you hit that you still abort the whole program. This
|
||||
// shouldn't happen most of the time because wasm is always stack-bound and
|
||||
// it's up to the embedder to bound its own native stack.
|
||||
//
|
||||
// So all-in-all, that's how we implement stack checks. Note that stack checks
|
||||
// cannot be disabled because it's a feature of core wasm semantics. This means
|
||||
// that all functions almost always have a stack check prologue, and it's up to
|
||||
// us to optimize away that cost as much as we can.
|
||||
//
|
||||
// For more information about the tricky bits of managing the reserved stack
|
||||
// size of wasm, see the implementation in `traphandlers.rs` in the
|
||||
// `update_stack_limit` function.
|
||||
//
|
||||
// # How is Wasmtime interrupted?
|
||||
//
|
||||
// Ok so given all that background of stack checks, the next thing we want to
|
||||
// build on top of this is the ability to *interrupt* executing wasm code. This
|
||||
// is useful to ensure that wasm always executes within a particular time slice
|
||||
// or otherwise doesn't consume all CPU resources on a system. There are two
|
||||
// major ways that interrupts are required:
|
||||
//
|
||||
// * Loops - likely immediately apparent but it's easy to write an infinite
|
||||
// loop in wasm, so we need the ability to interrupt loops.
|
||||
// * Function entries - somewhat more subtle, but imagine a module where each
|
||||
// function calls the next function twice. This creates 2^n calls pretty
|
||||
// quickly, so a pretty small module can export a function with no loops
|
||||
// that takes an extremely long time to call.
|
||||
//
|
||||
// In many cases if an interrupt comes in you want to interrupt host code as
|
||||
// well, but we're explicitly not considering that here. We're hoping that
|
||||
// interrupting host code is largely left to the embedder (e.g. figuring out
|
||||
// how to interrupt blocking syscalls) and they can figure that out. The purpose
|
||||
// of this feature is to basically only give the ability to interrupt
|
||||
// currently-executing wasm code (or triggering an interrupt as soon as wasm
|
||||
// reenters itself).
|
||||
//
|
||||
// To implement interruption of loops we insert code at the head of all loops
|
||||
// which checks the stack limit counter. If the counter matches a magical
|
||||
// sentinel value that can never be the real stack limit, then we
|
||||
// interrupt the loop and trap. To implement interrupts of functions, we
|
||||
// actually do the same thing where the magical sentinel value we use here is
|
||||
// chosen so that all stack pointer values are considered as "you ran
|
||||
// over your stack". This means that with a write of a magical value to one
|
||||
// location we can interrupt both loops and function bodies.
|
||||
//
|
||||
// The "magical value" here is `usize::max_value() - N`. We reserve
|
||||
// `usize::max_value()` for "the stack limit isn't set yet" and so -N is
|
||||
// then used for "you got interrupted". We do a bit of patching afterwards to
|
||||
// translate a stack overflow into an interrupt trap if we see that an
|
||||
// interrupt happened. Note that `N` here is a medium-size-ish nonzero value
|
||||
// chosen in coordination with the cranelift backend. Currently it's 32k. The
|
||||
// value of N is basically a threshold in the backend for "anything less than
|
||||
// this requires only one branch in the prologue, any stack size bigger requires
|
||||
// two branches". Naturally we want most functions to have one branch, but we
|
||||
// also need to actually catch stack overflow, so for now 32k is chosen and it's
|
||||
// assumed that no valid stack pointer will ever be `usize::max_value() - 32k`.
|
||||
|
||||
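As a rough illustration of the scheme described above (a sketch, not the
actual runtime definitions; in particular the atomic field type of
`VMInterrupts` is an assumption here):

    use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};

    /// Matches `wasmtime_environ::INTERRUPTED` introduced in this change.
    const INTERRUPTED: usize = usize::max_value() - 32 * 1024;

    struct VMInterrupts {
        /// Read by JIT code in function prologues and loop headers.
        stack_limit: AtomicUsize,
    }

    impl VMInterrupts {
        fn interrupt(&self) {
            // Writing the sentinel makes every stack check "fail", so wasm traps
            // at the next function entry or loop header it reaches.
            self.stack_limit.store(INTERRUPTED, SeqCst);
        }
    }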
use crate::address_map::{FunctionAddressMap, InstructionAddressMap};
|
||||
use crate::cache::{ModuleCacheDataTupleType, ModuleCacheEntry};
|
||||
use crate::compilation::{
|
||||
@@ -13,6 +98,7 @@ use cranelift_codegen::{binemit, isa, Context};
|
||||
use cranelift_entity::PrimaryMap;
|
||||
use cranelift_wasm::{DefinedFuncIndex, FuncIndex, FuncTranslator, ModuleTranslationState};
|
||||
use rayon::prelude::{IntoParallelRefIterator, ParallelIterator};
|
||||
use std::convert::TryFrom;
|
||||
use std::hash::{Hash, Hasher};
|
||||
|
||||
/// Implementation of a relocation sink that just saves all the information for later
|
||||
@@ -208,12 +294,47 @@ fn compile(env: CompileEnv<'_>) -> Result<ModuleCacheDataTupleType, CompileError
|
||||
context.func.collect_debug_info();
|
||||
}
|
||||
|
||||
let mut func_env = FuncEnvironment::new(isa.frontend_config(), env.local, env.tunables);
|
||||
|
||||
// We use these as constant offsets below in
|
||||
// `stack_limit_from_arguments`, so assert their values here. This
|
||||
// allows the closure below to get coerced to a function pointer, as
|
||||
// needed by `ir::Function`.
|
||||
//
|
||||
// Otherwise our stack limit is specially calculated from the vmctx
|
||||
// argument, where we need to load the `*const VMInterrupts`
|
||||
// pointer, and then from that pointer we need to load the stack
|
||||
// limit itself. Note that manual register allocation is needed here
|
||||
// too due to how late in the process this codegen happens.
|
||||
//
|
||||
// For more information about interrupts and stack checks, see the
|
||||
// top of this file.
|
||||
let vmctx = context
|
||||
.func
|
||||
.create_global_value(ir::GlobalValueData::VMContext);
|
||||
let interrupts_ptr = context.func.create_global_value(ir::GlobalValueData::Load {
|
||||
base: vmctx,
|
||||
offset: i32::try_from(func_env.offsets.vmctx_interrupts())
|
||||
.unwrap()
|
||||
.into(),
|
||||
global_type: isa.pointer_type(),
|
||||
readonly: true,
|
||||
});
|
||||
let stack_limit = context.func.create_global_value(ir::GlobalValueData::Load {
|
||||
base: interrupts_ptr,
|
||||
offset: i32::try_from(func_env.offsets.vminterrupts_stack_limit())
|
||||
.unwrap()
|
||||
.into(),
|
||||
global_type: isa.pointer_type(),
|
||||
readonly: false,
|
||||
});
|
||||
context.func.stack_limit = Some(stack_limit);
|
||||
func_translator.translate(
|
||||
env.module_translation.0,
|
||||
input.data,
|
||||
input.module_offset,
|
||||
&mut context.func,
|
||||
&mut FuncEnvironment::new(isa.frontend_config(), env.local),
|
||||
&mut func_env,
|
||||
)?;
|
||||
|
||||
let mut code_buf: Vec<u8> = Vec::new();
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
use crate::module::{MemoryPlan, MemoryStyle, ModuleLocal, TableStyle};
|
||||
use crate::vmoffsets::VMOffsets;
|
||||
use crate::WASM_PAGE_SIZE;
|
||||
use crate::{Tunables, INTERRUPTED, WASM_PAGE_SIZE};
|
||||
use cranelift_codegen::cursor::FuncCursor;
|
||||
use cranelift_codegen::ir;
|
||||
use cranelift_codegen::ir::condcodes::*;
|
||||
@@ -135,13 +135,16 @@ pub struct FuncEnvironment<'module_environment> {
|
||||
data_drop_sig: Option<ir::SigRef>,
|
||||
|
||||
/// Offsets to struct fields accessed by JIT code.
|
||||
offsets: VMOffsets,
|
||||
pub(crate) offsets: VMOffsets,
|
||||
|
||||
tunables: &'module_environment Tunables,
|
||||
}
|
||||
|
||||
impl<'module_environment> FuncEnvironment<'module_environment> {
|
||||
pub fn new(
|
||||
target_config: TargetFrontendConfig,
|
||||
module: &'module_environment ModuleLocal,
|
||||
tunables: &'module_environment Tunables,
|
||||
) -> Self {
|
||||
Self {
|
||||
target_config,
|
||||
@@ -157,6 +160,7 @@ impl<'module_environment> FuncEnvironment<'module_environment> {
|
||||
memory_init_sig: None,
|
||||
data_drop_sig: None,
|
||||
offsets: VMOffsets::new(target_config.pointer_bytes(), module),
|
||||
tunables,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1246,4 +1250,37 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn translate_loop_header(&mut self, mut pos: FuncCursor) -> WasmResult<()> {
|
||||
if !self.tunables.interruptable {
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
// Start out each loop with a check to the interrupt flag to allow
|
||||
// interruption of long or infinite loops.
|
||||
//
|
||||
// For more information about this see comments in
|
||||
// `crates/environ/src/cranelift.rs`
|
||||
let vmctx = self.vmctx(&mut pos.func);
|
||||
let pointer_type = self.pointer_type();
|
||||
let base = pos.ins().global_value(pointer_type, vmctx);
|
||||
let offset = i32::try_from(self.offsets.vmctx_interrupts()).unwrap();
|
||||
let interrupt_ptr = pos
|
||||
.ins()
|
||||
.load(pointer_type, ir::MemFlags::trusted(), base, offset);
|
||||
let interrupt = pos.ins().load(
|
||||
pointer_type,
|
||||
ir::MemFlags::trusted(),
|
||||
interrupt_ptr,
|
||||
i32::from(self.offsets.vminterrupts_stack_limit()),
|
||||
);
|
||||
// Note that the cast to `isize` happens first to allow sign-extension,
|
||||
// if necessary, to `i64`.
|
||||
let interrupted_sentinel = pos.ins().iconst(pointer_type, INTERRUPTED as isize as i64);
|
||||
let cmp = pos
|
||||
.ins()
|
||||
.icmp(IntCC::Equal, interrupt, interrupted_sentinel);
|
||||
pos.ins().trapnz(cmp, ir::TrapCode::Interrupt);
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
@@ -62,7 +62,7 @@ pub use crate::module_environ::{
|
||||
ModuleEnvironment, ModuleTranslation,
|
||||
};
|
||||
pub use crate::tunables::Tunables;
|
||||
pub use crate::vmoffsets::{TargetSharedSignatureIndex, VMOffsets};
|
||||
pub use crate::vmoffsets::{TargetSharedSignatureIndex, VMOffsets, INTERRUPTED};
|
||||
|
||||
/// WebAssembly page sizes are defined to be 64KiB.
|
||||
pub const WASM_PAGE_SIZE: u32 = 0x10000;
|
||||
|
||||
@@ -26,7 +26,11 @@ impl crate::compilation::Compiler for Lightbeam {
|
||||
return Err(CompileError::DebugInfoNotSupported);
|
||||
}
|
||||
|
||||
let env = FuncEnvironment::new(isa.frontend_config(), &translation.module.local);
|
||||
let env = FuncEnvironment::new(
|
||||
isa.frontend_config(),
|
||||
&translation.module.local,
|
||||
&translation.tunables,
|
||||
);
|
||||
let mut relocations = PrimaryMap::new();
|
||||
let mut codegen_session: lightbeam::CodeGenSession<_> =
|
||||
lightbeam::CodeGenSession::new(translation.function_body_inputs.len() as u32, &env);
|
||||
|
||||
@@ -1,4 +1,3 @@
|
||||
use crate::func_environ::FuncEnvironment;
|
||||
use crate::module::{EntityIndex, MemoryPlan, Module, TableElements, TablePlan};
|
||||
use crate::tunables::Tunables;
|
||||
use cranelift_codegen::ir;
|
||||
@@ -46,13 +45,6 @@ pub struct ModuleTranslation<'data> {
|
||||
pub module_translation: Option<ModuleTranslationState>,
|
||||
}
|
||||
|
||||
impl<'data> ModuleTranslation<'data> {
|
||||
/// Return a new `FuncEnvironment` for translating a function.
|
||||
pub fn func_env(&self) -> FuncEnvironment<'_> {
|
||||
FuncEnvironment::new(self.target_config, &self.module.local)
|
||||
}
|
||||
}
|
||||
|
||||
/// Object containing the standalone environment information.
|
||||
pub struct ModuleEnvironment<'data> {
|
||||
/// The result to be filled in.
|
||||
|
||||
@@ -12,6 +12,14 @@ pub struct Tunables {
|
||||
|
||||
/// Whether or not to generate DWARF debug information.
|
||||
pub debug_info: bool,
|
||||
|
||||
/// Whether or not to enable the ability to interrupt wasm code dynamically.
|
||||
///
|
||||
/// More info can be found about the implementation in
|
||||
/// crates/environ/src/cranelift.rs. Note that you can't interrupt host
|
||||
/// calls and interrupts are implemented through the `VMInterrupts`
|
||||
/// structure, or `InterruptHandle` in the `wasmtime` crate.
|
||||
pub interruptable: bool,
|
||||
}
|
||||
|
||||
impl Default for Tunables {
|
||||
@@ -44,6 +52,7 @@ impl Default for Tunables {
|
||||
dynamic_memory_offset_guard_size: 0x1_0000,
|
||||
|
||||
debug_info: false,
|
||||
interruptable: false,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,6 +1,21 @@
|
||||
//! Offsets and sizes of various structs in wasmtime-runtime's vmcontext
|
||||
//! module.
|
||||
|
||||
// Currently the `VMContext` allocation by field looks like this:
|
||||
//
|
||||
// struct VMContext {
|
||||
// interrupts: *const VMInterrupts,
|
||||
// signature_ids: [VMSharedSignatureIndex; module.num_signature_ids],
|
||||
// imported_functions: [VMFunctionImport; module.num_imported_functions],
|
||||
// imported_tables: [VMTableImport; module.num_imported_tables],
|
||||
// imported_memories: [VMMemoryImport; module.num_imported_memories],
|
||||
// imported_globals: [VMGlobalImport; module.num_imported_globals],
|
||||
// tables: [VMTableDefinition; module.num_defined_tables],
|
||||
// memories: [VMMemoryDefinition; module.num_defined_memories],
|
||||
// globals: [VMGlobalDefinition; module.num_defined_globals],
|
||||
// builtins: VMBuiltinFunctionsArray,
|
||||
// }
|
||||
|
||||
use crate::module::ModuleLocal;
|
||||
use crate::BuiltinFunctionIndex;
|
||||
use cranelift_codegen::ir;
|
||||
@@ -11,6 +26,11 @@ use cranelift_wasm::{
|
||||
use more_asserts::assert_lt;
|
||||
use std::convert::TryFrom;
|
||||
|
||||
/// Sentinel value indicating that wasm has been interrupted.
|
||||
// Note that this has a bit of an odd definition. See the `insert_stack_check`
|
||||
// function in `cranelift/codegen/src/isa/x86/abi.rs` for more information
|
||||
pub const INTERRUPTED: usize = usize::max_value() - 32 * 1024;
|
||||
|
||||
#[cfg(target_pointer_width = "32")]
|
||||
fn cast_to_u32(sz: usize) -> u32 {
|
||||
u32::try_from(sz).unwrap()
|
||||
@@ -226,6 +246,14 @@ impl VMOffsets {
|
||||
}
|
||||
}
|
||||
|
||||
/// Offsets for `VMInterrupts`.
|
||||
impl VMOffsets {
|
||||
/// Return the offset of the `stack_limit` field of `VMInterrupts`
|
||||
pub fn vminterrupts_stack_limit(&self) -> u8 {
|
||||
0
|
||||
}
|
||||
}
|
||||
|
||||
/// Offsets for `VMCallerCheckedAnyfunc`.
|
||||
impl VMOffsets {
|
||||
/// The offset of the `func_ptr` field.
|
||||
@@ -253,9 +281,16 @@ impl VMOffsets {
|
||||
|
||||
/// Offsets for `VMContext`.
|
||||
impl VMOffsets {
|
||||
/// Return the offset to the `VMInterrupts` structure
|
||||
pub fn vmctx_interrupts(&self) -> u32 {
|
||||
0
|
||||
}
|
||||
|
||||
/// The offset of the `signature_ids` array.
|
||||
pub fn vmctx_signature_ids_begin(&self) -> u32 {
|
||||
0
|
||||
self.vmctx_interrupts()
|
||||
.checked_add(u32::from(self.pointer_size))
|
||||
.unwrap()
|
||||
}
|
||||
|
||||
/// The offset of the `tables` array.
|
||||
|
||||
@@ -106,6 +106,7 @@ pub struct Config {
debug_info: bool,
canonicalize_nans: bool,
spectest: usize,
interruptable: bool,
}

impl Config {
@@ -115,7 +116,8 @@ impl Config {
cfg.debug_info(self.debug_info)
.cranelift_nan_canonicalization(self.canonicalize_nans)
.cranelift_debug_verifier(self.debug_verifier)
.cranelift_opt_level(self.opt_level.to_wasmtime());
.cranelift_opt_level(self.opt_level.to_wasmtime())
.interruptable(self.interruptable);
return cfg;
}
}

@@ -22,6 +22,7 @@ use wasmparser::*;
#[derive(Arbitrary, Debug)]
struct Swarm {
config_debug_info: bool,
config_interruptable: bool,
module_new: bool,
module_drop: bool,
instance_new: bool,
@@ -35,6 +36,7 @@ struct Swarm {
pub enum ApiCall {
ConfigNew,
ConfigDebugInfo(bool),
ConfigInterruptable(bool),
EngineNew,
StoreNew,
ModuleNew { id: usize, wasm: super::WasmOptTtf },
@@ -163,9 +165,10 @@ impl Arbitrary for ApiCalls {
// minimum size.
arbitrary::size_hint::and(
<Swarm as Arbitrary>::size_hint(depth),
// `arbitrary_config` uses two bools when
// `swarm.config_debug_info` is true.
<(bool, bool) as Arbitrary>::size_hint(depth),
// `arbitrary_config` uses four bools:
// 2 when `swarm.config_debug_info` is true
// 2 when `swarm.config_interruptable` is true
<(bool, bool, bool, bool) as Arbitrary>::size_hint(depth),
),
// We can generate arbitrary `WasmOptTtf` instances, which have
// no upper bound on the number of bytes they consume. This sets
@@ -187,6 +190,10 @@ fn arbitrary_config(
calls.push(ConfigDebugInfo(bool::arbitrary(input)?));
}

if swarm.config_interruptable && bool::arbitrary(input)? {
calls.push(ConfigInterruptable(bool::arbitrary(input)?));
}

// TODO: flags, features, and compilation strategy.

Ok(())

@@ -285,6 +285,11 @@ pub fn make_api_calls(api: crate::generators::api::ApiCalls) {
config.as_mut().unwrap().debug_info(b);
}

ApiCall::ConfigInterruptable(b) => {
log::trace!("enabling interruption");
config.as_mut().unwrap().interruptable(b);
}

ApiCall::EngineNew => {
log::trace!("creating engine");
assert!(engine.is_none());
@@ -9,6 +9,7 @@ use cranelift_codegen::Context;
use cranelift_codegen::{binemit, ir};
use cranelift_frontend::{FunctionBuilder, FunctionBuilderContext};
use std::collections::HashMap;
use std::sync::Arc;
use wasmtime_debug::{emit_debugsections_image, DebugInfoData};
use wasmtime_environ::entity::{EntityRef, PrimaryMap};
use wasmtime_environ::isa::{TargetFrontendConfig, TargetIsa};
@@ -19,7 +20,8 @@ use wasmtime_environ::{
Relocations, Traps, Tunables, VMOffsets,
};
use wasmtime_runtime::{
InstantiationError, SignatureRegistry, VMFunctionBody, VMSharedSignatureIndex, VMTrampoline,
InstantiationError, SignatureRegistry, VMFunctionBody, VMInterrupts, VMSharedSignatureIndex,
VMTrampoline,
};

/// Select which kind of compilation to use.
@@ -51,6 +53,7 @@ pub struct Compiler {
strategy: CompilationStrategy,
cache_config: CacheConfig,
tunables: Tunables,
interrupts: Arc<VMInterrupts>,
}

impl Compiler {
@@ -68,6 +71,7 @@ impl Compiler {
strategy,
cache_config,
tunables,
interrupts: Arc::new(VMInterrupts::default()),
}
}
}
@@ -95,6 +99,11 @@ impl Compiler {
&self.tunables
}

/// Return the handle by which to interrupt instances
pub fn interrupts(&self) -> &Arc<VMInterrupts> {
&self.interrupts
}

/// Compile the given function bodies.
pub(crate) fn compile<'data>(
&mut self,
@@ -21,6 +21,7 @@ use wasmtime_environ::{
ModuleEnvironment, Traps,
};
use wasmtime_profiling::ProfilingAgent;
use wasmtime_runtime::VMInterrupts;
use wasmtime_runtime::{
GdbJitImageRegistration, InstanceHandle, InstantiationError, RuntimeMemoryCreator,
SignatureRegistry, VMFunctionBody, VMSharedSignatureIndex, VMTrampoline,
@@ -138,6 +139,7 @@ pub struct CompiledModule {
dbg_jit_registration: Option<Rc<GdbJitImageRegistration>>,
traps: Traps,
address_transform: ModuleAddressMap,
interrupts: Arc<VMInterrupts>,
}

impl CompiledModule {
@@ -162,6 +164,7 @@ impl CompiledModule {
raw.dbg_jit_registration,
raw.traps,
raw.address_transform,
compiler.interrupts().clone(),
))
}

@@ -175,6 +178,7 @@ impl CompiledModule {
dbg_jit_registration: Option<GdbJitImageRegistration>,
traps: Traps,
address_transform: ModuleAddressMap,
interrupts: Arc<VMInterrupts>,
) -> Self {
Self {
module: Arc::new(module),
@@ -185,6 +189,7 @@ impl CompiledModule {
dbg_jit_registration: dbg_jit_registration.map(Rc::new),
traps,
address_transform,
interrupts,
}
}

@@ -203,6 +208,7 @@ impl CompiledModule {
resolver: &mut dyn Resolver,
sig_registry: &SignatureRegistry,
mem_creator: Option<&dyn RuntimeMemoryCreator>,
max_wasm_stack: usize,
host_state: Box<dyn Any>,
) -> Result<InstanceHandle, InstantiationError> {
let data_initializers = self
@@ -225,6 +231,8 @@ impl CompiledModule {
self.dbg_jit_registration.as_ref().map(|r| Rc::clone(&r)),
is_bulk_memory,
host_state,
self.interrupts.clone(),
max_wasm_stack,
)
}
@@ -54,7 +54,6 @@ fn apply_reloc(
FloorF64 => wasmtime_f64_floor as usize,
TruncF64 => wasmtime_f64_trunc as usize,
NearestF64 => wasmtime_f64_nearest as usize,
Probestack => PROBESTACK as usize,
other => panic!("unexpected libcall: {}", other),
}
}
@@ -121,38 +120,3 @@ fn apply_reloc(
_ => panic!("unsupported reloc kind"),
}
}

// A declaration for the stack probe function in Rust's standard library, for
// catching callstack overflow.
cfg_if::cfg_if! {
if #[cfg(all(
target_os = "windows",
target_env = "msvc",
target_pointer_width = "64"
))] {
extern "C" {
pub fn __chkstk();
}
const PROBESTACK: unsafe extern "C" fn() = __chkstk;
} else if #[cfg(all(target_os = "windows", target_env = "gnu"))] {
extern "C" {
// ___chkstk (note the triple underscore) is implemented in compiler-builtins/src/x86_64.rs
// by the Rust compiler for the MinGW target
#[cfg(all(target_os = "windows", target_env = "gnu"))]
pub fn ___chkstk();
}
const PROBESTACK: unsafe extern "C" fn() = ___chkstk;
} else if #[cfg(not(any(target_arch = "x86_64", target_arch = "x86")))] {
// As per
// https://github.com/rust-lang/compiler-builtins/blob/cae3e6ea23739166504f9f9fb50ec070097979d4/src/probestack.rs#L39,
// LLVM only has stack-probe support on x86-64 and x86. Thus, on any other CPU
// architecture, we simply use an empty stack-probe function.
extern "C" fn empty_probestack() {}
const PROBESTACK: unsafe extern "C" fn() = empty_probestack;
} else {
extern "C" {
pub fn __rust_probestack();
}
static PROBESTACK: unsafe extern "C" fn() = __rust_probestack;
}
}
@@ -11,8 +11,8 @@ use crate::traphandlers;
use crate::traphandlers::{catch_traps, Trap};
use crate::vmcontext::{
VMBuiltinFunctionsArray, VMCallerCheckedAnyfunc, VMContext, VMFunctionBody, VMFunctionImport,
VMGlobalDefinition, VMGlobalImport, VMMemoryDefinition, VMMemoryImport, VMSharedSignatureIndex,
VMTableDefinition, VMTableImport, VMTrampoline,
VMGlobalDefinition, VMGlobalImport, VMInterrupts, VMMemoryDefinition, VMMemoryImport,
VMSharedSignatureIndex, VMTableDefinition, VMTableImport, VMTrampoline,
};
use crate::{ExportFunction, ExportGlobal, ExportMemory, ExportTable};
use memoffset::offset_of;
@@ -110,6 +110,10 @@ pub(crate) struct Instance {
/// Handler run when `SIGBUS`, `SIGFPE`, `SIGILL`, or `SIGSEGV` are caught by the instance thread.
pub(crate) signal_handler: Cell<Option<Box<SignalHandler>>>,

/// Externally allocated data indicating how this instance will be
/// interrupted.
pub(crate) interrupts: Arc<VMInterrupts>,

/// Additional context used by compiled wasm code. This field is last, and
/// represents a dynamically-sized array that extends beyond the nominal
/// end of the struct (similar to a flexible array member).
@@ -275,6 +279,11 @@ impl Instance {
unsafe { self.vmctx_plus_offset(self.offsets.vmctx_builtin_functions_begin()) }
}

/// Return a pointer to the interrupts structure
pub fn interrupts(&self) -> *mut *const VMInterrupts {
unsafe { self.vmctx_plus_offset(self.offsets.vmctx_interrupts()) }
}

/// Return a reference to the vmctx used by compiled wasm code.
pub fn vmctx(&self) -> &VMContext {
&self.vmctx
@@ -377,17 +386,21 @@ impl Instance {
}

/// Invoke the WebAssembly start function of the instance, if one is present.
fn invoke_start_function(&self) -> Result<(), InstantiationError> {
fn invoke_start_function(&self, max_wasm_stack: usize) -> Result<(), InstantiationError> {
let start_index = match self.module.start_func {
Some(idx) => idx,
None => return Ok(()),
};

self.invoke_function_index(start_index)
self.invoke_function_index(start_index, max_wasm_stack)
.map_err(InstantiationError::StartTrap)
}

fn invoke_function_index(&self, callee_index: FuncIndex) -> Result<(), Trap> {
fn invoke_function_index(
&self,
callee_index: FuncIndex,
max_wasm_stack: usize,
) -> Result<(), Trap> {
let (callee_address, callee_vmctx) =
match self.module.local.defined_func_index(callee_index) {
Some(defined_index) => {
@@ -404,17 +417,18 @@ impl Instance {
}
};

self.invoke_function(callee_vmctx, callee_address)
self.invoke_function(callee_vmctx, callee_address, max_wasm_stack)
}

fn invoke_function(
&self,
callee_vmctx: *mut VMContext,
callee_address: *const VMFunctionBody,
max_wasm_stack: usize,
) -> Result<(), Trap> {
// Make the call.
unsafe {
catch_traps(callee_vmctx, || {
catch_traps(callee_vmctx, max_wasm_stack, || {
mem::transmute::<
*const VMFunctionBody,
unsafe extern "C" fn(*mut VMContext, *mut VMContext),
@@ -869,6 +883,8 @@ impl InstanceHandle {
dbg_jit_registration: Option<Rc<GdbJitImageRegistration>>,
is_bulk_memory: bool,
host_state: Box<dyn Any>,
interrupts: Arc<VMInterrupts>,
max_wasm_stack: usize,
) -> Result<Self, InstantiationError> {
let tables = create_tables(&module);
let memories = create_memories(&module, mem_creator.unwrap_or(&DefaultMemoryCreator {}))?;
@@ -906,6 +922,7 @@ impl InstanceHandle {
dbg_jit_registration,
host_state,
signal_handler: Cell::new(None),
interrupts,
vmctx: VMContext {},
};
let layout = instance.alloc_layout();
@@ -964,6 +981,7 @@ impl InstanceHandle {
instance.builtin_functions_ptr() as *mut VMBuiltinFunctionsArray,
VMBuiltinFunctionsArray::initialized(),
);
*instance.interrupts() = &*instance.interrupts;

// Check initializer bounds before initializing anything. Only do this
// when bulk memory is disabled, since the bulk memory proposal changes
@@ -986,7 +1004,7 @@ impl InstanceHandle {

// The WebAssembly spec specifies that the start function is
// invoked automatically at instantiation time.
instance.invoke_start_function()?;
instance.invoke_start_function(max_wasm_stack)?;

Ok(handle)
}
@@ -47,8 +47,8 @@ pub use crate::traphandlers::resume_panic;
pub use crate::traphandlers::{catch_traps, raise_lib_trap, raise_user_trap, Trap};
pub use crate::vmcontext::{
VMCallerCheckedAnyfunc, VMContext, VMFunctionBody, VMFunctionImport, VMGlobalDefinition,
VMGlobalImport, VMInvokeArgument, VMMemoryDefinition, VMMemoryImport, VMSharedSignatureIndex,
VMTableDefinition, VMTableImport, VMTrampoline,
VMGlobalImport, VMInterrupts, VMInvokeArgument, VMMemoryDefinition, VMMemoryImport,
VMSharedSignatureIndex, VMTableDefinition, VMTableImport, VMTrampoline,
};

/// Version number of this crate.
@@ -2,13 +2,14 @@
//! signal-handling mechanisms.

use crate::instance::{InstanceHandle, SignalHandler};
use crate::vmcontext::VMContext;
use crate::VMContext;
use backtrace::Backtrace;
use std::any::Any;
use std::cell::Cell;
use std::error::Error;
use std::io;
use std::ptr;
use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
use std::sync::Once;
use wasmtime_environ::ir;

@@ -104,7 +105,6 @@ cfg_if::cfg_if! {
// out what to do based on the result of the trap handling.
let jmp_buf = info.handle_trap(
get_pc(context),
false,
|handler| handler(signum, siginfo, context),
);

@@ -198,7 +198,6 @@ cfg_if::cfg_if! {
let record = &*(*exception_info).ExceptionRecord;
if record.ExceptionCode != EXCEPTION_ACCESS_VIOLATION &&
record.ExceptionCode != EXCEPTION_ILLEGAL_INSTRUCTION &&
record.ExceptionCode != EXCEPTION_STACK_OVERFLOW &&
record.ExceptionCode != EXCEPTION_INT_DIVIDE_BY_ZERO &&
record.ExceptionCode != EXCEPTION_INT_OVERFLOW
{
@@ -226,7 +225,6 @@ cfg_if::cfg_if! {
};
let jmp_buf = info.handle_trap(
(*(*exception_info).ContextRecord).Rip as *const u8,
record.ExceptionCode == EXCEPTION_STACK_OVERFLOW,
|handler| handler(exception_info),
);
if jmp_buf.is_null() {
@@ -302,22 +300,6 @@ pub unsafe fn resume_panic(payload: Box<dyn Any + Send>) -> ! {
tls::with(|info| info.unwrap().unwind_with(UnwindReason::Panic(payload)))
}

#[cfg(target_os = "windows")]
fn reset_guard_page() {
extern "C" {
fn _resetstkoflw() -> winapi::ctypes::c_int;
}

// We need to restore guard page under stack to handle future stack overflows properly.
// https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/resetstkoflw?view=vs-2019
if unsafe { _resetstkoflw() } == 0 {
panic!("failed to restore stack guard page");
}
}

#[cfg(not(target_os = "windows"))]
fn reset_guard_page() {}

/// Stores trace message with backtrace.
#[derive(Debug)]
pub enum Trap {
@@ -330,6 +312,10 @@ pub enum Trap {
pc: usize,
/// Native stack backtrace at the time the trap occurred
backtrace: Backtrace,
/// An indicator for whether this may have been a trap generated from an
/// interrupt, used for switching what would otherwise be a stack
/// overflow trap into an interrupt trap.
maybe_interrupted: bool,
},

/// A trap raised from a wasm libcall
@@ -372,7 +358,11 @@ impl Trap {
/// returning them as a `Result`.
///
/// Highly unsafe since `closure` won't have any dtors run.
pub unsafe fn catch_traps<F>(vmctx: *mut VMContext, mut closure: F) -> Result<(), Trap>
pub unsafe fn catch_traps<F>(
vmctx: *mut VMContext,
max_wasm_stack: usize,
mut closure: F,
) -> Result<(), Trap>
where
F: FnMut(),
{
@@ -380,7 +370,7 @@ where
#[cfg(unix)]
setup_unix_sigaltstack()?;

return CallThreadState::new(vmctx).with(|cx| {
return CallThreadState::new(vmctx).with(max_wasm_stack, |cx| {
RegisterSetjmp(
cx.jmp_buf.as_ptr(),
call_closure::<F>,
@@ -401,7 +391,6 @@ where
pub struct CallThreadState {
unwind: Cell<UnwindReason>,
jmp_buf: Cell<*const u8>,
reset_guard_page: Cell<bool>,
prev: Option<*const CallThreadState>,
vmctx: *mut VMContext,
handling_trap: Cell<bool>,
@@ -421,15 +410,19 @@ impl CallThreadState {
unwind: Cell::new(UnwindReason::None),
vmctx,
jmp_buf: Cell::new(ptr::null()),
reset_guard_page: Cell::new(false),
prev: None,
handling_trap: Cell::new(false),
}
}

fn with(mut self, closure: impl FnOnce(&CallThreadState) -> i32) -> Result<(), Trap> {
fn with(
mut self,
max_wasm_stack: usize,
closure: impl FnOnce(&CallThreadState) -> i32,
) -> Result<(), Trap> {
tls::with(|prev| {
self.prev = prev.map(|p| p as *const _);
let _reset = self.update_stack_limit(max_wasm_stack)?;
let ret = tls::set(&self, || closure(&self));
match self.unwind.replace(UnwindReason::None) {
UnwindReason::None => {
@@ -443,7 +436,15 @@ impl CallThreadState {
UnwindReason::LibTrap(trap) => Err(trap),
UnwindReason::JitTrap { backtrace, pc } => {
debug_assert_eq!(ret, 0);
Err(Trap::Jit { pc, backtrace })
let maybe_interrupted = unsafe {
(*self.vmctx).instance().interrupts.stack_limit.load(SeqCst)
== wasmtime_environ::INTERRUPTED
};
Err(Trap::Jit {
pc,
backtrace,
maybe_interrupted,
})
}
UnwindReason::Panic(panic) => {
debug_assert_eq!(ret, 0);
@@ -453,6 +454,87 @@ impl CallThreadState {
})
}
/// Checks and/or initializes the wasm native call stack limit.
///
/// This function will inspect the current state of the stack and calling
/// context to determine which of three buckets we're in:
///
/// 1. We are the first wasm call on the stack. This means that we need to
/// set up a stack limit; if the native wasm stack pointer drops below that
/// limit a trap is forced. For now we simply reserve an arbitrary chunk of
/// bytes (1 MB from roughly the current native stack pointer). This logic
/// will likely get tweaked over time.
///
/// 2. We aren't the first wasm call on the stack. In this scenario the wasm
/// stack limit is already configured. In this wasm -> host -> wasm case we
/// assume that the native stack consumed by the host is accounted for in
/// the initial stack limit calculation, so in this scenario we do nothing.
///
/// 3. We were previously interrupted. In this case we consume the interrupt
/// here and return a trap, clearing the interrupt and allowing the next
/// wasm call to proceed.
///
/// The return value here is a trap for case 3, a noop destructor in case 2,
/// and a meaningful destructor in case 1.
///
/// For more information about interrupts and stack limits see
/// `crates/environ/src/cranelift.rs`.
///
/// Note that this function must be called with `self` on the stack, not the
/// heap/etc.
fn update_stack_limit(&self, max_wasm_stack: usize) -> Result<impl Drop + '_, Trap> {
// Make an "educated guess" to figure out where the wasm sp value should
// start trapping if it drops below.
let wasm_stack_limit = self as *const _ as usize - max_wasm_stack;

let interrupts = unsafe { &**(&*self.vmctx).instance().interrupts() };
let reset_stack_limit = match interrupts.stack_limit.compare_exchange(
usize::max_value(),
wasm_stack_limit,
SeqCst,
SeqCst,
) {
Ok(_) => {
// We're the first wasm on the stack so we've now reserved the
// `max_wasm_stack` bytes of native stack space for wasm.
// Nothing left to do here now except reset back when we're
// done.
true
}
Err(n) if n == wasmtime_environ::INTERRUPTED => {
// This means that an interrupt happened before we actually
// called this function, which means that we're now
// considered interrupted. Be sure to consume this interrupt
// as part of this process too.
interrupts.stack_limit.store(usize::max_value(), SeqCst);
return Err(Trap::Wasm {
trap_code: ir::TrapCode::Interrupt,
backtrace: Backtrace::new_unresolved(),
});
}
Err(_) => {
// The stack limit was already set by a previous wasm call
// on the stack. We leave that original limit in place and
// don't reset the stack limit when we're done.
false
}
};

struct Reset<'a>(bool, &'a AtomicUsize);

impl Drop for Reset<'_> {
fn drop(&mut self) {
if self.0 {
self.1.store(usize::max_value(), SeqCst);
}
}
}

Ok(Reset(reset_stack_limit, &interrupts.stack_limit))
}
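The three buckets above all hinge on a single `compare_exchange` against `stack_limit`. The following stand-alone restatement (using only a plain `AtomicUsize` and made-up addresses, not wasmtime's own types) shows how one atomic slot distinguishes "first wasm frame", "nested wasm frame", and "pending interrupt":

    use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};

    const INTERRUPTED: usize = usize::max_value() - 32 * 1024;

    // Stand-in for the logic of `update_stack_limit`: returns Ok(true) when
    // this call installed the limit (and must reset it later), Ok(false) when
    // a caller already did, and Err(..) when an interrupt was pending.
    fn enter_wasm(
        stack_limit: &AtomicUsize,
        native_sp: usize,
        max_wasm_stack: usize,
    ) -> Result<bool, &'static str> {
        let wasm_stack_limit = native_sp - max_wasm_stack;
        match stack_limit.compare_exchange(usize::max_value(), wasm_stack_limit, SeqCst, SeqCst) {
            // Case 1: first wasm frame on this stack.
            Ok(_) => Ok(true),
            // Case 3: an interrupt was requested before we got here; consume it.
            Err(n) if n == INTERRUPTED => {
                stack_limit.store(usize::max_value(), SeqCst);
                Err("wasm trap: interrupt")
            }
            // Case 2: a wasm caller further up the stack already set a limit.
            Err(_) => Ok(false),
        }
    }

    fn main() {
        let limit = AtomicUsize::new(usize::max_value());
        // First entry installs a limit (case 1).
        assert_eq!(enter_wasm(&limit, 0x4000_0000, 1 << 20), Ok(true));
        // A nested wasm -> host -> wasm entry leaves it alone (case 2).
        assert_eq!(enter_wasm(&limit, 0x3ff0_0000, 1 << 20), Ok(false));
        // After an interrupt is flagged, the next entry traps (case 3).
        limit.store(INTERRUPTED, SeqCst);
        assert_eq!(
            enter_wasm(&limit, 0x4000_0000, 1 << 20),
            Err("wasm trap: interrupt")
        );
    }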
fn any_instance(&self, func: impl Fn(&InstanceHandle) -> bool) -> bool {
unsafe {
if func(&InstanceHandle::from_vmctx(self.vmctx)) {
@@ -475,8 +557,6 @@ impl CallThreadState {
/// Trap handler using our thread-local state.
///
/// * `pc` - the program counter the trap happened at
/// * `reset_guard_page` - whether or not to reset the guard page,
/// currently Windows specific
/// * `call_handler` - a closure used to invoke the platform-specific
/// signal handler for each instance, if available.
///
@@ -492,7 +572,6 @@ impl CallThreadState {
fn handle_trap(
&self,
pc: *const u8,
reset_guard_page: bool,
call_handler: impl Fn(&SignalHandler) -> bool,
) -> *const u8 {
// If we hit a fault while handling a previous trap, that's quite bad,
@@ -532,7 +611,6 @@ impl CallThreadState {
return ptr::null();
}
let backtrace = Backtrace::new_unresolved();
self.reset_guard_page.set(reset_guard_page);
self.unwind.replace(UnwindReason::JitTrap {
backtrace,
pc: pc as usize,
@@ -542,14 +620,6 @@ impl CallThreadState {
}
}

impl Drop for CallThreadState {
fn drop(&mut self) {
if self.reset_guard_page.get() {
reset_guard_page();
}
}
}

// A private inner module for managing the TLS state that we require across
// calls in wasm. The WebAssembly code is called from C++ and then a trap may
// happen which requires us to read some contextual state to figure out what to
@@ -3,6 +3,7 @@

use crate::instance::Instance;
use std::any::Any;
use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
use std::{ptr, u32};
use wasmtime_environ::BuiltinFunctionIndex;

@@ -612,6 +613,52 @@ impl VMInvokeArgument {
}
}

/// Structure used to control interrupting wasm code, currently with only one
/// atomic flag internally used.
#[derive(Debug)]
#[repr(C)]
pub struct VMInterrupts {
/// Current stack limit of the wasm module.
///
/// This is used to control both stack overflow as well as interrupting wasm
/// modules. For more information see `crates/environ/src/cranelift.rs`.
pub stack_limit: AtomicUsize,
}

impl VMInterrupts {
/// Flag that an interrupt should occur
pub fn interrupt(&self) {
self.stack_limit
.store(wasmtime_environ::INTERRUPTED, SeqCst);
}
}

impl Default for VMInterrupts {
fn default() -> VMInterrupts {
VMInterrupts {
stack_limit: AtomicUsize::new(usize::max_value()),
}
}
}

#[cfg(test)]
mod test_vminterrupts {
use super::VMInterrupts;
use memoffset::offset_of;
use std::mem::size_of;
use wasmtime_environ::{Module, VMOffsets};

#[test]
fn check_vminterrupts_interrupted_offset() {
let module = Module::new();
let offsets = VMOffsets::new(size_of::<*mut u8>() as u8, &module.local);
assert_eq!(
offset_of!(VMInterrupts, stack_limit),
usize::from(offsets.vminterrupts_stack_limit())
);
}
}
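As the `Tunables` docs earlier in this change suggest, the `InterruptHandle` exposed by the `wasmtime` crate is built on top of this `VMInterrupts` structure. A minimal sketch (assuming `wasmtime-runtime` is available as a dependency) of a separate thread flagging an interrupt through a shared `VMInterrupts`:

    use std::sync::Arc;
    use std::thread;
    use std::time::Duration;
    use wasmtime_runtime::VMInterrupts;

    fn main() {
        // The same `Arc<VMInterrupts>` that the JIT compiler hands to each
        // instance it creates.
        let interrupts = Arc::new(VMInterrupts::default());

        let flag = interrupts.clone();
        thread::spawn(move || {
            thread::sleep(Duration::from_secs(1));
            // Stores the `INTERRUPTED` sentinel into `stack_limit`; running
            // wasm observes it at its next stack/interrupt check and traps.
            flag.interrupt();
        });

        // ... the current thread would call into wasm here ...
    }

Embedders should normally reach for `Store::interrupt_handle()` (shown in the examples added later in this change) rather than touching `VMInterrupts` directly.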
/// The VM "context", which is pointed to by the `vmctx` arg in Cranelift.
/// This has information about globals, memories, tables, and other runtime
/// state associated with the current instance.
@@ -401,7 +401,7 @@ impl<'a, T> GuestPtr<'a, [T]> {
/// trait documentation.
///
/// For safety against overlapping mutable borrows, the user must use the
/// same `GuestBorrows` to create all *mut str or *mut [T] that are alive
/// same `GuestBorrows` to create all `*mut str` or `*mut [T]` that are alive
/// at the same time.
pub fn as_raw(&self, bc: &mut GuestBorrows) -> Result<*mut [T], GuestError>
where
@@ -503,8 +503,8 @@ impl<'a> GuestPtr<'a, str> {
/// trait documentation.
///
/// For safety against overlapping mutable borrows, the user must use the
/// same `GuestBorrows` to create all *mut str or *mut [T] that are alive
/// at the same time.
/// same `GuestBorrows` to create all `*mut str` or `*mut [T]` that are
/// alive at the same time.
pub fn as_raw(&self, bc: &mut GuestBorrows) -> Result<*mut str, GuestError> {
let ptr = self
.mem
examples/interrupt.c (new file, 141 lines)
@@ -0,0 +1,141 @@
/*
Example of instantiating a WebAssembly module and invoking its exported
function.

You can compile and run this example on Linux with:

cargo build --release -p wasmtime
cc examples/interrupt.c \
-I crates/c-api/include \
-I crates/c-api/wasm-c-api/include \
target/release/libwasmtime.a \
-lpthread -ldl -lm \
-o interrupt
./interrupt

Note that on Windows and macOS the command will be similar, but you'll need
to tweak the `-lpthread` and such annotations as well as the name of the
`libwasmtime.a` file on Windows.
*/

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <wasm.h>
#include <wasmtime.h>

#ifdef _WIN32
static void spawn_interrupt(wasmtime_interrupt_handle_t *handle) {
wasmtime_interrupt_handle_interrupt(handle);
wasmtime_interrupt_handle_delete(handle);
}
#else
#include <pthread.h>
#include <time.h>

static void* helper(void *_handle) {
wasmtime_interrupt_handle_t *handle = _handle;
struct timespec sleep_dur;
sleep_dur.tv_sec = 1;
sleep_dur.tv_nsec = 0;
nanosleep(&sleep_dur, NULL);
printf("Sending an interrupt\n");
wasmtime_interrupt_handle_interrupt(handle);
wasmtime_interrupt_handle_delete(handle);
return 0;
}

static void spawn_interrupt(wasmtime_interrupt_handle_t *handle) {
pthread_t child;
int rc = pthread_create(&child, NULL, helper, handle);
assert(rc == 0);
}
#endif

static void exit_with_error(const char *message, wasmtime_error_t *error, wasm_trap_t *trap);

int main() {
// Create a `wasm_store_t` with interrupts enabled
wasm_config_t *config = wasm_config_new();
assert(config != NULL);
wasmtime_config_interruptable_set(config, true);
wasm_engine_t *engine = wasm_engine_new_with_config(config);
assert(engine != NULL);
wasm_store_t *store = wasm_store_new(engine);
assert(store != NULL);

// Create our interrupt handle we'll use later
wasmtime_interrupt_handle_t *handle = wasmtime_interrupt_handle_new(store);
assert(handle != NULL);

// Read our input file, which in this case is a wasm text file.
FILE* file = fopen("examples/interrupt.wat", "r");
assert(file != NULL);
fseek(file, 0L, SEEK_END);
size_t file_size = ftell(file);
fseek(file, 0L, SEEK_SET);
wasm_byte_vec_t wat;
wasm_byte_vec_new_uninitialized(&wat, file_size);
assert(fread(wat.data, file_size, 1, file) == 1);
fclose(file);

// Parse the wat into the binary wasm format
wasm_byte_vec_t wasm;
wasmtime_error_t *error = wasmtime_wat2wasm(&wat, &wasm);
if (error != NULL)
exit_with_error("failed to parse wat", error, NULL);
wasm_byte_vec_delete(&wat);

// Now that we've got our binary WebAssembly we can compile our module.
wasm_module_t *module = NULL;
wasm_trap_t *trap = NULL;
wasm_instance_t *instance = NULL;
error = wasmtime_module_new(store, &wasm, &module);
wasm_byte_vec_delete(&wasm);
if (error != NULL)
exit_with_error("failed to compile module", error, NULL);
error = wasmtime_instance_new(module, NULL, 0, &instance, &trap);
if (instance == NULL)
exit_with_error("failed to instantiate", error, trap);

// Lookup our `run` export function
wasm_extern_vec_t externs;
wasm_instance_exports(instance, &externs);
assert(externs.size == 1);
wasm_func_t *run = wasm_extern_as_func(externs.data[0]);
assert(run != NULL);

// Spawn a thread to send us an interrupt after a period of time.
spawn_interrupt(handle);

// And call it!
printf("Entering infinite loop...\n");
error = wasmtime_func_call(run, NULL, 0, NULL, 0, &trap);
assert(error == NULL);
assert(trap != NULL);
printf("Got a trap!...\n");

// `trap` can be inspected here to see the trap message has an interrupt in it

wasm_trap_delete(trap);
wasm_extern_vec_delete(&externs);
wasm_instance_delete(instance);
wasm_module_delete(module);
wasm_store_delete(store);
wasm_engine_delete(engine);
return 0;
}

static void exit_with_error(const char *message, wasmtime_error_t *error, wasm_trap_t *trap) {
fprintf(stderr, "error: %s\n", message);
wasm_byte_vec_t error_message;
if (error != NULL) {
wasmtime_error_message(error, &error_message);
wasmtime_error_delete(error);
} else {
wasm_trap_message(trap, &error_message);
wasm_trap_delete(trap);
}
fprintf(stderr, "%.*s\n", (int) error_message.size, error_message.data);
wasm_byte_vec_delete(&error_message);
exit(1);
}
examples/interrupt.rs (new file, 38 lines)
@@ -0,0 +1,38 @@
//! Small example of how you can interrupt the execution of a wasm module to
//! ensure that it doesn't run for too long.

// You can execute this example with `cargo run --example interrupt`

use anyhow::Result;
use wasmtime::*;

fn main() -> Result<()> {
// Enable interruptable code via `Config` and then create an interrupt
// handle which we'll use later to interrupt running code.
let engine = Engine::new(Config::new().interruptable(true));
let store = Store::new(&engine);
let interrupt_handle = store.interrupt_handle()?;

// Compile and instantiate a small example with an infinite loop.
let module = Module::from_file(&store, "examples/interrupt.wat")?;
let instance = Instance::new(&module, &[])?;
let run = instance
.get_func("run")
.ok_or(anyhow::format_err!("failed to find `run` function export"))?
.get0::<()>()?;

// Spin up a thread to send us an interrupt in a second
std::thread::spawn(move || {
std::thread::sleep(std::time::Duration::from_secs(1));
println!("Interrupting!");
interrupt_handle.interrupt();
});

println!("Entering infinite loop ...");
let trap = run().unwrap_err();

println!("trap received...");
assert!(trap.message().contains("wasm trap: interrupt"));

Ok(())
}
examples/interrupt.wat (new file, 6 lines)
@@ -0,0 +1,6 @@
(module
(func (export "run")
(loop
br 0)
)
)
@@ -2,6 +2,8 @@

use crate::{init_file_per_thread_logger, CommonOptions};
use anyhow::{bail, Context as _, Result};
use std::thread;
use std::time::Duration;
use std::{
ffi::{OsStr, OsString},
fs::File,
@@ -39,6 +41,16 @@ fn parse_map_dirs(s: &str) -> Result<(String, String)> {
Ok((parts[0].into(), parts[1].into()))
}

fn parse_dur(s: &str) -> Result<Duration> {
// assume an integer without a unit specified is a number of seconds ...
if let Ok(val) = s.parse() {
return Ok(Duration::from_secs(val));
}
// ... otherwise try to parse it with units such as `3s` or `300ms`
let dur = humantime::parse_duration(s)?;
Ok(dur)
}
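To make the accepted `--wasm-timeout` spellings concrete, here is a small self-contained restatement of `parse_dur` (a stand-in copy, since the real helper is private to the CLI; it assumes the `anyhow` and `humantime` crates are available, as they are for the `wasmtime` CLI):

    use anyhow::Result;
    use std::time::Duration;

    // Stand-in for `parse_dur` above: bare integers are seconds, anything
    // else is handed to humantime for unit-bearing forms like `3s` or `300ms`.
    fn parse_dur(s: &str) -> Result<Duration> {
        if let Ok(val) = s.parse() {
            return Ok(Duration::from_secs(val));
        }
        Ok(humantime::parse_duration(s)?)
    }

    fn main() -> Result<()> {
        assert_eq!(parse_dur("2")?, Duration::from_secs(2));
        assert_eq!(parse_dur("2s")?, Duration::from_secs(2));
        assert_eq!(parse_dur("300ms")?, Duration::from_millis(300));
        Ok(())
    }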
/// Runs a WebAssembly module
#[derive(StructOpt)]
#[structopt(name = "run", setting = AppSettings::TrailingVarArg)]
@@ -80,6 +92,14 @@ pub struct RunCommand {
)]
preloads: Vec<PathBuf>,

/// Maximum execution time of wasm code before timing out (1, 2s, 100ms, etc)
#[structopt(
long = "wasm-timeout",
value_name = "TIME",
parse(try_from_str = parse_dur),
)]
wasm_timeout: Option<Duration>,

// NOTE: this must come last for trailing varargs
/// The arguments to pass to the module
#[structopt(value_name = "ARGS")]
@@ -96,7 +116,10 @@ impl RunCommand {
pretty_env_logger::init();
}

let config = self.common.config()?;
let mut config = self.common.config()?;
if self.wasm_timeout.is_some() {
config.interruptable(true);
}
let engine = Engine::new(&config);
let store = Store::new(&engine);

@@ -225,6 +248,13 @@ impl RunCommand {
}

fn handle_module(&self, store: &Store, module_registry: &ModuleRegistry) -> Result<()> {
if let Some(timeout) = self.wasm_timeout {
let handle = store.interrupt_handle()?;
thread::spawn(move || {
thread::sleep(timeout);
handle.interrupt();
});
}
let instance = Self::instantiate_module(store, module_registry, &self.module)?;

// If a function to invoke was given, invoke it.
@@ -122,3 +122,45 @@ fn hello_wasi_snapshot1() -> Result<()> {
assert_eq!(stdout, "Hello, world!\n");
Ok(())
}

#[test]
fn timeout_in_start() -> Result<()> {
let wasm = build_wasm("tests/wasm/iloop-start.wat")?;
let output = run_wasmtime_for_output(&[
"run",
wasm.path().to_str().unwrap(),
"--wasm-timeout",
"1ms",
"--disable-cache",
])?;
assert!(!output.status.success());
assert_eq!(output.stdout, b"");
let stderr = String::from_utf8_lossy(&output.stderr);
assert!(
stderr.contains("wasm trap: interrupt"),
"bad stderr: {}",
stderr
);
Ok(())
}

#[test]
fn timeout_in_invoke() -> Result<()> {
let wasm = build_wasm("tests/wasm/iloop-invoke.wat")?;
let output = run_wasmtime_for_output(&[
"run",
wasm.path().to_str().unwrap(),
"--wasm-timeout",
"1ms",
"--disable-cache",
])?;
assert!(!output.status.success());
assert_eq!(output.stdout, b"");
let stderr = String::from_utf8_lossy(&output.stderr);
assert!(
stderr.contains("wasm trap: interrupt"),
"bad stderr: {}",
stderr
);
Ok(())
}
@@ -24,6 +24,15 @@ fn segfault() -> ! {
}
}

fn overrun_the_stack() -> usize {
let mut a = [0u8; 1024];
if a.as_mut_ptr() as usize == 1 {
return 1;
} else {
return a.as_mut_ptr() as usize + overrun_the_stack();
}
}

fn main() {
let tests: &[(&str, fn())] = &[
("normal segfault", || segfault()),
@@ -33,6 +42,12 @@ fn main() {
let _instance = Instance::new(&module, &[]).unwrap();
segfault();
}),
("make instance then overrun the stack", || {
let store = Store::default();
let module = Module::new(&store, "(module)").unwrap();
let _instance = Instance::new(&module, &[]).unwrap();
println!("stack overrun: {}", overrun_the_stack());
}),
];
match env::var(VAR_NAME) {
Ok(s) => {
@@ -76,6 +91,12 @@ fn runtest(name: &str) {
name,
desc
);
} else if name.contains("overrun the stack") {
assert!(
stderr.contains("thread 'main' has overflowed its stack"),
"bad stderr: {}",
stderr
);
} else {
panic!("\n\nexpected a segfault on `{}`\n{}\n\n", name, desc);
}
tests/wasm/iloop-invoke.wat (new file, 2 lines)
@@ -0,0 +1,2 @@
(module
(func (export "_start") (loop br 0)))

tests/wasm/iloop-start.wat (new file, 3 lines)
@@ -0,0 +1,3 @@
(module
(start 0)
(func (loop br 0)))