Implement the memory64 proposal in Wasmtime (#3153)
* Implement the memory64 proposal in Wasmtime

  This commit implements the WebAssembly [memory64 proposal][proposal] in both Wasmtime and Cranelift. In terms of work done, Cranelift ended up needing very little here since most of it was already prepared for 64-bit memories at one point or another. Most of the work in Wasmtime is refactoring, changing a bunch of `u32` values to something else. A number of internal and public interfaces change as a result of this commit, for example (a short embedding-side usage sketch follows at the end of this message):

  * Accessors on `wasmtime::Memory` that work with pages now all return `u64` unconditionally rather than `u32`. This makes it possible to accommodate 64-bit memories with this API, but we may also want to consider `usize` here at some point since the host can't grow past `usize`-limited pages anyway.

  * The `wasmtime::Limits` structure is removed in favor of minimum/maximum methods on table/memory types.

  * Many libcall intrinsics called by JIT code now unconditionally take `u64` arguments instead of `u32`. Return values are `usize`, however, since a successful return value is always bounded by host memory while arguments can come from any guest.

  * The `heap_addr` CLIF instruction now takes a 64-bit offset argument instead of a 32-bit one. It turns out that the legalization of `heap_addr` already worked with 64-bit offsets, so this change was fairly trivial to make.

  * The runtime implementation of mmap-based linear memories has changed to largely work in `usize` quantities in its API and in bytes instead of pages. This simplifies various aspects and reflects that mmap-based memories are always bounded by `usize`, since that's what the host uses to address things, and most calculations care about bytes rather than pages except at the very edge where we're going to/from wasm.

  Overall I've tried to minimize the number of `as` casts, using checked `try_from` and checked arithmetic with either error handling or explicit `unwrap()` calls to tell us about bugs in the future. Most locations have relatively obvious things to do with various implications on various hosts, and I think they should all be roughly of the right shape, but time will tell. I mostly relied on the compiler complaining that various types weren't aligned to figure out type-casting, and I manually audited some of the more obvious locations. I suspect we have a number of hidden locations that will panic on 32-bit hosts if 64-bit modules try to run there, but otherwise I think we should be generally ok (famous last words). In any case I wouldn't want to enable this by default until we've fuzzed it for some time.

  In terms of the actual underlying implementation, no one should expect memory64 to be all that fast. Right now it's implemented with "dynamic" heaps, which has a few consequences:

  * All memory accesses are bounds-checked. I'm not sure how aggressively Cranelift tries to optimize out bounds checks, but I suspect not a ton since we haven't stressed this much historically.

  * Heaps are always precisely sized. This means that every call to `memory.grow` will incur a `memcpy` of memory from the old heap to the new. We probably want to at least look into `mremap` on Linux and otherwise try to implement schemes where dynamic heaps have some reserved pages to grow into, to help amortize the cost of `memory.grow`.

  The memory64 spec test suite is scheduled to now run on CI, but as with all the other spec test suites it's really not all that comprehensive. I've tried adding more tests for basic things as I've had to implement guards for them, but I wouldn't really consider the testing adequate from just this PR itself. I did try to take care in one test to actually allocate a 4gb+ heap and then avoid running that in the pooling allocator or in emulation, because otherwise that may fail or take excessively long.

  [proposal]: https://github.com/WebAssembly/memory64/blob/master/proposals/memory64/Overview.md

* Fix some tests

* More test fixes

* Fix wasmtime tests

* Fix doctests

* Revert to 32-bit immediate offsets in `heap_addr`

  This commit updates the generation of addresses in wasm code to always use 32-bit offsets for `heap_addr`, and if the calculated offset is bigger than 32 bits we emit a manual add with an overflow check.

* Disable memory64 for spectest fuzzing

* Fix wrong offset being added to heap addr

* More comments!

* Clarify bytes/pages
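As a rough embedding-side sketch of the API shape after this change (assuming the usual `wasmtime` + `anyhow` setup; names follow the constructors and accessors touched in this PR, but exact signatures may differ):

```rust
use wasmtime::{Config, Engine, Memory, MemoryType, Store};

fn main() -> anyhow::Result<()> {
    // memory64 is not enabled by default; it has to be turned on explicitly.
    let mut config = Config::new();
    config.wasm_memory64(true);
    let engine = Engine::new(&config)?;
    let mut store = Store::new(&engine, ());

    // A host-created 64-bit memory: minimum of 1 page, no maximum.
    let memory = Memory::new(&mut store, MemoryType::new64(1, None))?;

    // Page-based accessors are u64 regardless of whether the memory is
    // 32-bit or 64-bit, and `grow` takes a u64 delta.
    let before: u64 = memory.size(&store);
    memory.grow(&mut store, 1)?;
    assert_eq!(memory.size(&store), before + 1);
    Ok(())
}
```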
build.rs (8 changed lines):

@@ -34,6 +34,7 @@ fn main() -> anyhow::Result<()> {
         test_directory_module(out, "tests/misc_testsuite/module-linking", strategy)?;
         test_directory_module(out, "tests/misc_testsuite/simd", strategy)?;
         test_directory_module(out, "tests/misc_testsuite/threads", strategy)?;
+        test_directory_module(out, "tests/misc_testsuite/memory64", strategy)?;
         Ok(())
     })?;

@@ -53,6 +54,7 @@ fn main() -> anyhow::Result<()> {
             "tests/spec_testsuite/proposals/bulk-memory-operations",
             strategy,
         )?;
+        test_directory_module(out, "tests/spec_testsuite/proposals/memory64", strategy)?;
     } else {
         println!(
             "cargo:warning=The spec testsuite is disabled. To enable, run `git submodule \
@@ -157,7 +159,7 @@ fn write_testsuite_tests(

     writeln!(out, "#[test]")?;
     // Ignore when using QEMU for running tests (limited memory).
-    if ignore(testsuite, &testname, strategy) || (pooling && platform_is_emulated()) {
+    if ignore(testsuite, &testname, strategy) {
         writeln!(out, "#[ignore]")?;
     }
@@ -213,7 +215,3 @@ fn ignore(testsuite: &str, testname: &str, strategy: &str) -> bool {
 fn platform_is_s390x() -> bool {
     env::var("CARGO_CFG_TARGET_ARCH").unwrap() == "s390x"
 }
-
-fn platform_is_emulated() -> bool {
-    env::var("WASMTIME_TEST_NO_HOG_MEMORY").unwrap_or_default() == "1"
-}
@@ -291,6 +291,12 @@ impl From<Uimm32> for u32 {
     }
 }

+impl From<Uimm32> for u64 {
+    fn from(val: Uimm32) -> u64 {
+        val.0.into()
+    }
+}
+
 impl From<Uimm32> for i64 {
     fn from(val: Uimm32) -> i64 {
         i64::from(val.0)
@@ -25,7 +25,7 @@ pub fn expand_heap_addr(
             imm,
         } => {
             debug_assert_eq!(opcode, ir::Opcode::HeapAddr);
-            (heap, arg, imm.into())
+            (heap, arg, u64::from(imm))
         }
         _ => panic!("Wanted heap_addr: {}", func.dfg.display_inst(inst, None)),
     };
@@ -53,11 +53,10 @@ fn dynamic_addr(
     inst: ir::Inst,
     heap: ir::Heap,
     offset: ir::Value,
-    access_size: u32,
+    access_size: u64,
     bound_gv: ir::GlobalValue,
     func: &mut ir::Function,
 ) {
-    let access_size = u64::from(access_size);
     let offset_ty = func.dfg.value_type(offset);
     let addr_ty = func.dfg.value_type(func.dfg.first_result(inst));
     let min_size = func.heaps[heap].min_size.into();
@@ -113,12 +112,11 @@ fn static_addr(
     inst: ir::Inst,
     heap: ir::Heap,
     mut offset: ir::Value,
-    access_size: u32,
+    access_size: u64,
     bound: u64,
     func: &mut ir::Function,
     cfg: &mut ControlFlowGraph,
 ) {
-    let access_size = u64::from(access_size);
     let offset_ty = func.dfg.value_type(offset);
     let addr_ty = func.dfg.value_type(func.dfg.first_result(inst));
     let mut pos = FuncCursor::new(func).at_inst(inst);
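Since memory64 heaps are currently always "dynamic", each access goes through a bounds check; conceptually the legalized check boils down to arithmetic like the following standalone model (not the actual Cranelift code, just what it computes):

```rust
/// Model of the dynamic-heap bounds check: an access at `index` covering
/// `access_size` bytes (offset plus width) is allowed only if its end does not
/// exceed the heap's current bound, with overflow treated as out of bounds.
fn dynamic_heap_access_ok(index: u64, access_size: u64, heap_bound: u64) -> bool {
    match index.checked_add(access_size) {
        Some(end) => end <= heap_bound,
        None => false,
    }
}

fn main() {
    let bound = 64 * 1024; // one wasm page
    assert!(dynamic_heap_access_ok(0, 4, bound));
    assert!(!dynamic_heap_access_ok(bound - 2, 4, bound));
    assert!(!dynamic_heap_access_ok(u64::MAX, 4, bound));
}
```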
@@ -2164,10 +2164,6 @@ fn prepare_addr<FE: FuncEnvironment + ?Sized>(
     environ: &mut FE,
 ) -> WasmResult<(MemFlags, Value, Offset32)> {
     let addr = state.pop1();
-    // This function will need updates for 64-bit memories
-    debug_assert_eq!(builder.func.dfg.value_type(addr), I32);
-    let offset = u32::try_from(memarg.offset).unwrap();
-
     let heap = state.get_heap(builder.func, memarg.memory, environ)?;
     let offset_guard_size: u64 = builder.func.heaps[heap].offset_guard_size.into();

@@ -2176,13 +2172,19 @@ fn prepare_addr<FE: FuncEnvironment + ?Sized>(
     // segfaults) to generate traps since that means we don't have to bounds
     // check anything explicitly.
     //
-    // If we don't have a guard page of unmapped memory, though, then we can't
-    // rely on this trapping behavior through segfaults. Instead we need to
-    // bounds-check the entire memory access here which is everything from
+    // (1) If we don't have a guard page of unmapped memory, though, then we
+    // can't rely on this trapping behavior through segfaults. Instead we need
+    // to bounds-check the entire memory access here which is everything from
     // `addr32 + offset` to `addr32 + offset + width` (not inclusive). In this
-    // scenario our adjusted offset that we're checking is `offset + width`.
+    // scenario our adjusted offset that we're checking is `memarg.offset +
+    // access_size`. Note that we do saturating arithmetic here to avoid
+    // overflow. The addition here is in the 64-bit space, which means that
+    // we'll never overflow for 32-bit wasm but for 64-bit this is an issue. If
+    // our effective offset is u64::MAX though then it's impossible for
+    // that to actually be a valid offset because otherwise the wasm linear
+    // memory would take all of the host memory!
     //
-    // If we have a guard page, however, then we can perform a further
+    // (2) If we have a guard page, however, then we can perform a further
     // optimization of the generated code by only checking multiples of the
     // offset-guard size to be more CSE-friendly. Knowing that we have at least
     // 1 page of a guard page we're then able to disregard the `width` since we
@@ -2215,32 +2217,104 @@ fn prepare_addr<FE: FuncEnvironment + ?Sized>(
     // in-bounds or will hit the guard page, meaning we'll get the desired
     // semantics we want.
     //
-    // As one final comment on the bits with the guard size here, another goal
-    // of this is to hit an optimization in `heap_addr` where if the heap size
-    // minus the offset is >= 4GB then bounds checks are 100% eliminated. This
-    // means that with huge guard regions (e.g. our 2GB default) most adjusted
-    // offsets we're checking here are zero. This means that we'll hit the fast
-    // path and emit zero conditional traps for bounds checks
+    // ---
+    //
+    // With all that in mind remember that the goal is to bounds check as few
+    // things as possible. To facilitate this the "fast path" is expected to be
+    // hit like so:
+    //
+    // * For wasm32, wasmtime defaults to 4gb "static" memories with 2gb guard
+    //   regions. This means our `adjusted_offset` is 1 for all offsets <=2gb.
+    //   This hits the optimized case for `heap_addr` on static memories 4gb in
+    //   size in cranelift's legalization of `heap_addr`, eliding the bounds
+    //   check entirely.
+    //
+    // * For wasm64 offsets <=2gb will generate a single `heap_addr`
+    //   instruction, but at this time all heaps are "dynamic" which means that
+    //   a single bounds check is forced. Ideally we'd do better here, but
+    //   that's the current state of affairs.
+    //
+    // Basically we assume that most configurations have a guard page and most
+    // offsets in `memarg` are <=2gb, which means we get the fast path of one
+    // `heap_addr` instruction plus a hardcoded i32-offset in memory-related
+    // instructions.
     let adjusted_offset = if offset_guard_size == 0 {
-        u64::from(offset) + u64::from(access_size)
+        // Why saturating? see (1) above
+        memarg.offset.saturating_add(u64::from(access_size))
     } else {
+        // Why is there rounding here? see (2) above
         assert!(access_size < 1024);
-        cmp::max(u64::from(offset) / offset_guard_size * offset_guard_size, 1)
+        cmp::max(memarg.offset / offset_guard_size * offset_guard_size, 1)
     };
-    debug_assert!(adjusted_offset > 0); // want to bounds check at least 1 byte
-    let check_size = u32::try_from(adjusted_offset).unwrap_or(u32::MAX);
-    let base = builder
-        .ins()
-        .heap_addr(environ.pointer_type(), heap, addr, check_size);
-
-    // Native load/store instructions take a signed `Offset32` immediate, so adjust the base
-    // pointer if necessary.
-    let (addr, offset) = if offset > i32::MAX as u32 {
-        // Offset doesn't fit in the load/store instruction.
-        let adj = builder.ins().iadd_imm(base, i64::from(i32::MAX) + 1);
-        (adj, (offset - (i32::MAX as u32 + 1)) as i32)
-    } else {
-        (base, offset as i32)
+    debug_assert!(adjusted_offset > 0); // want to bounds check at least 1 byte
+    let (addr, offset) = match u32::try_from(adjusted_offset) {
+        // If our adjusted offset fits within a u32, then we can place the
+        // entire offset into the offset of the `heap_addr` instruction. After
+        // the `heap_addr` instruction, though, we need to factor the offset
+        // into the returned address. This is either an immediate to later
+        // memory instructions if the offset further fits within `i32`, or a
+        // manual add instruction otherwise.
+        //
+        // Note that native instructions take a signed offset hence the switch
+        // to i32. Note also the lack of overflow checking in the offset
+        // addition, which should be ok since if `heap_addr` passed we're
+        // guaranteed that this won't overflow.
+        Ok(adjusted_offset) => {
+            let base = builder
                .ins()
                .heap_addr(environ.pointer_type(), heap, addr, adjusted_offset);
            match i32::try_from(memarg.offset) {
                Ok(val) => (base, val),
                Err(_) => {
                    let adj = builder.ins().iadd_imm(base, memarg.offset as i64);
                    (adj, 0)
                }
            }
        }

        // If the adjusted offset doesn't fit within a u32, then we can't pass
        // the adjusted size to `heap_addr` raw.
        //
        // One reasonable question you might ask is "why not?". There's no
        // fundamental reason why `heap_addr` *must* take a 32-bit offset. The
        // reason this isn't done, though, is that blindly changing the offset
        // to a 64-bit offset increases the size of the `InstructionData` enum
        // in cranelift by 8 bytes (16 to 24). This can have significant
        // performance implications so the conclusion when this was written was
        // that we shouldn't do that.
        //
        // Without the ability to put the whole offset into the `heap_addr`
        // instruction we need to fold the offset into the address itself with
        // an unsigned addition. In doing so though we need to check for
        // overflow because that would mean the address is out-of-bounds (wasm
        // bounds checks happen on the effective 33 or 65 bit address once the
        // offset is factored in).
        //
        // Once we have the effective address, offset already folded in, then
        // `heap_addr` is used to verify that the address is indeed in-bounds.
        // The access size of the `heap_addr` is what we were passed in from
        // above.
        //
        // Note that this is generating what's likely to be at least two
        // branches, one for the overflow and one for the bounds check itself.
        // For now though that should hopefully be ok since 4gb+ offsets are
        // relatively odd/rare. In the future if needed we can look into
        // optimizing this more.
        Err(_) => {
            let index_type = builder.func.heaps[heap].index_type;
            let offset = builder.ins().iconst(index_type, memarg.offset as i64);
            let (addr, overflow) = builder.ins().iadd_ifcout(addr, offset);
            builder.ins().trapif(
                environ.unsigned_add_overflow_condition(),
                overflow,
                ir::TrapCode::HeapOutOfBounds,
            );
            let base = builder
                .ins()
                .heap_addr(environ.pointer_type(), heap, addr, access_size);
            (base, 0)
        }
     };

     // Note that we don't set `is_aligned` here, even if the load instruction's
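The offset handling above can be condensed into a small standalone mirror of the same arithmetic (same saturating and rounding rules as the translator code; only the free-function packaging is invented here):

```rust
use std::convert::TryFrom;

/// Mirror of `adjusted_offset` above: with no guard pages the whole
/// `offset + access_size` range must be checked (saturating to avoid
/// overflow); with guard pages only a multiple of the guard size is checked,
/// clamped to at least 1 byte, so the value is more CSE-friendly.
fn adjusted_offset(memarg_offset: u64, access_size: u64, offset_guard_size: u64) -> u64 {
    if offset_guard_size == 0 {
        memarg_offset.saturating_add(access_size)
    } else {
        assert!(access_size < 1024);
        std::cmp::max(memarg_offset / offset_guard_size * offset_guard_size, 1)
    }
}

/// If the adjusted offset fits in a u32 it can ride along in `heap_addr`
/// directly; otherwise the translator emits an explicit 64-bit add plus an
/// overflow trap before the bounds check (the slow path above).
fn fits_in_heap_addr_offset(adjusted: u64) -> bool {
    u32::try_from(adjusted).is_ok()
}

fn main() {
    assert_eq!(adjusted_offset(0, 4, 0), 4);
    assert_eq!(adjusted_offset(100, 4, 0x1000), 1); // rounds down, clamped to 1
    assert!(!fits_in_heap_addr_offset(u64::from(u32::MAX) + 1));
}
```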
@@ -652,6 +652,10 @@ impl<'dummy_environment> FuncEnvironment for DummyFuncEnvironment<'dummy_environ
     ) -> WasmResult<ir::Value> {
         Ok(pos.ins().iconst(I32, 0))
     }
+
+    fn unsigned_add_overflow_condition(&self) -> ir::condcodes::IntCC {
+        unimplemented!()
+    }
 }

 impl TargetEnvironment for DummyEnvironment {
@@ -792,7 +796,7 @@ impl<'data> ModuleEnvironment<'data> for DummyEnvironment {
         &mut self,
         _memory_index: MemoryIndex,
         _base: Option<GlobalIndex>,
-        _offset: u32,
+        _offset: u64,
         _data: &'data [u8],
     ) -> WasmResult<()> {
         // We do nothing

@@ -697,6 +697,10 @@ pub trait FuncEnvironment: TargetEnvironment {
     ) -> WasmResult<()> {
         Ok(())
     }
+
+    /// Returns the target ISA's condition to check for unsigned addition
+    /// overflowing.
+    fn unsigned_add_overflow_condition(&self) -> ir::condcodes::IntCC;
 }

 /// An object satisfying the `ModuleEnvironment` trait can be passed as argument to the
@@ -995,7 +999,7 @@ pub trait ModuleEnvironment<'data>: TargetEnvironment {
         &mut self,
         memory_index: MemoryIndex,
         base: Option<GlobalIndex>,
-        offset: u32,
+        offset: u64,
         data: &'data [u8],
     ) -> WasmResult<()>;
@@ -54,11 +54,11 @@ fn entity_type(
 }

 fn memory(ty: MemoryType) -> Memory {
-    assert!(!ty.memory64);
     Memory {
-        minimum: ty.initial.try_into().unwrap(),
-        maximum: ty.maximum.map(|i| i.try_into().unwrap()),
+        minimum: ty.initial,
+        maximum: ty.maximum,
         shared: ty.shared,
+        memory64: ty.memory64,
     }
 }

@@ -420,7 +420,8 @@ pub fn parse_data_section<'data>(
         } => {
             let mut init_expr_reader = init_expr.get_binary_reader();
             let (base, offset) = match init_expr_reader.read_operator()? {
-                Operator::I32Const { value } => (None, value as u32),
+                Operator::I32Const { value } => (None, value as u64),
+                Operator::I64Const { value } => (None, value as u64),
                 Operator::GlobalGet { global_index } => {
                     (Some(GlobalIndex::from_u32(global_index)), 0)
                 }
@@ -226,11 +226,13 @@ pub enum TableElementType {
 #[cfg_attr(feature = "enable-serde", derive(Serialize, Deserialize))]
 pub struct Memory {
     /// The minimum number of pages in the memory.
-    pub minimum: u32,
+    pub minimum: u64,
     /// The maximum number of pages in the memory.
-    pub maximum: Option<u32>,
+    pub maximum: Option<u64>,
     /// Whether the memory may be shared between multiple threads.
     pub shared: bool,
+    /// Whether or not this is a 64-bit memory
+    pub memory64: bool,
 }

 /// WebAssembly event.
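For reference, a 64-bit memory described by the struct above looks roughly like this (a local mirror of the type purely for illustration; the real definition lives in cranelift-wasm):

```rust
/// Illustrative mirror of the `Memory` descriptor above.
#[allow(dead_code)]
struct Memory {
    minimum: u64,
    maximum: Option<u64>,
    shared: bool,
    memory64: bool,
}

fn main() {
    // A 5-page 64-bit linear memory with no declared maximum.
    let mem = Memory { minimum: 5, maximum: None, shared: false, memory64: true };
    assert!(mem.memory64 && mem.maximum.is_none());
}
```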
@@ -594,11 +594,17 @@
  *
  * The caller is responsible for deallocating the returned type.
  *
+ * For compatibility with memory64 it's recommended to use
+ * #wasmtime_memorytype_new instead.
+ *
  * \fn const wasm_limits_t* wasm_memorytype_limits(const wasm_memorytype_t *);
  * \brief Returns the limits of this memory.
  *
  * The returned #wasm_limits_t is owned by the #wasm_memorytype_t parameter, the
  * caller should not deallocate it.
+ *
+ * For compatibility with memory64 it's recommended to use
+ * #wasmtime_memorytype_maximum or #wasmtime_memorytype_minimum instead.
  */

 /**

@@ -16,6 +16,39 @@
 extern "C" {
 #endif

+/**
+ * \brief Creates a new memory type from the specified parameters.
+ *
+ * Note that this function is preferred over #wasm_memorytype_new for
+ * compatibility with the memory64 proposal.
+ */
+WASM_API_EXTERN wasm_memorytype_t *wasmtime_memorytype_new(uint64_t min, bool max_present, uint64_t max, bool is_64);
+
+/**
+ * \brief Returns the minimum size, in pages, of the specified memory type.
+ *
+ * Note that this function is preferred over #wasm_memorytype_limits for
+ * compatibility with the memory64 proposal.
+ */
+WASM_API_EXTERN uint64_t wasmtime_memorytype_minimum(const wasm_memorytype_t *ty);
+
+/**
+ * \brief Returns the maximum size, in pages, of the specified memory type.
+ *
+ * If this memory type doesn't have a maximum size listed then `false` is
+ * returned. Otherwise `true` is returned and the `max` pointer is filled in
+ * with the specified maximum size, in pages.
+ *
+ * Note that this function is preferred over #wasm_memorytype_limits for
+ * compatibility with the memory64 proposal.
+ */
+WASM_API_EXTERN bool wasmtime_memorytype_maximum(const wasm_memorytype_t *ty, uint64_t *max);
+
+/**
+ * \brief Returns whether this type of memory represents a 64-bit memory.
+ */
+WASM_API_EXTERN bool wasmtime_memorytype_is64(const wasm_memorytype_t *ty);
+
 /**
  * \brief Creates a new WebAssembly linear memory
  *
@@ -59,7 +92,7 @@ WASM_API_EXTERN size_t wasmtime_memory_data_size(
 /**
  * \brief Returns the length, in WebAssembly pages, of this linear memory
  */
-WASM_API_EXTERN uint32_t wasmtime_memory_size(
+WASM_API_EXTERN uint64_t wasmtime_memory_size(
     const wasmtime_context_t *store,
     const wasmtime_memory_t *memory
 );
@@ -79,8 +112,8 @@ WASM_API_EXTERN uint32_t wasmtime_memory_size(
 WASM_API_EXTERN wasmtime_error_t *wasmtime_memory_grow(
     wasmtime_context_t *store,
     const wasmtime_memory_t *memory,
-    uint32_t delta,
-    uint32_t *prev_size
+    uint64_t delta,
+    uint64_t *prev_size
 );

 #ifdef __cplusplus
@@ -2,6 +2,7 @@ use crate::{
     handle_result, wasm_extern_t, wasm_memorytype_t, wasm_store_t, wasmtime_error_t, CStoreContext,
     CStoreContextMut,
 };
+use std::convert::TryFrom;
 use wasmtime::{Extern, Memory};

 #[derive(Clone)]
@@ -72,7 +73,7 @@ pub unsafe extern "C" fn wasm_memory_data_size(m: &wasm_memory_t) -> usize {

 #[no_mangle]
 pub unsafe extern "C" fn wasm_memory_size(m: &wasm_memory_t) -> wasm_memory_pages_t {
-    m.memory().size(m.ext.store.context())
+    u32::try_from(m.memory().size(m.ext.store.context())).unwrap()
 }

 #[no_mangle]
@@ -82,7 +83,7 @@ pub unsafe extern "C" fn wasm_memory_grow(
 ) -> bool {
     let memory = m.memory();
     let mut store = m.ext.store.context_mut();
-    memory.grow(&mut store, delta).is_ok()
+    memory.grow(&mut store, u64::from(delta)).is_ok()
 }

 #[no_mangle]
@@ -113,7 +114,7 @@ pub extern "C" fn wasmtime_memory_data_size(store: CStoreContext<'_>, mem: &Memo
 }

 #[no_mangle]
-pub extern "C" fn wasmtime_memory_size(store: CStoreContext<'_>, mem: &Memory) -> u32 {
+pub extern "C" fn wasmtime_memory_size(store: CStoreContext<'_>, mem: &Memory) -> u64 {
     mem.size(store)
 }

@@ -121,8 +122,8 @@ pub extern "C" fn wasmtime_memory_size(store: CStoreContext<'_>, mem: &Memory) -
 pub extern "C" fn wasmtime_memory_grow(
     store: CStoreContextMut<'_>,
     mem: &Memory,
-    delta: u32,
-    prev_size: &mut u32,
+    delta: u64,
+    prev_size: &mut u64,
 ) -> Option<Box<wasmtime_error_t>> {
     handle_result(mem.grow(store, delta), |prev| *prev_size = prev)
 }
@@ -1,5 +1,3 @@
-use wasmtime::Limits;
-
 #[repr(C)]
 #[derive(Clone)]
 pub struct wasm_limits_t {
@@ -8,13 +6,12 @@ pub struct wasm_limits_t {
 }

 impl wasm_limits_t {
-    pub(crate) fn to_wasmtime(&self) -> Limits {
-        let max = if self.max == u32::max_value() {
+    pub(crate) fn max(&self) -> Option<u32> {
+        if self.max == u32::max_value() {
             None
         } else {
             Some(self.max)
-        };
-        Limits::new(self.min, max)
+        }
     }
 }
@@ -1,5 +1,6 @@
 use crate::{wasm_externtype_t, wasm_limits_t, CExternType};
 use once_cell::unsync::OnceCell;
+use std::convert::TryFrom;
 use wasmtime::MemoryType;

 #[repr(transparent)]
@@ -50,22 +51,63 @@ impl CMemoryType {
 #[no_mangle]
 pub extern "C" fn wasm_memorytype_new(limits: &wasm_limits_t) -> Box<wasm_memorytype_t> {
     Box::new(wasm_memorytype_t::new(MemoryType::new(
-        limits.to_wasmtime(),
+        limits.min,
+        limits.max(),
     )))
 }

 #[no_mangle]
 pub extern "C" fn wasm_memorytype_limits(mt: &wasm_memorytype_t) -> &wasm_limits_t {
     let mt = mt.ty();
-    mt.limits_cache.get_or_init(|| {
-        let limits = mt.ty.limits();
-        wasm_limits_t {
-            min: limits.min(),
-            max: limits.max().unwrap_or(u32::max_value()),
-        }
+    mt.limits_cache.get_or_init(|| wasm_limits_t {
+        min: u32::try_from(mt.ty.minimum()).unwrap(),
+        max: u32::try_from(mt.ty.maximum().unwrap_or(u64::from(u32::max_value()))).unwrap(),
     })
 }

+#[no_mangle]
+pub extern "C" fn wasmtime_memorytype_new(
+    minimum: u64,
+    maximum_specified: bool,
+    maximum: u64,
+    memory64: bool,
+) -> Box<wasm_memorytype_t> {
+    let maximum = if maximum_specified {
+        Some(maximum)
+    } else {
+        None
+    };
+    Box::new(wasm_memorytype_t::new(if memory64 {
+        MemoryType::new64(minimum, maximum)
+    } else {
+        MemoryType::new(
+            u32::try_from(minimum).unwrap(),
+            maximum.map(|i| u32::try_from(i).unwrap()),
+        )
+    }))
+}
+
+#[no_mangle]
+pub extern "C" fn wasmtime_memorytype_minimum(mt: &wasm_memorytype_t) -> u64 {
+    mt.ty().ty.minimum()
+}
+
+#[no_mangle]
+pub extern "C" fn wasmtime_memorytype_maximum(mt: &wasm_memorytype_t, out: &mut u64) -> bool {
+    match mt.ty().ty.maximum() {
+        Some(max) => {
+            *out = max;
+            true
+        }
+        None => false,
+    }
+}
+
+#[no_mangle]
+pub extern "C" fn wasmtime_memorytype_is64(mt: &wasm_memorytype_t) -> bool {
+    mt.ty().ty.is_64()
+}
+
 #[no_mangle]
 pub extern "C" fn wasm_memorytype_as_externtype(ty: &wasm_memorytype_t) -> &wasm_externtype_t {
     &ty.ext
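On the Rust API side these wrappers are thin; a small usage sketch of the `MemoryType` constructors and accessors referenced above (signatures as implied by this diff, so treat the details as approximate):

```rust
use wasmtime::MemoryType;

fn main() {
    // 32-bit memories still take u32 page counts at construction...
    let m32 = MemoryType::new(1, Some(2));
    // ...while 64-bit memories use the new 64-bit constructor.
    let m64 = MemoryType::new64(0x1_0000_0000, None);

    // The minimum/maximum accessors report u64 pages for both flavors.
    assert_eq!(m32.minimum(), 1);
    assert_eq!(m32.maximum(), Some(2));
    assert!(!m32.is_64());

    assert_eq!(m64.minimum(), 0x1_0000_0000);
    assert_eq!(m64.maximum(), None);
    assert!(m64.is_64());
}
```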
@@ -56,7 +56,8 @@ pub extern "C" fn wasm_tabletype_new(
 ) -> Box<wasm_tabletype_t> {
     Box::new(wasm_tabletype_t::new(TableType::new(
         ty.ty,
-        limits.to_wasmtime(),
+        limits.min,
+        limits.max(),
     )))
 }

@@ -71,12 +72,9 @@ pub extern "C" fn wasm_tabletype_element(tt: &wasm_tabletype_t) -> &wasm_valtype
 #[no_mangle]
 pub extern "C" fn wasm_tabletype_limits(tt: &wasm_tabletype_t) -> &wasm_limits_t {
     let tt = tt.ty();
-    tt.limits_cache.get_or_init(|| {
-        let limits = tt.ty.limits();
-        wasm_limits_t {
-            min: limits.min(),
-            max: limits.max().unwrap_or(u32::max_value()),
-        }
+    tt.limits_cache.get_or_init(|| wasm_limits_t {
+        min: tt.ty.minimum(),
+        max: tt.ty.maximum().unwrap_or(u32::max_value()),
     })
 }
@@ -555,6 +555,60 @@ impl<'module_environment> FuncEnvironment<'module_environment> {

         builder.switch_to_block(continuation_block);
     }

+    fn memory_index_type(&self, index: MemoryIndex) -> ir::Type {
+        if self.module.memory_plans[index].memory.memory64 {
+            I64
+        } else {
+            I32
+        }
+    }
+
+    fn cast_pointer_to_memory_index(
+        &self,
+        mut pos: FuncCursor<'_>,
+        val: ir::Value,
+        index: MemoryIndex,
+    ) -> ir::Value {
+        let desired_type = self.memory_index_type(index);
+        let pointer_type = self.pointer_type();
+        assert_eq!(pos.func.dfg.value_type(val), pointer_type);
+
+        // The current length is of type `pointer_type` but we need to fit it
+        // into `desired_type`. We are guaranteed that the result will always
+        // fit, so we just need to do the right ireduce/sextend here.
+        if pointer_type == desired_type {
+            val
+        } else if pointer_type.bits() > desired_type.bits() {
+            pos.ins().ireduce(desired_type, val)
+        } else {
+            // Note that we `sextend` instead of the probably expected
+            // `uextend`. This function is only used within the contexts of
+            // `memory.size` and `memory.grow` where we're working with units of
+            // pages instead of actual bytes, so we know that the upper bit is
+            // always cleared for "valid values". The one case we care about
+            // sextend would be when the return value of `memory.grow` is `-1`,
+            // in which case we want to copy the sign bit.
+            //
+            // This should only come up on 32-bit hosts running wasm64 modules,
+            // which at some point also makes you question various assumptions
+            // made along the way...
+            pos.ins().sextend(desired_type, val)
+        }
+    }
+
+    fn cast_memory_index_to_i64(
+        &self,
+        pos: &mut FuncCursor<'_>,
+        val: ir::Value,
+        index: MemoryIndex,
+    ) -> ir::Value {
+        if self.memory_index_type(index) == I64 {
+            val
+        } else {
+            pos.ins().uextend(I64, val)
+        }
+    }
 }

 impl<'module_environment> TargetEnvironment for FuncEnvironment<'module_environment> {
@@ -1190,7 +1244,7 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m
             min_size: 0.into(),
             offset_guard_size,
             style: heap_style,
-            index_type: I32,
+            index_type: self.memory_index_type(index),
         }))
     }

@@ -1395,10 +1449,13 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m
             &mut pos,
             BuiltinFunctionIndex::memory32_grow(),
         );

+        let val = self.cast_memory_index_to_i64(&mut pos, val, index);
         let call_inst = pos
             .ins()
             .call_indirect(func_sig, func_addr, &[vmctx, val, memory_index]);
-        Ok(*pos.func.dfg.inst_results(call_inst).first().unwrap())
+        let result = *pos.func.dfg.inst_results(call_inst).first().unwrap();
+        Ok(self.cast_pointer_to_memory_index(pos, result, index))
     }

     fn translate_memory_size(
@@ -1436,12 +1493,8 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m
         let current_length_in_pages = pos
             .ins()
             .udiv_imm(current_length_in_bytes, i64::from(WASM_PAGE_SIZE));
-        if pointer_type == I32 {
-            Ok(current_length_in_pages)
-        } else {
-            assert_eq!(pointer_type, I64);
-            Ok(pos.ins().ireduce(I32, current_length_in_pages))
-        }
+        Ok(self.cast_pointer_to_memory_index(pos, current_length_in_pages, index))
     }

     fn translate_memory_copy(
@@ -1455,13 +1508,26 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m
         src: ir::Value,
         len: ir::Value,
     ) -> WasmResult<()> {
-        let src_index = pos.ins().iconst(I32, i64::from(src_index.as_u32()));
-        let dst_index = pos.ins().iconst(I32, i64::from(dst_index.as_u32()));
-
         let (vmctx, func_addr) = self
             .translate_load_builtin_function_address(&mut pos, BuiltinFunctionIndex::memory_copy());

         let func_sig = self.builtin_function_signatures.memory_copy(&mut pos.func);
+        let dst = self.cast_memory_index_to_i64(&mut pos, dst, dst_index);
+        let src = self.cast_memory_index_to_i64(&mut pos, src, src_index);
+        // The length is 32-bit if either memory is 32-bit, but if they're both
+        // 64-bit then it's 64-bit. Our intrinsic takes a 64-bit length for
+        // compatibility across all memories, so make sure that it's cast
+        // correctly here (this is a bit special so no generic helper unlike for
+        // `dst`/`src` above)
+        let len = if self.memory_index_type(dst_index) == I64
+            && self.memory_index_type(src_index) == I64
+        {
+            len
+        } else {
+            pos.ins().uextend(I64, len)
+        };
+        let src_index = pos.ins().iconst(I32, i64::from(src_index.as_u32()));
+        let dst_index = pos.ins().iconst(I32, i64::from(dst_index.as_u32()));
         pos.ins().call_indirect(
             func_sig,
             func_addr,
@@ -1481,9 +1547,9 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m
         len: ir::Value,
     ) -> WasmResult<()> {
         let func_sig = self.builtin_function_signatures.memory_fill(&mut pos.func);
-        let memory_index = memory_index.index();
-        let memory_index_arg = pos.ins().iconst(I32, memory_index as i64);
+        let dst = self.cast_memory_index_to_i64(&mut pos, dst, memory_index);
+        let len = self.cast_memory_index_to_i64(&mut pos, len, memory_index);
+        let memory_index_arg = pos.ins().iconst(I32, i64::from(memory_index.as_u32()));

         let (vmctx, func_addr) = self
             .translate_load_builtin_function_address(&mut pos, BuiltinFunctionIndex::memory_fill());
@@ -1514,6 +1580,8 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m

         let (vmctx, func_addr) = self.translate_load_builtin_function_address(&mut pos, func_idx);

+        let dst = self.cast_memory_index_to_i64(&mut pos, dst, memory_index);
+
         pos.ins().call_indirect(
             func_sig,
             func_addr,
@@ -1755,4 +1823,8 @@ impl<'module_environment> cranelift_wasm::FuncEnvironment for FuncEnvironment<'m
         }
         Ok(())
     }
+
+    fn unsigned_add_overflow_condition(&self) -> ir::condcodes::IntCC {
+        self.isa.unsigned_add_overflow_condition()
+    }
 }
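The `sextend`-vs-`uextend` choice above only matters for the `memory.grow` failure value; as a plain-integer model of the 32-bit-host / 64-bit-memory case (a hypothetical helper, not translator code):

```rust
/// Widening a 32-bit host's `memory.grow` result to a 64-bit memory's index
/// type: valid page counts always have the top bit clear, and the failure
/// sentinel -1 must stay -1, which is exactly what a sign extension preserves
/// (a zero extension would turn -1 into 0xffff_ffff).
fn widen_grow_result(result_on_32bit_host: i32) -> i64 {
    i64::from(result_on_32bit_host) // sign-extending conversion
}

fn main() {
    assert_eq!(widen_grow_result(10), 10); // success: a page count
    assert_eq!(widen_grow_result(-1), -1); // failure sentinel preserved
}
```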
@@ -4,7 +4,7 @@ macro_rules! foreach_builtin_function {
     ($mac:ident) => {
         $mac! {
             /// Returns an index for wasm's `memory.grow` builtin function.
-            memory32_grow(vmctx, i32, i32) -> (i32);
+            memory32_grow(vmctx, i64, i32) -> (pointer);
             /// Returns an index for wasm's `table.copy` when both tables are locally
             /// defined.
             table_copy(vmctx, i32, i32, i32, i32, i32) -> ();
@@ -13,11 +13,11 @@ macro_rules! foreach_builtin_function {
             /// Returns an index for wasm's `elem.drop`.
             elem_drop(vmctx, i32) -> ();
             /// Returns an index for wasm's `memory.copy`
-            memory_copy(vmctx, i32, i32, i32, i32, i32) -> ();
+            memory_copy(vmctx, i32, i64, i32, i64, i64) -> ();
             /// Returns an index for wasm's `memory.fill` instruction.
-            memory_fill(vmctx, i32, i32, i32, i32) -> ();
+            memory_fill(vmctx, i32, i64, i32, i64) -> ();
             /// Returns an index for wasm's `memory.init` instruction.
-            memory_init(vmctx, i32, i32, i32, i32, i32) -> ();
+            memory_init(vmctx, i32, i32, i64, i32, i32) -> ();
             /// Returns an index for wasm's `data.drop` instruction.
             data_drop(vmctx, i32) -> ();
             /// Returns an index for Wasm's `table.grow` instruction for `funcref`s.
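On the host side the `memory.grow` signature above pairs with a libcall that takes the page delta as a full `u64` and reports the old size as a pointer-sized value, per the commit message; the sketch below only pins down those boundary types, and the function and parameter names are made up for illustration:

```rust
/// Illustrative shape of the memory-grow libcall after this change: the delta
/// may come from a 64-bit memory so it is a u64, while the return value is
/// bounded by host memory and therefore pointer-sized, with usize::MAX used
/// here as a stand-in failure value.
#[allow(dead_code)]
unsafe extern "C" fn illustrative_memory_grow(
    _vmctx: *mut std::ffi::c_void, // stand-in for the real vmctx pointer
    delta: u64,
    _memory_index: u32,
) -> usize {
    // A real implementation would grow the linear memory; this stub exists
    // only to show the argument and return types at the JIT/host boundary.
    let _ = delta;
    usize::MAX
}
```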
@@ -44,8 +44,12 @@ pub use crate::vmoffsets::*;
 /// WebAssembly page sizes are defined to be 64KiB.
 pub const WASM_PAGE_SIZE: u32 = 0x10000;

-/// The number of pages we can have before we run out of byte index space.
-pub const WASM_MAX_PAGES: u32 = 0x10000;
+/// The number of pages (for 32-bit modules) we can have before we run out of
+/// byte index space.
+pub const WASM32_MAX_PAGES: u64 = 1 << 16;
+/// The number of pages (for 64-bit modules) we can have before we run out of
+/// byte index space.
+pub const WASM64_MAX_PAGES: u64 = 1 << 48;

 /// Version number of this crate.
 pub const VERSION: &str = env!("CARGO_PKG_VERSION");
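In bytes these limits work out as follows (a standalone check; the real `WASM_PAGE_SIZE` constant is a `u32`, widened here for the arithmetic):

```rust
const WASM_PAGE_SIZE: u64 = 0x10000; // 64 KiB
const WASM32_MAX_PAGES: u64 = 1 << 16;
const WASM64_MAX_PAGES: u64 = 1 << 48;

fn main() {
    // 2^16 pages of 2^16 bytes each cover exactly the 4 GiB 32-bit index space.
    assert_eq!(WASM32_MAX_PAGES * WASM_PAGE_SIZE, 1 << 32);
    // 2^48 pages of 2^16 bytes each cover the full 2^64-byte 64-bit index
    // space; the product needs u128 to avoid overflowing u64 by one bit.
    assert_eq!(WASM64_MAX_PAGES as u128 * WASM_PAGE_SIZE as u128, 1u128 << 64);
}
```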
@@ -1,7 +1,6 @@
 //! Data structures for representing decoded wasm modules.

 use crate::tunables::Tunables;
-use crate::WASM_MAX_PAGES;
 use cranelift_entity::{EntityRef, PrimaryMap};
 use cranelift_wasm::*;
 use indexmap::IndexMap;
@@ -18,7 +17,7 @@ pub enum MemoryStyle {
     /// Address space is allocated up front.
     Static {
         /// The number of mapped and unmapped pages.
-        bound: u32,
+        bound: u64,
     },
 }

@@ -30,12 +29,17 @@ impl MemoryStyle {
         //
         // If the module doesn't declare an explicit maximum treat it as 4GiB when not
         // requested to use the static memory bound itself as the maximum.
+        let absolute_max_pages = if memory.memory64 {
+            crate::WASM64_MAX_PAGES
+        } else {
+            crate::WASM32_MAX_PAGES
+        };
         let maximum = std::cmp::min(
-            memory.maximum.unwrap_or(WASM_MAX_PAGES),
+            memory.maximum.unwrap_or(absolute_max_pages),
             if tunables.static_memory_bound_is_maximum {
-                std::cmp::min(tunables.static_memory_bound, WASM_MAX_PAGES)
+                std::cmp::min(tunables.static_memory_bound, absolute_max_pages)
             } else {
-                WASM_MAX_PAGES
+                absolute_max_pages
             },
         );

@@ -94,7 +98,7 @@ pub struct MemoryInitializer {
     /// Optionally, a global variable giving a base index.
     pub base: Option<GlobalIndex>,
     /// The offset to add to the base.
-    pub offset: u32,
+    pub offset: u64,
     /// The data to write into the linear memory.
     pub data: Box<[u8]>,
 }
@@ -770,7 +770,7 @@ impl<'data> cranelift_wasm::ModuleEnvironment<'data> for ModuleEnvironment<'data
     fn define_function_body(
         &mut self,
         validator: FuncValidator<ValidatorResources>,
-        body: FunctionBody<'data>,
+        mut body: FunctionBody<'data>,
     ) -> WasmResult<()> {
         if self.tunables.generate_native_debuginfo {
             let func_index = self.result.code_index + self.result.module.num_imported_funcs as u32;
@@ -790,6 +790,7 @@ impl<'data> cranelift_wasm::ModuleEnvironment<'data> for ModuleEnvironment<'data
                 params: sig.params.iter().cloned().map(|i| i.into()).collect(),
             });
         }
+        body.allow_memarg64(self.features.memory64);
         self.result
             .function_body_inputs
             .push(FunctionBodyData { validator, body });
@@ -811,7 +812,7 @@ impl<'data> cranelift_wasm::ModuleEnvironment<'data> for ModuleEnvironment<'data
         &mut self,
         memory_index: MemoryIndex,
         base: Option<GlobalIndex>,
-        offset: u32,
+        offset: u64,
         data: &'data [u8],
     ) -> WasmResult<()> {
         match &mut self.result.module.memory_initialization {
@@ -4,7 +4,7 @@ use serde::{Deserialize, Serialize};
 #[derive(Clone, Hash, Serialize, Deserialize)]
 pub struct Tunables {
     /// For static heaps, the size in wasm pages of the heap protected by bounds checking.
-    pub static_memory_bound: u32,
+    pub static_memory_bound: u64,

     /// The size in bytes of the offset guard for static heaps.
     pub static_memory_offset_guard_size: u64,
@@ -149,4 +149,8 @@ impl wasm_smith::Config for WasmtimeDefaultConfig {
     fn bulk_memory_enabled(&self) -> bool {
         true
     }
+
+    fn memory64_enabled(&self) -> bool {
+        true
+    }
 }

@@ -41,6 +41,7 @@ pub fn fuzz_default_config(strategy: wasmtime::Strategy) -> anyhow::Result<wasmt
         .wasm_module_linking(true)
         .wasm_multi_memory(true)
         .wasm_simd(true)
+        .wasm_memory64(true)
         .strategy(strategy)?;
     Ok(config)
 }
@@ -79,16 +79,11 @@ impl StoreLimits {
 }

 impl ResourceLimiter for StoreLimits {
-    fn memory_growing(&mut self, current: u32, desired: u32, _maximum: Option<u32>) -> bool {
-        // Units provided are in wasm pages, so adjust them to bytes to see if
-        // we are ok to allocate this much.
-        self.alloc((desired - current) as usize * 16 * 1024)
+    fn memory_growing(&mut self, current: usize, desired: usize, _maximum: Option<usize>) -> bool {
+        self.alloc(desired - current)
     }

     fn table_growing(&mut self, current: u32, desired: u32, _maximum: Option<u32>) -> bool {
-        // Units provided are in table elements, and for now we allocate one
-        // pointer per table element, so use that size for an adjustment into
-        // bytes.
         let delta = (desired - current) as usize * std::mem::size_of::<usize>();
         self.alloc(delta)
     }
@@ -491,6 +486,7 @@ pub fn spectest(fuzz_config: crate::generators::Config, test: crate::generators:
     crate::init_fuzzing();
     log::debug!("running {:?} with {:?}", test.file, fuzz_config);
    let mut config = fuzz_config.to_wasmtime();
+    config.wasm_memory64(false);
     config.wasm_reference_types(false);
     config.wasm_bulk_memory(false);
     config.wasm_module_linking(false);
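Because `memory_growing` now receives byte sizes rather than page counts, a host-side limiter no longer needs its own page-size conversion; here is a minimal stand-in with the same shape as the changed hook (struct name and budget value are illustrative):

```rust
/// Stand-in with the same signature shape as the updated
/// `ResourceLimiter::memory_growing` hook: everything is in bytes.
struct ByteBudget {
    remaining: usize,
}

impl ByteBudget {
    fn memory_growing(&mut self, current: usize, desired: usize, _maximum: Option<usize>) -> bool {
        let delta = desired - current;
        if delta <= self.remaining {
            self.remaining -= delta;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut budget = ByteBudget { remaining: 256 * 1024 * 1024 };
    // Growing by one 64 KiB wasm page fits comfortably in the budget.
    assert!(budget.memory_growing(0, 64 * 1024, None));
}
```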
@@ -154,8 +154,8 @@ impl WatGenerator {
         write!(
             self.dst,
             "(memory {} {})",
-            mem.limits().min(),
-            match mem.limits().max() {
+            mem.minimum(),
+            match mem.maximum() {
                 Some(max) => max.to_string(),
                 None => String::new(),
             }
@@ -166,12 +166,12 @@ impl WatGenerator {
         write!(
             self.dst,
             "(table {} {} {})",
-            table.limits().min(),
-            match table.limits().max() {
+            table.minimum(),
+            match table.maximum() {
                 Some(max) => max.to_string(),
                 None => String::new(),
             },
-            wat_ty(table.element()),
+            wat_ty(&table.element()),
         )
         .unwrap();
     }
@@ -243,8 +243,8 @@ impl WatGenerator {
             self.dst,
             "(memory ${} {} {})\n",
             name,
-            mem.limits().min(),
-            match mem.limits().max() {
+            mem.minimum(),
+            match mem.maximum() {
                 Some(max) => max.to_string(),
                 None => String::new(),
             }
@@ -256,12 +256,12 @@ impl WatGenerator {
             self.dst,
             "(table ${} {} {} {})\n",
             name,
-            table.limits().min(),
-            match table.limits().max() {
+            table.minimum(),
+            match table.maximum() {
                 Some(max) => max.to_string(),
                 None => String::new(),
             },
-            wat_ty(table.element()),
+            wat_ty(&table.element()),
         )
         .unwrap();
     }
@@ -390,11 +390,7 @@ mod tests {
     #[test]
     fn dummy_table_import() {
         let mut store = store();
-        let table = dummy_table(
-            &mut store,
-            TableType::new(ValType::ExternRef, Limits::at_least(10)),
-        )
-        .unwrap();
+        let table = dummy_table(&mut store, TableType::new(ValType::ExternRef, 10, None)).unwrap();
         assert_eq!(table.size(&store), 10);
         for i in 0..10 {
             assert!(table
@@ -416,7 +412,7 @@ mod tests {
     #[test]
     fn dummy_memory_import() {
         let mut store = store();
-        let memory = dummy_memory(&mut store, MemoryType::new(Limits::at_least(1))).unwrap();
+        let memory = dummy_memory(&mut store, MemoryType::new(1, None)).unwrap();
         assert_eq!(memory.size(&store), 1);
     }

@@ -449,18 +445,12 @@ mod tests {
         );

         // Tables.
-        instance_ty.add_named_export(
-            "table0",
-            TableType::new(ValType::ExternRef, Limits::at_least(1)).into(),
-        );
-        instance_ty.add_named_export(
-            "table1",
-            TableType::new(ValType::ExternRef, Limits::at_least(1)).into(),
-        );
+        instance_ty.add_named_export("table0", TableType::new(ValType::ExternRef, 1, None).into());
+        instance_ty.add_named_export("table1", TableType::new(ValType::ExternRef, 1, None).into());

         // Memories.
-        instance_ty.add_named_export("memory0", MemoryType::new(Limits::at_least(1)).into());
-        instance_ty.add_named_export("memory1", MemoryType::new(Limits::at_least(1)).into());
+        instance_ty.add_named_export("memory0", MemoryType::new(1, None).into());
+        instance_ty.add_named_export("memory1", MemoryType::new(1, None).into());

         // Modules.
         instance_ty.add_named_export("module0", ModuleType::new().into());
@@ -536,30 +526,24 @@ mod tests {
         );

         // Multiple exported and imported tables.
-        module_ty.add_named_export(
-            "table0",
-            TableType::new(ValType::ExternRef, Limits::at_least(1)).into(),
-        );
-        module_ty.add_named_export(
-            "table1",
-            TableType::new(ValType::ExternRef, Limits::at_least(1)).into(),
-        );
+        module_ty.add_named_export("table0", TableType::new(ValType::ExternRef, 1, None).into());
+        module_ty.add_named_export("table1", TableType::new(ValType::ExternRef, 1, None).into());
         module_ty.add_named_import(
             "table2",
             None,
-            TableType::new(ValType::ExternRef, Limits::at_least(1)).into(),
+            TableType::new(ValType::ExternRef, 1, None).into(),
         );
         module_ty.add_named_import(
             "table3",
             None,
-            TableType::new(ValType::ExternRef, Limits::at_least(1)).into(),
+            TableType::new(ValType::ExternRef, 1, None).into(),
         );

         // Multiple exported and imported memories.
-        module_ty.add_named_export("memory0", MemoryType::new(Limits::at_least(1)).into());
-        module_ty.add_named_export("memory1", MemoryType::new(Limits::at_least(1)).into());
-        module_ty.add_named_import("memory2", None, MemoryType::new(Limits::at_least(1)).into());
-        module_ty.add_named_import("memory3", None, MemoryType::new(Limits::at_least(1)).into());
+        module_ty.add_named_export("memory0", MemoryType::new(1, None).into());
+        module_ty.add_named_export("memory1", MemoryType::new(1, None).into());
+        module_ty.add_named_import("memory2", None, MemoryType::new(1, None).into());
+        module_ty.add_named_import("memory3", None, MemoryType::new(1, None).into());

         // An exported and an imported module.
         module_ty.add_named_export("module0", ModuleType::new().into());
|
|||||||
@@ -45,18 +45,25 @@ pub const DEFAULT_MEMORY_LIMIT: usize = 10000;
 /// An instance can be created with a resource limiter so that hosts can take into account
 /// non-WebAssembly resource usage to determine if a linear memory or table should grow.
 pub trait ResourceLimiter {
-    /// Notifies the resource limiter that an instance's linear memory has been requested to grow.
+    /// Notifies the resource limiter that an instance's linear memory has been
+    /// requested to grow.
     ///
-    /// * `current` is the current size of the linear memory in WebAssembly page units.
-    /// * `desired` is the desired size of the linear memory in WebAssembly page units.
-    /// * `maximum` is either the linear memory's maximum or a maximum from an instance allocator,
-    ///   also in WebAssembly page units. A value of `None` indicates that the linear memory is
-    ///   unbounded.
+    /// * `current` is the current size of the linear memory in bytes.
+    /// * `desired` is the desired size of the linear memory in bytes.
+    /// * `maximum` is either the linear memory's maximum or a maximum from an
+    ///   instance allocator, also in bytes. A value of `None`
+    ///   indicates that the linear memory is unbounded.
    ///
-    /// This function should return `true` to indicate that the growing operation is permitted or
-    /// `false` if not permitted. Returning `true` when a maximum has been exceeded will have no
-    /// effect as the linear memory will not grow.
-    fn memory_growing(&mut self, current: u32, desired: u32, maximum: Option<u32>) -> bool;
+    /// This function should return `true` to indicate that the growing
+    /// operation is permitted or `false` if not permitted. Returning `true`
+    /// when a maximum has been exceeded will have no effect as the linear
+    /// memory will not grow.
+    ///
+    /// This function is not guaranteed to be invoked for all requests to
+    /// `memory.grow`. Requests where the allocation requested size doesn't fit
+    /// in `usize` or exceeds the memory's listed maximum size may not invoke
+    /// this method.
+    fn memory_growing(&mut self, current: usize, desired: usize, maximum: Option<usize>) -> bool;
 
     /// Notifies the resource limiter that an instance's table has been requested to grow.
     ///
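Since `memory_growing` now reports sizes in bytes rather than pages, a host limiter compares against a byte budget directly. A minimal, self-contained sketch of the check such a limiter would perform (the function and constant names here are illustrative, not part of the change, and the real trait has additional methods for tables and instance counts):

```rust
/// Sketch of the decision a byte-based limiter makes for a grow request.
fn allow_memory_growth(desired: usize, maximum: Option<usize>, budget: usize) -> bool {
    // `desired` and `maximum` are byte counts now, so the host budget can be
    // compared directly without multiplying by the wasm page size.
    desired <= budget && maximum.map_or(true, |max| desired <= max)
}

fn main() {
    const WASM_PAGE_SIZE: usize = 64 * 1024;
    // Growing to 32 MiB against a 16 MiB budget is rejected.
    assert!(!allow_memory_growth(512 * WASM_PAGE_SIZE, None, 256 * WASM_PAGE_SIZE));
    // Growing to 8 MiB under both the budget and the declared maximum is allowed.
    assert!(allow_memory_growth(128 * WASM_PAGE_SIZE, Some(1 << 31), 256 * WASM_PAGE_SIZE));
}
```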
@@ -406,8 +413,9 @@ impl Instance {
     /// Grow memory by the specified amount of pages.
     ///
     /// Returns `None` if memory can't be grown by the specified amount
-    /// of pages.
-    pub(crate) fn memory_grow(&mut self, index: MemoryIndex, delta: u32) -> Option<u32> {
+    /// of pages. Returns `Some` with the old size in bytes if growth was
+    /// successful.
+    pub(crate) fn memory_grow(&mut self, index: MemoryIndex, delta: u64) -> Option<usize> {
         let (idx, instance) = if let Some(idx) = self.module.defined_memory_index(index) {
             (idx, self)
         } else {
@@ -616,26 +624,18 @@ impl Instance {
     pub(crate) fn memory_copy(
         &mut self,
         dst_index: MemoryIndex,
-        dst: u32,
+        dst: u64,
         src_index: MemoryIndex,
-        src: u32,
-        len: u32,
+        src: u64,
+        len: u64,
     ) -> Result<(), Trap> {
         // https://webassembly.github.io/reference-types/core/exec/instructions.html#exec-memory-copy
 
         let src_mem = self.get_memory(src_index);
         let dst_mem = self.get_memory(dst_index);
 
-        if src.checked_add(len).map_or(true, |n| {
-            usize::try_from(n).unwrap() > src_mem.current_length
-        }) || dst.checked_add(len).map_or(true, |m| {
-            usize::try_from(m).unwrap() > dst_mem.current_length
-        }) {
-            return Err(Trap::wasm(ir::TrapCode::HeapOutOfBounds));
-        }
-
-        let dst = usize::try_from(dst).unwrap();
-        let src = usize::try_from(src).unwrap();
+        let src = self.validate_inbounds(src_mem.current_length, src, len)?;
+        let dst = self.validate_inbounds(dst_mem.current_length, dst, len)?;
 
         // Bounds and casts are checked above, by this point we know that
         // everything is safe.
@@ -648,6 +648,19 @@ impl Instance {
         Ok(())
     }
 
+    fn validate_inbounds(&self, max: usize, ptr: u64, len: u64) -> Result<usize, Trap> {
+        let oob = || Trap::wasm(ir::TrapCode::HeapOutOfBounds);
+        let end = ptr
+            .checked_add(len)
+            .and_then(|i| usize::try_from(i).ok())
+            .ok_or_else(oob)?;
+        if end > max {
+            Err(oob())
+        } else {
+            Ok(ptr as usize)
+        }
+    }
+
     /// Perform the `memory.fill` operation on a locally defined memory.
     ///
     /// # Errors
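The helper above centralizes the overflow-aware bounds check: the 64-bit guest offset plus length is computed with `checked_add`, converted to `usize` with `try_from`, and only then compared against the host-side memory length. A standalone sketch of the same pattern (a free function with assumed names) shows why both steps matter, especially on 32-bit hosts:

```rust
use std::convert::TryFrom;

/// Returns the in-bounds start offset, or `None` when `ptr + len` overflows
/// u64, doesn't fit in the host's `usize`, or runs past `memory_len`.
fn validate_inbounds(memory_len: usize, ptr: u64, len: u64) -> Option<usize> {
    let end = ptr.checked_add(len)?;      // guest arithmetic can overflow u64
    let end = usize::try_from(end).ok()?; // may not fit in a 32-bit host's usize
    if end > memory_len {
        None
    } else {
        Some(ptr as usize) // ptr <= end <= memory_len, so this cast is lossless
    }
}

fn main() {
    assert_eq!(validate_inbounds(0x1_0000, 0x100, 0x10), Some(0x100));
    assert_eq!(validate_inbounds(0x1_0000, 0xffff, 2), None);   // past the end
    assert_eq!(validate_inbounds(0x1_0000, u64::MAX, 1), None); // u64 overflow
}
```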
@@ -656,25 +669,17 @@ impl Instance {
     pub(crate) fn memory_fill(
         &mut self,
         memory_index: MemoryIndex,
-        dst: u32,
-        val: u32,
-        len: u32,
+        dst: u64,
+        val: u8,
+        len: u64,
     ) -> Result<(), Trap> {
         let memory = self.get_memory(memory_index);
-        if dst.checked_add(len).map_or(true, |m| {
-            usize::try_from(m).unwrap() > memory.current_length
-        }) {
-            return Err(Trap::wasm(ir::TrapCode::HeapOutOfBounds));
-        }
-
-        let dst = isize::try_from(dst).unwrap();
-        let val = val as u8;
+        let dst = self.validate_inbounds(memory.current_length, dst, len)?;
 
         // Bounds and casts are checked above, by this point we know that
         // everything is safe.
         unsafe {
-            let dst = memory.base.offset(dst);
+            let dst = memory.base.add(dst);
             ptr::write_bytes(dst, val, len as usize);
         }
 
@@ -692,7 +697,7 @@ impl Instance {
         &mut self,
         memory_index: MemoryIndex,
         data_index: DataIndex,
-        dst: u32,
+        dst: u64,
         src: u32,
         len: u32,
     ) -> Result<(), Trap> {
@@ -713,29 +718,22 @@ impl Instance {
         &mut self,
         memory_index: MemoryIndex,
         data: &[u8],
-        dst: u32,
+        dst: u64,
         src: u32,
         len: u32,
     ) -> Result<(), Trap> {
         // https://webassembly.github.io/bulk-memory-operations/core/exec/instructions.html#exec-memory-init
 
         let memory = self.get_memory(memory_index);
+        let dst = self.validate_inbounds(memory.current_length, dst, len.into())?;
+        let src = self.validate_inbounds(data.len(), src.into(), len.into())?;
+        let len = len as usize;
 
-        if src
-            .checked_add(len)
-            .map_or(true, |n| usize::try_from(n).unwrap() > data.len())
-            || dst.checked_add(len).map_or(true, |m| {
-                usize::try_from(m).unwrap() > memory.current_length
-            })
-        {
-            return Err(Trap::wasm(ir::TrapCode::HeapOutOfBounds));
-        }
-
-        let src_slice = &data[src as usize..(src + len) as usize];
+        let src_slice = &data[src..(src + len)];
 
         unsafe {
-            let dst_start = memory.base.add(dst as usize);
-            let dst_slice = slice::from_raw_parts_mut(dst_start, len as usize);
+            let dst_start = memory.base.add(dst);
+            let dst_slice = slice::from_raw_parts_mut(dst_start, len);
             dst_slice.copy_from_slice(src_slice);
         }
 
@@ -279,14 +279,22 @@ fn initialize_tables(instance: &mut Instance, module: &Module) -> Result<(), Ins
 fn get_memory_init_start(
     init: &MemoryInitializer,
     instance: &Instance,
-) -> Result<u32, InstantiationError> {
+) -> Result<u64, InstantiationError> {
     match init.base {
         Some(base) => {
+            let mem64 = instance.module.memory_plans[init.memory_index]
+                .memory
+                .memory64;
             let val = unsafe {
-                if let Some(def_index) = instance.module.defined_global_index(base) {
-                    *instance.global(def_index).as_u32()
+                let global = if let Some(def_index) = instance.module.defined_global_index(base) {
+                    instance.global(def_index)
                 } else {
-                    *(*instance.imported_global(base).from).as_u32()
+                    &*instance.imported_global(base).from
+                };
+                if mem64 {
+                    *global.as_u64()
+                } else {
+                    u64::from(*global.as_u32())
                 }
             };
 
@@ -305,8 +313,9 @@ fn check_memory_init_bounds(
     for init in initializers {
         let memory = instance.get_memory(init.memory_index);
         let start = get_memory_init_start(init, instance)?;
-        let start = usize::try_from(start).unwrap();
-        let end = start.checked_add(init.data.len());
+        let end = usize::try_from(start)
+            .ok()
+            .and_then(|start| start.checked_add(init.data.len()));
 
         match end {
             Some(end) if end <= memory.current_length => {
@@ -334,7 +343,7 @@ fn initialize_memories(
             &init.data,
             get_memory_init_start(init, instance)?,
             0,
-            init.data.len() as u32,
+            u32::try_from(init.data.len()).unwrap(),
         )
         .map_err(InstantiationError::Trap)?;
     }
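Data-segment bases can now come from either an `i32` or an `i64` global, so the initializer start is widened to `u64` before bounds checking. A small sketch of just that widening step (the enum and names are illustrative, not the runtime's actual types):

```rust
/// Illustrative stand-in for a global's value as stored by the runtime.
enum GlobalValue {
    I32(u32),
    I64(u64),
}

/// Widen a data-segment base to u64: 32-bit memories zero-extend their i32
/// base, 64-bit memories use the i64 value directly.
fn memory_init_start(base: &GlobalValue, memory64: bool) -> u64 {
    match (memory64, base) {
        (true, GlobalValue::I64(v)) => *v,
        (false, GlobalValue::I32(v)) => u64::from(*v),
        // A mismatch between the memory's index type and the global's type
        // would already have been rejected by module validation.
        _ => unreachable!("validated module guarantees matching types"),
    }
}
```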
@@ -89,7 +89,7 @@ pub struct ModuleLimits {
     pub table_elements: u32,
 
     /// The maximum number of pages for any linear memory defined in a module.
-    pub memory_pages: u32,
+    pub memory_pages: u64,
 }
 
 impl ModuleLimits {
@@ -455,7 +455,7 @@ impl InstancePool {
                 .expect("failed to reset guard pages");
             drop(&mut memory); // require mutable on all platforms, not just uffd
 
-            let size = (memory.size() as usize) * (WASM_PAGE_SIZE as usize);
+            let size = memory.byte_size();
             drop(memory);
             decommit_memory_pages(base, size).expect("failed to decommit linear memory pages");
         }
@@ -499,7 +499,7 @@ impl InstancePool {
     fn set_instance_memories(
         instance: &mut Instance,
         mut memories: impl Iterator<Item = *mut u8>,
-        max_pages: u32,
+        max_pages: u64,
         mut limiter: Option<&mut dyn ResourceLimiter>,
     ) -> Result<(), InstantiationError> {
         let module = instance.module.as_ref();
@@ -599,7 +599,7 @@ struct MemoryPool {
     initial_memory_offset: usize,
     max_memories: usize,
     max_instances: usize,
-    max_wasm_pages: u32,
+    max_wasm_pages: u64,
 }
 
 impl MemoryPool {
@@ -1118,6 +1118,7 @@ mod test {
                 minimum: 0,
                 maximum: None,
                 shared: false,
+                memory64: false,
             },
             pre_guard_size: 0,
             offset_guard_size: 0,
@@ -1234,6 +1235,7 @@ mod test {
                 minimum: 0,
                 maximum: None,
                 shared: false,
+                memory64: false,
             },
             pre_guard_size: 0,
             offset_guard_size: 0,
@@ -1308,6 +1310,7 @@ mod test {
                 minimum: 6,
                 maximum: None,
                 shared: false,
+                memory64: false,
             },
             pre_guard_size: 0,
             offset_guard_size: 0,
@@ -1333,6 +1336,7 @@ mod test {
                 minimum: 1,
                 maximum: None,
                 shared: false,
+                memory64: false,
             },
             offset_guard_size: 0,
             pre_guard_size: 0,
@@ -213,7 +213,7 @@ impl FaultLocator {
             let instance = self.get_instance(index / self.max_memories);
 
             let init_page_index = (*instance).memories.get(memory_index).and_then(|m| {
-                if page_index < m.size() as usize {
+                if (addr - memory_start) < m.byte_size() {
                     Some(page_index)
                 } else {
                     None
@@ -500,6 +500,7 @@ mod test {
                 minimum: 2,
                 maximum: Some(2),
                 shared: false,
+                memory64: false,
            },
            style: MemoryStyle::Static { bound: 1 },
            offset_guard_size: 0,
@@ -190,14 +190,15 @@ pub extern "C" fn wasmtime_f64_nearest(x: f64) -> f64 {
 /// Implementation of memory.grow for locally-defined 32-bit memories.
 pub unsafe extern "C" fn wasmtime_memory32_grow(
     vmctx: *mut VMContext,
-    delta: u32,
+    delta: u64,
     memory_index: u32,
-) -> u32 {
+) -> usize {
     let instance = (*vmctx).instance_mut();
     let memory_index = MemoryIndex::from_u32(memory_index);
-    instance
-        .memory_grow(memory_index, delta)
-        .unwrap_or(u32::max_value())
+    match instance.memory_grow(memory_index, delta) {
+        Some(size_in_bytes) => size_in_bytes / (wasmtime_environ::WASM_PAGE_SIZE as usize),
+        None => usize::max_value(),
+    }
 }
 
 /// Implementation of `table.grow`.
@@ -317,10 +318,10 @@ pub unsafe extern "C" fn wasmtime_elem_drop(vmctx: *mut VMContext, elem_index: u
 pub unsafe extern "C" fn wasmtime_memory_copy(
     vmctx: *mut VMContext,
     dst_index: u32,
-    dst: u32,
+    dst: u64,
     src_index: u32,
-    src: u32,
-    len: u32,
+    src: u64,
+    len: u64,
 ) {
     let result = {
         let src_index = MemoryIndex::from_u32(src_index);
@@ -337,14 +338,14 @@ pub unsafe extern "C" fn wasmtime_memory_copy(
 pub unsafe extern "C" fn wasmtime_memory_fill(
     vmctx: *mut VMContext,
     memory_index: u32,
-    dst: u32,
+    dst: u64,
     val: u32,
-    len: u32,
+    len: u64,
 ) {
     let result = {
         let memory_index = MemoryIndex::from_u32(memory_index);
         let instance = (*vmctx).instance_mut();
-        instance.memory_fill(memory_index, dst, val, len)
+        instance.memory_fill(memory_index, dst, val as u8, len)
     };
     if let Err(trap) = result {
         raise_lib_trap(trap);
@@ -356,7 +357,7 @@ pub unsafe extern "C" fn wasmtime_memory_init(
     vmctx: *mut VMContext,
     memory_index: u32,
     data_index: u32,
-    dst: u32,
+    dst: u64,
     src: u32,
     len: u32,
 ) {
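As the commit message describes, libcall arguments coming from jit code are now `u64` while successful return values are bounded by the host, so the grow intrinsic converts the runtime's byte-based result back into pages at the boundary. A tiny sketch of that conversion (a free function with assumed names, not the actual libcall):

```rust
const WASM_PAGE_SIZE: usize = 64 * 1024;

/// Sketch of the value handed back to jit code after a grow request: the old
/// size in pages on success, or usize::MAX (wasm's -1) on failure.
fn grow_result_to_pages(old_size_in_bytes: Option<usize>) -> usize {
    match old_size_in_bytes {
        // The runtime tracks sizes in bytes; wasm's `memory.grow` speaks pages.
        Some(bytes) => bytes / WASM_PAGE_SIZE,
        // Failure is reported as -1, which is usize::MAX here.
        None => usize::MAX,
    }
}

fn main() {
    assert_eq!(grow_result_to_pages(Some(3 * WASM_PAGE_SIZE)), 3);
    assert_eq!(grow_result_to_pages(None), usize::MAX);
}
```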
@@ -5,15 +5,23 @@
 use crate::mmap::Mmap;
 use crate::vmcontext::VMMemoryDefinition;
 use crate::ResourceLimiter;
-use anyhow::{bail, Result};
+use anyhow::{bail, format_err, Result};
 use more_asserts::{assert_ge, assert_le};
 use std::convert::TryFrom;
-use wasmtime_environ::{MemoryPlan, MemoryStyle, WASM_MAX_PAGES, WASM_PAGE_SIZE};
+use wasmtime_environ::{MemoryPlan, MemoryStyle, WASM32_MAX_PAGES, WASM64_MAX_PAGES};
 
+const WASM_PAGE_SIZE: usize = wasmtime_environ::WASM_PAGE_SIZE as usize;
+const WASM_PAGE_SIZE_U64: u64 = wasmtime_environ::WASM_PAGE_SIZE as u64;
+
 /// A memory allocator
 pub trait RuntimeMemoryCreator: Send + Sync {
     /// Create new RuntimeLinearMemory
-    fn new_memory(&self, plan: &MemoryPlan) -> Result<Box<dyn RuntimeLinearMemory>>;
+    fn new_memory(
+        &self,
+        plan: &MemoryPlan,
+        minimum: usize,
+        maximum: Option<usize>,
+    ) -> Result<Box<dyn RuntimeLinearMemory>>;
 }
 
 /// A default memory allocator used by Wasmtime
@@ -21,27 +29,33 @@ pub struct DefaultMemoryCreator;
 
 impl RuntimeMemoryCreator for DefaultMemoryCreator {
     /// Create new MmapMemory
-    fn new_memory(&self, plan: &MemoryPlan) -> Result<Box<dyn RuntimeLinearMemory>> {
-        Ok(Box::new(MmapMemory::new(plan)?) as _)
+    fn new_memory(
+        &self,
+        plan: &MemoryPlan,
+        minimum: usize,
+        maximum: Option<usize>,
+    ) -> Result<Box<dyn RuntimeLinearMemory>> {
+        Ok(Box::new(MmapMemory::new(plan, minimum, maximum)?))
     }
 }
 
 /// A linear memory
 pub trait RuntimeLinearMemory: Send + Sync {
-    /// Returns the number of allocated wasm pages.
-    fn size(&self) -> u32;
+    /// Returns the number of allocated bytes.
+    fn byte_size(&self) -> usize;
 
-    /// Returns the maximum number of pages the memory can grow to.
+    /// Returns the maximum number of bytes the memory can grow to.
     /// Returns `None` if the memory is unbounded.
-    fn maximum(&self) -> Option<u32>;
+    fn maximum_byte_size(&self) -> Option<usize>;
 
-    /// Grow memory by the specified amount of wasm pages.
+    /// Grow memory to the specified amount of bytes.
     ///
     /// Returns `None` if memory can't be grown by the specified amount
-    /// of wasm pages.
-    fn grow(&mut self, delta: u32) -> Option<u32>;
+    /// of bytes.
+    fn grow_to(&mut self, size: usize) -> Option<()>;
 
-    /// Return a `VMMemoryDefinition` for exposing the memory to compiled wasm code.
+    /// Return a `VMMemoryDefinition` for exposing the memory to compiled wasm
+    /// code.
     fn vmmemory(&self) -> VMMemoryDefinition;
 }
 
@@ -49,10 +63,19 @@ pub trait RuntimeLinearMemory: Send + Sync {
 #[derive(Debug)]
 pub struct MmapMemory {
     // The underlying allocation.
-    mmap: WasmMmap,
+    mmap: Mmap,
 
-    // The optional maximum size in wasm pages of this linear memory.
-    maximum: Option<u32>,
+    // The number of bytes that are accessible in `mmap` and available for
+    // reading and writing.
+    //
+    // This region starts at `pre_guard_size` offset from the base of `mmap`.
+    accessible: usize,
+
+    // The optional maximum accessible size, in bytes, for this linear memory.
+    //
+    // Note that this maximum does not factor in guard pages, so this isn't the
+    // maximum size of the linear address space reservation for this memory.
+    maximum: Option<usize>,
 
     // Size in bytes of extra guard pages before the start and after the end to
     // optimize loads and stores with constant offsets.
@@ -60,52 +83,36 @@ pub struct MmapMemory {
     offset_guard_size: usize,
 }
 
-#[derive(Debug)]
-struct WasmMmap {
-    // Our OS allocation of mmap'd memory.
-    alloc: Mmap,
-    // The current logical size in wasm pages of this linear memory.
-    size: u32,
-}
-
 impl MmapMemory {
     /// Create a new linear memory instance with specified minimum and maximum number of wasm pages.
-    pub fn new(plan: &MemoryPlan) -> Result<Self> {
-        // `maximum` cannot be set to more than `65536` pages.
-        assert_le!(plan.memory.minimum, WASM_MAX_PAGES);
-        assert!(plan.memory.maximum.is_none() || plan.memory.maximum.unwrap() <= WASM_MAX_PAGES);
-
-        let offset_guard_bytes = plan.offset_guard_size as usize;
-        let pre_guard_bytes = plan.pre_guard_size as usize;
-
-        let minimum_pages = match plan.style {
-            MemoryStyle::Dynamic => plan.memory.minimum,
+    pub fn new(plan: &MemoryPlan, minimum: usize, maximum: Option<usize>) -> Result<Self> {
+        // It's a programmer error for these two configuration values to exceed
+        // the host available address space, so panic if such a configuration is
+        // found (mostly an issue for hypothetical 32-bit hosts).
+        let offset_guard_bytes = usize::try_from(plan.offset_guard_size).unwrap();
+        let pre_guard_bytes = usize::try_from(plan.pre_guard_size).unwrap();
+
+        let alloc_bytes = match plan.style {
+            MemoryStyle::Dynamic => minimum,
             MemoryStyle::Static { bound } => {
                 assert_ge!(bound, plan.memory.minimum);
-                bound
+                usize::try_from(bound.checked_mul(WASM_PAGE_SIZE_U64).unwrap()).unwrap()
             }
-        } as usize;
-        let minimum_bytes = minimum_pages.checked_mul(WASM_PAGE_SIZE as usize).unwrap();
-        let request_bytes = pre_guard_bytes
-            .checked_add(minimum_bytes)
-            .unwrap()
-            .checked_add(offset_guard_bytes)
-            .unwrap();
-        let mapped_pages = plan.memory.minimum as usize;
-        let accessible_bytes = mapped_pages * WASM_PAGE_SIZE as usize;
-
-        let mut mmap = WasmMmap {
-            alloc: Mmap::accessible_reserved(0, request_bytes)?,
-            size: plan.memory.minimum,
         };
-        if accessible_bytes > 0 {
-            mmap.alloc
-                .make_accessible(pre_guard_bytes, accessible_bytes)?;
+        let request_bytes = pre_guard_bytes
+            .checked_add(alloc_bytes)
+            .and_then(|i| i.checked_add(offset_guard_bytes))
+            .ok_or_else(|| format_err!("cannot allocate {} with guard regions", minimum))?;
+
+        let mut mmap = Mmap::accessible_reserved(0, request_bytes)?;
+        if minimum > 0 {
+            mmap.make_accessible(pre_guard_bytes, minimum)?;
         }
 
         Ok(Self {
-            mmap: mmap.into(),
-            maximum: plan.memory.maximum,
+            mmap,
+            accessible: minimum,
+            maximum,
             pre_guard_size: pre_guard_bytes,
             offset_guard_size: offset_guard_bytes,
        })
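The reservation is always `pre_guard + accessible + offset_guard` bytes, computed with checked arithmetic so an unsatisfiable request surfaces as an error instead of silently wrapping. A minimal sketch of that computation in isolation (function name and arguments are illustrative):

```rust
/// Sketch of the reservation-size computation for an mmap-backed memory.
/// Returns `None` when the total cannot be represented in `usize`.
fn request_bytes(pre_guard: usize, accessible: usize, offset_guard: usize) -> Option<usize> {
    pre_guard
        .checked_add(accessible)?   // bytes readable/writable by wasm
        .checked_add(offset_guard)  // trailing guard catches constant-offset accesses
}

fn main() {
    // 64 KiB of accessible memory plus a 2 GiB trailing guard region.
    assert_eq!(
        request_bytes(0, 64 * 1024, 2 << 30),
        Some(64 * 1024 + (2 << 30))
    );
    // A request that overflows the host address space is rejected.
    assert_eq!(request_bytes(0, usize::MAX, 1), None);
}
```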
@@ -113,88 +120,52 @@ impl MmapMemory {
 }
 
 impl RuntimeLinearMemory for MmapMemory {
-    /// Returns the number of allocated wasm pages.
-    fn size(&self) -> u32 {
-        self.mmap.size
+    fn byte_size(&self) -> usize {
+        self.accessible
     }
 
-    /// Returns the maximum number of pages the memory can grow to.
-    /// Returns `None` if the memory is unbounded.
-    fn maximum(&self) -> Option<u32> {
+    fn maximum_byte_size(&self) -> Option<usize> {
         self.maximum
     }
 
-    /// Grow memory by the specified amount of wasm pages.
-    ///
-    /// Returns `None` if memory can't be grown by the specified amount
-    /// of wasm pages.
-    fn grow(&mut self, delta: u32) -> Option<u32> {
-        // Optimization of memory.grow 0 calls.
-        if delta == 0 {
-            return Some(self.mmap.size);
-        }
-
-        let new_pages = match self.mmap.size.checked_add(delta) {
-            Some(new_pages) => new_pages,
-            // Linear memory size overflow.
-            None => return None,
-        };
-        let prev_pages = self.mmap.size;
-
-        if let Some(maximum) = self.maximum {
-            if new_pages > maximum {
-                // Linear memory size would exceed the declared maximum.
-                return None;
-            }
-        }
-
-        // Wasm linear memories are never allowed to grow beyond what is
-        // indexable. If the memory has no maximum, enforce the greatest
-        // limit here.
-        if new_pages > WASM_MAX_PAGES {
-            // Linear memory size would exceed the index range.
-            return None;
-        }
-
-        let delta_bytes = usize::try_from(delta).unwrap() * WASM_PAGE_SIZE as usize;
-        let prev_bytes = usize::try_from(prev_pages).unwrap() * WASM_PAGE_SIZE as usize;
-        let new_bytes = usize::try_from(new_pages).unwrap() * WASM_PAGE_SIZE as usize;
-
-        if new_bytes > self.mmap.alloc.len() - self.offset_guard_size - self.pre_guard_size {
+    fn grow_to(&mut self, new_size: usize) -> Option<()> {
+        if new_size > self.mmap.len() - self.offset_guard_size - self.pre_guard_size {
             // If the new size is within the declared maximum, but needs more memory than we
             // have on hand, it's a dynamic heap and it can move.
             let request_bytes = self
                 .pre_guard_size
-                .checked_add(new_bytes)?
+                .checked_add(new_size)?
                 .checked_add(self.offset_guard_size)?;
 
             let mut new_mmap = Mmap::accessible_reserved(0, request_bytes).ok()?;
             new_mmap
-                .make_accessible(self.pre_guard_size, new_bytes)
+                .make_accessible(self.pre_guard_size, new_size)
                 .ok()?;
 
-            new_mmap.as_mut_slice()[self.pre_guard_size..][..prev_bytes]
-                .copy_from_slice(&self.mmap.alloc.as_slice()[self.pre_guard_size..][..prev_bytes]);
+            new_mmap.as_mut_slice()[self.pre_guard_size..][..self.accessible]
+                .copy_from_slice(&self.mmap.as_slice()[self.pre_guard_size..][..self.accessible]);
 
-            self.mmap.alloc = new_mmap;
-        } else if delta_bytes > 0 {
+            self.mmap = new_mmap;
+        } else {
+            assert!(new_size > self.accessible);
             // Make the newly allocated pages accessible.
             self.mmap
-                .alloc
-                .make_accessible(self.pre_guard_size + prev_bytes, delta_bytes)
+                .make_accessible(
+                    self.pre_guard_size + self.accessible,
+                    new_size - self.accessible,
+                )
                 .ok()?;
         }
 
-        self.mmap.size = new_pages;
+        self.accessible = new_size;
 
-        Some(prev_pages)
+        Some(())
     }
 
-    /// Return a `VMMemoryDefinition` for exposing the memory to compiled wasm code.
     fn vmmemory(&self) -> VMMemoryDefinition {
         VMMemoryDefinition {
-            base: unsafe { self.mmap.alloc.as_mut_ptr().add(self.pre_guard_size) },
-            current_length: self.mmap.size as usize * WASM_PAGE_SIZE as usize,
+            base: unsafe { self.mmap.as_mut_ptr().add(self.pre_guard_size) },
+            current_length: self.accessible,
        }
    }
 }
@@ -208,8 +179,8 @@ pub enum Memory {
         /// slice is the maximum size of the memory that can be grown to.
         base: &'static mut [u8],
 
-        /// The current size, in wasm pages, of this memory.
-        size: u32,
+        /// The current size, in bytes, of this memory.
+        size: usize,
 
         /// A callback which makes portions of `base` accessible for when memory
         /// is grown. Otherwise it's expected that accesses to `base` will
@@ -234,8 +205,8 @@ impl Memory {
         creator: &dyn RuntimeMemoryCreator,
         limiter: Option<&mut dyn ResourceLimiter>,
     ) -> Result<Self> {
-        Self::limit_new(plan, limiter)?;
-        Ok(Memory::Dynamic(creator.new_memory(plan)?))
+        let (minimum, maximum) = Self::limit_new(plan, limiter)?;
+        Ok(Memory::Dynamic(creator.new_memory(plan, minimum, maximum)?))
     }
 
     /// Create a new static (immovable) memory instance for the specified plan.
@@ -245,48 +216,94 @@ impl Memory {
         make_accessible: fn(*mut u8, usize) -> Result<()>,
         limiter: Option<&mut dyn ResourceLimiter>,
     ) -> Result<Self> {
-        Self::limit_new(plan, limiter)?;
+        let (minimum, maximum) = Self::limit_new(plan, limiter)?;
 
-        let base = match plan.memory.maximum {
-            Some(max) if (max as usize) < base.len() / (WASM_PAGE_SIZE as usize) => {
-                &mut base[..(max * WASM_PAGE_SIZE) as usize]
-            }
+        let base = match maximum {
+            Some(max) if max < base.len() => &mut base[..max],
             _ => base,
         };
 
-        if plan.memory.minimum > 0 {
-            make_accessible(
-                base.as_mut_ptr(),
-                plan.memory.minimum as usize * WASM_PAGE_SIZE as usize,
-            )?;
+        if minimum > 0 {
+            make_accessible(base.as_mut_ptr(), minimum)?;
         }
 
         Ok(Memory::Static {
             base,
-            size: plan.memory.minimum,
+            size: minimum,
             make_accessible,
             #[cfg(all(feature = "uffd", target_os = "linux"))]
             guard_page_faults: Vec::new(),
         })
     }
 
-    fn limit_new(plan: &MemoryPlan, limiter: Option<&mut dyn ResourceLimiter>) -> Result<()> {
+    /// Calls the `limiter`, if specified, to optionally prevent a memory from
+    /// being allocated.
+    ///
+    /// Returns the minimum size and optional maximum size of the memory, in
+    /// bytes.
+    fn limit_new(
+        plan: &MemoryPlan,
+        limiter: Option<&mut dyn ResourceLimiter>,
+    ) -> Result<(usize, Option<usize>)> {
+        // Sanity-check what should already be true from wasm module validation.
+        let absolute_max = if plan.memory.memory64 {
+            WASM64_MAX_PAGES
+        } else {
+            WASM32_MAX_PAGES
+        };
+        assert_le!(plan.memory.minimum, absolute_max);
+        assert!(plan.memory.maximum.is_none() || plan.memory.maximum.unwrap() <= absolute_max);
+
+        // If the minimum memory size overflows the size of our own address
+        // space, then we can't satisfy this request.
+        let minimum = plan
+            .memory
+            .minimum
+            .checked_mul(WASM_PAGE_SIZE_U64)
+            .and_then(|m| usize::try_from(m).ok())
+            .ok_or_else(|| {
+                format_err!(
+                    "memory minimum size of {} pages exceeds memory limits",
+                    plan.memory.minimum
+                )
+            })?;
+
+        // The plan stores the maximum size in units of wasm pages, but we
+        // use units of bytes. Do the mapping here, and if we overflow for some
+        // reason then just assume that the listed maximum was our entire memory
+        // minus one wasm page since we can't grow past that anyway (presumably
+        // the kernel will reserve at least *something* for itself...)
+        let mut maximum = plan.memory.maximum.map(|max| {
+            usize::try_from(max)
+                .ok()
+                .and_then(|m| m.checked_mul(WASM_PAGE_SIZE))
+                .unwrap_or(usize::MAX - WASM_PAGE_SIZE)
+        });
+
+        // If this is a 32-bit memory and no maximum is otherwise listed then we
+        // need to still specify a maximum size of 4GB. If the host platform is
+        // 32-bit then there's no need to limit the maximum this way since no
+        // allocation of 4GB can succeed, but for 64-bit platforms this is
+        // required to limit memories to 4GB.
+        if !plan.memory.memory64 && maximum.is_none() {
+            maximum = usize::try_from(1u64 << 32).ok();
+        }
         if let Some(limiter) = limiter {
-            if !limiter.memory_growing(0, plan.memory.minimum, plan.memory.maximum) {
+            if !limiter.memory_growing(0, minimum, maximum) {
                 bail!(
                     "memory minimum size of {} pages exceeds memory limits",
                     plan.memory.minimum
                 );
             }
         }
-        Ok(())
+        Ok((minimum, maximum))
     }
 
     /// Returns the number of allocated wasm pages.
-    pub fn size(&self) -> u32 {
+    pub fn byte_size(&self) -> usize {
         match self {
             Memory::Static { size, .. } => *size,
-            Memory::Dynamic(mem) => mem.size(),
+            Memory::Dynamic(mem) => mem.byte_size(),
        }
    }
 
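The conversion from type-level page counts to host byte sizes is the one place a 32-bit host can fail early: `minimum * 65536` may simply not fit in `usize`, and a 32-bit memory with no declared maximum is still capped at 4 GiB on 64-bit hosts. A standalone sketch of that mapping under assumed names (a simplification of the policy above, not the runtime's exact code):

```rust
use std::convert::TryFrom;

const WASM_PAGE_SIZE: u64 = 64 * 1024;

/// Sketch: convert a memory type's page-denominated limits into host byte limits.
fn limits_in_bytes(
    minimum_pages: u64,
    maximum_pages: Option<u64>,
    memory64: bool,
) -> Option<(usize, Option<usize>)> {
    // A minimum that doesn't fit the host address space is an error (None here).
    let minimum = usize::try_from(minimum_pages.checked_mul(WASM_PAGE_SIZE)?).ok()?;

    // An overflowing maximum is clamped rather than rejected, since growth hits
    // the host limit long before reaching it anyway.
    let mut maximum = maximum_pages.map(|max| {
        usize::try_from(max.checked_mul(WASM_PAGE_SIZE).unwrap_or(u64::MAX)).unwrap_or(usize::MAX)
    });

    // 32-bit memories are implicitly bounded by 4 GiB even with no listed maximum.
    if !memory64 && maximum.is_none() {
        maximum = usize::try_from(1u64 << 32).ok();
    }
    Some((minimum, maximum))
}

fn main() {
    let (min, max) = limits_in_bytes(1, None, false).unwrap();
    assert_eq!(min, 65536);
    // On a 64-bit host the implicit maximum is 4 GiB.
    assert_eq!(max, usize::try_from(1u64 << 32).ok());
}
```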
@@ -296,10 +313,10 @@ impl Memory {
     ///
     /// The runtime maximum may not be equal to the maximum from the linear memory's
     /// Wasm type when it is being constrained by an instance allocator.
-    pub fn maximum(&self) -> Option<u32> {
+    pub fn maximum_byte_size(&self) -> Option<usize> {
         match self {
-            Memory::Static { base, .. } => Some((base.len() / (WASM_PAGE_SIZE as usize)) as u32),
-            Memory::Dynamic(mem) => mem.maximum(),
+            Memory::Static { base, .. } => Some(base.len()),
+            Memory::Dynamic(mem) => mem.maximum_byte_size(),
         }
     }
 
@@ -315,7 +332,8 @@ impl Memory {
     /// Grow memory by the specified amount of wasm pages.
     ///
     /// Returns `None` if memory can't be grown by the specified amount
-    /// of wasm pages.
+    /// of wasm pages. Returns `Some` with the old size of memory, in bytes, on
+    /// successful growth.
     ///
     /// # Safety
     ///
@@ -327,19 +345,27 @@ impl Memory {
     /// this unsafety.
     pub unsafe fn grow(
         &mut self,
-        delta: u32,
+        delta_pages: u64,
         limiter: Option<&mut dyn ResourceLimiter>,
-    ) -> Option<u32> {
-        let old_size = self.size();
-        if delta == 0 {
-            return Some(old_size);
+    ) -> Option<usize> {
+        let old_byte_size = self.byte_size();
+        if delta_pages == 0 {
+            return Some(old_byte_size);
         }
 
-        let new_size = old_size.checked_add(delta)?;
-        let maximum = self.maximum();
+        let new_byte_size = usize::try_from(delta_pages)
+            .ok()?
+            .checked_mul(WASM_PAGE_SIZE)?
+            .checked_add(old_byte_size)?;
+        let maximum = self.maximum_byte_size();
+
+        if let Some(max) = maximum {
+            if new_byte_size > max {
+                return None;
+            }
+        }
         if let Some(limiter) = limiter {
-            if !limiter.memory_growing(old_size, new_size, maximum) {
+            if !limiter.memory_growing(old_byte_size, new_byte_size, maximum) {
                 return None;
             }
         }
@@ -359,21 +385,21 @@ impl Memory {
                 make_accessible,
                 ..
             } => {
-                if new_size > maximum.unwrap_or(WASM_MAX_PAGES) {
+                if new_byte_size > base.len() {
                     return None;
                 }
 
-                let start = usize::try_from(old_size).unwrap() * WASM_PAGE_SIZE as usize;
-                let len = usize::try_from(delta).unwrap() * WASM_PAGE_SIZE as usize;
-
-                make_accessible(base.as_mut_ptr().add(start), len).ok()?;
-
-                *size = new_size;
-
-                Some(old_size)
+                make_accessible(
+                    base.as_mut_ptr().add(old_byte_size),
+                    new_byte_size - old_byte_size,
+                )
+                .ok()?;
+
+                *size = new_byte_size;
             }
-            Memory::Dynamic(mem) => mem.grow(delta),
+            Memory::Dynamic(mem) => mem.grow_to(new_byte_size)?,
         }
+        Some(old_byte_size)
     }
 
     /// Return a `VMMemoryDefinition` for exposing the memory to compiled wasm code.
@@ -381,7 +407,7 @@ impl Memory {
         match self {
             Memory::Static { base, size, .. } => VMMemoryDefinition {
                 base: base.as_ptr() as *mut _,
-                current_length: *size as usize * WASM_PAGE_SIZE as usize,
+                current_length: *size,
             },
             Memory::Dynamic(mem) => mem.vmmemory(),
         }
@@ -73,7 +73,11 @@ impl Mmap {
             )
         };
         if ptr as isize == -1_isize {
-            bail!("mmap failed: {}", io::Error::last_os_error());
+            bail!(
+                "mmap failed to allocate {:#x} bytes: {}",
+                mapping_size,
+                io::Error::last_os_error()
+            );
         }
 
         Self {
@@ -93,7 +97,11 @@ impl Mmap {
             )
         };
         if ptr as isize == -1_isize {
-            bail!("mmap failed: {}", io::Error::last_os_error());
+            bail!(
+                "mmap failed to allocate {:#x} bytes: {}",
+                mapping_size,
+                io::Error::last_os_error()
+            );
         }
 
         let mut result = Self {
@@ -3,7 +3,6 @@ use crate::trampoline::MemoryCreatorProxy;
 use anyhow::{bail, Result};
 use serde::{Deserialize, Serialize};
 use std::cmp;
-use std::convert::TryFrom;
 use std::fmt;
 #[cfg(feature = "cache")]
 use std::path::Path;
@@ -136,7 +135,7 @@ pub struct ModuleLimits {
     /// The reservation size of each linear memory is controlled by the
     /// [`static_memory_maximum_size`](Config::static_memory_maximum_size) setting and this value cannot
     /// exceed the configured static memory maximum size.
-    pub memory_pages: u32,
+    pub memory_pages: u64,
 }
 
 impl Default for ModuleLimits {
@@ -773,6 +772,21 @@ impl Config {
         self
     }
 
+    /// Configures whether the WebAssembly memory64 [proposal] will
+    /// be enabled for compilation.
+    ///
+    /// Note that the upstream specification is not finalized and Wasmtime
+    /// may also have bugs for this feature since it hasn't been exercised
+    /// much.
+    ///
+    /// This is `false` by default.
+    ///
+    /// [proposal]: https://github.com/webassembly/memory64
+    pub fn wasm_memory64(&mut self, enable: bool) -> &mut Self {
+        self.features.memory64 = enable;
+        self
+    }
+
     /// Configures which compilation strategy will be used for wasm modules.
     ///
     /// This method can be used to configure which compiler is used for wasm
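With the new `Config::wasm_memory64` knob, a host opts into 64-bit memories explicitly. A hedged usage sketch (the WAT module text and export names here are illustrative):

```rust
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // memory64 is off by default; enable it before compiling a module that
    // declares an i64-indexed memory.
    let mut config = Config::new();
    config.wasm_memory64(true);
    let engine = Engine::new(&config)?;

    // A module with a 64-bit memory of one page.
    let module = Module::new(&engine, r#"(module (memory i64 1) (export "m" (memory 0)))"#)?;

    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let memory = instance.get_memory(&mut store, "m").unwrap();

    // Page-based accessors now return u64 regardless of the index type.
    assert_eq!(memory.size(&store), 1u64);
    Ok(())
}
```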
@@ -1081,7 +1095,7 @@ impl Config {
     /// pooling allocator.
     pub fn static_memory_maximum_size(&mut self, max_size: u64) -> &mut Self {
         let max_pages = max_size / u64::from(wasmtime_environ::WASM_PAGE_SIZE);
-        self.tunables.static_memory_bound = u32::try_from(max_pages).unwrap_or(u32::max_value());
+        self.tunables.static_memory_bound = max_pages;
         self
     }
 
@@ -419,7 +419,7 @@ impl Table {
     /// let engine = Engine::default();
     /// let mut store = Store::new(&engine, ());
     ///
-    /// let ty = TableType::new(ValType::FuncRef, Limits::new(2, None));
+    /// let ty = TableType::new(ValType::FuncRef, 2, None);
     /// let table = Table::new(&mut store, ty, Val::FuncRef(None))?;
     ///
     /// let module = Module::new(
@@ -442,7 +442,7 @@ impl Table {
     }
 
     fn _new(store: &mut StoreOpaque, ty: TableType, init: Val) -> Result<Table> {
-        if init.ty() != *ty.element() {
+        if init.ty() != ty.element() {
             bail!(
                 "table initialization value type {:?} does not have expected type {:?}",
                 init.ty(),
@@ -467,7 +467,7 @@ impl Table {
         unsafe {
             let table = Table::from_wasmtime_table(wasmtime_export, store);
             (*table.wasmtime_table(store))
-                .fill(0, init, ty.limits().min())
+                .fill(0, init, ty.minimum())
                 .map_err(Trap::from_runtime)?;
 
             Ok(table)
@@ -9,13 +9,13 @@ impl StoreLimitsBuilder {
         Self(StoreLimits::default())
     }
 
-    /// The maximum number of WebAssembly pages a linear memory can grow to.
+    /// The maximum number of bytes a linear memory can grow to.
     ///
     /// Growing a linear memory beyond this limit will fail.
     ///
-    /// By default, linear memory pages will not be limited.
-    pub fn memory_pages(mut self, limit: u32) -> Self {
-        self.0.memory_pages = Some(limit);
+    /// By default, linear memory will not be limited.
+    pub fn memory_size(mut self, limit: usize) -> Self {
+        self.0.memory_size = Some(limit);
         self
     }
 
@@ -67,7 +67,7 @@ impl StoreLimitsBuilder {
 
 /// Provides limits for a [`Store`](crate::Store).
 pub struct StoreLimits {
-    memory_pages: Option<u32>,
+    memory_size: Option<usize>,
     table_elements: Option<u32>,
     instances: usize,
     tables: usize,
@@ -77,7 +77,7 @@ pub struct StoreLimits {
 impl Default for StoreLimits {
     fn default() -> Self {
         Self {
-            memory_pages: None,
+            memory_size: None,
             table_elements: None,
             instances: wasmtime_runtime::DEFAULT_INSTANCE_LIMIT,
             tables: wasmtime_runtime::DEFAULT_TABLE_LIMIT,
@@ -87,8 +87,8 @@ impl Default for StoreLimits {
 }
 
 impl ResourceLimiter for StoreLimits {
-    fn memory_growing(&mut self, _current: u32, desired: u32, _maximum: Option<u32>) -> bool {
-        match self.memory_pages {
+    fn memory_growing(&mut self, _current: usize, desired: usize, _maximum: Option<usize>) -> bool {
+        match self.memory_size {
            Some(limit) if desired > limit => false,
            _ => true,
        }
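Callers of the built-in limiter therefore switch from a page count to a byte count. A hedged sketch of configuring the renamed builder method (assuming the builder's other methods are unchanged; wiring the limits into a `Store` is elided since that API isn't part of this diff):

```rust
use wasmtime::{StoreLimits, StoreLimitsBuilder};

fn build_limits() -> StoreLimits {
    // Previously `.memory_pages(100)`; the cap is now expressed directly in
    // bytes, here 100 wasm pages' worth.
    StoreLimitsBuilder::new()
        .memory_size(100 * 64 * 1024)
        .table_elements(10_000)
        .build()
}
```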
@@ -2,6 +2,7 @@ use crate::store::{StoreData, StoreOpaque, Stored};
|
|||||||
use crate::trampoline::generate_memory_export;
|
use crate::trampoline::generate_memory_export;
|
||||||
use crate::{AsContext, AsContextMut, MemoryType, StoreContext, StoreContextMut};
|
use crate::{AsContext, AsContextMut, MemoryType, StoreContext, StoreContextMut};
|
||||||
use anyhow::{bail, Result};
|
use anyhow::{bail, Result};
|
||||||
|
use std::convert::TryFrom;
|
||||||
use std::slice;
|
use std::slice;
|
||||||
|
|
||||||
/// Error for out of bounds [`Memory`] access.
|
/// Error for out of bounds [`Memory`] access.
|
||||||
@@ -209,7 +210,7 @@ impl Memory {
|
|||||||
/// let engine = Engine::default();
|
/// let engine = Engine::default();
|
||||||
/// let mut store = Store::new(&engine, ());
|
/// let mut store = Store::new(&engine, ());
|
||||||
///
|
///
|
||||||
/// let memory_ty = MemoryType::new(Limits::new(1, None));
|
/// let memory_ty = MemoryType::new(1, None);
|
||||||
/// let memory = Memory::new(&mut store, memory_ty)?;
|
/// let memory = Memory::new(&mut store, memory_ty)?;
|
||||||
///
|
///
|
||||||
/// let module = Module::new(&engine, "(module (memory (import \"\" \"\") 1))")?;
|
/// let module = Module::new(&engine, "(module (memory (import \"\" \"\") 1))")?;
|
||||||
@@ -246,7 +247,7 @@ impl Memory {
|
|||||||
/// let instance = Instance::new(&mut store, &module, &[])?;
|
/// let instance = Instance::new(&mut store, &module, &[])?;
|
||||||
/// let memory = instance.get_memory(&mut store, "mem").unwrap();
|
/// let memory = instance.get_memory(&mut store, "mem").unwrap();
|
||||||
/// let ty = memory.ty(&store);
|
/// let ty = memory.ty(&store);
|
||||||
/// assert_eq!(ty.limits().min(), 1);
|
/// assert_eq!(ty.minimum(), 1);
|
||||||
/// # Ok(())
|
/// # Ok(())
|
||||||
/// # }
|
/// # }
|
||||||
/// ```
|
/// ```
|
||||||
@@ -403,8 +404,8 @@ impl Memory {
     /// # Panics
     ///
     /// Panics if this memory doesn't belong to `store`.
-    pub fn size(&self, store: impl AsContext) -> u32 {
-        (self.data_size(store) / wasmtime_environ::WASM_PAGE_SIZE as usize) as u32
+    pub fn size(&self, store: impl AsContext) -> u64 {
+        (self.data_size(store) / wasmtime_environ::WASM_PAGE_SIZE as usize) as u64
     }
 
     /// Grows this WebAssembly memory by `delta` pages.
@@ -448,7 +449,7 @@ impl Memory {
     /// # Ok(())
     /// # }
     /// ```
-    pub fn grow(&self, mut store: impl AsContextMut, delta: u32) -> Result<u32> {
+    pub fn grow(&self, mut store: impl AsContextMut, delta: u64) -> Result<u64> {
         let mem = self.wasmtime_memory(&mut store.as_context_mut().opaque());
         let store = store.as_context_mut();
         unsafe {
@@ -456,7 +457,7 @@ impl Memory {
             Some(size) => {
                 let vm = (*mem).vmmemory();
                 *store[self.0].definition = vm;
-                Ok(size)
+                Ok(u64::try_from(size).unwrap() / u64::from(wasmtime_environ::WASM_PAGE_SIZE))
             }
             None => bail!("failed to grow memory by `{}`", delta),
         }
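(For illustration: with `size` and `grow` now speaking `u64` pages for both 32-bit and 64-bit memories, embedder code along these lines is expected to work. This is a sketch, not part of the patch, and assumes a `store` is already set up.)

    let mem = Memory::new(&mut store, MemoryType::new(1, None))?;
    let previous: u64 = mem.grow(&mut store, 2)?; // previous size, in 64k pages
    assert_eq!(previous, 1);
    assert_eq!(mem.size(&store), 3); // page count is now reported as u64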
@@ -499,10 +500,11 @@ impl Memory {
     }
 }
 
-/// A linear memory. This trait provides an interface for raw memory buffers which are used
-/// by wasmtime, e.g. inside ['Memory']. Such buffers are in principle not thread safe.
-/// By implementing this trait together with MemoryCreator,
-/// one can supply wasmtime with custom allocated host managed memory.
+/// A linear memory. This trait provides an interface for raw memory buffers
+/// which are used by wasmtime, e.g. inside ['Memory']. Such buffers are in
+/// principle not thread safe. By implementing this trait together with
+/// MemoryCreator, one can supply wasmtime with custom allocated host managed
+/// memory.
 ///
 /// # Safety
 ///
@@ -514,18 +516,20 @@ impl Memory {
 /// Note that this is a relatively new and experimental feature and it is
 /// recommended to be familiar with wasmtime runtime code to use it.
 pub unsafe trait LinearMemory: Send + Sync + 'static {
-    /// Returns the number of allocated wasm pages.
-    fn size(&self) -> u32;
+    /// Returns the number of allocated bytes which are accessible at this time.
+    fn byte_size(&self) -> usize;
 
-    /// Returns the maximum number of pages the memory can grow to.
-    /// Returns `None` if the memory is unbounded.
-    fn maximum(&self) -> Option<u32>;
+    /// Returns the maximum number of bytes the memory can grow to.
+    ///
+    /// Returns `None` if the memory is unbounded, or `Some` if memory cannot
+    /// grow beyond a specified limit.
+    fn maximum_byte_size(&self) -> Option<usize>;
 
-    /// Grow memory by the specified amount of wasm pages.
+    /// Grows this memory to have the `new_size`, in bytes, specified.
     ///
     /// Returns `None` if memory can't be grown by the specified amount
-    /// of wasm pages.
-    fn grow(&mut self, delta: u32) -> Option<u32>;
+    /// of bytes. Returns `Some` if memory was grown successfully.
+    fn grow_to(&mut self, new_size: usize) -> Option<()>;
 
     /// Return the allocated memory as a mutable pointer to u8.
     fn as_ptr(&self) -> *mut u8;
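(A minimal sketch of what an embedder-side implementation of the byte-oriented trait could look like, assuming a fixed, non-growable buffer and no required trait items beyond the four shown above; the names are invented for illustration and are not part of this patch.)

    struct FixedMemory {
        buf: Box<[u8]>,
    }

    unsafe impl LinearMemory for FixedMemory {
        fn byte_size(&self) -> usize {
            self.buf.len()
        }
        fn maximum_byte_size(&self) -> Option<usize> {
            Some(self.buf.len()) // cannot grow past the initial allocation
        }
        fn grow_to(&mut self, new_size: usize) -> Option<()> {
            // accept "growth" only within the already-allocated buffer
            if new_size <= self.buf.len() {
                Some(())
            } else {
                None
            }
        }
        fn as_ptr(&self) -> *mut u8 {
            self.buf.as_ptr() as *mut u8
        }
    }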
@@ -547,7 +551,9 @@ pub unsafe trait MemoryCreator: Send + Sync {
     /// Create a new `LinearMemory` object from the specified parameters.
     ///
     /// The type of memory being created is specified by `ty` which indicates
-    /// both the minimum and maximum size, in wasm pages.
+    /// both the minimum and maximum size, in wasm pages. The minimum and
+    /// maximum sizes, in bytes, are also specified as parameters to avoid
+    /// integer conversion if desired.
     ///
     /// The `reserved_size_in_bytes` value indicates the expected size of the
     /// reservation that is to be made for this memory. If this value is `None`
@@ -557,23 +563,27 @@ pub unsafe trait MemoryCreator: Send + Sync {
     /// size at the end. Note that this reservation need only be a virtual
     /// memory reservation, physical memory does not need to be allocated
     /// immediately. In this case `grow` should never move the base pointer and
-    /// the maximum size of `ty` is guaranteed to fit within `reserved_size_in_bytes`.
+    /// the maximum size of `ty` is guaranteed to fit within
+    /// `reserved_size_in_bytes`.
     ///
-    /// The `guard_size_in_bytes` parameter indicates how many bytes of space, after the
-    /// memory allocation, is expected to be unmapped. JIT code will elide
-    /// bounds checks based on the `guard_size_in_bytes` provided, so for JIT code to
-    /// work correctly the memory returned will need to be properly guarded with
-    /// `guard_size_in_bytes` bytes left unmapped after the base allocation.
+    /// The `guard_size_in_bytes` parameter indicates how many bytes of space,
+    /// after the memory allocation, is expected to be unmapped. JIT code will
+    /// elide bounds checks based on the `guard_size_in_bytes` provided, so for
+    /// JIT code to work correctly the memory returned will need to be properly
+    /// guarded with `guard_size_in_bytes` bytes left unmapped after the base
+    /// allocation.
     ///
-    /// Note that the `reserved_size_in_bytes` and `guard_size_in_bytes` options are tuned from
-    /// the various [`Config`](crate::Config) methods about memory
-    /// sizes/guards. Additionally these two values are guaranteed to be
+    /// Note that the `reserved_size_in_bytes` and `guard_size_in_bytes` options
+    /// are tuned from the various [`Config`](crate::Config) methods about
+    /// memory sizes/guards. Additionally these two values are guaranteed to be
     /// multiples of the system page size.
     fn new_memory(
         &self,
         ty: MemoryType,
-        reserved_size_in_bytes: Option<u64>,
-        guard_size_in_bytes: u64,
+        minimum: usize,
+        maximum: Option<usize>,
+        reserved_size_in_bytes: Option<usize>,
+        guard_size_in_bytes: usize,
     ) -> Result<Box<dyn LinearMemory>, String>;
 }
 
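(For context, a creator implementing this trait is handed to wasmtime through the engine configuration; a hedged sketch, assuming the existing `Config::with_host_memory` hook and a hypothetical `MyMemoryCreator` type implementing the trait above:)

    let mut config = Config::new();
    config.with_host_memory(Arc::new(MyMemoryCreator::default()));
    let engine = Engine::new(&config)?;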
@@ -589,7 +599,7 @@ mod tests {
         cfg.static_memory_maximum_size(0)
             .dynamic_memory_guard_size(0);
         let mut store = Store::new(&Engine::new(&cfg).unwrap(), ());
-        let ty = MemoryType::new(Limits::new(1, None));
+        let ty = MemoryType::new(1, None);
         let mem = Memory::new(&mut store, ty).unwrap();
         let store = store.as_context();
         assert_eq!(store[mem.0].memory.offset_guard_size, 0);
@@ -1,8 +1,9 @@
 use crate::memory::{LinearMemory, MemoryCreator};
 use crate::store::{InstanceId, StoreOpaque};
 use crate::trampoline::create_handle;
-use crate::{Limits, MemoryType};
+use crate::MemoryType;
 use anyhow::{anyhow, Result};
+use std::convert::TryFrom;
 use std::sync::Arc;
 use wasmtime_environ::entity::PrimaryMap;
 use wasmtime_environ::{wasm, MemoryPlan, MemoryStyle, Module, WASM_PAGE_SIZE};
@@ -11,14 +12,10 @@ use wasmtime_runtime::{RuntimeLinearMemory, RuntimeMemoryCreator, VMMemoryDefini
 pub fn create_memory(store: &mut StoreOpaque<'_>, memory: &MemoryType) -> Result<InstanceId> {
     let mut module = Module::new();
 
-    let memory = wasm::Memory {
-        minimum: memory.limits().min(),
-        maximum: memory.limits().max(),
-        shared: false, // TODO
-    };
-
-    let memory_plan =
-        wasmtime_environ::MemoryPlan::for_memory(memory, &store.engine().config().tunables);
+    let memory_plan = wasmtime_environ::MemoryPlan::for_memory(
+        memory.wasmtime_memory().clone(),
+        &store.engine().config().tunables,
+    );
     let memory_id = module.memory_plans.push(memory_plan);
     module
         .exports
@@ -32,22 +29,22 @@ struct LinearMemoryProxy {
 }
 
 impl RuntimeLinearMemory for LinearMemoryProxy {
-    fn size(&self) -> u32 {
-        self.mem.size()
+    fn byte_size(&self) -> usize {
+        self.mem.byte_size()
     }
 
-    fn maximum(&self) -> Option<u32> {
-        self.mem.maximum()
+    fn maximum_byte_size(&self) -> Option<usize> {
+        self.mem.maximum_byte_size()
     }
 
-    fn grow(&mut self, delta: u32) -> Option<u32> {
-        self.mem.grow(delta)
+    fn grow_to(&mut self, new_size: usize) -> Option<()> {
+        self.mem.grow_to(new_size)
     }
 
     fn vmmemory(&self) -> VMMemoryDefinition {
         VMMemoryDefinition {
             base: self.mem.as_ptr(),
-            current_length: self.mem.size() as usize * WASM_PAGE_SIZE as usize,
+            current_length: self.mem.byte_size(),
         }
     }
 }
@@ -56,14 +53,27 @@ impl RuntimeLinearMemory for LinearMemoryProxy {
 pub(crate) struct MemoryCreatorProxy(pub Arc<dyn MemoryCreator>);
 
 impl RuntimeMemoryCreator for MemoryCreatorProxy {
-    fn new_memory(&self, plan: &MemoryPlan) -> Result<Box<dyn RuntimeLinearMemory>> {
-        let ty = MemoryType::new(Limits::new(plan.memory.minimum, plan.memory.maximum));
+    fn new_memory(
+        &self,
+        plan: &MemoryPlan,
+        minimum: usize,
+        maximum: Option<usize>,
+    ) -> Result<Box<dyn RuntimeLinearMemory>> {
+        let ty = MemoryType::from_wasmtime_memory(&plan.memory);
         let reserved_size_in_bytes = match plan.style {
-            MemoryStyle::Static { bound } => Some(bound as u64 * WASM_PAGE_SIZE as u64),
+            MemoryStyle::Static { bound } => {
+                Some(usize::try_from(bound * (WASM_PAGE_SIZE as u64)).unwrap())
+            }
            MemoryStyle::Dynamic => None,
         };
         self.0
-            .new_memory(ty, reserved_size_in_bytes, plan.offset_guard_size)
+            .new_memory(
+                ty,
+                minimum,
+                maximum,
+                reserved_size_in_bytes,
+                usize::try_from(plan.offset_guard_size).unwrap(),
+            )
             .map(|mem| Box::new(LinearMemoryProxy { mem }) as Box<dyn RuntimeLinearMemory>)
             .map_err(|e| anyhow!(e))
     }
@@ -1,26 +1,16 @@
 use crate::store::{InstanceId, StoreOpaque};
 use crate::trampoline::create_handle;
-use crate::{TableType, ValType};
-use anyhow::{bail, Result};
+use crate::TableType;
+use anyhow::Result;
 use wasmtime_environ::entity::PrimaryMap;
 use wasmtime_environ::{wasm, Module};
 
 pub fn create_table(store: &mut StoreOpaque<'_>, table: &TableType) -> Result<InstanceId> {
     let mut module = Module::new();
-
-    let table = wasm::Table {
-        wasm_ty: table.element().to_wasm_type(),
-        minimum: table.limits().min(),
-        maximum: table.limits().max(),
-        ty: match table.element() {
-            ValType::FuncRef => wasm::TableElementType::Func,
-            ValType::ExternRef => wasm::TableElementType::Val(wasmtime_runtime::ref_type()),
-            _ => bail!("cannot support {:?} as a table element", table.element()),
-        },
-    };
-    let tunable = Default::default();
-
-    let table_plan = wasmtime_environ::TablePlan::for_table(table, &tunable);
+    let table_plan = wasmtime_environ::TablePlan::for_table(
+        table.wasmtime_table().clone(),
+        &store.engine().config().tunables,
+    );
     let table_id = module.table_plans.push(table_plan);
     // TODO: can this `exports.insert` get removed?
     module
@@ -18,38 +18,6 @@ pub enum Mutability {
     Var,
 }
 
-/// Limits of tables/memories where the units of the limits are defined by the
-/// table/memory types.
-///
-/// A minimum is always available but the maximum may not be present.
-#[derive(Debug, Clone, Hash, Eq, PartialEq)]
-pub struct Limits {
-    min: u32,
-    max: Option<u32>,
-}
-
-impl Limits {
-    /// Creates a new set of limits with the minimum and maximum both specified.
-    pub fn new(min: u32, max: Option<u32>) -> Limits {
-        Limits { min, max }
-    }
-
-    /// Creates a new `Limits` with the `min` specified and no maximum specified.
-    pub fn at_least(min: u32) -> Limits {
-        Limits::new(min, None)
-    }
-
-    /// Returns the minimum amount for these limits.
-    pub fn min(&self) -> u32 {
-        self.min
-    }
-
-    /// Returns the maximum amount for these limits, if specified.
-    pub fn max(&self) -> Option<u32> {
-        self.max
-    }
-}
-
 // Value Types
 
 /// A list of all possible value types in WebAssembly.
@@ -357,38 +325,50 @@ impl GlobalType {
 /// which `call_indirect` can invoke other functions.
 #[derive(Debug, Clone, Hash, Eq, PartialEq)]
 pub struct TableType {
-    element: ValType,
-    limits: Limits,
+    ty: wasm::Table,
 }
 
 impl TableType {
     /// Creates a new table descriptor which will contain the specified
     /// `element` and have the `limits` applied to its length.
-    pub fn new(element: ValType, limits: Limits) -> TableType {
-        TableType { element, limits }
+    pub fn new(element: ValType, min: u32, max: Option<u32>) -> TableType {
+        TableType {
+            ty: wasm::Table {
+                ty: match element {
+                    ValType::FuncRef => wasm::TableElementType::Func,
+                    _ => wasm::TableElementType::Val(element.get_wasmtime_type()),
+                },
+                wasm_ty: element.to_wasm_type(),
+                minimum: min,
+                maximum: max,
+            },
+        }
     }
 
     /// Returns the element value type of this table.
-    pub fn element(&self) -> &ValType {
-        &self.element
+    pub fn element(&self) -> ValType {
+        ValType::from_wasm_type(&self.ty.wasm_ty)
     }
 
-    /// Returns the limits, in units of elements, of this table.
-    pub fn limits(&self) -> &Limits {
-        &self.limits
+    /// Returns minimum number of elements this table must have
+    pub fn minimum(&self) -> u32 {
+        self.ty.minimum
+    }
+
+    /// Returns the optionally-specified maximum number of elements this table
+    /// can have.
+    ///
+    /// If this returns `None` then the table is not limited in size.
+    pub fn maximum(&self) -> Option<u32> {
+        self.ty.maximum
     }
 
     pub(crate) fn from_wasmtime_table(table: &wasm::Table) -> TableType {
-        let ty = match table.ty {
-            wasm::TableElementType::Func => ValType::FuncRef,
-            #[cfg(target_pointer_width = "64")]
-            wasm::TableElementType::Val(ir::types::R64) => ValType::ExternRef,
-            #[cfg(target_pointer_width = "32")]
-            wasm::TableElementType::Val(ir::types::R32) => ValType::ExternRef,
-            _ => panic!("only `funcref` and `externref` tables supported"),
-        };
-        let limits = Limits::new(table.minimum, table.maximum);
-        TableType::new(ty, limits)
+        TableType { ty: table.clone() }
+    }
+
+    pub(crate) fn wasmtime_table(&self) -> &wasm::Table {
+        &self.ty
     }
 }
 
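(Quick illustration of the reshaped constructor and accessors; hypothetical usage, not taken from the patch itself.)

    let ty = TableType::new(ValType::FuncRef, 2, Some(10));
    assert_eq!(ty.minimum(), 2);
    assert_eq!(ty.maximum(), Some(10));
    assert_eq!(ty.element(), ValType::FuncRef);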
@@ -400,23 +380,78 @@ impl TableType {
 /// chunks of addressable memory.
 #[derive(Debug, Clone, Hash, Eq, PartialEq)]
 pub struct MemoryType {
-    limits: Limits,
+    ty: wasm::Memory,
 }
 
 impl MemoryType {
-    /// Creates a new descriptor for a WebAssembly memory given the specified
-    /// limits of the memory.
-    pub fn new(limits: Limits) -> MemoryType {
-        MemoryType { limits }
+    /// Creates a new descriptor for a 32-bit WebAssembly memory given the
+    /// specified limits of the memory.
+    ///
+    /// The `minimum` and `maximum` values here are specified in units of
+    /// WebAssembly pages, which are 64k.
+    pub fn new(minimum: u32, maximum: Option<u32>) -> MemoryType {
+        MemoryType {
+            ty: wasm::Memory {
+                memory64: false,
+                shared: false,
+                minimum: minimum.into(),
+                maximum: maximum.map(|i| i.into()),
+            },
+        }
     }
 
-    /// Returns the limits (in pages) that are configured for this memory.
-    pub fn limits(&self) -> &Limits {
-        &self.limits
+    /// Creates a new descriptor for a 64-bit WebAssembly memory given the
+    /// specified limits of the memory.
+    ///
+    /// The `minimum` and `maximum` values here are specified in units of
+    /// WebAssembly pages, which are 64k.
+    ///
+    /// Note that 64-bit memories are part of the memory64 proposal for
+    /// WebAssembly which is not standardized yet.
+    pub fn new64(minimum: u64, maximum: Option<u64>) -> MemoryType {
+        MemoryType {
+            ty: wasm::Memory {
+                memory64: true,
+                shared: false,
+                minimum,
+                maximum,
+            },
+        }
+    }
+
+    /// Returns whether this is a 64-bit memory or not.
+    ///
+    /// Note that 64-bit memories are part of the memory64 proposal for
+    /// WebAssembly which is not standardized yet.
+    pub fn is_64(&self) -> bool {
+        self.ty.memory64
+    }
+
+    /// Returns minimum number of WebAssembly pages this memory must have.
+    ///
+    /// Note that the return value, while a `u64`, will always fit into a `u32`
+    /// for 32-bit memories.
+    pub fn minimum(&self) -> u64 {
+        self.ty.minimum
+    }
+
+    /// Returns the optionally-specified maximum number of pages this memory
+    /// can have.
+    ///
+    /// If this returns `None` then the memory is not limited in size.
+    ///
+    /// Note that the return value, while a `u64`, will always fit into a `u32`
+    /// for 32-bit memories.
+    pub fn maximum(&self) -> Option<u64> {
+        self.ty.maximum
     }
 
     pub(crate) fn from_wasmtime_memory(memory: &wasm::Memory) -> MemoryType {
-        MemoryType::new(Limits::new(memory.minimum, memory.maximum))
+        MemoryType { ty: memory.clone() }
+    }
+
+    pub(crate) fn wasmtime_memory(&self) -> &wasm::Memory {
+        &self.ty
     }
 }
 
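(To illustrate the two constructors side by side; hypothetical usage, not taken from the patch itself.)

    let m32 = MemoryType::new(1, Some(2)); // 32-bit memory, limits in 64k pages
    let m64 = MemoryType::new64(1, Some(1 << 20)); // 64-bit memory, memory64 proposal
    assert!(!m32.is_64());
    assert!(m64.is_64());
    assert_eq!(m32.minimum(), 1); // minimum/maximum are always reported as u64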
@@ -60,6 +60,7 @@ impl MatchCx<'_> {
 
     fn memory_ty(&self, expected: &Memory, actual: &Memory) -> Result<()> {
         if expected.shared == actual.shared
+            && expected.memory64 == actual.memory64
             && expected.minimum <= actual.minimum
             && match expected.maximum {
                 Some(expected) => match actual.maximum {
@@ -34,11 +34,11 @@ pub fn link_spectest<T>(linker: &mut Linker<T>, store: &mut Store<T>) -> Result<
     let g = Global::new(&mut *store, ty, Val::F64(0x4084_d000_0000_0000))?;
     linker.define("spectest", "global_f64", g)?;
 
-    let ty = TableType::new(ValType::FuncRef, Limits::new(10, Some(20)));
+    let ty = TableType::new(ValType::FuncRef, 10, Some(20));
     let table = Table::new(&mut *store, ty, Val::FuncRef(None))?;
     linker.define("spectest", "table", table)?;
 
-    let ty = MemoryType::new(Limits::new(1, Some(2)));
+    let ty = MemoryType::new(1, Some(2));
     let memory = Memory::new(&mut *store, ty)?;
     linker.define("spectest", "memory", memory)?;
 
@@ -64,7 +64,7 @@ fn main() -> Result<()> {
     assert!(memory.grow(&mut store, 0).is_ok());
 
     println!("Creating stand-alone memory...");
-    let memorytype = MemoryType::new(Limits::new(5, Some(5)));
+    let memorytype = MemoryType::new(5, Some(5));
     let memory2 = Memory::new(&mut store, memorytype)?;
     assert_eq!(memory2.size(&store), 5);
     assert!(memory2.grow(&mut store, 1).is_err());
@@ -27,6 +27,7 @@ fn run(data: &[u8]) -> Result<()> {
     let mut config: SwarmConfig = u.arbitrary()?;
     config.simd_enabled = u.arbitrary()?;
     config.module_linking_enabled = u.arbitrary()?;
+    config.memory64_enabled = u.arbitrary()?;
     // Don't generate modules that allocate more than 6GB
     config.max_memory_pages = 6 << 30;
     let module = ConfiguredModule::new(config.clone(), &mut u)?;
@@ -35,6 +36,7 @@ fn run(data: &[u8]) -> Result<()> {
     cfg.wasm_multi_memory(config.max_memories > 1);
     cfg.wasm_module_linking(config.module_linking_enabled);
     cfg.wasm_simd(config.simd_enabled);
+    cfg.wasm_memory64(config.memory64_enabled);
 
     oracles::instantiate_with_config(&module.to_bytes(), true, cfg, timeout);
     Ok(())
@@ -22,35 +22,35 @@ fn bad_tables() {
     let mut store = Store::<()>::default();
 
     // i32 not supported yet
-    let ty = TableType::new(ValType::I32, Limits::new(0, Some(1)));
+    let ty = TableType::new(ValType::I32, 0, Some(1));
     assert!(Table::new(&mut store, ty.clone(), Val::I32(0)).is_err());
 
     // mismatched initializer
-    let ty = TableType::new(ValType::FuncRef, Limits::new(0, Some(1)));
+    let ty = TableType::new(ValType::FuncRef, 0, Some(1));
     assert!(Table::new(&mut store, ty.clone(), Val::I32(0)).is_err());
 
     // get out of bounds
-    let ty = TableType::new(ValType::FuncRef, Limits::new(0, Some(1)));
+    let ty = TableType::new(ValType::FuncRef, 0, Some(1));
     let t = Table::new(&mut store, ty.clone(), Val::FuncRef(None)).unwrap();
     assert!(t.get(&mut store, 0).is_none());
     assert!(t.get(&mut store, u32::max_value()).is_none());
 
     // set out of bounds or wrong type
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, Some(1)));
+    let ty = TableType::new(ValType::FuncRef, 1, Some(1));
     let t = Table::new(&mut store, ty.clone(), Val::FuncRef(None)).unwrap();
     assert!(t.set(&mut store, 0, Val::I32(0)).is_err());
     assert!(t.set(&mut store, 0, Val::FuncRef(None)).is_ok());
     assert!(t.set(&mut store, 1, Val::FuncRef(None)).is_err());
 
     // grow beyond max
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, Some(1)));
+    let ty = TableType::new(ValType::FuncRef, 1, Some(1));
     let t = Table::new(&mut store, ty.clone(), Val::FuncRef(None)).unwrap();
     assert!(t.grow(&mut store, 0, Val::FuncRef(None)).is_ok());
     assert!(t.grow(&mut store, 1, Val::FuncRef(None)).is_err());
     assert_eq!(t.size(&store), 1);
 
     // grow wrong type
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, Some(2)));
+    let ty = TableType::new(ValType::FuncRef, 1, Some(2));
     let t = Table::new(&mut store, ty.clone(), Val::FuncRef(None)).unwrap();
     assert!(t.grow(&mut store, 1, Val::I32(0)).is_err());
     assert_eq!(t.size(&store), 1);
@@ -69,9 +69,9 @@ fn cross_store() -> anyhow::Result<()> {
     let func = Func::wrap(&mut store2, || {});
     let ty = GlobalType::new(ValType::I32, Mutability::Const);
     let global = Global::new(&mut store2, ty, Val::I32(0))?;
-    let ty = MemoryType::new(Limits::new(1, None));
+    let ty = MemoryType::new(1, None);
     let memory = Memory::new(&mut store2, ty)?;
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     let table = Table::new(&mut store2, ty, Val::FuncRef(None))?;
 
     let need_func = Module::new(&engine, r#"(module (import "" "" (func)))"#)?;
@@ -99,7 +99,7 @@ fn cross_store() -> anyhow::Result<()> {
 
     // ============ Cross-store tables ==============
 
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     assert!(Table::new(&mut store2, ty.clone(), store1val.clone()).is_err());
     let t1 = Table::new(&mut store2, ty.clone(), store2val.clone())?;
     assert!(t1.set(&mut store2, 0, store1val.clone()).is_err());
@@ -218,7 +218,7 @@ fn create_get_set_funcref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());
 
-    let table_ty = TableType::new(ValType::FuncRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::FuncRef, 10, None);
     let init = Val::FuncRef(Some(Func::wrap(&mut store, || {})));
     let table = Table::new(&mut store, table_ty, init)?;
 
@@ -236,7 +236,7 @@ fn fill_funcref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());
 
-    let table_ty = TableType::new(ValType::FuncRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::FuncRef, 10, None);
     let table = Table::new(&mut store, table_ty, Val::FuncRef(None))?;
 
     for i in 0..10 {
@@ -263,7 +263,7 @@ fn grow_funcref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());
 
-    let table_ty = TableType::new(ValType::FuncRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::FuncRef, 10, None);
     let table = Table::new(&mut store, table_ty, Val::FuncRef(None))?;
 
     assert_eq!(table.size(&store), 10);
@@ -280,7 +280,7 @@ fn create_get_set_externref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());
 
-    let table_ty = TableType::new(ValType::ExternRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::ExternRef, 10, None);
     let table = Table::new(
         &mut store,
         table_ty,
@@ -315,7 +315,7 @@ fn fill_externref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());
 
-    let table_ty = TableType::new(ValType::ExternRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::ExternRef, 10, None);
     let table = Table::new(&mut store, table_ty, Val::ExternRef(None))?;
 
     for i in 0..10 {
@@ -364,7 +364,7 @@ fn grow_externref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());
 
-    let table_ty = TableType::new(ValType::ExternRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::ExternRef, 10, None);
     let table = Table::new(&mut store, table_ty, Val::ExternRef(None))?;
 
     assert_eq!(table.size(&store), 10);
@@ -378,7 +378,7 @@ fn grow_externref_tables_via_api() -> anyhow::Result<()> {
 fn read_write_memory_via_api() {
     let cfg = Config::new();
     let mut store = Store::new(&Engine::new(&cfg).unwrap(), ());
-    let ty = MemoryType::new(Limits::new(1, None));
+    let ty = MemoryType::new(1, None);
     let mem = Memory::new(&mut store, ty).unwrap();
     mem.grow(&mut store, 1).unwrap();
 
@@ -303,7 +303,7 @@ fn table_drops_externref() -> anyhow::Result<()> {
     let externref = ExternRef::new(SetFlagOnDrop(flag.clone()));
     Table::new(
         &mut store,
-        TableType::new(ValType::ExternRef, Limits::new(1, None)),
+        TableType::new(ValType::ExternRef, 1, None),
         externref.into(),
     )?;
     drop(store);
@@ -1,6 +1,8 @@
 use anyhow::Result;
 use wasmtime::*;
 
+const WASM_PAGE_SIZE: usize = wasmtime_environ::WASM_PAGE_SIZE as usize;
+
 #[test]
 fn test_limits() -> Result<()> {
     let engine = Engine::default();
@@ -12,7 +14,7 @@ fn test_limits() -> Result<()> {
     let mut store = Store::new(
         &engine,
         StoreLimitsBuilder::new()
-            .memory_pages(10)
+            .memory_size(10 * WASM_PAGE_SIZE)
             .table_elements(5)
             .build(),
     );
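(The store limits are now expressed in bytes rather than pages; one wasm page is 64 KiB = 65,536 bytes, so the old `.memory_pages(10)` corresponds to the byte limit below. Hypothetical usage for illustration.)

    let limits = StoreLimitsBuilder::new()
        .memory_size(10 * 65_536) // 10 wasm pages, expressed in bytes
        .table_elements(5)
        .build();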
@@ -23,7 +25,7 @@ fn test_limits() -> Result<()> {
     // Test instance exports and host objects hitting the limit
     for memory in std::array::IntoIter::new([
         instance.get_memory(&mut store, "m").unwrap(),
-        Memory::new(&mut store, MemoryType::new(Limits::new(0, None)))?,
+        Memory::new(&mut store, MemoryType::new(0, None))?,
     ]) {
         memory.grow(&mut store, 3)?;
         memory.grow(&mut store, 5)?;
@@ -43,7 +45,7 @@ fn test_limits() -> Result<()> {
         instance.get_table(&mut store, "t").unwrap(),
         Table::new(
             &mut store,
-            TableType::new(ValType::FuncRef, Limits::new(0, None)),
+            TableType::new(ValType::FuncRef, 0, None),
             Val::FuncRef(None),
         )?,
     ]) {
@@ -71,7 +73,12 @@ fn test_limits_memory_only() -> Result<()> {
         r#"(module (memory (export "m") 0) (table (export "t") 0 anyfunc))"#,
     )?;
 
-    let mut store = Store::new(&engine, StoreLimitsBuilder::new().memory_pages(10).build());
+    let mut store = Store::new(
+        &engine,
+        StoreLimitsBuilder::new()
+            .memory_size(10 * WASM_PAGE_SIZE)
+            .build(),
+    );
     store.limiter(|s| s as &mut dyn ResourceLimiter);
 
     let instance = Instance::new(&mut store, &module, &[])?;
@@ -79,7 +86,7 @@ fn test_limits_memory_only() -> Result<()> {
     // Test instance exports and host objects hitting the limit
     for memory in std::array::IntoIter::new([
         instance.get_memory(&mut store, "m").unwrap(),
-        Memory::new(&mut store, MemoryType::new(Limits::new(0, None)))?,
+        Memory::new(&mut store, MemoryType::new(0, None))?,
     ]) {
         memory.grow(&mut store, 3)?;
         memory.grow(&mut store, 5)?;
@@ -99,7 +106,7 @@ fn test_limits_memory_only() -> Result<()> {
         instance.get_table(&mut store, "t").unwrap(),
         Table::new(
             &mut store,
-            TableType::new(ValType::FuncRef, Limits::new(0, None)),
+            TableType::new(ValType::FuncRef, 0, None),
             Val::FuncRef(None),
         )?,
     ]) {
@@ -117,7 +124,12 @@ fn test_initial_memory_limits_exceeded() -> Result<()> {
     let engine = Engine::default();
     let module = Module::new(&engine, r#"(module (memory (export "m") 11))"#)?;
 
-    let mut store = Store::new(&engine, StoreLimitsBuilder::new().memory_pages(10).build());
+    let mut store = Store::new(
+        &engine,
+        StoreLimitsBuilder::new()
+            .memory_size(10 * WASM_PAGE_SIZE)
+            .build(),
+    );
     store.limiter(|s| s as &mut dyn ResourceLimiter);
 
     match Instance::new(&mut store, &module, &[]) {
@@ -128,7 +140,7 @@ fn test_initial_memory_limits_exceeded() -> Result<()> {
         ),
     }
 
-    match Memory::new(&mut store, MemoryType::new(Limits::new(25, None))) {
+    match Memory::new(&mut store, MemoryType::new(25, None)) {
         Ok(_) => unreachable!(),
         Err(e) => assert_eq!(
             e.to_string(),
@@ -155,7 +167,7 @@ fn test_limits_table_only() -> Result<()> {
     // Test instance exports and host objects *not* hitting the limit
     for memory in std::array::IntoIter::new([
         instance.get_memory(&mut store, "m").unwrap(),
-        Memory::new(&mut store, MemoryType::new(Limits::new(0, None)))?,
+        Memory::new(&mut store, MemoryType::new(0, None))?,
     ]) {
         memory.grow(&mut store, 3)?;
         memory.grow(&mut store, 5)?;
@@ -168,7 +180,7 @@ fn test_limits_table_only() -> Result<()> {
         instance.get_table(&mut store, "t").unwrap(),
         Table::new(
             &mut store,
-            TableType::new(ValType::FuncRef, Limits::new(0, None)),
+            TableType::new(ValType::FuncRef, 0, None),
             Val::FuncRef(None),
         )?,
     ]) {
@@ -206,7 +218,7 @@ fn test_initial_table_limits_exceeded() -> Result<()> {
 
     match Table::new(
         &mut store,
-        TableType::new(ValType::FuncRef, Limits::new(99, None)),
+        TableType::new(ValType::FuncRef, 99, None),
         Val::FuncRef(None),
     ) {
         Ok(_) => unreachable!(),
@@ -241,7 +253,12 @@ fn test_pooling_allocator_initial_limits_exceeded() -> Result<()> {
         r#"(module (memory (export "m1") 2) (memory (export "m2") 5))"#,
     )?;
 
-    let mut store = Store::new(&engine, StoreLimitsBuilder::new().memory_pages(3).build());
+    let mut store = Store::new(
+        &engine,
+        StoreLimitsBuilder::new()
+            .memory_size(3 * WASM_PAGE_SIZE)
+            .build(),
+    );
     store.limiter(|s| s as &mut dyn ResourceLimiter);
 
     match Instance::new(&mut store, &module, &[]) {
@@ -268,15 +285,12 @@ struct MemoryContext {
 }
 
 impl ResourceLimiter for MemoryContext {
-    fn memory_growing(&mut self, current: u32, desired: u32, maximum: Option<u32>) -> bool {
+    fn memory_growing(&mut self, current: usize, desired: usize, maximum: Option<usize>) -> bool {
         // Check if the desired exceeds a maximum (either from Wasm or from the host)
-        if desired > maximum.unwrap_or(u32::MAX) {
-            self.limit_exceeded = true;
-            return false;
-        }
+        assert!(desired < maximum.unwrap_or(usize::MAX));
 
-        assert_eq!(current as usize * 0x10000, self.wasm_memory_used,);
-        let desired = desired as usize * 0x10000;
+        assert_eq!(current as usize, self.wasm_memory_used);
+        let desired = desired as usize;
 
         if desired + self.host_memory_used > self.memory_limit {
             self.limit_exceeded = true;
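(Since the limiter callback now receives byte counts, a host limiter can compare directly against a byte budget. A minimal hedged sketch with invented names, assuming `table_growing` keeps its element-count signature and the remaining trait items keep their defaults.)

    struct ByteBudget {
        budget: usize,
    }

    impl ResourceLimiter for ByteBudget {
        fn memory_growing(&mut self, current: usize, desired: usize, maximum: Option<usize>) -> bool {
            // `current` and `desired` are linear-memory sizes in bytes.
            desired <= maximum.unwrap_or(usize::MAX)
                && desired.saturating_sub(current) <= self.budget
        }

        fn table_growing(&mut self, _current: u32, desired: u32, maximum: Option<u32>) -> bool {
            desired <= maximum.unwrap_or(u32::MAX)
        }
    }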
@@ -52,20 +52,20 @@ fn link_twice_bad() -> Result<()> {
     assert!(linker.define("g", "3", global.clone()).is_err());
 
     // memories
-    let ty = MemoryType::new(Limits::new(1, None));
+    let ty = MemoryType::new(1, None);
     let memory = Memory::new(&mut store, ty)?;
     linker.define("m", "", memory.clone())?;
     assert!(linker.define("m", "", memory.clone()).is_err());
-    let ty = MemoryType::new(Limits::new(2, None));
+    let ty = MemoryType::new(2, None);
     let memory = Memory::new(&mut store, ty)?;
     assert!(linker.define("m", "", memory.clone()).is_err());
 
     // tables
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     let table = Table::new(&mut store, ty, Val::FuncRef(None))?;
     linker.define("t", "", table.clone())?;
     assert!(linker.define("t", "", table.clone()).is_err());
-    let ty = TableType::new(ValType::FuncRef, Limits::new(2, None));
+    let ty = TableType::new(ValType::FuncRef, 2, None);
     let table = Table::new(&mut store, ty, Val::FuncRef(None))?;
     assert!(linker.define("t", "", table.clone()).is_err());
     Ok(())
@@ -75,7 +75,14 @@ fn test_traps(store: &mut Store<()>, funcs: &[TestFunc], addr: u32, mem: &Memory
         let base = u64::from(func.offset) + u64::from(addr);
         let range = base..base + u64::from(func.width);
         if range.start >= mem_size || range.end >= mem_size {
-            assert!(result.is_err());
+            assert!(
+                result.is_err(),
+                "access at {}+{}+{} succeeded but should have failed when memory has {} bytes",
+                addr,
+                func.offset,
+                func.width,
+                mem_size
+            );
         } else {
             assert!(result.is_ok());
         }
@@ -97,6 +104,7 @@ fn offsets_static_dynamic_oh_my() -> Result<()> {
             config.dynamic_memory_guard_size(guard_size);
             config.static_memory_guard_size(guard_size);
             config.guard_before_linear_memory(guard_before_linear_memory);
+            config.cranelift_debug_verifier(true);
             engines.push(Engine::new(&config)?);
         }
     }
@@ -105,9 +113,9 @@ fn offsets_static_dynamic_oh_my() -> Result<()> {
     engines.par_iter().for_each(|engine| {
         let module = module(&engine).unwrap();
 
-        for limits in [Limits::new(1, Some(2)), Limits::new(1, None)].iter() {
+        for (min, max) in [(1, Some(2)), (1, None)].iter() {
             let mut store = Store::new(&engine, ());
-            let mem = Memory::new(&mut store, MemoryType::new(limits.clone())).unwrap();
+            let mem = Memory::new(&mut store, MemoryType::new(*min, *max)).unwrap();
             let instance = Instance::new(&mut store, &module, &[mem.into()]).unwrap();
             let funcs = find_funcs(&mut store, &instance);
 
@@ -137,8 +145,8 @@ fn guards_present() -> Result<()> {
     config.guard_before_linear_memory(true);
     let engine = Engine::new(&config)?;
     let mut store = Store::new(&engine, ());
-    let static_mem = Memory::new(&mut store, MemoryType::new(Limits::new(1, Some(2))))?;
-    let dynamic_mem = Memory::new(&mut store, MemoryType::new(Limits::new(1, None)))?;
+    let static_mem = Memory::new(&mut store, MemoryType::new(1, Some(2)))?;
+    let dynamic_mem = Memory::new(&mut store, MemoryType::new(1, None))?;
 
     let assert_guards = |store: &Store<()>| unsafe {
         // guards before
@@ -1,13 +1,14 @@
 #[cfg(not(target_os = "windows"))]
 mod not_for_windows {
     use wasmtime::*;
-    use wasmtime_environ::{WASM_MAX_PAGES, WASM_PAGE_SIZE};
+    use wasmtime_environ::{WASM32_MAX_PAGES, WASM_PAGE_SIZE};
 
     use libc::MAP_FAILED;
     use libc::{mmap, mprotect, munmap};
     use libc::{sysconf, _SC_PAGESIZE};
     use libc::{MAP_ANON, MAP_PRIVATE, PROT_NONE, PROT_READ, PROT_WRITE};
 
+    use std::convert::TryFrom;
     use std::io::Error;
     use std::ptr::null_mut;
     use std::sync::{Arc, Mutex};
@@ -16,77 +17,63 @@ mod not_for_windows {
         mem: usize,
         size: usize,
         guard_size: usize,
-        used_wasm_pages: u32,
-        glob_page_counter: Arc<Mutex<u64>>,
+        used_wasm_bytes: usize,
+        glob_bytes_counter: Arc<Mutex<usize>>,
     }
 
     impl CustomMemory {
-        unsafe fn new(
-            num_wasm_pages: u32,
-            max_wasm_pages: u32,
-            glob_counter: Arc<Mutex<u64>>,
-        ) -> Self {
+        unsafe fn new(minimum: usize, maximum: usize, glob_counter: Arc<Mutex<usize>>) -> Self {
             let page_size = sysconf(_SC_PAGESIZE) as usize;
             let guard_size = page_size;
-            let size = max_wasm_pages as usize * WASM_PAGE_SIZE as usize + guard_size;
-            let used_size = num_wasm_pages as usize * WASM_PAGE_SIZE as usize;
+            let size = maximum + guard_size;
             assert_eq!(size % page_size, 0); // we rely on WASM_PAGE_SIZE being multiple of host page size
 
             let mem = mmap(null_mut(), size, PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0);
             assert_ne!(mem, MAP_FAILED, "mmap failed: {}", Error::last_os_error());
 
-            let r = mprotect(mem, used_size, PROT_READ | PROT_WRITE);
+            let r = mprotect(mem, minimum, PROT_READ | PROT_WRITE);
             assert_eq!(r, 0, "mprotect failed: {}", Error::last_os_error());
-            *glob_counter.lock().unwrap() += num_wasm_pages as u64;
+            *glob_counter.lock().unwrap() += minimum;
 
             Self {
                 mem: mem as usize,
                 size,
                 guard_size,
-                used_wasm_pages: num_wasm_pages,
-                glob_page_counter: glob_counter,
+                used_wasm_bytes: minimum,
+                glob_bytes_counter: glob_counter,
             }
         }
     }
 
     impl Drop for CustomMemory {
         fn drop(&mut self) {
-            let n = self.used_wasm_pages as u64;
-            *self.glob_page_counter.lock().unwrap() -= n;
+            *self.glob_bytes_counter.lock().unwrap() -= self.used_wasm_bytes;
             let r = unsafe { munmap(self.mem as *mut _, self.size) };
             assert_eq!(r, 0, "munmap failed: {}", Error::last_os_error());
         }
     }
 
     unsafe impl LinearMemory for CustomMemory {
-        fn size(&self) -> u32 {
-            self.used_wasm_pages
+        fn byte_size(&self) -> usize {
+            self.used_wasm_bytes
        }
 
-        fn maximum(&self) -> Option<u32> {
-            Some((self.size as u32 - self.guard_size as u32) / WASM_PAGE_SIZE)
+        fn maximum_byte_size(&self) -> Option<usize> {
+            Some(self.size - self.guard_size)
         }
 
-        fn grow(&mut self, delta: u32) -> Option<u32> {
-            let delta_size = (delta as usize).checked_mul(WASM_PAGE_SIZE as usize)?;
-            let prev_pages = self.used_wasm_pages;
-            let prev_size = (prev_pages as usize).checked_mul(WASM_PAGE_SIZE as usize)?;
-
-            let new_pages = prev_pages.checked_add(delta)?;
-
-            if new_pages > self.maximum().unwrap() {
-                return None;
-            }
+        fn grow_to(&mut self, new_size: usize) -> Option<()> {
+            println!("grow to {:x}", new_size);
+            let delta = new_size - self.used_wasm_bytes;
             unsafe {
-                let start = (self.mem as *mut u8).add(prev_size) as _;
-                let r = mprotect(start, delta_size, PROT_READ | PROT_WRITE);
+                let start = (self.mem as *mut u8).add(self.used_wasm_bytes) as _;
+                let r = mprotect(start, delta, PROT_READ | PROT_WRITE);
                 assert_eq!(r, 0, "mprotect failed: {}", Error::last_os_error());
             }
 
-            *self.glob_page_counter.lock().unwrap() += delta as u64;
-            self.used_wasm_pages = new_pages;
-            Some(prev_pages)
+            *self.glob_bytes_counter.lock().unwrap() += delta;
+            self.used_wasm_bytes = new_size;
+            Some(())
         }
 
         fn as_ptr(&self) -> *mut u8 {
@@ -96,14 +83,14 @@ mod not_for_windows {
 
     struct CustomMemoryCreator {
         pub num_created_memories: Mutex<usize>,
-        pub num_total_pages: Arc<Mutex<u64>>,
+        pub num_total_bytes: Arc<Mutex<usize>>,
     }
 
     impl CustomMemoryCreator {
         pub fn new() -> Self {
             Self {
                 num_created_memories: Mutex::new(0),
-                num_total_pages: Arc::new(Mutex::new(0)),
+                num_total_bytes: Arc::new(Mutex::new(0)),
             }
         }
     }
@@ -112,17 +99,21 @@ mod not_for_windows {
         fn new_memory(
            &self,
            ty: MemoryType,
-            reserved_size: Option<u64>,
-            guard_size: u64,
+            minimum: usize,
+            maximum: Option<usize>,
+            reserved_size: Option<usize>,
+            guard_size: usize,
        ) -> Result<Box<dyn LinearMemory>, String> {
            assert_eq!(guard_size, 0);
            assert!(reserved_size.is_none());
-            let max = ty.limits().max().unwrap_or(WASM_MAX_PAGES);
+            assert!(!ty.is_64());
            unsafe {
                let mem = Box::new(CustomMemory::new(
-                    ty.limits().min(),
-                    max,
-                    self.num_total_pages.clone(),
+                    minimum,
+                    maximum.unwrap_or(
+                        usize::try_from(WASM32_MAX_PAGES * u64::from(WASM_PAGE_SIZE)).unwrap(),
+                    ),
+                    self.num_total_bytes.clone(),
                ));
                *self.num_created_memories.lock().unwrap() += 1;
                Ok(mem)
@@ -186,11 +177,11 @@ mod not_for_windows {
        );

        // we take the lock outside the assert, so it won't get poisoned on assert failure
-        let tot_pages = *mem_creator.num_total_pages.lock().unwrap();
-        assert_eq!(tot_pages, 4);
+        let tot_pages = *mem_creator.num_total_bytes.lock().unwrap();
+        assert_eq!(tot_pages, (4 * WASM_PAGE_SIZE) as usize);

        drop(store);
-        let tot_pages = *mem_creator.num_total_pages.lock().unwrap();
+        let tot_pages = *mem_creator.num_total_bytes.lock().unwrap();
        assert_eq!(tot_pages, 0);

        Ok(())
@@ -170,8 +170,8 @@ fn imports_exports() -> Result<()> {
     assert_eq!(mem_export.name(), "m");
     match mem_export.ty() {
         ExternType::Memory(m) => {
-            assert_eq!(m.limits().min(), 1);
-            assert_eq!(m.limits().max(), None);
+            assert_eq!(m.minimum(), 1);
+            assert_eq!(m.maximum(), None);
         }
         _ => panic!("unexpected type"),
     }
@@ -179,9 +179,9 @@ fn imports_exports() -> Result<()> {
     assert_eq!(table_export.name(), "t");
     match table_export.ty() {
         ExternType::Table(t) => {
-            assert_eq!(t.limits().min(), 1);
-            assert_eq!(t.limits().max(), None);
-            assert_eq!(*t.element(), ValType::FuncRef);
+            assert_eq!(t.minimum(), 1);
+            assert_eq!(t.maximum(), None);
+            assert_eq!(t.element(), ValType::FuncRef);
         }
         _ => panic!("unexpected type"),
     }
@@ -3,7 +3,7 @@ use wasmtime::*;
 #[test]
 fn get_none() {
     let mut store = Store::<()>::default();
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     let table = Table::new(&mut store, ty, Val::FuncRef(None)).unwrap();
     match table.get(&mut store, 0) {
         Some(Val::FuncRef(None)) => {}
@@ -15,7 +15,7 @@ fn get_none() {
 #[test]
 fn fill_wrong() {
     let mut store = Store::<()>::default();
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     let table = Table::new(&mut store, ty, Val::FuncRef(None)).unwrap();
     assert_eq!(
         table
@@ -25,7 +25,7 @@ fn fill_wrong() {
         "value does not match table element type"
     );

-    let ty = TableType::new(ValType::ExternRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::ExternRef, 1, None);
     let table = Table::new(&mut store, ty, Val::ExternRef(None)).unwrap();
     assert_eq!(
         table
@@ -39,9 +39,9 @@ fn fill_wrong() {
 #[test]
 fn copy_wrong() {
     let mut store = Store::<()>::default();
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     let table1 = Table::new(&mut store, ty, Val::FuncRef(None)).unwrap();
-    let ty = TableType::new(ValType::ExternRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::ExternRef, 1, None);
     let table2 = Table::new(&mut store, ty, Val::ExternRef(None)).unwrap();
     assert_eq!(
         Table::copy(&mut store, &table1, 0, &table2, 0, 1)
@@ -14,16 +14,16 @@ include!(concat!(env!("OUT_DIR"), "/wast_testsuite_tests.rs"));
 fn run_wast(wast: &str, strategy: Strategy, pooling: bool) -> anyhow::Result<()> {
     let wast = Path::new(wast);

-    let simd = wast.iter().any(|s| s == "simd");
-    let multi_memory = wast.iter().any(|s| s == "multi-memory");
-    let module_linking = wast.iter().any(|s| s == "module-linking");
-    let threads = wast.iter().any(|s| s == "threads");
-    let bulk_mem = multi_memory || wast.iter().any(|s| s == "bulk-memory-operations");
+    let simd = feature_found(wast, "simd");
+    let memory64 = feature_found(wast, "memory64");
+    let multi_memory = feature_found(wast, "multi-memory");
+    let module_linking = feature_found(wast, "module-linking");
+    let threads = feature_found(wast, "threads");
+    let bulk_mem = memory64 || multi_memory || feature_found(wast, "bulk-memory-operations");

     // Some simd tests assume support for multiple tables, which are introduced
     // by reference types.
-    let reftypes = simd || wast.iter().any(|s| s == "reference-types");
+    let reftypes = simd || feature_found(wast, "reference-types");

     // Threads aren't implemented in the old backend, so skip those tests.
     if threads && cfg!(feature = "old-x86-backend") {
@@ -37,12 +37,14 @@ fn run_wast(wast: &str, strategy: Strategy, pooling: bool) -> anyhow::Result<()>
         .wasm_multi_memory(multi_memory || module_linking)
         .wasm_module_linking(module_linking)
         .wasm_threads(threads)
+        .wasm_memory64(memory64)
         .strategy(strategy)?
         .cranelift_debug_verifier(true);

-    if wast.ends_with("canonicalize-nan.wast") {
+    if feature_found(wast, "canonicalize-nan") {
         cfg.cranelift_nan_canonicalization(true);
     }
+    let test_allocates_lots_of_memory = wast.ends_with("more-than-4gb.wast");

     // By default we'll allocate huge chunks (6gb) of the address space for each
     // linear memory. This is typically fine but when we emulate tests with QEMU
@@ -54,10 +56,30 @@ fn run_wast(wast: &str, strategy: Strategy, pooling: bool) -> anyhow::Result<()>
     // tests suite from 10GiB to 600MiB. Previously we saw that crossing the
     // 10GiB threshold caused our processes to get OOM killed on CI.
     if std::env::var("WASMTIME_TEST_NO_HOG_MEMORY").is_ok() {
+        // The pooling allocator hogs ~6TB of virtual address space for each
+        // store, so if we don't want to hog memory then ignore pooling tests.
+        if pooling {
+            return Ok(());
+        }
+
+        // If the test allocates a lot of memory, that's considered "hogging"
+        // memory, so skip it.
+        if test_allocates_lots_of_memory {
+            return Ok(());
+        }
+
+        // Don't use 4gb address space reservations when not hogging memory.
         cfg.static_memory_maximum_size(0);
     }

     let _pooling_lock = if pooling {
+        // Some memory64 tests take more than 4gb of resident memory to test,
+        // but we don't want to configure the pooling allocator to allow that
+        // (that's a ton of memory to reserve), so we skip those tests.
+        if test_allocates_lots_of_memory {
+            return Ok(());
+        }
+
         // The limits here are crafted such that the wast tests should pass.
         // However, these limits may become insufficient in the future as the wast tests change.
         // If a wast test fails because of a limit being "exceeded" or if memory/table
@@ -91,6 +113,13 @@ fn run_wast(wast: &str, strategy: Strategy, pooling: bool) -> anyhow::Result<()>
     Ok(())
 }

+fn feature_found(path: &Path, name: &str) -> bool {
+    path.iter().any(|part| match part.to_str() {
+        Some(s) => s.contains(name),
+        None => false,
+    })
+}
+
 // The pooling tests make about 6TB of address space reservation which means
 // that we shouldn't let too many of them run concurrently at once. On
 // high-cpu-count systems (e.g. 80 threads) this leads to mmap failures because
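For reference, a usage illustration only (the second path below is made up, not a file from this change): `feature_found` simply checks whether any component of the .wast path contains the feature name, which is how the new tests/misc_testsuite/memory64 directory turns the proposal on for every test inside it.

    use std::path::Path;

    fn feature_found(path: &Path, name: &str) -> bool {
        path.iter().any(|part| match part.to_str() {
            Some(s) => s.contains(name),
            None => false,
        })
    }

    fn main() {
        // A real path added by this change enables memory64...
        assert!(feature_found(
            Path::new("tests/misc_testsuite/memory64/bounds.wast"),
            "memory64"
        ));
        // ...while an unrelated (hypothetical) path does not.
        assert!(!feature_found(Path::new("tests/misc_testsuite/other.wast"), "memory64"));
    }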
tests/misc_testsuite/memory64/bounds.wast (new file, 54 lines):

(assert_unlinkable
  (module
    (memory i64 1)
    (data (i64.const 0xffff_ffff_ffff) "x"))
  "out of bounds memory access")

(module
  (memory i64 1)

  (func (export "copy") (param i64 i64 i64)
    local.get 0
    local.get 1
    local.get 2
    memory.copy)

  (func (export "fill") (param i64 i32 i64)
    local.get 0
    local.get 1
    local.get 2
    memory.fill)

  (func (export "init") (param i64 i32 i32)
    local.get 0
    local.get 1
    local.get 2
    memory.init 0)

  (data "1234")
)

(invoke "copy" (i64.const 0) (i64.const 0) (i64.const 100))
(assert_trap
  (invoke "copy" (i64.const 0x1_0000_0000) (i64.const 0) (i64.const 0))
  "out of bounds memory access")
(assert_trap
  (invoke "copy" (i64.const 0) (i64.const 0x1_0000_0000) (i64.const 0))
  "out of bounds memory access")
(assert_trap
  (invoke "copy" (i64.const 0) (i64.const 0) (i64.const 0x1_0000_0000))
  "out of bounds memory access")

(invoke "fill" (i64.const 0) (i32.const 0) (i64.const 100))
(assert_trap
  (invoke "fill" (i64.const 0x1_0000_0000) (i32.const 0) (i64.const 0))
  "out of bounds memory access")
(assert_trap
  (invoke "fill" (i64.const 0) (i32.const 0) (i64.const 0x1_0000_0000))
  "out of bounds memory access")

(invoke "init" (i64.const 0) (i32.const 0) (i32.const 0))
(invoke "init" (i64.const 0) (i32.const 0) (i32.const 4))
(assert_trap
  (invoke "fill" (i64.const 0x1_0000_0000) (i32.const 0) (i64.const 0))
  "out of bounds memory access")
tests/misc_testsuite/memory64/codegen.wast (new file, 38 lines):

;; make sure everything codegens correctly and has no cranelift verifier errors
(module
  (memory i64 1)
  (func (export "run")
    i64.const 0 i64.const 0 i64.const 0 memory.copy
    i64.const 0 i32.const 0 i64.const 0 memory.fill
    i64.const 0 i32.const 0 i32.const 0 memory.init $seg
    memory.size drop
    i64.const 0 memory.grow drop

    i64.const 0 i32.load drop
    i64.const 0 i64.load drop
    i64.const 0 f32.load drop
    i64.const 0 f64.load drop
    i64.const 0 i32.load8_s drop
    i64.const 0 i32.load8_u drop
    i64.const 0 i32.load16_s drop
    i64.const 0 i32.load16_u drop
    i64.const 0 i64.load8_s drop
    i64.const 0 i64.load8_u drop
    i64.const 0 i64.load16_s drop
    i64.const 0 i64.load16_u drop
    i64.const 0 i64.load32_s drop
    i64.const 0 i64.load32_u drop
    i64.const 0 i32.const 0 i32.store
    i64.const 0 i64.const 0 i64.store
    i64.const 0 f32.const 0 f32.store
    i64.const 0 f64.const 0 f64.store
    i64.const 0 i32.const 0 i32.store8
    i64.const 0 i32.const 0 i32.store16
    i64.const 0 i64.const 0 i64.store8
    i64.const 0 i64.const 0 i64.store16
    i64.const 0 i64.const 0 i64.store32
  )

  (data $seg "..")
)
(assert_return (invoke "run"))
tests/misc_testsuite/memory64/linking.wast (new file, 12 lines):

(module $export32 (memory (export "m") 1))
(module $export64 (memory (export "m") i64 1))

(module (import "export64" "m" (memory i64 1)))
(module (import "export32" "m" (memory i32 1)))

(assert_unlinkable
  (module (import "export32" "m" (memory i64 1)))
  "memory types incompatible")
(assert_unlinkable
  (module (import "export64" "m" (memory 1)))
  "memory types incompatible")
tests/misc_testsuite/memory64/more-than-4gb.wast (new file, 69 lines):

;; try to create as few 4gb memories as we can to reduce the memory consumption
;; of this test, so create one up front here and use it below.
(module $memory
  (memory (export "memory") i64 0x1_0001 0x1_0005)
)

(module
  (import "memory" "memory" (memory i64 0))
  (func (export "grow") (param i64) (result i64)
    local.get 0
    memory.grow)
  (func (export "size") (result i64)
    memory.size)
)
(assert_return (invoke "grow" (i64.const 0)) (i64.const 0x1_0001))
(assert_return (invoke "size") (i64.const 0x1_0001))

;; TODO: unsure how to test this. Right now growth of any 64-bit memory will
;; always reallocate and copy all the previous memory to a new location, and
;; this means that we're doing a 4gb copy here. That's pretty slow and is just
;; copying a bunch of zeros, so until we optimize that it's not really feasible
;; to test growth in CI and such.
(;
(assert_return (invoke "grow" (i64.const 1)) (i64.const 0x1_0001))
(assert_return (invoke "size") (i64.const 0x1_0002))
;)

;; Test that initialization with a 64-bit global works
(module $offset
  (global (export "offset") i64 (i64.const 0x1_0000_0000))
)
(module
  (import "offset" "offset" (global i64))
  (import "memory" "memory" (memory i64 0))
  (data (global.get 0) "\01\02\03\04")

  (func (export "load32") (param i64) (result i32)
    local.get 0
    i32.load)
)
(assert_return (invoke "load32" (i64.const 0x1_0000_0000)) (i32.const 0x04030201))

;; Test that initialization with a 64-bit data segment works
(module $offset
  (global (export "offset") i64 (i64.const 0x1_0000_0000))
)
(module
  (import "memory" "memory" (memory i64 0))
  (data (i64.const 0x1_0000_0004) "\01\02\03\04")

  (func (export "load32") (param i64) (result i32)
    local.get 0
    i32.load)
)
(assert_return (invoke "load32" (i64.const 0x1_0000_0004)) (i32.const 0x04030201))

;; loading with a huge offset works
(module $offset
  (global (export "offset") i64 (i64.const 0x1_0000_0000))
)
(module
  (import "memory" "memory" (memory i64 0))
  (data (i64.const 0x1_0000_0004) "\01\02\03\04")

  (func (export "load32") (param i64) (result i32)
    local.get 0
    i32.load offset=0x100000000)
)
(assert_return (invoke "load32" (i64.const 2)) (i32.const 0x02010403))
tests/misc_testsuite/memory64/multi-memory.wast (new file, 48 lines):

;; 64 => 64
(module
  (memory $a i64 1)
  (memory $b i64 1)

  (func (export "copy") (param i64 i64 i64)
    local.get 0
    local.get 1
    local.get 2
    memory.copy $a $b)
)
(invoke "copy" (i64.const 0) (i64.const 0) (i64.const 100))
(assert_trap
  (invoke "copy" (i64.const 0x1_0000_0000) (i64.const 0) (i64.const 0))
  "out of bounds memory access")

;; 32 => 64
(module
  (memory $a i32 1)
  (memory $b i64 1)

  (func (export "copy") (param i32 i64 i32)
    local.get 0
    local.get 1
    local.get 2
    memory.copy $a $b)
)
(invoke "copy" (i32.const 0) (i64.const 0) (i32.const 100))
(assert_trap
  (invoke "copy" (i32.const 0) (i64.const 0x1_0000_0000) (i32.const 0))
  "out of bounds memory access")

;; 64 => 32
(module
  (memory $a i64 1)
  (memory $b i32 1)

  (func (export "copy") (param i64 i32 i32)
    local.get 0
    local.get 1
    local.get 2
    memory.copy $a $b)
)
(invoke "copy" (i64.const 0) (i32.const 0) (i32.const 100))
(assert_trap
  (invoke "copy" (i64.const 0x1_0000_0000) (i32.const 0) (i32.const 0))
  "out of bounds memory access")
tests/misc_testsuite/memory64/offsets.wast (new file, 11 lines):

(module
  (memory i64 1)
  (func (export "load1") (result i32)
    i64.const 0xffff_ffff_ffff_fff0
    i32.load offset=16)
  (func (export "load2") (result i32)
    i64.const 16
    i32.load offset=0xfffffffffffffff0)
)
(assert_trap (invoke "load1") "out of bounds memory access")
(assert_trap (invoke "load2") "out of bounds memory access")
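A note on why both loads above must trap (an illustration, not code from this PR): for a 64-bit memory the effective address is the 64-bit index plus the static offset, and that addition has to be checked for overflow rather than allowed to wrap back into bounds.

    // Hypothetical helper showing the overflow check; None corresponds to a trap.
    fn effective_address(index: u64, offset: u64) -> Option<u64> {
        index.checked_add(offset)
    }

    fn main() {
        // Both operand orders from offsets.wast overflow u64 and so must trap.
        assert_eq!(effective_address(0xffff_ffff_ffff_fff0, 16), None);
        assert_eq!(effective_address(16, 0xffff_ffff_ffff_fff0), None);
    }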
tests/misc_testsuite/memory64/simd.wast (new file, 29 lines):

;; make sure everything codegens correctly and has no cranelift verifier errors
(module
  (memory i64 1)
  (func (export "run")
    i64.const 0 v128.load drop
    i64.const 0 v128.load8x8_s drop
    i64.const 0 v128.load8x8_u drop
    i64.const 0 v128.load16x4_s drop
    i64.const 0 v128.load16x4_u drop
    i64.const 0 v128.load32x2_s drop
    i64.const 0 v128.load32x2_u drop
    i64.const 0 v128.load8_splat drop
    i64.const 0 v128.load16_splat drop
    i64.const 0 v128.load32_splat drop
    i64.const 0 v128.load64_splat drop
    i64.const 0 i32.const 0 i8x16.splat v128.store
    i64.const 0 i32.const 0 i8x16.splat v128.store8_lane 0
    i64.const 0 i32.const 0 i8x16.splat v128.store16_lane 0
    i64.const 0 i32.const 0 i8x16.splat v128.store32_lane 0
    i64.const 0 i32.const 0 i8x16.splat v128.store64_lane 0
    i64.const 0 i32.const 0 i8x16.splat v128.load8_lane 0 drop
    i64.const 0 i32.const 0 i8x16.splat v128.load16_lane 0 drop
    i64.const 0 i32.const 0 i8x16.splat v128.load32_lane 0 drop
    i64.const 0 i32.const 0 i8x16.splat v128.load64_lane 0 drop
    i64.const 0 v128.load32_zero drop
    i64.const 0 v128.load64_zero drop
  )
)
(assert_return (invoke "run"))
tests/misc_testsuite/memory64/threads.wast (new file, 79 lines):

;; make sure everything codegens correctly and has no cranelift verifier errors
(module
  (memory i64 1)
  (func (export "run")
    i64.const 0 i32.atomic.load drop
    i64.const 0 i64.atomic.load drop
    i64.const 0 i32.atomic.load8_u drop
    i64.const 0 i32.atomic.load16_u drop
    i64.const 0 i64.atomic.load8_u drop
    i64.const 0 i64.atomic.load16_u drop
    i64.const 0 i64.atomic.load32_u drop
    i64.const 0 i32.const 0 i32.atomic.store
    i64.const 0 i64.const 0 i64.atomic.store
    i64.const 0 i32.const 0 i32.atomic.store8
    i64.const 0 i32.const 0 i32.atomic.store16
    i64.const 0 i64.const 0 i64.atomic.store8
    i64.const 0 i64.const 0 i64.atomic.store16
    i64.const 0 i64.const 0 i64.atomic.store32
    i64.const 0 i32.const 0 i32.atomic.rmw.add drop
    i64.const 0 i64.const 0 i64.atomic.rmw.add drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.add_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.add_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.add_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.add_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.add_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw.sub drop
    i64.const 0 i64.const 0 i64.atomic.rmw.sub drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.sub_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.sub_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.sub_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.sub_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.sub_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw.and drop
    i64.const 0 i64.const 0 i64.atomic.rmw.and drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.and_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.and_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.and_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.and_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.and_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw.or drop
    i64.const 0 i64.const 0 i64.atomic.rmw.or drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.or_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.or_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.or_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.or_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.or_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw.xor drop
    i64.const 0 i64.const 0 i64.atomic.rmw.xor drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.xor_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.xor_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.xor_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.xor_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.xor_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw.xchg drop
    i64.const 0 i64.const 0 i64.atomic.rmw.xchg drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.xchg_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.xchg_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.xchg_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.xchg_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.xchg_u drop
    i64.const 0 i32.const 0 i32.const 0 i32.atomic.rmw.cmpxchg drop
    i64.const 0 i64.const 0 i64.const 0 i64.atomic.rmw.cmpxchg drop
    i64.const 0 i32.const 0 i32.const 0 i32.atomic.rmw8.cmpxchg_u drop
    i64.const 0 i32.const 0 i32.const 0 i32.atomic.rmw16.cmpxchg_u drop
    i64.const 0 i64.const 0 i64.const 0 i64.atomic.rmw8.cmpxchg_u drop
    i64.const 0 i64.const 0 i64.const 0 i64.atomic.rmw16.cmpxchg_u drop
    i64.const 0 i64.const 0 i64.const 0 i64.atomic.rmw32.cmpxchg_u drop
  )

  ;; these are unimplemented intrinsics that trap at runtime so just make sure
  ;; we can codegen instead of also testing execution.
  (func $just_validate_codegen
    i64.const 0 i32.const 0 memory.atomic.notify drop
    i64.const 0 i32.const 0 i64.const 0 memory.atomic.wait32 drop
    i64.const 0 i64.const 0 i64.const 0 memory.atomic.wait64 drop
  )
)

(assert_return (invoke "run"))