Implement the memory64 proposal in Wasmtime (#3153)
* Implement the memory64 proposal in Wasmtime

This commit implements the WebAssembly [memory64 proposal][proposal] in
both Wasmtime and Cranelift. In terms of work done, Cranelift ended up
needing very little here since most of it was already prepared for
64-bit memories at one point or another. Most of the work in Wasmtime is
largely refactoring, changing a bunch of `u32` values to something else.

A number of internal and public interfaces are changing as a result of
this commit, for example:

* Accessors on `wasmtime::Memory` that work with pages now all return
  `u64` unconditionally rather than `u32`. This makes it possible to
  accommodate 64-bit memories with this API, but we may also want to
  consider `usize` here at some point since the host can't grow past
  `usize`-limited pages anyway.

* The `wasmtime::Limits` structure is removed in favor of
  minimum/maximum methods on table/memory types (a sketch of the new
  constructors follows this message).

* Many libcall intrinsics called by jit code now unconditionally take
  `u64` arguments instead of `u32`. Return values are `usize`, however,
  since the return value, if successful, is always bounded by host
  memory, while arguments can come from any guest.

* The `heap_addr` clif instruction now takes a 64-bit offset argument
  instead of a 32-bit one. It turns out that the legalization of
  `heap_addr` already worked with 64-bit offsets, so this change was
  fairly trivial to make.

* The runtime implementation of mmap-based linear memories has changed
  to largely work in `usize` quantities in its API and in bytes instead
  of pages. This simplifies various aspects and reflects that
  mmap-memories are always bound by `usize` since that's what the host
  is using to address things, and additionally most calculations care
  about bytes rather than pages except at the very edge where we're
  going to/from wasm.

Overall I've tried to minimize the amount of `as` casts as possible,
using checked `try_from` and checked arithmetic with either error
handling or explicit `unwrap()` calls to tell us about bugs in the
future. Most locations have relatively obvious things to do with various
implications on various hosts, and I think they should all be roughly of
the right shape, but time will tell. I mostly relied on the compiler
complaining that various types weren't aligned to figure out
type-casting, and I manually audited some of the more obvious locations.
I suspect we have a number of hidden locations that will panic on 32-bit
hosts if 64-bit modules try to run there, but otherwise I think we
should be generally ok (famous last words). In any case I wouldn't want
to enable this by default until we've fuzzed it for some time.

In terms of the actual underlying implementation, no one should expect
memory64 to be all that fast. Right now it's implemented with "dynamic"
heaps which have a few consequences:

* All memory accesses are bounds-checked. I'm not sure how aggressively
  Cranelift tries to optimize out bounds checks, but I suspect not a ton
  since we haven't stressed this much historically.

* Heaps are always precisely sized. This means that every call to
  `memory.grow` will incur a `memcpy` of memory from the old heap to the
  new. We probably want to at least look into `mremap` on Linux and
  otherwise try to implement schemes where dynamic heaps have some
  reserved pages to grow into to help amortize the cost of
  `memory.grow`.

The memory64 spec test suite is scheduled to now run on CI, but as with
all the other spec test suites it's really not all that comprehensive.
I've tried adding more tests for basic things as I've had to implement
guards for them, but I wouldn't really consider the testing adequate
from just this PR itself. I did try to take care in one test to actually
allocate a 4gb+ heap and then avoid running that in the pooling
allocator or in emulation because otherwise that may fail or take
excessively long.

[proposal]: https://github.com/WebAssembly/memory64/blob/master/proposals/memory64/Overview.md
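As a quick illustration of the `Limits` removal mentioned above, here is a
minimal sketch of the new constructors and accessors, written against the
API as changed in this PR (the exact constructor signatures are taken from
the test diffs below; the asserted values are illustrative only):

```rust
use wasmtime::*;

fn main() -> anyhow::Result<()> {
    let mut store = Store::<()>::default();

    // Old: MemoryType::new(Limits::new(1, None))
    // New: minimum/maximum go straight into the constructor and are read
    // back through dedicated methods (which now return 64-bit values so
    // that 64-bit memories fit too).
    let mem_ty = MemoryType::new(1, None);
    assert_eq!(mem_ty.minimum(), 1);
    assert_eq!(mem_ty.maximum(), None);
    let _memory = Memory::new(&mut store, mem_ty)?;

    // Old: TableType::new(ValType::FuncRef, Limits::new(0, Some(1)))
    let table_ty = TableType::new(ValType::FuncRef, 0, Some(1));
    let _table = Table::new(&mut store, table_ty, Val::FuncRef(None))?;
    Ok(())
}
```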
Follow-up commits squashed into this PR:

* Fix some tests
* More test fixes
* Fix wasmtime tests
* Fix doctests
* Revert to 32-bit immediate offsets in `heap_addr`: this updates the
  generation of addresses in wasm code to always use 32-bit offsets for
  `heap_addr`, and if the calculated offset is bigger than 32 bits we
  emit a manual add with an overflow check (see the sketch after this
  list).
* Disable memory64 for spectest fuzzing
* Fix wrong offset being added to heap addr
* More comments!
* Clarify bytes/pages
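To make the `heap_addr` revert above concrete, here is a rough sketch of
the intended address calculation in plain Rust rather than CLIF. This is
illustrative only (the function and parameter names are made up, and the
real change lives in the Cranelift heap-address lowering); it only models
the logic described in the bullet:

```rust
/// Sketch: compute a checked effective address for `index + offset` with
/// an access of `access_size` bytes against a heap of `heap_size` bytes.
/// `None` models a trap.
fn checked_heap_addr(index: u64, offset: u64, access_size: u64, heap_size: u64) -> Option<u64> {
    // Offsets that fit in 32 bits can stay as the `heap_addr` immediate.
    let (index, imm) = if offset <= u32::MAX as u64 {
        (index, offset)
    } else {
        // Larger offsets are folded into the index with an explicit,
        // overflow-checked add: overflow can only mean out-of-bounds,
        // so it traps (modeled here as `None`).
        (index.checked_add(offset)?, 0)
    };
    // The usual bounds check then proceeds with a 32-bit immediate.
    let end = index.checked_add(imm)?.checked_add(access_size)?;
    if end <= heap_size {
        Some(index + imm)
    } else {
        None
    }
}
```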
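For context on what the diff below enables from an embedder's point of
view, here is a minimal sketch of running a 64-bit memory. It assumes the
embedding API of this commit's vintage: `Config::wasm_memory64` appears in
the test-runner changes below, and the module text mirrors the new wast
tests; the rest of the calls are standard wasmtime API of that era.

```rust
use wasmtime::*;

fn main() -> anyhow::Result<()> {
    // memory64 is off by default; enable it on the config, mirroring the
    // `.wasm_memory64(memory64)` call added to the wast runner below.
    let mut config = Config::new();
    config.wasm_memory64(true);
    let engine = Engine::new(&config)?;

    // A 64-bit linear memory: the index type (and thus the load's address
    // parameter) is i64 instead of i32.
    let module = Module::new(
        &engine,
        r#"(module
             (memory (export "m") i64 1)
             (func (export "load") (param i64) (result i32)
               local.get 0
               i32.load))"#,
    )?;

    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;

    // Page-based accessors return `u64` after this change.
    let mem = instance.get_memory(&mut store, "m").unwrap();
    let pages: u64 = mem.size(&store);
    assert_eq!(pages, 1);

    let load = instance.get_typed_func::<u64, i32, _>(&mut store, "load")?;
    assert_eq!(load.call(&mut store, 0)?, 0);
    Ok(())
}
```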
@@ -22,35 +22,35 @@ fn bad_tables() {
     let mut store = Store::<()>::default();

     // i32 not supported yet
-    let ty = TableType::new(ValType::I32, Limits::new(0, Some(1)));
+    let ty = TableType::new(ValType::I32, 0, Some(1));
     assert!(Table::new(&mut store, ty.clone(), Val::I32(0)).is_err());

     // mismatched initializer
-    let ty = TableType::new(ValType::FuncRef, Limits::new(0, Some(1)));
+    let ty = TableType::new(ValType::FuncRef, 0, Some(1));
     assert!(Table::new(&mut store, ty.clone(), Val::I32(0)).is_err());

     // get out of bounds
-    let ty = TableType::new(ValType::FuncRef, Limits::new(0, Some(1)));
+    let ty = TableType::new(ValType::FuncRef, 0, Some(1));
     let t = Table::new(&mut store, ty.clone(), Val::FuncRef(None)).unwrap();
     assert!(t.get(&mut store, 0).is_none());
     assert!(t.get(&mut store, u32::max_value()).is_none());

     // set out of bounds or wrong type
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, Some(1)));
+    let ty = TableType::new(ValType::FuncRef, 1, Some(1));
     let t = Table::new(&mut store, ty.clone(), Val::FuncRef(None)).unwrap();
     assert!(t.set(&mut store, 0, Val::I32(0)).is_err());
     assert!(t.set(&mut store, 0, Val::FuncRef(None)).is_ok());
     assert!(t.set(&mut store, 1, Val::FuncRef(None)).is_err());

     // grow beyond max
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, Some(1)));
+    let ty = TableType::new(ValType::FuncRef, 1, Some(1));
     let t = Table::new(&mut store, ty.clone(), Val::FuncRef(None)).unwrap();
     assert!(t.grow(&mut store, 0, Val::FuncRef(None)).is_ok());
     assert!(t.grow(&mut store, 1, Val::FuncRef(None)).is_err());
     assert_eq!(t.size(&store), 1);

     // grow wrong type
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, Some(2)));
+    let ty = TableType::new(ValType::FuncRef, 1, Some(2));
     let t = Table::new(&mut store, ty.clone(), Val::FuncRef(None)).unwrap();
     assert!(t.grow(&mut store, 1, Val::I32(0)).is_err());
     assert_eq!(t.size(&store), 1);
@@ -69,9 +69,9 @@ fn cross_store() -> anyhow::Result<()> {
     let func = Func::wrap(&mut store2, || {});
     let ty = GlobalType::new(ValType::I32, Mutability::Const);
     let global = Global::new(&mut store2, ty, Val::I32(0))?;
-    let ty = MemoryType::new(Limits::new(1, None));
+    let ty = MemoryType::new(1, None);
     let memory = Memory::new(&mut store2, ty)?;
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     let table = Table::new(&mut store2, ty, Val::FuncRef(None))?;

     let need_func = Module::new(&engine, r#"(module (import "" "" (func)))"#)?;
@@ -99,7 +99,7 @@ fn cross_store() -> anyhow::Result<()> {

     // ============ Cross-store tables ==============

-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     assert!(Table::new(&mut store2, ty.clone(), store1val.clone()).is_err());
     let t1 = Table::new(&mut store2, ty.clone(), store2val.clone())?;
     assert!(t1.set(&mut store2, 0, store1val.clone()).is_err());
@@ -218,7 +218,7 @@ fn create_get_set_funcref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());

-    let table_ty = TableType::new(ValType::FuncRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::FuncRef, 10, None);
     let init = Val::FuncRef(Some(Func::wrap(&mut store, || {})));
     let table = Table::new(&mut store, table_ty, init)?;

@@ -236,7 +236,7 @@ fn fill_funcref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());

-    let table_ty = TableType::new(ValType::FuncRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::FuncRef, 10, None);
     let table = Table::new(&mut store, table_ty, Val::FuncRef(None))?;

     for i in 0..10 {
@@ -263,7 +263,7 @@ fn grow_funcref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());

-    let table_ty = TableType::new(ValType::FuncRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::FuncRef, 10, None);
     let table = Table::new(&mut store, table_ty, Val::FuncRef(None))?;

     assert_eq!(table.size(&store), 10);
@@ -280,7 +280,7 @@ fn create_get_set_externref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());

-    let table_ty = TableType::new(ValType::ExternRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::ExternRef, 10, None);
     let table = Table::new(
         &mut store,
         table_ty,
@@ -315,7 +315,7 @@ fn fill_externref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());

-    let table_ty = TableType::new(ValType::ExternRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::ExternRef, 10, None);
     let table = Table::new(&mut store, table_ty, Val::ExternRef(None))?;

     for i in 0..10 {
@@ -364,7 +364,7 @@ fn grow_externref_tables_via_api() -> anyhow::Result<()> {
     let engine = Engine::new(&cfg)?;
     let mut store = Store::new(&engine, ());

-    let table_ty = TableType::new(ValType::ExternRef, Limits::at_least(10));
+    let table_ty = TableType::new(ValType::ExternRef, 10, None);
     let table = Table::new(&mut store, table_ty, Val::ExternRef(None))?;

     assert_eq!(table.size(&store), 10);
@@ -378,7 +378,7 @@ fn grow_externref_tables_via_api() -> anyhow::Result<()> {
 fn read_write_memory_via_api() {
     let cfg = Config::new();
     let mut store = Store::new(&Engine::new(&cfg).unwrap(), ());
-    let ty = MemoryType::new(Limits::new(1, None));
+    let ty = MemoryType::new(1, None);
     let mem = Memory::new(&mut store, ty).unwrap();
     mem.grow(&mut store, 1).unwrap();
@@ -303,7 +303,7 @@ fn table_drops_externref() -> anyhow::Result<()> {
     let externref = ExternRef::new(SetFlagOnDrop(flag.clone()));
     Table::new(
         &mut store,
-        TableType::new(ValType::ExternRef, Limits::new(1, None)),
+        TableType::new(ValType::ExternRef, 1, None),
        externref.into(),
     )?;
     drop(store);
@@ -1,6 +1,8 @@
 use anyhow::Result;
 use wasmtime::*;

+const WASM_PAGE_SIZE: usize = wasmtime_environ::WASM_PAGE_SIZE as usize;
+
 #[test]
 fn test_limits() -> Result<()> {
     let engine = Engine::default();
@@ -12,7 +14,7 @@ fn test_limits() -> Result<()> {
     let mut store = Store::new(
         &engine,
         StoreLimitsBuilder::new()
-            .memory_pages(10)
+            .memory_size(10 * WASM_PAGE_SIZE)
             .table_elements(5)
             .build(),
     );
@@ -23,7 +25,7 @@ fn test_limits() -> Result<()> {
     // Test instance exports and host objects hitting the limit
     for memory in std::array::IntoIter::new([
         instance.get_memory(&mut store, "m").unwrap(),
-        Memory::new(&mut store, MemoryType::new(Limits::new(0, None)))?,
+        Memory::new(&mut store, MemoryType::new(0, None))?,
     ]) {
         memory.grow(&mut store, 3)?;
         memory.grow(&mut store, 5)?;
@@ -43,7 +45,7 @@ fn test_limits() -> Result<()> {
         instance.get_table(&mut store, "t").unwrap(),
         Table::new(
             &mut store,
-            TableType::new(ValType::FuncRef, Limits::new(0, None)),
+            TableType::new(ValType::FuncRef, 0, None),
             Val::FuncRef(None),
         )?,
     ]) {
@@ -71,7 +73,12 @@ fn test_limits_memory_only() -> Result<()> {
         r#"(module (memory (export "m") 0) (table (export "t") 0 anyfunc))"#,
     )?;

-    let mut store = Store::new(&engine, StoreLimitsBuilder::new().memory_pages(10).build());
+    let mut store = Store::new(
+        &engine,
+        StoreLimitsBuilder::new()
+            .memory_size(10 * WASM_PAGE_SIZE)
+            .build(),
+    );
     store.limiter(|s| s as &mut dyn ResourceLimiter);

     let instance = Instance::new(&mut store, &module, &[])?;
@@ -79,7 +86,7 @@ fn test_limits_memory_only() -> Result<()> {
     // Test instance exports and host objects hitting the limit
     for memory in std::array::IntoIter::new([
         instance.get_memory(&mut store, "m").unwrap(),
-        Memory::new(&mut store, MemoryType::new(Limits::new(0, None)))?,
+        Memory::new(&mut store, MemoryType::new(0, None))?,
     ]) {
         memory.grow(&mut store, 3)?;
         memory.grow(&mut store, 5)?;
@@ -99,7 +106,7 @@ fn test_limits_memory_only() -> Result<()> {
         instance.get_table(&mut store, "t").unwrap(),
         Table::new(
             &mut store,
-            TableType::new(ValType::FuncRef, Limits::new(0, None)),
+            TableType::new(ValType::FuncRef, 0, None),
             Val::FuncRef(None),
         )?,
     ]) {
@@ -117,7 +124,12 @@ fn test_initial_memory_limits_exceeded() -> Result<()> {
     let engine = Engine::default();
     let module = Module::new(&engine, r#"(module (memory (export "m") 11))"#)?;

-    let mut store = Store::new(&engine, StoreLimitsBuilder::new().memory_pages(10).build());
+    let mut store = Store::new(
+        &engine,
+        StoreLimitsBuilder::new()
+            .memory_size(10 * WASM_PAGE_SIZE)
+            .build(),
+    );
     store.limiter(|s| s as &mut dyn ResourceLimiter);

     match Instance::new(&mut store, &module, &[]) {
@@ -128,7 +140,7 @@ fn test_initial_memory_limits_exceeded() -> Result<()> {
         ),
     }

-    match Memory::new(&mut store, MemoryType::new(Limits::new(25, None))) {
+    match Memory::new(&mut store, MemoryType::new(25, None)) {
         Ok(_) => unreachable!(),
         Err(e) => assert_eq!(
             e.to_string(),
@@ -155,7 +167,7 @@ fn test_limits_table_only() -> Result<()> {
     // Test instance exports and host objects *not* hitting the limit
     for memory in std::array::IntoIter::new([
         instance.get_memory(&mut store, "m").unwrap(),
-        Memory::new(&mut store, MemoryType::new(Limits::new(0, None)))?,
+        Memory::new(&mut store, MemoryType::new(0, None))?,
     ]) {
         memory.grow(&mut store, 3)?;
         memory.grow(&mut store, 5)?;
@@ -168,7 +180,7 @@ fn test_limits_table_only() -> Result<()> {
         instance.get_table(&mut store, "t").unwrap(),
         Table::new(
             &mut store,
-            TableType::new(ValType::FuncRef, Limits::new(0, None)),
+            TableType::new(ValType::FuncRef, 0, None),
             Val::FuncRef(None),
         )?,
     ]) {
@@ -206,7 +218,7 @@ fn test_initial_table_limits_exceeded() -> Result<()> {

     match Table::new(
         &mut store,
-        TableType::new(ValType::FuncRef, Limits::new(99, None)),
+        TableType::new(ValType::FuncRef, 99, None),
         Val::FuncRef(None),
     ) {
         Ok(_) => unreachable!(),
@@ -241,7 +253,12 @@ fn test_pooling_allocator_initial_limits_exceeded() -> Result<()> {
         r#"(module (memory (export "m1") 2) (memory (export "m2") 5))"#,
     )?;

-    let mut store = Store::new(&engine, StoreLimitsBuilder::new().memory_pages(3).build());
+    let mut store = Store::new(
+        &engine,
+        StoreLimitsBuilder::new()
+            .memory_size(3 * WASM_PAGE_SIZE)
+            .build(),
+    );
     store.limiter(|s| s as &mut dyn ResourceLimiter);

     match Instance::new(&mut store, &module, &[]) {
@@ -268,15 +285,12 @@ struct MemoryContext {
 }

 impl ResourceLimiter for MemoryContext {
-    fn memory_growing(&mut self, current: u32, desired: u32, maximum: Option<u32>) -> bool {
+    fn memory_growing(&mut self, current: usize, desired: usize, maximum: Option<usize>) -> bool {
         // Check if the desired exceeds a maximum (either from Wasm or from the host)
-        if desired > maximum.unwrap_or(u32::MAX) {
-            self.limit_exceeded = true;
-            return false;
-        }
+        assert!(desired < maximum.unwrap_or(usize::MAX));

-        assert_eq!(current as usize * 0x10000, self.wasm_memory_used,);
-        let desired = desired as usize * 0x10000;
+        assert_eq!(current as usize, self.wasm_memory_used);
+        let desired = desired as usize;

         if desired + self.host_memory_used > self.memory_limit {
             self.limit_exceeded = true;
@@ -52,20 +52,20 @@ fn link_twice_bad() -> Result<()> {
     assert!(linker.define("g", "3", global.clone()).is_err());

     // memories
-    let ty = MemoryType::new(Limits::new(1, None));
+    let ty = MemoryType::new(1, None);
     let memory = Memory::new(&mut store, ty)?;
     linker.define("m", "", memory.clone())?;
     assert!(linker.define("m", "", memory.clone()).is_err());
-    let ty = MemoryType::new(Limits::new(2, None));
+    let ty = MemoryType::new(2, None);
     let memory = Memory::new(&mut store, ty)?;
     assert!(linker.define("m", "", memory.clone()).is_err());

     // tables
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     let table = Table::new(&mut store, ty, Val::FuncRef(None))?;
     linker.define("t", "", table.clone())?;
     assert!(linker.define("t", "", table.clone()).is_err());
-    let ty = TableType::new(ValType::FuncRef, Limits::new(2, None));
+    let ty = TableType::new(ValType::FuncRef, 2, None);
     let table = Table::new(&mut store, ty, Val::FuncRef(None))?;
     assert!(linker.define("t", "", table.clone()).is_err());
     Ok(())
@@ -75,7 +75,14 @@ fn test_traps(store: &mut Store<()>, funcs: &[TestFunc], addr: u32, mem: &Memory
         let base = u64::from(func.offset) + u64::from(addr);
         let range = base..base + u64::from(func.width);
         if range.start >= mem_size || range.end >= mem_size {
-            assert!(result.is_err());
+            assert!(
+                result.is_err(),
+                "access at {}+{}+{} succeeded but should have failed when memory has {} bytes",
+                addr,
+                func.offset,
+                func.width,
+                mem_size
+            );
         } else {
             assert!(result.is_ok());
         }
@@ -97,6 +104,7 @@ fn offsets_static_dynamic_oh_my() -> Result<()> {
             config.dynamic_memory_guard_size(guard_size);
             config.static_memory_guard_size(guard_size);
             config.guard_before_linear_memory(guard_before_linear_memory);
             config.cranelift_debug_verifier(true);
             engines.push(Engine::new(&config)?);
         }
     }
@@ -105,9 +113,9 @@ fn offsets_static_dynamic_oh_my() -> Result<()> {
     engines.par_iter().for_each(|engine| {
         let module = module(&engine).unwrap();

-        for limits in [Limits::new(1, Some(2)), Limits::new(1, None)].iter() {
+        for (min, max) in [(1, Some(2)), (1, None)].iter() {
             let mut store = Store::new(&engine, ());
-            let mem = Memory::new(&mut store, MemoryType::new(limits.clone())).unwrap();
+            let mem = Memory::new(&mut store, MemoryType::new(*min, *max)).unwrap();
             let instance = Instance::new(&mut store, &module, &[mem.into()]).unwrap();
             let funcs = find_funcs(&mut store, &instance);

@@ -137,8 +145,8 @@ fn guards_present() -> Result<()> {
     config.guard_before_linear_memory(true);
     let engine = Engine::new(&config)?;
     let mut store = Store::new(&engine, ());
-    let static_mem = Memory::new(&mut store, MemoryType::new(Limits::new(1, Some(2))))?;
-    let dynamic_mem = Memory::new(&mut store, MemoryType::new(Limits::new(1, None)))?;
+    let static_mem = Memory::new(&mut store, MemoryType::new(1, Some(2)))?;
+    let dynamic_mem = Memory::new(&mut store, MemoryType::new(1, None))?;

     let assert_guards = |store: &Store<()>| unsafe {
         // guards before
@@ -1,13 +1,14 @@
 #[cfg(not(target_os = "windows"))]
 mod not_for_windows {
     use wasmtime::*;
-    use wasmtime_environ::{WASM_MAX_PAGES, WASM_PAGE_SIZE};
+    use wasmtime_environ::{WASM32_MAX_PAGES, WASM_PAGE_SIZE};

     use libc::MAP_FAILED;
     use libc::{mmap, mprotect, munmap};
     use libc::{sysconf, _SC_PAGESIZE};
     use libc::{MAP_ANON, MAP_PRIVATE, PROT_NONE, PROT_READ, PROT_WRITE};

+    use std::convert::TryFrom;
     use std::io::Error;
     use std::ptr::null_mut;
     use std::sync::{Arc, Mutex};
@@ -16,77 +17,63 @@ mod not_for_windows {
         mem: usize,
         size: usize,
         guard_size: usize,
-        used_wasm_pages: u32,
-        glob_page_counter: Arc<Mutex<u64>>,
+        used_wasm_bytes: usize,
+        glob_bytes_counter: Arc<Mutex<usize>>,
     }

     impl CustomMemory {
-        unsafe fn new(
-            num_wasm_pages: u32,
-            max_wasm_pages: u32,
-            glob_counter: Arc<Mutex<u64>>,
-        ) -> Self {
+        unsafe fn new(minimum: usize, maximum: usize, glob_counter: Arc<Mutex<usize>>) -> Self {
             let page_size = sysconf(_SC_PAGESIZE) as usize;
             let guard_size = page_size;
-            let size = max_wasm_pages as usize * WASM_PAGE_SIZE as usize + guard_size;
-            let used_size = num_wasm_pages as usize * WASM_PAGE_SIZE as usize;
+            let size = maximum + guard_size;
             assert_eq!(size % page_size, 0); // we rely on WASM_PAGE_SIZE being multiple of host page size

             let mem = mmap(null_mut(), size, PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0);
             assert_ne!(mem, MAP_FAILED, "mmap failed: {}", Error::last_os_error());

-            let r = mprotect(mem, used_size, PROT_READ | PROT_WRITE);
+            let r = mprotect(mem, minimum, PROT_READ | PROT_WRITE);
             assert_eq!(r, 0, "mprotect failed: {}", Error::last_os_error());
-            *glob_counter.lock().unwrap() += num_wasm_pages as u64;
+            *glob_counter.lock().unwrap() += minimum;

             Self {
                 mem: mem as usize,
                 size,
                 guard_size,
-                used_wasm_pages: num_wasm_pages,
-                glob_page_counter: glob_counter,
+                used_wasm_bytes: minimum,
+                glob_bytes_counter: glob_counter,
             }
         }
     }

     impl Drop for CustomMemory {
         fn drop(&mut self) {
-            let n = self.used_wasm_pages as u64;
-            *self.glob_page_counter.lock().unwrap() -= n;
+            *self.glob_bytes_counter.lock().unwrap() -= self.used_wasm_bytes;
             let r = unsafe { munmap(self.mem as *mut _, self.size) };
             assert_eq!(r, 0, "munmap failed: {}", Error::last_os_error());
         }
     }

     unsafe impl LinearMemory for CustomMemory {
-        fn size(&self) -> u32 {
-            self.used_wasm_pages
+        fn byte_size(&self) -> usize {
+            self.used_wasm_bytes
         }

-        fn maximum(&self) -> Option<u32> {
-            Some((self.size as u32 - self.guard_size as u32) / WASM_PAGE_SIZE)
+        fn maximum_byte_size(&self) -> Option<usize> {
+            Some(self.size - self.guard_size)
         }

-        fn grow(&mut self, delta: u32) -> Option<u32> {
-            let delta_size = (delta as usize).checked_mul(WASM_PAGE_SIZE as usize)?;
-
-            let prev_pages = self.used_wasm_pages;
-            let prev_size = (prev_pages as usize).checked_mul(WASM_PAGE_SIZE as usize)?;
-
-            let new_pages = prev_pages.checked_add(delta)?;
-
-            if new_pages > self.maximum().unwrap() {
-                return None;
-            }
+        fn grow_to(&mut self, new_size: usize) -> Option<()> {
+            println!("grow to {:x}", new_size);
+            let delta = new_size - self.used_wasm_bytes;
             unsafe {
-                let start = (self.mem as *mut u8).add(prev_size) as _;
-                let r = mprotect(start, delta_size, PROT_READ | PROT_WRITE);
+                let start = (self.mem as *mut u8).add(self.used_wasm_bytes) as _;
+                let r = mprotect(start, delta, PROT_READ | PROT_WRITE);
                 assert_eq!(r, 0, "mprotect failed: {}", Error::last_os_error());
             }

-            *self.glob_page_counter.lock().unwrap() += delta as u64;
-            self.used_wasm_pages = new_pages;
-            Some(prev_pages)
+            *self.glob_bytes_counter.lock().unwrap() += delta;
+            self.used_wasm_bytes = new_size;
+            Some(())
         }

         fn as_ptr(&self) -> *mut u8 {
@@ -96,14 +83,14 @@ mod not_for_windows {

     struct CustomMemoryCreator {
         pub num_created_memories: Mutex<usize>,
-        pub num_total_pages: Arc<Mutex<u64>>,
+        pub num_total_bytes: Arc<Mutex<usize>>,
     }

     impl CustomMemoryCreator {
         pub fn new() -> Self {
             Self {
                 num_created_memories: Mutex::new(0),
-                num_total_pages: Arc::new(Mutex::new(0)),
+                num_total_bytes: Arc::new(Mutex::new(0)),
             }
         }
     }
@@ -112,17 +99,21 @@ mod not_for_windows {
         fn new_memory(
             &self,
             ty: MemoryType,
-            reserved_size: Option<u64>,
-            guard_size: u64,
+            minimum: usize,
+            maximum: Option<usize>,
+            reserved_size: Option<usize>,
+            guard_size: usize,
         ) -> Result<Box<dyn LinearMemory>, String> {
             assert_eq!(guard_size, 0);
             assert!(reserved_size.is_none());
-            let max = ty.limits().max().unwrap_or(WASM_MAX_PAGES);
+            assert!(!ty.is_64());
             unsafe {
                 let mem = Box::new(CustomMemory::new(
-                    ty.limits().min(),
-                    max,
-                    self.num_total_pages.clone(),
+                    minimum,
+                    maximum.unwrap_or(
+                        usize::try_from(WASM32_MAX_PAGES * u64::from(WASM_PAGE_SIZE)).unwrap(),
+                    ),
+                    self.num_total_bytes.clone(),
                 ));
                 *self.num_created_memories.lock().unwrap() += 1;
                 Ok(mem)
@@ -186,11 +177,11 @@ mod not_for_windows {
     );

     // we take the lock outside the assert, so it won't get poisoned on assert failure
-    let tot_pages = *mem_creator.num_total_pages.lock().unwrap();
-    assert_eq!(tot_pages, 4);
+    let tot_pages = *mem_creator.num_total_bytes.lock().unwrap();
+    assert_eq!(tot_pages, (4 * WASM_PAGE_SIZE) as usize);

     drop(store);
-    let tot_pages = *mem_creator.num_total_pages.lock().unwrap();
+    let tot_pages = *mem_creator.num_total_bytes.lock().unwrap();
     assert_eq!(tot_pages, 0);

     Ok(())
@@ -170,8 +170,8 @@ fn imports_exports() -> Result<()> {
     assert_eq!(mem_export.name(), "m");
     match mem_export.ty() {
         ExternType::Memory(m) => {
-            assert_eq!(m.limits().min(), 1);
-            assert_eq!(m.limits().max(), None);
+            assert_eq!(m.minimum(), 1);
+            assert_eq!(m.maximum(), None);
         }
         _ => panic!("unexpected type"),
     }
@@ -179,9 +179,9 @@ fn imports_exports() -> Result<()> {
     assert_eq!(table_export.name(), "t");
     match table_export.ty() {
         ExternType::Table(t) => {
-            assert_eq!(t.limits().min(), 1);
-            assert_eq!(t.limits().max(), None);
-            assert_eq!(*t.element(), ValType::FuncRef);
+            assert_eq!(t.minimum(), 1);
+            assert_eq!(t.maximum(), None);
+            assert_eq!(t.element(), ValType::FuncRef);
         }
         _ => panic!("unexpected type"),
     }
@@ -3,7 +3,7 @@ use wasmtime::*;
 #[test]
 fn get_none() {
     let mut store = Store::<()>::default();
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     let table = Table::new(&mut store, ty, Val::FuncRef(None)).unwrap();
     match table.get(&mut store, 0) {
         Some(Val::FuncRef(None)) => {}
@@ -15,7 +15,7 @@ fn get_none() {
 #[test]
 fn fill_wrong() {
     let mut store = Store::<()>::default();
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     let table = Table::new(&mut store, ty, Val::FuncRef(None)).unwrap();
     assert_eq!(
         table
@@ -25,7 +25,7 @@ fn fill_wrong() {
         "value does not match table element type"
     );

-    let ty = TableType::new(ValType::ExternRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::ExternRef, 1, None);
     let table = Table::new(&mut store, ty, Val::ExternRef(None)).unwrap();
     assert_eq!(
         table
@@ -39,9 +39,9 @@ fn fill_wrong() {
 #[test]
 fn copy_wrong() {
     let mut store = Store::<()>::default();
-    let ty = TableType::new(ValType::FuncRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::FuncRef, 1, None);
     let table1 = Table::new(&mut store, ty, Val::FuncRef(None)).unwrap();
-    let ty = TableType::new(ValType::ExternRef, Limits::new(1, None));
+    let ty = TableType::new(ValType::ExternRef, 1, None);
     let table2 = Table::new(&mut store, ty, Val::ExternRef(None)).unwrap();
     assert_eq!(
         Table::copy(&mut store, &table1, 0, &table2, 0, 1)
@@ -14,16 +14,16 @@ include!(concat!(env!("OUT_DIR"), "/wast_testsuite_tests.rs"));
 fn run_wast(wast: &str, strategy: Strategy, pooling: bool) -> anyhow::Result<()> {
     let wast = Path::new(wast);

-    let simd = wast.iter().any(|s| s == "simd");
-
-    let multi_memory = wast.iter().any(|s| s == "multi-memory");
-    let module_linking = wast.iter().any(|s| s == "module-linking");
-    let threads = wast.iter().any(|s| s == "threads");
-    let bulk_mem = multi_memory || wast.iter().any(|s| s == "bulk-memory-operations");
+    let simd = feature_found(wast, "simd");
+    let memory64 = feature_found(wast, "memory64");
+    let multi_memory = feature_found(wast, "multi-memory");
+    let module_linking = feature_found(wast, "module-linking");
+    let threads = feature_found(wast, "threads");
+    let bulk_mem = memory64 || multi_memory || feature_found(wast, "bulk-memory-operations");

     // Some simd tests assume support for multiple tables, which are introduced
     // by reference types.
-    let reftypes = simd || wast.iter().any(|s| s == "reference-types");
+    let reftypes = simd || feature_found(wast, "reference-types");

     // Threads aren't implemented in the old backend, so skip those tests.
     if threads && cfg!(feature = "old-x86-backend") {
@@ -37,12 +37,14 @@ fn run_wast(wast: &str, strategy: Strategy, pooling: bool) -> anyhow::Result<()>
         .wasm_multi_memory(multi_memory || module_linking)
         .wasm_module_linking(module_linking)
         .wasm_threads(threads)
+        .wasm_memory64(memory64)
         .strategy(strategy)?
         .cranelift_debug_verifier(true);

-    if wast.ends_with("canonicalize-nan.wast") {
+    if feature_found(wast, "canonicalize-nan") {
         cfg.cranelift_nan_canonicalization(true);
     }
+    let test_allocates_lots_of_memory = wast.ends_with("more-than-4gb.wast");

     // By default we'll allocate huge chunks (6gb) of the address space for each
     // linear memory. This is typically fine but when we emulate tests with QEMU
@@ -54,10 +56,30 @@ fn run_wast(wast: &str, strategy: Strategy, pooling: bool) -> anyhow::Result<()>
     // tests suite from 10GiB to 600MiB. Previously we saw that crossing the
     // 10GiB threshold caused our processes to get OOM killed on CI.
     if std::env::var("WASMTIME_TEST_NO_HOG_MEMORY").is_ok() {
+        // The pooling allocator hogs ~6TB of virtual address space for each
+        // store, so if we don't want to hog memory then ignore pooling tests.
+        if pooling {
+            return Ok(());
+        }
+
+        // If the test allocates a lot of memory, that's considered "hogging"
+        // memory, so skip it.
+        if test_allocates_lots_of_memory {
+            return Ok(());
+        }
+
+        // Don't use 4gb address space reservations when not hogging memory.
         cfg.static_memory_maximum_size(0);
     }

     let _pooling_lock = if pooling {
+        // Some memory64 tests take more than 4gb of resident memory to test,
+        // but we don't want to configure the pooling allocator to allow that
+        // (that's a ton of memory to reserve), so we skip those tests.
+        if test_allocates_lots_of_memory {
+            return Ok(());
+        }
+
         // The limits here are crafted such that the wast tests should pass.
         // However, these limits may become insufficient in the future as the wast tests change.
         // If a wast test fails because of a limit being "exceeded" or if memory/table
@@ -91,6 +113,13 @@ fn run_wast(wast: &str, strategy: Strategy, pooling: bool) -> anyhow::Result<()>
     Ok(())
 }

+fn feature_found(path: &Path, name: &str) -> bool {
+    path.iter().any(|part| match part.to_str() {
+        Some(s) => s.contains(name),
+        None => false,
+    })
+}
+
 // The pooling tests make about 6TB of address space reservation which means
 // that we shouldn't let too many of them run concurrently at once. On
 // high-cpu-count systems (e.g. 80 threads) this leads to mmap failures because
tests/misc_testsuite/memory64/bounds.wast (new file, 54 lines)
@@ -0,0 +1,54 @@
(assert_unlinkable
  (module
    (memory i64 1)
    (data (i64.const 0xffff_ffff_ffff) "x"))
  "out of bounds memory access")

(module
  (memory i64 1)

  (func (export "copy") (param i64 i64 i64)
    local.get 0
    local.get 1
    local.get 2
    memory.copy)

  (func (export "fill") (param i64 i32 i64)
    local.get 0
    local.get 1
    local.get 2
    memory.fill)

  (func (export "init") (param i64 i32 i32)
    local.get 0
    local.get 1
    local.get 2
    memory.init 0)

  (data "1234")
)

(invoke "copy" (i64.const 0) (i64.const 0) (i64.const 100))
(assert_trap
  (invoke "copy" (i64.const 0x1_0000_0000) (i64.const 0) (i64.const 0))
  "out of bounds memory access")
(assert_trap
  (invoke "copy" (i64.const 0) (i64.const 0x1_0000_0000) (i64.const 0))
  "out of bounds memory access")
(assert_trap
  (invoke "copy" (i64.const 0) (i64.const 0) (i64.const 0x1_0000_0000))
  "out of bounds memory access")

(invoke "fill" (i64.const 0) (i32.const 0) (i64.const 100))
(assert_trap
  (invoke "fill" (i64.const 0x1_0000_0000) (i32.const 0) (i64.const 0))
  "out of bounds memory access")
(assert_trap
  (invoke "fill" (i64.const 0) (i32.const 0) (i64.const 0x1_0000_0000))
  "out of bounds memory access")

(invoke "init" (i64.const 0) (i32.const 0) (i32.const 0))
(invoke "init" (i64.const 0) (i32.const 0) (i32.const 4))
(assert_trap
  (invoke "fill" (i64.const 0x1_0000_0000) (i32.const 0) (i64.const 0))
  "out of bounds memory access")
tests/misc_testsuite/memory64/codegen.wast (new file, 38 lines)
@@ -0,0 +1,38 @@
;; make sure everything codegens correctly and has no cranelift verifier errors
(module
  (memory i64 1)
  (func (export "run")
    i64.const 0 i64.const 0 i64.const 0 memory.copy
    i64.const 0 i32.const 0 i64.const 0 memory.fill
    i64.const 0 i32.const 0 i32.const 0 memory.init $seg
    memory.size drop
    i64.const 0 memory.grow drop

    i64.const 0 i32.load drop
    i64.const 0 i64.load drop
    i64.const 0 f32.load drop
    i64.const 0 f64.load drop
    i64.const 0 i32.load8_s drop
    i64.const 0 i32.load8_u drop
    i64.const 0 i32.load16_s drop
    i64.const 0 i32.load16_u drop
    i64.const 0 i64.load8_s drop
    i64.const 0 i64.load8_u drop
    i64.const 0 i64.load16_s drop
    i64.const 0 i64.load16_u drop
    i64.const 0 i64.load32_s drop
    i64.const 0 i64.load32_u drop
    i64.const 0 i32.const 0 i32.store
    i64.const 0 i64.const 0 i64.store
    i64.const 0 f32.const 0 f32.store
    i64.const 0 f64.const 0 f64.store
    i64.const 0 i32.const 0 i32.store8
    i64.const 0 i32.const 0 i32.store16
    i64.const 0 i64.const 0 i64.store8
    i64.const 0 i64.const 0 i64.store16
    i64.const 0 i64.const 0 i64.store32
  )

  (data $seg "..")
)
(assert_return (invoke "run"))
tests/misc_testsuite/memory64/linking.wast (new file, 12 lines)
@@ -0,0 +1,12 @@
(module $export32 (memory (export "m") 1))
(module $export64 (memory (export "m") i64 1))

(module (import "export64" "m" (memory i64 1)))
(module (import "export32" "m" (memory i32 1)))

(assert_unlinkable
  (module (import "export32" "m" (memory i64 1)))
  "memory types incompatible")
(assert_unlinkable
  (module (import "export64" "m" (memory 1)))
  "memory types incompatible")
tests/misc_testsuite/memory64/more-than-4gb.wast (new file, 69 lines)
@@ -0,0 +1,69 @@
;; try to create as few 4gb memories as we can to reduce the memory consumption
;; of this test, so create one up front here and use it below.
(module $memory
  (memory (export "memory") i64 0x1_0001 0x1_0005)
)

(module
  (import "memory" "memory" (memory i64 0))
  (func (export "grow") (param i64) (result i64)
    local.get 0
    memory.grow)
  (func (export "size") (result i64)
    memory.size)
)
(assert_return (invoke "grow" (i64.const 0)) (i64.const 0x1_0001))
(assert_return (invoke "size") (i64.const 0x1_0001))

;; TODO: unsure how to test this. Right now growth of any 64-bit memory will
;; always reallocate and copy all the previous memory to a new location, and
;; this means that we're doing a 4gb copy here. That's pretty slow and is just
;; copying a bunch of zeros, so until we optimize that it's not really feasible
;; to test growth in CI and such.
(;
(assert_return (invoke "grow" (i64.const 1)) (i64.const 0x1_0001))
(assert_return (invoke "size") (i64.const 0x1_0002))
;)

;; Test that initialization with a 64-bit global works
(module $offset
  (global (export "offset") i64 (i64.const 0x1_0000_0000))
)
(module
  (import "offset" "offset" (global i64))
  (import "memory" "memory" (memory i64 0))
  (data (global.get 0) "\01\02\03\04")

  (func (export "load32") (param i64) (result i32)
    local.get 0
    i32.load)
)
(assert_return (invoke "load32" (i64.const 0x1_0000_0000)) (i32.const 0x04030201))

;; Test that initialization with a 64-bit data segment works
(module $offset
  (global (export "offset") i64 (i64.const 0x1_0000_0000))
)
(module
  (import "memory" "memory" (memory i64 0))
  (data (i64.const 0x1_0000_0004) "\01\02\03\04")

  (func (export "load32") (param i64) (result i32)
    local.get 0
    i32.load)
)
(assert_return (invoke "load32" (i64.const 0x1_0000_0004)) (i32.const 0x04030201))

;; loading with a huge offset works
(module $offset
  (global (export "offset") i64 (i64.const 0x1_0000_0000))
)
(module
  (import "memory" "memory" (memory i64 0))
  (data (i64.const 0x1_0000_0004) "\01\02\03\04")

  (func (export "load32") (param i64) (result i32)
    local.get 0
    i32.load offset=0x100000000)
)
(assert_return (invoke "load32" (i64.const 2)) (i32.const 0x02010403))
tests/misc_testsuite/memory64/multi-memory.wast (new file, 48 lines)
@@ -0,0 +1,48 @@
;; 64 => 64
(module
  (memory $a i64 1)
  (memory $b i64 1)

  (func (export "copy") (param i64 i64 i64)
    local.get 0
    local.get 1
    local.get 2
    memory.copy $a $b)
)
(invoke "copy" (i64.const 0) (i64.const 0) (i64.const 100))
(assert_trap
  (invoke "copy" (i64.const 0x1_0000_0000) (i64.const 0) (i64.const 0))
  "out of bounds memory access")

;; 32 => 64
(module
  (memory $a i32 1)
  (memory $b i64 1)

  (func (export "copy") (param i32 i64 i32)
    local.get 0
    local.get 1
    local.get 2
    memory.copy $a $b)
)
(invoke "copy" (i32.const 0) (i64.const 0) (i32.const 100))
(assert_trap
  (invoke "copy" (i32.const 0) (i64.const 0x1_0000_0000) (i32.const 0))
  "out of bounds memory access")

;; 64 => 32
(module
  (memory $a i64 1)
  (memory $b i32 1)

  (func (export "copy") (param i64 i32 i32)
    local.get 0
    local.get 1
    local.get 2
    memory.copy $a $b)
)
(invoke "copy" (i64.const 0) (i32.const 0) (i32.const 100))
(assert_trap
  (invoke "copy" (i64.const 0x1_0000_0000) (i32.const 0) (i32.const 0))
  "out of bounds memory access")
tests/misc_testsuite/memory64/offsets.wast (new file, 11 lines)
@@ -0,0 +1,11 @@
(module
  (memory i64 1)
  (func (export "load1") (result i32)
    i64.const 0xffff_ffff_ffff_fff0
    i32.load offset=16)
  (func (export "load2") (result i32)
    i64.const 16
    i32.load offset=0xfffffffffffffff0)
)
(assert_trap (invoke "load1") "out of bounds memory access")
(assert_trap (invoke "load2") "out of bounds memory access")
tests/misc_testsuite/memory64/simd.wast (new file, 29 lines)
@@ -0,0 +1,29 @@
;; make sure everything codegens correctly and has no cranelift verifier errors
(module
  (memory i64 1)
  (func (export "run")
    i64.const 0 v128.load drop
    i64.const 0 v128.load8x8_s drop
    i64.const 0 v128.load8x8_u drop
    i64.const 0 v128.load16x4_s drop
    i64.const 0 v128.load16x4_u drop
    i64.const 0 v128.load32x2_s drop
    i64.const 0 v128.load32x2_u drop
    i64.const 0 v128.load8_splat drop
    i64.const 0 v128.load16_splat drop
    i64.const 0 v128.load32_splat drop
    i64.const 0 v128.load64_splat drop
    i64.const 0 i32.const 0 i8x16.splat v128.store
    i64.const 0 i32.const 0 i8x16.splat v128.store8_lane 0
    i64.const 0 i32.const 0 i8x16.splat v128.store16_lane 0
    i64.const 0 i32.const 0 i8x16.splat v128.store32_lane 0
    i64.const 0 i32.const 0 i8x16.splat v128.store64_lane 0
    i64.const 0 i32.const 0 i8x16.splat v128.load8_lane 0 drop
    i64.const 0 i32.const 0 i8x16.splat v128.load16_lane 0 drop
    i64.const 0 i32.const 0 i8x16.splat v128.load32_lane 0 drop
    i64.const 0 i32.const 0 i8x16.splat v128.load64_lane 0 drop
    i64.const 0 v128.load32_zero drop
    i64.const 0 v128.load64_zero drop
  )
)
(assert_return (invoke "run"))
tests/misc_testsuite/memory64/threads.wast (new file, 79 lines)
@@ -0,0 +1,79 @@
;; make sure everything codegens correctly and has no cranelift verifier errors
(module
  (memory i64 1)
  (func (export "run")
    i64.const 0 i32.atomic.load drop
    i64.const 0 i64.atomic.load drop
    i64.const 0 i32.atomic.load8_u drop
    i64.const 0 i32.atomic.load16_u drop
    i64.const 0 i64.atomic.load8_u drop
    i64.const 0 i64.atomic.load16_u drop
    i64.const 0 i64.atomic.load32_u drop
    i64.const 0 i32.const 0 i32.atomic.store
    i64.const 0 i64.const 0 i64.atomic.store
    i64.const 0 i32.const 0 i32.atomic.store8
    i64.const 0 i32.const 0 i32.atomic.store16
    i64.const 0 i64.const 0 i64.atomic.store8
    i64.const 0 i64.const 0 i64.atomic.store16
    i64.const 0 i64.const 0 i64.atomic.store32
    i64.const 0 i32.const 0 i32.atomic.rmw.add drop
    i64.const 0 i64.const 0 i64.atomic.rmw.add drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.add_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.add_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.add_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.add_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.add_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw.sub drop
    i64.const 0 i64.const 0 i64.atomic.rmw.sub drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.sub_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.sub_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.sub_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.sub_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.sub_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw.and drop
    i64.const 0 i64.const 0 i64.atomic.rmw.and drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.and_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.and_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.and_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.and_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.and_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw.or drop
    i64.const 0 i64.const 0 i64.atomic.rmw.or drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.or_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.or_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.or_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.or_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.or_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw.xor drop
    i64.const 0 i64.const 0 i64.atomic.rmw.xor drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.xor_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.xor_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.xor_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.xor_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.xor_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw.xchg drop
    i64.const 0 i64.const 0 i64.atomic.rmw.xchg drop
    i64.const 0 i32.const 0 i32.atomic.rmw8.xchg_u drop
    i64.const 0 i32.const 0 i32.atomic.rmw16.xchg_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw8.xchg_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw16.xchg_u drop
    i64.const 0 i64.const 0 i64.atomic.rmw32.xchg_u drop
    i64.const 0 i32.const 0 i32.const 0 i32.atomic.rmw.cmpxchg drop
    i64.const 0 i64.const 0 i64.const 0 i64.atomic.rmw.cmpxchg drop
    i64.const 0 i32.const 0 i32.const 0 i32.atomic.rmw8.cmpxchg_u drop
    i64.const 0 i32.const 0 i32.const 0 i32.atomic.rmw16.cmpxchg_u drop
    i64.const 0 i64.const 0 i64.const 0 i64.atomic.rmw8.cmpxchg_u drop
    i64.const 0 i64.const 0 i64.const 0 i64.atomic.rmw16.cmpxchg_u drop
    i64.const 0 i64.const 0 i64.const 0 i64.atomic.rmw32.cmpxchg_u drop
  )

  ;; these are unimplemented intrinsics that trap at runtime so just make sure
  ;; we can codegen instead of also testing execution.
  (func $just_validate_codegen
    i64.const 0 i32.const 0 memory.atomic.notify drop
    i64.const 0 i32.const 0 i64.const 0 memory.atomic.wait32 drop
    i64.const 0 i64.const 0 i64.const 0 memory.atomic.wait64 drop
  )
)

(assert_return (invoke "run"))