Files
wasmtime/tests/all/memory_creator.rs
Alex Crichton e68aa99588 Implement the memory64 proposal in Wasmtime (#3153)
* Implement the memory64 proposal in Wasmtime

This commit implements the WebAssembly [memory64 proposal][proposal] in
both Wasmtime and Cranelift. Cranelift needed very little work here
since most of it was already prepared for 64-bit memories at one point
or another. Most of the work in Wasmtime is refactoring: changing a
bunch of `u32` values to `u64` or `usize` as appropriate.

A number of internal and public interfaces are changing as a result of
this commit, for example:

* Accessors on `wasmtime::Memory` that work with pages now all return
  `u64` unconditionally rather than `u32`. This makes it possible to
  accommodate 64-bit memories with this API, but we may also want to
  consider `usize` here at some point since the host can't grow past
  `usize`-limited pages anyway.

* The `wasmtime::Limits` structure is removed in favor of
  minimum/maximum methods on table/memory types.

* Many libcall intrinsics called by jit code now unconditionally take
  `u64` arguments instead of `u32`. Return values are `usize`, however,
  since the return value, if successful, is always bounded by host
  memory while arguments can come from any guest.

* The `heap_addr` clif instruction now takes a 64-bit offset argument
  instead of a 32-bit one. It turns out that the legalization of
  `heap_addr` already worked with 64-bit offsets, so this change was
  fairly trivial to make.

* The runtime implementation of mmap-based linear memories has changed
  to largely work in `usize` quantities in its API and in bytes instead
  of pages. This simplifies various aspects and reflects that
  mmap-memories are always bound by `usize` since that's what the host
  is using to address things, and additionally most calculations care
  about bytes rather than pages except for the very edge where we're
  going to/from wasm.
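Several of the points above reduce to the same pattern: guest-facing
quantities are `u64` pages, host-side quantities are `usize` bytes, and
the conversion happens once at the boundary with checked operations. A
minimal sketch of that boundary (illustrative names only, not Wasmtime's
actual internal API):

```rust
use std::convert::TryFrom;

// Illustrative constant; matches the wasm page size of 64 KiB.
const WASM_PAGE_SIZE: usize = 65536;

/// Guest-facing page counts are `u64`; the host-side byte size is
/// `usize`. Conversion fails (`None`) when the request can't fit in
/// host memory, e.g. a 4GiB+ request on a 32-bit host.
fn wasm_pages_to_host_bytes(pages: u64) -> Option<usize> {
    usize::try_from(pages).ok()?.checked_mul(WASM_PAGE_SIZE)
}

/// The reverse direction can't fail on hosts of up to 64 bits, which is
/// why libcall-style return values can be `usize` even though their
/// arguments are `u64`.
fn host_bytes_to_wasm_pages(bytes: usize) -> u64 {
    u64::try_from(bytes / WASM_PAGE_SIZE).unwrap()
}
```

Note how the fallible direction uses `try_from`/`checked_mul` rather
than `as`, so an impossible request surfaces as an error instead of a
silent truncation.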

Overall I've tried to minimize `as` casts, using checked `try_from` and
checked arithmetic with either error
handling or explicit `unwrap()` calls to tell us about bugs in the
future. Most locations have a relatively obvious right choice, with
varying implications on different hosts, and I think they should all be
roughly the right shape, but time will tell. I mostly relied on the
compiler complaining about mismatched types to find the necessary
conversions, and I manually audited some of the more obvious locations.
I suspect we have a number of hidden locations that will panic on 32-bit
hosts if 64-bit modules try to run there, but otherwise I think we
should be generally ok (famous last words). In any case I wouldn't want
to enable this by default naturally until we've fuzzed it for some time.

In terms of the actual underlying implementation, no one should expect
memory64 to be all that fast. Right now it's implemented with
"dynamic" heaps which have a few consequences:

* All memory accesses are bounds-checked. I'm not sure how aggressively
  Cranelift tries to optimize out bounds checks, but I suspect not a ton
  since we haven't stressed this much historically.

* Heaps are always precisely sized. This means that every call to
  `memory.grow` will incur a `memcpy` of memory from the old heap to the
new. We probably want to at least look into `mremap` on Linux, and
otherwise implement schemes where dynamic heaps have some reserved
pages to grow into, to help amortize the cost of `memory.grow`.
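The two costs above can be seen in a toy model of a dynamic heap (a
sketch only; `DynamicHeap`, `load_u8`, and `grow` are made-up names,
not Wasmtime's runtime types):

```rust
use std::convert::TryFrom;

/// Toy dynamic heap: precisely sized, with every access bounds-checked.
struct DynamicHeap {
    bytes: Vec<u8>, // len == current wasm byte size, no reserved slack
}

impl DynamicHeap {
    /// Every load pays a bounds check; the guest address is a full u64
    /// and is converted with `try_from` rather than an `as` cast.
    fn load_u8(&self, addr: u64) -> Option<u8> {
        let addr = usize::try_from(addr).ok()?;
        self.bytes.get(addr).copied()
    }

    /// Growing allocates a fresh, exactly-sized buffer and copies the
    /// old contents over: this is the memcpy cost that an mremap-style
    /// scheme or reserved guard pages would amortize.
    fn grow(&mut self, delta_bytes: usize) -> Option<()> {
        let new_len = self.bytes.len().checked_add(delta_bytes)?;
        let mut new = vec![0u8; new_len];
        new[..self.bytes.len()].copy_from_slice(&self.bytes);
        self.bytes = new;
        Some(())
    }
}
```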

The memory64 spec test suite is now scheduled to run on CI, but as with
all the other spec test suites it's really not all that comprehensive.
I've tried adding more tests for basic things as I've had to implement
guards for them, but I wouldn't really consider the testing adequate
from just this PR itself. I did take care in one test to actually
allocate a 4GiB+ heap, and to avoid running it under the pooling
allocator or in emulation, since it might otherwise fail or take
excessively long.

[proposal]: https://github.com/WebAssembly/memory64/blob/master/proposals/memory64/Overview.md

* Fix some tests

* More test fixes

* Fix wasmtime tests

* Fix doctests

* Revert to 32-bit immediate offsets in `heap_addr`

This commit updates the generation of addresses in wasm code to always
use 32-bit offsets for `heap_addr`, and if the calculated offset is
bigger than 32-bits we emit a manual add with an overflow check.
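The manual add with an overflow check can be modeled as follows (a
sketch, not actual Cranelift output; `None` stands in for the trap the
generated code would raise):

```rust
/// For static offsets too large for `heap_addr`'s 32-bit immediate,
/// the effective address is computed with an explicit 64-bit add.
/// An overflowing add means the access is necessarily out of bounds,
/// so the generated code traps (modeled here as `None`).
fn add_large_offset(base: u64, offset: u64) -> Option<u64> {
    base.checked_add(offset)
}
```

Offsets at or below `u32::MAX` keep riding along in the immediate, where
the normal bounds check already covers them.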

* Disable memory64 for spectest fuzzing

* Fix wrong offset being added to heap addr

* More comments!

* Clarify bytes/pages
2021-08-12 09:40:20 -05:00


#[cfg(not(target_os = "windows"))]
mod not_for_windows {
    use wasmtime::*;
    use wasmtime_environ::{WASM32_MAX_PAGES, WASM_PAGE_SIZE};

    use libc::MAP_FAILED;
    use libc::{mmap, mprotect, munmap};
    use libc::{sysconf, _SC_PAGESIZE};
    use libc::{MAP_ANON, MAP_PRIVATE, PROT_NONE, PROT_READ, PROT_WRITE};

    use std::convert::TryFrom;
    use std::io::Error;
    use std::ptr::null_mut;
    use std::sync::{Arc, Mutex};

    struct CustomMemory {
        mem: usize,
        size: usize,
        guard_size: usize,
        used_wasm_bytes: usize,
        glob_bytes_counter: Arc<Mutex<usize>>,
    }

    impl CustomMemory {
        unsafe fn new(minimum: usize, maximum: usize, glob_counter: Arc<Mutex<usize>>) -> Self {
            let page_size = sysconf(_SC_PAGESIZE) as usize;
            let guard_size = page_size;
            let size = maximum + guard_size;
            // We rely on WASM_PAGE_SIZE being a multiple of the host page size.
            assert_eq!(size % page_size, 0);

            let mem = mmap(null_mut(), size, PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0);
            assert_ne!(mem, MAP_FAILED, "mmap failed: {}", Error::last_os_error());

            let r = mprotect(mem, minimum, PROT_READ | PROT_WRITE);
            assert_eq!(r, 0, "mprotect failed: {}", Error::last_os_error());
            *glob_counter.lock().unwrap() += minimum;

            Self {
                mem: mem as usize,
                size,
                guard_size,
                used_wasm_bytes: minimum,
                glob_bytes_counter: glob_counter,
            }
        }
    }

    impl Drop for CustomMemory {
        fn drop(&mut self) {
            *self.glob_bytes_counter.lock().unwrap() -= self.used_wasm_bytes;
            let r = unsafe { munmap(self.mem as *mut _, self.size) };
            assert_eq!(r, 0, "munmap failed: {}", Error::last_os_error());
        }
    }

    unsafe impl LinearMemory for CustomMemory {
        fn byte_size(&self) -> usize {
            self.used_wasm_bytes
        }

        fn maximum_byte_size(&self) -> Option<usize> {
            Some(self.size - self.guard_size)
        }

        fn grow_to(&mut self, new_size: usize) -> Option<()> {
            println!("grow to {:x}", new_size);
            let delta = new_size - self.used_wasm_bytes;
            unsafe {
                let start = (self.mem as *mut u8).add(self.used_wasm_bytes) as _;
                let r = mprotect(start, delta, PROT_READ | PROT_WRITE);
                assert_eq!(r, 0, "mprotect failed: {}", Error::last_os_error());
            }

            *self.glob_bytes_counter.lock().unwrap() += delta;
            self.used_wasm_bytes = new_size;
            Some(())
        }

        fn as_ptr(&self) -> *mut u8 {
            self.mem as *mut u8
        }
    }

    struct CustomMemoryCreator {
        pub num_created_memories: Mutex<usize>,
        pub num_total_bytes: Arc<Mutex<usize>>,
    }

    impl CustomMemoryCreator {
        pub fn new() -> Self {
            Self {
                num_created_memories: Mutex::new(0),
                num_total_bytes: Arc::new(Mutex::new(0)),
            }
        }
    }

    unsafe impl MemoryCreator for CustomMemoryCreator {
        fn new_memory(
            &self,
            ty: MemoryType,
            minimum: usize,
            maximum: Option<usize>,
            reserved_size: Option<usize>,
            guard_size: usize,
        ) -> Result<Box<dyn LinearMemory>, String> {
            assert_eq!(guard_size, 0);
            assert!(reserved_size.is_none());
            assert!(!ty.is_64());
            unsafe {
                let mem = Box::new(CustomMemory::new(
                    minimum,
                    maximum.unwrap_or(
                        usize::try_from(WASM32_MAX_PAGES * u64::from(WASM_PAGE_SIZE)).unwrap(),
                    ),
                    self.num_total_bytes.clone(),
                ));
                *self.num_created_memories.lock().unwrap() += 1;
                Ok(mem)
            }
        }
    }

    fn config() -> (Store<()>, Arc<CustomMemoryCreator>) {
        let mem_creator = Arc::new(CustomMemoryCreator::new());
        let mut config = Config::new();
        config
            .with_host_memory(mem_creator.clone())
            .static_memory_maximum_size(0)
            .dynamic_memory_guard_size(0);
        (Store::new(&Engine::new(&config).unwrap(), ()), mem_creator)
    }

    #[test]
    fn host_memory() -> anyhow::Result<()> {
        let (mut store, mem_creator) = config();
        let module = Module::new(
            store.engine(),
            r#"
                (module
                    (memory (export "memory") 1)
                )
            "#,
        )?;
        Instance::new(&mut store, &module, &[])?;
        assert_eq!(*mem_creator.num_created_memories.lock().unwrap(), 1);
        Ok(())
    }

    #[test]
    fn host_memory_grow() -> anyhow::Result<()> {
        let (mut store, mem_creator) = config();
        let module = Module::new(
            store.engine(),
            r#"
                (module
                    (func $f (drop (memory.grow (i32.const 1))))
                    (memory (export "memory") 1 2)
                    (start $f)
                )
            "#,
        )?;
        Instance::new(&mut store, &module, &[])?;
        let instance2 = Instance::new(&mut store, &module, &[])?;
        assert_eq!(*mem_creator.num_created_memories.lock().unwrap(), 2);
        assert_eq!(
            instance2
                .get_memory(&mut store, "memory")
                .unwrap()
                .size(&store),
            2
        );

        // Take the lock outside the assert so it won't get poisoned on assert
        // failure. Note that this counter tracks bytes, not pages.
        let tot_bytes = *mem_creator.num_total_bytes.lock().unwrap();
        assert_eq!(tot_bytes, (4 * WASM_PAGE_SIZE) as usize);

        drop(store);
        let tot_bytes = *mem_creator.num_total_bytes.lock().unwrap();
        assert_eq!(tot_bytes, 0);
        Ok(())
    }
}