* Reel in unsafety around `InstanceHandle`

  This commit is an attempt, or at least is targeted at being a start, at reeling in the unsafety around the `InstanceHandle` type. Currently this type represents a sort of moral `Rc<Instance>`, but is a bit more specialized since the underlying memory is allocated through mmap. Additionally, though, `InstanceHandle` exposes a fundamental flaw in its safety by safely allowing mutable access so long as you have `&mut InstanceHandle`. A `&mut InstanceHandle`, however, is trivially created by simply cloning an `InstanceHandle` to get an owned reference. This means that `&mut InstanceHandle` does not actually provide any guarantees about uniqueness, so there's no more safety than `&InstanceHandle` itself.

  This commit removes all `&mut self` APIs from `InstanceHandle`, additionally removing some where `&self` was `unsafe` and `&mut self` was safe (since it was trivial to subvert this "safety"). In doing so, interior mutability patterns are now used much more extensively through structures such as `Table` and `Memory`. Additionally, a number of methods were refactored to be a bit clearer and use helper functions where possible.

  This is a relatively large commit unfortunately, but it snowballed very quickly into touching quite a few places. My hope, though, is that this will prevent developers working on wasmtime internals, as well as developers still yet to migrate to the `wasmtime` crate, from falling into trivial unsafe traps by accidentally using `&mut` when they can't. All existing users relying on `&mut` will need to migrate to some form of interior mutability, such as using `RefCell` or `Cell`.

  This commit also marks `InstanceHandle::new` as an `unsafe` function. The rationale for this is that `&mut`-safety is only the beginning for the safety of `InstanceHandle`. In general the wasmtime internals are extremely unsafe and haven't been audited for appropriate usage of `unsafe`. Until that's done, it's hoped that we can warn users with this `unsafe` constructor and otherwise push users to the `wasmtime` crate, which we know is safe.

* Fix windows build

* Wrap up mutable memory state in one structure rather than having separate fields

* Use `Cell::set`, not `Cell::replace`, where possible

* Add a helper function for offsets from VMContext

* Fix a typo from merging

* rustfmt

* Use `try_from`, not `as`

* Tweak style of some setters
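The migration the commit message describes — replacing `&mut self` methods with interior mutability reachable through `&self` — can be sketched in miniature like this. The `Counter` type below is purely illustrative and not part of the wasmtime codebase; it just shows the `Cell` pattern that cloneable, shared handles force you into:

```rust
use std::cell::Cell;

// A shared handle can't soundly hand out `&mut self` (clones make the
// mutable reference non-unique), so mutation goes through `Cell` instead.
struct Counter {
    value: Cell<u32>,
}

impl Counter {
    // Callable through `&self`: no exclusive borrow required.
    fn increment(&self) -> u32 {
        let prev = self.value.get();
        self.value.set(prev + 1);
        prev
    }
}

fn main() {
    let c = Counter { value: Cell::new(0) };
    assert_eq!(c.increment(), 0);
    assert_eq!(c.increment(), 1);
    assert_eq!(c.value.get(), 2);
}
```

For non-`Copy` state (like the mmap owned by `Memory` below), `RefCell` plays the same role, with dynamic borrow checking instead of `get`/`set`.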
154 lines
5.5 KiB
Rust
//! Memory management for linear memories.
//!
//! `LinearMemory` is to WebAssembly linear memories what `Table` is to WebAssembly tables.

use crate::mmap::Mmap;
use crate::vmcontext::VMMemoryDefinition;
use more_asserts::{assert_ge, assert_le};
use std::cell::RefCell;
use std::convert::TryFrom;
use wasmtime_environ::{MemoryPlan, MemoryStyle, WASM_MAX_PAGES, WASM_PAGE_SIZE};
/// A linear memory instance.
#[derive(Debug)]
pub struct LinearMemory {
    // The underlying allocation.
    mmap: RefCell<WasmMmap>,

    // The optional maximum size in wasm pages of this linear memory.
    maximum: Option<u32>,

    // Size in bytes of extra guard pages after the end to optimize loads and stores with
    // constant offsets.
    offset_guard_size: usize,

    // Records whether we're using a bounds-checking strategy which requires
    // handlers to catch trapping accesses.
    pub(crate) needs_signal_handlers: bool,
}

#[derive(Debug)]
struct WasmMmap {
    // Our OS allocation of mmap'd memory.
    alloc: Mmap,
    // The current logical size in wasm pages of this linear memory.
    size: u32,
}
impl LinearMemory {
    /// Create a new linear memory instance with specified minimum and maximum number of wasm pages.
    pub fn new(plan: &MemoryPlan) -> Result<Self, String> {
        // `maximum` cannot be set to more than `65536` pages.
        assert_le!(plan.memory.minimum, WASM_MAX_PAGES);
        assert!(plan.memory.maximum.is_none() || plan.memory.maximum.unwrap() <= WASM_MAX_PAGES);

        let offset_guard_bytes = plan.offset_guard_size as usize;

        // If we have an offset guard, or if we're doing the static memory
        // allocation strategy, we need signal handlers to catch out of bounds
        // accesses.
        let needs_signal_handlers = offset_guard_bytes > 0
            || match plan.style {
                MemoryStyle::Dynamic => false,
                MemoryStyle::Static { .. } => true,
            };

        let minimum_pages = match plan.style {
            MemoryStyle::Dynamic => plan.memory.minimum,
            MemoryStyle::Static { bound } => {
                assert_ge!(bound, plan.memory.minimum);
                bound
            }
        } as usize;
        let minimum_bytes = minimum_pages.checked_mul(WASM_PAGE_SIZE as usize).unwrap();
        let request_bytes = minimum_bytes.checked_add(offset_guard_bytes).unwrap();
        let mapped_pages = plan.memory.minimum as usize;
        let mapped_bytes = mapped_pages * WASM_PAGE_SIZE as usize;

        let mmap = WasmMmap {
            alloc: Mmap::accessible_reserved(mapped_bytes, request_bytes)?,
            size: plan.memory.minimum,
        };

        Ok(Self {
            mmap: mmap.into(),
            maximum: plan.memory.maximum,
            offset_guard_size: offset_guard_bytes,
            needs_signal_handlers,
        })
    }
    /// Returns the number of allocated wasm pages.
    pub fn size(&self) -> u32 {
        self.mmap.borrow().size
    }
    /// Grow memory by the specified amount of wasm pages.
    ///
    /// Returns `None` if memory can't be grown by the specified amount
    /// of wasm pages.
    pub fn grow(&self, delta: u32) -> Option<u32> {
        // Optimization of memory.grow 0 calls.
        let mut mmap = self.mmap.borrow_mut();
        if delta == 0 {
            return Some(mmap.size);
        }

        let new_pages = match mmap.size.checked_add(delta) {
            Some(new_pages) => new_pages,
            // Linear memory size overflow.
            None => return None,
        };
        let prev_pages = mmap.size;

        if let Some(maximum) = self.maximum {
            if new_pages > maximum {
                // Linear memory size would exceed the declared maximum.
                return None;
            }
        }

        // Wasm linear memories are never allowed to grow beyond what is
        // indexable. If the memory has no maximum, enforce the greatest
        // limit here.
        if new_pages >= WASM_MAX_PAGES {
            // Linear memory size would exceed the index range.
            return None;
        }

        let delta_bytes = usize::try_from(delta).unwrap() * WASM_PAGE_SIZE as usize;
        let prev_bytes = usize::try_from(prev_pages).unwrap() * WASM_PAGE_SIZE as usize;
        let new_bytes = usize::try_from(new_pages).unwrap() * WASM_PAGE_SIZE as usize;

        if new_bytes > mmap.alloc.len() - self.offset_guard_size {
            // If the new size is within the declared maximum, but needs more memory than we
            // have on hand, it's a dynamic heap and it can move.
            let guard_bytes = self.offset_guard_size;
            let request_bytes = new_bytes.checked_add(guard_bytes)?;

            let mut new_mmap = Mmap::accessible_reserved(new_bytes, request_bytes).ok()?;

            let copy_len = mmap.alloc.len() - self.offset_guard_size;
            new_mmap.as_mut_slice()[..copy_len].copy_from_slice(&mmap.alloc.as_slice()[..copy_len]);

            mmap.alloc = new_mmap;
        } else if delta_bytes > 0 {
            // Make the newly allocated pages accessible.
            mmap.alloc.make_accessible(prev_bytes, delta_bytes).ok()?;
        }

        mmap.size = new_pages;

        Some(prev_pages)
    }
    /// Return a `VMMemoryDefinition` for exposing the memory to compiled wasm code.
    pub fn vmmemory(&self) -> VMMemoryDefinition {
        let mut mmap = self.mmap.borrow_mut();
        VMMemoryDefinition {
            base: mmap.alloc.as_mut_ptr(),
            current_length: mmap.size as usize * WASM_PAGE_SIZE as usize,
        }
    }
}
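The `grow` contract above — return the previous size in pages on success, `None` on overflow, on exceeding the declared maximum, or on exceeding the wasm index range — can be illustrated with a simplified, mmap-free model. `ToyMemory` is a hypothetical name for illustration only, not a wasmtime type; it keeps the page-limit checks and the `Cell`-based interior mutability but drops the allocation logic:

```rust
use std::cell::Cell;

// Mirrors WASM_MAX_PAGES: a 32-bit wasm memory can index at most 65536 pages.
const MAX_PAGES: u32 = 0x10000;

struct ToyMemory {
    // Interior mutability so `grow` can take `&self`, as in `LinearMemory`.
    size: Cell<u32>,
    maximum: Option<u32>,
}

impl ToyMemory {
    // Returns the previous size in pages, or `None` if growth is not allowed.
    fn grow(&self, delta: u32) -> Option<u32> {
        let prev = self.size.get();
        // Linear memory size overflow.
        let new = prev.checked_add(delta)?;
        // Declared maximum, then the index-range limit.
        if self.maximum.map_or(false, |m| new > m) || new >= MAX_PAGES {
            return None;
        }
        self.size.set(new);
        Some(prev)
    }
}

fn main() {
    let m = ToyMemory { size: Cell::new(1), maximum: Some(4) };
    assert_eq!(m.grow(0), Some(1)); // no-op grow still reports current size
    assert_eq!(m.grow(2), Some(1)); // grew 1 -> 3, returns the previous size
    assert_eq!(m.grow(2), None); // 3 + 2 would exceed the maximum of 4
    assert_eq!(m.size.get(), 3); // failed grow leaves the size untouched
}
```

The real method additionally remaps or extends the underlying mmap, but the page accounting and the `Some(prev_pages)`/`None` return convention are the same.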