Implement the memory64 proposal in Wasmtime (#3153)
* Implement the memory64 proposal in Wasmtime

This commit implements the WebAssembly [memory64 proposal][proposal] in both Wasmtime and Cranelift. Cranelift ended up needing very little work here since most of it was already prepared for 64-bit memories at one point or another. Most of the work in Wasmtime is refactoring, changing a bunch of `u32` values to something else. A number of internal and public interfaces change as a result of this commit, for example:

* Accessors on `wasmtime::Memory` that work with pages now all return `u64` unconditionally rather than `u32`. This makes it possible to accommodate 64-bit memories with this API, but we may also want to consider `usize` here at some point since the host can't grow past `usize`-limited pages anyway.

* The `wasmtime::Limits` structure is removed in favor of minimum/maximum methods on table/memory types.

* Many libcall intrinsics called by jit code now unconditionally take `u64` arguments instead of `u32`. Return values are `usize`, however, since the return value, if successful, is always bounded by host memory while arguments can come from any guest.

* The `heap_addr` clif instruction now takes a 64-bit offset argument instead of a 32-bit one. It turns out that the legalization of `heap_addr` already worked with 64-bit offsets, so this change was fairly trivial to make.

* The runtime implementation of mmap-based linear memories has changed to largely work in `usize` quantities in its API and in bytes instead of pages. This simplifies various aspects and reflects that mmap-memories are always bound by `usize` since that's what the host is using to address things, and additionally most calculations care about bytes rather than pages except at the very edge where we're going to/from wasm.

Overall I've tried to minimize the number of `as` casts, using checked `try_from` and checked arithmetic with either error handling or explicit `unwrap()` calls to tell us about bugs in the future. Most locations have relatively obvious things to do with various implications on various hosts, and I think they should all be roughly of the right shape, but time will tell. I mostly relied on the compiler complaining that various types weren't aligned to figure out type-casting, and I manually audited some of the more obvious locations. I suspect we have a number of hidden locations that will panic on 32-bit hosts if 64-bit modules try to run there, but otherwise I think we should be generally ok (famous last words). In any case I wouldn't want to enable this by default until we've fuzzed it for some time.

In terms of the actual underlying implementation, no one should expect memory64 to be all that fast. Right now it's implemented with "dynamic" heaps, which has a few consequences:

* All memory accesses are bounds-checked. I'm not sure how aggressively Cranelift tries to optimize out bounds checks, but I suspect not a ton since we haven't stressed this much historically.

* Heaps are always precisely sized. This means that every call to `memory.grow` will incur a `memcpy` of memory from the old heap to the new. We probably want to at least look into `mremap` on Linux and otherwise try to implement schemes where dynamic heaps have some reserved pages to grow into to help amortize the cost of `memory.grow`.

The memory64 spec test suite is scheduled to now run on CI, but as with all the other spec test suites it's really not all that comprehensive. I've tried adding more tests for basic things as I've had to implement guards for them, but I wouldn't really consider the testing adequate from just this PR itself. I did try to take care in one test to actually allocate a 4GB+ heap and then avoid running that in the pooling allocator or under emulation, because otherwise it may fail or take excessively long.

[proposal]: https://github.com/WebAssembly/memory64/blob/master/proposals/memory64/Overview.md

* Fix some tests
* More test fixes
* Fix wasmtime tests
* Fix doctests
* Revert to 32-bit immediate offsets in `heap_addr`

  This commit updates the generation of addresses in wasm code to always use 32-bit offsets for `heap_addr`, and if the calculated offset is larger than 32 bits we emit a manual add with an overflow check.

* Disable memory64 for spectest fuzzing
* Fix wrong offset being added to heap addr
* More comments!
* Clarify bytes/pages
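Most of the page/byte conversions described above reduce to the same checked pattern: multiply a (possibly 64-bit) guest page count by the 64 KiB wasm page size with overflow checking, then fit the result into a host `usize`. A minimal standalone sketch of that pattern (the `WASM_PAGE_SIZE` constant and `pages_to_bytes` helper here are illustrative stand-ins, not the actual helpers in this commit):

```rust
use std::convert::TryFrom;

// Stand-in for the page-size constant exported by `wasmtime_environ`.
const WASM_PAGE_SIZE: u64 = 65536;

/// Convert a guest-supplied page count (`u64` for memory64) into a host byte
/// count, returning `None` if the multiplication overflows or the result
/// doesn't fit in `usize` on this host.
fn pages_to_bytes(pages: u64) -> Option<usize> {
    pages
        .checked_mul(WASM_PAGE_SIZE)
        .and_then(|bytes| usize::try_from(bytes).ok())
}

fn main() {
    assert_eq!(pages_to_bytes(1), Some(65536));
    // A 64-bit guest can request more pages than any host can address; the
    // checked path turns that into a failed grow instead of a silent wrap.
    assert_eq!(pages_to_bytes(u64::MAX), None);
}
```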
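The first hunk below switches `ResourceLimiter::memory_growing` from wasm-page units (`u32`) to byte units (`usize`). As a rough sketch of what that looks like from the host side, here is a self-contained mirror of just that hook (the `MemoryLimiter` trait and `CapAtBytes` type are illustrative stand-ins, not the real trait, which also has a table-growth callback):

```rust
/// Simplified stand-in for the byte-based growth hook added in this commit.
trait MemoryLimiter {
    fn memory_growing(&mut self, current: usize, desired: usize, maximum: Option<usize>) -> bool;
}

/// Hypothetical limiter that refuses to let any linear memory grow past a
/// fixed byte budget chosen by the host.
struct CapAtBytes {
    cap: usize,
}

impl MemoryLimiter for CapAtBytes {
    fn memory_growing(&mut self, _current: usize, desired: usize, maximum: Option<usize>) -> bool {
        // `desired` and `maximum` are now byte counts rather than page counts,
        // so the host compares them directly against its own byte budget.
        desired <= self.cap && maximum.map_or(true, |max| desired <= max)
    }
}

fn main() {
    let mut limiter = CapAtBytes { cap: 1 << 20 }; // 1 MiB budget
    assert!(limiter.memory_growing(0, 64 * 1024, None));
    assert!(!limiter.memory_growing(0, 2 << 20, None));
}
```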
@@ -45,18 +45,25 @@ pub const DEFAULT_MEMORY_LIMIT: usize = 10000;
/// An instance can be created with a resource limiter so that hosts can take into account
/// non-WebAssembly resource usage to determine if a linear memory or table should grow.
pub trait ResourceLimiter {
    /// Notifies the resource limiter that an instance's linear memory has been requested to grow.
    /// Notifies the resource limiter that an instance's linear memory has been
    /// requested to grow.
    ///
    /// * `current` is the current size of the linear memory in WebAssembly page units.
    /// * `desired` is the desired size of the linear memory in WebAssembly page units.
    /// * `maximum` is either the linear memory's maximum or a maximum from an instance allocator,
    /// also in WebAssembly page units. A value of `None` indicates that the linear memory is
    /// unbounded.
    /// * `current` is the current size of the linear memory in bytes.
    /// * `desired` is the desired size of the linear memory in bytes.
    /// * `maximum` is either the linear memory's maximum or a maximum from an
    /// instance allocator, also in bytes. A value of `None`
    /// indicates that the linear memory is unbounded.
    ///
    /// This function should return `true` to indicate that the growing operation is permitted or
    /// `false` if not permitted. Returning `true` when a maximum has been exceeded will have no
    /// effect as the linear memory will not grow.
    fn memory_growing(&mut self, current: u32, desired: u32, maximum: Option<u32>) -> bool;
    /// This function should return `true` to indicate that the growing
    /// operation is permitted or `false` if not permitted. Returning `true`
    /// when a maximum has been exceeded will have no effect as the linear
    /// memory will not grow.
    ///
    /// This function is not guaranteed to be invoked for all requests to
    /// `memory.grow`. Requests where the allocation requested size doesn't fit
    /// in `usize` or exceeds the memory's listed maximum size may not invoke
    /// this method.
    fn memory_growing(&mut self, current: usize, desired: usize, maximum: Option<usize>) -> bool;

    /// Notifies the resource limiter that an instance's table has been requested to grow.
    ///
@@ -406,8 +413,9 @@ impl Instance {
|
||||
/// Grow memory by the specified amount of pages.
|
||||
///
|
||||
/// Returns `None` if memory can't be grown by the specified amount
|
||||
/// of pages.
|
||||
pub(crate) fn memory_grow(&mut self, index: MemoryIndex, delta: u32) -> Option<u32> {
|
||||
/// of pages. Returns `Some` with the old size in bytes if growth was
|
||||
/// successful.
|
||||
pub(crate) fn memory_grow(&mut self, index: MemoryIndex, delta: u64) -> Option<usize> {
|
||||
let (idx, instance) = if let Some(idx) = self.module.defined_memory_index(index) {
|
||||
(idx, self)
|
||||
} else {
|
||||
@@ -616,26 +624,18 @@ impl Instance {
|
||||
pub(crate) fn memory_copy(
|
||||
&mut self,
|
||||
dst_index: MemoryIndex,
|
||||
dst: u32,
|
||||
dst: u64,
|
||||
src_index: MemoryIndex,
|
||||
src: u32,
|
||||
len: u32,
|
||||
src: u64,
|
||||
len: u64,
|
||||
) -> Result<(), Trap> {
|
||||
// https://webassembly.github.io/reference-types/core/exec/instructions.html#exec-memory-copy
|
||||
|
||||
let src_mem = self.get_memory(src_index);
|
||||
let dst_mem = self.get_memory(dst_index);
|
||||
|
||||
if src.checked_add(len).map_or(true, |n| {
|
||||
usize::try_from(n).unwrap() > src_mem.current_length
|
||||
}) || dst.checked_add(len).map_or(true, |m| {
|
||||
usize::try_from(m).unwrap() > dst_mem.current_length
|
||||
}) {
|
||||
return Err(Trap::wasm(ir::TrapCode::HeapOutOfBounds));
|
||||
}
|
||||
|
||||
let dst = usize::try_from(dst).unwrap();
|
||||
let src = usize::try_from(src).unwrap();
|
||||
let src = self.validate_inbounds(src_mem.current_length, src, len)?;
|
||||
let dst = self.validate_inbounds(dst_mem.current_length, dst, len)?;
|
||||
|
||||
// Bounds and casts are checked above, by this point we know that
|
||||
// everything is safe.
|
||||
@@ -648,6 +648,19 @@ impl Instance {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn validate_inbounds(&self, max: usize, ptr: u64, len: u64) -> Result<usize, Trap> {
|
||||
let oob = || Trap::wasm(ir::TrapCode::HeapOutOfBounds);
|
||||
let end = ptr
|
||||
.checked_add(len)
|
||||
.and_then(|i| usize::try_from(i).ok())
|
||||
.ok_or_else(oob)?;
|
||||
if end > max {
|
||||
Err(oob())
|
||||
} else {
|
||||
Ok(ptr as usize)
|
||||
}
|
||||
}
|
||||
|
||||
/// Perform the `memory.fill` operation on a locally defined memory.
|
||||
///
|
||||
/// # Errors
|
||||
@@ -656,25 +669,17 @@ impl Instance {
|
||||
pub(crate) fn memory_fill(
|
||||
&mut self,
|
||||
memory_index: MemoryIndex,
|
||||
dst: u32,
|
||||
val: u32,
|
||||
len: u32,
|
||||
dst: u64,
|
||||
val: u8,
|
||||
len: u64,
|
||||
) -> Result<(), Trap> {
|
||||
let memory = self.get_memory(memory_index);
|
||||
|
||||
if dst.checked_add(len).map_or(true, |m| {
|
||||
usize::try_from(m).unwrap() > memory.current_length
|
||||
}) {
|
||||
return Err(Trap::wasm(ir::TrapCode::HeapOutOfBounds));
|
||||
}
|
||||
|
||||
let dst = isize::try_from(dst).unwrap();
|
||||
let val = val as u8;
|
||||
let dst = self.validate_inbounds(memory.current_length, dst, len)?;
|
||||
|
||||
// Bounds and casts are checked above, by this point we know that
|
||||
// everything is safe.
|
||||
unsafe {
|
||||
let dst = memory.base.offset(dst);
|
||||
let dst = memory.base.add(dst);
|
||||
ptr::write_bytes(dst, val, len as usize);
|
||||
}
|
||||
|
||||
@@ -692,7 +697,7 @@ impl Instance {
|
||||
&mut self,
|
||||
memory_index: MemoryIndex,
|
||||
data_index: DataIndex,
|
||||
dst: u32,
|
||||
dst: u64,
|
||||
src: u32,
|
||||
len: u32,
|
||||
) -> Result<(), Trap> {
|
||||
@@ -713,29 +718,22 @@ impl Instance {
|
||||
&mut self,
|
||||
memory_index: MemoryIndex,
|
||||
data: &[u8],
|
||||
dst: u32,
|
||||
dst: u64,
|
||||
src: u32,
|
||||
len: u32,
|
||||
) -> Result<(), Trap> {
|
||||
// https://webassembly.github.io/bulk-memory-operations/core/exec/instructions.html#exec-memory-init
|
||||
|
||||
let memory = self.get_memory(memory_index);
|
||||
let dst = self.validate_inbounds(memory.current_length, dst, len.into())?;
|
||||
let src = self.validate_inbounds(data.len(), src.into(), len.into())?;
|
||||
let len = len as usize;
|
||||
|
||||
if src
|
||||
.checked_add(len)
|
||||
.map_or(true, |n| usize::try_from(n).unwrap() > data.len())
|
||||
|| dst.checked_add(len).map_or(true, |m| {
|
||||
usize::try_from(m).unwrap() > memory.current_length
|
||||
})
|
||||
{
|
||||
return Err(Trap::wasm(ir::TrapCode::HeapOutOfBounds));
|
||||
}
|
||||
|
||||
let src_slice = &data[src as usize..(src + len) as usize];
|
||||
let src_slice = &data[src..(src + len)];
|
||||
|
||||
unsafe {
|
||||
let dst_start = memory.base.add(dst as usize);
|
||||
let dst_slice = slice::from_raw_parts_mut(dst_start, len as usize);
|
||||
let dst_start = memory.base.add(dst);
|
||||
let dst_slice = slice::from_raw_parts_mut(dst_start, len);
|
||||
dst_slice.copy_from_slice(src_slice);
|
||||
}
|
||||
|
||||
|
||||
@@ -279,14 +279,22 @@ fn initialize_tables(instance: &mut Instance, module: &Module) -> Result<(), Ins
|
||||
fn get_memory_init_start(
|
||||
init: &MemoryInitializer,
|
||||
instance: &Instance,
|
||||
) -> Result<u32, InstantiationError> {
|
||||
) -> Result<u64, InstantiationError> {
|
||||
match init.base {
|
||||
Some(base) => {
|
||||
let mem64 = instance.module.memory_plans[init.memory_index]
|
||||
.memory
|
||||
.memory64;
|
||||
let val = unsafe {
|
||||
if let Some(def_index) = instance.module.defined_global_index(base) {
|
||||
*instance.global(def_index).as_u32()
|
||||
let global = if let Some(def_index) = instance.module.defined_global_index(base) {
|
||||
instance.global(def_index)
|
||||
} else {
|
||||
*(*instance.imported_global(base).from).as_u32()
|
||||
&*instance.imported_global(base).from
|
||||
};
|
||||
if mem64 {
|
||||
*global.as_u64()
|
||||
} else {
|
||||
u64::from(*global.as_u32())
|
||||
}
|
||||
};
|
||||
|
||||
@@ -305,8 +313,9 @@ fn check_memory_init_bounds(
|
||||
for init in initializers {
|
||||
let memory = instance.get_memory(init.memory_index);
|
||||
let start = get_memory_init_start(init, instance)?;
|
||||
let start = usize::try_from(start).unwrap();
|
||||
let end = start.checked_add(init.data.len());
|
||||
let end = usize::try_from(start)
|
||||
.ok()
|
||||
.and_then(|start| start.checked_add(init.data.len()));
|
||||
|
||||
match end {
|
||||
Some(end) if end <= memory.current_length => {
|
||||
@@ -334,7 +343,7 @@ fn initialize_memories(
|
||||
&init.data,
|
||||
get_memory_init_start(init, instance)?,
|
||||
0,
|
||||
init.data.len() as u32,
|
||||
u32::try_from(init.data.len()).unwrap(),
|
||||
)
|
||||
.map_err(InstantiationError::Trap)?;
|
||||
}
|
||||
|
||||
@@ -89,7 +89,7 @@ pub struct ModuleLimits {
|
||||
pub table_elements: u32,
|
||||
|
||||
/// The maximum number of pages for any linear memory defined in a module.
|
||||
pub memory_pages: u32,
|
||||
pub memory_pages: u64,
|
||||
}
|
||||
|
||||
impl ModuleLimits {
|
||||
@@ -455,7 +455,7 @@ impl InstancePool {
|
||||
.expect("failed to reset guard pages");
|
||||
drop(&mut memory); // require mutable on all platforms, not just uffd
|
||||
|
||||
let size = (memory.size() as usize) * (WASM_PAGE_SIZE as usize);
|
||||
let size = memory.byte_size();
|
||||
drop(memory);
|
||||
decommit_memory_pages(base, size).expect("failed to decommit linear memory pages");
|
||||
}
|
||||
@@ -499,7 +499,7 @@ impl InstancePool {
|
||||
fn set_instance_memories(
|
||||
instance: &mut Instance,
|
||||
mut memories: impl Iterator<Item = *mut u8>,
|
||||
max_pages: u32,
|
||||
max_pages: u64,
|
||||
mut limiter: Option<&mut dyn ResourceLimiter>,
|
||||
) -> Result<(), InstantiationError> {
|
||||
let module = instance.module.as_ref();
|
||||
@@ -599,7 +599,7 @@ struct MemoryPool {
|
||||
initial_memory_offset: usize,
|
||||
max_memories: usize,
|
||||
max_instances: usize,
|
||||
max_wasm_pages: u32,
|
||||
max_wasm_pages: u64,
|
||||
}
|
||||
|
||||
impl MemoryPool {
|
||||
@@ -1118,6 +1118,7 @@ mod test {
|
||||
minimum: 0,
|
||||
maximum: None,
|
||||
shared: false,
|
||||
memory64: false,
|
||||
},
|
||||
pre_guard_size: 0,
|
||||
offset_guard_size: 0,
|
||||
@@ -1234,6 +1235,7 @@ mod test {
|
||||
minimum: 0,
|
||||
maximum: None,
|
||||
shared: false,
|
||||
memory64: false,
|
||||
},
|
||||
pre_guard_size: 0,
|
||||
offset_guard_size: 0,
|
||||
@@ -1308,6 +1310,7 @@ mod test {
|
||||
minimum: 6,
|
||||
maximum: None,
|
||||
shared: false,
|
||||
memory64: false,
|
||||
},
|
||||
pre_guard_size: 0,
|
||||
offset_guard_size: 0,
|
||||
@@ -1333,6 +1336,7 @@ mod test {
|
||||
minimum: 1,
|
||||
maximum: None,
|
||||
shared: false,
|
||||
memory64: false,
|
||||
},
|
||||
offset_guard_size: 0,
|
||||
pre_guard_size: 0,
|
||||
|
||||
@@ -213,7 +213,7 @@ impl FaultLocator {
|
||||
let instance = self.get_instance(index / self.max_memories);
|
||||
|
||||
let init_page_index = (*instance).memories.get(memory_index).and_then(|m| {
|
||||
if page_index < m.size() as usize {
|
||||
if (addr - memory_start) < m.byte_size() {
|
||||
Some(page_index)
|
||||
} else {
|
||||
None
|
||||
@@ -500,6 +500,7 @@ mod test {
|
||||
minimum: 2,
|
||||
maximum: Some(2),
|
||||
shared: false,
|
||||
memory64: false,
|
||||
},
|
||||
style: MemoryStyle::Static { bound: 1 },
|
||||
offset_guard_size: 0,
|
||||
|
||||
@@ -190,14 +190,15 @@ pub extern "C" fn wasmtime_f64_nearest(x: f64) -> f64 {
|
||||
/// Implementation of memory.grow for locally-defined 32-bit memories.
|
||||
pub unsafe extern "C" fn wasmtime_memory32_grow(
|
||||
vmctx: *mut VMContext,
|
||||
delta: u32,
|
||||
delta: u64,
|
||||
memory_index: u32,
|
||||
) -> u32 {
|
||||
) -> usize {
|
||||
let instance = (*vmctx).instance_mut();
|
||||
let memory_index = MemoryIndex::from_u32(memory_index);
|
||||
instance
|
||||
.memory_grow(memory_index, delta)
|
||||
.unwrap_or(u32::max_value())
|
||||
match instance.memory_grow(memory_index, delta) {
|
||||
Some(size_in_bytes) => size_in_bytes / (wasmtime_environ::WASM_PAGE_SIZE as usize),
|
||||
None => usize::max_value(),
|
||||
}
|
||||
}
|
||||
|
||||
/// Implementation of `table.grow`.
|
||||
@@ -317,10 +318,10 @@ pub unsafe extern "C" fn wasmtime_elem_drop(vmctx: *mut VMContext, elem_index: u
|
||||
pub unsafe extern "C" fn wasmtime_memory_copy(
|
||||
vmctx: *mut VMContext,
|
||||
dst_index: u32,
|
||||
dst: u32,
|
||||
dst: u64,
|
||||
src_index: u32,
|
||||
src: u32,
|
||||
len: u32,
|
||||
src: u64,
|
||||
len: u64,
|
||||
) {
|
||||
let result = {
|
||||
let src_index = MemoryIndex::from_u32(src_index);
|
||||
@@ -337,14 +338,14 @@ pub unsafe extern "C" fn wasmtime_memory_copy(
|
||||
pub unsafe extern "C" fn wasmtime_memory_fill(
|
||||
vmctx: *mut VMContext,
|
||||
memory_index: u32,
|
||||
dst: u32,
|
||||
dst: u64,
|
||||
val: u32,
|
||||
len: u32,
|
||||
len: u64,
|
||||
) {
|
||||
let result = {
|
||||
let memory_index = MemoryIndex::from_u32(memory_index);
|
||||
let instance = (*vmctx).instance_mut();
|
||||
instance.memory_fill(memory_index, dst, val, len)
|
||||
instance.memory_fill(memory_index, dst, val as u8, len)
|
||||
};
|
||||
if let Err(trap) = result {
|
||||
raise_lib_trap(trap);
|
||||
@@ -356,7 +357,7 @@ pub unsafe extern "C" fn wasmtime_memory_init(
|
||||
vmctx: *mut VMContext,
|
||||
memory_index: u32,
|
||||
data_index: u32,
|
||||
dst: u32,
|
||||
dst: u64,
|
||||
src: u32,
|
||||
len: u32,
|
||||
) {
|
||||
|
||||
@@ -5,15 +5,23 @@
|
||||
use crate::mmap::Mmap;
|
||||
use crate::vmcontext::VMMemoryDefinition;
|
||||
use crate::ResourceLimiter;
|
||||
use anyhow::{bail, Result};
|
||||
use anyhow::{bail, format_err, Result};
|
||||
use more_asserts::{assert_ge, assert_le};
|
||||
use std::convert::TryFrom;
|
||||
use wasmtime_environ::{MemoryPlan, MemoryStyle, WASM_MAX_PAGES, WASM_PAGE_SIZE};
|
||||
use wasmtime_environ::{MemoryPlan, MemoryStyle, WASM32_MAX_PAGES, WASM64_MAX_PAGES};
|
||||
|
||||
const WASM_PAGE_SIZE: usize = wasmtime_environ::WASM_PAGE_SIZE as usize;
|
||||
const WASM_PAGE_SIZE_U64: u64 = wasmtime_environ::WASM_PAGE_SIZE as u64;
|
||||
|
||||
/// A memory allocator
|
||||
pub trait RuntimeMemoryCreator: Send + Sync {
|
||||
/// Create new RuntimeLinearMemory
|
||||
fn new_memory(&self, plan: &MemoryPlan) -> Result<Box<dyn RuntimeLinearMemory>>;
|
||||
fn new_memory(
|
||||
&self,
|
||||
plan: &MemoryPlan,
|
||||
minimum: usize,
|
||||
maximum: Option<usize>,
|
||||
) -> Result<Box<dyn RuntimeLinearMemory>>;
|
||||
}
|
||||
|
||||
/// A default memory allocator used by Wasmtime
|
||||
@@ -21,27 +29,33 @@ pub struct DefaultMemoryCreator;
|
||||
|
||||
impl RuntimeMemoryCreator for DefaultMemoryCreator {
|
||||
/// Create new MmapMemory
|
||||
fn new_memory(&self, plan: &MemoryPlan) -> Result<Box<dyn RuntimeLinearMemory>> {
|
||||
Ok(Box::new(MmapMemory::new(plan)?) as _)
|
||||
fn new_memory(
|
||||
&self,
|
||||
plan: &MemoryPlan,
|
||||
minimum: usize,
|
||||
maximum: Option<usize>,
|
||||
) -> Result<Box<dyn RuntimeLinearMemory>> {
|
||||
Ok(Box::new(MmapMemory::new(plan, minimum, maximum)?))
|
||||
}
|
||||
}
|
||||
|
||||
/// A linear memory
|
||||
pub trait RuntimeLinearMemory: Send + Sync {
|
||||
/// Returns the number of allocated wasm pages.
|
||||
fn size(&self) -> u32;
|
||||
/// Returns the number of allocated bytes.
|
||||
fn byte_size(&self) -> usize;
|
||||
|
||||
/// Returns the maximum number of pages the memory can grow to.
|
||||
/// Returns the maximum number of bytes the memory can grow to.
|
||||
/// Returns `None` if the memory is unbounded.
|
||||
fn maximum(&self) -> Option<u32>;
|
||||
fn maximum_byte_size(&self) -> Option<usize>;
|
||||
|
||||
/// Grow memory by the specified amount of wasm pages.
|
||||
/// Grow memory to the specified amount of bytes.
|
||||
///
|
||||
/// Returns `None` if memory can't be grown by the specified amount
|
||||
/// of wasm pages.
|
||||
fn grow(&mut self, delta: u32) -> Option<u32>;
|
||||
/// of bytes.
|
||||
fn grow_to(&mut self, size: usize) -> Option<()>;
|
||||
|
||||
/// Return a `VMMemoryDefinition` for exposing the memory to compiled wasm code.
|
||||
/// Return a `VMMemoryDefinition` for exposing the memory to compiled wasm
|
||||
/// code.
|
||||
fn vmmemory(&self) -> VMMemoryDefinition;
|
||||
}
|
||||
|
||||
@@ -49,10 +63,19 @@ pub trait RuntimeLinearMemory: Send + Sync {
|
||||
#[derive(Debug)]
|
||||
pub struct MmapMemory {
|
||||
// The underlying allocation.
|
||||
mmap: WasmMmap,
|
||||
mmap: Mmap,
|
||||
|
||||
// The optional maximum size in wasm pages of this linear memory.
|
||||
maximum: Option<u32>,
|
||||
// The number of bytes that are accessible in `mmap` and available for
|
||||
// reading and writing.
|
||||
//
|
||||
// This region starts at `pre_guard_size` offset from the base of `mmap`.
|
||||
accessible: usize,
|
||||
|
||||
// The optional maximum accessible size, in bytes, for this linear memory.
|
||||
//
|
||||
// Note that this maximum does not factor in guard pages, so this isn't the
|
||||
// maximum size of the linear address space reservation for this memory.
|
||||
maximum: Option<usize>,
|
||||
|
||||
// Size in bytes of extra guard pages before the start and after the end to
|
||||
// optimize loads and stores with constant offsets.
|
||||
@@ -60,52 +83,36 @@ pub struct MmapMemory {
|
||||
offset_guard_size: usize,
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
struct WasmMmap {
|
||||
// Our OS allocation of mmap'd memory.
|
||||
alloc: Mmap,
|
||||
// The current logical size in wasm pages of this linear memory.
|
||||
size: u32,
|
||||
}
|
||||
|
||||
impl MmapMemory {
|
||||
/// Create a new linear memory instance with specified minimum and maximum number of wasm pages.
|
||||
pub fn new(plan: &MemoryPlan) -> Result<Self> {
|
||||
// `maximum` cannot be set to more than `65536` pages.
|
||||
assert_le!(plan.memory.minimum, WASM_MAX_PAGES);
|
||||
assert!(plan.memory.maximum.is_none() || plan.memory.maximum.unwrap() <= WASM_MAX_PAGES);
|
||||
pub fn new(plan: &MemoryPlan, minimum: usize, maximum: Option<usize>) -> Result<Self> {
|
||||
// It's a programmer error for these two configuration values to exceed
|
||||
// the host available address space, so panic if such a configuration is
|
||||
// found (mostly an issue for hypothetical 32-bit hosts).
|
||||
let offset_guard_bytes = usize::try_from(plan.offset_guard_size).unwrap();
|
||||
let pre_guard_bytes = usize::try_from(plan.pre_guard_size).unwrap();
|
||||
|
||||
let offset_guard_bytes = plan.offset_guard_size as usize;
|
||||
let pre_guard_bytes = plan.pre_guard_size as usize;
|
||||
|
||||
let minimum_pages = match plan.style {
|
||||
MemoryStyle::Dynamic => plan.memory.minimum,
|
||||
let alloc_bytes = match plan.style {
|
||||
MemoryStyle::Dynamic => minimum,
|
||||
MemoryStyle::Static { bound } => {
|
||||
assert_ge!(bound, plan.memory.minimum);
|
||||
bound
|
||||
usize::try_from(bound.checked_mul(WASM_PAGE_SIZE_U64).unwrap()).unwrap()
|
||||
}
|
||||
} as usize;
|
||||
let minimum_bytes = minimum_pages.checked_mul(WASM_PAGE_SIZE as usize).unwrap();
|
||||
let request_bytes = pre_guard_bytes
|
||||
.checked_add(minimum_bytes)
|
||||
.unwrap()
|
||||
.checked_add(offset_guard_bytes)
|
||||
.unwrap();
|
||||
let mapped_pages = plan.memory.minimum as usize;
|
||||
let accessible_bytes = mapped_pages * WASM_PAGE_SIZE as usize;
|
||||
|
||||
let mut mmap = WasmMmap {
|
||||
alloc: Mmap::accessible_reserved(0, request_bytes)?,
|
||||
size: plan.memory.minimum,
|
||||
};
|
||||
if accessible_bytes > 0 {
|
||||
mmap.alloc
|
||||
.make_accessible(pre_guard_bytes, accessible_bytes)?;
|
||||
let request_bytes = pre_guard_bytes
|
||||
.checked_add(alloc_bytes)
|
||||
.and_then(|i| i.checked_add(offset_guard_bytes))
|
||||
.ok_or_else(|| format_err!("cannot allocate {} with guard regions", minimum))?;
|
||||
|
||||
let mut mmap = Mmap::accessible_reserved(0, request_bytes)?;
|
||||
if minimum > 0 {
|
||||
mmap.make_accessible(pre_guard_bytes, minimum)?;
|
||||
}
|
||||
|
||||
Ok(Self {
|
||||
mmap: mmap.into(),
|
||||
maximum: plan.memory.maximum,
|
||||
mmap,
|
||||
accessible: minimum,
|
||||
maximum,
|
||||
pre_guard_size: pre_guard_bytes,
|
||||
offset_guard_size: offset_guard_bytes,
|
||||
})
|
||||
@@ -113,88 +120,52 @@ impl MmapMemory {
|
||||
}
|
||||
|
||||
impl RuntimeLinearMemory for MmapMemory {
|
||||
/// Returns the number of allocated wasm pages.
|
||||
fn size(&self) -> u32 {
|
||||
self.mmap.size
|
||||
fn byte_size(&self) -> usize {
|
||||
self.accessible
|
||||
}
|
||||
|
||||
/// Returns the maximum number of pages the memory can grow to.
|
||||
/// Returns `None` if the memory is unbounded.
|
||||
fn maximum(&self) -> Option<u32> {
|
||||
fn maximum_byte_size(&self) -> Option<usize> {
|
||||
self.maximum
|
||||
}
|
||||
|
||||
/// Grow memory by the specified amount of wasm pages.
|
||||
///
|
||||
/// Returns `None` if memory can't be grown by the specified amount
|
||||
/// of wasm pages.
|
||||
fn grow(&mut self, delta: u32) -> Option<u32> {
|
||||
// Optimization of memory.grow 0 calls.
|
||||
if delta == 0 {
|
||||
return Some(self.mmap.size);
|
||||
}
|
||||
|
||||
let new_pages = match self.mmap.size.checked_add(delta) {
|
||||
Some(new_pages) => new_pages,
|
||||
// Linear memory size overflow.
|
||||
None => return None,
|
||||
};
|
||||
let prev_pages = self.mmap.size;
|
||||
|
||||
if let Some(maximum) = self.maximum {
|
||||
if new_pages > maximum {
|
||||
// Linear memory size would exceed the declared maximum.
|
||||
return None;
|
||||
}
|
||||
}
|
||||
|
||||
// Wasm linear memories are never allowed to grow beyond what is
|
||||
// indexable. If the memory has no maximum, enforce the greatest
|
||||
// limit here.
|
||||
if new_pages > WASM_MAX_PAGES {
|
||||
// Linear memory size would exceed the index range.
|
||||
return None;
|
||||
}
|
||||
|
||||
let delta_bytes = usize::try_from(delta).unwrap() * WASM_PAGE_SIZE as usize;
|
||||
let prev_bytes = usize::try_from(prev_pages).unwrap() * WASM_PAGE_SIZE as usize;
|
||||
let new_bytes = usize::try_from(new_pages).unwrap() * WASM_PAGE_SIZE as usize;
|
||||
|
||||
if new_bytes > self.mmap.alloc.len() - self.offset_guard_size - self.pre_guard_size {
|
||||
fn grow_to(&mut self, new_size: usize) -> Option<()> {
|
||||
if new_size > self.mmap.len() - self.offset_guard_size - self.pre_guard_size {
|
||||
// If the new size is within the declared maximum, but needs more memory than we
|
||||
// have on hand, it's a dynamic heap and it can move.
|
||||
let request_bytes = self
|
||||
.pre_guard_size
|
||||
.checked_add(new_bytes)?
|
||||
.checked_add(new_size)?
|
||||
.checked_add(self.offset_guard_size)?;
|
||||
|
||||
let mut new_mmap = Mmap::accessible_reserved(0, request_bytes).ok()?;
|
||||
new_mmap
|
||||
.make_accessible(self.pre_guard_size, new_bytes)
|
||||
.make_accessible(self.pre_guard_size, new_size)
|
||||
.ok()?;
|
||||
|
||||
new_mmap.as_mut_slice()[self.pre_guard_size..][..prev_bytes]
|
||||
.copy_from_slice(&self.mmap.alloc.as_slice()[self.pre_guard_size..][..prev_bytes]);
|
||||
new_mmap.as_mut_slice()[self.pre_guard_size..][..self.accessible]
|
||||
.copy_from_slice(&self.mmap.as_slice()[self.pre_guard_size..][..self.accessible]);
|
||||
|
||||
self.mmap.alloc = new_mmap;
|
||||
} else if delta_bytes > 0 {
|
||||
self.mmap = new_mmap;
|
||||
} else {
|
||||
assert!(new_size > self.accessible);
|
||||
// Make the newly allocated pages accessible.
|
||||
self.mmap
|
||||
.alloc
|
||||
.make_accessible(self.pre_guard_size + prev_bytes, delta_bytes)
|
||||
.make_accessible(
|
||||
self.pre_guard_size + self.accessible,
|
||||
new_size - self.accessible,
|
||||
)
|
||||
.ok()?;
|
||||
}
|
||||
|
||||
self.mmap.size = new_pages;
|
||||
self.accessible = new_size;
|
||||
|
||||
Some(prev_pages)
|
||||
Some(())
|
||||
}
|
||||
|
||||
/// Return a `VMMemoryDefinition` for exposing the memory to compiled wasm code.
|
||||
fn vmmemory(&self) -> VMMemoryDefinition {
|
||||
VMMemoryDefinition {
|
||||
base: unsafe { self.mmap.alloc.as_mut_ptr().add(self.pre_guard_size) },
|
||||
current_length: self.mmap.size as usize * WASM_PAGE_SIZE as usize,
|
||||
base: unsafe { self.mmap.as_mut_ptr().add(self.pre_guard_size) },
|
||||
current_length: self.accessible,
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -208,8 +179,8 @@ pub enum Memory {
|
||||
/// slice is the maximum size of the memory that can be grown to.
|
||||
base: &'static mut [u8],
|
||||
|
||||
/// The current size, in wasm pages, of this memory.
|
||||
size: u32,
|
||||
/// The current size, in bytes, of this memory.
|
||||
size: usize,
|
||||
|
||||
/// A callback which makes portions of `base` accessible for when memory
|
||||
/// is grown. Otherwise it's expected that accesses to `base` will
|
||||
@@ -234,8 +205,8 @@ impl Memory {
|
||||
creator: &dyn RuntimeMemoryCreator,
|
||||
limiter: Option<&mut dyn ResourceLimiter>,
|
||||
) -> Result<Self> {
|
||||
Self::limit_new(plan, limiter)?;
|
||||
Ok(Memory::Dynamic(creator.new_memory(plan)?))
|
||||
let (minimum, maximum) = Self::limit_new(plan, limiter)?;
|
||||
Ok(Memory::Dynamic(creator.new_memory(plan, minimum, maximum)?))
|
||||
}
|
||||
|
||||
/// Create a new static (immovable) memory instance for the specified plan.
|
||||
@@ -245,48 +216,94 @@ impl Memory {
|
||||
make_accessible: fn(*mut u8, usize) -> Result<()>,
|
||||
limiter: Option<&mut dyn ResourceLimiter>,
|
||||
) -> Result<Self> {
|
||||
Self::limit_new(plan, limiter)?;
|
||||
let (minimum, maximum) = Self::limit_new(plan, limiter)?;
|
||||
|
||||
let base = match plan.memory.maximum {
|
||||
Some(max) if (max as usize) < base.len() / (WASM_PAGE_SIZE as usize) => {
|
||||
&mut base[..(max * WASM_PAGE_SIZE) as usize]
|
||||
}
|
||||
let base = match maximum {
|
||||
Some(max) if max < base.len() => &mut base[..max],
|
||||
_ => base,
|
||||
};
|
||||
|
||||
if plan.memory.minimum > 0 {
|
||||
make_accessible(
|
||||
base.as_mut_ptr(),
|
||||
plan.memory.minimum as usize * WASM_PAGE_SIZE as usize,
|
||||
)?;
|
||||
if minimum > 0 {
|
||||
make_accessible(base.as_mut_ptr(), minimum)?;
|
||||
}
|
||||
|
||||
Ok(Memory::Static {
|
||||
base,
|
||||
size: plan.memory.minimum,
|
||||
size: minimum,
|
||||
make_accessible,
|
||||
#[cfg(all(feature = "uffd", target_os = "linux"))]
|
||||
guard_page_faults: Vec::new(),
|
||||
})
|
||||
}
|
||||
|
||||
fn limit_new(plan: &MemoryPlan, limiter: Option<&mut dyn ResourceLimiter>) -> Result<()> {
|
||||
/// Calls the `limiter`, if specified, to optionally prevent a memory from
|
||||
/// being allocated.
|
||||
///
|
||||
/// Returns the minimum size and optional maximum size of the memory, in
|
||||
/// bytes.
|
||||
fn limit_new(
|
||||
plan: &MemoryPlan,
|
||||
limiter: Option<&mut dyn ResourceLimiter>,
|
||||
) -> Result<(usize, Option<usize>)> {
|
||||
// Sanity-check what should already be true from wasm module validation.
|
||||
let absolute_max = if plan.memory.memory64 {
|
||||
WASM64_MAX_PAGES
|
||||
} else {
|
||||
WASM32_MAX_PAGES
|
||||
};
|
||||
assert_le!(plan.memory.minimum, absolute_max);
|
||||
assert!(plan.memory.maximum.is_none() || plan.memory.maximum.unwrap() <= absolute_max);
|
||||
|
||||
// If the minimum memory size overflows the size of our own address
|
||||
// space, then we can't satisfy this request.
|
||||
let minimum = plan
|
||||
.memory
|
||||
.minimum
|
||||
.checked_mul(WASM_PAGE_SIZE_U64)
|
||||
.and_then(|m| usize::try_from(m).ok())
|
||||
.ok_or_else(|| {
|
||||
format_err!(
|
||||
"memory minimum size of {} pages exceeds memory limits",
|
||||
plan.memory.minimum
|
||||
)
|
||||
})?;
|
||||
|
||||
// The plan stores the maximum size in units of wasm pages, but we
|
||||
// use units of bytes. Do the mapping here, and if we overflow for some
|
||||
// reason then just assume that the listed maximum was our entire memory
|
||||
// minus one wasm page since we can't grow past that anyway (presumably
|
||||
// the kernel will reserve at least *something* for itself...)
|
||||
let mut maximum = plan.memory.maximum.map(|max| {
|
||||
usize::try_from(max)
|
||||
.ok()
|
||||
.and_then(|m| m.checked_mul(WASM_PAGE_SIZE))
|
||||
.unwrap_or(usize::MAX - WASM_PAGE_SIZE)
|
||||
});
|
||||
|
||||
// If this is a 32-bit memory and no maximum is otherwise listed then we
|
||||
// need to still specify a maximum size of 4GB. If the host platform is
|
||||
// 32-bit then there's no need to limit the maximum this way since no
|
||||
// allocation of 4GB can succeed, but for 64-bit platforms this is
|
||||
// required to limit memories to 4GB.
|
||||
if !plan.memory.memory64 && maximum.is_none() {
|
||||
maximum = usize::try_from(1u64 << 32).ok();
|
||||
}
|
||||
if let Some(limiter) = limiter {
|
||||
if !limiter.memory_growing(0, plan.memory.minimum, plan.memory.maximum) {
|
||||
if !limiter.memory_growing(0, minimum, maximum) {
|
||||
bail!(
|
||||
"memory minimum size of {} pages exceeds memory limits",
|
||||
plan.memory.minimum
|
||||
);
|
||||
}
|
||||
}
|
||||
Ok(())
|
||||
Ok((minimum, maximum))
|
||||
}
|
||||
|
||||
/// Returns the number of allocated wasm pages.
|
||||
pub fn size(&self) -> u32 {
|
||||
pub fn byte_size(&self) -> usize {
|
||||
match self {
|
||||
Memory::Static { size, .. } => *size,
|
||||
Memory::Dynamic(mem) => mem.size(),
|
||||
Memory::Dynamic(mem) => mem.byte_size(),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -296,10 +313,10 @@ impl Memory {
|
||||
///
|
||||
/// The runtime maximum may not be equal to the maximum from the linear memory's
|
||||
/// Wasm type when it is being constrained by an instance allocator.
|
||||
pub fn maximum(&self) -> Option<u32> {
|
||||
pub fn maximum_byte_size(&self) -> Option<usize> {
|
||||
match self {
|
||||
Memory::Static { base, .. } => Some((base.len() / (WASM_PAGE_SIZE as usize)) as u32),
|
||||
Memory::Dynamic(mem) => mem.maximum(),
|
||||
Memory::Static { base, .. } => Some(base.len()),
|
||||
Memory::Dynamic(mem) => mem.maximum_byte_size(),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -315,7 +332,8 @@ impl Memory {
|
||||
/// Grow memory by the specified amount of wasm pages.
|
||||
///
|
||||
/// Returns `None` if memory can't be grown by the specified amount
|
||||
/// of wasm pages.
|
||||
/// of wasm pages. Returns `Some` with the old size of memory, in bytes, on
|
||||
/// successful growth.
|
||||
///
|
||||
/// # Safety
|
||||
///
|
||||
@@ -327,19 +345,27 @@ impl Memory {
|
||||
/// this unsafety.
|
||||
pub unsafe fn grow(
|
||||
&mut self,
|
||||
delta: u32,
|
||||
delta_pages: u64,
|
||||
limiter: Option<&mut dyn ResourceLimiter>,
|
||||
) -> Option<u32> {
|
||||
let old_size = self.size();
|
||||
if delta == 0 {
|
||||
return Some(old_size);
|
||||
) -> Option<usize> {
|
||||
let old_byte_size = self.byte_size();
|
||||
if delta_pages == 0 {
|
||||
return Some(old_byte_size);
|
||||
}
|
||||
|
||||
let new_size = old_size.checked_add(delta)?;
|
||||
let maximum = self.maximum();
|
||||
let new_byte_size = usize::try_from(delta_pages)
|
||||
.ok()?
|
||||
.checked_mul(WASM_PAGE_SIZE)?
|
||||
.checked_add(old_byte_size)?;
|
||||
let maximum = self.maximum_byte_size();
|
||||
|
||||
if let Some(max) = maximum {
|
||||
if new_byte_size > max {
|
||||
return None;
|
||||
}
|
||||
}
|
||||
if let Some(limiter) = limiter {
|
||||
if !limiter.memory_growing(old_size, new_size, maximum) {
|
||||
if !limiter.memory_growing(old_byte_size, new_byte_size, maximum) {
|
||||
return None;
|
||||
}
|
||||
}
|
||||
@@ -359,21 +385,21 @@ impl Memory {
|
||||
make_accessible,
|
||||
..
|
||||
} => {
|
||||
if new_size > maximum.unwrap_or(WASM_MAX_PAGES) {
|
||||
if new_byte_size > base.len() {
|
||||
return None;
|
||||
}
|
||||
|
||||
let start = usize::try_from(old_size).unwrap() * WASM_PAGE_SIZE as usize;
|
||||
let len = usize::try_from(delta).unwrap() * WASM_PAGE_SIZE as usize;
|
||||
make_accessible(
|
||||
base.as_mut_ptr().add(old_byte_size),
|
||||
new_byte_size - old_byte_size,
|
||||
)
|
||||
.ok()?;
|
||||
|
||||
make_accessible(base.as_mut_ptr().add(start), len).ok()?;
|
||||
|
||||
*size = new_size;
|
||||
|
||||
Some(old_size)
|
||||
*size = new_byte_size;
|
||||
}
|
||||
Memory::Dynamic(mem) => mem.grow(delta),
|
||||
Memory::Dynamic(mem) => mem.grow_to(new_byte_size)?,
|
||||
}
|
||||
Some(old_byte_size)
|
||||
}
|
||||
|
||||
/// Return a `VMMemoryDefinition` for exposing the memory to compiled wasm code.
|
||||
@@ -381,7 +407,7 @@ impl Memory {
|
||||
match self {
|
||||
Memory::Static { base, size, .. } => VMMemoryDefinition {
|
||||
base: base.as_ptr() as *mut _,
|
||||
current_length: *size as usize * WASM_PAGE_SIZE as usize,
|
||||
current_length: *size,
|
||||
},
|
||||
Memory::Dynamic(mem) => mem.vmmemory(),
|
||||
}
|
||||
|
||||
@@ -73,7 +73,11 @@ impl Mmap {
|
||||
)
|
||||
};
|
||||
if ptr as isize == -1_isize {
|
||||
bail!("mmap failed: {}", io::Error::last_os_error());
|
||||
bail!(
|
||||
"mmap failed to allocate {:#x} bytes: {}",
|
||||
mapping_size,
|
||||
io::Error::last_os_error()
|
||||
);
|
||||
}
|
||||
|
||||
Self {
|
||||
@@ -93,7 +97,11 @@ impl Mmap {
|
||||
)
|
||||
};
|
||||
if ptr as isize == -1_isize {
|
||||
bail!("mmap failed: {}", io::Error::last_os_error());
|
||||
bail!(
|
||||
"mmap failed to allocate {:#x} bytes: {}",
|
||||
mapping_size,
|
||||
io::Error::last_os_error()
|
||||
);
|
||||
}
|
||||
|
||||
let mut result = Self {
|
||||
|
||||