* Implement defining host functions at the `Config` level.

  This commit introduces defining host functions on the `Config` rather than with a `Func` tied to a `Store`. The intention is to let a host define all of its functions once on a `Config` and then use a `Linker` (or `Store::get_host_func` directly) to reference them when instantiating a module. This should improve performance in use cases where a `Store` is short-lived and redefining the functions at every module instantiation is a noticeable cost.

  This commit adds `add_to_config` to the code generation for Wasmtime's `Wasi` type. The new method adds the WASI functions to the given config as host functions.

  This commit also adds context accessors to `Store`: `get` to retrieve a context of a particular type and `set` to set the context on the store. For safety, `set` cannot replace an existing context value of the same type. `Wasi::set_context` was added to set the WASI context for a `Store` when using `Wasi::add_to_config`.

* Add `Config::define_host_func_async`.

* Make the config "async" rather than the store.

  This commit moves the concept of "async-ness" from `Store` to `Config`. Note: this is a breaking API change for anyone who has already adopted the new async support in Wasmtime. `Config::new_async` is now used to create an "async" config, and any `Store` associated with that config is inherently "async". This is needed so that async shared host functions have a sanity check during execution (async host functions, like "async" `Func`s, must be called with the "async" variants).

* Update the async function tests to smoke-test async shared host functions.

  This commit updates the async function tests to also smoke-test the shared host functions, plus `Func::wrap0_async`. It also renames the "wrap async" methods on `Config` to `wrap$N_host_func_async` to better match the naming on `Func`.

* Move the instance allocator into `Engine`.

  This commit moves the instance allocator from `Config` into `Engine`. This makes certain settings on `Config` no longer order-dependent, which is how `Config` should ideally behave. It also removes the confusing concept of the "default" instance allocator, instead constructing the on-demand instance allocator when needed. Note that this alters the semantics of the instance allocator: each `Engine` now gets its own instance allocator rather than all engines created from a single configuration sharing one.

* Make `Engine::new` return `Result`.

  This is a breaking API change for anyone using `Engine::new`. Since creating the pooling instance allocator may fail (most likely because there is not enough memory for the provided limits), `Engine::new` now returns a `Result` instead of panicking.

* Remove `Config::new_async`.

  This commit removes `Config::new_async` in favor of treating async support like any other setting on `Config`. The setting is `Config::async_support`.

* Remove the order dependency when defining async host functions in `Config`.

  This commit removes the requirement that async support be enabled on the `Config` before defining async host functions. The check is now delayed until an `Engine` is created from the config.

* Update the WASI example to use the shared `Wasi::add_to_config`.

  This commit updates the WASI example to use `Wasi::add_to_config`. Since the example uses only a single store and instance, there is no semantic difference from the previous version, but the intention is to steer users toward defining WASI on the config and reserving `Wasi::add_to_linker` for cases where more explicit scoping of the WASI context is required.
#[cfg(not(target_os = "windows"))]
mod not_for_windows {
    use wasmtime::*;
    use wasmtime_environ::{WASM_MAX_PAGES, WASM_PAGE_SIZE};

    use libc::c_void;
    use libc::MAP_FAILED;
    use libc::{mmap, mprotect, munmap};
    use libc::{sysconf, _SC_PAGESIZE};
    use libc::{MAP_ANON, MAP_PRIVATE, PROT_NONE, PROT_READ, PROT_WRITE};

    use std::cell::RefCell;
    use std::io::Error;
    use std::ptr::null_mut;
    use std::sync::{Arc, Mutex};

    struct CustomMemory {
        mem: *mut c_void,
        size: usize,
        used_wasm_pages: RefCell<u32>,
        glob_page_counter: Arc<Mutex<u64>>,
    }

    impl CustomMemory {
        unsafe fn new(
            num_wasm_pages: u32,
            max_wasm_pages: u32,
            glob_counter: Arc<Mutex<u64>>,
        ) -> Self {
            let page_size = sysconf(_SC_PAGESIZE) as usize;
            let guard_size = page_size;
            let size = max_wasm_pages as usize * WASM_PAGE_SIZE as usize + guard_size;
            let used_size = num_wasm_pages as usize * WASM_PAGE_SIZE as usize;
            assert_eq!(size % page_size, 0); // we rely on WASM_PAGE_SIZE being multiple of host page size

            let mem = mmap(null_mut(), size, PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0);
            assert_ne!(mem, MAP_FAILED, "mmap failed: {}", Error::last_os_error());

            let r = mprotect(mem, used_size, PROT_READ | PROT_WRITE);
            assert_eq!(r, 0, "mprotect failed: {}", Error::last_os_error());
            *glob_counter.lock().unwrap() += num_wasm_pages as u64;

            Self {
                mem,
                size,
                used_wasm_pages: RefCell::new(num_wasm_pages),
                glob_page_counter: glob_counter,
            }
        }
    }

    impl Drop for CustomMemory {
        fn drop(&mut self) {
            let n = *self.used_wasm_pages.borrow() as u64;
            *self.glob_page_counter.lock().unwrap() -= n;
            let r = unsafe { munmap(self.mem, self.size) };
            assert_eq!(r, 0, "munmap failed: {}", Error::last_os_error());
        }
    }

    unsafe impl LinearMemory for CustomMemory {
        fn size(&self) -> u32 {
            *self.used_wasm_pages.borrow()
        }

        fn grow(&self, delta: u32) -> Option<u32> {
            let delta_size = (delta as usize).checked_mul(WASM_PAGE_SIZE as usize)?;

            let prev_pages = *self.used_wasm_pages.borrow();
            let prev_size = (prev_pages as usize).checked_mul(WASM_PAGE_SIZE as usize)?;

            let new_pages = prev_pages.checked_add(delta)?;
            let new_size = (new_pages as usize).checked_mul(WASM_PAGE_SIZE as usize)?;

            let guard_size = unsafe { sysconf(_SC_PAGESIZE) as usize };

            if new_size > self.size - guard_size {
                return None;
            }
            unsafe {
                let start = (self.mem as *mut u8).add(prev_size) as _;
                let r = mprotect(start, delta_size, PROT_READ | PROT_WRITE);
                assert_eq!(r, 0, "mprotect failed: {}", Error::last_os_error());
            }

            *self.glob_page_counter.lock().unwrap() += delta as u64;
            *self.used_wasm_pages.borrow_mut() = new_pages;
            Some(prev_pages)
        }

        fn as_ptr(&self) -> *mut u8 {
            self.mem as *mut u8
        }
    }

    struct CustomMemoryCreator {
        pub num_created_memories: Mutex<usize>,
        pub num_total_pages: Arc<Mutex<u64>>,
    }

    impl CustomMemoryCreator {
        pub fn new() -> Self {
            Self {
                num_created_memories: Mutex::new(0),
                num_total_pages: Arc::new(Mutex::new(0)),
            }
        }
    }

    unsafe impl MemoryCreator for CustomMemoryCreator {
        fn new_memory(
            &self,
            ty: MemoryType,
            reserved_size: Option<u64>,
            guard_size: u64,
        ) -> Result<Box<dyn LinearMemory>, String> {
            assert_eq!(guard_size, 0);
            assert!(reserved_size.is_none());
            let max = ty.limits().max().unwrap_or(WASM_MAX_PAGES);
            unsafe {
                let mem = Box::new(CustomMemory::new(
                    ty.limits().min(),
                    max,
                    self.num_total_pages.clone(),
                ));
                *self.num_created_memories.lock().unwrap() += 1;
                Ok(mem)
            }
        }
    }

    fn config() -> (Store, Arc<CustomMemoryCreator>) {
        let mem_creator = Arc::new(CustomMemoryCreator::new());
        let mut config = Config::new();
        config
            .with_host_memory(mem_creator.clone())
            .static_memory_maximum_size(0)
            .dynamic_memory_guard_size(0);
        (Store::new(&Engine::new(&config).unwrap()), mem_creator)
    }

    #[test]
    fn host_memory() -> anyhow::Result<()> {
        let (store, mem_creator) = config();
        let module = Module::new(
            store.engine(),
            r#"
            (module
                (memory (export "memory") 1)
            )
            "#,
        )?;
        Instance::new(&store, &module, &[])?;

        assert_eq!(*mem_creator.num_created_memories.lock().unwrap(), 1);

        Ok(())
    }

    #[test]
    fn host_memory_grow() -> anyhow::Result<()> {
        let (store, mem_creator) = config();
        let module = Module::new(
            store.engine(),
            r#"
            (module
                (func $f (drop (memory.grow (i32.const 1))))
                (memory (export "memory") 1 2)
                (start $f)
            )
            "#,
        )?;

        let instance1 = Instance::new(&store, &module, &[])?;
        let instance2 = Instance::new(&store, &module, &[])?;

        assert_eq!(*mem_creator.num_created_memories.lock().unwrap(), 2);

        assert_eq!(instance2.get_memory("memory").unwrap().size(), 2);

        // we take the lock outside the assert, so it won't get poisoned on assert failure
        let tot_pages = *mem_creator.num_total_pages.lock().unwrap();
        assert_eq!(tot_pages, 4);

        drop((instance1, instance2, store, module));
        let tot_pages = *mem_creator.num_total_pages.lock().unwrap();
        assert_eq!(tot_pages, 0);

        Ok(())
    }
}