Pre-generate trampoline functions (#957)
* Refactor wasmtime_runtime::Export

Instead of an enumeration whose variants have data fields, use an enumeration where each variant holds a struct, and each struct carries the data fields. This allows us to store the structs in the `wasmtime` API and avoid lots of `panic!` calls and various extraneous matches.

* Pre-generate trampoline functions

The `wasmtime` crate supports calling arbitrary function signatures in wasm code, and to do this it generates "trampoline functions" which have a known ABI and internally convert to a particular signature's ABI and call it (a minimal sketch of this idea follows the commit message). These trampoline functions are currently generated on the fly and cached in the global `Store` structure. This, however, is suboptimal for a few reasons:

* Due to how code memory is managed, each trampoline resides in its own 64kb allocation of memory. This means that with N trampolines you're using N * 64kb of memory, which is quite a lot of overhead!

* Trampolines are never freed, even if the referencing module goes away. This is similar to #925.

* Trampolines are a source of shared state which prevents `Store` from being easily thread safe.

This commit refactors how trampolines are managed inside of the `wasmtime` crate and the jit/runtime internals. All trampolines are now allocated in the same pass of `CodeMemory` that the main module is allocated into. A trampoline is also generated per signature in a module, instead of per function. This cache of trampolines is stored directly inside of an `Instance`. Trampolines are keyed by `VMSharedSignatureIndex` so they can be looked up from the internals of the `ExportFunction` value. The `Func` API has been updated with various bits and pieces to ensure the right trampolines are registered in the right places.

Overall this should ensure that all necessary trampolines are generated up front rather than lazily. This allows us to remove the trampoline cache from the `Compiler` type and move one step closer to making `Compiler` thread safe for usage across multiple threads.

Note, as one small caveat, that the `Func::wrap*` family of functions doesn't need to generate a trampoline at runtime; the trampoline is generated at compile time and passed in.

Also, in addition to shuffling a lot of code around, this fixes one minor bug found in `code_memory.rs`, where `self.position` was loaded before allocation, but the allocation may push a new chunk, which would cause `self.position` to be zero instead.

* Pass the `SignatureRegistry` as an argument to where it's needed.

This avoids the need for storing it in an `Arc`.

* Ignore trampolines for functions with lots of arguments

Co-authored-by: Dan Gohman <sunfish@mozilla.com>
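To make the "known ABI" idea concrete, here is a minimal, self-contained sketch. The names below (`Trampoline`, `add`, `add_trampoline`, the `u64` slot buffer) are illustrative inventions, not wasmtime's actual types, and the real trampolines also thread `VMContext` pointers through the call, which is omitted here.

```rust
// A minimal sketch of the trampoline idea: the embedder always calls through
// one fixed signature, and a per-signature trampoline converts the packed
// argument buffer into the callee's real signature.

/// The single ABI every trampoline exposes to the embedder: a pointer to a
/// buffer of raw argument/return slots. (Illustrative type, not wasmtime's.)
type Trampoline = unsafe extern "C" fn(*mut u64);

/// Stand-in for a compiled wasm function with the concrete signature
/// `(i32, i32) -> i32`.
extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

/// Hand-written stand-in for the code a JIT would generate once per signature:
/// unpack the slots, call the real function, and write the result back.
unsafe extern "C" fn add_trampoline(values: *mut u64) {
    unsafe {
        let a = *values as i32;
        let b = *values.add(1) as i32;
        *values = add(a, b) as u64;
    }
}

fn main() {
    // The embedder only ever packs values into slots and calls through the
    // fixed `Trampoline` signature, regardless of the callee's real signature.
    let mut slots = [5u64, 7u64];
    let trampoline: Trampoline = add_trampoline;
    unsafe { trampoline(slots.as_mut_ptr()) };
    assert_eq!(slots[0], 12);
}
```

After this change, wasmtime generates one such trampoline per signature in a module (rather than per function) and stores the cache in the `Instance`, keyed by `VMSharedSignatureIndex`, as the commit message above describes.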
@@ -71,10 +71,9 @@ impl CodeMemory {
     ) -> Result<&mut [VMFunctionBody], String> {
         let size = Self::function_allocation_size(func);
 
-        let start = self.position as u32;
-        let (buf, table) = self.allocate(size)?;
+        let (buf, table, start) = self.allocate(size)?;
 
-        let (_, _, _, vmfunc) = Self::copy_function(func, start, buf, table);
+        let (_, _, _, vmfunc) = Self::copy_function(func, start as u32, buf, table);
 
         Ok(vmfunc)
     }
@@ -90,9 +89,9 @@ impl CodeMemory {
             .into_iter()
             .fold(0, |acc, func| acc + Self::function_allocation_size(func));
 
-        let mut start = self.position as u32;
-        let (mut buf, mut table) = self.allocate(total_len)?;
+        let (mut buf, mut table, start) = self.allocate(total_len)?;
         let mut result = Vec::with_capacity(compilation.len());
+        let mut start = start as u32;
 
         for func in compilation.into_iter() {
             let (next_start, next_buf, next_table, vmfunc) =
@@ -134,8 +133,14 @@ impl CodeMemory {
     /// that it can be written to and patched, though we make it readonly before
     /// actually executing from it.
     ///
+    /// A few values are returned:
+    ///
+    /// * A mutable slice which references the allocated memory
+    /// * A function table instance where unwind information is registered
+    /// * The offset within the current mmap that the slice starts at
+    ///
     /// TODO: Add an alignment flag.
-    fn allocate(&mut self, size: usize) -> Result<(&mut [u8], &mut FunctionTable), String> {
+    fn allocate(&mut self, size: usize) -> Result<(&mut [u8], &mut FunctionTable, usize), String> {
         if self.current.mmap.len() - self.position < size {
             self.push_current(cmp::max(0x10000, size))?;
         }
@@ -146,6 +151,7 @@ impl CodeMemory {
 
         Ok((
             &mut self.current.mmap.as_mut_slice()[old_position..self.position],
             &mut self.current.table,
+            old_position,
         ))
     }
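The last two hunks above are where the `self.position` bug mentioned in the commit message is fixed: `allocate` now returns the offset the allocation starts at, so callers no longer read `self.position` before the call. Below is a minimal model of why the old pattern was wrong when `allocate` rolls over to a new chunk; the types are deliberately simplified stand-ins (a `Vec<u8>` instead of an executable mmap, no `FunctionTable`), not the real `CodeMemory`.

```rust
// Minimal model of the `CodeMemory` allocation bug: reading `self.position`
// *before* `allocate` is wrong because `allocate` may call `push_current`,
// which switches to a fresh chunk and resets the position to 0.

struct Chunk {
    mmap: Vec<u8>, // stand-in for the executable Mmap
}

struct CodeMemoryModel {
    current: Chunk,
    position: usize,
}

impl CodeMemoryModel {
    /// Stand-in for `push_current`: start a fresh chunk, which resets
    /// `self.position` back to 0.
    fn push_current(&mut self, size: usize) {
        self.current = Chunk { mmap: vec![0; size.max(0x10000)] };
        self.position = 0;
    }

    /// Mirrors the patched `allocate`: it returns the start offset of the
    /// allocation within the chunk that was actually used.
    fn allocate(&mut self, size: usize) -> (&mut [u8], usize) {
        if self.current.mmap.len() - self.position < size {
            self.push_current(size);
        }
        let old_position = self.position;
        self.position += size;
        (&mut self.current.mmap[old_position..self.position], old_position)
    }
}

fn main() {
    let mut mem = CodeMemoryModel {
        current: Chunk { mmap: vec![0; 16] },
        position: 12,
    };

    // The old pattern read `self.position` before allocating:
    let stale_start = mem.position; // 12 -- stale if `allocate` rolls over
    let (_code, start) = mem.allocate(32); // rolls over to a new chunk

    assert_eq!(stale_start, 12);
    assert_eq!(start, 0); // the data really lives at offset 0 of the new chunk
}
```

Returning `old_position` from `allocate` itself, as the diff does, ties the reported offset to the chunk that was actually written, so the caller can never observe a stale position.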