Add a pooling allocator mode based on copy-on-write mappings of memfds.
As first suggested by Jan on the Zulip thread [1], a cheap and effective way to obtain copy-on-write semantics for a "backing image" of a Wasm memory is to mmap a file with `MAP_PRIVATE`. The `memfd` mechanism provided by the Linux kernel allows us to create anonymous, in-memory-only files, so we can construct the image contents on the fly and then create a CoW overlay of them. Furthermore, and importantly, `madvise(MADV_DONTNEED, ...)` discards the CoW overlay, returning the mapping to its original state.

By itself this is almost enough for a very fast instantiate-terminate loop of the same image over and over, without changing the address-space mapping at all (which is expensive). The only missing piece is heap *growth*. Here memfds help us again: if we create a second anonymous file and map it where the extended parts of the heap would go, we can exploit two facts: an `mmap()` mapping can be *larger than the file itself*, with accesses beyond the end generating a `SIGBUS`, and the file can be cheaply resized with `ftruncate()`, even after a mapping exists. So we map the "heap extension" file once, at the maximum memory-slot size, and grow the memfd itself as `memory.grow` operations occur.

Together, the CoW technique and the heap-growth technique give us a fast path that needs only `madvise()` and `ftruncate()` when we re-instantiate the same module over and over, as long as we can reuse the same slot. This fast path avoids all whole-process address-space locks in the Linux kernel, which should make it highly scalable. It also avoids the data-copying cost the `uffd` heap backend incurs when servicing page faults; the kernel's own optimized CoW logic (the same logic used by all file mmaps) is used instead.

[1] https://bytecodealliance.zulipchat.com/#narrow/stream/206238-general/topic/Copy.20on.20write.20based.20instance.20reuse/near/266657772
src/lib.rs (25 additions)
@@ -100,6 +100,8 @@ use std::collections::HashMap;
 use std::path::PathBuf;
 use structopt::StructOpt;
 use wasmtime::{Config, ProfilingStrategy};
+#[cfg(feature = "pooling-allocator")]
+use wasmtime::{InstanceLimits, ModuleLimits, PoolingAllocationStrategy};

 fn pick_profiling_strategy(jitdump: bool, vtune: bool) -> Result<ProfilingStrategy> {
     Ok(match (jitdump, vtune) {
@@ -250,6 +252,12 @@ struct CommonOptions {
     /// the data segments specified in the original wasm module.
     #[structopt(long)]
     paged_memory_initialization: bool,
+
+    /// Enables the pooling allocator, in place of the on-demand
+    /// allocator.
+    #[cfg(feature = "pooling-allocator")]
+    #[structopt(long)]
+    pooling_allocator: bool,
 }

 impl CommonOptions {
@@ -325,6 +333,23 @@ impl CommonOptions {
         config.generate_address_map(!self.disable_address_map);
         config.paged_memory_initialization(self.paged_memory_initialization);
+
+        #[cfg(feature = "pooling-allocator")]
+        {
+            if self.pooling_allocator {
+                let mut module_limits = ModuleLimits::default();
+                module_limits.functions = 50000;
+                module_limits.types = 10000;
+                module_limits.globals = 1000;
+                module_limits.memory_pages = 2048;
+                let instance_limits = InstanceLimits::default();
+                config.allocation_strategy(wasmtime::InstanceAllocationStrategy::Pooling {
+                    strategy: PoolingAllocationStrategy::NextAvailable,
+                    module_limits,
+                    instance_limits,
+                });
+            }
+        }

         Ok(config)
     }