Add a pooling allocator mode based on copy-on-write mappings of memfds.
As first suggested by Jan on Zulip [1], a cheap and effective way to obtain copy-on-write semantics of a "backing image" for a Wasm memory is to mmap a file with `MAP_PRIVATE`. The `memfd` mechanism provided by the Linux kernel allows us to create anonymous, in-memory-only files that we can use for this mapping, so we can construct the image contents on the fly and then effectively create a CoW overlay. Furthermore, and importantly, `madvise(MADV_DONTNEED, ...)` discards the CoW overlay, returning the mapping to its original state.

By itself this is almost enough for a very fast instantiation-termination loop of the same image over and over, without changing the address-space mapping at all (which is expensive). The only missing piece is how to implement heap *growth*. Here memfds can help us again: a `mmap()` mapping can be *larger than the file itself*, with accesses beyond the end of the file generating a `SIGBUS`, and the file can be cheaply resized with `ftruncate()`, even after a mapping exists. So if we create another anonymous file and map it where the extended parts of the heap would go, we can map this "heap extension" file once, at the maximum memory-slot size, and grow the memfd itself as `memory.grow` operations occur.

Together, the CoW technique and the heap-growth technique give us a fast path of only `madvise()` and `ftruncate()` when we re-instantiate the same module over and over, as long as we can reuse the same slot. This fast path avoids all whole-process address-space locks in the Linux kernel, which should make it highly scalable. It also avoids the cost of copying data on read, as the `uffd` heap backend does when servicing page faults; instead, the kernel's own optimized CoW logic (the same logic used by all file mmaps) is used.

[1] https://bytecodealliance.zulipchat.com/#narrow/stream/206238-general/topic/Copy.20on.20write.20based.20instance.20reuse/near/266657772
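For illustration, here is a minimal sketch (not the code in this commit) of the two kernel primitives described above, written against the `libc` crate on Linux. The sizes, fill bytes, memfd names, and variable names are assumptions chosen for the example only.

use std::ffi::CString;

const PAGE: usize = 4096;
const MAX_SLOT_SIZE: usize = 64 * 1024 * 1024; // hypothetical maximum memory-slot size

fn main() {
    unsafe {
        // Build the "backing image" in an anonymous, in-memory-only file.
        let name = CString::new("wasm-image").unwrap();
        let image_fd = libc::memfd_create(name.as_ptr(), libc::MFD_CLOEXEC);
        assert!(image_fd >= 0);
        let image = vec![0xABu8; 2 * PAGE]; // stand-in for initialized heap contents
        assert_eq!(libc::ftruncate(image_fd, image.len() as libc::off_t), 0);
        assert_eq!(
            libc::pwrite(image_fd, image.as_ptr().cast(), image.len(), 0),
            image.len() as isize
        );

        // Map it MAP_PRIVATE: writes made by the instance go to a private
        // copy-on-write overlay, never back into the image.
        let base = libc::mmap(
            std::ptr::null_mut(),
            image.len(),
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_PRIVATE,
            image_fd,
            0,
        );
        assert_ne!(base, libc::MAP_FAILED);
        let heap = base as *mut u8;
        *heap = 0xCD; // instance dirties a page; the kernel copies it lazily

        // On termination, discard the overlay: the mapping reverts to the
        // original image without any munmap/mmap of the slot.
        assert_eq!(libc::madvise(base, image.len(), libc::MADV_DONTNEED), 0);
        assert_eq!(*heap, 0xAB);

        // Heap growth: map a second, empty memfd over the "extension" region
        // of the slot. The mapping is larger than the file, so touching pages
        // past the file's end would SIGBUS; ftruncate makes more of the
        // region usable without changing the address-space mapping.
        let ext_name = CString::new("wasm-heap-ext").unwrap();
        let ext_fd = libc::memfd_create(ext_name.as_ptr(), libc::MFD_CLOEXEC);
        assert!(ext_fd >= 0);
        let ext = libc::mmap(
            std::ptr::null_mut(),
            MAX_SLOT_SIZE,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_PRIVATE,
            ext_fd,
            0,
        );
        assert_ne!(ext, libc::MAP_FAILED);
        // A memory.grow of one page becomes a single ftruncate.
        assert_eq!(libc::ftruncate(ext_fd, PAGE as libc::off_t), 0);
        *(ext as *mut u8) = 1; // now within the file, so no fault
    }
}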
crates/runtime/src/module_id.rs (new file, 28 lines)
@@ -0,0 +1,28 @@
//! Unique IDs for modules in the runtime.

use std::sync::atomic::{AtomicU64, Ordering};

/// A unique identifier (within an engine or similar) for a compiled
/// module.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct CompiledModuleId(u64);

/// An allocator for compiled module IDs.
pub struct CompiledModuleIdAllocator {
    next: AtomicU64,
}

impl CompiledModuleIdAllocator {
    /// Create a compiled-module ID allocator.
    pub fn new() -> Self {
        Self {
            next: AtomicU64::new(1),
        }
    }

    /// Allocate a new ID.
    pub fn alloc(&self) -> CompiledModuleId {
        let id = self.next.fetch_add(1, Ordering::Relaxed);
        CompiledModuleId(id)
    }
}
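For reference, a hypothetical caller of the new allocator (not part of this diff) would look like:

let ids = CompiledModuleIdAllocator::new();
let first = ids.alloc();
let second = ids.alloc();
assert_ne!(first, second); // each compiled module receives a distinct ID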