wasmtime/crates/fuzzing/src/generators/pooling_config.rs
Alex Crichton d3a6181939 Add support for keeping pooling allocator pages resident (#5207)
When new wasm instances are created repeatedly in high-concurrency
environments, one of the largest bottlenecks is contention on
kernel-level locks protecting virtual memory. Usage in this environment
is expected to leverage the pooling instance allocator with the
`memory-init-cow` feature enabled, which means that the kernel-level VM
lock is acquired in operations such as:

1. Growing a heap with `mprotect` (write lock)
2. Faulting in memory during usage (read lock)
3. Resetting a heap's contents with `madvise` (read lock)
4. Shrinking a heap with `mprotect` when reusing a slot (write lock)

Rapid usage of these operations can severely degrade performance,
especially on otherwise heavily loaded systems, and the degradation gets
worse the more frequently the above operations occur. This commit is
aimed at case (2) above: reducing the number of page faults that must be
fulfilled by the kernel.
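
As a rough illustration (not Wasmtime's actual code, and assuming the
`libc` crate on Linux), the life of a pooled linear-memory slot today
involves roughly these syscalls, each of which takes the kernel's
per-process VM lock:

```rust
/// Hypothetical sketch of the syscalls a pooled linear-memory slot goes
/// through between instantiations; `base` points at the slot's reservation.
unsafe fn slot_lifecycle(base: *mut u8, heap_size: usize) {
    // (1) Grow the heap: make more of the reserved slot read/write (VM write lock).
    libc::mprotect(base.cast(), heap_size, libc::PROT_READ | libc::PROT_WRITE);

    // (2) Fault memory in during usage: the first touch of each fresh page
    // traps into the kernel (VM read lock per fault).
    *base = 1;

    // (3) Reset the heap's contents for reuse: madvise returns the pages to
    // the kernel so they read back as zero next time (VM read lock).
    libc::madvise(base.cast(), heap_size, libc::MADV_DONTNEED);

    // (4) Shrink the heap when the slot is handed to the next instance
    // (VM write lock).
    libc::mprotect(base.cast(), heap_size, libc::PROT_NONE);
}
```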

Currently these page faults happen for three reasons:

* When memory is first accessed after the heap is grown.
* When the initial linear memory image is accessed for the first time.
* When the initial zero'd heap contents, not part of the linear memory
  image, are accessed.

This PR is attempting to address the last of these cases, and to a
lesser extent the first case as well. Specifically, this PR provides the
ability to partially reset a pooled linear memory with `memset` rather
than `madvise`. This has the same effect of resetting contents to zero
but a different effect on paging: the pages are kept resident in memory
rather than returned to the kernel. This means that reuse of a linear
memory slot on a page that was previously `memset` will not trigger a
page fault, since everything remains paged into the process.
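
A minimal sketch of that partial-reset idea (again not the actual
Wasmtime implementation, and assuming the `libc` crate): zero the first
`keep_resident` bytes in place so those pages stay mapped, and decommit
only the remainder with `madvise`:

```rust
/// Hypothetical partial reset of one linear-memory slot. `committed` is how
/// many bytes of the slot are currently accessible.
unsafe fn reset_slot(base: *mut u8, committed: usize, keep_resident: usize) {
    let keep = keep_resident.min(committed);

    // Pages touched by this memset stay resident, so the next instance that
    // reuses the slot will not fault when it accesses them.
    std::ptr::write_bytes(base, 0, keep);

    // Everything past the keep-resident threshold is returned to the kernel;
    // the first access after reuse faults in a fresh zero page.
    if committed > keep {
        libc::madvise(base.add(keep).cast(), committed - keep, libc::MADV_DONTNEED);
    }
}
```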

The end result is that any access to linear memory which has been
touched by `memset` will no longer page fault on reuse. On recent
kernels (6.0+) this also means that pages which were zero'd by `memset`,
made inaccessible with `PROT_NONE`, and then made accessible again with
`PROT_READ | PROT_WRITE` will not page fault. This is common when a wasm
instance grows its heap slightly and uses that memory, and the heap is
then shrunk when the slot is reused for the next instance. Note that
this latter behavior requires a 6.0+ kernel.

This same optimization is furthermore applied both to async stacks in
the pooling allocator and to table elements. The defaults of Wasmtime
are not changing with this PR; instead, knobs are being exposed for
embedders to turn if they so desire. This is currently being
experimented with at Fastly, and I may come back and alter the defaults
of Wasmtime if it seems suitable after our measurements.
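
For embedders, the new knobs look roughly like the following (a sketch
against the builder methods exercised by the fuzz generator below; the
concrete values are arbitrary examples, and the wiring into
`wasmtime::Config` reflects the API around this commit and may differ in
other versions):

```rust
// Opt into keeping some pages resident instead of returning everything to
// the kernel on slot reuse; the sizes here are arbitrary examples.
let mut pooling = wasmtime::PoolingAllocationConfig::default();
pooling
    .linear_memory_keep_resident(64 << 10) // memset the first 64 KiB, madvise the rest
    .table_keep_resident(4 << 10)
    .async_stack_zeroing(true)
    .async_stack_keep_resident(64 << 10);

let mut config = wasmtime::Config::new();
config.allocation_strategy(wasmtime::InstanceAllocationStrategy::Pooling(pooling));
```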
2022-11-04 20:56:34 +00:00

91 lines
3.4 KiB
Rust

//! Generate instance limits for the pooling allocation strategy.

use arbitrary::{Arbitrary, Unstructured};

/// Configuration for `wasmtime::PoolingAllocationConfig`.
#[derive(Debug, Clone, Eq, PartialEq, Hash)]
#[allow(missing_docs)]
pub struct PoolingAllocationConfig {
    pub strategy: PoolingAllocationStrategy,
    pub instance_count: u32,
    pub instance_memories: u32,
    pub instance_tables: u32,
    pub instance_memory_pages: u64,
    pub instance_table_elements: u32,
    pub instance_size: usize,
    pub async_stack_zeroing: bool,
    pub async_stack_keep_resident: usize,
    pub linear_memory_keep_resident: usize,
    pub table_keep_resident: usize,
}

impl PoolingAllocationConfig {
    /// Convert the generated limits to Wasmtime limits.
    pub fn to_wasmtime(&self) -> wasmtime::PoolingAllocationConfig {
        let mut cfg = wasmtime::PoolingAllocationConfig::default();
        cfg.strategy(self.strategy.to_wasmtime())
            .instance_count(self.instance_count)
            .instance_memories(self.instance_memories)
            .instance_tables(self.instance_tables)
            .instance_memory_pages(self.instance_memory_pages)
            .instance_table_elements(self.instance_table_elements)
            .instance_size(self.instance_size)
            .async_stack_zeroing(self.async_stack_zeroing)
            .async_stack_keep_resident(self.async_stack_keep_resident)
            .linear_memory_keep_resident(self.linear_memory_keep_resident)
            .table_keep_resident(self.table_keep_resident);
        cfg
    }
}

impl<'a> Arbitrary<'a> for PoolingAllocationConfig {
    fn arbitrary(u: &mut Unstructured<'a>) -> arbitrary::Result<Self> {
        const MAX_COUNT: u32 = 100;
        const MAX_TABLES: u32 = 10;
        const MAX_MEMORIES: u32 = 10;
        const MAX_ELEMENTS: u32 = 1000;
        const MAX_MEMORY_PAGES: u64 = 160; // 10 MiB
        const MAX_SIZE: usize = 1 << 20; // 1 MiB

        Ok(Self {
            strategy: u.arbitrary()?,
            instance_tables: u.int_in_range(0..=MAX_TABLES)?,
            instance_memories: u.int_in_range(0..=MAX_MEMORIES)?,
            instance_table_elements: u.int_in_range(0..=MAX_ELEMENTS)?,
            instance_memory_pages: u.int_in_range(0..=MAX_MEMORY_PAGES)?,
            instance_count: u.int_in_range(1..=MAX_COUNT)?,
            instance_size: u.int_in_range(0..=MAX_SIZE)?,
            async_stack_zeroing: u.arbitrary()?,
            async_stack_keep_resident: u.int_in_range(0..=1 << 20)?,
            linear_memory_keep_resident: u.int_in_range(0..=1 << 20)?,
            table_keep_resident: u.int_in_range(0..=1 << 20)?,
        })
    }
}

/// Configuration for `wasmtime::PoolingAllocationStrategy`.
#[derive(Arbitrary, Clone, Debug, PartialEq, Eq, Hash)]
pub enum PoolingAllocationStrategy {
    /// Use next available instance slot.
    NextAvailable,
    /// Use random instance slot.
    Random,
    /// Use an affinity-based strategy.
    ReuseAffinity,
}

impl PoolingAllocationStrategy {
    fn to_wasmtime(&self) -> wasmtime::PoolingAllocationStrategy {
        match self {
            PoolingAllocationStrategy::NextAvailable => {
                wasmtime::PoolingAllocationStrategy::NextAvailable
            }
            PoolingAllocationStrategy::Random => wasmtime::PoolingAllocationStrategy::Random,
            PoolingAllocationStrategy::ReuseAffinity => {
                wasmtime::PoolingAllocationStrategy::ReuseAffinity
            }
        }
    }
}
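
In a fuzz target this generator would be used along these lines (a
hypothetical sketch; `pooling_from_fuzz_input` is not a real helper in
this crate):

```rust
use arbitrary::{Arbitrary, Unstructured};

/// Pull a randomized pooling configuration out of raw fuzz input and
/// convert it to the real Wasmtime configuration.
fn pooling_from_fuzz_input(data: &[u8]) -> arbitrary::Result<wasmtime::PoolingAllocationConfig> {
    let mut u = Unstructured::new(data);
    let generated = PoolingAllocationConfig::arbitrary(&mut u)?;
    Ok(generated.to_wasmtime())
}
```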