* Implement the memory64 proposal in Wasmtime
This commit implements the WebAssembly [memory64 proposal][proposal] in
both Wasmtime and Cranelift. Cranelift ended up needing very little work
here since most of it was already prepared for 64-bit memories at one
point or another. Most of the work in Wasmtime is refactoring: changing
a number of `u32` values to wider or host-sized types.
A number of internal and public interfaces are changing as a result of
this commit, for example:
* Accessors on `wasmtime::Memory` that work with pages now all return
`u64` unconditionally rather than `u32`. This makes it possible to
accommodate 64-bit memories with this API, but we may also want to
consider `usize` here at some point since the host can't grow past
`usize`-limited pages anyway (see the sketch after this list).
* The `wasmtime::Limits` structure is removed in favor of
minimum/maximum methods on table/memory types.
* Many libcall intrinsics called by JIT code now unconditionally take
`u64` arguments instead of `u32`. Return values are `usize`, however,
since the return value, if successful, is always bounded by host
memory while arguments can come from any guest.
* The `heap_addr` clif instruction now takes a 64-bit offset argument
instead of a 32-bit one. It turns out that the legalization of
`heap_addr` already worked with 64-bit offsets, so this change was
fairly trivial to make.
* The runtime implementation of mmap-based linear memories has changed
to largely work in `usize` quantities in its API and in bytes instead
of pages. This simplifies various aspects and reflects that
mmap-memories are always bound by `usize` since that's what the host
is using to address things, and additionally most calculations care
about bytes rather than pages except for the very edge where we're
going to/from wasm.
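To illustrate the API direction, here's a rough sketch of what this looks like from the embedder's side; the exact method names and store receivers are assumptions about this era of the API rather than a definitive reference:

```rust
use anyhow::Result;
use wasmtime::{Engine, Memory, MemoryType, Store};

fn demo() -> Result<()> {
    let engine = Engine::default();
    let mut store = Store::new(&engine, ());

    // A 64-bit memory: minimum/maximum are expressed directly on the type now
    // rather than through the removed `wasmtime::Limits`.
    let ty = MemoryType::new64(1, Some(1 << 20));
    let memory = Memory::new(&mut store, ty)?;

    // Page-based accessors are `u64` regardless of whether the memory (or the
    // host) is 32-bit or 64-bit.
    let pages: u64 = memory.size(&store);
    let min: u64 = memory.ty(&store).minimum();
    assert!(pages >= min);
    Ok(())
}
```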
Overall I've tried to avoid `as` casts as much as possible, using
checked `try_from` and checked arithmetic with either error handling or
explicit `unwrap()` calls to surface bugs in the future. Most locations
have relatively obvious resolutions, each with its own implications on
various hosts, and I think they should all be roughly the right shape,
but time will tell. I mostly relied on the compiler complaining about
mismatched types to figure out where conversions were needed, and I
manually audited some of the more obvious locations.
I suspect we have a number of hidden locations that will panic on 32-bit
hosts if 64-bit modules try to run there, but otherwise I think we
should be generally ok (famous last words). In any case I naturally
wouldn't want to enable this by default until we've fuzzed it for some time.
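As a hedged example of the pattern (not code from this PR), converting a guest page count to a host byte size without any `as` casts looks roughly like this:

```rust
use std::convert::TryFrom;

// A guest-supplied page count is `u64`; anything the host actually allocates
// or addresses must fit in `usize`, so both steps are checked.
fn guest_pages_to_host_bytes(pages: u64) -> Option<usize> {
    const WASM_PAGE_SIZE: u64 = 64 * 1024;
    let bytes = pages.checked_mul(WASM_PAGE_SIZE)?; // can overflow for 64-bit memories
    usize::try_from(bytes).ok()                     // can fail on 32-bit hosts
}
```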
In terms of the actual underlying implementation, no one should expect
memory64 to be all that fast. Right now it's implemented with
"dynamic" heaps which have a few consequences:
* All memory accesses are bounds-checked. I'm not sure how aggressively
Cranelift tries to optimize out bounds checks, but I suspect not a ton
since we haven't stressed this much historically.
* Heaps are always precisely sized. This means that every call to
`memory.grow` will incur a `memcpy` of memory from the old heap to the
new. We probably want to at least look into `mremap` on Linux and
otherwise try to implement schemes where dynamic heaps have some
reserved pages to grow into to help amortize the cost of
`memory.grow`.
The memory64 spec test suite is now scheduled to run on CI, but as with
all the other spec test suites it's really not all that comprehensive.
I've tried adding more tests for basic things as I've had to implement
guards for them, but I wouldn't really consider the testing adequate
from just this PR itself. I did try to take care in one test to actually
allocate a 4GB+ heap and then avoid running that in the pooling
allocator or in emulation because otherwise that may fail or take
excessively long.
[proposal]: https://github.com/WebAssembly/memory64/blob/master/proposals/memory64/Overview.md
* Fix some tests
* More test fixes
* Fix wasmtime tests
* Fix doctests
* Revert to 32-bit immediate offsets in `heap_addr`
This commit updates the generation of addresses in wasm code to always
use 32-bit offsets for `heap_addr`, and if the calculated offset is
bigger than 32-bits we emit a manual add with an overflow check.
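For reference, a hedged sketch of the address math the generated code is meant to perform when the static offset doesn't fit in 32 bits (this expresses the semantics only, not the actual Cranelift translation code; the access size is elided):

```rust
// `index` is the guest-provided address and `offset` the static immediate from
// the wasm load/store. For offsets wider than 32 bits the addition itself is
// overflow-checked so a wrapped address can't slip under the bounds check.
fn effective_address(index: u64, offset: u64, current_length: u64) -> Result<u64, &'static str> {
    let addr = index
        .checked_add(offset)
        .ok_or("out of bounds memory access: address overflow")?;
    if addr >= current_length {
        return Err("out of bounds memory access");
    }
    Ok(addr)
}
```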
* Disable memory64 for spectest fuzzing
* Fix wrong offset being added to heap addr
* More comments!
* Clarify bytes/pages
* Change VMMemoryDefinition::current_length to `usize`
This commit changes the definition of
`VMMemoryDefinition::current_length` to `usize` from its previous
definition of `u32`. This is a pretty impactful change because it also
changes the cranelift semantics of "dynamic" heaps where the bound
global value specifier must now match the pointer type for the platform
rather than the index type for the heap.
The motivation for this change is that the `current_length` field (or
bound for the heap) is intended to reflect the current size of the heap.
This is bound by `usize` on the host platform rather than `u32` or
`u64`. The previous choice of `u32` couldn't represent a 4GB memory
because we couldn't put a number representing 4GB into the
`current_length` field. By using `usize`, which reflects the host's
memory allocation, this should better reflect the size of the heap and
allows Wasmtime to support a full 4GB heap for a wasm program (instead
of 4GB minus one page).
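Roughly, the runtime-side definition now looks like the following sketch (any other details elided):

```rust
#[repr(C)]
pub struct VMMemoryDefinition {
    /// Pointer to the start of linear memory; always a host pointer.
    pub base: *mut u8,
    /// Current byte size of the memory. `usize` (rather than `u32`/`u64`)
    /// because it can never exceed what the host can address, and it lets a
    /// 64-bit host represent a full 4GB 32-bit memory.
    pub current_length: usize,
}
```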
This commit also updates the legalization of the `heap_addr` clif
instruction to appropriately cast the address to the platform's pointer
type, handling bounds checks along the way. The practical impact for
today's targets is that a `uextend` is happening sooner than it happened
before, but otherwise there is no intended impact of this change. In the
future when 64-bit memories are supported there will likely need to be
fancier logic which handles offsets a bit differently (especially in the
case of a 64-bit memory on a 32-bit host).
The clif `filetest` changes should show the differences in codegen, and
the Wasmtime changes are largely removing casts here and there.
Closes #3022
* Add tests for memory.size at maximum memory size
* Add a dfg helper method
* Add a type parameter to `VMOffsets` for pointer size
This commit adds a type parameter to `VMOffsets` representing the
pointer size to improve computations in `wasmtime-runtime` which always
use a constant value of the host's pointer size. The type parameter is
`u8` for `wasmtime-cranelift`'s use case where cross-compilation may be
involved.
* Fix lightbeam
This commit removes some one-use methods to inline them at their use
site, and otherwise adds bounds checks to other functions like
`imported_function` where previously the `FuncIndex` may have been
accidentally out of bounds, which would cause memory unsafety. There's
no actual bug this was fixing, just trying to improve the safety of the
code internally a little.
The `current_length` member is defined as `usize` in Rust code,
but generated wasm code refers to it as if it were `u32`.
While this happens to mostly work on little-endian machines
(as long as the length is < 4GB), it will always fail on
big-endian machines.
Fixed by making `current_length` `u32` in Rust as well, and
ensuring the actual memory size is always less than 4GB.
* Add guard pages to the front of linear memories
This commit implements a safety feature for Wasmtime to place guard
pages before the allocation of all linear memories. Guard pages placed
after linear memories are typically present for performance (at least)
because it can help elide bounds checks. Guard pages before a linear
memory, however, are never strictly needed for performance or features.
The intention of a preceding guard page is to help insulate against bugs
in Cranelift or other code generators, such as CVE-2021-32629.
This commit adds a `Config::guard_before_linear_memory` configuration
option, defaulting to `true`, which indicates whether guard pages should
be present both before linear memories as well as afterwards. Guard
regions continue to be controlled by
`{static,dynamic}_memory_guard_size` methods.
The implementation here affects both on-demand allocated memories as
well as the pooling allocator for memories. For on-demand memories this
adjusts the size of the allocation as well as adjusts the calculations
for the base pointer of the wasm memory. For the pooling allocator this
will place a singular extra guard region at the very start of the
allocation for memories. Since linear memories in the pooling allocator
are contiguous, every memory already had a preceding guard region,
namely the trailing guard region of the previous memory; only the first
memory needed this extra guard.
I've attempted to write some tests to help test all this, but this is
all somewhat tricky to test because the settings are pretty far away
from the actual behavior. I think, though, that the tests added here
should help cover various use cases and help us have confidence in
tweaking the various `Config` settings beyond their defaults.
Note that this also contains a semantic change where
`InstanceLimits::memory_reservation_size` has been removed. Instead this
field is now inferred from the `static_memory_maximum_size` and guard
size settings. This should hopefully remove some duplication in these
settings, canonicalizing on the guard-size/static-size settings as the
way to control memory sizes and virtual reservations.
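As a hedged usage sketch of the knobs involved (the sizes below are purely illustrative, not the defaults):

```rust
use wasmtime::Config;

fn configure() -> Config {
    let mut config = Config::new();
    // Place a guard region before linear memories as well as after
    // (defaults to true per this change).
    config.guard_before_linear_memory(true);
    // Guard regions themselves are still sized through these settings.
    config.static_memory_guard_size(2 << 30);   // illustrative: 2 GiB trailing guard
    config.dynamic_memory_guard_size(64 << 10); // illustrative: 64 KiB guard
    // The static memory size now also determines what used to be
    // `InstanceLimits::memory_reservation_size`.
    config.static_memory_maximum_size(4 << 30); // illustrative: 4 GiB reservation
    config
}
```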
* Update config docs
* Fix a typo
* Fix benchmark
* Fix wasmtime-runtime tests
* Fix some more tests
* Try to fix uffd failing test
* Review items
* Tweak 32-bit defaults
Makes the pooling allocator a bit more reasonable by default on 32-bit
with these settings.
Implement Wasmtime's new API as designed by RFC 11. This is quite a large commit which has had lots of discussion externally, so for more information it's best to read the RFC thread and the PR thread.
This adds benchmarks around module instantiation using criterion.
Both the default (i.e. on-demand) and pooling allocators are tested
sequentially and in parallel using a thread pool.
Instantiation is tested with an empty module, a module with a single page
linear memory, a larger linear memory with a data initializer, and a "hello
world" Rust WASI program.
* Optimize `table.init` instruction and instantiation
This commit optimizes table initialization as part of instance
instantiation and also applies the same optimization to the `table.init`
instruction. One part of this commit removes some preexisting
duplication between instance instantiation and the `table.init`
instruction itself; after that, the actual implementation of
`table.init` is optimized to have fewer bounds checks in fewer places
and a much tighter loop for instantiation.
A big fallout from this change is that memory/table initializer offsets
are now stored as `u32` instead of `usize` to remove a few casts in a
few places. This required moving some overflow checks from parsing to
later in the code, since otherwise the wrong spec test errors are
emitted during testing. I've tried to trace everywhere these can
possibly overflow and I think I've caught them all.
In a local synthetic test with an empty module containing a single
80,000-element initializer, this improves total instantiation time by
4x (562us => 141us).
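The shape of the bounds-check change, as a hedged standalone sketch rather than the actual Wasmtime code: validate both ranges once up front, then copy with no per-element checks.

```rust
/// Copy `len` elements from `src[src_off..]` into `dst[dst_off..]`, returning
/// `Err(())` (a trap, in the real code) if either range is out of bounds.
fn table_init(
    dst: &mut [*mut u8],
    dst_off: u32,
    src: &[*mut u8],
    src_off: u32,
    len: u32,
) -> Result<(), ()> {
    // One bounds check per range; `get` returns `None` instead of panicking.
    let dst = dst
        .get_mut(dst_off as usize..)
        .and_then(|s| s.get_mut(..len as usize))
        .ok_or(())?;
    let src = src
        .get(src_off as usize..)
        .and_then(|s| s.get(..len as usize))
        .ok_or(())?;
    // Tight copy (effectively a memcpy) with no further checks.
    dst.copy_from_slice(src);
    Ok(())
}
```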
* Review comments
* Add resource limiting to the Wasmtime API.
This commit adds a `ResourceLimiter` trait to the Wasmtime API.
When used in conjunction with `Store::new_with_limiter`, this can be used to
monitor and prevent WebAssembly code from growing linear memories and tables.
This is particularly useful when hosts need to take into account host resource
usage to determine if WebAssembly code can consume more resources.
A simple `StaticResourceLimiter` is also included with these changes that will
simply limit the size of linear memories or tables for all instances created in
the store based on static values.
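A hedged usage sketch; only `StaticResourceLimiter` and `Store::new_with_limiter` are named by this change, and the constructor arguments shown are invented for illustration:

```rust
use wasmtime::{Engine, Store};

fn limited_store(engine: &Engine) -> Store {
    // Constructor arguments here are illustrative, not the real signature.
    let limiter = StaticResourceLimiter::new(
        /* max memory pages */ 16,
        /* max table elements */ 1_000,
    );
    // Instances created in this store can no longer grow memories or tables
    // past the limits above.
    Store::new_with_limiter(engine, limiter)
}
```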
* Code review feedback.
* Implemented `StoreLimits` and `StoreLimitsBuilder`.
* Moved `max_instances`, `max_memories`, `max_tables` out of `Config` and into
`StoreLimits`.
* Moved storage of the limiter in the runtime into `Memory` and `Table`.
* Made `InstanceAllocationRequest` use a reference to the limiter.
* Updated docs.
* Made `ResourceLimiterProxy` generic to remove a level of indirection.
* Fixed the limiter not being used for `wasmtime::Memory` and
`wasmtime::Table`.
* Code review feedback and bug fix.
* `Memory::new` now returns `Result<Self>` so that an error can be returned if
the initial requested memory exceeds any limits placed on the store.
* Changed an `Arc` to `Rc` as the `Arc` wasn't necessary.
* Removed `Store` from the `ResourceLimiter` callbacks. Custom resource limiter
implementations are free to capture any context they want, so no need to
unnecessarily store a weak reference to `Store` from the proxy type.
* Fixed a bug in the pooling instance allocator where an instance would be
leaked from the pool. Previously, this would only have happened if the OS was
unable to make the necessary linear memory available for the instance. With
these changes, however, the instance might not be created due to limits
placed on the store. We now properly deallocate the instance on error.
* Added more tests, including one that covers the fix mentioned above.
* Code review feedback.
* Add another memory to `test_pooling_allocator_initial_limits_exceeded` to
ensure a partially created instance is successfully deallocated.
* Update some doc comments for better documentation of `Store` and
`ResourceLimiter`.
This commit uses a two-phase lookup of stack map information from modules
rather than giving back raw pointers to stack maps.
First the runtime looks up information about a module from a pc value, which
returns an `Arc` it keeps a reference on while completing the stack map lookup.
Second it then queries the module information for the stack map from a pc
value, getting a reference to the stack map (which is now safe because of the
`Arc` held by the runtime).
This commit removes the stack map registry and instead uses the existing
information from the store's module registry to lookup stack maps.
A trait is now used to pass the lookup context to the runtime, implemented by
`Store` to do the lookup.
With this change, module registration in `Store` is now entirely limited to
inserting the module into the module registry.
This commit is intended to be a perf improvement for instantiation of
modules with lots of functions. Previously the `lookup_shared_signature`
callback was showing up quite high in profiles as part of instantiation.
As some background, this callback is used to translate from a module's
`SignatureIndex` to a `VMSharedSignatureIndex` which the instance
stores. This callback is called for two reasons, one is to translate all
of the module's own types into `VMSharedSignatureIndex` for the purposes
of `call_indirect` (the translation of that loads from this table to
compare indices). The second reason is that a `VMCallerCheckedAnyfunc`
is prepared for all functions and this embeds a `VMSharedSignatureIndex`
inside of it.
The slow part today is that the lookup callback was called once per
function and each lookup involved hashing a full `WasmFuncType`. Our
hash algorithm is still Rust's default SipHash, which is quite slow,
but regardless we shouldn't need to re-hash a signature each time we
see it.
The fix applied in this commit is to change this lookup callback to an
`enum` where one variant is that there's a table to lookup from. This
table is a `PrimaryMap` which means that lookup is quite fast. The only
thing we need to do is to prepare the table ahead of time. Currently
this happens on the instantiation path because in my measurements the
creation of the table is quite fast compared to the rest of
instantiation. If this becomes an issue, though, we can look into
creating the table as part of `SigRegistry::register_module` and caching
it somewhere (I'm not entirely sure where but I'm sure we can figure it
out).
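A hedged, self-contained illustration of the technique (stand-in types, not the actual Wasmtime code, which uses `PrimaryMap` and its own index types): hash each signature once per module to build a table, then per-function lookups are plain indexing.

```rust
use std::collections::HashMap;

// Stand-ins: a `String` plays the role of the expensive-to-hash `WasmFuncType`.
type ModuleSigIndex = usize;
type SharedSigIndex = u32;

struct SigRegistry {
    interned: HashMap<String, SharedSigIndex>,
}

impl SigRegistry {
    // Old behavior: effectively called once per *function*, hashing the full
    // type each time.
    fn register(&mut self, ty: &str) -> SharedSigIndex {
        let next = self.interned.len() as u32;
        *self.interned.entry(ty.to_string()).or_insert(next)
    }
}

// New behavior: build the table once per *module*, hashing each signature only
// once...
fn shared_signature_table(registry: &mut SigRegistry, module_sigs: &[String]) -> Vec<SharedSigIndex> {
    module_sigs.iter().map(|ty| registry.register(ty)).collect()
}

// ...so the per-function lookup during instantiation is plain indexing.
fn lookup(table: &[SharedSigIndex], idx: ModuleSigIndex) -> SharedSigIndex {
    table[idx]
}
```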
In general there's not a ton of efficiency around the `SigRegistry`
type. I'm hoping though that this fixes the next-lowest-hanging-fruit in
terms of performance without complicating the implementation too much. I
tried a few variants and this change seemed like the best balance
between simplicity and still a nice performance gain.
Locally I measured an improvement in instantiation time for a large-ish
module by reducing the time from ~3ms to ~2.6ms per instance.
This commit updates the implementation of `VMOffsets` to frontload all
checked arithmetic on construction of the `VMOffsets` which allows
eliding all checked arithmetic when accessing the fields of `VMOffsets`.
For testing and such this adds a new constructor as well from a new
`VMOffsetsFields` structure which is a clone of the old definition.
This should help speed up some profile hot spots I've been seeing where
with all the checked arithmetic on field sizes this was slowing down the
various accessors during instantiation (which uses `VMOffsets` to
initialize various fields of the `VMContext`).
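A hedged sketch of the pattern with made-up fields and sizes (not the real `VMContext` layout):

```rust
// All checked arithmetic runs once in the constructor; accessors on the hot
// path are plain field reads.
struct VMOffsetsFields {
    num_tables: u32,
    num_memories: u32,
}

struct VMOffsets {
    tables_begin: u32,
    memories_begin: u32,
    size: u32,
}

impl VMOffsets {
    fn new(f: VMOffsetsFields) -> Self {
        let tables_begin = 0u32;
        let memories_begin = tables_begin
            .checked_add(f.num_tables.checked_mul(8).unwrap())
            .unwrap();
        let size = memories_begin
            .checked_add(f.num_memories.checked_mul(16).unwrap())
            .unwrap(); // overflow is a bug, caught once here rather than on every access
        VMOffsets { tables_begin, memories_begin, size }
    }

    #[inline]
    fn vmctx_memories_begin(&self) -> u32 {
        self.memories_begin // no checked arithmetic per call
    }
}
```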
- some tests don't pass because of bad interactions with the system's
libunwind; ignore them for now.
- the page size on mac aarch64 is 16K, not 4K; tweak some tests which
were expecting 4K or multiples of 4K pages to use a multiple of host page size
instead.
- a cranelift-native test needed an update for the new calling convention.
This commit fixes a bug where the wrong destination range was used when copying
data from the module's memory initialization upon instance initialization.
This affects the on-demand allocator only when using the `uffd` feature on
Linux and when the Wasm page being initialized is not the last in the module's
initial pages.
Fixes #2784.
* Use stable Rust on CI to test the x64 backend
This commit leverages the newly-released 1.51.0 compiler to test the
new backend on Windows and Linux with a stable compiler instead of a
nightly compiler. This isolates the nightly build to just the nightly
documentation generation and fuzzing, both of which rely on nightly for
the best results right now.
* Use updated stable in book build job
* Run rustfmt for new stable
* Silence new warnings for wasi-nn
* Allow some dead code in the x64 backend
Looks like new rustc is better about emitting some dead-code warnings
* Update rust in peepmatic job
* Fix a test in the pooling allocator
* Remove `package.metadata.docs.rs` temporarily
Needs resolution of https://github.com/rust-lang/cargo/pull/9300 first
* Fix a warning in a wasi-nn example
* Add assert to `StackPool::deallocate` to ensure the fiber stack given to it
comes from the pool.
* Remove outdated comment about windows and stacks as the allocator now returns
fiber stacks.
* Remove conditional compilation around `stack_size` in the allocators as it
was just clutter.
This commit splits out a `FiberStack` from `Fiber`, allowing the instance
allocator trait to return `FiberStack` rather than raw stack pointers. This
keeps the stack creation mostly in `wasmtime_fiber`, but now the on-demand
instance allocator can make use of it.
The instance allocators no longer have to return a "not supported" error to
indicate that the store should allocate its own fiber stack.
This includes a bunch of cleanup in the instance allocator to scope stacks to
the new "async" feature in the runtime.
Closes #2708.
* Implement defining host functions at the Config level.
This commit introduces defining host functions at the `Config` rather than with
`Func` tied to a `Store`.
The intention here is to enable a host to define all of the functions once
with a `Config` and then use a `Linker` (or directly with
`Store::get_host_func`) to use the functions when instantiating a module.
This should help improve the performance of use cases where a `Store` is
short-lived and redefining the functions at every module instantiation is a
noticeable performance hit.
This commit adds `add_to_config` to the code generation for Wasmtime's `Wasi`
type.
The new method adds the WASI functions to the given config as host functions.
This commit adds context functions to `Store`: `get` to get a context of a
particular type and `set` to set the context on the store.
For safety, `set` cannot replace an existing context value of the same type.
`Wasi::set_context` was added to set the WASI context for a `Store` when using
`Wasi::add_to_config`.
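Putting these pieces together, a hedged sketch of the intended flow; the exact signatures, especially `Wasi::set_context`, are assumptions based on this description:

```rust
use wasmtime::{Config, Engine, Store};
use wasmtime_wasi::{Wasi, WasiCtx};

fn store_with_wasi(wasi_ctx: WasiCtx) -> Store {
    let mut config = Config::new();
    // WASI host functions are defined once, on the Config, so short-lived
    // Stores don't pay to redefine them on every instantiation.
    Wasi::add_to_config(&mut config);
    let engine = Engine::new(&config);
    let store = Store::new(&engine);
    // Each Store still gets its own WASI context; `set` refuses to replace an
    // existing context of the same type.
    Wasi::set_context(&store, wasi_ctx);
    store
}
```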
* Add `Config::define_host_func_async`.
* Make config "async" rather than store.
This commit moves the concept of "async-ness" to `Config` rather than `Store`.
Note: this is a breaking API change for anyone that's already adopted the new
async support in Wasmtime.
Now `Config::new_async` is used to create an "async" config and any `Store`
associated with that config is inherently "async".
This is needed for async shared host functions to have some sanity check during their
execution (async host functions, like "async" `Func`, need to be called with
the "async" variants).
* Update async function tests to smoke async shared host functions.
This commit updates the async function tests to also smoke the shared host
functions, plus `Func::wrap0_async`.
This also changes the "wrap async" method names on `Config` to
`wrap$N_host_func_async` to slightly better match what is on `Func`.
* Move the instance allocator into `Engine`.
This commit moves the instantiated instance allocator from `Config` into
`Engine`.
This makes certain settings in `Config` no longer order-dependent, which is how
`Config` should ideally be.
This also removes the confusing concept of the "default" instance allocator,
instead opting to construct the on-demand instance allocator when needed.
This does alter the semantics of the instance allocator as now each `Engine`
gets its own instance allocator rather than sharing a single one between all
engines created from a configuration.
* Make `Engine::new` return `Result`.
This is a breaking API change for anyone using `Engine::new`.
As creating the pooling instance allocator may fail (likely cause is not enough
memory for the provided limits), instead of panicking when creating an
`Engine`, `Engine::new` now returns a `Result`.
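For example (minimal sketch):

```rust
use wasmtime::{Config, Engine};

fn make_engine(config: &Config) -> anyhow::Result<Engine> {
    // Creating the pooling instance allocator can fail (e.g. not enough memory
    // for the configured limits), so this surfaces as an error, not a panic.
    let engine = Engine::new(config)?;
    Ok(engine)
}
```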
* Remove `Config::new_async`.
This commit removes `Config::new_async` in favor of treating "async support" as
any other setting on `Config`.
The setting is `Config::async_support`.
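In other words, a minimal sketch:

```rust
use wasmtime::Config;

fn async_config() -> Config {
    let mut config = Config::new();
    config.async_support(true); // just another Config setting, no `new_async` constructor
    config
}
```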
* Remove order dependency when defining async host functions in `Config`.
This commit removes the order dependency where async support must be enabled on
the `Config` prior to defining async host functions.
The check is now delayed to when an `Engine` is created from the config.
* Update WASI example to use shared `Wasi::add_to_config`.
This commit updates the WASI example to use `Wasi::add_to_config`.
As only a single store and instance are used in the example, it has no semantic
difference from the previous example, but the intention is to steer users
towards defining WASI on the config and only using `Wasi::add_to_linker` when
more explicit scoping of the WASI context is required.
This change makes the storage of `Table` more internally consistent.
Elements are stored as raw pointers for both static and dynamic table storage.
Explicitly storing elements as pointers removes assumptions being made by the
pooling allocator in terms of the size and default representation of the
elements.
However, care must be taken to properly clone externrefs for table operations.
* Add more overflow checks in table/memory initialization.
* Comment for `with_allocation_strategy` to explain ignored `Config` options.
* Fix Wasmtime `Table` to not panic for type mismatches in `fill`/`copy`.
* Add tests for that fix.
* More use of `anyhow`.
* Change `make_accessible` into `protect_linear_memory` to better demonstrate
what it is used for; this will make the uffd implementation make a little
more sense.
* Remove `create_memory_map` in favor of just creating the `Mmap` instances in
the pooling allocator. This also removes the need for `MAP_NORESERVE` in the
uffd implementation.
* Moar comments.
* Remove `BasePointerIterator` in favor of `impl Iterator`.
* The uffd implementation now only monitors linear memory pages and will only
receive faults on pages that could potentially be accessible and never on a
statically known guard page.
* Stop allocating memory or table pools if the maximum limit of the memory or
table is 0.
* Add `anyhow` dependency to `wasmtime-runtime`.
* Revert `get_data` back to `fn`.
* Remove `DataInitializer` and box the data in `Module` translation instead.
* Improve comments on `MemoryInitialization`.
* Remove `MemoryInitialization::OutOfBounds` in favor of proper bulk memory
semantics.
* Use segmented memory initialization except for when the uffd feature is
enabled on Linux.
* Validate modules with the allocator after translation.
* Updated various functions in the runtime to return `anyhow::Result`.
* Use a slice when copying pages instead of `ptr::copy_nonoverlapping`.
* Remove unnecessary casts in `OnDemandAllocator::deallocate`.
* Better document the `uffd` feature.
* Use WebAssembly page-sized pages in the paged initialization.
* Remove the stack pool from the uffd handler and simply protect just the guard
pages.
Last-minute code cleanup to fix some comments and rename `address_space_size`
to `memory_reservation_size` to better describe what the option is doing.
This was originally written to support sourcing the table and memory
definitions differently for the pooling allocator.
However, both allocators do the exact same thing, so the closure arguments are
no longer necessary.
Additionally, this cleans up the code a bit to pass in the allocation request
rather than having individual parameters.
This commit implements copying paged initialization data upon a fault of a
linear memory page.
If the initialization data is "paged", then the appropriate pages are copied
into the Wasm page (or zeroed if the page is not present in the
initialization data).
If the initialization data is not "paged", the Wasm page is zeroed so that
module instantiation can initialize the pages.
Since Windows uses the native fiber implementation, the stack tests should be
ignored there, as the implementation intentionally errors when handing out
stacks.
This commit implements the `uffd` feature which turns on support for utilizing
the `userfaultfd` system call on Linux for the pooling instance allocator.
By handling page faults in userland, we are able to detect guard page accesses
without having to constantly change memory page protections.
This should help reduce the number of syscalls as well as kernel lock
contentions when many threads are allocating and deallocating instances.
Additionally, the user fault handler can lazy initialize linear
memories of an instance (implementation to come).
This commit implements the pooling instance allocator.
The allocation strategy can be set with `Config::with_allocation_strategy`.
The pooling strategy uses the pooling instance allocator to preallocate a
contiguous region of memory for instantiating modules that adhere to various
limits.
The intention of the pooling instance allocator is to reserve as much of the
host address space needed for instantiating modules ahead of time and to reuse
committed memory pages wherever possible.
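A hedged usage sketch; only `Config::with_allocation_strategy` is named here, and the strategy/limits types, fields, and fallibility shown are assumptions:

```rust
use wasmtime::{
    Config, InstanceAllocationStrategy, InstanceLimits, ModuleLimits, PoolingAllocationStrategy,
};

fn pooling_config() -> anyhow::Result<Config> {
    let mut config = Config::new();
    config.with_allocation_strategy(InstanceAllocationStrategy::Pooling {
        strategy: PoolingAllocationStrategy::NextAvailable, // assumed variant name
        module_limits: ModuleLimits::default(),             // per-module limits (assumed type)
        instance_limits: InstanceLimits::default(),         // per-instance limits (assumed type)
    })?;
    Ok(config)
}
```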
This commit implements allocating fiber stacks in an instance allocator.
The on-demand instance allocator doesn't support custom stacks, so the
implementation will use the allocation from `wasmtime-fiber` for the fiber
stacks.
In the future, the pooling instance allocator will return custom stacks to use
on Linux and macOS.
On Windows, the native fiber implementation will always be used.
This commit changes `Instance` such that memories can be stored statically,
with just a base pointer, size, maximum, and a callback to make memory
accessible.
Previously the memories were being stored as boxed trait objects, which would
require the pooling allocator to do some unpleasant things to avoid
allocations.
With this change, the pooling allocator can simply define a memory for the
instance without using a trait object.
This commit changes how memories and tables are stored in `Instance`.
Previously, the memories and tables were stored as a `BoxedSlice`, which
requires an allocation to change the length of the memories and tables;
changing the length is exactly what a pooling instance allocator needs to do
when reusing an `Instance` structure for a new instantiation.
By storing it instead as `PrimaryMap`, the memories and tables can be resized
without any allocations (the capacity of these maps will always be the
configured limits of the pooling allocator).
This commit refactors `Table` in the runtime such that it can be created from a
pointer to existing table data.
The current `Vec` backing of the `Table` is considered to be "dynamic" storage.
This will be used for the upcoming pooling allocator where table memory is
managed externally to the instance.
The `table.copy` implementation was improved to use slice primitives for doing
the copying.
Fixes #983.
This commit introduces two new methods on `InstanceAllocator`:
* `validate_module` - this method is used to validate a module after
translation but before compilation. It will be used for the upcoming pooling
allocator to ensure a module being compiled adheres to the limits of the
allocator.
* `adjust_tunables` - this method is used to adjust the `Tunables` given to
the JIT compiler. The pooling allocator will use this to force all memories to
be static during compilation.
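A hedged sketch of the two hooks (signatures are assumptions, and the trait's allocation/deallocation methods are not shown):

```rust
use anyhow::Result;
use wasmtime_environ::{Module, Tunables};

pub trait InstanceAllocator {
    /// Reject a translated module before compilation if it exceeds this
    /// allocator's limits (used by the pooling allocator).
    fn validate_module(&self, module: &Module) -> Result<()>;

    /// Adjust compilation tunables; the pooling allocator forces all memories
    /// to be static here.
    fn adjust_tunables(&self, tunables: &mut Tunables);
}
```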
This commit refactors module instantiation in the runtime to allow for
different instance allocation strategy implementations.
It adds an `InstanceAllocator` trait with the current implementation put behind
the `OnDemandInstanceAllocator` struct.
The Wasmtime API has been updated to allow a `Config` to have an instance
allocation strategy set which will determine how instances get allocated.
This change is in preparation for an alternative *pooling* instance allocator
that can reserve all needed host process address space in advance.
This commit also makes changes to the `wasmtime_environ` crate to represent
compiled modules in a way that reduces copying at instantiation time.