This commit enables Cranelift's AArch64 backend to generate code for
instruction set extensions (previously only the base Armv8-A
architecture was supported), and makes it possible to detect the
extensions supported by the host when JIT compiling. The new
functionality is applied to the IR instruction `AtomicCas`.
Copyright (c) 2021, Arm Limited.
Our previous implementation of unwind infrastructure was somewhat
complex and brittle: it parsed generated instructions in order to
reverse-engineer unwind info from prologues. It also relied on some
fragile linkage to communicate instruction-layout information that VCode
was not designed to provide.
A much simpler, more reliable, and easier-to-reason-about approach is to
embed unwind directives as pseudo-instructions in the prologue as we
generate it. That way, we can say what we mean and just emit it
directly.
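As a rough sketch of what this can look like (the names below are
illustrative, not necessarily Cranelift's exact definitions):

    // Illustrative sketch: unwind directives carried as pseudo-instructions.
    // They emit no machine code, but record what the prologue just did so the
    // platform-specific unwind info (Windows or SystemV) can be produced later.
    enum Inst {
        // ... the real machine instructions of the backend ...
        Unwind { inst: UnwindInst },
    }

    enum UnwindInst {
        /// The return address and frame pointer have been pushed.
        PushFrameRegs { offset_upward_to_caller_sp: u32 },
        /// A clobbered callee-saved register was saved at this offset within
        /// the clobber area of the frame.
        SaveReg { clobber_offset: u32, reg: u16 },
    }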
The usual reasoning that leads to the reverse-engineering approach is
that metadata is hard to keep in sync across optimization passes; but
here, (i) prologues are generated at the very end of the pipeline, and
(ii) if we ever do a post-prologue-gen optimization, we can treat unwind
directives as black boxes with unknown side-effects, just as we do for
some other pseudo-instructions today.
It turns out that it was easier to just build this for both x64 and
aarch64 (since they share a factored-out ABI implementation), and wire
up the platform-specific unwind-info generation for Windows and SystemV.
Now we have simpler unwind on all platforms and we can delete the old
unwind infra as soon as we remove the old backend.
There were a few consequences to supporting Fastcall unwind in
particular that led to a refactor of the common ABI. Windows only
supports naming clobbered-register save locations within 240 bytes of
the frame-pointer register, whatever one chooses that to be (RSP or
RBP). We had previously saved clobbers below the fixed frame (and below
nominal-SP). The 240-byte range has to include the old RBP too, so we're
forced to place clobbers at the top of the frame, just below saved
RBP/RIP. This is fine; we always keep a frame pointer anyway because we
use it to refer to stack args. It does mean that offsets of fixed-frame
slots (spillslots, stackslots) from RBP are no longer known before we do
regalloc, so if we ever want to index these off of RBP rather than
nominal-SP because we add support for `alloca` (dynamic frame growth),
then we'll need a "nominal-BP" mode that is resolved after regalloc and
clobber-save code is generated. I added a comment to this effect in
`abi_impl.rs`.
The above refactor touched both x64 and aarch64 because of shared code.
This had a further effect in that the old aarch64 prologue generation
subtracted from `sp` once to allocate space, then used stores to `[sp,
offset]` to save clobbers. Unfortunately the offset only has 7-bit
range, so if there are enough clobbered registers (and there can be --
aarch64 has 384 bytes of registers; at least one unit test hits this)
the stores/loads will be out-of-range. I really don't want to synthesize
large-offset sequences here; better to go back to the simpler
pre-index/post-index `stp r1, r2, [sp, #-16]` form that works just like
a "push". It's likely not much worse microarchitecturally (dependence
chain on SP, but oh well) and it actually saves an instruction if
there's no other frame to allocate. As a further advantage, it's much
simpler to understand; simpler is usually better.
This PR adds the new backend on Windows to CI as well.
* Redo the statically typed `Func` API
This commit reimplements the `Func` API with respect to statically typed
dispatch. Previously `Func` had a `getN` and `getN_async` family of
methods which were implemented for 0 to 16 parameters. The return value
of these functions was an `impl Fn(..)` closure with the appropriate
parameters and return values.
There are a number of downsides with this approach that have become
apparent over time:
* The addition of `*_async` doubled the API surface area (which is quite
large here due to one-method-per-number-of-parameters).
* The [documentation of `Func`][old-docs] is quite verbose and feels
"polluted" with all these getters, making it harder to understand the
other methods that can be used to interact with a `Func`.
* These methods unconditionally pay the cost of returning an owned `impl
Fn` with a `'static` lifetime. While cheap, this still effectively pays
the cost of cloning the `Store` and moving data into the closed-over
environment.
* Storage of the return value into a struct, for example, always
requires `Box`-ing the returned closure since it otherwise cannot be
named.
* Recently I had the desire to implement an "unchecked" path for
invoking wasm where you unsafely assert the type signature of a wasm
function. Doing this with today's scheme would require doubling
(again) the API surface area for both async and synchronous calls,
further polluting the documentation.
The main benefit of the previous scheme is that by returning an `impl Fn`
it was quite easy and ergonomic to actually invoke the function. In
practice, though, examples would often have something akin to
`.get0::<()>()?()?`, which is a lot of things to interpret all at once.
Note that `get0` means "0 parameters" yet a type parameter is passed.
There's also a double function invocation which looks like a lot of
characters all lined up in a row.
Overall, I think that the previous design is starting to show too many
cracks and deserves a rewrite. This commit is that rewrite.
The new design in this commit is to delete the `getN{,_async}` family of
functions and instead have a new API:
    impl Func {
        fn typed<P, R>(&self) -> Result<&Typed<P, R>>;
    }

    impl Typed<P, R> {
        fn call(&self, params: P) -> Result<R, Trap>;
        async fn call_async(&self, params: P) -> Result<R, Trap>;
    }
This should entirely replace the current scheme, albeit with a slight
loss of ergonomics in some use cases. The idea behind the API is that
the existence of `Typed<P, R>` is a "proof" that the underlying function
takes `P` and returns `R`. The `Func::typed` method performs a runtime
type-check to ensure that the types all match up, and if successful you
get a `Typed` value. Otherwise an error is returned.
Once you have a `Typed` then, like `Func`, you can either `call` or
`call_async`. The difference with a `Typed`, however, is that the
params/results are statically known and hence these calls can be much
more efficient.
This is a much smaller API surface area than before and should greatly
simplify the `Func` documentation. There's still a problem where
`Func::wrapN_async` produces a lot of functions to document, but that's
now the sole offender. It's a nice benefit that the statically typed
async versions are now expressed with an `async` function rather than a
function returning a future, which makes them both more efficient and
easier to understand.
The types `P` and `R` are intended to be either bare types (e.g. `i32`)
or tuples of any length (including 0). At this time `R` is only allowed
to be `()` or a bare `i32`-style type because multi-value is not
supported with a native ABI (yet). The `P`, however, can be a tuple of
any number of parameters. This is also where some ergonomics are lost,
because instead of `f(1, 2)` you now have to write `f.call((1, 2))`
(note the double parens). Similarly `f()` becomes `f.call(())`.
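For example, a typed call might look like this (a sketch against the API
described above; the wasm module and names are made up for
illustration):

    use wasmtime::*;

    fn demo() -> anyhow::Result<()> {
        let engine = Engine::default();
        let store = Store::new(&engine);
        let module = Module::new(
            &engine,
            r#"(module
                 (func (export "add") (param i32 i32) (result i32)
                   local.get 0
                   local.get 1
                   i32.add))"#,
        )?;
        let instance = Instance::new(&store, &module, &[])?;

        // One runtime type-check up front...
        let add_func = instance.get_func("add").unwrap();
        let add = add_func.typed::<(i32, i32), i32>()?;

        // ...then statically typed calls afterwards; note the tuple of
        // parameters.
        let sum = add.call((1, 2))?;
        assert_eq!(sum, 3);
        Ok(())
    }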
Overall I feel that this is a better tradeoff than before. While not
universally better due to the loss in ergonomics I feel that this design
is much more flexible in terms of what you can do with the return value
and also understanding the API surface area (just less to take in).
[old-docs]: https://docs.rs/wasmtime/0.24.0/wasmtime/struct.Func.html#method.get0
* Rename Typed to TypedFunc
* Implement multi-value returns through `Func::typed`
* Fix examples in docs
* Fix some more errors
* More test fixes
* Rebasing and adding `get_typed_func`
* Updating tests
* Fix typo
* More doc tweaks
* Tweak visibility on `Func::invoke`
* Fix tests again
This commit fixes a few issues around managing the thread-local state of
a wasmtime thread. We intentionally only have a singular TLS variable in
the whole world, and the problem is that when stack-switching off an
async thread we were not restoring the previous TLS state. This is
necessary in two cases:
* Futures aren't guaranteed to be polled/completed in a stack-like
fashion. If a poll sees that a future isn't ready then we may resume
execution in a previous wasm context that ends up needing the TLS
information.
* Futures can also cross threads (when the whole store crosses threads)
and we need to save/restore TLS state from the thread we're coming
from and the thread that we're going to.
The stack switching issue necessitates some more glue around suspension
and resumption of a stack to ensure we save/restore the TLS state on
both sides. The thread issue, however, also necessitates that we use
`#[inline(never)]` on TLS access functions and never keep TLS borrows
live across a call that could run arbitrary code (as was the case for
the `tls::set` function).
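A simplified sketch of the save/restore idea around a suspension point
(these names are illustrative, not the actual wasmtime internals):

    use std::cell::Cell;
    use std::ptr;

    // Illustrative only: the single process-wide TLS slot described above.
    thread_local!(static ACTIVE_STATE: Cell<*const u8> = Cell::new(ptr::null()));

    // #[inline(never)] and no TLS borrow held across other code, per the fix.
    #[inline(never)]
    fn swap_tls(new: *const u8) -> *const u8 {
        ACTIVE_STATE.with(|slot| slot.replace(new))
    }

    // Entering wasm installs our state and remembers the previous value;
    // suspending restores the previous value; resuming (possibly on another
    // thread) installs our state again. The same swap works on both sides,
    // which keeps the slot consistent even when futures are polled out of
    // stack order or moved across threads.
    fn with_state_installed(ours: *const u8, f: impl FnOnce()) {
        let prev = swap_tls(ours);
        f(); // run wasm until completion or suspension
        swap_tls(prev);
    }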
* Implement defining host functions at the Config level.
This commit introduces defining host functions on the `Config` rather
than as a `Func` tied to a `Store`.
The intention here is to enable a host to define all of the functions
once on a `Config` and then use a `Linker` (or `Store::get_host_func`
directly) to resolve the functions when instantiating a module.
This should help improve the performance of use cases where a `Store` is
short-lived and redefining the functions at every module instantiation is a
noticeable performance hit.
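A rough sketch of the intended flow; the exact method names and
signatures here (e.g. `wrap_host_func`, `get_host_func`) are assumptions
based on the description above:

    use wasmtime::*;

    fn setup_and_run(wat: &str) -> anyhow::Result<()> {
        // Define host functions once on the Config...
        let mut config = Config::new();
        config.wrap_host_func("env", "log_i32", |x: i32| println!("log: {}", x));

        // (a later commit below makes Engine::new return a Result)
        let engine = Engine::new(&config);
        let module = Module::new(&engine, wat)?;

        // ...then each short-lived Store can reuse them without redefining
        // anything.
        for _ in 0..1000 {
            let store = Store::new(&engine);
            let mut linker = Linker::new(&store);
            if let Some(log) = store.get_host_func("env", "log_i32") {
                linker.define("env", "log_i32", log)?;
            }
            let _instance = linker.instantiate(&module)?;
        }
        Ok(())
    }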
This commit adds `add_to_config` to the code generation for Wasmtime's `Wasi`
type.
The new method adds the WASI functions to the given config as host functions.
This commit adds context functions to `Store`: `get` to get a context of a
particular type and `set` to set the context on the store.
For safety, `set` cannot replace an existing context value of the same type.
`Wasi::set_context` was added to set the WASI context for a `Store` when using
`Wasi::add_to_config`.
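For example (a sketch; the exact signatures of `set` and `get` are
assumed from the description above):

    use wasmtime::*;

    // Illustrative context type.
    struct RequestCtx { id: u64 }

    fn demo(engine: &Engine) -> anyhow::Result<()> {
        let store = Store::new(engine);
        store.set(RequestCtx { id: 1 })?;                  // first value of this type: ok
        assert!(store.set(RequestCtx { id: 2 }).is_err()); // same type again: rejected
        let ctx = store.get::<RequestCtx>().unwrap();      // typed lookup
        assert_eq!(ctx.id, 1);
        Ok(())
    }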
* Add `Config::define_host_func_async`.
* Make config "async" rather than store.
This commit moves the concept of "async-ness" to `Config` rather than `Store`.
Note: this is a breaking API change for anyone that's already adopted the new
async support in Wasmtime.
Now `Config::new_async` is used to create an "async" config and any `Store`
associated with that config is inherently "async".
This is needed for async shared host functions to have some sanity check during their
execution (async host functions, like "async" `Func`, need to be called with
the "async" variants).
* Update async function tests to smoke async shared host functions.
This commit updates the async function tests to also smoke the shared host
functions, plus `Func::wrap0_async`.
This also changes the "wrap async" method names on `Config` to
`wrap$N_host_func_async` to slightly better match what is on `Func`.
* Move the instance allocator into `Engine`.
This commit moves the instantiated instance allocator from `Config` into
`Engine`.
This makes certain settings in `Config` no longer order-dependent, which is how
`Config` should ideally be.
This also removes the confusing concept of the "default" instance allocator,
instead opting to construct the on-demand instance allocator when needed.
This does alter the semantics of the instance allocator as now each `Engine`
gets its own instance allocator rather than sharing a single one between all
engines created from a configuration.
* Make `Engine::new` return `Result`.
This is a breaking API change for anyone using `Engine::new`.
As creating the pooling instance allocator may fail (likely cause is not enough
memory for the provided limits), instead of panicking when creating an
`Engine`, `Engine::new` now returns a `Result`.
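For example:

    // Previously a failure here (e.g. pooling allocator limits that cannot
    // be reserved) would panic inside `Engine::new`; now it surfaces as an
    // error.
    let engine = Engine::new(&config)?;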
* Remove `Config::new_async`.
This commit removes `Config::new_async` in favor of treating "async support" as
any other setting on `Config`.
The setting is `Config::async_support`.
* Remove order dependency when defining async host functions in `Config`.
This commit removes the order dependency where async support must be enabled on
the `Config` prior to defining async host functions.
The check is now delayed to when an `Engine` is created from the config.
* Update WASI example to use shared `Wasi::add_to_config`.
This commit updates the WASI example to use `Wasi::add_to_config`.
As only a single store and instance are used in the example, it has no semantic
difference from the previous example, but the intention is to steer users
towards defining WASI on the config and only using `Wasi::add_to_linker` when
more explicit scoping of the WASI context is required.
The Wasm SIMD specification has added new instructions that allow inserting to the lane of a vector from a memory location, and conversely, extracting from a lane of a vector to a memory location. The simplest implementation lowers these instructions, `load[8|16|32|64]_lane` and `store[8|16|32|64]_lane`, to a sequence of either `load + insertlane` or `extractlane + store` (in CLIF). With the new backend's pattern matching, we expect these CLIF sequences to compile as a single machine instruction (at least in x64).
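As a sketch of the idea in cranelift-frontend terms (names and details
here are illustrative, not the actual lowering code):

    // Sketch only: building the `load + insertlane` pair via
    // cranelift-frontend; the backend is expected to pattern-match the pair
    // into a single instruction where possible.
    use cranelift_codegen::ir::{types, MemFlags, Value};
    use cranelift_frontend::FunctionBuilder;

    fn load32_lane(
        builder: &mut FunctionBuilder<'_>,
        vec: Value,
        addr: Value,
        lane: u8,
    ) -> Value {
        let scalar = builder.ins().load(types::I32, MemFlags::new(), addr, 0);
        builder.ins().insertlane(vec, scalar, lane)
    }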
This commit adds a "pooling" variant to the wast tests that uses the pooling
instance allocation strategy.
This should help with the test coverage of the pooling instance allocator.
This change makes the storage of `Table` more internally consistent.
Elements are stored as raw pointers for both static and dynamic table storage.
Explicitly storing elements as pointers removes assumptions being made by the
pooling allocator in terms of the size and default representation of the
elements.
However, care must be taken to properly clone externrefs for table operations.
* Add more overflow checks in table/memory initialization.
* Comment for `with_allocation_strategy` to explain ignored `Config` options.
* Fix Wasmtime `Table` to not panic for type mismatches in `fill`/`copy`.
* Add tests for that fix.
* More use of `anyhow`.
* Change `make_accessible` into `protect_linear_memory` to better convey
what it is used for; this makes the uffd implementation a little easier
to follow.
* Remove `create_memory_map` in favor of just creating the `Mmap` instances in
the pooling allocator. This also removes the need for `MAP_NORESERVE` in the
uffd implementation.
* Moar comments.
* Remove `BasePointerIterator` in favor of `impl Iterator`.
* The uffd implementation now only monitors linear memory pages and will only
receive faults on pages that could potentially be accessible and never on a
statically known guard page.
* Stop allocating memory or table pools if the maximum limit of the memory or
table is 0.
* Add `anyhow` dependency to `wasmtime-runtime`.
* Revert `get_data` back to `fn`.
* Remove `DataInitializer` and box the data in `Module` translation instead.
* Improve comments on `MemoryInitialization`.
* Remove `MemoryInitialization::OutOfBounds` in favor of proper bulk memory
semantics.
* Use segmented memory initialization except for when the uffd feature is
enabled on Linux.
* Validate modules with the allocator after translation.
* Updated various functions in the runtime to return `anyhow::Result`.
* Use a slice when copying pages instead of `ptr::copy_nonoverlapping`.
* Remove unnecessary casts in `OnDemandAllocator::deallocate`.
* Better document the `uffd` feature.
* Use WebAssembly page-sized pages in the paged initialization.
* Remove the stack pool from the uffd handler and simply protect just the guard
pages.
Last-minute code cleanup to fix some comments and rename `address_space_size`
to `memory_reservation_size` to better describe what the option does.
The issue that required the pin to the older version has been resolved.
This keeps the x64 backend tests on an older nightly version to support the
`-Z` flags being passed without having to update Cargo.toml to a new feature
resolver version.
The doc task is also kept on the older nightly for the same reason.
Wasmtime documentation says stable is the supported rustc version, and that's
what we test with CI, so the badge should reflect that.
Wasmtime doesn't even build with 1.37 any longer anyway.
This was originally written to support sourcing the table and memory
definitions differently for the pooling allocator.
However, both allocators do the exact same thing, so the closure arguments are
no longer necessary.
Additionally, this cleans up the code a bit to pass in the allocation request
rather than having individual parameters.
This commit implements copying paged initialization data upon a fault of a
linear memory page.
If the initialization data is "paged", then the appropriate pages are copied
into the Wasm page (or zeroed if the page is not present in the
initialization data).
If the initialization data is not "paged", the Wasm page is zeroed so that
module instantiation can initialize the pages.
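A simplified sketch of that policy (the types and helpers here are
illustrative stand-ins, not the actual wasmtime-runtime internals):

    // Illustrative only; not the actual wasmtime-runtime types.
    enum MemoryInit<'a> {
        // One entry per Wasm page; None means the page has no initialization
        // data.
        Paged(&'a [Option<&'a [u8]>]),
        // Not paged: the fault handler only zeroes, and normal instantiation
        // applies the data segments afterwards.
        Segmented,
    }

    fn handle_page_fault(dst_page: &mut [u8], page_index: usize, init: &MemoryInit<'_>) {
        match init {
            MemoryInit::Paged(pages) => match pages.get(page_index).copied().flatten() {
                // The page exists in the initialization data: copy it in.
                Some(data) => dst_page[..data.len()].copy_from_slice(data),
                // No data for this page: zero it.
                None => dst_page.fill(0),
            },
            MemoryInit::Segmented => dst_page.fill(0),
        }
    }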