Yesterday fuzzing was switched to using a `Linker` to improve coverage
when using module linking since we can fake instance imports with
definitions of each individual item. Using a `Linker`, however, means
that we can't necessarily instantiate all modules, such as
    (module
      (import "" "" (memory (;0;) 0 1))
      (import "" "" (memory (;1;) 2)))
As a result, this change simply allows these sorts of "incompatible
import type" errors during fuzzing rather than treating them as crashes.
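As a hedged illustration, the fuzz oracle's tolerance might look roughly
like this (the error text and the exact `Linker` call are assumptions,
not the actual oracle code):

    // Hypothetical sketch: an instantiation failure caused by an incompatible
    // import type is an expected outcome with dummy imports under module
    // linking, not a fuzz crash.
    fn instantiate_or_skip(linker: &wasmtime::Linker, module: &wasmtime::Module) {
        match linker.instantiate(module) {
            Ok(_instance) => { /* continue exercising the instance */ }
            Err(e) if e.to_string().contains("incompatible import type") => {
                // Expected; ignore and move on to the next test case.
            }
            Err(e) => panic!("unexpected instantiation error: {:?}", e),
        }
    }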
Previously each module in a module-linking-using-module would compile
all the trampolines for all signatures for all modules. In forest-like
situations with lots of modules this would cause quite a few trampolines
to get compiled. The original intention was to have one global list of
trampolines for all modules in the module-linking graph that they could
all share. With the current design of module linking, however, the
intention is for modules to be relatively isolated from one another
which would make achieving this difficult.
In lieu of total sharing (which would also be good for the global scope,
but we don't do that right now either) this commit implements an
alternative strategy where each module simply compiles its own
trampolines that it itself can reach. This should mean that
module-linking modules behave more similarly to standalone modules in
terms of trampoline duplication. If we ever do global trampoline
deduplication we can likely batch this all together into one, but for
now this should fix the performance issues seen in fuzzing.
Closes #2525
Currently wasmtime will generate a `SignatureIndex`-per-type in the
module itself, even if the module itself declares the same type multiple
times. To make matters worse if the same type is declared across
multiple modules used in a module-linking-using-module then the
signature will be recorded each time it's declared.
This commit adds a simple map to module translation to deduplicate these
function types. This should improve the performance of module-linking
graphs where the same function type may be declared in a number of
modules. For modules that don't use module linking this adds an extra
map that's not used too often, but the time spent managing it should be
dwarfed by other compile tasks.
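As a rough sketch of the idea (the index and type here are simplified
stand-ins for the wasmtime-internal ones, not the actual translation
code):

    use std::collections::HashMap;

    // Hypothetical sketch: intern each function type once so that repeated
    // declarations, within one module or across modules, map to the same index.
    #[derive(Clone, PartialEq, Eq, Hash)]
    struct FuncType { params: Vec<u8>, results: Vec<u8> }

    #[derive(Clone, Copy)]
    struct SignatureIndex(u32);

    #[derive(Default)]
    struct Signatures {
        dedup: HashMap<FuncType, SignatureIndex>,
        types: Vec<FuncType>,
    }

    impl Signatures {
        fn intern(&mut self, ty: FuncType) -> SignatureIndex {
            if let Some(&idx) = self.dedup.get(&ty) {
                return idx; // already recorded this exact signature
            }
            let idx = SignatureIndex(self.types.len() as u32);
            self.types.push(ty.clone());
            self.dedup.insert(ty, idx);
            idx
        }
    }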
* Increase allowances for values when fuzzing
The wasm-smith limits for generating modules are a bit higher than what
we specify, so sync those up to avoid too many false positives from
blown limits.
* Ensure fuzzing `*.wat` files are in sync
I keep looking at `*.wat` files that are actually stale, so remove stale
files if we write out a `*.wasm` file and can't disassemble it.
* Enable shadowing in dummy_linker
Fixes an issue where the same name is imported twice and we generated
two values for it. We don't mind the error here; we just want to ignore
the shadowing errors.
Currently this exposes a bug where modules broken by module linking
cause failures in the fuzzer, but we want to fuzz those modules since
module linking isn't enabled when generating these modules.
This commit fixes an issue where, when module linking was enabled for
fuzzing (which it is), imports of modules show up as imports of
instances. In an attempt to satisfy such imports with dummy values, the
fuzzing integration would create an instance for each import. This,
however, counts towards instance limits and isn't always desired.
This commit refactors the creation of dummy import values to decompose
imports of instances into imports of each individual item. This should
retain the pre-module-linking behavior of dummy imports for various
fuzzers.
* Use stable Rust on CI to test the x64 backend
This commit leverages the newly-released 1.51.0 compiler to test the
new backend on Windows and Linux with a stable compiler instead of a
nightly compiler. This isolates the nightly build to just the nightly
documentation generation and fuzzing, both of which rely on nightly for
the best results right now.
* Use updated stable in book build job
* Run rustfmt for new stable
* Silence new warnings for wasi-nn
* Allow some dead code in the x64 backend
Looks like new rustc is better about emitting some dead-code warnings
* Update rust in peepmatic job
* Fix a test in the pooling allocator
* Remove `package.metadata.docs.rs` temporarily
Needs resolution of https://github.com/rust-lang/cargo/pull/9300 first
* Fix a warning in a wasi-nn example
* Combine stack-based cleanups for faster wasm calls
This commit is an extension of #2757 where the goal is to optimize entry
into WebAssembly. Currently wasmtime has two stack-based cleanups when
entering wasm, one for the externref activation table and another for
stack limits getting reset. This commit fuses these two cleanups
together into one and moves some code around, which results in fewer
closures and fewer captured variables and speeds up calls into wasm a
bit more.
Overall this drops the execution time from 88ns to 80ns locally for me.
This also updates the atomic orderings when updating the stack limit
from `SeqCst` to `Relaxed`. While `SeqCst` is a reasonable starting
point the usage here should be safe to use `Relaxed` since we're not
using the atomics to actually protect any memory, it's simply receiving
signals from other threads.
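A minimal sketch of what the ordering change amounts to (the field and
methods are illustrative, not the real VM context layout):

    use std::sync::atomic::{AtomicUsize, Ordering};

    // Hypothetical sketch: the stack limit is only a value that other threads
    // signal with; it doesn't guard any other memory, so `Relaxed` suffices
    // where `SeqCst` was used before.
    struct Interrupts { stack_limit: AtomicUsize }

    impl Interrupts {
        fn enter_wasm(&self, new_limit: usize) -> usize {
            // Save the previous limit so the stack-based cleanup can restore it.
            self.stack_limit.swap(new_limit, Ordering::Relaxed)
        }

        fn exit_wasm(&self, previous: usize) {
            self.stack_limit.store(previous, Ordering::Relaxed);
        }
    }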
* Determine whether a pc is wasm via a global map
The macOS implementation of traps recently changed to using mach ports
for handlers instead of signal handlers. This means that a previously
relied-upon invariant, namely that each thread handles its own trap,
was broken. The macOS implementation worked around this by maintaining a
global map from thread id to thread-local information.
This global map is quite slow though. It involves taking a lock and
updating a hash map on all calls into WebAssembly. In my local testing
this accounts for >70% of the overhead of calling into WebAssembly on
macOS. Naturally it'd be great to remove this!
This commit fixes this issue and removes the global lock/map that is
updated on all calls into WebAssembly. The fix is to maintain a global
map of wasm modules and their trap addresses in the `wasmtime` crate.
Doing so is relatively simple since we're already tracking this
information at the `Store` level.
Once we've got a global map then the macOS implementation can use this
from a foreign thread and everything works out.
Locally this brings the overhead, on macOS specifically, of calling into
wasm from 80ns to ~20ns.
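A minimal sketch of the global-map idea, assuming registration happens
when a module is added to a `Store` and lookups only need the faulting
pc (types and names are illustrative):

    use std::collections::BTreeMap;
    use std::sync::{Mutex, OnceLock};

    // Hypothetical sketch: a process-wide registry of compiled wasm code
    // ranges, keyed by each range's end address. Any thread (including the
    // mach exception thread) can ask whether a pc falls in wasm code.
    static CODE_RANGES: OnceLock<Mutex<BTreeMap<usize, usize>>> = OnceLock::new();

    fn ranges() -> &'static Mutex<BTreeMap<usize, usize>> {
        CODE_RANGES.get_or_init(|| Mutex::new(BTreeMap::new()))
    }

    fn register_module(text_start: usize, text_end: usize) {
        ranges().lock().unwrap().insert(text_end, text_start);
    }

    fn pc_is_wasm(pc: usize) -> bool {
        // Find the first registered range whose end is >= pc and check that
        // the range actually contains pc.
        ranges()
            .lock()
            .unwrap()
            .range(pc..)
            .next()
            .map_or(false, |(_end, &start)| start <= pc)
    }

The important property is that the map is only written when modules are
registered, not on every call into wasm, so the per-call lock and
hash-map update go away.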
* Fix compiles
* Review comments
This commit implements a few optimizations, mainly inlining, that should
improve the performance of calling a WebAssembly function. This code
path can be quite hot depending on the embedding case and we hadn't
really put much effort into optimizing the nitty gritty.
The predominant optimization here is adding `#[inline]` to trivial
functions so performance is improved without having to compile with LTO.
Another optimization is to call `lazy_per_thread_init` when traps are
initialized per-thread (when a `Store` is created) rather than each time
a function is called. The next optimization is to change the unwind
reason in the `CallThreadState` to `MaybeUninit` to avoid extra checks
in the default case about whether we need to drop its variants (since in
the happy path we never need to drop it). The final optimization is to
optimize out a few checks when `async` support is disabled for a small
speed boost.
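A small sketch of the `MaybeUninit` piece of this, with simplified types
(the real `CallThreadState` holds more than shown here):

    use std::mem::MaybeUninit;

    // Hypothetical sketch: in the happy path the unwind reason is never
    // written, so there's no variant to check and no drop glue to run when a
    // call completes normally.
    enum UnwindReason {
        Panic(Box<dyn std::any::Any + Send>),
        Trap(String),
    }

    struct CallThreadState {
        unwind_reason: MaybeUninit<UnwindReason>,
    }

    impl CallThreadState {
        fn new() -> Self {
            CallThreadState { unwind_reason: MaybeUninit::uninit() }
        }

        fn record_unwind(&mut self, reason: UnwindReason) {
            self.unwind_reason = MaybeUninit::new(reason);
        }

        // Safety: only call this after `record_unwind` has stored a value.
        unsafe fn take_unwind(&mut self) -> UnwindReason {
            std::mem::replace(&mut self.unwind_reason, MaybeUninit::uninit())
                .assume_init()
        }
    }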
In a small benchmark where wasmtime calls a simple wasm function my
macOS computer dropped from 110ns to 86ns overhead, a 20% decrease. The
macOS overhead is still largely dominated by the global lock acquisition
and hash table management for traps right now, but I suspect the Linux
overhead is much better (should be on the order of ~30 or so ns).
We still have a long way to go to compete with SpiderMonkey which, in
testing, seems to have ~6ns overhead in calling the same wasm function on
my computer.
Add support for `poll_oneoff` calls which just sleep on a relative
timeout. This fixes a bug in handling code compiled with WASI libc's
`sleep` family of functions, which calls `poll_oneoff` with a
`CLOCK_REALTIME` timer; that case wasn't previously implemented.
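A minimal sketch of the special case being added, assuming the
subscriptions have already been decoded (the real implementation works
on the WASI subscription structures):

    use std::time::Duration;

    // Hypothetical sketch: when every subscription in a poll_oneoff call is a
    // relative clock timeout and there are no fd subscriptions, the call can
    // be satisfied by sleeping for the shortest timeout instead of polling.
    fn sleep_only_poll(relative_timeouts_ns: &[u64]) {
        if let Some(shortest) = relative_timeouts_ns.iter().copied().min() {
            std::thread::sleep(Duration::from_nanos(shortest));
        }
    }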
This bumps target-lexicon and adds support for the AppleAarch64 calling
convention. Specifically for WebAssembly support, we only have to worry
about the new stack-slot convention. Stack slots don't need to be at
least 8 bytes wide; they can be as small as the data type's size. For
instance, if we need stack slots for (i32, i32), they can be located at
offsets (+0, +4). Note that they still need to be properly aligned to
the data type they contain, though, so if we need stack slots for
(i32, i64), we can't start the i64 slot at the +4 offset (it must start
at the +8 offset).
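A small sketch of the offset computation implied by those examples
(purely illustrative; alignment is assumed to equal the type's size
here):

    // Hypothetical sketch of the stack-slot rule described above: each stack
    // argument takes only its natural size but must stay aligned to that size.
    fn apple_aarch64_stack_offsets(arg_sizes: &[u32]) -> Vec<u32> {
        let mut offset = 0u32;
        let mut offsets = Vec::with_capacity(arg_sizes.len());
        for &size in arg_sizes {
            // Round the running offset up to the argument's own alignment.
            offset = (offset + size - 1) / size * size;
            offsets.push(offset);
            offset += size;
        }
        offsets
    }

    // (i32, i32) -> [0, 4]; (i32, i64) -> [0, 8], matching the examples above.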
Added one test that was failing on the Mac M1, as well as other tests
stressing different yet similar situations.
* Add assert to `StackPool::deallocate` to ensure the fiber stack given to it
comes from the pool.
* Remove outdated comment about windows and stacks as the allocator now returns
fiber stacks.
* Remove conditional compilation around `stack_size` in the allocators as it
was just clutter.
This commit splits out a `FiberStack` from `Fiber`, allowing the instance
allocator trait to return `FiberStack` rather than raw stack pointers. This
keeps the stack creation mostly in `wasmtime_fiber`, but now the on-demand
instance allocator can make use of it.
The instance allocators no longer have to return a "not supported" error to
indicate that the store should allocate its own fiber stack.
This includes a bunch of cleanup in the instance allocator to scope stacks to
the new "async" feature in the runtime.
Closes #2708.
* Switch macOS to using mach ports for trap handling
This commit moves macOS to using mach ports instead of signals for
handling traps. The motivation for this is listed in #2456, namely that
once mach ports are used in a process that means traditional UNIX signal
handlers won't get used. This means that if Wasmtime is integrated with
Breakpad, for example, then Wasmtime's trap handler never fires and
traps don't work.
The `traphandlers` module is refactored as part of this commit to split
the platform-specific bits into their own files (it was growing quite a
lot for one inline `cfg_if!`). The `unix.rs` and `windows.rs` files
remain the same as they were before with a few minor tweaks for some
refactored interfaces. The `macos.rs` file is brand new and lifts almost
its entire implementation from SpiderMonkey, adapted for Wasmtime
though.
The main gotcha with mach ports is that a separate thread is what
services the exception. Some unsafe magic allows this separate thread
to read non-`Send` and temporary state from other threads; this is hoped
to be safe in this context. The unfortunate downside is that calling wasm
on macOS now involves taking a global lock and modifying a global hash
map twice-per-call. I'm not entirely sure how to get out of this cost
for now, but hopefully for any embeddings on macOS it's not the end of
the world.
Closes #2456
* Add a sketch of arm64 apple support
* store: maintain CallThreadState mapping when switching fibers
* cranelift/aarch64: generate unwind directives to disable pointer auth
AArch64 (ARMv8.3 and later) has a feature called pointer authentication,
designed to fight ROP/JOP attacks: some pointers may be signed using new
instructions, adding payloads to the high (previously unused) bits of
the pointers. More on this here: https://lwn.net/Articles/718888/
Unwinders on aarch64 need to know if some pointers contained on the call
frame contain an authentication code or not, to be able to properly
authenticate them or use them directly. Since native code may have
enabled it by default (as is the case on the Mac M1), and the default is
that this configuration value is inherited, we need to explicitly
disable it for the only kind of supported pointers (return addresses).
To do so, we set the value of a DWARF pseudo-register (34), which
doesn't correspond to a real register, to 0, as documented in
https://github.com/ARM-software/abi-aa/blob/master/aadwarf64/aadwarf64.rst#note-8.
This is done at function granularity, in the spirit of Cranelift's
compilation model. Alternatively, a single directive could be generated
in the CIE, generating less information per module.
* Make exception handling work on Mac aarch64 too
* fibers: use a breakpoint instruction after the final call in wasmtime_fiber_start
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
* Redo the statically typed `Func` API
This commit reimplements the `Func` API with respect to statically typed
dispatch. Previously `Func` had a `getN` and `getN_async` family of
methods which were implemented for 0 to 16 parameters. The return value
of these functions was an `impl Fn(..)` closure with the appropriate
parameters and return values.
There are a number of downsides with this approach that have become
apparent over time:
* The addition of `*_async` doubled the API surface area (which is quite
large here due to one-method-per-number-of-parameters).
* The [documentation of `Func`][old-docs] is quite verbose and feels
"polluted" with all these getters, making it harder to understand the
other methods that can be used to interact with a `Func`.
* These methods unconditionally pay the cost of returning an owned `impl
Fn` with a `'static` lifetime. While cheap, this is still paying the
cost for cloning the `Store` effectively and moving data into the
closed-over environment.
* Storage of the return value into a struct, for example, always
requires `Box`-ing the returned closure since it otherwise cannot be
named.
* Recently I had the desire to implement an "unchecked" path for
invoking wasm where you unsafely assert the type signature of a wasm
function. Doing this with today's scheme would require doubling
(again) the API surface area for both async and synchronous calls,
further polluting the documentation.
The main benefit of the previous scheme is that by returning an `impl Fn`
it was quite easy and ergonomic to actually invoke the function. In
practice, though, examples would often have something akin to
`.get0::<()>()?()?` which is a lot of things to interpret all at once.
Note that `get0` means "0 parameters" yet a type parameter is passed.
There's also a double function invocation which looks like a lot of
characters all lined up in a row.
Overall, I think that the previous design is starting to show too many
cracks and deserves a rewrite. This commit is that rewrite.
The new design in this commit is to delete the `getN{,_async}` family of
functions and instead have a new API:
    impl Func {
        fn typed<P, R>(&self) -> Result<&Typed<P, R>>;
    }

    impl<P, R> Typed<P, R> {
        fn call(&self, params: P) -> Result<R, Trap>;
        async fn call_async(&self, params: P) -> Result<R, Trap>;
    }
This should entirely replace the current scheme, albeit with a slight
loss of ergonomics in some use cases. The idea behind the API is that
the existence of `Typed<P, R>` is a "proof" that the underlying function
takes `P` and returns `R`. The `Func::typed` method performs a runtime
type-check to ensure that types all match up, and if successful you get
a `Typed` value. Otherwise an error is returned.
Once you have a `Typed` then, like `Func`, you can either `call` or
`call_async`. The difference with a `Typed`, however, is that the
params/results are statically known and hence these calls can be much
more efficient.
This is a much smaller API surface area from before and should greatly
simplify the `Func` documentation. There's still a problem where
`Func::wrapN_async` produces a lot of functions to document, but that's
now the sole offender. It's a nice benefit that the statically-typed
async versions are now expressed with an `async` function rather than a
function-returning-a-future, which makes it both more efficient and
easier to understand.
The types `P` and `R` are intended to either be bare types (e.g. `i32`)
or tuples of any length (including 0). At this time `R` is only allowed
to be `()` or a bare `i32`-style type because multi-value is not
supported with a native ABI (yet). The `P`, however, can be any size of
tuples of parameters. This is also where some ergonomics are lost
because instead of `f(1, 2)` you now have to write `f.call((1, 2))`
(note the double-parens). Similarly `f()` becomes `f.call(())`.
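As a hedged usage sketch (names follow the sketch above; the exact
embedding API may differ):

    fn call_add(instance: &wasmtime::Instance) -> anyhow::Result<i32> {
        // Prove the signature once at runtime, then make statically-typed calls.
        let add = instance.get_func("add").expect("`add` export");
        let add = add.typed::<(i32, i32), i32>()?;
        // Note the double parens: the parameters are passed as a single tuple.
        let sum = add.call((1, 2))?;
        Ok(sum)
    }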
Overall I feel that this is a better tradeoff than before. While not
universally better due to the loss in ergonomics I feel that this design
is much more flexible in terms of what you can do with the return value
and also understanding the API surface area (just less to take in).
[old-docs]: https://docs.rs/wasmtime/0.24.0/wasmtime/struct.Func.html#method.get0
* Rename Typed to TypedFunc
* Implement multi-value returns through `Func::typed`
* Fix examples in docs
* Fix some more errors
* More test fixes
* Rebasing and adding `get_typed_func`
* Updating tests
* Fix typo
* More doc tweaks
* Tweak visibility on `Func::invoke`
* Fix tests again
This commit fixes a few issues around managing the thread-local state of
a wasmtime thread. We intentionally have only a single TLS variable in
the whole world, and the problem is that when stack-switching off an
async thread we were not restoring the previous TLS state. This is
necessary in two cases:
* Futures aren't guaranteed to be polled/completed in a stack-like
fashion. If a poll sees that a future isn't ready then we may resume
execution in a previous wasm context that ends up needing the TLS
information.
* Futures can also cross threads (when the whole store crosses threads)
and we need to save/restore TLS state from the thread we're coming
from and the thread that we're going to.
The stack switching issue necessitates some more glue around suspension
and resumption of a stack to ensure we save/restore the TLS state on
both sides. The thread issue, however, also necessitates that we use
`#[inline(never)]` on TLS access functions and never have TLS borrows
live across a function which could result in running arbitrary code (as
was the case for the `tls::set` function).
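A minimal sketch of the save/restore glue, with the TLS slot reduced to
a bare pointer (the real state is richer than this):

    use std::cell::Cell;
    use std::ptr;

    thread_local! {
        // Hypothetical sketch of the single TLS slot described above: a pointer
        // to the innermost CallThreadState for the current thread.
        static STATE: Cell<*const u8> = Cell::new(ptr::null());
    }

    // Captures the TLS value when a fiber suspends so the previous state can
    // be put back, and restores the fiber's value when it is later resumed.
    struct TlsRestore(*const u8);

    #[inline(never)] // keep the TLS access out of callers, per the note above
    fn take_for_suspend() -> TlsRestore {
        STATE.with(|s| TlsRestore(s.replace(ptr::null())))
    }

    #[inline(never)]
    fn restore_on_resume(saved: TlsRestore) {
        STATE.with(|s| s.set(saved.0));
    }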
* Implement defining host functions at the Config level.
This commit introduces defining host functions at the `Config` level
rather than with a `Func` tied to a `Store`.
The intention here is to enable a host to define all of the functions once
with a `Config` and then use a `Linker` (or directly with
`Store::get_host_func`) to use the functions when instantiating a module.
This should help improve the performance of use cases where a `Store` is
short-lived and redefining the functions at every module instantiation is a
noticeable performance hit.
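A hedged usage sketch of the intended flow (method names are
illustrative of the API described above and may differ in detail):

    fn run(module_bytes: &[u8]) -> anyhow::Result<()> {
        use wasmtime::{Config, Engine, Extern, Instance, Module, Store};

        // Define the host function once, on the Config.
        let mut config = Config::new();
        config.wrap_host_func("env", "log", |x: i32| println!("log: {}", x));
        let engine = Engine::new(&config)?;
        let module = Module::new(&engine, module_bytes)?;

        // Short-lived stores no longer pay to re-define host functions; they
        // just look up what the config already defines.
        let store = Store::new(&engine);
        let log = store
            .get_host_func("env", "log")
            .expect("host function defined on the config");
        let _instance = Instance::new(&store, &module, &[Extern::Func(log)])?;
        Ok(())
    }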
This commit adds `add_to_config` to the code generation for Wasmtime's `Wasi`
type.
The new method adds the WASI functions to the given config as host functions.
This commit adds context functions to `Store`: `get` to get a context of a
particular type and `set` to set the context on the store.
For safety, `set` cannot replace an existing context value of the same type.
`Wasi::set_context` was added to set the WASI context for a `Store` when using
`Wasi::add_to_config`.
* Add `Config::define_host_func_async`.
* Make config "async" rather than store.
This commit moves the concept of "async-ness" to `Config` rather than `Store`.
Note: this is a breaking API change for anyone that's already adopted the new
async support in Wasmtime.
Now `Config::new_async` is used to create an "async" config and any `Store`
associated with that config is inherently "async".
This is needed for async shared host functions to have some sanity check during their
execution (async host functions, like "async" `Func`, need to be called with
the "async" variants).
* Update async function tests to smoke async shared host functions.
This commit updates the async function tests to also smoke the shared host
functions, plus `Func::wrap0_async`.
This also changes the "wrap async" method names on `Config` to
`wrap$N_host_func_async` to slightly better match what is on `Func`.
* Move the instance allocator into `Engine`.
This commit moves the instantiated instance allocator from `Config` into
`Engine`.
This makes certain settings in `Config` no longer order-dependent, which is how
`Config` should ideally be.
This also removes the confusing concept of the "default" instance allocator,
instead opting to construct the on-demand instance allocator when needed.
This does alter the semantics of the instance allocator as now each `Engine`
gets its own instance allocator rather than sharing a single one between all
engines created from a configuration.
* Make `Engine::new` return `Result`.
This is a breaking API change for anyone using `Engine::new`.
As creating the pooling instance allocator may fail (likely cause is not enough
memory for the provided limits), instead of panicking when creating an
`Engine`, `Engine::new` now returns a `Result`.
* Remove `Config::new_async`.
This commit removes `Config::new_async` in favor of treating "async support" as
any other setting on `Config`.
The setting is `Config::async_support`.
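For example (a short sketch; only the setting name comes from this
commit):

    // Async support is now just another Config setting rather than a separate
    // constructor:
    let mut config = Config::new();
    config.async_support(true);
    // Engines and stores built from this config are "async" and may define
    // and call async host functions.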
* Remove order dependency when defining async host functions in `Config`.
This commit removes the order dependency where async support must be enabled on
the `Config` prior to defining async host functions.
The check is now delayed to when an `Engine` is created from the config.
* Update WASI example to use shared `Wasi::add_to_config`.
This commit updates the WASI example to use `Wasi::add_to_config`.
As only a single store and instance are used in the example, it has no semantic
difference from the previous example, but the intention is to steer users
towards defining WASI on the config and only using `Wasi::add_to_linker` when
more explicit scoping of the WASI context is required.
This change makes the storage of `Table` more internally consistent.
Elements are stored as raw pointers for both static and dynamic table storage.
Explicitly storing elements as pointers removes assumptions being made by the
pooling allocator in terms of the size and default representation of the
elements.
However, care must be taken to properly clone externrefs for table operations.
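A rough sketch of the representation this implies (names and layout here
are illustrative, not the actual runtime types):

    // Hypothetical sketch: both static (pooling) and dynamic tables hold each
    // element as a raw pointer, so the pooling allocator needn't assume
    // anything about element size or default representation.
    enum TableElements {
        // Fixed-capacity storage handed out by the pooling allocator.
        Static { base: *mut *mut u8, size: u32, maximum: u32 },
        // Owned, growable storage for dynamically allocated tables.
        Dynamic { elements: Vec<*mut u8>, maximum: Option<u32> },
    }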
* Add more overflow checks in table/memory initialization.
* Comment for `with_allocation_strategy` to explain ignored `Config` options.
* Fix Wasmtime `Table` to not panic for type mismatches in `fill`/`copy`.
* Add tests for that fix.