* winch(fuzz): Initial support for differential fuzzing
This commit introduces initial support for differential fuzzing for Winch. In
order to fuzz Winch, this change introduces the `winch` cargo feature. When the
`winch` cargo feature is enabled, the differential fuzz target uses `wasmi` as
the differential engine and `wasm-smith` and `single-inst` as the module sources.
The intention behind this change is to have a *local* approach for fuzzing and
verifying programs generated by Winch, and to have an initial implementation that
will eventually allow us to enable this fuzzing by default. Currently it's not
worth enabling by default given all the filtering that needs to
happen to ensure that the generated modules are supported by Winch.
It's worth noting that the Wasm filtering code will be temporary, until Winch
reaches feature parity in terms of Wasm operators.
* Check build targets with the `winch` feature flag
* Rename fuzz target feature to `fuzz-winch`
Previously the sequence number could affect the PartialEq and Hash impls. Ignoring the
sequence number in PartialEq and Hash allows us to avoid renumbering all
blocks in the incremental cache.
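As a rough illustration, here's a minimal sketch (with hypothetical field names, not the actual Cranelift layout types) of ignoring one field in hand-written `PartialEq`/`Hash` impls:
```rust
use std::hash::{Hash, Hasher};

struct BlockNode {
    seq: u32,          // sequence number: deliberately ignored below
    prev: Option<u32>, // hypothetical neighbor links
    next: Option<u32>,
}

impl PartialEq for BlockNode {
    fn eq(&self, other: &Self) -> bool {
        // `seq` is excluded so two layouts that differ only in numbering
        // still compare equal.
        self.prev == other.prev && self.next == other.next
    }
}

impl Hash for BlockNode {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.prev.hash(state);
        self.next.hash(state);
    }
}
```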
Fuzz additional targets in the cranelift-icache fuzz target. The list of targets fuzzed is controlled by the targets enabled in fuzz/Cargo.toml.
This PR also reworks how instruction disabling is done in the function generator, moving the deny list into a function so that the decision is made at runtime instead of at compile time.
We have some operations defined on DataFlowGraph purely to work around borrow-checker issues with InstructionData and other data on DataFlowGraph. Part of the problem is that indexing the DFG directly hides the fact that we're only indexing the insts field of the DFG.
This PR makes the insts field of the DFG public, but wraps it in a newtype that only allows indexing. This means that the borrow checker is better able to tell when operations on memory held by the DFG won't conflict, which comes up frequently when mutating ValueLists held by InstructionData.
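A minimal sketch of the newtype idea (simplified stand-in types, not the real Cranelift definitions):
```rust
use std::ops::{Index, IndexMut};

#[derive(Clone, Copy)]
pub struct Inst(u32);
pub struct InstructionData { /* opcode, value lists, ... */ }

// The instruction storage wrapped in a newtype that only supports indexing.
// Callers write `dfg.insts[inst]` instead of `dfg[inst]`, so the borrow
// checker can see that only this field of the DFG is borrowed.
pub struct Insts(Vec<InstructionData>);

impl Index<Inst> for Insts {
    type Output = InstructionData;
    fn index(&self, inst: Inst) -> &InstructionData {
        &self.0[inst.0 as usize]
    }
}

impl IndexMut<Inst> for Insts {
    fn index_mut(&mut self, inst: Inst) -> &mut InstructionData {
        &mut self.0[inst.0 as usize]
    }
}

pub struct DataFlowGraph {
    pub insts: Insts,
    // value lists, results, signatures, ...
}
```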
This commit changes the APIs in the `wasmtime` crate for configuring the
pooling allocator. I plan on adding a few more configuration options in
the near future and the current structure was feeling unwieldy for
adding these new abstractions.
The previous `struct`-based API has been replaced with a builder-style
API in a similar shape to `Config`. This is done to help make it
easier to add more configuration options in the future through adding
more methods, as opposed to adding more fields, which could break prior
initializations.
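A minimal self-contained sketch of the design being described (hypothetical type and option names, not the actual `wasmtime` API): a builder whose setters return `&mut Self` can grow new options without breaking existing initializations, unlike a plain `struct` with public fields.
```rust
// Hypothetical pooling-allocator options, for illustration only.
#[derive(Default)]
pub struct PoolingConfigSketch {
    max_instances: u32,
    max_memory_pages: u64,
}

impl PoolingConfigSketch {
    // Each option is a method returning `&mut Self`, so adding a new option
    // later is just adding a method; existing call sites keep compiling.
    pub fn max_instances(&mut self, count: u32) -> &mut Self {
        self.max_instances = count;
        self
    }

    pub fn max_memory_pages(&mut self, pages: u64) -> &mut Self {
        self.max_memory_pages = pages;
        self
    }
}

fn main() {
    let mut pooling = PoolingConfigSketch::default();
    pooling.max_instances(100).max_memory_pages(1_000);
}
```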
The `libfuzzer-sys` update in #5068 included some changes to the
`fuzz_target!` macro which caused a bare `run` function to be shadowed
by the macro-defined `run` function (changed in
rust-fuzz/libfuzzer#95), which meant that some of our fuzz targets were
looping infinitely or overflowing the stack as the same function was called
over and over. This renames the top-level `run` function to something
else in the meantime.
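A minimal sketch of the resulting shape of an affected fuzz target (`execute_one` is a hypothetical name standing in for whatever the helper was renamed to):
```rust
#![no_main]

use libfuzzer_sys::fuzz_target;

// Renamed from `run` so the macro-generated `run` can no longer shadow it
// and cause the target to call itself forever.
fn execute_one(data: &[u8]) {
    // ... actual fuzzing logic ...
    let _ = data;
}

fuzz_target!(|data: &[u8]| {
    execute_one(data);
});
```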
* fuzzgen: Test compiler flags
* cranelift: Generate `all()` function for all enum flags
This allows a user to iterate over all flags that exist.
* fuzzgen: Minimize regalloc_checker compiles
* fuzzgen: Limit the number of test case inputs
* fuzzgen: Add egraphs flag
It's finally here! 🥳
* cranelift: Add fuzzing comment to settings
* fuzzgen: Add riscv64
* fuzzgen: Unconditionally enable some flags
* Leverage Cargo's workspace inheritance feature
This commit is an attempt to reduce the complexity of the Cargo
manifests in this repository with Cargo's workspace-inheritance feature
becoming stable in Rust 1.64.0. This feature allows specifying fields in
the root workspace `Cargo.toml` which are then reused throughout the
workspace. For example this PR shares definitions such as:
* All of the Wasmtime-family of crates now use `version.workspace =
true` to have a single location which defines the version number.
* All crates use `edition.workspace = true` to have one default edition
for the entire workspace.
* Common dependencies are listed in `[workspace.dependencies]` to avoid
typing the same version number in a lot of different places (e.g. the
`wasmparser = "0.89.0"` dependency is now listed in just one spot).
Currently the workspace-inheritance feature doesn't allow having two
different versions to inherit, so all of the Cranelift-family of crates
still manually specify their version. The inter-crate dependencies,
however, are shared amongst the root workspace.
This feature can be seen as a method of "preprocessing" of sorts for
Cargo manifests. This will help us develop Wasmtime but shouldn't have
any actual impact on the published artifacts -- every crate's dependency
list is still the same.
* Fix wasi-crypto tests
We weren't using the "union" cargo feature for the smallvec crate, which
reduces the size of a SmallVec by one machine word. This feature
requires Rust 1.49 but we already require much newer versions.
When using Wasmtime to compile pulldown-cmark from Sightglass, this
saves a decent amount of memory allocations and writes. According to
`valgrind --tool=dhat`:
- 6.2MiB (3.69%) less memory allocated over the program's lifetime
- 0.5MiB (4.13%) less memory allocated at maximum heap size
- 5.5MiB (1.88%) fewer bytes written to
- 0.44% fewer instructions executed
Sightglass reports a statistically significant runtime improvement too:
compilation :: cycles :: benchmarks/pulldown-cmark/benchmark.wasm
Δ = 24379323.60 ± 20051394.04 (confidence = 99%)
shrink-abiarg-0406da67c.so is 1.01x to 1.13x faster than main-be690a468.so!
[227506364 355007998.78 423280514] main-be690a468.so
[227686018 330628675.18 406025344] shrink-abiarg-0406da67c.so
compilation :: cycles :: benchmarks/spidermonkey/benchmark.wasm
Δ = 360151622.56 ± 278294316.90 (confidence = 99%)
shrink-abiarg-0406da67c.so is 1.01x to 1.07x faster than main-be690a468.so!
[8709162212 8911001926.44 9535111576] main-be690a468.so
[5058015392 8550850303.88 9282148438] shrink-abiarg-0406da67c.so
compilation :: cycles :: benchmarks/bz2/benchmark.wasm
Δ = 6936570.28 ± 6897696.38 (confidence = 99%)
shrink-abiarg-0406da67c.so is 1.00x to 1.08x faster than main-be690a468.so!
[155810934 175260571.20 234737344] main-be690a468.so
[119128240 168324000.92 257451074] shrink-abiarg-0406da67c.so
* fuzzgen: Disable verifier after NaN Canonicalization
We are currently running the verifier twice: once after the NaN canonicalization pass, and again when JIT compiling the code.
The verifier first runs in the NaN canonicalization pass. If it fails there, it prevents us from getting a nice `cargo fuzz fmt` test case.
So disable the verifier there, but ensure it's enabled when JIT compiling.
* fuzzgen: Force enable verifier in cranelift-icache
This is already the default, but since we no longer run the verifier in `fuzzgen` it's important to ensure that it runs in the fuzz targets.
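A minimal sketch, assuming the standard `cranelift-codegen` settings API, of forcing the verifier on for the flags used when JIT compiling:
```rust
use cranelift_codegen::settings::{self, Configurable};

fn jit_flags() -> settings::Flags {
    let mut builder = settings::builder();
    // Force the verifier on here even though fuzzgen disabled it during
    // NaN canonicalization.
    builder.set("enable_verifier", "true").unwrap();
    let flags = settings::Flags::new(builder);
    assert!(flags.enable_verifier());
    flags
}
```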
* fuzz: improve the API of the `wasm-spec-interpreter` crate
This change addresses key parts of #4852 by improving the bindings to
the OCaml spec interpreter. The new API allows users to `instantiate` a
module, `interpret` named functions on that instance, and `export`
globals and memories from that instance. This currently leaves the
existing implementation ("instantiate and interpret the first function in
a module") present under a new name: `interpret_legacy`.
* fuzz: adapt the differential spec engine to the new API
This removes the legacy uses in the differential spec engine, replacing
them with the new `instantiate`-`interpret`-`export` API from the
`wasm-spec-interpreter` crate.
* fix: make instance access thread-safe
This changes the OCaml-side definition of the instance so that each
instance carries round a reference to a "global store" that's specific
to that instantiation. Because everything is updated by reference there
should be no visible behavioural change on the Rust side, apart from
everything suddenly being thread-safe (modulo the fact that access to
the OCaml runtime still needs to be locked). This fix will need to be
generalised slightly in future if we want to allow multiple modules to
be instantiated in the same store.
Co-authored-by: conrad-watt <cnrdwtt@gmail.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
* Throw out fewer fuzz inputs with differential fuzzer
Prior to this commit the differential fuzzer would generate a module and
then select an engine to execute the module against Wasmtime. This
meant, however, that the candidate list of engines was filtered against
the configuration used to generate the module to ensure that the
selected engine could run the generated module.
This commit inverts this logic and instead selects an engine first,
allowing the engine to then tweak the module configuration to ensure
that the generated module is compatible with the engine selected. This
means that fewer fuzz inputs are discarded because every fuzz input will
result in an engine being executed.
Internally the engine constructors have all been updated to adjust the
configuration so that it works instead of filtering the configuration out. Some other
fixes were applied for the spec interpreter as well to work around #4852.
* Fix tests
* Improve wasmi differential fuzzer
* Support modules with a `start` function
* Implement trap-matching to ensure that wasmi and Wasmtime both report
the same flavor of trap.
* Support differential fuzzing where no engines match
Locally I was attempting to run against just one wasm engine with
`ALLOWED_ENGINES=wasmi` but the fuzzer quickly panicked because the
generated test case didn't match wasmi's configuration. This commit
updates engine-selection in the differential fuzzer to return `None` if
no engine is applicable, throwing out the test case. This won't be hit
at all with oss-fuzz-based runs but for local runs it'll be useful to
have.
* Improve proposal support in differential fuzzer
* De-prioritize unstable wasm proposals such as multi-memory and
memory64 by making them more unlikely with `Unstructured::ratio`.
* Allow fuzzing multi-table (reference types) and multi-memory by
avoiding setting their maximums to 1 in `set_differential_config`.
* Update selection of the pooling strategy to unconditionally support
the selected module config rather than the other way around.
* Improve handling of traps in differential fuzzing
This commit fixes an issue found via local fuzzing where engines were
reporting different results, but the underlying reason was that
one engine was hitting stack overflow before the other. To fix the
underlying issue I updated the execution to check for stack overflow
and, if hit, discard the entire fuzz test case from then on.
The rationale behind this is that each engine can have unique limits for
stack overflow. One test case I was looking at for example would stack
overflow at less than 1000 frames with epoch interruption enabled but
would stack overflow at more than 1000 frames with it disabled. This
means that the state after the trap started to diverge and it looked
like the engines produced different results.
While I was at it I also improved the "function call returned a trap"
case to compare traps to make sure the same trap reason popped out.
* Fix fuzzer tests
The version of the `arbitrary` crate used in fuzz targets needs to be
the same as the version used in `libfuzzer-sys`. That's why the latter
crate re-exports the former.
But we need to make sure to consistently use the re-exported version.
That's most easily done if that's the only version we have available.
However, `fuzz/Cargo.toml` declared a direct dependency on `arbitrary`,
making it available for import and leading to that version being used
in a couple of places.
There were two copies of `arbitrary` built before, even though they were
the same version: one with the `derive` feature turned on, through the
direct dependency, and one with it turned off when imported through
`libfuzzer-sys`. So I haven't specifically tested this but fuzzer builds
might be slightly faster now.
I have not removed the build-dep on `arbitrary`, because `build.rs` is
not invoked by libFuzzer and so it doesn't matter what version of
`arbitrary` it uses.
Our other crates, like `cranelift-fuzzgen` and `wasmtime-fuzzing`, can
still accidentally use a different version of `arbitrary` than the fuzz
targets which rely on them. This commit only fixes the direct cases
within `fuzz/**`.
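For reference, a minimal example of the pattern this commit moves the fuzz targets toward: pulling `arbitrary` through the `libfuzzer-sys` re-export instead of importing the crate directly.
```rust
#![no_main]

use libfuzzer_sys::{
    arbitrary::{Arbitrary, Unstructured},
    fuzz_target,
};

fuzz_target!(|data: &[u8]| {
    // Using the re-exported `arbitrary` guarantees the fuzz target and the
    // `fuzz_target!` macro agree on the crate version.
    let mut u = Unstructured::new(data);
    if let Ok(byte) = u8::arbitrary(&mut u) {
        let _ = byte;
    }
});
```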
* cranelift: Change test runner order
Changes the ordering of runtests to run per target and then per function.
This change doesn't do a lot by itself, but helps future refactorings of runtests.
* cranelift: Rename SingleFunctionCompiler to TestCaseCompiler
* cranelift: Skip runtests per target instead of per run
* cranelift: Deduplicate test names
With the upcoming changes to the runtest infrastructure we require unique ExtNames for all tests.
Note that test names have a 16-character limit and must be unique within those 16 characters.
* cranelift: Add TestFileCompiler to runtests
TestFileCompiler allows us to compile the entire file once, and then call the trampolines for each test.
The previous code was compiling the function for each invocation of a test.
* cranelift: Deduplicate ExtName for avg_round tests
* cranelift: Rename functions as they are defined.
The JIT internally only deals with `User` functions, and cannot link functions with test-case names.
This also caches trampolines by signature.
* cranelift: Preserve original name when reporting errors.
* cranelift: Rename aarch64 test functions
* cranelift: Add `call` and `call_indirect` tests!
* cranelift: Add pauth runtests for aarch64
* cranelift: Rename duplicate s390x tests
* cranelift: Delete `i128_bricmp_of` function from i128-bricmp
It looks like we forgot to delete it when it was moved to
`i128-bricmp-overflow`, and since it didn't have a run invocation
it was never compiled.
However, s390x does not support this, and panics when lowering.
* cranelift: Add `colocated` call tests
* cranelift: Rename *more* `s390x` tests
* cranelift: Add pauth + sign_return_address call tests
* cranelift: Undeduplicate test names
With the latest main changes we now support *unlimited* length test names.
This commit reverts:
52274676ff631c630f9879dd32e756566d3e700f
7989edc172493547cdf63e180bb58365e8a43a42
25c8a8395527d98976be6a34baa3b0b214776739
792e8cfa8f748077f9d80fe7ee5e958b7124e83b
* cranelift: Add LibCall tests
* cranelift: Revert more test names
These weren't automatically reverted by the previous revert.
* cranelift: Disable libcall tests for aarch64
* cranelift: Runtest fibonacci tests
* cranelift: Misc cleanup
This change is a follow-on from #4515 to add the ability to configure
the `differential` fuzz target by limiting which engines and modules are
used for fuzzing. This is incredibly useful when troubleshooting, e.g.,
when an engine is more prone to failure, we can target that engine
exclusively. The effect of this configuration is visible in the
statistics now printed out from #4739.
Engines are configured using the `ALLOWED_ENGINES` environment variable.
We can either subtract from the set of allowed engines (e.g.,
`ALLOWED_ENGINES=-v8`) or build up a set of allowed engines (e.g.,
`ALLOWED_ENGINES=wasmi,spec`), but not both at the same time.
`ALLOWED_ENGINES` only configures the left-hand side engine; the
right-hand side is always Wasmtime. When omitted, `ALLOWED_ENGINES`
defaults to [`wasmtime`, `wasmi`, `spec`, `v8`].
The generated WebAssembly modules are configured using
`ALLOWED_MODULES`. This environment variable works the same way as above,
but the available options are: [`wasm-smith`, `single-inst`].
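A minimal sketch (a hypothetical helper, not the fuzzer's actual parser) of the subtract-or-allow semantics described above:
```rust
/// Parse an `ALLOWED_ENGINES`-style value: either a deny-list where every
/// entry is prefixed with `-`, or an explicit allow-list, but never a mix.
fn allowed_engines(raw: &str, default: &[&str]) -> Vec<String> {
    let entries: Vec<&str> = raw.split(',').filter(|s| !s.is_empty()).collect();
    let any_deny = entries.iter().any(|e| e.starts_with('-'));
    let all_deny = !entries.is_empty() && entries.iter().all(|e| e.starts_with('-'));
    assert!(all_deny || !any_deny, "cannot mix `-engine` and plain `engine` entries");

    if all_deny {
        // Subtract the denied engines from the default set.
        default
            .iter()
            .filter(|d| !entries.iter().any(|e| &e[1..] == **d))
            .map(|d| d.to_string())
            .collect()
    } else if entries.is_empty() {
        // No configuration: fall back to the default set.
        default.iter().map(|d| d.to_string()).collect()
    } else {
        // An explicit allow-list replaces the default set.
        entries.iter().map(|e| e.to_string()).collect()
    }
}

fn main() {
    let defaults = ["wasmtime", "wasmi", "spec", "v8"];
    assert_eq!(allowed_engines("-v8", &defaults), ["wasmtime", "wasmi", "spec"]);
    assert_eq!(allowed_engines("wasmi,spec", &defaults), ["wasmi", "spec"]);
}
```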
* Port v8 fuzzer to the new framework
This commit aims to improve the support for the new "meta" differential
fuzzer added in #4515 by ensuring that all existing differential fuzzing
is migrated to this new fuzzer. This PR includes features such as:
* The V8 differential execution is migrated to the new framework.
* `Config::set_differential_config` no longer force-disables wasm
features, instead allowing them to be enabled as per the fuzz input.
* `DiffInstance::{hash, hash}` was replaced with
`DiffInstance::get_{memory,global}` to allow more fine-grained
assertions.
* Support for `FuncRef` and `ExternRef` have been added to `DiffValue`
and `DiffValueType`. For now though generating an arbitrary
`ExternRef` and `FuncRef` simply generates a null value.
* Arbitrary `DiffValue::{F32,F64}` values are guaranteed to use
canonical NaN representations to fix an issue with v8 where
we can't communicate non-canonical NaN values through JS.
* `DiffEngine::evaluate` allows "successful failure" for cases where
engines can't support that particular invocation, for example v8 can't
support `v128` arguments or return values.
* Smoke tests were added for each engine to ensure that a simple wasm
module works at PR-time.
* Statistics printed from the main fuzzer now include percentage-rates
for chosen engines as well as percentage rates for styles-of-module.
There's also a few small refactorings here and there but mostly just
things I saw along the way.
* Update the fuzzing README
* [fuzz] Remove the `differential` fuzz target
This functionality is already covered by the `differential_meta` target.
* [fuzz] Rename `differential_meta` to `differential`
Now that the `differential_meta` fuzz target does everything that the
existing `differential` target did and more, it can take over the
original name.
* [fuzz] Remove some differential fuzz targets
The changes in #4515 do everything the `differential_spec` and
`differential_wasmi` fuzz targets already do. These fuzz targets are now
redundant and this PR removes them. It also updates the fuzz
documentation slightly.
* [fuzz] Add `Module` enum, refactor `ModuleConfig`
This change adds a way to create either a single-instruction module or a
regular (big) `wasm-smith` module. It has some slight refactorings in
preparation for the use of this new code.
* [fuzz] Add `DiffValue` for differential evaluation
In order to evaluate functions with randomly-generated values, we needed
a common way to generate these values. Using the Wasmtime `Val` type is
not great because we would like to be able to implement various traits
on the new value type, e.g., to convert `Into` and `From` boxed values
of other engines we differentially fuzz against. This new type,
`DiffValue`, gives us a common ground for all the conversions and
comparisons between the other engine types.
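A minimal sketch of the idea (simplified, hypothetical variants; the real type also grows `V128`, `FuncRef`, and `ExternRef` later in this series):
```rust
/// An engine-agnostic value used for differential evaluation.
#[derive(Debug, Clone, PartialEq)]
pub enum DiffValue {
    I32(i32),
    I64(i64),
    F32(u32), // stored as raw bits so NaN payloads survive the round trip
    F64(u64),
}

// Conversion into one concrete engine's value type, e.g. Wasmtime's `Val`.
impl From<DiffValue> for wasmtime::Val {
    fn from(v: DiffValue) -> wasmtime::Val {
        match v {
            DiffValue::I32(x) => wasmtime::Val::I32(x),
            DiffValue::I64(x) => wasmtime::Val::I64(x),
            DiffValue::F32(bits) => wasmtime::Val::F32(bits),
            DiffValue::F64(bits) => wasmtime::Val::F64(bits),
        }
    }
}
```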
* [fuzz] Add interface for differential engines
In order to randomly choose an engine to fuzz against, we expect all of
the engines to meet a common interface. The traits in this commit allow
us to instantiate a module from its binary form, evaluate exported
functions, and (possibly) hash the exported items of the instance.
This change has some missing pieces, though:
- the `wasm-spec-interpreter` needs some work to be able to create
instances, evaluate a function by name, and expose exported items
- the `v8` engine is not implemented yet due to the complexity of its
Rust lifetimes
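Roughly, the interface looks something like this (hypothetical signatures reusing the `DiffValue` sketch above, not the exact traits as merged):
```rust
use anyhow::Result;

pub trait DiffEngine {
    /// Instantiate a Wasm module from its binary encoding.
    fn instantiate(&mut self, wasm: &[u8]) -> Result<Box<dyn DiffInstance>>;
}

pub trait DiffInstance {
    /// Evaluate an exported function with the given arguments.
    fn evaluate(&mut self, function: &str, arguments: &[DiffValue]) -> Result<Vec<DiffValue>>;

    /// Hash the exported items of the instance, if the engine supports
    /// observing them; returns `false` when hashing isn't supported.
    fn hash(&mut self, state: &mut dyn std::hash::Hasher) -> Result<bool>;
}
```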
* [fuzz] Use `ModuleFeatures` instead of existing configuration
When attempting to use both wasm-smith and single-instruction modules,
there is a mismatch in how we communicate what an engine must be able to
support. In the first case, we could use the `ModuleConfig`, a wrapper
for wasm-smith's `SwarmConfig`, but single-instruction modules do not
have a `SwarmConfig`--the many options simply don't apply. Here, we
instead add `ModuleFeatures` and adapt a `ModuleConfig` to that.
`ModuleFeatures` then becomes the way to communicate what features an
engine must support to evaluate functions in a module.
* [fuzz] Add a new fuzz target using the meta-differential oracle
This change adds the `differential_meta` target to the list of fuzz
targets. I expect that sometime soon this could replace the other
`differential*` targets, as it checks almost all the things those check.
The major missing piece is that currently it only chooses
single-instruction modules instead of also generating arbitrary modules
using `wasm-smith`.
Also, this change adds the concept of an ignorable error: some
differential engines will choke with certain inputs (e.g., `wasmi` might
have an old opcode mapping) which we do not want to flag as fuzz bugs.
Here we wrap those errors in `DiffIgnoreError` and then use a new helper
trait, `DiffIgnorable`, to downcast and inspect the `anyhow` error to
only panic on non-ignorable errors; the ignorable errors are converted
to one of the `arbitrary::Error` variants, which we already ignore.
* [fuzz] Compare `DiffValue` NaNs more leniently
Because arithmetic NaNs can contain arbitrary payload bits, checking
that two differential executions should produce the same result should
relax the comparison of the `F32` and `F64` types (and eventually `V128`
as well... TODO). This change adds several considerations, however, so
that in the future we make the comparison a bit stricter, e.g., re:
canonical NaNs. This change, however, just matches the current logic
used by other fuzz targets.
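Concretely, the lenient comparison boils down to something like this (a minimal sketch over raw bits):
```rust
// Exact bit equality, except any two NaNs compare equal regardless of their
// payload bits.
fn f32_lenient_eq(a: u32, b: u32) -> bool {
    a == b || (f32::from_bits(a).is_nan() && f32::from_bits(b).is_nan())
}

fn f64_lenient_eq(a: u64, b: u64) -> bool {
    a == b || (f64::from_bits(a).is_nan() && f64::from_bits(b).is_nan())
}
```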
* review: allow hashing to mutate the instance state
@alexcrichton requested that the interface be adapted to accommodate
Wasmtime's API, in which even reading from an instance could trigger
mutation of the store.
* review: refactor where configurations are made compatible
See @alexcrichton's
[suggestion](https://github.com/bytecodealliance/wasmtime/pull/4515#discussion_r928974376).
* review: convert `DiffValueType` using `TryFrom`
See @alexcrichton's
[comment](https://github.com/bytecodealliance/wasmtime/pull/4515#discussion_r928962394).
* review: adapt target implementation to Wasmtime-specific RHS
This change is joint work with @alexcrichton to adapt the structure of
the fuzz target to his comments
[here](https://github.com/bytecodealliance/wasmtime/pull/4515#pullrequestreview-1073247791).
This change:
- removes `ModuleFeatures` and the `Module` enum (for big and small
modules)
- upgrades `SingleInstModule` to filter out cases that are not valid for
a given `ModuleConfig`
- adds `DiffEngine::name()`
- constructs each `DiffEngine` using a `ModuleConfig`, eliminating
`DiffIgnoreError` completely
- prints an execution rate to the `differential_meta` target
Still TODO:
- `get_exported_function_signatures` could be re-written in terms of the
Wasmtime API instead of `wasmparser`
- the fuzzer crashes eventually, we think due to the signal handler
interference between OCaml and Wasmtime
- the spec interpreter has several cases that we skip for now but could
be fuzzed with further work
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
* fix: avoid SIGSEGV by explicitly initializing OCaml runtime first
* review: use Wasmtime's API to retrieve exported functions
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
* Upgrade wasm-tools crates, namely the component model
This commit pulls in the latest versions of all of the `wasm-tools`
family of crates. There were two major changes that happened in
`wasm-tools` in the meantime:
* bytecodealliance/wasm-tools#697 - this commit introduced a new API for
more efficiently reading binary operators from a wasm binary. The old
`Operator`-based reading was left in place, however, and continues to
be what Wasmtime uses. I hope to update Wasmtime in a future PR to use
this new API, but for now the biggest change is...
* bytecodealliance/wasm-tools#703 - this commit was a major update to
the component model AST. This commit almost entirely deals with the
fallout of this change.
The changes made to the component model were:
1. The `unit` type no longer exists. This was generally a simple change
where the `Unit` cases in a few different locations were all removed.
2. The `expected` type was renamed to `result`. This similarly was
relatively lightweight and mostly just a renaming on the surface. I
took this opportunity to rename `val::Result` to `val::ResultVal` and
`types::Result` to `types::ResultType` to avoid clashing with the
standard library types. The `Option`-based types were handled with
this as well.
3. The payload type of `variant` and `result` types is now optional.
This affected many locations that calculate flat type
representations, ABI information, etc. The `#[derive(ComponentType)]`
macro now specifically maps Rust-defined `enum` types, which have
no payload, to the equivalent in the component model.
4. Functions can now return multiple results. This changed the
signature of invoking component functions because the return value is
now bound by `ComponentNamedList` (renamed from `ComponentParams`).
This had a large effect in the tests, fuzz test case generation, etc.
5. Function types with 2-or-more parameters/results must uniquely name
all parameters/results. This mostly affected the text format used
throughout the tests.
I haven't specifically added new tests for multi-return but I changed a
number of tests to use it. Additionally I've updated the fuzzers to all
exercise multi-return as well so I think we should get some good
coverage with that.
* Update version numbers
* Use crates.io
This is the implementation of https://github.com/bytecodealliance/wasmtime/issues/4155, using the "inverted API" approach suggested by @cfallin (thanks!) in Cranelift, and a trait object to provide a backend for an all-included experience in Wasmtime.
Following Chris's suggestion, `Function` has been split into two main parts:
- on the one hand, `FunctionStencil` contains all the fields required during compilation and acts as a compilation cache key: if two function stencils are the same, then the result of their compilation (`CompiledCodeBase<Stencil>`) will be the same. This makes caching trivial, as the only thing to cache is the `FunctionStencil`.
- on the other hand, `FunctionParameters` contains the... function parameters that are required to finalize the result of compilation into a `CompiledCode` (aka `CompiledCodeBase<Final>`) with proper final relocations etc., by applying fixups and so on.
Most changes are here to accommodate those requirements, in particular that `FunctionStencil` should be `Hash`able to be used as a key in the cache:
- most source locations are now relative to a base source location in the function, and as such they're encoded as `RelSourceLoc` in the `FunctionStencil`. This required changes so that there's no need to explicitly mark a `SourceLoc` as the base source location; it's automatically detected the first time a non-default `SourceLoc` is set.
- user-defined external names in the `FunctionStencil` (aka before this patch `ExternalName::User { namespace, index }`) are now references into an external table of `UserExternalNameRef -> UserExternalName`, present in the `FunctionParameters`, and must be explicitly declared using `Function::declare_imported_user_function`.
- some refactorings have been made for function names:
- `ExternalName` was used as the type for a `Function`'s name; while it thus allowed `ExternalName::Libcall` in this place, it would have been quite confusing to use it there. Instead, a new enum `UserFuncName` is introduced for this name, which is either a user-defined function name (the above `UserExternalName`) or a test case name.
- The future of `ExternalName` is likely to become a full reference into the `FunctionParameters`'s mapping, instead of being "either a handle for user-defined external names, or the thing itself for other variants". I'm running out of time to do this, and this is not trivial as it implies touching ISLE which I'm less familiar with.
The cache computes a sha256 hash of the `FunctionStencil`, and uses this as the cache key. No equality check (using `PartialEq`) is performed in addition to the hash being the same, as we hope that this is sufficient data to avoid collisions.
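A minimal sketch of the caching scheme (using std's hasher as a stand-in for the SHA-256 digest the real cache computes, and with the stencil fields elided):
```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

#[derive(Hash)]
struct FunctionStencil { /* IR, signature, relative source locations, ... */ }
struct CompiledCodeStencil { /* machine code with unresolved fixups */ }

struct Cache {
    entries: HashMap<u64, CompiledCodeStencil>,
}

impl Cache {
    // The key is derived from the stencil alone: everything that affects
    // compilation lives there, so equal stencils compile to the same code.
    fn key(stencil: &FunctionStencil) -> u64 {
        let mut h = DefaultHasher::new();
        stencil.hash(&mut h);
        h.finish()
    }

    fn get_or_compile(
        &mut self,
        stencil: &FunctionStencil,
        compile: impl FnOnce(&FunctionStencil) -> CompiledCodeStencil,
    ) -> &CompiledCodeStencil {
        self.entries
            .entry(Self::key(stencil))
            .or_insert_with(|| compile(stencil))
    }
}
```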
A basic fuzz target has been introduced that tries to do the bare minimum:
- check that a function successfully compiled and cached will also be successfully reloaded from the cache, and returns the exact same function.
- check that a trivial modification in the external mapping of `UserExternalNameRef -> UserExternalName` hits the cache, and that other modifications don't hit the cache.
- This last check is less efficient and less likely to happen, so probably should be rethought a bit.
Thanks to both @alexcrichton and @cfallin for your very useful feedback on Zulip.
Some numbers show that for a large wasm module we're using internally, this is a 20% compile-time speedup, because so many `FunctionStencil`s are the same, even within a single module. For a group of modules that have a lot of code in common, we get hit rates up to 70% when they're used together. When a single function changes in a wasm module, every other function is reloaded; that's still slower than I expect (between 10% and 50% of the overall compile time), so there's likely room for improvement.
Fixes #4155.
* cranelift: Use JIT in runtests
Using `cranelift-jit` in run tests allows us to perform relocations and
libcalls. This is important since some instruction lowerings fall back
to libcalls when an extension is missing, or when they're too complicated
to implement manually.
This is also a first step to being able to test `call`s between functions
in the runtest suite. It should also make it easier to eventually test
TLS relocations, symbol resolution and ABIs.
Another benefit of this is that we also get to test the JIT more, since
it now runs the runtests, and gets some fuzzing via `fuzzgen` (which
uses the `SingleFunctionCompiler`).
This change causes regressions in terms of runtime for the filetests.
I haven't done any serious benchmarking, but what I've been seeing is
that it now takes about 3 seconds to run the testsuite while it
previously took around 2 seconds.
* Add FMA tests for X86
* Improve the `component_api` fuzzer on a few dimensions
* Update the generated component to use an adapter module. This involves
two core wasm instances communicating with each other to test that
data flows through everything correctly. The intention here is to fuzz
the fused adapter compiler. String encoding options have been plumbed
here to exercise differences in string encodings.
* Use `Cow<'static, ...>` and `static` declarations for each static test
case to try to cut down on rustc codegen time.
* Add `Copy` to derivation of fuzzed enums to make `derive(Clone)`
smaller.
* Use `Store<Box<dyn Any>>` to try to cut down on codegen by
monomorphizing fewer `Store<T>` implementations.
* Add debug logging to print out what's flowing in and what's flowing
out for debugging failures.
* Improve `Debug` representation of dynamic value types to more closely
match their Rust counterparts.
* Fix a variant issue with adapter trampolines
Previously the offset of the payload was calculated as the discriminant
aligned up to the alignment of a single case, but instead it needs
to be aligned up to the maximum alignment of all cases to ensure all cases start
at the same location.
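In other words, the corrected computation looks roughly like this:
```rust
fn align_to(offset: u32, align: u32) -> u32 {
    (offset + align - 1) / align * align
}

// The payload must start after the discriminant aligned up to the *maximum*
// alignment across all cases, so every case's payload begins at the same
// offset (rather than aligning to the single case being lowered).
fn payload_offset(discriminant_size: u32, case_alignments: &[u32]) -> u32 {
    let max_align = case_alignments.iter().copied().max().unwrap_or(1);
    align_to(discriminant_size, max_align)
}
```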
* Fix a copy/paste error when copying masked integers
A 32-bit load was actually doing a 16-bit load by accident since it was
copied from the 16-bit load-and-mask case.
* Fix f32/i64 conversions in adapter modules
The adapter previously erroneously converted the f32 to f64 and then to
i64, where instead it should go from f32 to i32 to i64.
* Fix zero-sized flags in adapter modules
This commit corrects the size calculation for zero-sized flags in
adapter modules.
cc #4592
* Fix a variant size calculation bug in adapters
This fixes the same issue found with variants during normal host-side
fuzzing earlier, where the size of a variant needs to align up the
sum of the discriminant and the maximum case size.
* Implement memory growth in libc bump realloc
Some fuzz-generated test cases are copying lists large enough to exceed
one page of memory so bake in a `memory.grow` to the bump allocator as
well.
* Avoid adapters of exponential size
This commit is an attempt to avoid adapters being exponentially sized
with respect to the type hierarchy of the input. Previously all
adaptation was done inline within each adapter which meant that if
something was structured as `tuple<T, T, T, T, ...>` the translation of
`T` would be inlined N times. For very deeply nested types this can
quickly create an exponentially sized adapter with types of the form:
(type $t0 (list u8))
(type $t1 (tuple $t0 $t0))
(type $t2 (tuple $t1 $t1))
(type $t3 (tuple $t2 $t2))
;; ...
where the translation of `t4` has 8 different copies of translating
`t0`.
This commit changes the translation of types through memory to almost
always go through a helper function. The hope here is that it doesn't
lose too much performance because types already reside in memory.
This can still lead to exponentially sized adapter modules to a lesser
degree if the translation all happens on the "stack", e.g. via
`variant`s and their flat representation, since many copies of one
translation could still be made. For now this commit at least gets the
problem under control for fuzzing, so that fuzzing doesn't trivially find
type hierarchies that take over a minute to codegen the adapter module.
One of the main tricky parts of this implementation is that when a
function is generated the index that it will be placed at in the final
module is not known at that time. To solve this the encoded form of the
`Call` instruction is saved in a relocation-style format where the
`Call` isn't encoded but instead saved into a different area for
encoding later. When the entire adapter module is encoded to wasm these
pseudo-`Call` instructions are encoded as real instructions at that
time.
* Fix some memory64 issues with string encodings
Just before #4623 I had introduced a few mistakes related to 64-bit
memories and mixing 32/64-bit memories.
* Actually insert into the `translate_mem_funcs` map
This... was the whole point of having the map!
* Assert memory growth succeeds in bump allocator
This addresses #4307.
For the static API we generate 100 arbitrary test cases at build time, each of
which includes 0-5 parameter types, a result type, and a WAT fragment containing
an imported function and an exported function. The exported function calls the
imported function, which is implemented by the host. At runtime, the fuzz test
selects a test case at random and feeds it zero or more sets of arbitrary
parameters and results, checking that values which flow host-to-guest and
guest-to-host make the transition unchanged.
The fuzz test for the dynamic API follows a similar pattern, the only difference
being that test cases are generated at runtime.
Signed-off-by: Joel Dice <joel.dice@fermyon.com>
* Always preserve frame pointers in Wasmtime
This allows us to efficiently and simply capture Wasm stacks without maintaining
and synchronizing any safety-critical side tables between the compiler and the
runtime.
* wasmtime: Implement fast Wasm stack walking
Why do we want Wasm stack walking to be fast? Because we capture stacks whenever
there is a trap and traps actually happen fairly frequently with short-lived
programs and WASI's `exit`.
Previously, we would rely on generating the system unwind info (e.g.
`.eh_frame`) and using the system unwinder (via the `backtrace` crate) to walk
the full stack and filter out any non-Wasm stack frames. This can,
unfortunately, be slow for two primary reasons:
1. The system unwinder is doing `O(all-kinds-of-frames)` work rather than
`O(wasm-frames)` work.
2. System unwind info and the system unwinder need to be much more general than
a purpose-built stack walker for Wasm needs to be. It has to handle any kind of
stack frame that any compiler might emit, whereas our Wasm frames are emitted by
Cranelift and always have frame pointers. This translates into implementation
complexity and general overhead. There can also be unnecessary-for-our-use-cases
global synchronization and locks involved, further slowing down stack walking in
the presence of multiple threads trying to capture stacks in parallel.
This commit introduces a purpose-built stack walker for traversing just our Wasm
frames. To find all the sequences of Wasm-to-Wasm stack frames, and ignore
non-Wasm stack frames, we keep a linked list of `(entry stack pointer, exit
frame pointer)` pairs. This linked list is maintained via Wasm-to-host and
host-to-Wasm trampolines. Within a sequence of Wasm-to-Wasm calls, we can use
frame pointers (which Cranelift preserves) to find the next older Wasm frame on
the stack, and we keep doing this until we reach the entry stack pointer,
meaning that the next older frame will be a host frame.
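A minimal sketch of the core walking loop (assuming the conventional frame layout Cranelift preserves, with the saved frame pointer at `*fp` and the return address one word above it; illustrative only, not Wasmtime's actual implementation):
```rust
/// Walk one contiguous run of Wasm frames, visiting each return address and
/// stopping once the frame pointer reaches the recorded entry stack pointer
/// (i.e. the point where the host called into Wasm).
unsafe fn walk_wasm_frames(mut fp: usize, entry_sp: usize, mut visit: impl FnMut(usize)) {
    let word = core::mem::size_of::<usize>();
    while fp != 0 && fp < entry_sp {
        // The return address of the current frame sits one word above the saved fp.
        let pc = unsafe { *((fp + word) as *const usize) };
        visit(pc);
        // The saved frame pointer of the next older Wasm frame lives at *fp.
        fp = unsafe { *(fp as *const usize) };
    }
}
```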
The trampolines need to avoid a couple stumbling blocks. First, they need to be
compiled ahead of time, since we may not have access to a compiler at
runtime (e.g. if the `cranelift` feature is disabled) but still want to be able
to call functions that have already been compiled and get stack traces for those
functions. Usually this means we would compile the appropriate trampolines
inside `Module::new` and the compiled module object would hold the
trampolines. However, we *also* need to support calling host functions that are
wrapped into `wasmtime::Func`s and there doesn't exist *any* ahead-of-time
compiled module object to hold the appropriate trampolines:
```rust
// Define a host function.
let func_type = wasmtime::FuncType::new(
    vec![wasmtime::ValType::I32],
    vec![wasmtime::ValType::I32],
);
let func = Func::new(&mut store, func_type, |_, params, results| {
    // ...
    Ok(())
});
// Call that host function.
let mut results = vec![wasmtime::Val::I32(0)];
func.call(&mut store, &[wasmtime::Val::I32(0)], &mut results)?;
```
Therefore, we define one host-to-Wasm trampoline and one Wasm-to-host trampoline
in assembly that work for all Wasm and host function signatures. These
trampolines are careful to only use volatile registers, avoid touching any
register that is an argument in the calling convention ABI, and tail call to the
target callee function. This allows forwarding any set of arguments and any
returns to and from the callee, while also allowing us to maintain our linked
list of Wasm stack and frame pointers before transferring control to the
callee. These trampolines are not used in Wasm-to-Wasm calls, only when crossing
the host-Wasm boundary, so they do not impose overhead on regular calls. (And if
using one trampoline for all host-Wasm boundary crossing ever breaks branch
prediction enough in the CPU to become any kind of bottleneck, we can do fun
things like have multiple copies of the same trampoline and choose a random copy
for each function, sharding the functions across branch predictor entries.)
Finally, this commit also ends the use of a synthetic `Module` and allocating a
stubbed out `VMContext` for host functions. Instead, we define a
`VMHostFuncContext` with its own magic value, similar to `VMComponentContext`,
specifically for host functions.
<h2>Benchmarks</h2>
<h3>Traps and Stack Traces</h3>
Large improvements to taking stack traces on traps, ranging from shaving off 64%
to 99.95% of the time it used to take.
<details>
```
multi-threaded-traps/0 time: [2.5686 us 2.5808 us 2.5934 us]
thrpt: [0.0000 elem/s 0.0000 elem/s 0.0000 elem/s]
change:
time: [-85.419% -85.153% -84.869%] (p = 0.00 < 0.05)
thrpt: [+560.90% +573.56% +585.84%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
4 (4.00%) high mild
4 (4.00%) high severe
multi-threaded-traps/1 time: [2.9021 us 2.9167 us 2.9322 us]
thrpt: [341.04 Kelem/s 342.86 Kelem/s 344.58 Kelem/s]
change:
time: [-91.455% -91.294% -91.096%] (p = 0.00 < 0.05)
thrpt: [+1023.1% +1048.6% +1070.3%]
Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
1 (1.00%) high mild
5 (5.00%) high severe
multi-threaded-traps/2 time: [2.9996 us 3.0145 us 3.0295 us]
thrpt: [660.18 Kelem/s 663.47 Kelem/s 666.76 Kelem/s]
change:
time: [-94.040% -93.910% -93.762%] (p = 0.00 < 0.05)
thrpt: [+1503.1% +1542.0% +1578.0%]
Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
5 (5.00%) high severe
multi-threaded-traps/4 time: [5.5768 us 5.6052 us 5.6364 us]
thrpt: [709.68 Kelem/s 713.63 Kelem/s 717.25 Kelem/s]
change:
time: [-93.193% -93.121% -93.052%] (p = 0.00 < 0.05)
thrpt: [+1339.2% +1353.6% +1369.1%]
Performance has improved.
multi-threaded-traps/8 time: [8.6408 us 9.1212 us 9.5438 us]
thrpt: [838.24 Kelem/s 877.08 Kelem/s 925.84 Kelem/s]
change:
time: [-94.754% -94.473% -94.202%] (p = 0.00 < 0.05)
thrpt: [+1624.7% +1709.2% +1806.1%]
Performance has improved.
multi-threaded-traps/16 time: [10.152 us 10.840 us 11.545 us]
thrpt: [1.3858 Melem/s 1.4760 Melem/s 1.5761 Melem/s]
change:
time: [-97.042% -96.823% -96.577%] (p = 0.00 < 0.05)
thrpt: [+2821.5% +3048.1% +3281.1%]
Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
many-modules-registered-traps/1
time: [2.6278 us 2.6361 us 2.6447 us]
thrpt: [378.11 Kelem/s 379.35 Kelem/s 380.55 Kelem/s]
change:
time: [-85.311% -85.108% -84.909%] (p = 0.00 < 0.05)
thrpt: [+562.65% +571.51% +580.76%]
Performance has improved.
Found 9 outliers among 100 measurements (9.00%)
3 (3.00%) high mild
6 (6.00%) high severe
many-modules-registered-traps/8
time: [2.6294 us 2.6460 us 2.6623 us]
thrpt: [3.0049 Melem/s 3.0235 Melem/s 3.0425 Melem/s]
change:
time: [-85.895% -85.485% -85.022%] (p = 0.00 < 0.05)
thrpt: [+567.63% +588.95% +608.95%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
3 (3.00%) high mild
5 (5.00%) high severe
many-modules-registered-traps/64
time: [2.6218 us 2.6329 us 2.6452 us]
thrpt: [24.195 Melem/s 24.308 Melem/s 24.411 Melem/s]
change:
time: [-93.629% -93.551% -93.470%] (p = 0.00 < 0.05)
thrpt: [+1431.4% +1450.6% +1469.5%]
Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
many-modules-registered-traps/512
time: [2.6569 us 2.6737 us 2.6923 us]
thrpt: [190.17 Melem/s 191.50 Melem/s 192.71 Melem/s]
change:
time: [-99.277% -99.268% -99.260%] (p = 0.00 < 0.05)
thrpt: [+13417% +13566% +13731%]
Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
4 (4.00%) high mild
many-modules-registered-traps/4096
time: [2.7258 us 2.7390 us 2.7535 us]
thrpt: [1.4876 Gelem/s 1.4955 Gelem/s 1.5027 Gelem/s]
change:
time: [-99.956% -99.955% -99.955%] (p = 0.00 < 0.05)
thrpt: [+221417% +223380% +224881%]
Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
many-stack-frames-traps/1
time: [1.4658 us 1.4719 us 1.4784 us]
thrpt: [676.39 Kelem/s 679.38 Kelem/s 682.21 Kelem/s]
change:
time: [-90.368% -89.947% -89.586%] (p = 0.00 < 0.05)
thrpt: [+860.23% +894.72% +938.21%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
5 (5.00%) high mild
3 (3.00%) high severe
many-stack-frames-traps/8
time: [2.4772 us 2.4870 us 2.4973 us]
thrpt: [3.2034 Melem/s 3.2167 Melem/s 3.2294 Melem/s]
change:
time: [-85.550% -85.370% -85.199%] (p = 0.00 < 0.05)
thrpt: [+575.65% +583.51% +592.03%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
4 (4.00%) high mild
4 (4.00%) high severe
many-stack-frames-traps/64
time: [10.109 us 10.171 us 10.236 us]
thrpt: [6.2525 Melem/s 6.2925 Melem/s 6.3309 Melem/s]
change:
time: [-78.144% -77.797% -77.336%] (p = 0.00 < 0.05)
thrpt: [+341.22% +350.38% +357.55%]
Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
5 (5.00%) high mild
2 (2.00%) high severe
many-stack-frames-traps/512
time: [126.16 us 126.54 us 126.96 us]
thrpt: [4.0329 Melem/s 4.0461 Melem/s 4.0583 Melem/s]
change:
time: [-65.364% -64.933% -64.453%] (p = 0.00 < 0.05)
thrpt: [+181.32% +185.17% +188.71%]
Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
4 (4.00%) high severe
```
</details>
<h3>Calls</h3>
There is, however, a small regression in raw Wasm-to-host and host-to-Wasm call
performance due to the new trampolines. It seems to be on the order of about 2-10
nanoseconds per call, depending on the benchmark.
I believe this regression is ultimately acceptable because
1. this overhead will be vastly dominated by whatever work a non-nop callee
actually does,
2. we will need these trampolines, or something like them, when implementing the
Wasm exceptions proposal to do things like translate Wasm's exceptions into
Rust's `Result`s,
3. and because the performance improvements to trapping and capturing stack
traces are of a much larger magnitude than this call regression.
<details>
```
sync/no-hook/host-to-wasm - typed - nop
time: [28.683 ns 28.757 ns 28.844 ns]
change: [+16.472% +17.183% +17.904%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
1 (1.00%) low mild
4 (4.00%) high mild
5 (5.00%) high severe
sync/no-hook/host-to-wasm - untyped - nop
time: [42.515 ns 42.652 ns 42.841 ns]
change: [+12.371% +14.614% +17.462%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
1 (1.00%) high mild
10 (10.00%) high severe
sync/no-hook/host-to-wasm - unchecked - nop
time: [33.936 ns 34.052 ns 34.179 ns]
change: [+25.478% +26.938% +28.369%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
7 (7.00%) high mild
2 (2.00%) high severe
sync/no-hook/host-to-wasm - typed - nop-params-and-results
time: [34.290 ns 34.388 ns 34.502 ns]
change: [+40.802% +42.706% +44.526%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
5 (5.00%) high mild
8 (8.00%) high severe
sync/no-hook/host-to-wasm - untyped - nop-params-and-results
time: [62.546 ns 62.721 ns 62.919 ns]
change: [+2.5014% +3.6319% +4.8078%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
2 (2.00%) high mild
10 (10.00%) high severe
sync/no-hook/host-to-wasm - unchecked - nop-params-and-results
time: [42.609 ns 42.710 ns 42.831 ns]
change: [+20.966% +22.282% +23.475%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
4 (4.00%) high mild
7 (7.00%) high severe
sync/hook-sync/host-to-wasm - typed - nop
time: [29.546 ns 29.675 ns 29.818 ns]
change: [+20.693% +21.794% +22.836%] (p = 0.00 < 0.05)
Performance has regressed.
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
sync/hook-sync/host-to-wasm - untyped - nop
time: [45.448 ns 45.699 ns 45.961 ns]
change: [+17.204% +18.514% +19.590%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) high mild
10 (10.00%) high severe
sync/hook-sync/host-to-wasm - unchecked - nop
time: [34.334 ns 34.437 ns 34.558 ns]
change: [+23.225% +24.477% +25.886%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) high mild
7 (7.00%) high severe
sync/hook-sync/host-to-wasm - typed - nop-params-and-results
time: [36.594 ns 36.763 ns 36.974 ns]
change: [+41.967% +47.261% +52.086%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
3 (3.00%) high mild
9 (9.00%) high severe
sync/hook-sync/host-to-wasm - untyped - nop-params-and-results
time: [63.541 ns 63.831 ns 64.194 ns]
change: [-4.4337% -0.6855% +2.7134%] (p = 0.73 > 0.05)
No change in performance detected.
Found 8 outliers among 100 measurements (8.00%)
6 (6.00%) high mild
2 (2.00%) high severe
sync/hook-sync/host-to-wasm - unchecked - nop-params-and-results
time: [43.968 ns 44.169 ns 44.437 ns]
change: [+18.772% +21.802% +24.623%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
3 (3.00%) high mild
12 (12.00%) high severe
async/no-hook/host-to-wasm - typed - nop
time: [4.9612 us 4.9743 us 4.9889 us]
change: [+9.9493% +11.911% +13.502%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
6 (6.00%) high mild
4 (4.00%) high severe
async/no-hook/host-to-wasm - untyped - nop
time: [5.0030 us 5.0211 us 5.0439 us]
change: [+10.841% +11.873% +12.977%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
3 (3.00%) high mild
7 (7.00%) high severe
async/no-hook/host-to-wasm - typed - nop-params-and-results
time: [4.9273 us 4.9468 us 4.9700 us]
change: [+4.7381% +6.8445% +8.8238%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
5 (5.00%) high mild
9 (9.00%) high severe
async/no-hook/host-to-wasm - untyped - nop-params-and-results
time: [5.1151 us 5.1338 us 5.1555 us]
change: [+9.5335% +11.290% +13.044%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) high mild
13 (13.00%) high severe
async/hook-sync/host-to-wasm - typed - nop
time: [4.9330 us 4.9394 us 4.9467 us]
change: [+10.046% +11.038% +12.035%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) high mild
7 (7.00%) high severe
async/hook-sync/host-to-wasm - untyped - nop
time: [5.0073 us 5.0183 us 5.0310 us]
change: [+9.3828% +10.565% +11.752%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
3 (3.00%) high mild
5 (5.00%) high severe
async/hook-sync/host-to-wasm - typed - nop-params-and-results
time: [4.9610 us 4.9839 us 5.0097 us]
change: [+9.0857% +11.513% +14.359%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
7 (7.00%) high mild
6 (6.00%) high severe
async/hook-sync/host-to-wasm - untyped - nop-params-and-results
time: [5.0995 us 5.1272 us 5.1617 us]
change: [+9.3600% +11.506% +13.809%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
6 (6.00%) high mild
4 (4.00%) high severe
async-pool/no-hook/host-to-wasm - typed - nop
time: [2.4242 us 2.4316 us 2.4396 us]
change: [+7.8756% +8.8803% +9.8346%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
5 (5.00%) high mild
3 (3.00%) high severe
async-pool/no-hook/host-to-wasm - untyped - nop
time: [2.5102 us 2.5155 us 2.5210 us]
change: [+12.130% +13.194% +14.270%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
4 (4.00%) high mild
8 (8.00%) high severe
async-pool/no-hook/host-to-wasm - typed - nop-params-and-results
time: [2.4203 us 2.4310 us 2.4440 us]
change: [+4.0380% +6.3623% +8.7534%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
5 (5.00%) high mild
9 (9.00%) high severe
async-pool/no-hook/host-to-wasm - untyped - nop-params-and-results
time: [2.5501 us 2.5593 us 2.5700 us]
change: [+8.8802% +10.976% +12.937%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
5 (5.00%) high mild
11 (11.00%) high severe
async-pool/hook-sync/host-to-wasm - typed - nop
time: [2.4135 us 2.4190 us 2.4254 us]
change: [+8.3640% +9.3774% +10.435%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
6 (6.00%) high mild
5 (5.00%) high severe
async-pool/hook-sync/host-to-wasm - untyped - nop
time: [2.5172 us 2.5248 us 2.5357 us]
change: [+11.543% +12.750% +13.982%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
1 (1.00%) high mild
7 (7.00%) high severe
async-pool/hook-sync/host-to-wasm - typed - nop-params-and-results
time: [2.4214 us 2.4353 us 2.4532 us]
change: [+1.5158% +5.0872% +8.6765%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
2 (2.00%) high mild
13 (13.00%) high severe
async-pool/hook-sync/host-to-wasm - untyped - nop-params-and-results
time: [2.5499 us 2.5607 us 2.5748 us]
change: [+10.146% +12.459% +14.919%] (p = 0.00 < 0.05)
Performance has regressed.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
sync/no-hook/wasm-to-host - nop - typed
time: [6.6135 ns 6.6288 ns 6.6452 ns]
change: [+37.927% +38.837% +39.869%] (p = 0.00 < 0.05)
Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
2 (2.00%) high mild
5 (5.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - typed
time: [15.930 ns 15.993 ns 16.067 ns]
change: [+3.9583% +5.6286% +7.2430%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
11 (11.00%) high mild
1 (1.00%) high severe
sync/no-hook/wasm-to-host - nop - untyped
time: [20.596 ns 20.640 ns 20.690 ns]
change: [+4.3293% +5.2047% +6.0935%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
5 (5.00%) high mild
5 (5.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - untyped
time: [42.659 ns 42.882 ns 43.159 ns]
change: [-2.1466% -0.5079% +1.2554%] (p = 0.58 > 0.05)
No change in performance detected.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) high mild
14 (14.00%) high severe
sync/no-hook/wasm-to-host - nop - unchecked
time: [10.671 ns 10.691 ns 10.713 ns]
change: [+83.911% +87.620% +92.062%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - unchecked
time: [11.136 ns 11.190 ns 11.263 ns]
change: [-29.719% -28.446% -27.029%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) high mild
10 (10.00%) high severe
sync/hook-sync/wasm-to-host - nop - typed
time: [6.7964 ns 6.8087 ns 6.8226 ns]
change: [+21.531% +24.206% +27.331%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) high mild
10 (10.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - typed
time: [15.865 ns 15.921 ns 15.985 ns]
change: [+4.8466% +6.3330% +7.8317%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) high mild
13 (13.00%) high severe
sync/hook-sync/wasm-to-host - nop - untyped
time: [21.505 ns 21.587 ns 21.677 ns]
change: [+8.0908% +9.1943% +10.254%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
4 (4.00%) high mild
4 (4.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - untyped
time: [44.018 ns 44.128 ns 44.261 ns]
change: [-1.4671% -0.0458% +1.2443%] (p = 0.94 > 0.05)
No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
5 (5.00%) high mild
9 (9.00%) high severe
sync/hook-sync/wasm-to-host - nop - unchecked
time: [11.264 ns 11.326 ns 11.387 ns]
change: [+80.225% +81.659% +83.068%] (p = 0.00 < 0.05)
Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
3 (3.00%) high mild
3 (3.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - unchecked
time: [11.816 ns 11.865 ns 11.920 ns]
change: [-29.152% -28.040% -26.957%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
8 (8.00%) high mild
6 (6.00%) high severe
async/no-hook/wasm-to-host - nop - typed
time: [6.6221 ns 6.6385 ns 6.6569 ns]
change: [+43.618% +44.755% +45.965%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
6 (6.00%) high mild
7 (7.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - typed
time: [15.884 ns 15.929 ns 15.983 ns]
change: [+3.5987% +5.2053% +6.7846%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) high mild
13 (13.00%) high severe
async/no-hook/wasm-to-host - nop - untyped
time: [20.615 ns 20.702 ns 20.821 ns]
change: [+6.9799% +8.1212% +9.2819%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
2 (2.00%) high mild
8 (8.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - untyped
time: [41.956 ns 42.207 ns 42.521 ns]
change: [-4.3057% -2.7730% -1.2428%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
3 (3.00%) high mild
11 (11.00%) high severe
async/no-hook/wasm-to-host - nop - unchecked
time: [10.440 ns 10.474 ns 10.513 ns]
change: [+83.959% +85.826% +87.541%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
5 (5.00%) high mild
6 (6.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - unchecked
time: [11.476 ns 11.512 ns 11.554 ns]
change: [-29.857% -28.383% -26.978%] (p = 0.00 < 0.05)
Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
1 (1.00%) low mild
6 (6.00%) high mild
5 (5.00%) high severe
async/no-hook/wasm-to-host - nop - async-typed
time: [26.427 ns 26.478 ns 26.532 ns]
change: [+6.5730% +7.4676% +8.3983%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - async-typed
time: [28.557 ns 28.693 ns 28.880 ns]
change: [+1.9099% +3.7332% +5.9731%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) high mild
14 (14.00%) high severe
async/hook-sync/wasm-to-host - nop - typed
time: [6.7488 ns 6.7630 ns 6.7784 ns]
change: [+19.935% +22.080% +23.683%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
4 (4.00%) high mild
5 (5.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - typed
time: [15.928 ns 16.031 ns 16.149 ns]
change: [+5.5188% +6.9567% +8.3839%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
9 (9.00%) high mild
2 (2.00%) high severe
async/hook-sync/wasm-to-host - nop - untyped
time: [21.930 ns 22.114 ns 22.296 ns]
change: [+4.6674% +7.7588% +10.375%] (p = 0.00 < 0.05)
Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - untyped
time: [42.684 ns 42.858 ns 43.081 ns]
change: [-5.2957% -3.4693% -1.6217%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
2 (2.00%) high mild
12 (12.00%) high severe
async/hook-sync/wasm-to-host - nop - unchecked
time: [11.026 ns 11.053 ns 11.086 ns]
change: [+70.751% +72.378% +73.961%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
5 (5.00%) high mild
5 (5.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - unchecked
time: [11.840 ns 11.900 ns 11.982 ns]
change: [-27.977% -26.584% -24.887%] (p = 0.00 < 0.05)
Performance has improved.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
async/hook-sync/wasm-to-host - nop - async-typed
time: [27.601 ns 27.709 ns 27.882 ns]
change: [+8.1781% +9.1102% +10.030%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
2 (2.00%) low mild
3 (3.00%) high mild
6 (6.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - async-typed
time: [28.955 ns 29.174 ns 29.413 ns]
change: [+1.1226% +3.0366% +5.1126%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
7 (7.00%) high mild
6 (6.00%) high severe
async-pool/no-hook/wasm-to-host - nop - typed
time: [6.5626 ns 6.5733 ns 6.5851 ns]
change: [+40.561% +42.307% +44.514%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
5 (5.00%) high mild
4 (4.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - typed
time: [15.820 ns 15.886 ns 15.969 ns]
change: [+4.1044% +5.7928% +7.7122%] (p = 0.00 < 0.05)
Performance has regressed.
Found 17 outliers among 100 measurements (17.00%)
4 (4.00%) high mild
13 (13.00%) high severe
async-pool/no-hook/wasm-to-host - nop - untyped
time: [20.481 ns 20.521 ns 20.566 ns]
change: [+6.7962% +7.6950% +8.7612%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
6 (6.00%) high mild
5 (5.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - untyped
time: [41.834 ns 41.998 ns 42.189 ns]
change: [-3.8185% -2.2687% -0.7541%] (p = 0.01 < 0.05)
Change within noise threshold.
Found 13 outliers among 100 measurements (13.00%)
3 (3.00%) high mild
10 (10.00%) high severe
async-pool/no-hook/wasm-to-host - nop - unchecked
time: [10.353 ns 10.380 ns 10.414 ns]
change: [+82.042% +84.591% +87.205%] (p = 0.00 < 0.05)
Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
4 (4.00%) high mild
3 (3.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - unchecked
time: [11.123 ns 11.168 ns 11.228 ns]
change: [-30.813% -29.285% -27.874%] (p = 0.00 < 0.05)
Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
11 (11.00%) high mild
1 (1.00%) high severe
async-pool/no-hook/wasm-to-host - nop - async-typed
time: [27.442 ns 27.528 ns 27.638 ns]
change: [+7.5215% +9.9795% +12.266%] (p = 0.00 < 0.05)
Performance has regressed.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - async-typed
time: [29.014 ns 29.148 ns 29.312 ns]
change: [+2.0227% +3.4722% +4.9047%] (p = 0.00 < 0.05)
Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
6 (6.00%) high mild
1 (1.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - typed
time: [6.7916 ns 6.8116 ns 6.8325 ns]
change: [+20.937% +22.050% +23.281%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
5 (5.00%) high mild
6 (6.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - typed
time: [15.917 ns 15.975 ns 16.051 ns]
change: [+4.6404% +6.4217% +8.3075%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
5 (5.00%) high mild
11 (11.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - untyped
time: [21.558 ns 21.612 ns 21.679 ns]
change: [+8.1158% +9.1409% +10.217%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - untyped
time: [42.475 ns 42.614 ns 42.775 ns]
change: [-6.3613% -4.4709% -2.7647%] (p = 0.00 < 0.05)
Performance has improved.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - unchecked
time: [11.150 ns 11.195 ns 11.247 ns]
change: [+74.424% +77.056% +79.811%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
3 (3.00%) high mild
11 (11.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - unchecked
time: [11.639 ns 11.695 ns 11.760 ns]
change: [-30.212% -29.023% -27.954%] (p = 0.00 < 0.05)
Performance has improved.
Found 15 outliers among 100 measurements (15.00%)
7 (7.00%) high mild
8 (8.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - async-typed
time: [27.480 ns 27.712 ns 27.984 ns]
change: [+2.9764% +6.5061% +9.8914%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
6 (6.00%) high mild
2 (2.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - async-typed
time: [29.218 ns 29.380 ns 29.600 ns]
change: [+5.2283% +7.7247% +10.822%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
2 (2.00%) high mild
14 (14.00%) high severe
```
</details>
* Add s390x support for frame pointer-based stack walking
* wasmtime: Allow `Caller::get_export` to get all exports
* fuzzing: Add a fuzz target to check that our stack traces are correct
We generate Wasm modules that keep track of their own stack as they call and
return between functions, and then we periodically check that if the host
captures a backtrace, it matches what the Wasm module has recorded.
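A self-contained sketch of that oracle (hypothetical names; the real check lives in the `stacks` fuzz target): the module's shadow stack and the host-captured trace must always agree, innermost frame first.
```rust
/// What the generated Wasm module records about itself as it calls and
/// returns between functions: a shadow stack of function indices.
#[derive(Default)]
struct ShadowStack {
    funcs: Vec<u32>,
}

/// Compare the frames the "host" observed against the module's own records.
/// In the fuzz target the host frames come from Wasmtime's backtrace
/// machinery; here they are just a slice passed in by the caller.
fn check(host_frames: &[u32], shadow: &ShadowStack) {
    let expected: Vec<u32> = shadow.funcs.iter().rev().copied().collect();
    assert_eq!(
        host_frames,
        expected.as_slice(),
        "host backtrace disagrees with the module's shadow stack"
    );
}

fn main() {
    let mut shadow = ShadowStack::default();
    // The module calls func 0, which calls func 2, which calls into the host.
    shadow.funcs.push(0);
    shadow.funcs.push(2);

    // Pretend the host walked the stack and saw these Wasm frames
    // (innermost first); the check passes only if both views agree.
    let host_frames = vec![2, 0];
    check(&host_frames, &shadow);
}
```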
* Remove VM offsets for `VMHostFuncContext` since it isn't used by JIT code
* Add doc comment with stack walking implementation notes
* Document the extra state that can be passed to `wasmtime_runtime::Backtrace` methods
* Add extensive comments for stack walking function
* Factor architecture-specific bits of stack walking out into modules
* Initialize store-related fields in a vmctx to null when there is no store yet
Rather than leaving them as uninitialized data.
* Use `set_callee` instead of manually setting the vmctx field
* Use a more informative compile error message for unsupported architectures
* Document unsafety of `prepare_host_to_wasm_trampoline`
* Use `bti c` instead of `hint #34` in inline aarch64 assembly
* Remove outdated TODO comment
* Remove setting of `last_wasm_exit_fp` in `set_jit_trap`
This is no longer needed as the value is plumbed through to the backtrace code
directly now.
* Only set the stack limit once, in the face of re-entrancy into Wasm
* Add comments for s390x-specific stack walking bits
* Use the helper macro for all libcalls
If we forget to use it and then trigger a GC from the libcall, we could miss
stack frames when walking the stack, fail to find live GC refs, and end up with
use-after-free bugs. It's much less risky to always use the helper macro that
takes care of all of that for us.
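As a hedged illustration of that pattern (hypothetical names and types, not the actual runtime code), the helper macro wraps every libcall body so the Wasm exit state is always recorded before the body can trigger a GC.
```rust
// Minimal sketch: every libcall goes through one macro that records the
// Wasm exit state up front, so a GC triggered inside the body can still
// walk the Wasm frames above it.

struct WasmExitState {
    last_wasm_exit_fp: usize,
    last_wasm_exit_pc: usize,
}

fn record_wasm_exit_state(_state: &mut WasmExitState) {
    // In the real runtime this saves the caller's frame pointer and return
    // address so the backtrace code knows where the Wasm frames end.
}

macro_rules! libcall {
    (fn $name:ident($state:ident: &mut WasmExitState $(, $arg:ident : $ty:ty)*) $body:block) => {
        fn $name($state: &mut WasmExitState $(, $arg: $ty)*) {
            // Always record the exit state first; forgetting this step is
            // exactly the bug the helper macro guards against.
            record_wasm_exit_state($state);
            $body
        }
    };
}

libcall! {
    fn table_grow(state: &mut WasmExitState, delta: u32) {
        // A libcall body that might allocate and therefore trigger a GC.
        let _ = delta;
    }
}

fn main() {
    let mut state = WasmExitState { last_wasm_exit_fp: 0, last_wasm_exit_pc: 0 };
    table_grow(&mut state, 1);
    let _ = (state.last_wasm_exit_fp, state.last_wasm_exit_pc);
}
```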
* Use the `asm_sym!` macro in Wasm-to-libcall trampolines
This macro handles the macOS-specific underscore prefix stuff for us.
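For illustration only (the real `asm_sym!` lives in `wasmtime-asm-macros` and is used inside `asm!` strings), the underscore-prefix decision it encapsulates looks roughly like this:
```rust
// Mach-O targets prefix C symbols with an underscore, so the macro picks
// the right spelling at compile time. This is a sketch, not the actual
// macro definition.
#[cfg(target_vendor = "apple")]
macro_rules! asm_sym {
    ($name:literal) => {
        concat!("_", $name)
    };
}

#[cfg(not(target_vendor = "apple"))]
macro_rules! asm_sym {
    ($name:literal) => {
        $name
    };
}

fn main() {
    // Expands to "_my_libcall" on Apple targets and "my_libcall" elsewhere.
    println!("{}", asm_sym!("my_libcall"));
}
```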
* wasmtime: add size and align to `externref` assertion error message
* Extend the `stacks` fuzzer to have host frames in between Wasm frames
This way we get one or more contiguous sequences of Wasm frames on the stack,
instead of exactly one.
* Add documentation for aarch64-specific backtrace helpers
* Clarify that we only support little-endian aarch64 in trampoline comment
* Use `.machine z13` in s390x assembly file
Since apparently our CI machines have pretty old assemblers that don't have
`.machine z14`. This should be fine though since these trampolines don't make
use of anything that is introduced in z14.
* Fix aarch64 build
* Fix macOS build
* Document the `asm_sym!` macro
* Add windows support to the `wasmtime-asm-macros` crate
* Add windows support to host<--->Wasm trampolines
* Fix trap handler build on windows
* Run `rustfmt` on s390x trampoline source file
* Temporarily disable some assertions about a trap's backtrace in the component model tests
Follow up to re-enable this and fix the associated issue:
https://github.com/bytecodealliance/wasmtime/issues/4535
* Refactor libcall definitions with less macros
This refactors the `libcall!` macro to use the
`foreach_builtin_function!` macro to define all of the trampolines.
Additionally, the macro that previously surrounded each individual libcall is no
longer necessary, which helps keep the number of macros down.
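The underlying idea is the classic callback-macro (X-macro) pattern; the following self-contained sketch (illustrative builtin names and trampoline bodies, not Wasmtime's real signatures) shows how a single `foreach_*!` list macro can drive per-entry code generation:
```rust
// A `foreach_*!` macro hands the full list of builtin signatures to a
// caller-supplied macro, which then stamps out one trampoline per entry.

macro_rules! foreach_builtin_function {
    ($mac:ident) => {
        $mac! {
            memory32_grow(delta: u64, index: u32) -> u64;
            table_grow(index: u32, delta: u32) -> u32;
        }
    };
}

macro_rules! define_trampolines {
    ($($name:ident($($arg:ident: $ty:ty),*) -> $ret:ty;)*) => {
        $(
            fn $name($($arg: $ty),*) -> $ret {
                // Real trampolines would record exit state and dispatch to
                // the runtime; here we just consume the arguments.
                $(let _ = $arg;)*
                Default::default()
            }
        )*
    };
}

foreach_builtin_function!(define_trampolines);

fn main() {
    let _ = memory32_grow(1, 0);
    let _ = table_grow(0, 1);
}
```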
* Use `VMOpaqueContext::from_vm_host_func_context` in `VMHostFuncContext::new`
* Move `backtrace` module to be submodule of `traphandlers`
This avoids making some things `pub(crate)` in `traphandlers` that really
shouldn't be.
* Fix macOS aarch64 build
* Use "i64" instead of "word" in aarch64-specific file
* Save/restore entry SP and exit FP/return pointer in the face of panicking imported host functions
Also clean up assertions surrounding our saved entry/exit registers.
* Put "typed" vs "untyped" in the same position of call benchmark names
Regardless of whether we are doing wasm-to-host or host-to-wasm.
* Fix stacks test case generator build for new `wasm-encoder`
* Fix build for s390x
* Expand libcalls in s390x asm
* Disable more parts of component tests now that backtrace assertions are a bit tighter
* Remove assertion that can maybe fail on s390x
Co-authored-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
* Allow using jump-tables multiple times (fixes #3347)
If there are multiple `br_table` instructions using the same jump table,
then `append_jump_argument` must not modify the jump table in-place.
When this function is called, we don't know if more `br_table`
instructions might be added later. So this patch conservatively assumes
that all jump tables might be reused. If Cranelift needs to add a block
argument to a block that's the target of some jump table, then the jump
table will be unconditionally cloned.
I'm not sure if having duplicated and unused jump tables will turn out
to be a compile-time performance issue. If it is, there's discussion in
issue #3347 about ways to determine that there can't be any more uses of
a jump table, so that it's safe to modify in-place.
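As a rough sketch of that conservative strategy (simplified data structures, not Cranelift's actual jump-table plumbing), cloning before retargeting keeps the original table intact for any other `br_table` that may still reference it:
```rust
use std::collections::HashMap;

type Block = u32;
type JumpTable = u32;

#[derive(Default)]
struct JumpTables {
    tables: HashMap<JumpTable, Vec<Block>>,
    next_id: JumpTable,
}

impl JumpTables {
    fn create(&mut self, targets: Vec<Block>) -> JumpTable {
        let id = self.next_id;
        self.next_id += 1;
        self.tables.insert(id, targets);
        id
    }

    /// Retarget `old_block` to `new_block`, returning a *new* jump table and
    /// leaving the original untouched for any other users.
    fn clone_with_retarget(
        &mut self,
        jt: JumpTable,
        old_block: Block,
        new_block: Block,
    ) -> JumpTable {
        let mut targets = self.tables[&jt].clone();
        for t in &mut targets {
            if *t == old_block {
                *t = new_block;
            }
        }
        self.create(targets)
    }
}

fn main() {
    let mut jts = JumpTables::default();
    let jt0 = jts.create(vec![1, 2, 1]);
    // Block 1 now needs a block argument, so branches to it are redirected
    // through a new block 5; clone the table rather than mutating jt0.
    let jt1 = jts.clone_with_retarget(jt0, 1, 5);
    assert_eq!(jts.tables[&jt0], vec![1, 2, 1]);
    assert_eq!(jts.tables[&jt1], vec![5, 2, 5]);
}
```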
* Re-enable cranelift-fuzzgen fuzz target
I've been running this fuzz target for an hour without finding new bugs.
Let's see if oss-fuzz finds anything now.
The compile step that cranelift-fuzzgen does also triggers IR
verification. So all bugs that cranelift-fuzzgen-verify could catch are
also caught by cranelift-fuzzgen. Removing redundant fuzzers lets us
spend limited fuzz-testing CPU time budgets better.
Consulting oss-fuzz, it looks like these fuzz targets are crashing 100% of the
time, which I believe is partly due to #3347. Until that's fixed, this hopes to
reclaim the time spent on oss-fuzz so that other fuzzers can make progress.
OSS-fuzz long ago discovered https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=45662
which we currently believe to be a bug in v8. I originally thought it
was going to be fixed with https://bugs.chromium.org/p/v8/issues/detail?id=12722
but that no longer appears to be the case now that the `v8` crate has
caught up and it still isn't fixed. Personally I've sort of lost my appetite for
continuing to debug these issues, so I figure it's best to just disable
reference types with v8 for now and exercise the rest of the engine, e.g. simd.
`wasm-smith` v0.11 has support for generating shared memories when the
`threads_enabled` configuration flag is set. This change turns on that
flag occasionally. This also upgrades `wasm-smith` to v0.11.1 to always
generate shared memory with a known maximum.
This commit refactors `Config` to use a separate `CompilerConfig` field instead
of operating on the `CompilerBuilder` directly, making all of its methods idempotent.
Fixes #4189
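A minimal sketch of that idempotency pattern (simplified, hypothetical types; the real `Config` methods take strongly typed arguments): settings are staged in a plain struct keyed by name, so repeated calls simply overwrite each other, and the compiler builder is only driven once at build time.
```rust
use std::collections::HashMap;

/// Staged compiler settings, recorded by name instead of being pushed
/// eagerly at a builder.
#[derive(Default)]
struct CompilerConfig {
    settings: HashMap<String, String>,
}

#[derive(Default)]
struct Config {
    compiler_config: CompilerConfig,
}

impl Config {
    /// Idempotent: calling this twice with different values keeps only the
    /// most recent one instead of appending two entries to a builder.
    fn cranelift_opt_level(&mut self, level: &str) -> &mut Self {
        self.compiler_config
            .settings
            .insert("opt_level".to_string(), level.to_string());
        self
    }

    /// Only here is the underlying compiler builder driven, exactly once
    /// per setting, no matter how often the setters above were called.
    fn build(&self) -> Vec<(String, String)> {
        self.compiler_config
            .settings
            .iter()
            .map(|(k, v)| (k.clone(), v.clone()))
            .collect()
    }
}

fn main() {
    let mut config = Config::default();
    config.cranelift_opt_level("speed");
    config.cranelift_opt_level("speed_and_size"); // overwrites, no duplicate
    let applied = config.build();
    assert_eq!(applied.len(), 1);
}
```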