119 Commits (at d3a6181939015eabd30d0c154b1cf2922e9dc4ba)

2afaac5181 Return anyhow::Error from host functions instead of Trap, redesign Trap (#5149)
* Return `anyhow::Error` from host functions instead of `Trap`. This commit refactors how errors are modeled when returned from host functions and additionally refactors how custom errors work with `Trap`. At a high level, functions in Wasmtime that previously worked with `Result<T, Trap>` now work with `Result<T>` instead, where the error is `anyhow::Error`. This includes functions such as:
  * Host-defined functions in a `Linker<T>`
  * `TypedFunc::call`
  * Host-related callbacks like call hooks

  Errors are now modeled primarily as `anyhow::Error` throughout Wasmtime. This subsequently removes the need for `Trap` to have the ability to represent all host-defined errors as it previously did. Consequently the `From` implementations for any error into a `Trap` have been removed here, and the only embedder-defined way to create a `Trap` is to use `Trap::new` with a custom string. After this commit the distinction between a `Trap` and a host error is the wasm backtrace that it contains. Previously all errors in host functions would flow through a `Trap` and get a wasm backtrace attached to them, but now this only happens if a `Trap` itself is created, meaning that arbitrary host-defined errors flowing from a host import to the other side won't get backtraces attached. Some internals of Wasmtime itself were updated or preserved to use `Trap::new` to capture a backtrace where it seemed useful, such as when fuel runs out.

  The main motivation for this commit is that it now enables hosts to thread a concrete error type from a host function all the way through to where a wasm function was invoked. Previously this could not be done since the host error was wrapped in a `Trap` that didn't provide the ability to get at the internals. A consequence of this commit is that when a host error is returned that isn't a `Trap` we'll capture a backtrace and then won't have a `Trap` to attach it to. To avoid losing that contextual information, this commit uses the `Error::context` method to attach the backtrace as contextual information so that the backtrace itself is not lost.

  This is a breaking change for likely all users of Wasmtime, but it's hoped to be a relatively minor change to work around. Most use cases can likely change `-> Result<T, Trap>` to `-> Result<T>`, and otherwise explicit creation of a `Trap` is largely no longer necessary (a sketch of the new shape follows after this list).
* Fix some doc links
* Add some tests and make a backtrace type public (#55)
  * Trap: avoid a trailing newline in the Display impl, which otherwise ends up with three newlines between the end of the backtrace and the `Caused by` in the anyhow Debug impl
  * Make `BacktraceContext` pub, and add tests showing downcasting behavior of `anyhow::Error` to traps or backtraces
* Remove now-unnecessary `Trap` downcasts in `Linker::module`
* Fix test output expectations
* Remove `Trap::i32_exit`. This commit removes special handling in the `wasmtime::Trap` type for the i32 exit code required by WASI. This is now instead modeled as a specific `I32Exit` error type in the `wasmtime-wasi` crate which is returned by the `proc_exit` hostcall. Embedders which previously tested for i32 exits now downcast to the `I32Exit` value.
* Remove the `Trap::new` constructor. This commit removes the ability to create a trap with an arbitrary error message. The purpose of this commit is to continue the prior trend of leaning into the `anyhow::Error` type instead of trying to recreate it with `Trap`. A subsequent simplification to `Trap` after this commit is that `Trap` will simply be an `enum` of trap codes with no extra information. This commit is doubly motivated by the desire to always use the new `BacktraceContext` type instead of sometimes using that and sometimes using `Trap`. Most of the changes here were around updating `Trap::new` calls to `bail!` calls instead. Tests which assert particular error messages additionally often needed to use the `{:?}` formatter instead of the `{}` formatter, because the former formats the whole `anyhow::Error` and the latter only formats the top-most error, which now contains the backtrace.
* Merge `Trap` and `TrapCode`. With prior refactorings there's no more need for `Trap` to be opaque or otherwise contain a backtrace. This commit pares down `Trap` to simply an `enum`, which was the old `TrapCode`. All various tests and such were updated to handle this. The main consequence of this commit is that all errors have a `BacktraceContext` context attached to them. This unfortunately means that the backtrace is printed before the error message or trap code, but given all the prior simplifications that seems worth it at this time.
* Rename `BacktraceContext` to `WasmBacktrace`. This feels like a better name given how this has turned out, and additionally this commit removes having both `WasmBacktrace` and `BacktraceContext`.
* Soup up documentation for errors and traps
* Fix build of the C API

Co-authored-by: Pat Hickey <pat@moreproductive.org>
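A minimal sketch of what the new error flow looks like from an embedder's perspective, assuming the post-#5149 API surface (host functions return `anyhow::Result`); the `MyHostError` type and the wasm module below are invented for illustration, not taken from the commit:

```rust
use anyhow::Result;
use wasmtime::{Engine, Linker, Module, Store};

// A concrete host error type that can now flow through Wasmtime unchanged and
// be downcast again on the other side of the wasm call.
#[derive(Debug)]
struct MyHostError(&'static str);

impl std::fmt::Display for MyHostError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "host error: {}", self.0)
    }
}

impl std::error::Error for MyHostError {}

fn main() -> Result<()> {
    let engine = Engine::default();
    let module = Module::new(
        &engine,
        r#"(module
             (import "host" "fail" (func $fail))
             (func (export "run") call $fail))"#,
    )?;

    let mut linker = Linker::new(&engine);
    // Previously this closure would have returned `Result<(), Trap>`; now it
    // returns `anyhow::Result<()>` and any concrete error type can be used.
    linker.func_wrap("host", "fail", || -> Result<()> {
        Err(MyHostError("I/O unavailable").into())
    })?;

    let mut store = Store::new(&engine, ());
    let instance = linker.instantiate(&mut store, &module)?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;

    // The concrete host error comes back out of the wasm call and can be
    // downcast, with the captured wasm backtrace attached as context.
    let err = run.call(&mut store, ()).unwrap_err();
    assert!(err.downcast_ref::<MyHostError>().is_some());
    Ok(())
}
```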

6d1bce9c64 Adjust fuel consumption to be empty when fuel is 0 (#5013)
Co-authored-by: Jamey Sharp <jsharp@fastly.com>

cdecc858b4 add riscv64 backend for cranelift. (#4271)
Add a RISC-V 64 (`riscv64`, RV64GC) backend.

Co-authored-by: yuyang <756445638@qq.com>
Co-authored-by: Chris Fallin <chris@cfallin.org>
Co-authored-by: Afonso Bordado <afonsobordado@az8.co>

10dbb19983 Various improvements to differential fuzzing (#4845)
* Improve wasmi differential fuzzer
  * Support modules with a `start` function
  * Implement trap-matching to ensure that wasmi and Wasmtime both report the same flavor of trap.
* Support differential fuzzing where no engines match. Locally I was attempting to run against just one wasm engine with `ALLOWED_ENGINES=wasmi` but the fuzzer quickly panicked because the generated test case didn't match wasmi's configuration. This commit updates engine-selection in the differential fuzzer to return `None` if no engine is applicable, throwing out the test case. This won't be hit at all with oss-fuzz-based runs but for local runs it'll be useful to have.
* Improve proposal support in differential fuzzer
  * De-prioritize unstable wasm proposals such as multi-memory and memory64 by making them more unlikely with `Unstructured::ratio`.
  * Allow fuzzing multi-table (reference types) and multi-memory by avoiding setting their maximums to 1 in `set_differential_config`.
  * Update selection of the pooling strategy to unconditionally support the selected module config rather than the other way around.
* Improve handling of traps in differential fuzzing. This commit fixes an issue found via local fuzzing where engines were reporting different results but the underlying reason was that one engine was hitting stack overflow before the other. To fix the underlying issue I updated the execution to check for stack overflow and, if hit, discard the entire fuzz test case from then on. The rationale behind this is that each engine can have unique limits for stack overflow. One test case I was looking at, for example, would stack overflow at less than 1000 frames with epoch interruption enabled but would stack overflow at more than 1000 frames with it disabled. This means that the state after the trap started to diverge and it looked like the engines produced different results. While I was at it I also improved the "function call returned a trap" case to compare traps to make sure the same trap reason popped out (see the sketch after this list).
* Fix fuzzer tests
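A hypothetical sketch of the two comparison rules described above; the `TrapKind`, `Outcome`, and `Verdict` types are invented for illustration and are not the fuzzer's real types. Stack overflow in either engine discards the test case, otherwise traps must match in kind and values must match exactly:

```rust
#[derive(Debug, PartialEq)]
enum TrapKind {
    StackOverflow,
    MemoryOutOfBounds,
    Unreachable,
    Other,
}

#[derive(Debug, PartialEq)]
enum Outcome {
    Values(Vec<i64>),
    Trap(TrapKind),
}

enum Verdict {
    Matches,
    /// Engines legitimately differ in stack limits, so any state after a
    /// stack overflow may diverge and the rest of the test case is dropped.
    DiscardTestCase,
    Mismatch,
}

fn compare(lhs: &Outcome, rhs: &Outcome) -> Verdict {
    use Outcome::*;
    match (lhs, rhs) {
        (Trap(TrapKind::StackOverflow), _) | (_, Trap(TrapKind::StackOverflow)) => {
            Verdict::DiscardTestCase
        }
        (Values(a), Values(b)) if a == b => Verdict::Matches,
        (Trap(a), Trap(b)) if a == b => Verdict::Matches,
        _ => Verdict::Mismatch,
    }
}

fn main() {
    let overflow = Outcome::Trap(TrapKind::StackOverflow);
    let ok = Outcome::Values(vec![42]);
    assert!(matches!(compare(&overflow, &ok), Verdict::DiscardTestCase));
    assert!(matches!(
        compare(&Outcome::Trap(TrapKind::Unreachable), &Outcome::Trap(TrapKind::Unreachable)),
        Verdict::Matches
    ));
    assert!(matches!(compare(&ok, &Outcome::Values(vec![0])), Verdict::Mismatch));
}
```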

fd98814b96 Port v8 fuzzer to the new framework (#4739)
* Port v8 fuzzer to the new framework. This commit aims to improve the support for the new "meta" differential fuzzer added in #4515 by ensuring that all existing differential fuzzing is migrated to this new fuzzer. This PR includes features such as:
  * The V8 differential execution is migrated to the new framework.
  * `Config::set_differential_config` no longer force-disables wasm features, instead allowing them to be enabled as per the fuzz input.
  * `DiffInstance::{hash, hash}` was replaced with `DiffInstance::get_{memory,global}` to allow more fine-grained assertions.
  * Support for `FuncRef` and `ExternRef` has been added to `DiffValue` and `DiffValueType`. For now though, generating an arbitrary `ExternRef` or `FuncRef` simply generates a null value.
  * Arbitrary `DiffValue::{F32,F64}` values are guaranteed to use canonical NaN representations, to fix an issue where we can't communicate non-canonical NaN values through JS with the v8 engine.
  * `DiffEngine::evaluate` allows "successful failure" for cases where engines can't support that particular invocation, for example v8 can't support `v128` arguments or return values.
  * Smoke tests were added for each engine to ensure that a simple wasm module works at PR-time.
  * Statistics printed from the main fuzzer now include percentage rates for chosen engines as well as percentage rates for styles of module.

  There are also a few small refactorings here and there, but mostly just things I saw along the way.
* Update the fuzzing README

9758f5420e [fuzz] Remove more fuzz targets (#4737)
* [fuzz] Remove the `differential` fuzz target. This functionality is already covered by the `differential_meta` target.
* [fuzz] Rename `differential_meta` to `differential`. Now that the `differential_meta` fuzz target does everything that the existing `differential` target did and more, it can take over the original name.

8b7fb19b1d [fuzz] Remove some differential fuzz targets (#4735)
* [fuzz] Remove some differential fuzz targets. The changes in #4515 do everything the `differential_spec` and `differential_wasmi` fuzz targets already do. These fuzz targets are now redundant and this PR removes them. It also updates the fuzz documentation slightly.

5ec92d59d2 [fuzz] Add a meta-differential fuzz target (#4515)
* [fuzz] Add `Module` enum, refactor `ModuleConfig`. This change adds a way to create either a single-instruction module or a regular (big) `wasm-smith` module. It has some slight refactorings in preparation for the use of this new code.
* [fuzz] Add `DiffValue` for differential evaluation. In order to evaluate functions with randomly-generated values, we needed a common way to generate these values. Using the Wasmtime `Val` type is not great because we would like to be able to implement various traits on the new value type, e.g., to convert `Into` and `From` boxed values of other engines we differentially fuzz against. This new type, `DiffValue`, gives us a common ground for all the conversions and comparisons between the other engine types.
* [fuzz] Add interface for differential engines. In order to randomly choose an engine to fuzz against, we expect all of the engines to meet a common interface. The traits in this commit allow us to instantiate a module from its binary form, evaluate exported functions, and (possibly) hash the exported items of the instance (a sketch of such an interface follows after this list). This change has some missing pieces, though:
  - the `wasm-spec-interpreter` needs some work to be able to create instances, evaluate a function by name, and expose exported items
  - the `v8` engine is not implemented yet due to the complexity of its Rust lifetimes
* [fuzz] Use `ModuleFeatures` instead of existing configuration. When attempting to use both wasm-smith and single-instruction modules, there is a mismatch in how we communicate what an engine must be able to support. In the first case, we could use the `ModuleConfig`, a wrapper for wasm-smith's `SwarmConfig`, but single-instruction modules do not have a `SwarmConfig`; the many options simply don't apply. Here, we instead add `ModuleFeatures` and adapt a `ModuleConfig` to that. `ModuleFeatures` then becomes the way to communicate what features an engine must support to evaluate functions in a module.
* [fuzz] Add a new fuzz target using the meta-differential oracle. This change adds the `differential_meta` target to the list of fuzz targets. I expect that sometime soon this could replace the other `differential*` targets, as it almost checks all the things those check. The major missing piece is that currently it only chooses single-instruction modules instead of also generating arbitrary modules using `wasm-smith`. Also, this change adds the concept of an ignorable error: some differential engines will choke with certain inputs (e.g., `wasmi` might have an old opcode mapping) which we do not want to flag as fuzz bugs. Here we wrap those errors in `DiffIgnoreError` and then use a new helper trait, `DiffIgnorable`, to downcast and inspect the `anyhow` error to only panic on non-ignorable errors; the ignorable errors are converted to one of the `arbitrary::Error` variants, which we already ignore.
* [fuzz] Compare `DiffValue` NaNs more leniently. Because arithmetic NaNs can contain arbitrary payload bits, checking that two differential executions produce the same result should relax the comparison of the `F32` and `F64` types (and eventually `V128` as well... TODO). This change adds several considerations, however, so that in the future we can make the comparison a bit stricter, e.g., re: canonical NaNs. This change, however, just matches the current logic used by other fuzz targets.
* review: allow hashing to mutate the instance state. @alexcrichton requested that the interface be adapted to accommodate Wasmtime's API, in which even reading from an instance could trigger mutation of the store.
* review: refactor where configurations are made compatible. See @alexcrichton's [suggestion](https://github.com/bytecodealliance/wasmtime/pull/4515#discussion_r928974376).
* review: convert `DiffValueType` using `TryFrom`. See @alexcrichton's [comment](https://github.com/bytecodealliance/wasmtime/pull/4515#discussion_r928962394).
* review: adapt target implementation to Wasmtime-specific RHS. This change is joint work with @alexcrichton to adapt the structure of the fuzz target to his comments [here](https://github.com/bytecodealliance/wasmtime/pull/4515#pullrequestreview-1073247791). This change:
  - removes `ModuleFeatures` and the `Module` enum (for big and small modules)
  - upgrades `SingleInstModule` to filter out cases that are not valid for a given `ModuleConfig`
  - adds `DiffEngine::name()`
  - constructs each `DiffEngine` using a `ModuleConfig`, eliminating `DiffIgnoreError` completely
  - prints an execution rate to the `differential_meta` target

  Still TODO:
  - `get_exported_function_signatures` could be re-written in terms of the Wasmtime API instead of `wasmparser`
  - the fuzzer crashes eventually, we think due to the signal handler interference between OCaml and Wasmtime
  - the spec interpreter has several cases that we skip for now but could be fuzzed with further work

  Co-authored-by: Alex Crichton <alex@alexcrichton.com>
* fix: avoid SIGSEGV by explicitly initializing OCaml runtime first
* review: use Wasmtime's API to retrieve exported functions

Co-authored-by: Alex Crichton <alex@alexcrichton.com>
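A hypothetical sketch of what such a differential-engine interface might look like; the real traits live in Wasmtime's fuzzing support crate and differ in their details, so the names and signatures below are illustrative only:

```rust
use anyhow::Result;

/// A lowest-common-denominator value every engine under test can produce and
/// consume (floats as bit patterns so NaN handling stays explicit).
#[derive(Debug, Clone, PartialEq)]
pub enum DiffValue {
    I32(i32),
    I64(i64),
    F32(u32),
    F64(u64),
}

/// An engine that can instantiate a wasm binary for differential execution.
pub trait DiffEngine {
    fn name(&self) -> &'static str;
    fn instantiate(&mut self, wasm: &[u8]) -> Result<Box<dyn DiffInstance>>;
}

/// One instantiation of a module inside some engine.
pub trait DiffInstance {
    /// Evaluate an exported function; `Ok(None)` models a "successful
    /// failure", i.e. this engine cannot support this particular invocation
    /// (for example `v128` arguments) and the comparison should be skipped.
    fn evaluate(&mut self, func: &str, args: &[DiffValue]) -> Result<Option<Vec<DiffValue>>>;

    /// Exported state used for cross-engine comparison; these take `&mut self`
    /// because even reading an instance can mutate a store in some engines.
    fn get_global(&mut self, name: &str) -> Option<DiffValue>;
    fn get_memory(&mut self, name: &str) -> Option<Vec<u8>>;
}
```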

57dca934ad Upgrade wasm-tools crates, namely the component model (#4715)
* Upgrade wasm-tools crates, namely the component model. This commit pulls in the latest versions of all of the `wasm-tools` family of crates. There were two major changes that happened in `wasm-tools` in the meantime:
  * bytecodealliance/wasm-tools#697 - this commit introduced a new API for more efficiently reading binary operators from a wasm binary. The old `Operator`-based reading was left in place, however, and continues to be what Wasmtime uses. I hope to update Wasmtime in a future PR to use this new API, but for now the biggest change is...
  * bytecodealliance/wasm-tools#703 - this commit was a major update to the component model AST. This commit almost entirely deals with the fallout of this change. The changes made to the component model were:
    1. The `unit` type no longer exists. This was generally a simple change where the `Unit` case was removed in a few different locations.
    2. The `expected` type was renamed to `result`. This similarly was relatively lightweight and mostly just a renaming on the surface. I took this opportunity to rename `val::Result` to `val::ResultVal` and `types::Result` to `types::ResultType` to avoid clashing with the standard library types. The `Option`-based types were handled with this as well.
    3. The payload type of `variant` and `result` types is now optional. This affected many locations that calculate flat type representations, ABI information, etc. The `#[derive(ComponentType)]` macro now specifically maps Rust-defined `enum` types which have no payload to the equivalent in the component model.
    4. Functions can now return multiple results. This changed the signature of invoking component functions because the return value is now bound by `ComponentNamedList` (renamed from `ComponentParams`). This had a large effect in the tests, fuzz test case generation, etc.
    5. Function types with 2-or-more parameters/results must uniquely name all parameters/results. This mostly affected the text format used throughout the tests. I haven't added specific new tests for multi-return but I changed a number of tests to use it. Additionally I've updated the fuzzers to all exercise multi-return as well, so I think we should get some good coverage with that.
* Update version numbers
* Use crates.io

5add267b87 Fix a soundness issue with lowering variants (#4723)
* Fix a compile error on nightly Rust. It looks like Rust nightly has gotten a bit more strict about attributes-on-expressions and previously accepted code is no longer accepted. This commit updates the generated code for a macro to a form which is accepted by rustc.
* Fix a soundness issue with lowering variants. This commit fixes a soundness issue lowering variants in the component model where host memory could be leaked to the guest module by accident. In reviewing code recently for `Val::lower` I noticed that the variant lowering was extending the payload with `ValRaw::u32(0)` to appropriately fit the size of the variant. In reading this it appeared incorrect to me because it should be `ValRaw::u64(0)`, since up to 64 bits can be read. Additionally this implementation was also incorrect because the lowered representation of the payload itself was not necessarily zero-extended to 64 bits to accommodate other variants. It turned out these issues were benign because with the dynamic surface area to the component model the arguments were all initialized to 0 anyway. The static version of the API, however, does not initialize arguments to 0, and I wanted to initially align these two implementations, so I updated the variant implementation of lowering for dynamic values and removed the zeroing of arguments.

  To test this change I updated the `debug` mode of adapter module generation to assert that the upper bits of values in wasm are always zero when the value is cast down (during `stack_get`, which only happens with variants). I then threaded the `debug` boolean configuration parameter into the dynamic and static fuzzers. To my surprise this new assertion tripped even after the fix was applied. It turns out, though, that there was other leakage of bits through other means that I was previously unaware of. At the primitive level, lowerings of types like `u32` will have a `Lower` representation of `ValRaw`, and the lowering is simply `dst.write(ValRaw::i32(self))`, or the equivalent thereof. The problem with this pattern, which the fuzzers detected, is that the `ValRaw` type is 16 bytes, and `ValRaw::i32(X)` only initializes the first 4. This meant that all the lowerings for all primitives were writing up to 12 bytes of garbage from the host for the wasm module to read. It turned out that this write of a `ValRaw` was sometimes 16 bytes and sometimes the appropriate size, depending on the number of optimizations in play. With enough inlining, for example, `dst.write(ValRaw::i32(self))` would only write 4 bytes, as expected. In debug mode, though, without inlining, 16 bytes would be written, including the garbage from the upper bits.

  To solve this issue I ended up taking a somewhat different approach. I primarily updated the `ValRaw` constructors to simply always extend the values internally to 64 bits, meaning that the low 8 bytes of a `ValRaw` are always initialized (illustrated in the sketch below). This prevents any undefined data from leaking from the host into a wasm module, and means that values are also zero-extended even if they're only used in 32-bit contexts outside of a variant. This felt like the best fix for now in terms of not really having a performance impact while additionally not requiring a rewrite of all lowerings. This solution also ended up neatly removing the "zero out the entire payload" logic that was previously required. Now, after a payload is lowered, only the tail end of the payload, up to the size of the variant, is zeroed out. This means that each lowered argument is written to at most once, which should hopefully be a small performance boost for calling into functions as well.
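A self-contained illustration of the idea behind the `ValRaw` fix; this is not Wasmtime's actual `ValRaw` definition, just a stand-in showing why a 32-bit constructor must initialize the full 64-bit lane of the 16-byte raw slot so no host bytes can leak to the guest:

```rust
// Not Wasmtime's real `ValRaw`; a stand-in for the 16-byte raw argument slot.
#[derive(Copy, Clone)]
struct RawSlot {
    bytes: [u8; 16],
}

impl RawSlot {
    // Before the fix, conceptually: only the first 4 bytes were written, so
    // whatever the host left in the rest of the slot was visible to the guest
    // when a variant read the slot back as a 64-bit value.
    fn i32_partial(x: i32) -> RawSlot {
        let mut slot = RawSlot { bytes: [0xAA; 16] }; // 0xAA stands in for host garbage
        slot.bytes[..4].copy_from_slice(&x.to_le_bytes());
        slot
    }

    // After the fix: the value is extended to 64 bits (sign-extended here), so
    // the low 8 bytes are always initialized and nothing host-private leaks.
    fn i32(x: i32) -> RawSlot {
        let mut slot = RawSlot { bytes: [0; 16] };
        slot.bytes[..8].copy_from_slice(&(x as i64).to_le_bytes());
        slot
    }
}

fn main() {
    let leaky = RawSlot::i32_partial(7);
    let sound = RawSlot::i32(7);
    // The partial write leaves the stand-in garbage beyond the first 4 bytes.
    assert!(leaky.bytes[4..8].iter().all(|&b| b == 0xAA));
    // The fixed constructor fully defines the low 64-bit lane.
    assert_eq!(&sound.bytes[..8], (7i64).to_le_bytes().as_slice());
}
```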

866ec46613 Implement roundtrip fuzzing of component adapters (#4640)
* Improve the `component_api` fuzzer on a few dimensions:
  * Update the generated component to use an adapter module. This involves two core wasm instances communicating with each other to test that data flows through everything correctly. The intention here is to fuzz the fused adapter compiler. String encoding options have been plumbed here to exercise differences in string encodings.
  * Use `Cow<'static, ...>` and `static` declarations for each static test case to try to cut down on rustc codegen time.
  * Add `Copy` to derivation of fuzzed enums to make `derive(Clone)` smaller.
  * Use `Store<Box<dyn Any>>` to try to cut down on codegen by monomorphizing fewer `Store<T>` implementations.
  * Add debug logging to print out what's flowing in and what's flowing out for debugging failures.
  * Improve `Debug` representation of dynamic value types to more closely match their Rust counterparts.
* Fix a variant issue with adapter trampolines. Previously the offset of the payload was calculated as the discriminant aligned up to the alignment of a singular case, but instead this needs to be aligned up to the alignment of all cases to ensure all cases start at the same location.
* Fix a copy/paste error when copying masked integers. A 32-bit load was actually doing a 16-bit load by accident since it was copied from the 16-bit load-and-mask case.
* Fix f32/i64 conversions in adapter modules. The adapter previously erroneously converted the f32 to f64 and then to i64, where instead it should go from f32 to i32 to i64.
* Fix zero-sized flags in adapter modules. This commit corrects the size calculation for zero-sized flags in adapter modules. cc #4592
* Fix a variant size calculation bug in adapters. This fixes the same issue found with variants during normal host-side fuzzing earlier, where the size of a variant needs to align up the summation of the discriminant and the maximum case size.
* Implement memory growth in libc bump realloc. Some fuzz-generated test cases are copying lists large enough to exceed one page of memory, so bake in a `memory.grow` to the bump allocator as well.
* Avoid adapters of exponential size. This commit is an attempt to avoid adapters being exponentially sized with respect to the type hierarchy of the input. Previously all adaptation was done inline within each adapter, which meant that if something was structured as `tuple<T, T, T, T, ...>` the translation of `T` would be inlined N times. For very deeply nested types this can quickly create an exponentially sized adapter with types of the form:

      (type $t0 (list u8))
      (type $t1 (tuple $t0 $t0))
      (type $t2 (tuple $t1 $t1))
      (type $t3 (tuple $t2 $t2))
      ;; ...

  where the translation of `t4` has 8 different copies of translating `t0`. This commit changes the translation of types through memory to almost always go through a helper function. The hope here is that it doesn't lose too much performance because types already reside in memory. This can still lead to exponentially sized adapter modules to a lesser degree: if the translation all happens on the "stack", e.g. via `variant`s and their flat representation, then many copies of one translation could still be made. For now this commit at least gets the problem under control for fuzzing, so that fuzzing doesn't trivially find type hierarchies that take over a minute to codegen the adapter module. One of the main tricky parts of this implementation is that when a function is generated, the index that it will be placed at in the final module is not known at that time. To solve this, the encoded form of the `Call` instruction is saved in a relocation-style format where the `Call` isn't encoded but instead saved into a different area for encoding later. When the entire adapter module is encoded to wasm, these pseudo-`Call` instructions are encoded as real instructions at that time. (Both mechanisms are sketched after this list.)
* Fix some memory64 issues with string encodings. Introduced just before #4623, I had a few mistakes related to 64-bit memories and mixing 32/64-bit memories.
* Actually insert into the `translate_mem_funcs` map. This... was the whole point of having the map!
* Assert memory growth succeeds in bump allocator
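A hypothetical sketch (all names invented) of the two mechanisms described above: memoizing one translation helper per type in a `translate_mem_funcs`-style map so nested tuples reuse it instead of inlining it at every use, and recording `call` sites relocation-style so they can be encoded once final function indices are known:

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash)]
enum Type {
    U8,
    List(Box<Type>),
    Tuple(Vec<Type>),
}

#[derive(Default)]
struct AdapterBuilder {
    /// One helper per translated type; without this map, `tuple<T, T>` nesting
    /// duplicates the translation of `T` at every use and the adapter's size
    /// grows exponentially with nesting depth.
    translate_mem_funcs: HashMap<Type, usize>,
    /// Provisional helper bodies, indexed by the values in the map above.
    helpers: Vec<Vec<u8>>,
    /// (call-site offset, provisional helper index) pairs, written out as real
    /// `call` instructions only when the final module layout is known.
    pending_calls: Vec<(usize, usize)>,
}

impl AdapterBuilder {
    fn call_translate_helper(&mut self, ty: &Type, call_site: usize) {
        let next = self.translate_mem_funcs.len();
        let idx = *self.translate_mem_funcs.entry(ty.clone()).or_insert(next);
        if idx == next {
            // First time we see this type: emit its translation exactly once.
            self.helpers.push(emit_translate_body(ty));
        }
        self.pending_calls.push((call_site, idx));
    }
}

fn emit_translate_body(_ty: &Type) -> Vec<u8> {
    Vec::new() // placeholder; the real compiler emits wasm that copies `_ty`
}

fn main() {
    let mut b = AdapterBuilder::default();
    let t0 = Type::List(Box::new(Type::U8));
    let t1 = Type::Tuple(vec![t0.clone(), t0.clone()]);
    // Translating t1 references t0 twice, but only one t0 helper is emitted.
    b.call_translate_helper(&t0, 10);
    b.call_translate_helper(&t0, 20);
    b.call_translate_helper(&t1, 30);
    assert_eq!(b.helpers.len(), 2);
    assert_eq!(b.pending_calls.len(), 3);
}
```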

ed8908efcf implement fuzzing for component types (#4537)
This addresses #4307. For the static API we generate 100 arbitrary test cases at build time, each of which includes 0-5 parameter types, a result type, and a WAT fragment containing an imported function and an exported function. The exported function calls the imported function, which is implemented by the host. At runtime, the fuzz test selects a test case at random and feeds it zero or more sets of arbitrary parameters and results, checking that values which flow host-to-guest and guest-to-host make the transition unchanged. The fuzz test for the dynamic API follows a similar pattern, the only difference being that test cases are generated at runtime.

Signed-off-by: Joel Dice <joel.dice@fermyon.com>

edf7f9f2bb wasmtime: Add lots of logging for externrefs and table_ops fuzz target (#4583)
I essentially add these same logs back in every time I'm debugging something related to this fuzz target or `externref`s in general. Probably like 5 times I've added roughly these logs. We should just make them available whenever we need them via `RUST_LOG=wasmtime_runtime=trace`. This also changes a couple `if let`s to `unwrap`s that are now infallible after

46782b18c2 wasmtime: Implement fast Wasm stack walking (#4431)
* Always preserve frame pointers in Wasmtime
This allows us to efficiently and simply capture Wasm stacks without maintaining
and synchronizing any safety-critical side tables between the compiler and the
runtime.
* wasmtime: Implement fast Wasm stack walking
Why do we want Wasm stack walking to be fast? Because we capture stacks whenever
there is a trap and traps actually happen fairly frequently with short-lived
programs and WASI's `exit`.
Previously, we would rely on generating the system unwind info (e.g.
`.eh_frame`) and using the system unwinder (via the `backtrace` crate) to walk
the full stack and filter out any non-Wasm stack frames. This can,
unfortunately, be slow for two primary reasons:
1. The system unwinder is doing `O(all-kinds-of-frames)` work rather than
`O(wasm-frames)` work.
2. System unwind info and the system unwinder need to be much more general than
a purpose-built stack walker for Wasm needs to be. It has to handle any kind of
stack frame that any compiler might emit, whereas our Wasm frames are emitted by
Cranelift and always have frame pointers. This translates into implementation
complexity and general overhead. There can also be unnecessary-for-our-use-cases
global synchronization and locks involved, further slowing down stack walking in
the presence of multiple threads trying to capture stacks in parallel.
This commit introduces a purpose-built stack walker for traversing just our Wasm
frames. To find all the sequences of Wasm-to-Wasm stack frames, and ignore
non-Wasm stack frames, we keep a linked list of `(entry stack pointer, exit
frame pointer)` pairs. This linked list is maintained via Wasm-to-host and
host-to-Wasm trampolines. Within a sequence of Wasm-to-Wasm calls, we can use
frame pointers (which Cranelift preserves) to find the next older Wasm frame on
the stack, and we keep doing this until we reach the entry stack pointer,
meaning that the next older frame will be a host frame.
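A simplified sketch of that walk (not Wasmtime's actual implementation; the
frame-layout assumptions are spelled out in the comments, and `read_word`
stands in for reading a pointer-sized value off the stack):
```rust
struct Frame {
    pc: usize,
    fp: usize,
}

/// Assumed frame layout (true for Cranelift-emitted frames with frame
/// pointers preserved): `*fp` holds the caller's saved FP and `*(fp + 8)`
/// holds the return address into the caller.
fn walk_wasm_frames(
    entry_sp: usize, // SP recorded by the host-to-Wasm trampoline
    exit_fp: usize,  // FP recorded by the Wasm-to-host trampoline
    exit_pc: usize,  // return address recorded by the Wasm-to-host trampoline
    read_word: impl Fn(usize) -> usize,
    mut visit: impl FnMut(Frame),
) {
    let mut pc = exit_pc;
    let mut fp = exit_fp;
    // The stack grows down, so all Wasm frames in this contiguous sequence
    // live below the entry SP; once the saved FP points at or above it, the
    // next older frame belongs to the host and this sequence is done.
    while fp < entry_sp {
        visit(Frame { pc, fp });
        pc = read_word(fp + 8); // caller's return address
        fp = read_word(fp);     // caller's saved frame pointer
    }
}

fn main() {
    // Fake "stack" for demonstration: two Wasm frames below an entry SP of 64.
    let mem = |addr: usize| -> usize {
        match addr {
            16 => 32,     // frame A's saved FP points at frame B
            24 => 0x1001, // frame A's return address
            32 => 100,    // frame B's saved FP points above the entry SP (host)
            40 => 0x2002, // frame B's return address
            _ => 0,
        }
    };
    walk_wasm_frames(64, 16, 0x1000, mem, |frame| {
        println!("wasm frame: pc={:#x} fp={}", frame.pc, frame.fp);
    });
}
```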
The trampolines need to avoid a couple stumbling blocks. First, they need to be
compiled ahead of time, since we may not have access to a compiler at
runtime (e.g. if the `cranelift` feature is disabled) but still want to be able
to call functions that have already been compiled and get stack traces for those
functions. Usually this means we would compile the appropriate trampolines
inside `Module::new` and the compiled module object would hold the
trampolines. However, we *also* need to support calling host functions that are
wrapped into `wasmtime::Func`s and there doesn't exist *any* ahead-of-time
compiled module object to hold the appropriate trampolines:
```rust
// Define a host function.
let func_type = wasmtime::FuncType::new(
    vec![wasmtime::ValType::I32],
    vec![wasmtime::ValType::I32],
);
let func = Func::new(&mut store, func_type, |_, params, results| {
    // ...
    Ok(())
});
// Call that host function.
let mut results = vec![wasmtime::Val::I32(0)];
func.call(&[wasmtime::Val::I32(0)], &mut results)?;
```
Therefore, we define one host-to-Wasm trampoline and one Wasm-to-host trampoline
in assembly that work for all Wasm and host function signatures. These
trampolines are careful to only use volatile registers, avoid touching any
register that is an argument in the calling convention ABI, and tail call to the
target callee function. This allows forwarding any set of arguments and any
returns to and from the callee, while also allowing us to maintain our linked
list of Wasm stack and frame pointers before transferring control to the
callee. These trampolines are not used in Wasm-to-Wasm calls, only when crossing
the host-Wasm boundary, so they do not impose overhead on regular calls. (And if
using one trampoline for all host-Wasm boundary crossing ever breaks branch
prediction enough in the CPU to become any kind of bottleneck, we can do fun
things like have multiple copies of the same trampoline and choose a random copy
for each function, sharding the functions across branch predictor entries.)
Finally, this commit also ends the use of a synthetic `Module` and allocating a
stubbed out `VMContext` for host functions. Instead, we define a
`VMHostFuncContext` with its own magic value, similar to `VMComponentContext`,
specifically for host functions.
<h2>Benchmarks</h2>
<h3>Traps and Stack Traces</h3>
Large improvements to taking stack traces on traps, ranging from shaving off 64%
to 99.95% of the time it used to take.
<details>
```
multi-threaded-traps/0 time: [2.5686 us 2.5808 us 2.5934 us]
thrpt: [0.0000 elem/s 0.0000 elem/s 0.0000 elem/s]
change:
time: [-85.419% -85.153% -84.869%] (p = 0.00 < 0.05)
thrpt: [+560.90% +573.56% +585.84%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
4 (4.00%) high mild
4 (4.00%) high severe
multi-threaded-traps/1 time: [2.9021 us 2.9167 us 2.9322 us]
thrpt: [341.04 Kelem/s 342.86 Kelem/s 344.58 Kelem/s]
change:
time: [-91.455% -91.294% -91.096%] (p = 0.00 < 0.05)
thrpt: [+1023.1% +1048.6% +1070.3%]
Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
1 (1.00%) high mild
5 (5.00%) high severe
multi-threaded-traps/2 time: [2.9996 us 3.0145 us 3.0295 us]
thrpt: [660.18 Kelem/s 663.47 Kelem/s 666.76 Kelem/s]
change:
time: [-94.040% -93.910% -93.762%] (p = 0.00 < 0.05)
thrpt: [+1503.1% +1542.0% +1578.0%]
Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
5 (5.00%) high severe
multi-threaded-traps/4 time: [5.5768 us 5.6052 us 5.6364 us]
thrpt: [709.68 Kelem/s 713.63 Kelem/s 717.25 Kelem/s]
change:
time: [-93.193% -93.121% -93.052%] (p = 0.00 < 0.05)
thrpt: [+1339.2% +1353.6% +1369.1%]
Performance has improved.
multi-threaded-traps/8 time: [8.6408 us 9.1212 us 9.5438 us]
thrpt: [838.24 Kelem/s 877.08 Kelem/s 925.84 Kelem/s]
change:
time: [-94.754% -94.473% -94.202%] (p = 0.00 < 0.05)
thrpt: [+1624.7% +1709.2% +1806.1%]
Performance has improved.
multi-threaded-traps/16 time: [10.152 us 10.840 us 11.545 us]
thrpt: [1.3858 Melem/s 1.4760 Melem/s 1.5761 Melem/s]
change:
time: [-97.042% -96.823% -96.577%] (p = 0.00 < 0.05)
thrpt: [+2821.5% +3048.1% +3281.1%]
Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
many-modules-registered-traps/1
time: [2.6278 us 2.6361 us 2.6447 us]
thrpt: [378.11 Kelem/s 379.35 Kelem/s 380.55 Kelem/s]
change:
time: [-85.311% -85.108% -84.909%] (p = 0.00 < 0.05)
thrpt: [+562.65% +571.51% +580.76%]
Performance has improved.
Found 9 outliers among 100 measurements (9.00%)
3 (3.00%) high mild
6 (6.00%) high severe
many-modules-registered-traps/8
time: [2.6294 us 2.6460 us 2.6623 us]
thrpt: [3.0049 Melem/s 3.0235 Melem/s 3.0425 Melem/s]
change:
time: [-85.895% -85.485% -85.022%] (p = 0.00 < 0.05)
thrpt: [+567.63% +588.95% +608.95%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
3 (3.00%) high mild
5 (5.00%) high severe
many-modules-registered-traps/64
time: [2.6218 us 2.6329 us 2.6452 us]
thrpt: [24.195 Melem/s 24.308 Melem/s 24.411 Melem/s]
change:
time: [-93.629% -93.551% -93.470%] (p = 0.00 < 0.05)
thrpt: [+1431.4% +1450.6% +1469.5%]
Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
many-modules-registered-traps/512
time: [2.6569 us 2.6737 us 2.6923 us]
thrpt: [190.17 Melem/s 191.50 Melem/s 192.71 Melem/s]
change:
time: [-99.277% -99.268% -99.260%] (p = 0.00 < 0.05)
thrpt: [+13417% +13566% +13731%]
Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
4 (4.00%) high mild
many-modules-registered-traps/4096
time: [2.7258 us 2.7390 us 2.7535 us]
thrpt: [1.4876 Gelem/s 1.4955 Gelem/s 1.5027 Gelem/s]
change:
time: [-99.956% -99.955% -99.955%] (p = 0.00 < 0.05)
thrpt: [+221417% +223380% +224881%]
Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
many-stack-frames-traps/1
time: [1.4658 us 1.4719 us 1.4784 us]
thrpt: [676.39 Kelem/s 679.38 Kelem/s 682.21 Kelem/s]
change:
time: [-90.368% -89.947% -89.586%] (p = 0.00 < 0.05)
thrpt: [+860.23% +894.72% +938.21%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
5 (5.00%) high mild
3 (3.00%) high severe
many-stack-frames-traps/8
time: [2.4772 us 2.4870 us 2.4973 us]
thrpt: [3.2034 Melem/s 3.2167 Melem/s 3.2294 Melem/s]
change:
time: [-85.550% -85.370% -85.199%] (p = 0.00 < 0.05)
thrpt: [+575.65% +583.51% +592.03%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
4 (4.00%) high mild
4 (4.00%) high severe
many-stack-frames-traps/64
time: [10.109 us 10.171 us 10.236 us]
thrpt: [6.2525 Melem/s 6.2925 Melem/s 6.3309 Melem/s]
change:
time: [-78.144% -77.797% -77.336%] (p = 0.00 < 0.05)
thrpt: [+341.22% +350.38% +357.55%]
Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
5 (5.00%) high mild
2 (2.00%) high severe
many-stack-frames-traps/512
time: [126.16 us 126.54 us 126.96 us]
thrpt: [4.0329 Melem/s 4.0461 Melem/s 4.0583 Melem/s]
change:
time: [-65.364% -64.933% -64.453%] (p = 0.00 < 0.05)
thrpt: [+181.32% +185.17% +188.71%]
Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
4 (4.00%) high severe
```
</details>
<h3>Calls</h3>
There is, however, a small regression in raw Wasm-to-host and host-to-Wasm call
performance due to the new trampolines. It seems to be on the order of about 2-10
nanoseconds per call, depending on the benchmark.
I believe this regression is ultimately acceptable because
1. this overhead will be vastly dominated by whatever work a non-nop callee
actually does,
2. we will need these trampolines, or something like them, when implementing the
Wasm exceptions proposal to do things like translate Wasm's exceptions into
Rust's `Result`s,
3. and because the performance improvements to trapping and capturing stack
traces are of a much larger magnitude than these call regressions.
<details>
```
sync/no-hook/host-to-wasm - typed - nop
time: [28.683 ns 28.757 ns 28.844 ns]
change: [+16.472% +17.183% +17.904%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
1 (1.00%) low mild
4 (4.00%) high mild
5 (5.00%) high severe
sync/no-hook/host-to-wasm - untyped - nop
time: [42.515 ns 42.652 ns 42.841 ns]
change: [+12.371% +14.614% +17.462%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
1 (1.00%) high mild
10 (10.00%) high severe
sync/no-hook/host-to-wasm - unchecked - nop
time: [33.936 ns 34.052 ns 34.179 ns]
change: [+25.478% +26.938% +28.369%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
7 (7.00%) high mild
2 (2.00%) high severe
sync/no-hook/host-to-wasm - typed - nop-params-and-results
time: [34.290 ns 34.388 ns 34.502 ns]
change: [+40.802% +42.706% +44.526%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
5 (5.00%) high mild
8 (8.00%) high severe
sync/no-hook/host-to-wasm - untyped - nop-params-and-results
time: [62.546 ns 62.721 ns 62.919 ns]
change: [+2.5014% +3.6319% +4.8078%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
2 (2.00%) high mild
10 (10.00%) high severe
sync/no-hook/host-to-wasm - unchecked - nop-params-and-results
time: [42.609 ns 42.710 ns 42.831 ns]
change: [+20.966% +22.282% +23.475%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
4 (4.00%) high mild
7 (7.00%) high severe
sync/hook-sync/host-to-wasm - typed - nop
time: [29.546 ns 29.675 ns 29.818 ns]
change: [+20.693% +21.794% +22.836%] (p = 0.00 < 0.05)
Performance has regressed.
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
sync/hook-sync/host-to-wasm - untyped - nop
time: [45.448 ns 45.699 ns 45.961 ns]
change: [+17.204% +18.514% +19.590%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) high mild
10 (10.00%) high severe
sync/hook-sync/host-to-wasm - unchecked - nop
time: [34.334 ns 34.437 ns 34.558 ns]
change: [+23.225% +24.477% +25.886%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) high mild
7 (7.00%) high severe
sync/hook-sync/host-to-wasm - typed - nop-params-and-results
time: [36.594 ns 36.763 ns 36.974 ns]
change: [+41.967% +47.261% +52.086%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
3 (3.00%) high mild
9 (9.00%) high severe
sync/hook-sync/host-to-wasm - untyped - nop-params-and-results
time: [63.541 ns 63.831 ns 64.194 ns]
change: [-4.4337% -0.6855% +2.7134%] (p = 0.73 > 0.05)
No change in performance detected.
Found 8 outliers among 100 measurements (8.00%)
6 (6.00%) high mild
2 (2.00%) high severe
sync/hook-sync/host-to-wasm - unchecked - nop-params-and-results
time: [43.968 ns 44.169 ns 44.437 ns]
change: [+18.772% +21.802% +24.623%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
3 (3.00%) high mild
12 (12.00%) high severe
async/no-hook/host-to-wasm - typed - nop
time: [4.9612 us 4.9743 us 4.9889 us]
change: [+9.9493% +11.911% +13.502%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
6 (6.00%) high mild
4 (4.00%) high severe
async/no-hook/host-to-wasm - untyped - nop
time: [5.0030 us 5.0211 us 5.0439 us]
change: [+10.841% +11.873% +12.977%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
3 (3.00%) high mild
7 (7.00%) high severe
async/no-hook/host-to-wasm - typed - nop-params-and-results
time: [4.9273 us 4.9468 us 4.9700 us]
change: [+4.7381% +6.8445% +8.8238%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
5 (5.00%) high mild
9 (9.00%) high severe
async/no-hook/host-to-wasm - untyped - nop-params-and-results
time: [5.1151 us 5.1338 us 5.1555 us]
change: [+9.5335% +11.290% +13.044%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) high mild
13 (13.00%) high severe
async/hook-sync/host-to-wasm - typed - nop
time: [4.9330 us 4.9394 us 4.9467 us]
change: [+10.046% +11.038% +12.035%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) high mild
7 (7.00%) high severe
async/hook-sync/host-to-wasm - untyped - nop
time: [5.0073 us 5.0183 us 5.0310 us]
change: [+9.3828% +10.565% +11.752%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
3 (3.00%) high mild
5 (5.00%) high severe
async/hook-sync/host-to-wasm - typed - nop-params-and-results
time: [4.9610 us 4.9839 us 5.0097 us]
change: [+9.0857% +11.513% +14.359%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
7 (7.00%) high mild
6 (6.00%) high severe
async/hook-sync/host-to-wasm - untyped - nop-params-and-results
time: [5.0995 us 5.1272 us 5.1617 us]
change: [+9.3600% +11.506% +13.809%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
6 (6.00%) high mild
4 (4.00%) high severe
async-pool/no-hook/host-to-wasm - typed - nop
time: [2.4242 us 2.4316 us 2.4396 us]
change: [+7.8756% +8.8803% +9.8346%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
5 (5.00%) high mild
3 (3.00%) high severe
async-pool/no-hook/host-to-wasm - untyped - nop
time: [2.5102 us 2.5155 us 2.5210 us]
change: [+12.130% +13.194% +14.270%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
4 (4.00%) high mild
8 (8.00%) high severe
async-pool/no-hook/host-to-wasm - typed - nop-params-and-results
time: [2.4203 us 2.4310 us 2.4440 us]
change: [+4.0380% +6.3623% +8.7534%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
5 (5.00%) high mild
9 (9.00%) high severe
async-pool/no-hook/host-to-wasm - untyped - nop-params-and-results
time: [2.5501 us 2.5593 us 2.5700 us]
change: [+8.8802% +10.976% +12.937%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
5 (5.00%) high mild
11 (11.00%) high severe
async-pool/hook-sync/host-to-wasm - typed - nop
time: [2.4135 us 2.4190 us 2.4254 us]
change: [+8.3640% +9.3774% +10.435%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
6 (6.00%) high mild
5 (5.00%) high severe
async-pool/hook-sync/host-to-wasm - untyped - nop
time: [2.5172 us 2.5248 us 2.5357 us]
change: [+11.543% +12.750% +13.982%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
1 (1.00%) high mild
7 (7.00%) high severe
async-pool/hook-sync/host-to-wasm - typed - nop-params-and-results
time: [2.4214 us 2.4353 us 2.4532 us]
change: [+1.5158% +5.0872% +8.6765%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
2 (2.00%) high mild
13 (13.00%) high severe
async-pool/hook-sync/host-to-wasm - untyped - nop-params-and-results
time: [2.5499 us 2.5607 us 2.5748 us]
change: [+10.146% +12.459% +14.919%] (p = 0.00 < 0.05)
Performance has regressed.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
sync/no-hook/wasm-to-host - nop - typed
time: [6.6135 ns 6.6288 ns 6.6452 ns]
change: [+37.927% +38.837% +39.869%] (p = 0.00 < 0.05)
Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
2 (2.00%) high mild
5 (5.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - typed
time: [15.930 ns 15.993 ns 16.067 ns]
change: [+3.9583% +5.6286% +7.2430%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
11 (11.00%) high mild
1 (1.00%) high severe
sync/no-hook/wasm-to-host - nop - untyped
time: [20.596 ns 20.640 ns 20.690 ns]
change: [+4.3293% +5.2047% +6.0935%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
5 (5.00%) high mild
5 (5.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - untyped
time: [42.659 ns 42.882 ns 43.159 ns]
change: [-2.1466% -0.5079% +1.2554%] (p = 0.58 > 0.05)
No change in performance detected.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) high mild
14 (14.00%) high severe
sync/no-hook/wasm-to-host - nop - unchecked
time: [10.671 ns 10.691 ns 10.713 ns]
change: [+83.911% +87.620% +92.062%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - unchecked
time: [11.136 ns 11.190 ns 11.263 ns]
change: [-29.719% -28.446% -27.029%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) high mild
10 (10.00%) high severe
sync/hook-sync/wasm-to-host - nop - typed
time: [6.7964 ns 6.8087 ns 6.8226 ns]
change: [+21.531% +24.206% +27.331%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) high mild
10 (10.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - typed
time: [15.865 ns 15.921 ns 15.985 ns]
change: [+4.8466% +6.3330% +7.8317%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) high mild
13 (13.00%) high severe
sync/hook-sync/wasm-to-host - nop - untyped
time: [21.505 ns 21.587 ns 21.677 ns]
change: [+8.0908% +9.1943% +10.254%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
4 (4.00%) high mild
4 (4.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - untyped
time: [44.018 ns 44.128 ns 44.261 ns]
change: [-1.4671% -0.0458% +1.2443%] (p = 0.94 > 0.05)
No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
5 (5.00%) high mild
9 (9.00%) high severe
sync/hook-sync/wasm-to-host - nop - unchecked
time: [11.264 ns 11.326 ns 11.387 ns]
change: [+80.225% +81.659% +83.068%] (p = 0.00 < 0.05)
Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
3 (3.00%) high mild
3 (3.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - unchecked
time: [11.816 ns 11.865 ns 11.920 ns]
change: [-29.152% -28.040% -26.957%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
8 (8.00%) high mild
6 (6.00%) high severe
async/no-hook/wasm-to-host - nop - typed
time: [6.6221 ns 6.6385 ns 6.6569 ns]
change: [+43.618% +44.755% +45.965%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
6 (6.00%) high mild
7 (7.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - typed
time: [15.884 ns 15.929 ns 15.983 ns]
change: [+3.5987% +5.2053% +6.7846%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) high mild
13 (13.00%) high severe
async/no-hook/wasm-to-host - nop - untyped
time: [20.615 ns 20.702 ns 20.821 ns]
change: [+6.9799% +8.1212% +9.2819%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
2 (2.00%) high mild
8 (8.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - untyped
time: [41.956 ns 42.207 ns 42.521 ns]
change: [-4.3057% -2.7730% -1.2428%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
3 (3.00%) high mild
11 (11.00%) high severe
async/no-hook/wasm-to-host - nop - unchecked
time: [10.440 ns 10.474 ns 10.513 ns]
change: [+83.959% +85.826% +87.541%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
5 (5.00%) high mild
6 (6.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - unchecked
time: [11.476 ns 11.512 ns 11.554 ns]
change: [-29.857% -28.383% -26.978%] (p = 0.00 < 0.05)
Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
1 (1.00%) low mild
6 (6.00%) high mild
5 (5.00%) high severe
async/no-hook/wasm-to-host - nop - async-typed
time: [26.427 ns 26.478 ns 26.532 ns]
change: [+6.5730% +7.4676% +8.3983%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - async-typed
time: [28.557 ns 28.693 ns 28.880 ns]
change: [+1.9099% +3.7332% +5.9731%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) high mild
14 (14.00%) high severe
async/hook-sync/wasm-to-host - nop - typed
time: [6.7488 ns 6.7630 ns 6.7784 ns]
change: [+19.935% +22.080% +23.683%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
4 (4.00%) high mild
5 (5.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - typed
time: [15.928 ns 16.031 ns 16.149 ns]
change: [+5.5188% +6.9567% +8.3839%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
9 (9.00%) high mild
2 (2.00%) high severe
async/hook-sync/wasm-to-host - nop - untyped
time: [21.930 ns 22.114 ns 22.296 ns]
change: [+4.6674% +7.7588% +10.375%] (p = 0.00 < 0.05)
Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - untyped
time: [42.684 ns 42.858 ns 43.081 ns]
change: [-5.2957% -3.4693% -1.6217%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
2 (2.00%) high mild
12 (12.00%) high severe
async/hook-sync/wasm-to-host - nop - unchecked
time: [11.026 ns 11.053 ns 11.086 ns]
change: [+70.751% +72.378% +73.961%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
5 (5.00%) high mild
5 (5.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - unchecked
time: [11.840 ns 11.900 ns 11.982 ns]
change: [-27.977% -26.584% -24.887%] (p = 0.00 < 0.05)
Performance has improved.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
async/hook-sync/wasm-to-host - nop - async-typed
time: [27.601 ns 27.709 ns 27.882 ns]
change: [+8.1781% +9.1102% +10.030%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
2 (2.00%) low mild
3 (3.00%) high mild
6 (6.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - async-typed
time: [28.955 ns 29.174 ns 29.413 ns]
change: [+1.1226% +3.0366% +5.1126%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
7 (7.00%) high mild
6 (6.00%) high severe
async-pool/no-hook/wasm-to-host - nop - typed
time: [6.5626 ns 6.5733 ns 6.5851 ns]
change: [+40.561% +42.307% +44.514%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
5 (5.00%) high mild
4 (4.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - typed
time: [15.820 ns 15.886 ns 15.969 ns]
change: [+4.1044% +5.7928% +7.7122%] (p = 0.00 < 0.05)
Performance has regressed.
Found 17 outliers among 100 measurements (17.00%)
4 (4.00%) high mild
13 (13.00%) high severe
async-pool/no-hook/wasm-to-host - nop - untyped
time: [20.481 ns 20.521 ns 20.566 ns]
change: [+6.7962% +7.6950% +8.7612%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
6 (6.00%) high mild
5 (5.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - untyped
time: [41.834 ns 41.998 ns 42.189 ns]
change: [-3.8185% -2.2687% -0.7541%] (p = 0.01 < 0.05)
Change within noise threshold.
Found 13 outliers among 100 measurements (13.00%)
3 (3.00%) high mild
10 (10.00%) high severe
async-pool/no-hook/wasm-to-host - nop - unchecked
time: [10.353 ns 10.380 ns 10.414 ns]
change: [+82.042% +84.591% +87.205%] (p = 0.00 < 0.05)
Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
4 (4.00%) high mild
3 (3.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - unchecked
time: [11.123 ns 11.168 ns 11.228 ns]
change: [-30.813% -29.285% -27.874%] (p = 0.00 < 0.05)
Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
11 (11.00%) high mild
1 (1.00%) high severe
async-pool/no-hook/wasm-to-host - nop - async-typed
time: [27.442 ns 27.528 ns 27.638 ns]
change: [+7.5215% +9.9795% +12.266%] (p = 0.00 < 0.05)
Performance has regressed.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - async-typed
time: [29.014 ns 29.148 ns 29.312 ns]
change: [+2.0227% +3.4722% +4.9047%] (p = 0.00 < 0.05)
Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
6 (6.00%) high mild
1 (1.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - typed
time: [6.7916 ns 6.8116 ns 6.8325 ns]
change: [+20.937% +22.050% +23.281%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
5 (5.00%) high mild
6 (6.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - typed
time: [15.917 ns 15.975 ns 16.051 ns]
change: [+4.6404% +6.4217% +8.3075%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
5 (5.00%) high mild
11 (11.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - untyped
time: [21.558 ns 21.612 ns 21.679 ns]
change: [+8.1158% +9.1409% +10.217%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - untyped
time: [42.475 ns 42.614 ns 42.775 ns]
change: [-6.3613% -4.4709% -2.7647%] (p = 0.00 < 0.05)
Performance has improved.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - unchecked
time: [11.150 ns 11.195 ns 11.247 ns]
change: [+74.424% +77.056% +79.811%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
3 (3.00%) high mild
11 (11.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - unchecked
time: [11.639 ns 11.695 ns 11.760 ns]
change: [-30.212% -29.023% -27.954%] (p = 0.00 < 0.05)
Performance has improved.
Found 15 outliers among 100 measurements (15.00%)
7 (7.00%) high mild
8 (8.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - async-typed
time: [27.480 ns 27.712 ns 27.984 ns]
change: [+2.9764% +6.5061% +9.8914%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
6 (6.00%) high mild
2 (2.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - async-typed
time: [29.218 ns 29.380 ns 29.600 ns]
change: [+5.2283% +7.7247% +10.822%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
2 (2.00%) high mild
14 (14.00%) high severe
```
</details>
* Add s390x support for frame pointer-based stack walking
* wasmtime: Allow `Caller::get_export` to get all exports
* fuzzing: Add a fuzz target to check that our stack traces are correct
We generate Wasm modules that keep track of their own stack as they call and
return between functions, and then we periodically check that if the host
captures a backtrace, it matches what the Wasm module has recorded.
* Remove VM offsets for `VMHostFuncContext` since it isn't used by JIT code
* Add doc comment with stack walking implementation notes
* Document the extra state that can be passed to `wasmtime_runtime::Backtrace` methods
* Add extensive comments for stack walking function
* Factor architecture-specific bits of stack walking out into modules
* Initialize store-related fields in a vmctx to null when there is no store yet
Rather than leaving them as uninitialized data.
* Use `set_callee` instead of manually setting the vmctx field
* Use a more informative compile error message for unsupported architectures
* Document unsafety of `prepare_host_to_wasm_trampoline`
* Use `bti c` instead of `hint #34` in inline aarch64 assembly
* Remove outdated TODO comment
* Remove setting of `last_wasm_exit_fp` in `set_jit_trap`
This is no longer needed as the value is plumbed through to the backtrace code
directly now.
* Only set the stack limit once, in the face of re-entrancy into Wasm
* Add comments for s390x-specific stack walking bits
* Use the helper macro for all libcalls
If we forget to use it and then trigger a GC from the libcall, we could
miss stack frames when walking the stack, fail to find live GC refs, and
then get use-after-free bugs. It's much less risky to always use the helper
macro, which takes care of all of that for us.
* Use the `asm_sym!` macro in Wasm-to-libcall trampolines
This macro handles the macOS-specific underscore prefix stuff for us.
* wasmtime: add size and align to `externref` assertion error message
* Extend the `stacks` fuzzer to have host frames in between Wasm frames
This way we get one or more contiguous sequences of Wasm frames on the stack,
instead of exactly one.
* Add documentation for aarch64-specific backtrace helpers
* Clarify that we only support little-endian aarch64 in trampoline comment
* Use `.machine z13` in s390x assembly file
Since apparently our CI machines have pretty old assemblers that don't have
`.machine z14`. This should be fine though since these trampolines don't make
use of anything that is introduced in z14.
* Fix aarch64 build
* Fix macOS build
* Document the `asm_sym!` macro
* Add windows support to the `wasmtime-asm-macros` crate
* Add windows support to host<--->Wasm trampolines
* Fix trap handler build on windows
* Run `rustfmt` on s390x trampoline source file
* Temporarily disable some assertions about a trap's backtrace in the component model tests
Follow up to re-enable this and fix the associated issue:
https://github.com/bytecodealliance/wasmtime/issues/4535
* Refactor libcall definitions with fewer macros
This refactors the `libcall!` macro to use the
`foreach_builtin_function!` macro to define all of the trampolines.
Additionally, the macro surrounding each individual libcall is no longer
necessary, which helps cut down on the number of macros involved.
* Use `VMOpaqueContext::from_vm_host_func_context` in `VMHostFuncContext::new`
* Move `backtrace` module to be submodule of `traphandlers`
This avoids making some things `pub(crate)` in `traphandlers` that really
shouldn't be.
* Fix macOS aarch64 build
* Use "i64" instead of "word" in aarch64-specific file
* Save/restore entry SP and exit FP/return pointer in the face of panicking imported host functions
Also clean up assertions surrounding our saved entry/exit registers.
* Put "typed" vs "untyped" in the same position of call benchmark names
Regardless if we are doing wasm-to-host or host-to-wasm
* Fix stacks test case generator build for new `wasm-encoder`
* Fix build for s390x
* Expand libcalls in s390x asm
* Disable more parts of component tests now that backtrace assertions are a bit tighter
* Remove assertion that can maybe fail on s390x
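Below is a minimal sketch of what the relaxed `Caller::get_export` enables: a host function looking up an arbitrary export of the calling instance. The `host`/`peek` names and the exported global `g` are illustrative assumptions, not part of this change.
```rust
use wasmtime::{Caller, Extern, Linker};

fn add_host_fn(linker: &mut Linker<()>) -> anyhow::Result<()> {
    linker.func_wrap("host", "peek", |mut caller: Caller<'_, ()>| {
        // `get_export` can now return any export of the caller, not just a
        // memory, so a host function can e.g. inspect an exported global.
        if let Some(Extern::Global(g)) = caller.get_export("g") {
            println!("g = {:?}", g.get(&mut caller));
        }
    })?;
    Ok(())
}
```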
Co-authored-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
|
||
|
|
2127c3a369 |
Fix CI for main (#4486)
* Skip new `table_ops` test under emulation
When emulating we already have to disable most pooling-allocator related tests, so this commit carries over that logic to the new fuzz test, which may run some configurations with the pooling allocator depending on the random input.
* Fix panics in s390x codegen related to aliases
This commit fixes an issue introduced as part of the fix for GHSA-5fhj-g3p3-pq9g. The `reftyped_vregs` list given to `regalloc2` is not allowed to have duplicates in it, and while the list originally doesn't have duplicates, it may once aliases are applied. The fix here is to perform another pass to remove duplicates after the aliases have been processed. |
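As an illustration of that second fix, here is a minimal, self-contained sketch of a dedup pass run after alias resolution; the `u32` vreg indices and the `HashMap` alias map are simplifications for the example, not Cranelift's actual data structures.
```rust
use std::collections::HashMap;

/// Resolve vreg aliases, then drop the duplicates that resolution can
/// introduce, since regalloc2 requires a duplicate-free list.
fn dedup_reftyped_vregs(reftyped: &mut Vec<u32>, aliases: &HashMap<u32, u32>) {
    for vreg in reftyped.iter_mut() {
        // Follow alias chains to the canonical vreg (assumed acyclic).
        while let Some(&canonical) = aliases.get(vreg) {
            *vreg = canonical;
        }
    }
    reftyped.sort_unstable();
    reftyped.dedup();
}
```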
||
|
|
2154c63de9 |
Merge pull request from GHSA-5fhj-g3p3-pq9g
* Improve cranelift disassembly of stack maps
Print out extra information about stack maps such as their contents and
other related metadata available. Additionally also print out addresses
in hex to line up with the disassembly otherwise printed as well.
* Improve the `table_ops` fuzzer
* Generate more instructions by default
* Fix negative indices appearing in `table.{get,set}`
* Assert that the traps generated are expected to prevent accidental
other errors reporting a fuzzing success.
* Fix `reftype_vregs` reported to `regalloc2`
This fixes a mistake in the register allocation of Cranelift functions
where functions using reference-typed arguments incorrectly report which
virtual registers are reference-typed values if there are vreg aliases
in play. The fix here is to apply the vreg aliases to the final list of
reftyped regs which is eventually passed to `regalloc2`.
The main consequence of this fix is that functions which previously
accidentally didn't have correct stack maps should now have the missing
stack maps.
* Add a test that `table_ops` gc's eventually
* Add a comment about new alias resolution
* Update crates/fuzzing/src/oracles.rs
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
* Add some comments
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
|
||
|
|
08a60a0f08 |
Fix fuel configuration in fuzzers (#4440)
Previously fuel configuration in our fuzzers ran afoul of #4439 so this commit fixes how fuel is managed to avoid that issue. |
||
|
|
52ad76ed7c |
Update differential fuzzing configuration (#4386)
* Update differential fuzzing configuration
This uses some new features of `wasm-smith` and additionally tweaks the existing fuzz configuration:
* More than one function is now allowed to be generated. There's no particular reason to limit differential execution to just one and we may want to explore other interesting module shapes.
* More than one function type is now allowed, to possibly allow more interesting `block` types.
* Memories are now allowed to grow beyond one page, but still stay small by staying underneath 10 pages.
* Tables are now always limited in their growth to ensure consistent behavior across engines (e.g. with the pooling allocator vs v8).
* The `export_everything` feature is used instead of specifying a min/max number of exports. The `wasmi` differential fuzzer was updated to still work if memory is exported, but otherwise the v8 differential fuzzer already worked if a function was exported but a memory wasn't. Both fuzzers continue to execute only the first exported function.
Also notable from this update is that the `SwarmConfig` from `wasm-smith` will now include an arbitrary `allowed_instructions` configuration which may help explore the space of interesting modules more effectively.
* Fix typos |
||
|
|
54aa720506 |
fuzzing: Refactor TableOps fuzz generator to allow GC with refs on the stack (#4016)
This makes the generator more similar to `wasm-smith` where it is keeping track of what is on the stack and making choices about what instructions are valid to generate given the current stack state. This should in theory allow the generator to emit GC calls while there are live refs on the stack. Fixes #3917 |
||
|
|
c8daf0b8db |
support SIMD fuzzing in reference interpreter (#3980)
* support SIMD fuzzing in reference interpreter * formatting |
||
|
|
c22033bf93 |
Delete historical interruptable support in Wasmtime (#3925)
* Delete historical interruptable support in Wasmtime
This commit removes the `Config::interruptable` configuration along with the `InterruptHandle` type from the `wasmtime` crate. The original support for adding interruption to WebAssembly was added pretty early on in the history of Wasmtime when there was no other method to prevent an infinite loop from the host. Nowadays, however, there are alternative methods for interruption such as fuel or epoch-based interruption.
One of the major downsides of `Config::interruptable` is that even when it's not enabled it forces an atomic swap to happen when entering WebAssembly code. This technically could be a non-atomic swap if the configuration option isn't enabled, but that produces even more branch-y code on entry into WebAssembly, which is already something we try to optimize. Calling into WebAssembly is on the order of dozens of nanoseconds at this time and an atomic swap, even uncontended, can add up to 5ns on some platforms.
The main goal of this PR is to remove this atomic swap on entry into WebAssembly. This is done by removing the `Config::interruptable` field entirely, moving all existing consumers to epochs instead, which are suitable for the same purposes. This means that the stack overflow check is no longer entangled with the interruption check and perhaps one day we could continue to optimize that further as well.
Some consequences of this change are:
* Epochs are now the only method of remote-thread interruption.
* There are no more Wasmtime traps that produce the `Interrupted` trap code, although we may wish to move future traps to this so I left it in place.
* The C API support for interrupt handles was also removed and bindings for epoch methods were added.
* Function-entry checks for interruption are a tiny bit less efficient since one check is performed for the stack limit and a second is performed for the epoch, as opposed to the `Config::interruptable` style of bundling the stack limit and the interrupt check in one. It's expected though that this is likely to not really be measurable.
* The old `VMInterrupts` structure is renamed to `VMRuntimeLimits`. |
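For embedders migrating off interrupt handles, here is a minimal sketch of the epoch-based replacement, assuming current `wasmtime` APIs (`epoch_interruption`, `set_epoch_deadline`, `increment_epoch`); the infinite-loop module and the timer thread are illustrative only.
```rust
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.epoch_interruption(true);
    let engine = Engine::new(&config)?;
    let module = Module::new(&engine, r#"(module (func (export "run") (loop (br 0))))"#)?;

    let mut store = Store::new(&engine, ());
    // Trap as soon as the engine's epoch advances past the current one.
    store.set_epoch_deadline(1);

    // Another thread (or a timer) bumps the epoch to interrupt execution.
    let engine2 = engine.clone();
    std::thread::spawn(move || {
        std::thread::sleep(std::time::Duration::from_millis(50));
        engine2.increment_epoch();
    });

    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    assert!(run.call(&mut store, ()).is_err()); // interrupted by the epoch
    Ok(())
}
```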
||
|
|
6e9da94e43 |
Relax restrictions in the differential fuzzer (#3890)
If either end stack overflows we can't validate the other side since the other side, depending on codegen settings, may have been successful, hit a different trap, or also stack overflowed. |
||
|
|
dbe797447d |
Fix table element limits in spectest fuzzer (#3888)
Ensure that there's enough table elements allowed to execute the spec tests since some tests have a minimum required. |
||
|
|
7d1bc7d808 |
Move spec interpreter fuzzing behind a Cargo feature (#3871)
* Move spec interpreter fuzzing behind a Cargo feature
Building the spec interpreter requires a local installation of OCaml and now libgmp, which isn't always available, so this adds the ability to disable building the spec interpreter by using `cargo +nightly fuzz build --no-default-features`. The spec interpreter is still built by default, but if fuzzers are being built locally and the spec interpreter isn't needed then this makes it relatively easy to opt out of.
* Tweak manifest directives |
||
|
|
2f48c890a8 |
Fix failing fuzzers with too-small instance sizes (#3873)
The recent removal of `ModuleLimits` meant that the update to the fuzzers could quickly fail where the instance size limit was set to something small (like 0) and then nothing would succeed in compilation. This allows the modules to fail to compile and then continues to the next fuzz input in these situations. |
||
|
|
98ef18a22a |
Fuzzing against verified fork of spec interpreter (#3843)
* Revert "Remove spec interpreter fuzz target temporarily (#3399)"
This reverts commit
|
||
|
|
4e26c13bbe |
Add basic epoch-interruption config to fuzzing options. (#3844)
Without async fuzzing, we won't be able to test the most interesting aspects of epoch interruption, namely the interrupt/update-deadline/resume flow. However, the "trap on epoch change" behavior works even for synchronous stores, so we can fuzz with this the same way we fuzz with the interrupt flag. |
||
|
|
f425eb7ea5 |
Limit total memory usage in instantiate-many fuzzer (#3823)
Per-`Store` allocations are already limited with the `StoreLimits` structure while fuzzing to ensure fuzz targets don't allocate more than 1GB of memory, but the `instantiate-many` fuzzer created many separate stores which each had their own limit, meaning that the 2GB limit of fuzzing could be pretty easily reached. This commit fixes the issue by making `StoreLimits` a shareable type via `Rc` to ensure the same limits can be applied to all stores created within a fuzz run, globally limiting the memory even across stores to 1GB. |
||
|
|
6ffcd4ead9 |
Improve stability for fuzz targets. (#3804)
This commit improves the stability of the fuzz targets by ensuring the generated configs and modules are congruent, especially when the pooling allocator is being used. For the `differential` target, this means both configurations must use the same allocation strategy for now as one side generates the module that might not be compatible with another arbitrary config now that we fuzz the pooling allocator. These changes also ensure that constraints put on the config are more consistently applied, especially when using a fuel-based timeout. |
||
|
|
41eb225765 |
Add the instance allocation strategy to generated fuzzing configs. (#3780)
* Add the instance allocation strategy to generated fuzzing configs.
This commit adds support for generating configs with arbitrary instance allocation strategies. With this, the pooling allocator will be fuzzed as part of the existing fuzz targets.
* Refine maximum constants for arbitrary module limits.
* Add an `instantiate-many` fuzz target.
This commit adds a new `instantiate-many` fuzz target that will attempt to instantiate and terminate modules in an arbitrary order. It generates up to 5 modules, from which a random sequence of instances will be created. The primary beneficiary of this fuzz target is the pooling instance allocator.
* Allow no aliasing in generated modules when using the pooling allocator.
This commit prevents aliases in the generated modules as they might count against the configured import limits of the pooling allocator. As the existing module linking proposal implementation will eventually be deprecated in favor of the component model proposal, it isn't very important that we test aliases in generated modules with the pooling allocator.
* Improve distribution of memory config in fuzzing.
The previous commit attempted to provide a 32-bit upper bound to 64-bit arbitrary values, which skewed the distribution heavily in favor of the upper bound. This commit removes the constraint and instead uses arbitrary 32-bit values that are converted to 64-bit values in the `Arbitrary` implementation. |
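A minimal sketch of that last distribution fix, using the `arbitrary` crate; `MemoryConfig` and its single field are hypothetical stand-ins for the real fuzz configuration.
```rust
use arbitrary::{Arbitrary, Result, Unstructured};

// Hypothetical config type used only to illustrate the distribution fix.
struct MemoryConfig {
    max_memory_pages: u64,
}

impl<'a> Arbitrary<'a> for MemoryConfig {
    fn arbitrary(u: &mut Unstructured<'a>) -> Result<Self> {
        // Drawing a full u64 and clamping it to a 32-bit bound lands on the
        // bound almost every time; drawing a u32 and widening it keeps the
        // distribution uniform over the intended 32-bit range.
        let pages: u32 = u.arbitrary()?;
        Ok(MemoryConfig { max_memory_pages: pages as u64 })
    }
}
```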
||
|
|
027dea549a |
Fuzz using precompiled modules on CI (#3788)
In working on #3787 I see now that our coverage of loading precompiled files specifically is somewhat lacking, so this adds a config option to the fuzzers which, if enabled, round-trips all compiled modules through the filesystem to test out the mmapped-file case. |
||
|
|
4ba3404ca3 |
Fix MemFd's allocated memory for dynamic memories (#3763)
This fixes a bug in the memfd-related management of a linear memory where for dynamic memories memfd wasn't informed of the extra room that the dynamic memory could grow into, only the actual size of linear memory, which ended up tripping an assert once the memory was grown. The fix here is pretty simple which is to factor in this extra space when passing the allocation size to the creation of the `MemFdSlot`. |
||
|
|
f292ff55cf |
Do another GC after running Wasm in the table_ops test oracle
|
||
|
|
2459776424 |
fuzz: Fix infinite loops in table_ops fuzzers
In the recent refactoring I forgot to add back fuel support to the `table_ops` fuzzer. This commit re-adds the pre-existing logic to always use fuel to cancel execution of the `table_ops` fuzzer. |
||
|
|
ab1d845ac1 |
Refactor fuzzing configuration and sometimes disable debug verifier. (#3664)
* fuzz: Refactor Wasmtime's fuzz targets
A recent fuzz bug found is related to timing out when compiling a
module. This timeout, however, is predominately because Cranelift's
debug verifier is enabled and taking up over half the compilation time.
I wanted to fix this by disabling the verifier when input modules might
have a lot of functions, but this was pretty difficult to implement.
Over time we've grown a number of various fuzzers. Most are
`wasm-smith`-based at this point but there's various entry points for
configuring the wasm-smith module, the wasmtime configuration, etc. I've
historically gotten quite lost in trying to change defaults and feeling
like I have to touch a lot of different places. This is the motivation
for this commit, simplifying fuzzer default configuration.
This commit removes the ability to create a default `Config` for
fuzzing, instead only supporting generating a configuration via
`Arbitrary`. This then involved refactoring all targets and fuzzers to
ensure that configuration is generated through `Arbitrary`. This should
actually expand the coverage of some existing fuzz targets since
`Arbitrary for Config` will tweak options that don't affect runtime,
such as memory configuration or jump veneers.
All existing fuzz targets are refactored to use this new method of
configuration. Some fuzz targets were also shuffled around or
reimplemented:
* `compile` - this now directly calls `Module::new` to skip all the
fuzzing infrastructure. This is mostly done because this fuzz target
isn't too interesting and is largely just seeing what happens when
things are thrown at the wall for Wasmtime.
* `instantiate-maybe-invalid` - this fuzz target now skips instantiation
and instead simply goes into `Module::new` like the `compile` target.
The rationale behind this is that most modules won't instantiate
anyway and this fuzz target is primarily fuzzing the compiler. This
skips having to generate arbitrary configuration since
wasm-smith-generated-modules (or valid ones at least) aren't used
here.
* `instantiate` - this fuzz target was removed. In general this fuzz
target isn't too interesting in isolation. Almost everything it deals
with likely won't pass compilation and is covered by the `compile`
fuzz target, and otherwise interesting modules being instantiated can
all theoretically be created by `wasm-smith` anyway.
* `instantiate-wasm-smith` and `instantiate-swarm` - these were both merged
into a new `instantiate` target (replacing the old one from above).
There wasn't really much need to keep these separate since they really
only differed at this point in methods of timeout. Otherwise we much
more heavily use `SwarmConfig` than wasm-smith's built-in options.
The intention is that we should still have basically the same coverage
of fuzzing as before, if not better because configuration is now
possible on some targets. Additionally there is one centralized point of
configuration for fuzzing for wasmtime, `Arbitrary for ModuleConfig`.
This internally creates an arbitrary `SwarmConfig` from `wasm-smith` and
then further tweaks it for Wasmtime's needs, such as enabling various
wasm proposals by default. In the future enabling a wasm proposal on
fuzzing should largely just be modifying this one trait implementation.
* fuzz: Sometimes disable the cranelift debug verifier
This commit disables the cranelift debug verifier if the input wasm
module might be "large" for the definition of "more than 10 functions".
While fuzzing we disable threads (set them to 1) and enable the
cranelift debug verifier. Coupled with a 20-30x slowdown this means that
a module with the maximum number of functions, 100, gives:
60s (the fuzz timeout) / 100 functions / 30x slowdown = 20ms
With only 20 milliseconds per function this is even further halved by
the `differential` fuzz target compiling a module twice, which means
that, when compiling with a normal release mode Wasmtime, if any
function takes more than 10ms to compile then it's a candidate for
timing out while fuzzing. Given that the cranelift debug verifier can
more than double compilation time in fuzzing mode this actually means
that the real time budget for function compilation is more like 4ms.
The `wasm-smith` crate can pretty easily generate a large function that
takes 4ms to compile, and then when that function is multiplied 100x in
the `differential` fuzz target we trivially time out the fuzz target.
The hope of this commit is to buy back half our budget by disabling the
debug verifier for modules that may have many functions (a rough sketch
of this heuristic appears at the end of this commit message). Further
refinements can be implemented in the future, such as limiting functions
for just the differential target as well.
* Fix the single-function-module fuzz configuration
* Tweak how features work in differential fuzzing
* Disable everything for baseline differential fuzzing
* Enable selectively for each engine afterwards
* Also forcibly enable reference types and bulk memory for spec tests
* Log wasms when compiling
* Add reference types support to v8 fuzzer
* Fix timeouts via fuel
The default store has "infinite" fuel so that needs to be consumed
before fuel is added back in.
* Remove fuzzing-specific tests
These no longer compile and also haven't been added to in a long time.
Most of the time a reduced form of the original fuzz test case is added
when a fuzz bug is fixed.
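Below is a minimal sketch of the "disable the verifier for large modules" heuristic described above; the `max_funcs` parameter and the function name are illustrative assumptions rather than the fuzzer's actual plumbing, and the threshold of 10 functions comes from the text.
```rust
use wasmtime::Config;

fn configure_for_fuzzing(max_funcs: usize) -> Config {
    let mut config = Config::new();
    // The Cranelift IR verifier can more than double compile time while
    // fuzzing, so only keep it enabled for small modules where the time
    // budget can absorb the extra cost.
    config.cranelift_debug_verifier(max_funcs <= 10);
    config
}
```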
|
||
|
|
ec43254292 |
Enable nan canonicalization in differential fuzzing (#3557)
This fixes a fuzz issue discovered over the weekend where stores with different settings for NaN canonicalization may produce different results. This is expected, however, so the fix for differential execution is to always enable NaN canonicalization. |
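A minimal sketch of the knob in question, assuming the embedder-facing `Config` API: with canonicalization enabled, both sides of a differential execution observe the same NaN bit patterns.
```rust
use wasmtime::{Config, Engine};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Canonicalize NaN payloads so float results are bit-identical across
    // configurations being compared.
    config.cranelift_nan_canonicalization(true);
    let _engine = Engine::new(&config)?;
    Ok(())
}
```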
||
|
|
ff1af20479 |
Add a fuzz mode to stress unaligned wasm addresses (#3516)
Alignment on all memory instructions in wasm is currently best-effort and not actually required, meaning that whatever wasm actually uses as an address should work regardless of whether the address is aligned or not. This is theoretically tested in the fuzzers via wasm-smith-generated code, but wasm-smith today doesn't have too high of a chance of generating an actual successful load/store.
This commit adds a new configuration option to the `Config` generator for fuzzing which forces usage of a custom linear memory implementation which is backed by Rust's `Vec<u8>` and forces the base address of linear memory to be off-by-one relative to the base address of the `Vec<u8>` itself. This should theoretically force host addresses to almost always be unaligned, even if wasm addresses are otherwise aligned.
The main interesting fuzz coverage here is likely to be in the existing `differential` target which compares running the same module in wasmtime with two different `Config` values to ensure the same results are produced. This probably won't increase coverage all that much in the near future due to wasm-smith rarely generating successful loads/stores, but in the meantime by hooking this up into `Config` it also means that we'll be running in comparison against v8 and also ensuring that all spec tests succeed if misalignment is forced at the hardware level.
As a side effect this commit also cleans up the fuzzers slightly:
* The `DifferentialConfig` struct is removed and folded into `Config`
* The `init_hang_limit` processing is removed since we don't use `-ttf`-generated modules from binaryen any more.
* Traps are now asserted to have the same trap code, otherwise differential fuzzing fails.
* Some more debug logging was added to the differential fuzzer |
||
|
|
9a27fdad86 |
Update v8 used during fuzzing (#3493)
This commit updates the crate name from `rusty_v8` to `v8` as well since the upstream bindings have since moved. I originally wanted to do this to see if a fix for one of our fuzz bugs was pulled in, but I don't think the fix has been pulled in yet. Despite that it seems reasonable to go ahead and update. |
||
|
|
1a5fa3ed3f |
Enable stable wasm features for spectest fuzzing (#3454)
These are now required for the general suite of spec tests we're running by default. |
||
|
|
91482f39d0 |
Disable module linking in compilation fuzzer (#3416)
Module linking is otherwise covered by other fuzzers and by enabling module linking it rejects more modules than necessary due to restrictions on import strings. |
||
|
|
1ee2af0098 |
Remove the lightbeam backend (#3390)
This commit removes the Lightbeam backend from Wasmtime as per [RFC 14]. This backend hasn't received maintenance in quite some time, and as [RFC 14] indicates this doesn't meet the threshold for keeping the code in-tree, so this commit removes it. A fast "baseline" compiler may still be added in the future. The addition of such a backend should be in line with [RFC 14], though, with the principles we now have for stable releases of Wasmtime. I'll close out Lightbeam-related issues once this is merged. [RFC 14]: https://github.com/bytecodealliance/rfcs/pull/14 |
||
|
|
bcf3544924 |
Optimize Func::call and its C API (#3319)
* Optimize `Func::call` and its C API
This commit is an alternative to #3298 which achieves effectively the same goal of optimizing the `Func::call` API as well as its C API sibling of `wasmtime_func_call`. The strategy taken here is different than #3298 though in that a new API isn't created; rather, a small tweak to an existing API is made. Specifically this commit handles the major sources of slowness with `Func::call` with:
* Looking up the type of a function, to typecheck the arguments with and use to guide how the results should be loaded, no longer hits the rwlock in the `Engine` but instead each `Func` contains its own `FuncType`. This can be an unnecessary allocation for funcs not used with `Func::call`, so this is a downside of this implementation relative to #3298. A mitigating factor, though, is that instance exports are loaded lazily into the `Store` and in theory not too many funcs are active in the store as `Func` objects.
* Temporary storage is amortized with a long-lived `Vec` in the `Store` rather than allocating a new vector on each call. This is basically the same strategy as #3294 only applied to different types in different places. Specifically `wasmtime::Store` now retains a `Vec<u128>` for `Func::call`, and the C API retains a `Vec<Val>` for calling `Func::call`.
* Finally, an API breaking change is made to `Func::call` and its type signature (as well as `Func::call_async`). Instead of returning `Box<[Val]>` as it did before, this function now takes a `results: &mut [Val]` parameter. This allows the caller to manage the allocation and we can amortize-remove it in `wasmtime_func_call` by using space after the parameters in the `Vec<Val>` we're passing in. This change is naturally a breaking change and we'll want to consider it carefully, but mitigating factors are that most embeddings are likely using `TypedFunc::call` instead and this signature taking a mutable slice better aligns with `Func::new` which receives a mutable slice for the results.
Overall this change, in the benchmark of "call a nop function from the C API", is not quite as good as #3298. It's still a bit slower, on the order of 15ns, because there are lots of capacity checks around vectors and the type checks are slightly less optimized than before. Overall though this is still significantly better than today because allocations and the rwlock to acquire the type information are both avoided. I personally feel that this change is the best to do because it has less of an API impact than #3298.
* Rebase issues |
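A minimal sketch of the new calling convention described in that last bullet, assuming current `wasmtime` APIs: the caller supplies the results buffer instead of receiving a freshly allocated `Box<[Val]>`.
```rust
use wasmtime::{Engine, Func, Store, Val};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let mut store = Store::new(&engine, ());
    let add = Func::wrap(&mut store, |a: i32, b: i32| a + b);

    // The caller owns and can reuse this buffer across calls.
    let mut results = [Val::I32(0)];
    add.call(&mut store, &[Val::I32(1), Val::I32(2)], &mut results)?;
    assert_eq!(results[0].i32(), Some(3));
    Ok(())
}
```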
||
|
|
51131a3acc |
Fix s390x regressions (#3330)
- Add relocation handling needed after PR #3275
- Fix incorrect handling of signed constants detected by PR #3056 test
- Fix LabelUse max pos/neg ranges; fix overflow in buffers.rs
- Disable fuzzing tests that require pre-built v8 binaries
- Disable cranelift test that depends on i128
- Temporarily disable memory64 tests |
||
|
|
d2ce1ac753 |
Fix a use-after-free bug when passing ExternRefs to Wasm
We _must not_ trigger a GC when moving refs from host code into Wasm (e.g. returned from a host function or passed as arguments to a Wasm function). After insertion into the table, this reference is no longer rooted. If multiple references are being sent from the host into Wasm and we allowed GCs during insertion, then the following events could happen:
* Reference A is inserted into the activations table. This does not trigger a GC, but does fill the table to capacity.
* The caller's reference to A is removed. Now the only reference to A is from the activations table.
* Reference B is inserted into the activations table. Because the table is at capacity, a GC is triggered.
* A is reclaimed because the only reference keeping it alive was the activation table's reference (it isn't inside any Wasm frames on the stack yet, so stack scanning and stack maps don't increment its reference count).
* We transfer control to Wasm, giving it A and B. Wasm uses A. That's a use after free.
To prevent uses after free, we cannot GC when moving refs into the `VMExternRefActivationsTable` because we are passing them from the host to Wasm. On the other hand, when we are *cloning* -- as opposed to moving -- refs from the host to Wasm, then it is fine to GC while inserting into the activations table, because the original referent that we are cloning from is still alive and rooting the ref. |
||
|
|
4376cf2609 |
Add differential fuzzing against V8 (#3264)
* Add differential fuzzing against V8
This commit adds a differential fuzzing target to Wasmtime along the lines of the wasmi and spec interpreters we already have, but with V8 instead. The intention here is that wasmi is unlikely to receive updates over time (e.g. for SIMD), and the spec interpreter is not suitable for fuzzing against in general due to its performance characteristics. The hope is that V8 is indeed appropriate to fuzz against because it's naturally receiving updates and it also is expected to have good performance.
Here the `rusty_v8` crate is used which provides bindings to V8 as well as precompiled binaries by default. This matches exactly the use case we need, and at least for now I think the `rusty_v8` crate will be maintained by the Deno folks as they continue to develop it. If maintaining it becomes an issue, though, we can evaluate other options for differential fuzzing. For now this commit enables the SIMD and bulk-memory features of fuzz-target generation which should enable them to get differentially fuzzed with V8 in addition to the compilation fuzzing we're already getting.
* Use weak linkage for GDB jit helpers
This should help us deduplicate our symbol with other JIT runtimes, if any. For now this leans on some C helpers to define the weak linkage since Rust doesn't support that on stable yet.
* Don't use rusty_v8 on MinGW
They don't have precompiled libraries there.
* Fix msvc build
* Comment about execution |
||
|
|
0642e62f16 |
Use wasm-smith to canonicalize NaN in differential fuzzing (#3195)
* Update wasm-smith to 0.7.0
* Canonicalize NaN with wasm-smith for differential fuzzing
This then also enables floating point execution in wasmi in addition to the spec interpreter. With NaN canonicalization at the wasm level this means that we should be producing deterministic results between Wasmtime and these alternative implementations. |
||
|
|
e68aa99588 |
Implement the memory64 proposal in Wasmtime (#3153)
* Implement the memory64 proposal in Wasmtime
This commit implements the WebAssembly [memory64 proposal][proposal] in both Wasmtime and Cranelift. In terms of work done Cranelift ended up needing very little work here since most of it was already prepared for 64-bit memories at one point or another. Most of the work in Wasmtime is largely refactoring, changing a bunch of `u32` values to something else. A number of internal and public interfaces are changing as a result of this commit, for example:
* Accessors on `wasmtime::Memory` that work with pages now all return `u64` unconditionally rather than `u32`. This makes it possible to accommodate 64-bit memories with this API, but we may also want to consider `usize` here at some point since the host can't grow past `usize`-limited pages anyway.
* The `wasmtime::Limits` structure is removed in favor of minimum/maximum methods on table/memory types.
* Many libcall intrinsics called by jit code now unconditionally take `u64` arguments instead of `u32`. Return values are `usize`, however, since the return value, if successful, is always bounded by host memory while arguments can come from any guest.
* The `heap_addr` clif instruction now takes a 64-bit offset argument instead of a 32-bit one. It turns out that the legalization of `heap_addr` already worked with 64-bit offsets, so this change was fairly trivial to make.
* The runtime implementation of mmap-based linear memories has changed to largely work in `usize` quantities in its API and in bytes instead of pages. This simplifies various aspects and reflects that mmap-memories are always bound by `usize` since that's what the host is using to address things, and additionally most calculations care about bytes rather than pages except for the very edge where we're going to/from wasm.
Overall I've tried to minimize the amount of `as` casts as much as possible, using checked `try_from` and checked arithmetic with either error handling or explicit `unwrap()` calls to tell us about bugs in the future. Most locations have relatively obvious things to do with various implications on various hosts, and I think they should all be roughly of the right shape but time will tell. I mostly relied on the compiler complaining that various types weren't aligned to figure out type-casting, and I manually audited some of the more obvious locations. I suspect we have a number of hidden locations that will panic on 32-bit hosts if 64-bit modules try to run there, but otherwise I think we should be generally ok (famous last words). In any case I wouldn't want to enable this by default naturally until we've fuzzed it for some time.
In terms of the actual underlying implementation, no one should expect memory64 to be all that fast. Right now it's implemented with "dynamic" heaps which have a few consequences:
* All memory accesses are bounds-checked. I'm not sure how aggressively Cranelift tries to optimize out bounds checks, but I suspect not a ton since we haven't stressed this much historically.
* Heaps are always precisely sized. This means that every call to `memory.grow` will incur a `memcpy` of memory from the old heap to the new. We probably want to at least look into `mremap` on Linux and otherwise try to implement schemes where dynamic heaps have some reserved pages to grow into to help amortize the cost of `memory.grow`.
The memory64 spec test suite is scheduled to now run on CI, but as with all the other spec test suites it's really not all that comprehensive.
I've tried adding more tests for basic things as I've had to implement guards for them, but I wouldn't really consider the testing adequate from just this PR itself. I did try to take care in one test to actually allocate a 4gb+ heap and then avoid running that in the pooling allocator or in emulation because otherwise that may fail or take excessively long.
[proposal]: https://github.com/WebAssembly/memory64/blob/master/proposals/memory64/Overview.md
* Fix some tests
* More test fixes
* Fix wasmtime tests
* Fix doctests
* Revert to 32-bit immediate offsets in `heap_addr`
This commit updates the generation of addresses in wasm code to always use 32-bit offsets for `heap_addr`, and if the calculated offset is bigger than 32 bits we emit a manual add with an overflow check.
* Disable memory64 for spectest fuzzing
* Fix wrong offset being added to heap addr
* More comments!
* Clarify bytes/pages |
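A minimal sketch of the embedder-facing side of the proposal, assuming current `wasmtime` APIs: 64-bit memories must be enabled explicitly and page counts are `u64`. The limits chosen here are arbitrary illustrations.
```rust
use wasmtime::{Config, Engine, Memory, MemoryType, Store};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.wasm_memory64(true);
    let engine = Engine::new(&config)?;
    let mut store = Store::new(&engine, ());

    // A 64-bit memory: minimum 1 page, maximum 10 pages.
    let ty = MemoryType::new64(1, Some(10));
    let memory = Memory::new(&mut store, ty)?;
    assert_eq!(memory.size(&store), 1); // size in pages, now u64
    Ok(())
}
```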
||
|
|
76a93dc112 |
fuzz: log Wasm contents to file when log::debug is enabled
Previously, the WAT was printed as a log message. This change standardizes all of the oracles to use `log_wasm`, which emits a `.wasm` and `.wat` file for each case if `log::debug` is enabled and prints a message with the names of the created files. Closes #3140. |
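Below is a minimal sketch of the logging behavior described above; this `log_wasm` is an illustrative stand-in rather than the exact fuzzing-crate helper, and the fixed file names are an assumption.
```rust
// Writes the module to disk in both binary and text form, but only when
// debug logging is enabled, mirroring the oracle behavior described above.
fn log_wasm(wasm: &[u8]) {
    if !log::log_enabled!(log::Level::Debug) {
        return;
    }
    let _ = std::fs::write("testcase.wasm", wasm);
    if let Ok(wat) = wasmprinter::print_bytes(wasm) {
        let _ = std::fs::write("testcase.wat", wat);
    }
    log::debug!("wrote testcase.wasm and testcase.wat");
}
```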
||
|
|
651a321f1a |
fuzz: add differential_spec fuzzing target
This new target compares the outputs of executing the first exported function of a Wasm module in Wasmtime and in the official Wasm spec interpreter (using the `wasm-spec-interpreter` crate). This is an initial step towards more fully-featured fuzzing (e.g. compare memories, add `v128`, add references, add other proposals, etc.) |
||
|
|
f3955fa62a |
refactor: rename DifferentialWasmiModuleConfig to SingleFunctionModuleConfig
Since we plan to reuse this configuration, we rename it and ensure it has at least one type (previously, generating zero types could result in invalid modules). |