72eda0c6ef94380bcb6d811f11242d67ecfb6373
303 Commits
| Author | SHA1 | Message | Date | |
|---|---|---|---|---|
|
|
72eda0c6ef |
Update wasmi to 0.20.0 in wasmtime-fuzzing (#5256)
* update wasmi to 0.20 in wasmtime-fuzzing
* add cargo-vet entries for wasmi_core 0.5.0 and wasmi 0.20.0 |
||
|
|
2be457c295 |
Change the return type of SharedMemory::data (#5240)
This commit is an attempt at improving the safety of using the return value of the `SharedMemory::data` method. Previously this returned `*mut [u8]` which, while correct, is unwieldy and unsafe to work with. The new return value of `&[UnsafeCell<u8>]` has a few advantages:
* The lifetime of the returned data is now connected to the `SharedMemory` itself, removing the possibility for a class of errors of accidentally using the prior `*mut [u8]` beyond its original lifetime.
* It's now possible to safely access `.len()`, as opposed to requiring an `unsafe` dereference before.
* The data internally within the slice is now what retains the `unsafe` bits, namely indicating that accessing any memory inside of the contents returned is `unsafe` but addressing it is safe.
I was inspired by the `wiggle`-based discussion on #5229 and felt it appropriate to apply a similar change here. |
||
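To illustrate the ergonomics of the new return type, here is a minimal sketch of an embedder reading from a shared memory. Only the shape of `data()` is taken from the commit above; the surrounding setup (`Config::wasm_threads`, `MemoryType::shared`, `SharedMemory::new`) is assumed boilerplate for the Wasmtime API of this era.

```rust
use wasmtime::{Config, Engine, MemoryType, SharedMemory};

fn main() -> anyhow::Result<()> {
    // Shared linear memories require the threads proposal to be enabled.
    let mut config = Config::new();
    config.wasm_threads(true);
    let engine = Engine::new(&config)?;

    let memory = SharedMemory::new(&engine, MemoryType::shared(1, 1))?;

    // `data` returns `&[UnsafeCell<u8>]`: the length and addresses are safe
    // to work with, only dereferencing the contents is `unsafe`.
    let data = memory.data();
    println!("shared memory is {} bytes long", data.len());

    // Reading a byte still requires `unsafe` because another thread could be
    // writing the same location concurrently.
    let first = unsafe { *data[0].get() };
    println!("first byte: {first}");
    Ok(())
}
```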
|
|
50cffad0d3 |
Implement support for dynamic memories in the pooling allocator (#5208)
* Implement support for dynamic memories in the pooling allocator
This is a continuation of the thrust in #5207 for reducing page faults and lock contention when using the pooling allocator. To that end this commit implements support for efficient memory management in the pooling allocator when using wasm that is instrumented with bounds checks.
The `MemoryImageSlot` type now avoids unconditionally shrinking memory back to its initial size during the `clear_and_remain_ready` operation, instead deferring optional resizing of memory to the subsequent call to `instantiate` when the slot is reused. The instantiation portion then takes the "memory style" as an argument which dictates whether the accessible memory must be precisely fit or whether it's allowed to exceed the maximum. This in effect enables skipping a call to `mprotect` to shrink the heap when dynamic memory checks are enabled.
In terms of page faults and contention this should improve the situation by:
* Fewer calls to `mprotect` since once a heap grows it stays grown and it never shrinks. This means that a write lock is taken within the kernel much more rarely than before (only asymptotically now, not N-times-per-instance).
* Accessed memory after a heap growth operation will not fault if it was previously paged in by a prior instance and set to zero with `memset`.
Unlike #5207, which requires a 6.0 kernel to see this optimization, this commit enables the optimization for any kernel. The major cost of choosing this strategy is naturally the performance hit of the wasm itself. This is being looked at in PRs such as #5190 to improve Wasmtime's story here.
This commit does not implement any new configuration options for Wasmtime but instead reinterprets existing configuration options. The pooling allocator no longer unconditionally sets `static_memory_bound_is_maximum`, and the support necessary for this memory type is implemented. The other change in this commit is that the `Tunables::static_memory_bound` configuration option no longer gates the creation of a `MemoryPool`, which will now appropriately size to `instance_limits.memory_pages` if the `static_memory_bound` is too small. This is done to accommodate fuzzing more easily, where the `static_memory_bound` will become small during fuzzing and otherwise the configuration would be rejected and require manual handling. The spirit of the `MemoryPool` is one of large virtual address space reservations anyway so it seemed reasonable to interpret the configuration this way.
* Skip zero memory_size cases
These are causing errors to happen when fuzzing and otherwise in theory shouldn't be too interesting to optimize for anyway since they likely aren't used in practice. |
||
|
|
d3a6181939 |
Add support for keeping pooling allocator pages resident (#5207)
When new wasm instances are created repeatedly in high-concurrency environments one of the largest bottlenecks is the contention on kernel-level locks having to do with virtual memory. It's expected that usage in this environment is leveraging the pooling instance allocator with the `memory-init-cow` feature enabled, which means that the kernel-level VM lock is acquired in operations such as:
1. Growing a heap with `mprotect` (write lock)
2. Faulting in memory during usage (read lock)
3. Resetting a heap's contents with `madvise` (read lock)
4. Shrinking a heap with `mprotect` when reusing a slot (write lock)
Rapid usage of these operations can lead to detrimental performance especially on otherwise heavily loaded systems, worsening the more frequent the above operations are. This commit is aimed at addressing the (2) case above, reducing the number of page faults that are fulfilled by the kernel. Currently these page faults happen for three reasons:
* When memory is first accessed after the heap is grown.
* When the initial linear memory image is accessed for the first time.
* When the initial zero'd heap contents, not part of the linear memory image, are accessed.
This PR is attempting to address the latter of these cases, and to a lesser extent the first case as well. Specifically this PR provides the ability to partially reset a pooled linear memory with `memset` rather than `madvise`. This is done to have the same effect of resetting contents to zero but has a different effect on paging, notably keeping the pages resident in memory rather than returning them to the kernel. This means that reuse of a linear memory slot on a page that was previously `memset` will not trigger a page fault since everything remains paged into the process. The end result is that any access to linear memory which has been touched by `memset` will no longer page fault on reuse. On more recent kernels (6.0+) this also means pages which were zero'd by `memset`, made inaccessible with `PROT_NONE`, and then made accessible again with `PROT_READ | PROT_WRITE` will not page fault. This can be common when a wasm instance grows its heap slightly, uses that memory, but then it's shrunk when the memory is reused for the next instance. Note that this kernel optimization requires a 6.0+ kernel. This same optimization is furthermore applied both to async stacks with the pooling memory allocator and to table elements. The defaults of Wasmtime are not changing with this PR; instead, knobs are being exposed for embedders to turn if they so desire. This is currently being experimented with at Fastly and I may come back and alter the defaults of Wasmtime if it seems suitable after our measurements. |
||
|
|
b14551d7ca |
Refactor configuration for the pooling allocator (#5205)
This commit changes the APIs in the `wasmtime` crate for configuring the pooling allocator. I plan on adding a few more configuration options in the near future and the current structure was feeling unwieldy for adding these new abstractions. The previous `struct`-based API has been replaced with a builder-style API in a similar shape to `Config`. This is done to help make it easier to add more configuration options in the future through adding more methods, as opposed to adding more fields, which could break prior initializations. |
||
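A sketch of what the builder-style configuration looks like from an embedder's point of view. The commit above does not list the setter names, so `instance_count` and `instance_memory_pages` below are assumptions for illustration; check the `PoolingAllocationConfig` documentation of the release you use.

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy, PoolingAllocationConfig};

fn main() -> anyhow::Result<()> {
    // Builder-style configuration, analogous to `Config` itself. The method
    // names below are assumptions for illustration; consult the docs of the
    // Wasmtime release you are using for the exact setters.
    let mut pooling = PoolingAllocationConfig::default();
    pooling.instance_count(100);
    pooling.instance_memory_pages(256);

    let mut config = Config::new();
    config.allocation_strategy(InstanceAllocationStrategy::Pooling(pooling));
    let _engine = Engine::new(&config)?;
    Ok(())
}
```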
|
|
2afaac5181 |
Return anyhow::Error from host functions instead of Trap, redesign Trap (#5149)
* Return `anyhow::Error` from host functions instead of `Trap` This commit refactors how errors are modeled when returned from host functions and additionally refactors how custom errors work with `Trap`. At a high level functions in Wasmtime that previously worked with `Result<T, Trap>` now work with `Result<T>` instead where the error is `anyhow::Error`. This includes functions such as: * Host-defined functions in a `Linker<T>` * `TypedFunc::call` * Host-related callbacks like call hooks Errors are now modeled primarily as `anyhow::Error` throughout Wasmtime. This subsequently removes the need for `Trap` to have the ability to represent all host-defined errors as it previously did. Consequently the `From` implementations for any error into a `Trap` have been removed here and the only embedder-defined way to create a `Trap` is to use `Trap::new` with a custom string. After this commit the distinction between a `Trap` and a host error is the wasm backtrace that it contains. Previously all errors in host functions would flow through a `Trap` and get a wasm backtrace attached to them, but now this only happens if a `Trap` itself is created meaning that arbitrary host-defined errors flowing from a host import to the other side won't get backtraces attached. Some internals of Wasmtime itself were updated or preserved to use `Trap::new` to capture a backtrace where it seemed useful, such as when fuel runs out. The main motivation for this commit is that it now enables hosts to thread a concrete error type from a host function all the way through to where a wasm function was invoked. Previously this could not be done since the host error was wrapped in a `Trap` that didn't provide the ability to get at the internals. A consequence of this commit is that when a host error is returned that isn't a `Trap` we'll capture a backtrace and then won't have a `Trap` to attach it to. To avoid losing the contextual information this commit uses the `Error::context` method to attach the backtrace as contextual information to ensure that the backtrace is itself not lost. This is a breaking change for likely all users of Wasmtime, but it's hoped to be a relatively minor change to workaround. Most use cases can likely change `-> Result<T, Trap>` to `-> Result<T>` and otherwise explicit creation of a `Trap` is largely no longer necessary. * Fix some doc links * add some tests and make a backtrace type public (#55) * Trap: avoid a trailing newline in the Display impl which in turn ends up with three newlines between the end of the backtrace and the `Caused by` in the anyhow Debug impl * make BacktraceContext pub, and add tests showing downcasting behavior of anyhow::Error to traps or backtraces * Remove now-unnecesary `Trap` downcasts in `Linker::module` * Fix test output expectations * Remove `Trap::i32_exit` This commit removes special-handling in the `wasmtime::Trap` type for the i32 exit code required by WASI. This is now instead modeled as a specific `I32Exit` error type in the `wasmtime-wasi` crate which is returned by the `proc_exit` hostcall. Embedders which previously tested for i32 exits now downcast to the `I32Exit` value. * Remove the `Trap::new` constructor This commit removes the ability to create a trap with an arbitrary error message. The purpose of this commit is to continue the prior trend of leaning into the `anyhow::Error` type instead of trying to recreate it with `Trap`. 
A subsequent simplification to `Trap` after this commit is that `Trap` will simply be an `enum` of trap codes with no extra information. This commit is doubly-motivated by the desire to always use the new `BacktraceContext` type instead of sometimes using that and sometimes using `Trap`. Most of the changes here were around updating `Trap::new` calls to `bail!` calls instead. Tests which assert particular error messages additionally often needed to use the `:?` formatter instead of the `{}` formatter because the former formats the whole `anyhow::Error` and the latter only formats the top-most error, which now contains the backtrace.
* Merge `Trap` and `TrapCode`
With prior refactorings there's no more need for `Trap` to be opaque or otherwise contain a backtrace. This commit pares down `Trap` to simply an `enum` which was the old `TrapCode`. All various tests and such were updated to handle this. The main consequence of this commit is that all errors have a `BacktraceContext` context attached to them. This unfortunately means that the backtrace is printed first before the error message or trap code, but given all the prior simplifications that seems worth it at this time.
* Rename `BacktraceContext` to `WasmBacktrace`
This feels like a better name given how this has turned out, and additionally this commit removes having both `WasmBacktrace` and `BacktraceContext`.
* Soup up documentation for errors and traps
* Fix build of the C API
Co-authored-by: Pat Hickey <pat@moreproductive.org> |
||
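A minimal sketch of the new error flow, assuming the post-#5149 API: host functions return `anyhow::Result<T>`, and the wasm frames can be recovered from the resulting error by downcasting to `WasmBacktrace`.

```rust
use anyhow::bail;
use wasmtime::{Engine, Linker, Module, Store, WasmBacktrace};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::new(
        &engine,
        r#"(module
             (import "host" "fail" (func $fail))
             (func (export "run") call $fail))"#,
    )?;

    let mut linker = Linker::new(&engine);
    // Host functions now return `anyhow::Result<T>` rather than `Result<T, Trap>`,
    // so arbitrary error types can flow back out to the caller.
    linker.func_wrap("host", "fail", || -> anyhow::Result<()> {
        bail!("host-side failure")
    })?;

    let mut store = Store::new(&engine, ());
    let instance = linker.instantiate(&mut store, &module)?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;

    let err = run.call(&mut store, ()).unwrap_err();
    // The wasm frames are attached as context and recovered by downcasting.
    if let Some(trace) = err.downcast_ref::<WasmBacktrace>() {
        println!("{trace:?}");
    }
    Ok(())
}
```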
|
|
bb11e61d75 |
Cleanup wasmi fuzzing code (#5140)
* cleanup wasmi fuzzing code
* apply rustfmt
* change Into<DiffValue> to From<WasmiValue> for DiffValue impl block
* add back unwrap in get_global and get_memory
* apply code review suggestions
* apply rustfmt
* fix spelling mistake
* fix spelling issue 2
It kinda is a mess when you cannot compile locally ... It would be great if we could disable the OCaml spec interpreter at build time because it has a more involved build setup than any other fuzzing target. |
||
|
|
bc3285e845 |
Update wasm-tools crates (#5130)
* Update wasm-tools crates
Mostly just a hygienic update, nothing major here.
* Fix fuzz compile
* Fix test expectations |
||
|
|
b3333bf9ea |
Cranelift: disable egraphs in fuzzing for now. (#5128)
* Cranelift: disable egraphs in fuzzing for now.
As per [this comment], with a few recent discussions it's become clear that we want to refactor egraphs in a way that will subsume, or make irrelevant, some of the recent fuzzbugs that have arisen (and likely lead to others, which we'll want to fix!). Rather than chase these down then refactor later, it probably makes sense not to spend the human time or fuzzing time doing so. This PR turns off egraphs support in fuzzing configurations for now, to be re-enabled later.
[this comment]: https://github.com/bytecodealliance/wasmtime/issues/5126#issuecomment-1291222515
* Disable in cranelift-fuzzgen as well. |
||
|
|
95f02eb67d |
Update wasmi used in differential fuzzing (#5104)
* Update `wasmi` used in differential fuzzing
Closes #4818
Closes #5102
* Add audits |
||
|
|
25bc12ec82 |
Add egraphs option to Wasmtime config, and add it to fuzzing config generation. (#5067)
* Add egraphs option to Wasmtime config, and add it to fuzzing config generation. This PR adds a wrapper method for Cranelift's `use_egraphs` setting to Wasmtime's `Config`, named `cranelift_use_egraphs` analogously to its existing `cranelift_opt_level`. Eventually this should become a no-op as egraph-based optimization becomes the default, but until then it makes sense to expose this as another kind of optimization option. This PR then adds the option to the `Arbitrary`-based config generation for fuzzing, so compilation with egraphs will be fuzzed (on its own and against other configurations and oracles). * Don't use `NamedTempFile` on Windows It looks like this prevents mmap-ing since the named temporary file holds a `File` open which conflicts with the rights we're trying to open the file for mmap-ing. Instead use a temporary directory to try to fix this issue. Co-authored-by: Alex Crichton <alex@alexcrichton.com> |
||
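Enabling the option from an embedder looks roughly like the following sketch; `cranelift_use_egraphs` is the method named in the commit above, and the surrounding setup is assumed boilerplate for the Wasmtime releases of this era.

```rust
use wasmtime::{Config, Engine, OptLevel};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Opt in to the egraph-based mid-end; expected to become a no-op once
    // egraph optimization is Cranelift's default.
    config.cranelift_use_egraphs(true);
    config.cranelift_opt_level(OptLevel::Speed);
    let _engine = Engine::new(&config)?;
    Ok(())
}
```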
|
|
78ecc17d0f |
unsplat component::Linker::func_wrap args (#5065)
* component::Linker::func_wrap: replace IntoComponentFunc with directly accepting a closure
We find that this makes the Linker::func_wrap type signature much easier to read. The IntoComponentFunc abstraction was adding a lot of weight to "splat" a set of arguments from a tuple of types into individual arguments to the closure. Additionally, making the StoreContextMut argument optional, or the Result<return> optional, wasn't very worthwhile.
* Fixes for the new style of closure required by component::Linker::func_wrap
* fix fuzzing generator |
||
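A sketch of the unsplatted closure shape described above: the closure now explicitly receives a `StoreContextMut` and a single tuple of parameters, and returns a `Result` of a tuple of results. The exact trait bounds are elided and the `add` function and `Host` state are hypothetical.

```rust
use wasmtime::component::Linker;
use wasmtime::{Config, Engine, StoreContextMut};

struct Host {
    hits: u32,
}

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.wasm_component_model(true);
    let engine = Engine::new(&config)?;

    let mut linker: Linker<Host> = Linker::new(&engine);
    // The closure takes an explicit `StoreContextMut` plus one tuple of
    // parameters, and returns a tuple of results.
    linker.root().func_wrap(
        "add",
        |mut cx: StoreContextMut<'_, Host>, (a, b): (u32, u32)| -> anyhow::Result<(u32,)> {
            cx.data_mut().hits += 1;
            Ok((a + b,))
        },
    )?;
    Ok(())
}
```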
|
|
6d1bce9c64 |
Adjust fuel consumption to be empty when fuel is 0 (#5013)
Co-authored-by: Jamey Sharp <jsharp@fastly.com> |
||
|
|
2607590d8c |
Update the wasm-tools family of crates (#5010)
* Update the wasm-tools family of crates
Only minor updates here, mostly internal changes and no binary-related changes today.
* Fix test expectation |
||
|
|
cdecc858b4 |
add riscv64 backend for cranelift. (#4271)
Add a RISC-V 64 (`riscv64`, RV64GC) backend. Co-authored-by: yuyang <756445638@qq.com> Co-authored-by: Chris Fallin <chris@cfallin.org> Co-authored-by: Afonso Bordado <afonsobordado@az8.co> |
||
|
|
29c7de7340 |
Update wasm-tools dependencies (#4970)
* Update wasm-tools dependencies This update brings in a number of features such as: * The component model binary format and AST has been slightly adjusted in a few locations. Names are dropped from parameters/results now in the internal representation since they were not used anyway. At this time the ability to bind a multi-return function has not been exposed. * The `wasmparser` validator pass will now share allocations with prior functions, providing what's probably a very minor speedup for Wasmtime itself. * The text format for many component-related tests now requires named parameters. * Some new relaxed-simd instructions are updated to be ignored. I hope to have a follow-up to expose the multi-return ability to the embedding API of components. * Update audit information for new crates |
||
|
|
7b311004b5 |
Leverage Cargo's workspace inheritance feature (#4905)
* Leverage Cargo's workspace inheritance feature This commit is an attempt to reduce the complexity of the Cargo manifests in this repository with Cargo's workspace-inheritance feature becoming stable in Rust 1.64.0. This feature allows specifying fields in the root workspace `Cargo.toml` which are then reused throughout the workspace. For example this PR shares definitions such as: * All of the Wasmtime-family of crates now use `version.workspace = true` to have a single location which defines the version number. * All crates use `edition.workspace = true` to have one default edition for the entire workspace. * Common dependencies are listed in `[workspace.dependencies]` to avoid typing the same version number in a lot of different places (e.g. the `wasmparser = "0.89.0"` is now in just one spot. Currently the workspace-inheritance feature doesn't allow having two different versions to inherit, so all of the Cranelift-family of crates still manually specify their version. The inter-crate dependencies, however, are shared amongst the root workspace. This feature can be seen as a method of "preprocessing" of sorts for Cargo manifests. This will help us develop Wasmtime but shouldn't have any actual impact on the published artifacts -- everything's dependency lists are still the same. * Fix wasi-crypto tests |
||
|
|
b8fa068ca8 |
Limit linear memories when fuzzing with pooling (#4918)
This commit limits the maximum number of linear memories when the pooling allocator is used to ensure that the virtual memory mapping for the pooling allocator itself can succeed. Currently there are a number of crashes in the differential fuzzer where the pooling allocator can't allocate its mapping because the maximum specified number of linear memories times the number of instances exceeds the address space presumably. |
||
|
|
c3f8415ac7 |
fuzz: improve the spec interpreter (#4881)
* fuzz: improve the API of the `wasm-spec-interpreter` crate This change addresses key parts of #4852 by improving the bindings to the OCaml spec interpreter. The new API allows users to `instantiate` a module, `interpret` named functions on that instance, and `export` globals and memories from that instance. This currently leaves the existing implementation ("instantiate and interpret the first function in a module") present under a new name: `interpret_legacy`. * fuzz: adapt the differential spec engine to the new API This removes the legacy uses in the differential spec engine, replacing them with the new `instantiate`-`interpret`-`export` API from the `wasm-spec-interpreter` crate. * fix: make instance access thread-safe This changes the OCaml-side definition of the instance so that each instance carries round a reference to a "global store" that's specific to that instantiation. Because everything is updated by reference there should be no visible behavioural change on the Rust side, apart from everything suddenly being thread-safe (modulo the fact that access to the OCaml runtime still needs to be locked). This fix will need to be generalised slightly in future if we want to allow multiple modules to be instantiated in the same store. Co-authored-by: conrad-watt <cnrdwtt@gmail.com> Co-authored-by: Alex Crichton <alex@alexcrichton.com> |
||
|
|
d8b290898c |
Initial forward-edge CFI implementation (#3693)
* Initial forward-edge CFI implementation Give the user the option to start all basic blocks that are targets of indirect branches with the BTI instruction introduced by the Branch Target Identification extension to the Arm instruction set architecture. Copyright (c) 2022, Arm Limited. * Refactor `from_artifacts` to avoid second `make_executable` (#1) This involves "parsing" twice but this is parsing just the header of an ELF file so it's not a very intensive operation and should be ok to do twice. * Address the code review feedback Copyright (c) 2022, Arm Limited. Co-authored-by: Alex Crichton <alex@alexcrichton.com> |
||
|
|
cd982c5a3f |
[fuzz] Add SIMD to single-instruction generator (#4778)
* [fuzz] Add SIMD to single-instruction generator This change extends the single-instruction generator with most of the SIMD instructions. Examples of instructions that were excluded are: all memory-related instructions, any instruction with an immediate. * [fuzz] Generate V128s with known values from each type To better cover the fuzzing search space, `DiffValue` will generate better known values for the `V128` type. First, it uses arbitrary data to select a sub-type (e.g., `I8x16`, `F32x4`, etc.) and then it fills in the bytes by generating biased values for each of the lanes. * [fuzz] Canonicalize NaN values in SIMD lanes This change ports the NaN canonicalization logic from `wasm-smith` ([here]) to the single-instruction generator. [here]: https://github.com/bytecodealliance/wasm-tools/blob/6c127a6/crates/wasm-smith/src/core/code_builder.rs#L927 |
||
|
|
a0e4bb0190 |
Prevent virtual memory OOM in spectest fuzzing (#4872)
This commit hard-codes the pooling allocator's limit of linear memories to 1 when used with fuzzing the spec tests themselves. This prevents the number from being set too high and hitting a virtual-memory-based OOM due to the virtual memory reservation of the pooling allocator being too large. |
||
|
|
543a487939 |
Throw out fewer fuzz inputs with differential fuzzer (#4859)
* Throw out fewer fuzz inputs with differential fuzzer Prior to this commit the differential fuzzer would generate a module and then select an engine to execute the module against Wasmtime. This meant, however, that the candidate list of engines were filtered against the configuration used to generate the module to ensure that the selected engine could run the generated module. This commit inverts this logic and instead selects an engine first, allowing the engine to then tweak the module configuration to ensure that the generated module is compatible with the engine selected. This means that fewer fuzz inputs are discarded because every fuzz input will result in an engine being executed. Internally the engine constructors have all been updated to update the configuration to work instead of filtering the configuration. Some other fixes were applied for the spec interpreter as well to work around #4852 * Fix tests |
||
|
|
10dbb19983 |
Various improvements to differential fuzzing (#4845)
* Improve wasmi differential fuzzer * Support modules with a `start` function * Implement trap-matching to ensure that wasmi and Wasmtime both report the same flavor of trap. * Support differential fuzzing where no engines match Locally I was attempting to run against just one wasm engine with `ALLOWED_ENGINES=wasmi` but the fuzzer quickly panicked because the generated test case didn't match wasmi's configuration. This commit updates engine-selection in the differential fuzzer to return `None` if no engine is applicable, throwing out the test case. This won't be hit at all with oss-fuzz-based runs but for local runs it'll be useful to have. * Improve proposal support in differential fuzzer * De-prioritize unstable wasm proposals such as multi-memory and memory64 by making them more unlikely with `Unstructured::ratio`. * Allow fuzzing multi-table (reference types) and multi-memory by avoiding setting their maximums to 1 in `set_differential_config`. * Update selection of the pooling strategy to unconditionally support the selected module config rather than the other way around. * Improve handling of traps in differential fuzzing This commit fixes an issue found via local fuzzing where engines were reporting different results but the underlying reason for this was that one engine was hitting stack overflow before the other. To fix the underlying issue I updated the execution to check for stack overflow and, if hit, it discards the entire fuzz test case from then on. The rationale behind this is that each engine can have unique limits for stack overflow. One test case I was looking at for example would stack overflow at less than 1000 frames with epoch interruption enabled but would stack overflow at more than 1000 frames with it disabled. This means that the state after the trap started to diverge and it looked like the engines produced different results. While I was at it I also improved the "function call returned a trap" case to compare traps to make sure the same trap reason popped out. * Fix fuzzer tests |
||
|
|
bca4dae8b0 |
feat: add a knob for reset stack (#4813)
* feat: add a knob for reset stack
* Touch up documentation of `async_stack_zeroing`
Co-authored-by: Alex Crichton <alex@alexcrichton.com> |
||
|
|
d3c463aac0 |
[fuzz] Configure the differential target (#4773)
This change is a follow-on from #4515 to add the ability to configure the `differential` fuzz target by limiting which engines and modules are used for fuzzing. This is incredibly useful when troubleshooting, e.g., when an engine is more prone to failure, we can target that engine exclusively. The effect of this configuration is visible in the statistics now printed out from #4739.
Engines are configured using the `ALLOWED_ENGINES` environment variable. We can either subtract from the set of allowed engines (e.g., `ALLOWED_ENGINES=-v8`) or build up a set of allowed engines (e.g., `ALLOWED_ENGINES=wasmi,spec`), but not both at the same time. `ALLOWED_ENGINES` only configures the left-hand side engine; the right-hand side is always Wasmtime. When omitted, `ALLOWED_ENGINES` defaults to [`wasmtime`, `wasmi`, `spec`, `v8`].
The generated WebAssembly modules are configured using `ALLOWED_MODULES`. This environment variable works the same as above but the available options are: [`wasm-smith`, `single-inst`]. |
||
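The selection rule described above can be summarized with a small Rust sketch; this is not the fuzz target's actual implementation, just the additive-versus-subtractive logic spelled out.

```rust
use std::env;

/// Compute the allowed left-hand-side engines from `ALLOWED_ENGINES`:
/// either a subtractive list (entries prefixed with `-`) or an additive
/// list, but never a mix of the two.
fn allowed_engines() -> Vec<String> {
    let default = ["wasmtime", "wasmi", "spec", "v8"];
    let Ok(value) = env::var("ALLOWED_ENGINES") else {
        return default.iter().map(|s| s.to_string()).collect();
    };
    let entries: Vec<&str> = value.split(',').filter(|e| !e.is_empty()).collect();
    let subtractive = entries.iter().all(|e| e.starts_with('-'));
    let additive = entries.iter().all(|e| !e.starts_with('-'));
    assert!(
        subtractive || additive,
        "ALLOWED_ENGINES cannot mix additive and subtractive entries"
    );
    if subtractive {
        // Start from the default set and remove the listed engines.
        let removed: Vec<&str> = entries.iter().map(|e| &e[1..]).collect();
        default
            .iter()
            .filter(|e| !removed.contains(*e))
            .map(|s| s.to_string())
            .collect()
    } else {
        // Use exactly the listed engines.
        entries.iter().map(|s| s.to_string()).collect()
    }
}

fn main() {
    println!("fuzzing against: {:?}", allowed_engines());
}
```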
|
|
b4c25ef63e |
[fuzz] Simplify macros used by single-instruction generator (#4774)
This removes the multiple macros used previously to describe the WebAssembly instruction signatures and replaces them with a single one--`inst!`. |
||
|
|
fd98814b96 |
Port v8 fuzzer to the new framework (#4739)
* Port v8 fuzzer to the new framework This commit aims to improve the support for the new "meta" differential fuzzer added in #4515 by ensuring that all existing differential fuzzing is migrated to this new fuzzer. This PR includes features such as: * The V8 differential execution is migrated to the new framework. * `Config::set_differential_config` no longer force-disables wasm features, instead allowing them to be enabled as per the fuzz input. * `DiffInstance::{hash, hash}` was replaced with `DiffInstance::get_{memory,global}` to allow more fine-grained assertions. * Support for `FuncRef` and `ExternRef` have been added to `DiffValue` and `DiffValueType`. For now though generating an arbitrary `ExternRef` and `FuncRef` simply generates a null value. * Arbitrary `DiffValue::{F32,F64}` values are guaranteed to use canonical NaN representations to fix an issue with v8 where with the v8 engine we can't communicate non-canonical NaN values through JS. * `DiffEngine::evaluate` allows "successful failure" for cases where engines can't support that particular invocation, for example v8 can't support `v128` arguments or return values. * Smoke tests were added for each engine to ensure that a simple wasm module works at PR-time. * Statistics printed from the main fuzzer now include percentage-rates for chosen engines as well as percentage rates for styles-of-module. There's also a few small refactorings here and there but mostly just things I saw along the way. * Update the fuzzing README |
||
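For reference, canonicalizing float bit patterns looks roughly like the sketch below; the constants are the standard canonical quiet-NaN encodings, and the function names are illustrative rather than the fuzzer's own.

```rust
// Collapse every NaN to the canonical quiet-NaN bit pattern so that values
// survive a round trip through engines (such as v8 via JS) that cannot
// preserve arbitrary NaN payloads.
fn canonicalize_f32(bits: u32) -> u32 {
    if f32::from_bits(bits).is_nan() {
        0x7fc0_0000 // canonical 32-bit NaN
    } else {
        bits
    }
}

fn canonicalize_f64(bits: u64) -> u64 {
    if f64::from_bits(bits).is_nan() {
        0x7ff8_0000_0000_0000 // canonical 64-bit NaN
    } else {
        bits
    }
}

fn main() {
    assert_eq!(canonicalize_f32(0x7fc0_dead), 0x7fc0_0000);
    assert_eq!(canonicalize_f64(1.5f64.to_bits()), 1.5f64.to_bits());
}
```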
|
|
9758f5420e |
[fuzz] Remove more fuzz targets (#4737)
* [fuzz] Remove the `differential` fuzz target
This functionality is already covered by the `differential_meta` target.
* [fuzz] Rename `differential_meta` to `differential`
Now that the `differential_meta` fuzz target does everything that the existing `differential` target did and more, it can take over the original name. |
||
|
|
8b7fb19b1d |
[fuzz] Remove some differential fuzz targets (#4735)
* [fuzz] Remove some differential fuzz targets
The changes in #4515 do everything the `differential_spec` and `differential_wasmi` fuzz targets already do. These fuzz targets are now redundant and this PR removes them. It also updates the fuzz documentation slightly. |
||
|
|
5ec92d59d2 |
[fuzz] Add a meta-differential fuzz target (#4515)
* [fuzz] Add `Module` enum, refactor `ModuleConfig` This change adds a way to create either a single-instruction module or a regular (big) `wasm-smith` module. It has some slight refactorings in preparation for the use of this new code. * [fuzz] Add `DiffValue` for differential evaluation In order to evaluate functions with randomly-generated values, we needed a common way to generate these values. Using the Wasmtime `Val` type is not great because we would like to be able to implement various traits on the new value type, e.g., to convert `Into` and `From` boxed values of other engines we differentially fuzz against. This new type, `DiffValue`, gives us a common ground for all the conversions and comparisons between the other engine types. * [fuzz] Add interface for differential engines In order to randomly choose an engine to fuzz against, we expect all of the engines to meet a common interface. The traits in this commit allow us to instantiate a module from its binary form, evaluate exported functions, and (possibly) hash the exported items of the instance. This change has some missing pieces, though: - the `wasm-spec-interpreter` needs some work to be able to create instances, evaluate a function by name, and expose exported items - the `v8` engine is not implemented yet due to the complexity of its Rust lifetimes * [fuzz] Use `ModuleFeatures` instead of existing configuration When attempting to use both wasm-smith and single-instruction modules, there is a mismatch in how we communicate what an engine must be able to support. In the first case, we could use the `ModuleConfig`, a wrapper for wasm-smith's `SwarmConfig`, but single-instruction modules do not have a `SwarmConfig`--the many options simply don't apply. Here, we instead add `ModuleFeatures` and adapt a `ModuleConfig` to that. `ModuleFeatures` then becomes the way to communicate what features an engine must support to evaluate functions in a module. * [fuzz] Add a new fuzz target using the meta-differential oracle This change adds the `differential_meta` target to the list of fuzz targets. I expect that sometime soon this could replace the other `differential*` targets, as it almost checks all the things those check. The major missing piece is that currently it only chooses single-instruction modules instead of also generating arbitrary modules using `wasm-smith`. Also, this change adds the concept of an ignorable error: some differential engines will choke with certain inputs (e.g., `wasmi` might have an old opcode mapping) which we do not want to flag as fuzz bugs. Here we wrap those errors in `DiffIgnoreError` and then use a new helper trait, `DiffIgnorable`, to downcast and inspect the `anyhow` error to only panic on non-ignorable errors; the ignorable errors are converted to one of the `arbitrary::Error` variants, which we already ignore. * [fuzz] Compare `DiffValue` NaNs more leniently Because arithmetic NaNs can contain arbitrary payload bits, checking that two differential executions should produce the same result should relax the comparison of the `F32` and `F64` types (and eventually `V128` as well... TODO). This change adds several considerations, however, so that in the future we make the comparison a bit stricter, e.g., re: canonical NaNs. This change, however, just matches the current logic used by other fuzz targets. 
* review: allow hashing mutate the instance state @alexcrichton requested that the interface be adapted to accommodate Wasmtime's API, in which even reading from an instance could trigger mutation of the store. * review: refactor where configurations are made compatible See @alexcrichton's [suggestion](https://github.com/bytecodealliance/wasmtime/pull/4515#discussion_r928974376). * review: convert `DiffValueType` using `TryFrom` See @alexcrichton's [comment](https://github.com/bytecodealliance/wasmtime/pull/4515#discussion_r928962394). * review: adapt target implementation to Wasmtime-specific RHS This change is joint work with @alexcrichton to adapt the structure of the fuzz target to his comments [here](https://github.com/bytecodealliance/wasmtime/pull/4515#pullrequestreview-1073247791). This change: - removes `ModuleFeatures` and the `Module` enum (for big and small modules) - upgrades `SingleInstModule` to filter out cases that are not valid for a given `ModuleConfig` - adds `DiffEngine::name()` - constructs each `DiffEngine` using a `ModuleConfig`, eliminating `DiffIgnoreError` completely - prints an execution rate to the `differential_meta` target Still TODO: - `get_exported_function_signatures` could be re-written in terms of the Wasmtime API instead `wasmparser` - the fuzzer crashes eventually, we think due to the signal handler interference between OCaml and Wasmtime - the spec interpreter has several cases that we skip for now but could be fuzzed with further work Co-authored-by: Alex Crichton <alex@alexcrichton.com> * fix: avoid SIGSEGV by explicitly initializing OCaml runtime first * review: use Wasmtime's API to retrieve exported functions Co-authored-by: Alex Crichton <alex@alexcrichton.com> |
||
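A simplified sketch of the trait shape described above (instantiate a module, evaluate exported functions, read exported state). The names mirror the commit message, but the signatures here are illustrative rather than the actual `wasmtime-fuzzing` definitions.

```rust
use anyhow::Result;

/// A common value representation shared by all engines under differential
/// test; a simplified stand-in for the `DiffValue` idea described above.
#[derive(Debug, Clone, PartialEq)]
enum DiffValue {
    I32(i32),
    I64(i64),
    F32(u32), // stored as raw bits so NaN payloads compare exactly
    F64(u64),
}

/// Engine-side interface: build an instance from a wasm binary.
trait DiffEngine {
    fn name(&self) -> &'static str;
    fn instantiate(&mut self, wasm: &[u8]) -> Result<Box<dyn DiffInstance>>;
}

/// Instance-side interface: run exported functions and read exported state.
/// `evaluate` returns `None` when the engine cannot support this invocation.
trait DiffInstance {
    fn evaluate(&mut self, function: &str, args: &[DiffValue]) -> Result<Option<Vec<DiffValue>>>;
    fn get_global(&mut self, name: &str) -> Option<DiffValue>;
    fn get_memory(&mut self, name: &str) -> Option<Vec<u8>>;
}

/// Differential check: identical inputs must produce identical results.
fn assert_same(
    lhs: &mut dyn DiffInstance,
    rhs: &mut dyn DiffInstance,
    function: &str,
    args: &[DiffValue],
) -> Result<()> {
    let a = lhs.evaluate(function, args)?;
    let b = rhs.evaluate(function, args)?;
    anyhow::ensure!(a == b, "engines disagree: {a:?} vs {b:?}");
    Ok(())
}

fn main() {}
```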
|
|
1481721c9d |
Enable back-edge CFI by default on macOS (#4720)
Also, adjust the tests that are executed on that platform. Finally, fix a bug with obtaining backtraces when back-edge CFI is enabled. Copyright (c) 2022, Arm Limited. |
||
|
|
57dca934ad |
Upgrade wasm-tools crates, namely the component model (#4715)
* Upgrade wasm-tools crates, namely the component model This commit pulls in the latest versions of all of the `wasm-tools` family of crates. There were two major changes that happened in `wasm-tools` in the meantime: * bytecodealliance/wasm-tools#697 - this commit introduced a new API for more efficiently reading binary operators from a wasm binary. The old `Operator`-based reading was left in place, however, and continues to be what Wasmtime uses. I hope to update Wasmtime in a future PR to use this new API, but for now the biggest change is... * bytecodealliance/wasm-tools#703 - this commit was a major update to the component model AST. This commit almost entirely deals with the fallout of this change. The changes made to the component model were: 1. The `unit` type no longer exists. This was generally a simple change where the `Unit` case in a few different locations were all removed. 2. The `expected` type was renamed to `result`. This similarly was relatively lightweight and mostly just a renaming on the surface. I took this opportunity to rename `val::Result` to `val::ResultVal` and `types::Result` to `types::ResultType` to avoid clashing with the standard library types. The `Option`-based types were handled with this as well. 3. The payload type of `variant` and `result` types are now optional. This affected many locations that calculate flat type representations, ABI information, etc. The `#[derive(ComponentType)]` macro now specifically handles Rust-defined `enum` types which have no payload to the equivalent in the component model. 4. Functions can now return multiple parameters. This changed the signature of invoking component functions because the return value is now bound by `ComponentNamedList` (renamed from `ComponentParams`). This had a large effect in the tests, fuzz test case generation, etc. 5. Function types with 2-or-more parameters/results must uniquely name all parameters/results. This mostly affected the text format used throughout the tests. I haven't added specifically new tests for multi-return but I changed a number of tests to use it. Additionally I've updated the fuzzers to all exercise multi-return as well so I think we should get some good coverage with that. * Update version numbers * Use crates.io |
||
|
|
2696462ccb |
Limit the size of functions in the stacks fuzzer (#4727)
* Limit the size of functions in the `stacks` fuzzer
The fuzzers recently found a timeout in this fuzz test case related to the compile time of the generated module. Inspecting the generated module showed that it had 100k+ opcodes for one function, so this commit updates the fuzzer to limit the number of operations per-function to a smaller amount to avoid timeout limits.
* Use `arbitrary_len` for `ops` length
* Fix a max/min flip |
||
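Bounding a generated list with `arbitrary_len` looks roughly like this sketch, assuming the `arbitrary` crate with its `derive` feature enabled; the `Op` type and the cap value are made up for illustration.

```rust
use arbitrary::{Arbitrary, Result, Unstructured};

const MAX_OPS_PER_FUNCTION: usize = 1_000;

// A stand-in for the fuzzer's per-function operation type.
#[derive(Debug, Arbitrary)]
enum Op {
    Call(u8),
    Return,
}

/// Generate a bounded list of ops: `arbitrary_len` sizes the list from the
/// remaining fuzz input, and the explicit cap keeps compile times in check.
fn arbitrary_ops(u: &mut Unstructured<'_>) -> Result<Vec<Op>> {
    let len = std::cmp::min(u.arbitrary_len::<Op>()?, MAX_OPS_PER_FUNCTION);
    (0..len).map(|_| u.arbitrary()).collect()
}

fn main() -> Result<()> {
    let mut u = Unstructured::new(&[1, 2, 3, 4, 5, 6, 7, 8]);
    println!("{:?}", arbitrary_ops(&mut u)?);
    Ok(())
}
```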
|
|
5add267b87 |
Fix a soundness issue with lowering variants (#4723)
* Fix a compile error on nightly Rust It looks like Rust nightly has gotten a bit more strict about attributes-on-expressions and previously accepted code is no longer accepted. This commit updates the generated code for a macro to a form which is accepted by rustc. * Fix a soundness issue with lowering variants This commit fixes a soundness issue lowering variants in the component model where host memory could be leaked to the guest module by accident. In reviewing code recently for `Val::lower` I noticed that the variant lowering was extending the payload with `ValRaw::u32(0)` to appropriately fit the size of the variant. In reading this it appeared incorrect to me due to the fact that it should be `ValRaw::u64(0)` since up to 64-bits can be read. Additionally this implementation was also incorrect because the lowered representation of the payload itself was not possibly zero-extended to 64-bits to accommodate other variants. It turned out these issues were benign because with the dynamic surface area to the component model the arguments were all initialized to 0 anyway. The static version of the API, however, does not initialize arguments to 0 and I wanted to initially align these two implementations so I updated the variant implementation of lowering for dynamic values and removed the zero-ing of arguments. To test this change I updated the `debug` mode of adapter module generation to assert that the upper bits of values in wasm are always zero when the value is casted down (during `stack_get` which only happens with variants). I then threaded through the `debug` boolean configuration parameter into the dynamic and static fuzzers. To my surprise this new assertion tripped even after the fix was applied. It turns out, though, that there was other leakage of bits through other means that I was previously unaware of. At the primitive level lowerings of types like `u32` will have a `Lower` representation of `ValRaw` and the lowering is simply `dst.write(ValRaw::i32(self))`, or the equivalent thereof. The problem, that the fuzzers detected, with this pattern is that the `ValRaw` type is 16-bytes, and `ValRaw::i32(X)` only initializes the first 4. This meant that all the lowerings for all primitives were writing up to 12 bytes of garbage from the host for the wasm module to read. It turned out that this write of a `ValRaw` was sometimes 16 bytes and sometimes the appropriate size depending on the number of optimizations in play. With enough inlining for example `dst.write(ValRaw::i32(self))` would only write 4 bytes, as expected. In debug mode though without inlining 16 bytes would be written, including the garbage from the upper bits. To solve this issue I ended up taking a somewhat different approach. I primarily updated the `ValRaw` constructors to simply always extend the values internally to 64-bits, meaning that the low 8 bytes of a `ValRaw` is always initialized. This prevents any undefined data from leaking from the host into a wasm module, and means that values are also zero-extended even if they're only used in 32-bit contexts outside of a variant. This felt like the best fix for now, though, in terms of not really having a performance impact while additionally not requiring a rewrite of all lowerings. This solution ended up also neatly removing the "zero out the entire payload" logic that was previously require. Now after a payload is lowered only the tail end of the payload, up to the size of the variant, is zeroed out. 
This means that each lowered argument is written to at most once which should hopefully be a small performance boost for calling into functions as well. |
||
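The essence of the fix is easiest to see in a standalone sketch: if the constructor always widens to 64 bits, the full low 8 bytes of the slot are initialized and no host memory can leak through unwritten padding. `RawSlot` below is an illustration, not wasmtime's actual `ValRaw` definition.

```rust
// A 16-byte-ish raw value slot, simplified to the fields needed here. The
// constructors always widen to 64 bits so bytes 4..8 are written even when
// the caller only ever reads the value back at its narrower type.
#[derive(Copy, Clone)]
#[repr(C)]
union RawSlot {
    i32_: i64, // stored sign-extended to 64 bits
    i64_: i64,
    f32_: u64, // stored zero-extended to 64 bits
    f64_: u64,
}

impl RawSlot {
    fn i32(value: i32) -> RawSlot {
        RawSlot { i32_: i64::from(value) }
    }

    fn f32(value: u32) -> RawSlot {
        RawSlot { f32_: u64::from(value) }
    }

    fn get_i32(&self) -> i32 {
        unsafe { self.i32_ as i32 }
    }
}

fn main() {
    let slot = RawSlot::i32(-1);
    assert_eq!(slot.get_i32(), -1);
    // The upper half is defined (the sign extension), not leftover host data.
    assert_eq!(unsafe { slot.i64_ }, -1);

    let float = RawSlot::f32(f32::to_bits(1.5));
    // Bits 32..64 are zero rather than uninitialized.
    assert_eq!(unsafe { float.f64_ }, u64::from(f32::to_bits(1.5)));
}
```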
|
|
c4fd6a95da |
[fuzz] Remove unnecessary allocation (#4689)
This resolves a comment @jameysharp made in a previous PR. |
||
|
|
c3e31c9946 |
[fuzz] Document Wasm-JS conversions (#4683)
During differential execution against V8, Wasm values need to be converted back and forth from JS values. This change documents the location in the specification where this is defined. |
||
|
|
354daf5b93 |
[fuzz] Fix issues with single-inst module generator (#4674)
* [fuzz] Fix signature of `i64.extend32_s` single-instruction test
This single-instruction test incorrectly attempted to convert an `i32` to an `i64`; the correct signature is `i64 -> i64`. See the [WebAssembly specification](https://webassembly.github.io/spec/core/bikeshed/#a7-index-of-instructions).
* [fuzz] Fix typo in single-instruction function generator
Previously, the `unary!` macro created functions that used two operands instead of the expected one. |
||
|
|
7fa89c4a4f |
[fuzz] Fix order of operands passed in to wasm-spec-interpreter (#4672)
In #4671, the meta-differential fuzz target was finding errors when running certain Wasm modules (specifically `shr_s` in that case). @conrad-watt diagnosed the issue as a missing reversal in the operands passed to the spec interpreter. This change fixes #4671 and adds an additional unit test to keep it fixed. |
||
|
|
ec47335b9c |
wasmtime: Add a Config::native_unwind_info method (#4643)
This method configures whether native unwind information (e.g. `.eh_frame` on Linux) is generated or not. This helps integrate with third-party stack capturing tools, such as the system unwinder or the `backtrace` crate. It does not affect whether Wasmtime can capture stack traces in Wasm code that it is running or not. Unwind info is always enabled on Windows, since the Windows ABI requires it. This configuration option defaults to true. Additionally, we deprecate `Config::wasm_backtrace` since we can always cheaply capture stack traces ever since https://github.com/bytecodealliance/wasmtime/pull/4431. Fixes https://github.com/bytecodealliance/wasmtime/issues/4554 |
||
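Using the new knob from an embedder looks like the sketch below; `native_unwind_info` is the method named in the commit, and everything else is assumed boilerplate.

```rust
use wasmtime::{Config, Engine};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Keep generating `.eh_frame`-style unwind info (the default) so external
    // profilers and the system unwinder can walk through JIT frames; setting
    // this to `false` drops that metadata without affecting Wasmtime's own
    // backtrace capture.
    config.native_unwind_info(true);
    let _engine = Engine::new(&config)?;
    Ok(())
}
```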
|
|
866ec46613 |
Implement roundtrip fuzzing of component adapters (#4640)
* Improve the `component_api` fuzzer on a few dimensions * Update the generated component to use an adapter module. This involves two core wasm instances communicating with each other to test that data flows through everything correctly. The intention here is to fuzz the fused adapter compiler. String encoding options have been plumbed here to exercise differences in string encodings. * Use `Cow<'static, ...>` and `static` declarations for each static test case to try to cut down on rustc codegen time. * Add `Copy` to derivation of fuzzed enums to make `derive(Clone)` smaller. * Use `Store<Box<dyn Any>>` to try to cut down on codegen by monomorphizing fewer `Store<T>` implementation. * Add debug logging to print out what's flowing in and what's flowing out for debugging failures. * Improve `Debug` representation of dynamic value types to more closely match their Rust counterparts. * Fix a variant issue with adapter trampolines Previously the offset of the payload was calculated as the discriminant aligned up to the alignment of a singular case, but instead this needs to be aligned up to the alignment of all cases to ensure all cases start at the same location. * Fix a copy/paste error when copying masked integers A 32-bit load was actually doing a 16-bit load by accident since it was copied from the 16-bit load-and-mask case. * Fix f32/i64 conversions in adapter modules The adapter previously erroneously converted the f32 to f64 and then to i64, where instead it should go from f32 to i32 to i64. * Fix zero-sized flags in adapter modules This commit corrects the size calculation for zero-sized flags in adapter modules. cc #4592 * Fix a variant size calculation bug in adapters This fixes the same issue found with variants during normal host-side fuzzing earlier where the size of a variant needs to align up the summation of the discriminant and the maximum case size. * Implement memory growth in libc bump realloc Some fuzz-generated test cases are copying lists large enough to exceed one page of memory so bake in a `memory.grow` to the bump allocator as well. * Avoid adapters of exponential size This commit is an attempt to avoid adapters being exponentially sized with respect to the type hierarchy of the input. Previously all adaptation was done inline within each adapter which meant that if something was structured as `tuple<T, T, T, T, ...>` the translation of `T` would be inlined N times. For very deeply nested types this can quickly create an exponentially sized adapter with types of the form: (type $t0 (list u8)) (type $t1 (tuple $t0 $t0)) (type $t2 (tuple $t1 $t1)) (type $t3 (tuple $t2 $t2)) ;; ... where the translation of `t4` has 8 different copies of translating `t0`. This commit changes the translation of types through memory to almost always go through a helper function. The hope here is that it doesn't lose too much performance because types already reside in memory. This can still lead to exponentially sized adapter modules to a lesser degree where if the translation all happens on the "stack", e.g. via `variant`s and their flat representation then many copies of one translation could still be made. For now this commit at least gets the problem under control for fuzzing where fuzzing doesn't trivially find type hierarchies that take over a minute to codegen the adapter module. One of the main tricky parts of this implementation is that when a function is generated the index that it will be placed at in the final module is not known at that time. 
To solve this the encoded form of the `Call` instruction is saved in a relocation-style format where the `Call` isn't encoded but instead saved into a different area for encoding later. When the entire adapter module is encoded to wasm these pseudo-`Call` instructions are encoded as real instructions at that time. * Fix some memory64 issues with string encodings Introduced just before #4623 I had a few mistakes related to 64-bit memories and mixing 32/64-bit memories. * Actually insert into the `translate_mem_funcs` map This... was the whole point of having the map! * Assert memory growth succeeds in bump allocator |
||
|
|
ed8908efcf |
implement fuzzing for component types (#4537)
This addresses #4307. For the static API we generate 100 arbitrary test cases at build time, each of which includes 0-5 parameter types, a result type, and a WAT fragment containing an imported function and an exported function. The exported function calls the imported function, which is implemented by the host. At runtime, the fuzz test selects a test case at random and feeds it zero or more sets of arbitrary parameters and results, checking that values which flow host-to-guest and guest-to-host make the transition unchanged. The fuzz test for the dynamic API follows a similar pattern, the only difference being that test cases are generated at runtime. Signed-off-by: Joel Dice <joel.dice@fermyon.com> |
||
|
|
f587b10eb9 |
Reduce wasm invocations in the stacks fuzzer (#4595)
On oss-fuzz a test case has been found that executes 30k iterations of a wasm trap which with a 60s timeout leaves 2ms for each invocation which under fuzzing instrumentation is a bit of a stretch with a ~20x slowdown. This commit places a limit on the number of inputs to the fuzzer at 200 to keep it reasonably sized. |
||
|
|
edf7f9f2bb |
wasmtime: Add lots of logging for externrefs and table_ops fuzz target (#4583)
I essentially add these same logs back in every time I'm debugging something related to this fuzz target or `externref`s in general. Probably like 5 times I've added roughly these logs. We should just make them available whenever we need them via `RUST_LOG=wasmtime_runtime=trace`. This also changes a couple `if let`s to `unwrap`s that are now infallible after |
||
|
|
05e6abf2f6 |
Fix the stacks fuzzer in the face of stack overflow (#4557)
When the `stacks` fuzzer hits a stack overflow the trace generated by Wasmtime will have one more frame than the trace generated by the wasm itself. This comes about due to the wasm not actually pushing the final frame when it stack overflows. The host, however, will still see the final frame that triggered the stack overflow. In this situation the fuzzer asserts that the host has one extra frame and then discards the frame. |
||
|
|
46782b18c2 |
wasmtime: Implement fast Wasm stack walking (#4431)
* Always preserve frame pointers in Wasmtime
This allows us to efficiently and simply capture Wasm stacks without maintaining
and synchronizing any safety-critical side tables between the compiler and the
runtime.
* wasmtime: Implement fast Wasm stack walking
Why do we want Wasm stack walking to be fast? Because we capture stacks whenever
there is a trap and traps actually happen fairly frequently with short-lived
programs and WASI's `exit`.
Previously, we would rely on generating the system unwind info (e.g.
`.eh_frame`) and using the system unwinder (via the `backtrace` crate) to walk
the full stack and filter out any non-Wasm stack frames. This can,
unfortunately, be slow for two primary reasons:
1. The system unwinder is doing `O(all-kinds-of-frames)` work rather than
`O(wasm-frames)` work.
2. System unwind info and the system unwinder need to be much more general than
a purpose-built stack walker for Wasm needs to be. It has to handle any kind of
stack frame that any compiler might emit, whereas our Wasm frames are emitted by
Cranelift and always have frame pointers. This translates into implementation
complexity and general overhead. There can also be unnecessary-for-our-use-cases
global synchronization and locks involved, further slowing down stack walking in
the presence of multiple threads trying to capture stacks in parallel.
This commit introduces a purpose-built stack walker for traversing just our Wasm
frames. To find all the sequences of Wasm-to-Wasm stack frames, and ignore
non-Wasm stack frames, we keep a linked list of `(entry stack pointer, exit
frame pointer)` pairs. This linked list is maintained via Wasm-to-host and
host-to-Wasm trampolines. Within a sequence of Wasm-to-Wasm calls, we can use
frame pointers (which Cranelift preserves) to find the next older Wasm frame on
the stack, and we keep doing this until we reach the entry stack pointer,
meaning that the next older frame will be a host frame.
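The walk described in the preceding paragraph can be sketched with fake memory standing in for the real stack; the real walker chases saved frame pointers in process memory, but the termination condition (stop once the next frame would lie at or above the recorded entry stack pointer, i.e. it belongs to the host) is the same idea. The addresses and `Frame` type below are purely illustrative.
```rust
use std::collections::HashMap;

// A saved frame on the fake stack: the return address recorded in the frame
// plus the older frame pointer that it links to.
struct Frame {
    return_address: usize,
    older_fp: usize,
}

fn walk_wasm_frames(
    stack: &HashMap<usize, Frame>,
    exit_fp: usize,
    entry_sp: usize,
    mut visit: impl FnMut(usize),
) {
    let mut fp = exit_fp;
    // Keep following saved frame pointers while the frame is still below the
    // entry stack pointer; anything at or above it is a host frame.
    while fp != 0 && fp < entry_sp {
        let frame = &stack[&fp];
        visit(frame.return_address);
        fp = frame.older_fp;
    }
}

fn main() {
    // Three Wasm frames below a host frame (stack grows down: lower
    // addresses are younger frames).
    let mut stack = HashMap::new();
    stack.insert(0x1000, Frame { return_address: 0xa1, older_fp: 0x1100 });
    stack.insert(0x1100, Frame { return_address: 0xa2, older_fp: 0x1200 });
    stack.insert(0x1200, Frame { return_address: 0xa3, older_fp: 0x2000 });

    let entry_sp = 0x1300; // recorded when the host called into Wasm
    let exit_fp = 0x1000; // recorded when Wasm called back out to the host
    walk_wasm_frames(&stack, exit_fp, entry_sp, |pc| println!("wasm pc: {pc:#x}"));
}
```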
The trampolines need to avoid a couple stumbling blocks. First, they need to be
compiled ahead of time, since we may not have access to a compiler at
runtime (e.g. if the `cranelift` feature is disabled) but still want to be able
to call functions that have already been compiled and get stack traces for those
functions. Usually this means we would compile the appropriate trampolines
inside `Module::new` and the compiled module object would hold the
trampolines. However, we *also* need to support calling host functions that are
wrapped into `wasmtime::Func`s and there doesn't exist *any* ahead-of-time
compiled module object to hold the appropriate trampolines:
```rust
// Define a host function.
let func_type = wasmtime::FuncType::new(
    vec![wasmtime::ValType::I32],
    vec![wasmtime::ValType::I32],
);
let func = Func::new(&mut store, func_type, |_, params, results| {
    // ...
    Ok(())
});
// Call that host function.
let mut results = vec![wasmtime::Val::I32(0)];
func.call(&[wasmtime::Val::I32(0)], &mut results)?;
```
Therefore, we define one host-to-Wasm trampoline and one Wasm-to-host trampoline
in assembly that work for all Wasm and host function signatures. These
trampolines are careful to only use volatile registers, avoid touching any
register that is an argument in the calling convention ABI, and tail call to the
target callee function. This allows forwarding any set of arguments and any
returns to and from the callee, while also allowing us to maintain our linked
list of Wasm stack and frame pointers before transferring control to the
callee. These trampolines are not used in Wasm-to-Wasm calls, only when crossing
the host-Wasm boundary, so they do not impose overhead on regular calls. (And if
using one trampoline for all host-Wasm boundary crossing ever breaks branch
prediction enough in the CPU to become any kind of bottleneck, we can do fun
things like have multiple copies of the same trampoline and choose a random copy
for each function, sharding the functions across branch predictor entries.)
Finally, this commit also ends the use of a synthetic `Module` and allocating a
stubbed out `VMContext` for host functions. Instead, we define a
`VMHostFuncContext` with its own magic value, similar to `VMComponentContext`,
specifically for host functions.
**Benchmarks**
**Traps and Stack Traces**
Large improvements to taking stack traces on traps, ranging from shaving off 64%
to 99.95% of the time it used to take.
```
multi-threaded-traps/0 time: [2.5686 us 2.5808 us 2.5934 us]
thrpt: [0.0000 elem/s 0.0000 elem/s 0.0000 elem/s]
change:
time: [-85.419% -85.153% -84.869%] (p = 0.00 < 0.05)
thrpt: [+560.90% +573.56% +585.84%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
4 (4.00%) high mild
4 (4.00%) high severe
multi-threaded-traps/1 time: [2.9021 us 2.9167 us 2.9322 us]
thrpt: [341.04 Kelem/s 342.86 Kelem/s 344.58 Kelem/s]
change:
time: [-91.455% -91.294% -91.096%] (p = 0.00 < 0.05)
thrpt: [+1023.1% +1048.6% +1070.3%]
Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
1 (1.00%) high mild
5 (5.00%) high severe
multi-threaded-traps/2 time: [2.9996 us 3.0145 us 3.0295 us]
thrpt: [660.18 Kelem/s 663.47 Kelem/s 666.76 Kelem/s]
change:
time: [-94.040% -93.910% -93.762%] (p = 0.00 < 0.05)
thrpt: [+1503.1% +1542.0% +1578.0%]
Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
5 (5.00%) high severe
multi-threaded-traps/4 time: [5.5768 us 5.6052 us 5.6364 us]
thrpt: [709.68 Kelem/s 713.63 Kelem/s 717.25 Kelem/s]
change:
time: [-93.193% -93.121% -93.052%] (p = 0.00 < 0.05)
thrpt: [+1339.2% +1353.6% +1369.1%]
Performance has improved.
multi-threaded-traps/8 time: [8.6408 us 9.1212 us 9.5438 us]
thrpt: [838.24 Kelem/s 877.08 Kelem/s 925.84 Kelem/s]
change:
time: [-94.754% -94.473% -94.202%] (p = 0.00 < 0.05)
thrpt: [+1624.7% +1709.2% +1806.1%]
Performance has improved.
multi-threaded-traps/16 time: [10.152 us 10.840 us 11.545 us]
thrpt: [1.3858 Melem/s 1.4760 Melem/s 1.5761 Melem/s]
change:
time: [-97.042% -96.823% -96.577%] (p = 0.00 < 0.05)
thrpt: [+2821.5% +3048.1% +3281.1%]
Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
many-modules-registered-traps/1
time: [2.6278 us 2.6361 us 2.6447 us]
thrpt: [378.11 Kelem/s 379.35 Kelem/s 380.55 Kelem/s]
change:
time: [-85.311% -85.108% -84.909%] (p = 0.00 < 0.05)
thrpt: [+562.65% +571.51% +580.76%]
Performance has improved.
Found 9 outliers among 100 measurements (9.00%)
3 (3.00%) high mild
6 (6.00%) high severe
many-modules-registered-traps/8
time: [2.6294 us 2.6460 us 2.6623 us]
thrpt: [3.0049 Melem/s 3.0235 Melem/s 3.0425 Melem/s]
change:
time: [-85.895% -85.485% -85.022%] (p = 0.00 < 0.05)
thrpt: [+567.63% +588.95% +608.95%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
3 (3.00%) high mild
5 (5.00%) high severe
many-modules-registered-traps/64
time: [2.6218 us 2.6329 us 2.6452 us]
thrpt: [24.195 Melem/s 24.308 Melem/s 24.411 Melem/s]
change:
time: [-93.629% -93.551% -93.470%] (p = 0.00 < 0.05)
thrpt: [+1431.4% +1450.6% +1469.5%]
Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
many-modules-registered-traps/512
time: [2.6569 us 2.6737 us 2.6923 us]
thrpt: [190.17 Melem/s 191.50 Melem/s 192.71 Melem/s]
change:
time: [-99.277% -99.268% -99.260%] (p = 0.00 < 0.05)
thrpt: [+13417% +13566% +13731%]
Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
4 (4.00%) high mild
many-modules-registered-traps/4096
time: [2.7258 us 2.7390 us 2.7535 us]
thrpt: [1.4876 Gelem/s 1.4955 Gelem/s 1.5027 Gelem/s]
change:
time: [-99.956% -99.955% -99.955%] (p = 0.00 < 0.05)
thrpt: [+221417% +223380% +224881%]
Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
many-stack-frames-traps/1
time: [1.4658 us 1.4719 us 1.4784 us]
thrpt: [676.39 Kelem/s 679.38 Kelem/s 682.21 Kelem/s]
change:
time: [-90.368% -89.947% -89.586%] (p = 0.00 < 0.05)
thrpt: [+860.23% +894.72% +938.21%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
5 (5.00%) high mild
3 (3.00%) high severe
many-stack-frames-traps/8
time: [2.4772 us 2.4870 us 2.4973 us]
thrpt: [3.2034 Melem/s 3.2167 Melem/s 3.2294 Melem/s]
change:
time: [-85.550% -85.370% -85.199%] (p = 0.00 < 0.05)
thrpt: [+575.65% +583.51% +592.03%]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
4 (4.00%) high mild
4 (4.00%) high severe
many-stack-frames-traps/64
time: [10.109 us 10.171 us 10.236 us]
thrpt: [6.2525 Melem/s 6.2925 Melem/s 6.3309 Melem/s]
change:
time: [-78.144% -77.797% -77.336%] (p = 0.00 < 0.05)
thrpt: [+341.22% +350.38% +357.55%]
Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
5 (5.00%) high mild
2 (2.00%) high severe
many-stack-frames-traps/512
time: [126.16 us 126.54 us 126.96 us]
thrpt: [4.0329 Melem/s 4.0461 Melem/s 4.0583 Melem/s]
change:
time: [-65.364% -64.933% -64.453%] (p = 0.00 < 0.05)
thrpt: [+181.32% +185.17% +188.71%]
Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
4 (4.00%) high severe
```
</details>
### Calls

There is, however, a small regression in raw Wasm-to-host and host-to-Wasm call
performance due to the new trampolines. It seems to be on the order of about
2-10 nanoseconds per call, depending on the benchmark.
I believe this regression is ultimately acceptable because
1. this overhead will be vastly dominated by whatever work a non-nop callee
   actually does,
2. we will need these trampolines, or something like them, when implementing the
   Wasm exceptions proposal to do things like translate Wasm's exceptions into
   Rust's `Result`s, and
3. the performance improvements to trapping and capturing stack traces are of a
   much larger magnitude than these call regressions.
<details>
```
sync/no-hook/host-to-wasm - typed - nop
time: [28.683 ns 28.757 ns 28.844 ns]
change: [+16.472% +17.183% +17.904%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
1 (1.00%) low mild
4 (4.00%) high mild
5 (5.00%) high severe
sync/no-hook/host-to-wasm - untyped - nop
time: [42.515 ns 42.652 ns 42.841 ns]
change: [+12.371% +14.614% +17.462%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
1 (1.00%) high mild
10 (10.00%) high severe
sync/no-hook/host-to-wasm - unchecked - nop
time: [33.936 ns 34.052 ns 34.179 ns]
change: [+25.478% +26.938% +28.369%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
7 (7.00%) high mild
2 (2.00%) high severe
sync/no-hook/host-to-wasm - typed - nop-params-and-results
time: [34.290 ns 34.388 ns 34.502 ns]
change: [+40.802% +42.706% +44.526%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
5 (5.00%) high mild
8 (8.00%) high severe
sync/no-hook/host-to-wasm - untyped - nop-params-and-results
time: [62.546 ns 62.721 ns 62.919 ns]
change: [+2.5014% +3.6319% +4.8078%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
2 (2.00%) high mild
10 (10.00%) high severe
sync/no-hook/host-to-wasm - unchecked - nop-params-and-results
time: [42.609 ns 42.710 ns 42.831 ns]
change: [+20.966% +22.282% +23.475%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
4 (4.00%) high mild
7 (7.00%) high severe
sync/hook-sync/host-to-wasm - typed - nop
time: [29.546 ns 29.675 ns 29.818 ns]
change: [+20.693% +21.794% +22.836%] (p = 0.00 < 0.05)
Performance has regressed.
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
sync/hook-sync/host-to-wasm - untyped - nop
time: [45.448 ns 45.699 ns 45.961 ns]
change: [+17.204% +18.514% +19.590%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) high mild
10 (10.00%) high severe
sync/hook-sync/host-to-wasm - unchecked - nop
time: [34.334 ns 34.437 ns 34.558 ns]
change: [+23.225% +24.477% +25.886%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) high mild
7 (7.00%) high severe
sync/hook-sync/host-to-wasm - typed - nop-params-and-results
time: [36.594 ns 36.763 ns 36.974 ns]
change: [+41.967% +47.261% +52.086%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
3 (3.00%) high mild
9 (9.00%) high severe
sync/hook-sync/host-to-wasm - untyped - nop-params-and-results
time: [63.541 ns 63.831 ns 64.194 ns]
change: [-4.4337% -0.6855% +2.7134%] (p = 0.73 > 0.05)
No change in performance detected.
Found 8 outliers among 100 measurements (8.00%)
6 (6.00%) high mild
2 (2.00%) high severe
sync/hook-sync/host-to-wasm - unchecked - nop-params-and-results
time: [43.968 ns 44.169 ns 44.437 ns]
change: [+18.772% +21.802% +24.623%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
3 (3.00%) high mild
12 (12.00%) high severe
async/no-hook/host-to-wasm - typed - nop
time: [4.9612 us 4.9743 us 4.9889 us]
change: [+9.9493% +11.911% +13.502%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
6 (6.00%) high mild
4 (4.00%) high severe
async/no-hook/host-to-wasm - untyped - nop
time: [5.0030 us 5.0211 us 5.0439 us]
change: [+10.841% +11.873% +12.977%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
3 (3.00%) high mild
7 (7.00%) high severe
async/no-hook/host-to-wasm - typed - nop-params-and-results
time: [4.9273 us 4.9468 us 4.9700 us]
change: [+4.7381% +6.8445% +8.8238%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
5 (5.00%) high mild
9 (9.00%) high severe
async/no-hook/host-to-wasm - untyped - nop-params-and-results
time: [5.1151 us 5.1338 us 5.1555 us]
change: [+9.5335% +11.290% +13.044%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) high mild
13 (13.00%) high severe
async/hook-sync/host-to-wasm - typed - nop
time: [4.9330 us 4.9394 us 4.9467 us]
change: [+10.046% +11.038% +12.035%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) high mild
7 (7.00%) high severe
async/hook-sync/host-to-wasm - untyped - nop
time: [5.0073 us 5.0183 us 5.0310 us]
change: [+9.3828% +10.565% +11.752%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
3 (3.00%) high mild
5 (5.00%) high severe
async/hook-sync/host-to-wasm - typed - nop-params-and-results
time: [4.9610 us 4.9839 us 5.0097 us]
change: [+9.0857% +11.513% +14.359%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
7 (7.00%) high mild
6 (6.00%) high severe
async/hook-sync/host-to-wasm - untyped - nop-params-and-results
time: [5.0995 us 5.1272 us 5.1617 us]
change: [+9.3600% +11.506% +13.809%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
6 (6.00%) high mild
4 (4.00%) high severe
async-pool/no-hook/host-to-wasm - typed - nop
time: [2.4242 us 2.4316 us 2.4396 us]
change: [+7.8756% +8.8803% +9.8346%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
5 (5.00%) high mild
3 (3.00%) high severe
async-pool/no-hook/host-to-wasm - untyped - nop
time: [2.5102 us 2.5155 us 2.5210 us]
change: [+12.130% +13.194% +14.270%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
4 (4.00%) high mild
8 (8.00%) high severe
async-pool/no-hook/host-to-wasm - typed - nop-params-and-results
time: [2.4203 us 2.4310 us 2.4440 us]
change: [+4.0380% +6.3623% +8.7534%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
5 (5.00%) high mild
9 (9.00%) high severe
async-pool/no-hook/host-to-wasm - untyped - nop-params-and-results
time: [2.5501 us 2.5593 us 2.5700 us]
change: [+8.8802% +10.976% +12.937%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
5 (5.00%) high mild
11 (11.00%) high severe
async-pool/hook-sync/host-to-wasm - typed - nop
time: [2.4135 us 2.4190 us 2.4254 us]
change: [+8.3640% +9.3774% +10.435%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
6 (6.00%) high mild
5 (5.00%) high severe
async-pool/hook-sync/host-to-wasm - untyped - nop
time: [2.5172 us 2.5248 us 2.5357 us]
change: [+11.543% +12.750% +13.982%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
1 (1.00%) high mild
7 (7.00%) high severe
async-pool/hook-sync/host-to-wasm - typed - nop-params-and-results
time: [2.4214 us 2.4353 us 2.4532 us]
change: [+1.5158% +5.0872% +8.6765%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
2 (2.00%) high mild
13 (13.00%) high severe
async-pool/hook-sync/host-to-wasm - untyped - nop-params-and-results
time: [2.5499 us 2.5607 us 2.5748 us]
change: [+10.146% +12.459% +14.919%] (p = 0.00 < 0.05)
Performance has regressed.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
sync/no-hook/wasm-to-host - nop - typed
time: [6.6135 ns 6.6288 ns 6.6452 ns]
change: [+37.927% +38.837% +39.869%] (p = 0.00 < 0.05)
Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
2 (2.00%) high mild
5 (5.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - typed
time: [15.930 ns 15.993 ns 16.067 ns]
change: [+3.9583% +5.6286% +7.2430%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
11 (11.00%) high mild
1 (1.00%) high severe
sync/no-hook/wasm-to-host - nop - untyped
time: [20.596 ns 20.640 ns 20.690 ns]
change: [+4.3293% +5.2047% +6.0935%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
5 (5.00%) high mild
5 (5.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - untyped
time: [42.659 ns 42.882 ns 43.159 ns]
change: [-2.1466% -0.5079% +1.2554%] (p = 0.58 > 0.05)
No change in performance detected.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) high mild
14 (14.00%) high severe
sync/no-hook/wasm-to-host - nop - unchecked
time: [10.671 ns 10.691 ns 10.713 ns]
change: [+83.911% +87.620% +92.062%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - unchecked
time: [11.136 ns 11.190 ns 11.263 ns]
change: [-29.719% -28.446% -27.029%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) high mild
10 (10.00%) high severe
sync/hook-sync/wasm-to-host - nop - typed
time: [6.7964 ns 6.8087 ns 6.8226 ns]
change: [+21.531% +24.206% +27.331%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) high mild
10 (10.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - typed
time: [15.865 ns 15.921 ns 15.985 ns]
change: [+4.8466% +6.3330% +7.8317%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) high mild
13 (13.00%) high severe
sync/hook-sync/wasm-to-host - nop - untyped
time: [21.505 ns 21.587 ns 21.677 ns]
change: [+8.0908% +9.1943% +10.254%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
4 (4.00%) high mild
4 (4.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - untyped
time: [44.018 ns 44.128 ns 44.261 ns]
change: [-1.4671% -0.0458% +1.2443%] (p = 0.94 > 0.05)
No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
5 (5.00%) high mild
9 (9.00%) high severe
sync/hook-sync/wasm-to-host - nop - unchecked
time: [11.264 ns 11.326 ns 11.387 ns]
change: [+80.225% +81.659% +83.068%] (p = 0.00 < 0.05)
Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
3 (3.00%) high mild
3 (3.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - unchecked
time: [11.816 ns 11.865 ns 11.920 ns]
change: [-29.152% -28.040% -26.957%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
8 (8.00%) high mild
6 (6.00%) high severe
async/no-hook/wasm-to-host - nop - typed
time: [6.6221 ns 6.6385 ns 6.6569 ns]
change: [+43.618% +44.755% +45.965%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
6 (6.00%) high mild
7 (7.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - typed
time: [15.884 ns 15.929 ns 15.983 ns]
change: [+3.5987% +5.2053% +6.7846%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) high mild
13 (13.00%) high severe
async/no-hook/wasm-to-host - nop - untyped
time: [20.615 ns 20.702 ns 20.821 ns]
change: [+6.9799% +8.1212% +9.2819%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
2 (2.00%) high mild
8 (8.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - untyped
time: [41.956 ns 42.207 ns 42.521 ns]
change: [-4.3057% -2.7730% -1.2428%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
3 (3.00%) high mild
11 (11.00%) high severe
async/no-hook/wasm-to-host - nop - unchecked
time: [10.440 ns 10.474 ns 10.513 ns]
change: [+83.959% +85.826% +87.541%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
5 (5.00%) high mild
6 (6.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - unchecked
time: [11.476 ns 11.512 ns 11.554 ns]
change: [-29.857% -28.383% -26.978%] (p = 0.00 < 0.05)
Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
1 (1.00%) low mild
6 (6.00%) high mild
5 (5.00%) high severe
async/no-hook/wasm-to-host - nop - async-typed
time: [26.427 ns 26.478 ns 26.532 ns]
change: [+6.5730% +7.4676% +8.3983%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - async-typed
time: [28.557 ns 28.693 ns 28.880 ns]
change: [+1.9099% +3.7332% +5.9731%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) high mild
14 (14.00%) high severe
async/hook-sync/wasm-to-host - nop - typed
time: [6.7488 ns 6.7630 ns 6.7784 ns]
change: [+19.935% +22.080% +23.683%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
4 (4.00%) high mild
5 (5.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - typed
time: [15.928 ns 16.031 ns 16.149 ns]
change: [+5.5188% +6.9567% +8.3839%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
9 (9.00%) high mild
2 (2.00%) high severe
async/hook-sync/wasm-to-host - nop - untyped
time: [21.930 ns 22.114 ns 22.296 ns]
change: [+4.6674% +7.7588% +10.375%] (p = 0.00 < 0.05)
Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - untyped
time: [42.684 ns 42.858 ns 43.081 ns]
change: [-5.2957% -3.4693% -1.6217%] (p = 0.00 < 0.05)
Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
2 (2.00%) high mild
12 (12.00%) high severe
async/hook-sync/wasm-to-host - nop - unchecked
time: [11.026 ns 11.053 ns 11.086 ns]
change: [+70.751% +72.378% +73.961%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
5 (5.00%) high mild
5 (5.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - unchecked
time: [11.840 ns 11.900 ns 11.982 ns]
change: [-27.977% -26.584% -24.887%] (p = 0.00 < 0.05)
Performance has improved.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
async/hook-sync/wasm-to-host - nop - async-typed
time: [27.601 ns 27.709 ns 27.882 ns]
change: [+8.1781% +9.1102% +10.030%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
2 (2.00%) low mild
3 (3.00%) high mild
6 (6.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - async-typed
time: [28.955 ns 29.174 ns 29.413 ns]
change: [+1.1226% +3.0366% +5.1126%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
7 (7.00%) high mild
6 (6.00%) high severe
async-pool/no-hook/wasm-to-host - nop - typed
time: [6.5626 ns 6.5733 ns 6.5851 ns]
change: [+40.561% +42.307% +44.514%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
5 (5.00%) high mild
4 (4.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - typed
time: [15.820 ns 15.886 ns 15.969 ns]
change: [+4.1044% +5.7928% +7.7122%] (p = 0.00 < 0.05)
Performance has regressed.
Found 17 outliers among 100 measurements (17.00%)
4 (4.00%) high mild
13 (13.00%) high severe
async-pool/no-hook/wasm-to-host - nop - untyped
time: [20.481 ns 20.521 ns 20.566 ns]
change: [+6.7962% +7.6950% +8.7612%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
6 (6.00%) high mild
5 (5.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - untyped
time: [41.834 ns 41.998 ns 42.189 ns]
change: [-3.8185% -2.2687% -0.7541%] (p = 0.01 < 0.05)
Change within noise threshold.
Found 13 outliers among 100 measurements (13.00%)
3 (3.00%) high mild
10 (10.00%) high severe
async-pool/no-hook/wasm-to-host - nop - unchecked
time: [10.353 ns 10.380 ns 10.414 ns]
change: [+82.042% +84.591% +87.205%] (p = 0.00 < 0.05)
Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
4 (4.00%) high mild
3 (3.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - unchecked
time: [11.123 ns 11.168 ns 11.228 ns]
change: [-30.813% -29.285% -27.874%] (p = 0.00 < 0.05)
Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
11 (11.00%) high mild
1 (1.00%) high severe
async-pool/no-hook/wasm-to-host - nop - async-typed
time: [27.442 ns 27.528 ns 27.638 ns]
change: [+7.5215% +9.9795% +12.266%] (p = 0.00 < 0.05)
Performance has regressed.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - async-typed
time: [29.014 ns 29.148 ns 29.312 ns]
change: [+2.0227% +3.4722% +4.9047%] (p = 0.00 < 0.05)
Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
6 (6.00%) high mild
1 (1.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - typed
time: [6.7916 ns 6.8116 ns 6.8325 ns]
change: [+20.937% +22.050% +23.281%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
5 (5.00%) high mild
6 (6.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - typed
time: [15.917 ns 15.975 ns 16.051 ns]
change: [+4.6404% +6.4217% +8.3075%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
5 (5.00%) high mild
11 (11.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - untyped
time: [21.558 ns 21.612 ns 21.679 ns]
change: [+8.1158% +9.1409% +10.217%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - untyped
time: [42.475 ns 42.614 ns 42.775 ns]
change: [-6.3613% -4.4709% -2.7647%] (p = 0.00 < 0.05)
Performance has improved.
Found 18 outliers among 100 measurements (18.00%)
3 (3.00%) high mild
15 (15.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - unchecked
time: [11.150 ns 11.195 ns 11.247 ns]
change: [+74.424% +77.056% +79.811%] (p = 0.00 < 0.05)
Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
3 (3.00%) high mild
11 (11.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - unchecked
time: [11.639 ns 11.695 ns 11.760 ns]
change: [-30.212% -29.023% -27.954%] (p = 0.00 < 0.05)
Performance has improved.
Found 15 outliers among 100 measurements (15.00%)
7 (7.00%) high mild
8 (8.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - async-typed
time: [27.480 ns 27.712 ns 27.984 ns]
change: [+2.9764% +6.5061% +9.8914%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
6 (6.00%) high mild
2 (2.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - async-typed
time: [29.218 ns 29.380 ns 29.600 ns]
change: [+5.2283% +7.7247% +10.822%] (p = 0.00 < 0.05)
Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
2 (2.00%) high mild
14 (14.00%) high severe
```
</details>
* Add s390x support for frame pointer-based stack walking
* wasmtime: Allow `Caller::get_export` to get all exports
* fuzzing: Add a fuzz target to check that our stack traces are correct
We generate Wasm modules that keep track of their own stack as they call and
return between functions, and then we periodically check that, if the host
captures a backtrace, it matches what the Wasm module has recorded (a sketch of
this check appears after this list).
* Remove VM offsets for `VMHostFuncContext` since it isn't used by JIT code
* Add doc comment with stack walking implementation notes
* Document the extra state that can be passed to `wasmtime_runtime::Backtrace` methods
* Add extensive comments for stack walking function
* Factor architecture-specific bits of stack walking out into modules
* Initialize store-related fields in a vmctx to null when there is no store yet
Rather than leaving them as uninitialized data.
* Use `set_callee` instead of manually setting the vmctx field
* Use a more informative compile error message for unsupported architectures
* Document unsafety of `prepare_host_to_wasm_trampoline`
* Use `bti c` instead of `hint #34` in inline aarch64 assembly
* Remove outdated TODO comment
* Remove setting of `last_wasm_exit_fp` in `set_jit_trap`
This is no longer needed as the value is plumbed through to the backtrace code
directly now.
* Only set the stack limit once, in the face of re-entrancy into Wasm
* Add comments for s390x-specific stack walking bits
* Use the helper macro for all libcalls
If we forget to use it and then trigger a GC from the libcall, we could miss
stack frames when walking the stack, fail to find live GC refs, and then get
use-after-free bugs. It is much less risky to always use the helper macro that
takes care of all of that for us.
* Use the `asm_sym!` macro in Wasm-to-libcall trampolines
This macro handles the macOS-specific underscore prefix stuff for us.
* wasmtime: add size and align to `externref` assertion error message
* Extend the `stacks` fuzzer to have host frames in between Wasm frames
This way we get one or more contiguous sequences of Wasm frames on the stack,
instead of exactly one.
* Add documentation for aarch64-specific backtrace helpers
* Clarify that we only support little-endian aarch64 in trampoline comment
* Use `.machine z13` in s390x assembly file
Apparently our CI machines have pretty old assemblers that don't support
`.machine z14`. This should be fine, though, since these trampolines don't use
anything introduced in z14.
* Fix aarch64 build
* Fix macOS build
* Document the `asm_sym!` macro
* Add windows support to the `wasmtime-asm-macros` crate
* Add windows support to host<--->Wasm trampolines
* Fix trap handler build on windows
* Run `rustfmt` on s390x trampoline source file
* Temporarily disable some assertions about a trap's backtrace in the component model tests
Follow up to re-enable this and fix the associated issue:
https://github.com/bytecodealliance/wasmtime/issues/4535
* Refactor libcall definitions with less macros
This refactors the `libcall!` macro to use the
`foreach_builtin_function!` macro to define all of the trampolines.
Additionally, the macro surrounding each libcall itself is no longer
necessary, which helps avoid an excess of macros.
* Use `VMOpaqueContext::from_vm_host_func_context` in `VMHostFuncContext::new`
* Move `backtrace` module to be submodule of `traphandlers`
This avoids making some things `pub(crate)` in `traphandlers` that really
shouldn't be.
* Fix macOS aarch64 build
* Use "i64" instead of "word" in aarch64-specific file
* Save/restore entry SP and exit FP/return pointer in the face of panicking imported host functions
Also clean up assertions surrounding our saved entry/exit registers.
* Put "typed" vs "untyped" in the same position of call benchmark names
Regardless of whether we are doing wasm-to-host or host-to-wasm
* Fix stacks test case generator build for new `wasm-encoder`
* Fix build for s390x
* Expand libcalls in s390x asm
* Disable more parts of component tests now that backtrace assertions are a bit tighter
* Remove assertion that can maybe fail on s390x
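As a rough sketch of the stack-trace fuzz oracle described in the list above (all names here are hypothetical and for illustration only): the generated module maintains a shadow stack of the function indices it has entered, and whenever the host captures a backtrace the fuzzer compares the captured Wasm frames against that shadow stack.

```rust
// Sketch only: a hypothetical oracle, not the actual fuzz target's code.
fn assert_backtrace_matches(shadow_stack: &[u32], captured_wasm_frames: &[u32]) {
    assert_eq!(
        captured_wasm_frames.len(),
        shadow_stack.len(),
        "expected one captured Wasm frame per call recorded by the module",
    );
    // Backtraces report the innermost frame first, while the module pushes
    // calls outermost first, so compare against the reversed shadow stack.
    for (frame, expected) in captured_wasm_frames.iter().zip(shadow_stack.iter().rev()) {
        assert_eq!(frame, expected, "captured frame does not match recorded call");
    }
}
```

Any mismatch points at a frame the runtime's stack walker either skipped or invented.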
Co-authored-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
|
||
|
|
174b60dcf7 |
Add *.wast support for invoking components (#4526)
This commit builds on bytecodealliance/wasm-tools#690 to add support to testing of the component model to execute functions when running `*.wast` files. This support is all built on #4442 as functions are invoked through a "dynamic" API. Right now the testing and integration is fairly crude but I'm hoping that we can try to improve it over time as necessary. For now this should provide a hopefully more convenient syntax for unit tests and the like. |
||
|
|
02c3b47db2 |
x64: Implement SIMD fma (#4474)
* x64: Add VEX Instruction Encoder This uses a similar builder pattern to the EVEX Encoder. Does not yet support memory accesses. * x64: Add FMA Flag * x64: Implement SIMD `fma` * x64: Use 4 register Vex Inst * x64: Reorder VEX pretty print args |
||
|
|
02477988dd |
table_ops: allow 0-sized tables, locals, globals (#4495)
I noticed that `TableOp::insert` had assertions that `num_params` and `table_size` were greater than 0, but no assert for `num_globals`. These asserts couldn't be hit because the `*_RANGE` constants were all set to a minimum of 1. But the only reason I can see to prohibit 0-sized tables, locals, or globals, was because indexes into those spaces were generated with the `%` operator. Allowing 0-sized spaces requires not generating the corresponding instructions at all when there are no valid indexes. So I pushed the final selection of which table/local/global to access earlier, to the moment when we're picking which TableOps to run. Then, instead of generating a random u8 or u32 and taking the remainder to get it into the right range, I can just ask `arbitrary` to generate a number in the right range to begin with. So this now explores some size-0 corners that it didn't before, and it doesn't require reasoning about whether remainder can divide by zero. Also I think it uses fewer bits of the `Unstructured` input to produce the same cases, and I hope that lets libFuzzer more quickly find bits it can mutate to get to novel coverage paths. |
||
|
|
2127c3a369 |
Fix CI for main (#4486)
* Skip new `table_ops` test under emulation When emulating we already have to disable most pooling-allocator related tests so this commit carries over that logic to the new fuzz test which may run some configurations with the pooling allocator depending on the random input. * Fix panics in s390x codegen related to aliases This commit fixes an issue introduced as part of the fix for GHSA-5fhj-g3p3-pq9g. The `reftyped_vregs` list given to `regalloc2` is not allowed to have duplicates in it and while the list originally doesn't have duplicates once aliases are applied the list may have duplicates. The fix here is to perform another pass to remove duplicates after the aliases have been processed. |