This new target compares the outputs of executing the first exported
function of a Wasm module in Wasmtime and in the official Wasm spec
interpreter (using the `wasm-spec-interpreter` crate). This is an
initial step towards more fully-featured fuzzing (e.g. compare memories,
add `v128`, add references, add other proposals, etc.).
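As a rough illustration of the shape of such an oracle (not the target's actual code, and written against a recent Wasmtime API): `run_in_spec_interpreter` below is a hypothetical stand-in for the `wasm-spec-interpreter` binding, and the first export is assumed to take no arguments.

    use anyhow::Result;
    use wasmtime::{Engine, Instance, Module, Store, Val};

    // Hypothetical stand-in for the `wasm-spec-interpreter` binding.
    fn run_in_spec_interpreter(_wasm: &[u8]) -> Result<Vec<Val>> {
        unimplemented!("call into the spec interpreter here")
    }

    // Execute the first exported function in Wasmtime and in the reference
    // interpreter, then compare the two sets of results.
    fn differential_check(wasm: &[u8]) -> Result<()> {
        let engine = Engine::default();
        let module = Module::new(&engine, wasm)?;
        let mut store = Store::new(&engine, ());
        let instance = Instance::new(&mut store, &module, &[])?;
        let func = instance
            .exports(&mut store)
            .find_map(|e| e.into_func())
            .expect("module has no exported function");
        let mut wasmtime_results = vec![Val::I32(0); func.ty(&store).results().len()];
        func.call(&mut store, &[], &mut wasmtime_results)?;
        let spec_results = run_in_spec_interpreter(wasm)?;
        assert_eq!(format!("{wasmtime_results:?}"), format!("{spec_results:?}"));
        Ok(())
    }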
This functionality is now subsumed by the limiter built into all
fuzzing stores, so there's no longer any need for it. It was also
triggering arithmetic overflows in fuzzing, so instead of fixing it I'm
removing it!
* Enable simd fuzzing on oss-fuzz
This commit generally enables the simd feature while fuzzing, which
should affect almost all fuzzers. For fuzzers that just throw random
data at the wall and see what sticks, this means that they'll now be
able to throw simd-shaped data at the wall and have it stick. For
wasm-smith-based fuzzers, this commit also updates wasm-smith to 0.6.0,
which allows further configuring the `SwarmConfig` after generation,
notably allowing `instantiate-swarm` to generate simd-using modules with
`wasm-smith`. This should much more reliably feed simd-related inputs
into the fuzzers.
Finally, this commit updates wasmtime to avoid the general
`wasm_smith::Module` generator, instead using a Wasmtime-specific
default configuration which enables the various features we have
implemented.
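The Wasmtime-side knob involved here is just the existing simd setting on `Config`; a minimal sketch (the wasm-smith/`SwarmConfig` side is not shown):

    use wasmtime::Config;

    fn main() {
        // Enable the simd proposal on the engine configuration the fuzzers use.
        let mut config = Config::new();
        config.wasm_simd(true);
    }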
* Allow dummy table creation to fail
Table creation for imports may exceed the memory limit on the store,
which we want to gracefully recover from rather than failing the
fuzzers.
* fuzz: Implement finer memory limits per-store
This commit implements a custom resource limiter for fuzzing. Locally I
was seeing a lot of OOMs while fuzzing and I believe it was generally
caused by not actually having any runtime limits for wasm modules. I'm
actually surprised that this hasn't come up more on oss-fuzz, but with a
custom store limiter I think this'll get the job done and give us an
easier knob to turn for controlling the memory usage of fuzz-generated
modules.
For now I figure a 2GB limit should be good enough for limiting fuzzer
execution. Additionally, the "out of resources" check when instantiation
fails now looks for the `oom` flag to be set instead of pattern matching
on some error messages about resources.
* Fix tests
* Bump the wasm-tools crates
Pulls in some updates here and there, mostly for updating crates to the
latest version to prepare for later memory64 work.
* Update lightbeam
We've got a lot of fuzz failures right now of modules instantiating
memories of 65536 pages, which we specifically disallow since the
representation of limits within Wasmtime doesn't support full 4GB
memories. This is ok, however, and it's not a fuzz failure that we're
interested in, so this commit allows that error string to pass through
the fuzzer.
Wasmtime was updated in #3013 to reject creation of memories exactly
4GB in size, but the fuzzers still had the assumption that any request
to create a host object for a particular wasm type would succeed. Now,
however, a request to create a 4GB memory fails. This is an expected
failure, so the fix here was to catch the error and allow it.
* Add guard pages to the front of linear memories
This commit implements a safety feature for Wasmtime to place guard
pages before the allocation of all linear memories. Guard pages placed
after linear memories are typically present for performance (at least)
because they can help elide bounds checks. Guard pages before a linear
memory, however, are never strictly needed for performance or features.
The intention of a preceding guard page is to help insulate against bugs
in Cranelift or other code generators, such as CVE-2021-32629.
This commit adds a `Config::guard_before_linear_memory` configuration
option, defaulting to `true`, which indicates whether guard pages should
be present both before linear memories as well as afterwards. Guard
regions continue to be controlled by
`{static,dynamic}_memory_guard_size` methods.
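These are all plain `Config` methods; a minimal sketch of tweaking them beyond the defaults (the values here are purely illustrative):

    use anyhow::Result;
    use wasmtime::{Config, Engine};

    fn main() -> Result<()> {
        let mut config = Config::new();
        // Keep the preceding guard page (the default introduced here) and
        // shrink the trailing guard regions to 64 KiB just for illustration.
        config.guard_before_linear_memory(true);
        config.static_memory_guard_size(64 * 1024);
        config.dynamic_memory_guard_size(64 * 1024);
        // The virtual reservation for static memories is derived from this
        // size plus the guard sizes (see the note below about
        // `memory_reservation_size` being removed).
        config.static_memory_maximum_size(1 << 30);
        let _engine = Engine::new(&config)?;
        Ok(())
    }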
The implementation here affects both on-demand allocated memories as
well as the pooling allocator for memories. For on-demand memories this
adjusts the size of the allocation as well as adjusts the calculations
for the base pointer of the wasm memory. For the pooling allocator this
will place a singular extra guard region at the very start of the
allocation for memories. Since linear memories in the pooling allocator
are contiguous, every memory already had a preceding guard region: it
was simply the previous memory's trailing guard region. Only the first
memory needed this extra guard.
I've attempted to write some tests to help test all this, but this is
all somewhat tricky to test because the settings are pretty far away
from the actual behavior. I think, though, that the tests added here
should help cover various use cases and help us have confidence in
tweaking the various `Config` settings beyond their defaults.
Note that this also contains a semantic change where
`InstanceLimits::memory_reservation_size` has been removed. Instead this
field is now inferred from the `static_memory_maximum_size` and guard
size settings. This should hopefully remove some duplication in these
settings, canonicalizing on the guard-size/static-size settings as the
way to control memory sizes and virtual reservations.
* Update config docs
* Fix a typo
* Fix benchmark
* Fix wasmtime-runtime tests
* Fix some more tests
* Try to fix uffd failing test
* Review items
* Tweak 32-bit defaults
Makes the pooling allocator a bit more reasonable by default on 32-bit
with these settings.
* wasmtime_runtime: move ResourceLimiter defaults into this crate
In preparation for changing wasmtime::ResourceLimiter to be a re-export
of this definition, because translating between two traits was causing
problems elsewhere.
* wasmtime: make ResourceLimiter a re-export of wasmtime_runtime::ResourceLimiter
* refactor Store internals to support ResourceLimiter as part of store's data
* add hooks for entering and exiting native code to Store
* wasmtime-wast, fuzz: changes to adapt ResourceLimiter API
* fix tests
* wrap calls into wasm with the entering/exiting hooks as well
* the most trivial test found a bug, let's write some more
* store: mark some methods as #[inline] on Store, StoreInner, StoreInnerMost
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
* improve tests for the entering/exiting native hooks
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
Implement Wasmtime's new API as designed by RFC 11. This is quite a large commit which has had lots of discussion externally, so for more information it's best to read the RFC thread and the PR thread.
* Add resource limiting to the Wasmtime API.
This commit adds a `ResourceLimiter` trait to the Wasmtime API.
When used in conjunction with `Store::new_with_limiter`, this can be used to
monitor and prevent WebAssembly code from growing linear memories and tables.
This is particularly useful when hosts need to take into account host resource
usage to determine if WebAssembly code can consume more resources.
A simple `StaticResourceLimiter` is also included with these changes that will
simply limit the size of linear memories or tables for all instances created in
the store based on static values.
* Code review feedback.
* Implemented `StoreLimits` and `StoreLimitsBuilder`.
* Moved `max_instances`, `max_memories`, `max_tables` out of `Config` and into
`StoreLimits`.
* Moved storage of the limiter in the runtime into `Memory` and `Table`.
* Made `InstanceAllocationRequest` use a reference to the limiter.
* Updated docs.
* Made `ResourceLimiterProxy` generic to remove a level of indirection.
* Fixed the limiter not being used for `wasmtime::Memory` and
`wasmtime::Table`.
* Code review feedback and bug fix.
* `Memory::new` now returns `Result<Self>` so that an error can be returned if
the initial requested memory exceeds any limits placed on the store.
* Changed an `Arc` to `Rc` as the `Arc` wasn't necessary.
* Removed `Store` from the `ResourceLimiter` callbacks. Custom resource limiter
implementations are free to capture any context they want, so no need to
unnecessarily store a weak reference to `Store` from the proxy type.
* Fixed a bug in the pooling instance allocator where an instance would be
leaked from the pool. Previously, this would only have happened if the OS was
unable to make the necessary linear memory available for the instance. With
these changes, however, the instance might not be created due to limits
placed on the store. We now properly deallocate the instance on error.
* Added more tests, including one that covers the fix mentioned above.
* Code review feedback.
* Add another memory to `test_pooling_allocator_initial_limits_exceeded` to
ensure a partially created instance is successfully deallocated.
* Update some doc comments for better documentation of `Store` and
`ResourceLimiter`.
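A minimal sketch of how the pieces fit together, using the `StoreLimitsBuilder` mentioned above (shown with the closure-based `Store::limiter` hook from the store-data refactor described further up this log rather than `Store::new_with_limiter`, so exact attachment details differ; the numbers are illustrative):

    use anyhow::Result;
    use wasmtime::{Config, Engine, Instance, Module, Store, StoreLimits, StoreLimitsBuilder};

    struct StoreState {
        limits: StoreLimits,
    }

    fn main() -> Result<()> {
        let engine = Engine::new(&Config::new())?;
        // Cap linear memories at 16 MiB and allow at most one instance,
        // table, and memory in this store.
        let limits = StoreLimitsBuilder::new()
            .memory_size(16 << 20)
            .instances(1)
            .tables(1)
            .memories(1)
            .build();
        let mut store = Store::new(&engine, StoreState { limits });
        store.limiter(|state| &mut state.limits);

        // A module asking for over 1 GiB of memory up front now fails to
        // instantiate instead of consuming host resources.
        let module = Module::new(&engine, r#"(module (memory 16385))"#)?;
        assert!(Instance::new(&mut store, &module, &[]).is_err());
        Ok(())
    }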
Yesterday fuzzing was switched to using a `Linker` to improve coverage
when using module linking since we can fake instance imports with
definitions of each individual item. Using a `Linker`, however, means
that we can't necessarily instantiate all modules, such as
    (module
      (import "" "" (memory (;0;) 0 1))
      (import "" "" (memory (;1;) 2)))
As a result this just allows these sorts of "incompatible import type"
errors during fuzzing without treating them as crashes.
* Increase allowances for values when fuzzing
The wasm-smith limits for generating modules are a bit higher than what
we specify, so sync those up to avoid getting too many false positives
with limits getting blown.
* Ensure fuzzing `*.wat` files are in sync
I keep looking at `*.wat` files that are actually stale, so this
removes stale files when we write out a `*.wasm` file but can't
disassemble it.
* Enable shadowing in dummy_linker
Fixes an issue where the same name is imported twice and we generated
two values for it. We don't mind the error here; we just want to ignore
the shadowing errors.
Currently this exposes a bug where modules broken by module linking
cause failures in the fuzzer, but we want to fuzz those modules since
module linking isn't enabled when generating these modules.
This commit fixes an issue where, with module linking enabled for
fuzzing (which it is), the import types of modules show up as imports of
instances. In an attempt to satisfy the dummy values of such imports the
fuzzing integration would create instances for each import. This would,
however, count towards instance limits and isn't always desired.
This commit refactors the creation of dummy import values to decompose
imports of instances into imports of each individual item. This should
retain the pre-module-linking behavior of dummy imports for various
fuzzers.
* Implement defining host functions at the Config level.
This commit introduces defining host functions at the `Config` level rather
than with a `Func` tied to a `Store`.
The intention here is to enable a host to define all of the functions once
with a `Config` and then use a `Linker` (or directly with
`Store::get_host_func`) to use the functions when instantiating a module.
This should help improve the performance of use cases where a `Store` is
short-lived and redefining the functions at every module instantiation is a
noticeable performance hit.
This commit adds `add_to_config` to the code generation for Wasmtime's `Wasi`
type.
The new method adds the WASI functions to the given config as host functions.
This commit adds context functions to `Store`: `get` to get a context of a
particular type and `set` to set the context on the store.
For safety, `set` cannot replace an existing context value of the same type.
`Wasi::set_context` was added to set the WASI context for a `Store` when using
`Wasi::add_to_config`.
* Add `Config::define_host_func_async`.
* Make config "async" rather than store.
This commit moves the concept of "async-ness" to `Config` rather than `Store`.
Note: this is a breaking API change for anyone that's already adopted the new
async support in Wasmtime.
Now `Config::new_async` is used to create an "async" config and any `Store`
associated with that config is inherently "async".
This is needed for async shared host functions to have some sanity check during their
execution (async host functions, like "async" `Func`, need to be called with
the "async" variants).
* Update async function tests to smoke async shared host functions.
This commit updates the async function tests to also smoke the shared host
functions, plus `Func::wrap0_async`.
This also changes the "wrap async" method names on `Config` to
`wrap$N_host_func_async` to slightly better match what is on `Func`.
* Move the instance allocator into `Engine`.
This commit moves the instantiated instance allocator from `Config` into
`Engine`.
This makes certain settings in `Config` no longer order-dependent, which is how
`Config` should ideally be.
This also removes the confusing concept of the "default" instance allocator,
instead opting to construct the on-demand instance allocator when needed.
This does alter the semantics of the instance allocator as now each `Engine`
gets its own instance allocator rather than sharing a single one between all
engines created from a configuration.
* Make `Engine::new` return `Result`.
This is a breaking API change for anyone using `Engine::new`.
As creating the pooling instance allocator may fail (the likely cause
being not enough memory for the provided limits), `Engine::new` now
returns a `Result` instead of panicking when creating an `Engine`.
* Remove `Config::new_async`.
This commit removes `Config::new_async` in favor of treating "async support" as
any other setting on `Config`.
The setting is `Config::async_support`.
* Remove order dependency when defining async host functions in `Config`.
This commit removes the order dependency where async support must be enabled on
the `Config` prior to defining async host functions.
The check is now delayed to when an `Engine` is created from the config.
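A small sketch of how these pieces line up after the change: async support is just another `Config` setting, and problems (such as pooling limits that can't be reserved) surface as an error from `Engine::new`:

    use anyhow::Result;
    use wasmtime::{Config, Engine};

    fn main() -> Result<()> {
        let mut config = Config::new();
        // Async support is now an ordinary setting rather than a separate
        // `Config::new_async` constructor.
        config.async_support(true);
        // Engine creation is fallible; misconfiguration is reported here
        // instead of panicking.
        let _engine = Engine::new(&config)?;
        Ok(())
    }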
* Update WASI example to use shared `Wasi::add_to_config`.
This commit updates the WASI example to use `Wasi::add_to_config`.
As only a single store and instance are used in the example, it has no semantic
difference from the previous example, but the intention is to steer users
towards defining WASI on the config and only using `Wasi::add_to_linker` when
more explicit scoping of the WASI context is required.
* Ensure `store` is in the function names
* Don't abort the process on `add_fuel` when fuel isn't configured
* Allow learning about failure in both `add_fuel` and `fuel_consumed`
* Consume fuel during function execution
This commit adds codegen infrastructure necessary to instrument wasm
code to consume fuel as it executes. Currently nothing is really done
with the fuel, but that'll come in later commits.
The focus of this commit is to implement the codegen infrastructure
necessary to consume fuel and account for fuel consumed correctly.
* Periodically check remaining fuel in wasm JIT code
This commit enables wasm code to periodically check whether fuel has
run out. When fuel runs out an intrinsic is called which can do whatever
it needs to in response. For now a trap is raised to have at least some
semantics in synchronous stores, but another planned use for this
feature is for asynchronous stores to periodically yield back to the
host when fuel runs out.
Checks for remaining fuel happen in the same locations as interrupt
checks, which is to say the start of the function as well as loop
headers.
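A rough sketch using the fuel APIs named in these commits (`consume_fuel`, `add_fuel`, `fuel_consumed`), written against the post-RFC-11 `Store` API, so details may differ from the code as it looked at this point; an otherwise-infinite loop traps once the budget is exhausted:

    use anyhow::Result;
    use wasmtime::{Config, Engine, Instance, Module, Store};

    fn main() -> Result<()> {
        let mut config = Config::new();
        config.consume_fuel(true);
        let engine = Engine::new(&config)?;

        let mut store = Store::new(&engine, ());
        store.add_fuel(10_000)?;

        // The fuel check at the loop header raises a trap once the 10,000
        // units above have been consumed.
        let module = Module::new(
            &engine,
            r#"(module (func (export "run") (loop (br 0))))"#,
        )?;
        let instance = Instance::new(&mut store, &module, &[])?;
        let run = instance.get_func(&mut store, "run").unwrap();
        assert!(run.call(&mut store, &[], &mut []).is_err());
        println!("fuel consumed: {:?}", store.fuel_consumed());
        Ok(())
    }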
* Improve codegen by caching `*const VMInterrupts`
The location of the shared interrupt value and fuel value is reached
through a double indirection on the vmctx (load through the vmctx and
then load through that pointer). The second pointer in this chain,
however, never changes, so we can alter codegen to account for this,
removing some extraneous load instructions and hopefully even reducing
register pressure a bit.
* Add tests that fuel can abort infinite loops
* More fuzzing with fuel
Use fuel to time out modules in addition to time, using fuzz input to
figure out which.
* Update docs on trapping instructions
* Fix doc links
* Fix a fuzz test
* Change setting fuel to adding fuel
* Fix a doc link
* Squelch some rustdoc warnings
Fuzzing has turned up that module linking can create large numbers of
tables and memories in addition to instances. For example if N instances
are allowed and M tables are allowed per-instance, then currently
wasmtime allows MxN tables (which is quite a lot). This is causing some
wasm-smith-generated modules to exceed resource limits while fuzzing!
This commit adds corresponding `max_tables` and `max_memories`
functions to sit alongside the `max_instances` configuration.
Additionally fuzzing now by default configures all of these to a
somewhat low value to avoid too much resource usage while fuzzing.
We already cover module linking with the `instantiate-swarm` target,
and enabling module linking elsewhere prevents otherwise-valid modules
from being compiled because of the breaking change in the module linking
proposal with respect to imports.
* 2499: First pass on TableOps fuzzer generator wasm_encoder migration
- wasm binary generated via sections and smushed together into a module
- test: compare generated wat against expected wat
- note: doesn't work
- Grouped instructions not implemented
- Vec<u8> to wat String not implemented
* 2499: Add typesection, abstract instruction puts, and update test
- TableOp.insert now will interact with a function object directly
- add types for generated function
- expected test string now reflects expected generated code
* 2499: Mark unused index as _i
* 2499: Function insertion is in proper stack order, and fix off by 1
index
- imported functions must be typed
- instructions operate on a stack, i.e. values must be defined by
instructions before they are used
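For context, roughly how a module gets assembled from individual sections with `wasm_encoder` and then printed back to wat; this is a hedged sketch, since method signatures have shifted between `wasm-encoder` versions:

    use wasm_encoder::{
        CodeSection, Function, FunctionSection, Instruction, Module, TypeSection, ValType,
    };

    // Build a tiny module out of individual sections, then round-trip the raw
    // bytes to wat for inspection.
    fn build_module() -> Vec<u8> {
        let mut types = TypeSection::new();
        types.function(vec![], vec![ValType::I32]); // type 0: [] -> [i32]

        let mut functions = FunctionSection::new();
        functions.function(0); // function 0 uses type 0

        let mut code = CodeSection::new();
        let mut body = Function::new(vec![]); // no locals
        // Instructions are emitted in stack order: values are defined before
        // anything consumes them.
        body.instruction(&Instruction::I32Const(42));
        body.instruction(&Instruction::End);
        code.function(&body);

        let mut module = Module::new();
        module.section(&types);
        module.section(&functions);
        module.section(&code);
        module.finish()
    }

    fn main() -> anyhow::Result<()> {
        let wasm = build_module();
        println!("{}", wasmprinter::print_bytes(&wasm)?);
        Ok(())
    }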
* 2499: Apply suggestions from code review
- typo fixing
- oracle ingests binary bytes itself
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
* 2499: Code cleanup + renaming vars
- busywork, nothing to see here
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
Recent changes to fuzzers made expectations more strict about handling
errors while fuzzing, but this erroneously changed a module compilation
step to always assume that the input wasm is valid. Instead a flag is
now passed through indicating whether the wasm blob is known valid or
invalid, and only if compilation fails and it's known valid do we panic.
This commit updates all the wasm-tools crates that we use and enables
fuzzing of the module linking proposal in our various fuzz targets. This
also refactors some of the dummy value generation logic to not be
fallible and to always succeed, the thinking being that we don't want to
accidentally hide errors while fuzzing. Additionally instantiation is
only allowed to fail with a `Trap`, other failure reasons are unwrapped.
I was having limited success fuzzing locally because apparently the
fuzzer was spawning too many threads. Looking into it, that indeed
appears to be the case! The threads which time out wasm execution only
exit after their sleep has completely finished, meaning that if we execute
a ton of wasm that exits quickly each run will generate a sleeping thread.
This commit fixes the issue by using some synchronization to ensure the
sleeping thread exits when our fuzzed run also exits.
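An illustrative sketch of that synchronization (not the fuzz target's actual code): the watchdog waits on a channel instead of sleeping unconditionally, so it exits as soon as the run finishes.

    use std::sync::mpsc::{self, RecvTimeoutError};
    use std::thread;
    use std::time::Duration;

    fn run_with_timeout(run: impl FnOnce(), timeout: Duration) {
        let (done_tx, done_rx) = mpsc::channel::<()>();
        let watchdog = thread::spawn(move || {
            if let Err(RecvTimeoutError::Timeout) = done_rx.recv_timeout(timeout) {
                // Timed out: this is where the running wasm would get
                // interrupted.
            }
            // Otherwise the run finished (the sender was dropped), so return
            // immediately instead of sleeping out the full timeout.
        });
        run();
        drop(done_tx);
        watchdog.join().unwrap();
    }

    fn main() {
        run_with_timeout(|| println!("short-lived wasm run"), Duration::from_secs(5));
    }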
This PR adds a new fuzz target, `differential_wasmi`, that runs a
Cranelift-based Wasm backend alongside a simple third-party Wasm
interpreter crate (`wasmi`). The fuzzing runs the first function in a
given module to completion on each side, and then diffs the return value
and linear memory contents.
This strategy should provide end-to-end coverage including both the Wasm
translation to CLIF (which has seen some subtle and scary bugs at
times), the lowering from CLIF to VCode, the register allocation, and
the final code emission.
This PR also adds a feature `experimental_x64` to the fuzzing crate (and
the chain of dependencies down to `cranelift-codegen`) so that we can
fuzz the new x86-64 backend as well as the current one.
This commit adds lots of plumbing to get the type section from the
module linking proposal plumbed all the way through to the `wasmtime`
crate and the `wasmtime-c-api` crate. This isn't all that useful right
now because Wasmtime doesn't support imported/exported
modules/instances, but this is all necessary groundwork to getting that
exported at some point. I've added some light tests but I suspect the
bulk of the testing will come in a future commit.
One major change in this commit is that `SignatureIndex` no longer
follows the type index space in a wasm module. Instead a new
`TypeIndex` type is used to track that. Function signatures, still
indexed by `SignatureIndex`, are then packed together tightly.
This commit updates `wasmtime::FuncType` to exactly store an internal
`WasmFuncType` from the cranelift crates. This allows us to remove a
translation layer when we are given a `FuncType` and want to get an
internal cranelift type out as a result.
The other major change from this commit was changing the constructor and
accessors of `FuncType` to be iterator-based instead of exposing
implementation details.
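A small sketch of the iterator-based surface described here (the constructor at this point did not take an `Engine` argument, which is the form shown):

    use wasmtime::{FuncType, ValType};

    fn main() {
        // Construct a `FuncType` from iterators of value types rather than
        // from any internal representation.
        let ty = FuncType::new([ValType::I32, ValType::I64], [ValType::F64]);
        // Accessors hand back iterators as well.
        assert_eq!(ty.params().count(), 2);
        assert_eq!(ty.results().collect::<Vec<_>>(), vec![ValType::F64]);
    }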
This commit removes the binaryen support for fuzzing from wasmtime,
instead switching over to `wasm-smith`. In general it's great to have
what fuzzing we can, but our binaryen support suffers from a few issues:
* The Rust crate, binaryen-sys, seems largely unmaintained at this
point. While we could likely take ownership and/or send PRs to update
the crate it seems like the maintenance is largely on us at this point.
* Currently the binaryen-sys crate doesn't support fuzzing anything
beyond MVP wasm, but we're interested at least in features like bulk
memory and reference types. Additionally we'll also be interested in
features like module-linking. New features would require either
implementation work in binaryen or the binaryen-sys crate to support.
* We have 4-5 fuzz-bugs right now related to timeouts simply in
generating a module for wasmtime to fuzz. One investigation along
these lines in the past revealed a bug in binaryen itself, and in any
case these bugs would otherwise need to get investigated, reported,
and possibly fixed ourselves in upstream binaryen.
Overall I'm not sure at this point if maintaining binaryen fuzzing is
worth it with the advent of `wasm-smith` which has similar goals for
wasm module generation, but is much more readily maintainable on our
end.
Additionally in this commit I've added a fuzz target using wasm-smith's
`SwarmConfig`-based module generation, which should expand the coverage
of tested modules.
Closes #2163
* Validate modules while translating
This commit is a change to cranelift-wasm to validate each function body
as it is translated. Additionally top-level module translation functions
will perform module validation. This commit builds on changes in
wasmparser to perform module validation intertwined with parsing and
translation. This will be necessary for future wasm features such as
module linking where the type behind a function index, for example, can
be far away in another module. Additionally this also brings a nice
benefit where parsing the binary only happens once (instead of having an
up-front serial validation step) and validation can happen in parallel
for each function.
Most of the changes in this commit are plumbing to make sure everything
lines up right. The major functional change here is that module
compilation should be faster by validating in parallel (or skipping
function validation entirely in the case of a cache hit). Otherwise from
a user-facing perspective nothing should be that different.
This commit does mean that cranelift's translation now inherently
validates the input wasm module. This means that the Spidermonkey
integration of cranelift-wasm will also be validating the function as
it's being translated with cranelift. The associated PR for wasmparser
(bytecodealliance/wasmparser#62) provides the necessary tools to create
a `FuncValidator` for Gecko, but this is something I'll want careful
review for before landing!
* Read function operators until EOF
This way we can let the validator take care of any issues with
mismatched `end` instructions and/or trailing operators/bytes.
This commit uses the new `MaybeInvalidModule` type in `wasm-smith` to
try to explore more points in the fuzz target space in the
`instantiate-maybe-invalid` fuzz target. The goal here is to use the raw
fuzz input as the body of a function to stress the validator/decoder a
bit more, and try to get inputs we might not otherwise generate.
We've enabled bulk memory and reference types by default now, which
means that wasmtime in its default settings no longer passes the spec
test suite (due to changes in error messages around initialization), so
when running the spec test fuzzer we make sure to disable reference
types and bulk memory since that's required to pass.
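The tweak boils down to flipping the corresponding proposal settings off for that one fuzzer; a minimal sketch:

    use wasmtime::Config;

    fn main() {
        // The spec-test fuzzer runs with these proposals disabled so that
        // error messages line up with the upstream spec test suite.
        let mut config = Config::new();
        config.wasm_reference_types(false);
        config.wasm_bulk_memory(false);
    }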
This commit is intended to update wasmparser to 0.59.0. This primarily
includes bytecodealliance/wasm-tools#40 which is a large update to how
parsing and validation works. The impact on Wasmtime is pretty small at
this time, but over time I'd like to refactor the internals here to lean
more heavily on that upstream wasmparser refactoring.
For now, though, the intention is to get on the train of wasmparser's
latest `main` branch to ensure we get bug fixes and such.
As part of this update a few other crates and such were updated. This is
primarily to handle the new encoding of `ref.is_null` where the type is
not part of the instruction encoding any more.
This new fuzz target exercises sequences of `table.get`s, `table.set`s, and
GCs.
It already found a couple bugs:
* Some leaks due to ref count cycles between stores and host-defined functions
closing over those stores.
* If there are no live references for a PC, Cranelift can avoid emitting an
associated stack map. This was running afoul of a debug assertion.
* Add CLI flags for internal cranelift options
This commit adds two flags to the `wasmtime` CLI:
* `--enable-cranelift-debug-verifier`
* `--enable-cranelift-nan-canonicalization`
These previously weren't exposed from the command line but have been
useful to me at least for reproducing slowdowns found during fuzzing on
the CLI.
* Disable Cranelift debug verifier when fuzzing
This commit disables Cranelift's debug verifier for our fuzz targets.
We've gotten a good number of timeouts on OSS-Fuzz and I've recently
had some discussion over at google/oss-fuzz#3944 about this issue and
what we can do. The result of that discussion was that there are two
primary ways we can speed up our fuzzers:
* One is independent of Wasmtime, which is to tweak the flags used to
compile code. The conclusion was that one flag was passed to LLVM
which significantly increased runtime for very little benefit. This
has now been disabled in rust-fuzz/cargo-fuzz#229.
* The other way is to reduce the amount of debug checks we run while
fuzzing wasmtime itself. To put this in perspective, a test case which
took ~100ms to instantiate was taking 50 *seconds* to instantiate in
the fuzz target. This 500x slowdown was caused by a ton of
multiplicative factors, but two major contributors were NaN
canonicalization and cranelift's debug verifier. I suspect the NaN
canonicalization itself isn't too pricy but when paired with the debug
verifier in float-heavy code it can create lots of IR to verify.
This commit is specifically tackling this second point in an attempt to
avoid slowing down our fuzzers too much. The intent here is that we'll
disable the cranelift debug verifier for now but leave all other checks
enabled. If the debug verifier gets a speed boost we can try re-enabling
it, but otherwise it seems like for now it's not catching any bugs and
is just creating lots of noise about timeouts that aren't relevant.
It's not great that we have to turn off internal checks since that's
what fuzzing is supposed to trigger, but given the timeout on OSS-Fuzz
and the multiplicative effects of all the slowdowns we have when
fuzzing, I'm not sure we can afford the massive slowdown of the debug verifier.
* Moves CodeMemory, VMInterrupts and SignatureRegistry from Compiler
* CompiledModule holds CodeMemory and GdbJitImageRegistration
* Store keeps track of its JIT code
* Makes "jit_int.rs" stuff Send+Sync
* Adds the threads example.