* Add CLI flags for internal cranelift options
This commit adds two flags to the `wasmtime` CLI:
* `--enable-cranelift-debug-verifier`
* `--enable-cranelift-nan-canonicalization`
These previously weren't exposed on the command line, but they've been
useful (to me at least) for reproducing, on the CLI, slowdowns found
during fuzzing.
* Disable Cranelift debug verifier when fuzzing
This commit disables Cranelift's debug verifier for our fuzz targets.
We've gotten a good number of timeouts on OSS-Fuzz, and I've recently
had some discussion over at google/oss-fuzz#3944 about this issue and
what we can do. The result of that discussion was that there
are two primary ways we can speed up our fuzzers:
* One is independent of Wasmtime, which is to tweak the flags used to
compile code. The conclusion was that one flag was passed to LLVM
which significantly increased runtime for very little benefit. This
has now been disabled in rust-fuzz/cargo-fuzz#229.
* The other way is to reduce the amount of debug checks we run while
fuzzing wasmtime itself. To put this in perspective, a test case which
took ~100ms to instantiate was taking 50 *seconds* to instantiate in
the fuzz target. This 500x slowdown was caused by a ton of
multiplicative factors, but two major contributors were NaN
canonicalization and cranelift's debug verifier. I suspect the NaN
canonicalization itself isn't too pricy but when paired with the debug
verifier in float-heavy code it can create lots of IR to verify.
This commit is specifically tackling this second point in an attempt to
avoid slowing down our fuzzers too much. The intent here is that we'll
disable the cranelift debug verifier for now but leave all other checks
enabled. If the debug verifier gets a speed boost we can try re-enabling
it, but for now it doesn't seem to be catching any bugs and is instead
creating lots of noise about timeouts that aren't relevant.
It's not great that we have to turn off internal checks since that's
what fuzzing is supposed to trigger, but given the timeout on OSS-Fuzz
and the multiplicative effects of all the slowdowns we have when
fuzzing, I'm not sure we can afford the massive slowdown of the debug verifier.
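For context, the kind of compiler configuration the fuzz targets end up
with after this change looks roughly like the following (a minimal
sketch using `wasmtime::Config` builder methods; the actual wiring
lives in the fuzzing crate and may differ):
```
use wasmtime::Config;

/// Minimal sketch of a fuzzing configuration: keep NaN canonicalization on
/// (it affects the semantics we want to exercise) but leave Cranelift's
/// internal debug verifier off to avoid the compile-time blowup described
/// above.
fn fuzzing_config() -> Config {
    let mut config = Config::new();
    config
        .cranelift_nan_canonicalization(true)
        .cranelift_debug_verifier(false);
    config
}
```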
* Expose memory-related options in `Config`
This commit was initially motivated by looking more into #1501, but it
ended up ballooning a bit after finding a few issues. The high-level
items in this commit are:
* New configuration options via `wasmtime::Config` are exposed to
configure the tunable limits of how memories are allocated and such.
* The `MemoryCreator` trait has been updated to accurately reflect the
required allocation characteristics that JIT code expects.
* A bug has been fixed in the cranelift wasm code generation where
bounds checks weren't performed correctly if no guard page was present.
The new `Config` methods allow tuning the memory allocation
characteristics of wasmtime. Currently 64-bit platforms will reserve 6GB
chunks of memory for each linear memory, but by tweaking various config
options you can change how this is allocated, perhaps at the cost of
slower JIT code since it needs more bounds checks. The methods are
intended to be pretty thoroughly documented as to the effect they have
on the JIT code and what values you may wish to select. These new
methods have also been added to the spectest fuzzer to ensure that
various configuration values don't affect correctness.
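To give a flavor of this tuning, here's a sketch using the new `Config`
methods (the values here are purely illustrative, not recommended
defaults):
```
use wasmtime::Config;

/// Illustrative sketch: trade the large up-front virtual memory reservation
/// for much smaller mappings, at the cost of explicit bounds checks in the
/// generated JIT code.
fn small_memory_config() -> Config {
    let mut config = Config::new();
    config
        // Disallow any "static" up-front reservation, forcing dynamically
        // allocated (and relocatable) linear memories...
        .static_memory_maximum_size(0)
        // ... and shrink the guard region, so every load/store gets an
        // explicit bounds check.
        .dynamic_memory_guard_size(0);
    config
}
```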
The `MemoryCreator` trait previously only allocated memories with a
`MemoryType`, but this didn't actually reflect the guarantees that JIT
code expected. JIT code is generated with an assumption about the
minimum size of the guard region, as well as whether memory is static or
dynamic (whether the base pointer can be relocated). These properties
must be upheld by custom allocation engines for JIT code to perform
correctly, so extra parameters have been added to
`MemoryCreator::new_memory` to reflect this.
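The shape of the updated trait ends up roughly along the following
lines. This is a sketch only: the trait and parameter names below are
illustrative stand-ins, so consult the `wasmtime::MemoryCreator`
documentation for the real signature.
```
/// Sketch (not the verbatim `wasmtime` trait) of the allocation guarantees a
/// custom memory creator now has to uphold.
pub trait CustomMemoryCreator {
    /// Whatever memory object the creator hands back to the runtime.
    type Memory;

    /// * `reserved_size`: for "static" memories, the amount of address space
    ///   that must be reserved up front; the base pointer must never move.
    ///   `None` means the memory is "dynamic" and may be reallocated on grow.
    /// * `guard_size`: the minimum number of inaccessible guard bytes that
    ///   JIT code assumes follow the accessible portion of the memory.
    fn new_memory(
        &self,
        minimum_pages: u64,
        maximum_pages: Option<u64>,
        reserved_size: Option<u64>,
        guard_size: u64,
    ) -> Result<Self::Memory, String>;
}
```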
Finally, the fuzzing with `Config` turned up an issue where, if no guard
pages are present, the wasm code wouldn't correctly bounds-check memory
accesses. The issue here was that with a guard page we only need to
bounds-check the first byte of access, but without a guard page we need
to bounds-check the last byte of access. This meant that the code
generation needed to account for the size of the memory operation
(load/store) and use this as the offset-to-check in the no-guard-page
scenario. I've attempted to make the various comments in cranelift a bit
more exhaustive too to hopefully make it a bit clearer for future
readers!
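To make the fix concrete, the check the generated code conceptually
performs looks something like this (a sketch of the logic only, not the
actual Cranelift IR that gets emitted):
```
/// Sketch of the bounds check for a load/store touching `access_size` bytes
/// starting at linear-memory address `index + offset`.
fn access_is_in_bounds(
    index: u64,
    offset: u64,
    access_size: u64,
    memory_size: u64,
    has_guard_pages: bool,
) -> bool {
    let first_byte = index + offset;
    if has_guard_pages {
        // Only the first byte needs checking: if the access straddles the end
        // of memory, the remaining bytes land in the guard region and fault.
        first_byte < memory_size
    } else {
        // No guard region, so the last byte of the access must also be in
        // bounds, which is where the size of the memory operation enters the
        // check (this was the bug).
        first_byte + access_size <= memory_size
    }
}
```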
Closes #1501
* Review comments
* Update a comment
* Implement interrupting wasm code, reimplement stack overflow
This commit is a relatively large change for wasmtime with two main
goals:
* Primarily this enables interrupting executing wasm code with a trap,
preventing infinite loops in wasm code. Note that resumption of the
wasm code is not a goal of this commit.
* Additionally this commit reimplements how we handle stack overflow to
ensure that host functions always have a reasonable amount of stack to
run on. This fixes an issue where we might longjmp out of a host
function, skipping destructors.
Lots of odds and ends ended up falling out of this commit once the two
goals above were implemented. The strategy for implementing this was
largely lifted from SpiderMonkey and from existing functionality inside
of Cranelift. I've tried to write up thorough documentation of how this
all works in `crates/environ/src/cranelift.rs`, where the gnarly-ish
bits are.
A brief summary of how this works: each function header and each loop
header now checks whether execution has been interrupted. The interrupt
check and the stack overflow check are actually folded into one.
Function headers check whether they've run out of stack, and the
sentinel value used to indicate an interrupt, which is checked in loop
headers, tricks functions into thinking they're out of stack.
Delivering an interrupt is basically just writing a value to a location
which is read by JIT code.
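Conceptually the combined check looks something like the following
sketch (this is a model of the mechanism, not the code Cranelift
actually emits):
```
use std::sync::atomic::{AtomicUsize, Ordering};

/// Sentinel "stack limit" that no real stack pointer can be above, so the
/// very next function/loop header check fails and traps.
const INTERRUPTED: usize = usize::MAX;

/// Sketch of the per-store state that JIT code reads.
struct VmInterrupts {
    stack_limit: AtomicUsize,
}

impl VmInterrupts {
    /// Conceptually runs in every function header (and, via the sentinel,
    /// catches interrupts in loop headers too).
    fn check(&self, current_stack_pointer: usize) -> Result<(), &'static str> {
        if current_stack_pointer <= self.stack_limit.load(Ordering::Relaxed) {
            Err("trap: stack overflow or interrupt requested")
        } else {
            Ok(())
        }
    }

    /// What requesting an interrupt boils down to: a single store to a
    /// location that JIT code is already reading.
    fn interrupt(&self) {
        self.stack_limit.store(INTERRUPTED, Ordering::Relaxed);
    }
}
```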
When interrupts are delivered, and what triggers them, is left up to
embedders of the `wasmtime` crate. The `wasmtime::Store` type has a
method to acquire an `InterruptHandle`, where `InterruptHandle` is a
`Send` and `Sync` type which can travel to other threads (or perhaps
even a signal handler) and request an interrupt from there. It's
intended that this provides a good degree of flexibility when
interrupting wasm code. Note, though, that this has a large caveat:
interrupts don't take effect while host code is running, so if a host
import blocks for a long time the interrupt won't actually be received
until the wasm starts running again.
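Usage ends up looking roughly like this (the `interrupt_handle` and
`interrupt` method names follow the description above; this is a
sketch, and it assumes interrupts have been enabled in the `Config`):
```
use std::time::Duration;
use wasmtime::Store;

/// Sketch: spawn a watchdog that interrupts wasm execution in `store` if it
/// runs longer than `timeout`.
fn spawn_watchdog(store: &Store, timeout: Duration) -> anyhow::Result<()> {
    // Fails if interrupts weren't enabled in the `Config`.
    let handle = store.interrupt_handle()?;
    std::thread::spawn(move || {
        std::thread::sleep(timeout);
        // `InterruptHandle` is `Send + Sync`, so this can happen on any
        // thread (or even in a signal handler).
        handle.interrupt();
    });
    Ok(())
}
```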
Some of the fallout from this change:
* Unix signal handlers are no longer registered with `SA_ONSTACK`.
Instead they run on the native stack the thread was already using.
This is possible since stack overflow isn't handled by hitting the
guard page, but rather it's explicitly checked for in wasm now. Native
stack overflow will continue to abort the process as usual.
* Unix sigaltstack management is no longer necessary since we don't use
the sigaltstack any more.
* Windows no longer has any need to reset guard pages since we no longer
try to recover from faults on guard pages.
* On all targets probestack intrinsics are disabled since we use a
different mechanism for catching stack overflow.
* The C API has been updated with interrupt handles. An example has
also been added which shows off how to interrupt a module.
Closes #139
Closes #860
Closes #900
* Update comment about magical interrupt value
* Store stack limit as a global value, not a closure
* Run rustfmt
* Handle review comments
* Add a comment about SA_ONSTACK
* Use `usize` for type of `INTERRUPTED`
* Parse human-readable durations
* Bring back sigaltstack handling
This still allows libstd to print out its stack overflow message on failure.
* Add parsing and emission of stack limit-via-preamble
* Fix new example for new apis
* Fix host segfault test in release mode
* Fix new doc example
* Add a spec test fuzzer for Config
This commit adds a new fuzzer which is intended to run on oss-fuzz. This
fuzzer creates an arbitrary `Config` which *should* pass spec tests and
then asserts that it does so. The goal here is to weed out any
accidental bugs in global configuration which could cause
non-spec-compliant behavior.
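The rough shape of the fuzz target is sketched below; `run_spec_tests`
is a hypothetical stand-in for the oracle that lives in the fuzzing
crate, and the particular knobs shown are just examples of settings
that must never change spec-visible behavior.
```
#![no_main]
use arbitrary::Arbitrary;
use libfuzzer_sys::fuzz_target;
use wasmtime::Config;

/// Configuration knobs derived from the fuzzer's raw input bytes.
#[derive(Arbitrary, Debug)]
struct SpecTestConfig {
    debug_verifier: bool,
    force_dynamic_memories: bool,
}

fuzz_target!(|cfg: SpecTestConfig| {
    let mut config = Config::new();
    config.cranelift_debug_verifier(cfg.debug_verifier);
    if cfg.force_dynamic_memories {
        config.static_memory_maximum_size(0);
        config.dynamic_memory_guard_size(0);
    }
    // Hypothetical helper: runs the spec test suite with `config` and panics
    // on any non-spec-compliant behavior.
    run_spec_tests(&config);
});

fn run_spec_tests(_config: &Config) {
    // elided
}
```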
* Move implementation to `fuzzing` crate
... but turn it back on in CI by default. The `binaryen-sys` crate
builds binaryen from source, which is a drag on CI for a few reasons:
* This is quite large and takes a good deal of time to build
* The debug build directory for binaryen is 4GB large
In an effort to both save time and disk space on the builders this
commit adds a `binaryen` feature to the `wasmtime-fuzz` crate. This
feature is enabled specifically when running the fuzzers on CI, but it
is disabled during the typical `cargo test --all` command. This means
that the test builders should save an extra 4G of space and be a bit
speedier now that they don't build a giant wad of C++.
We'll need to update the OSS-fuzz integration to enable the `binaryen`
feature when executing `cargo fuzz build`, and I'll do that once this
gets closer to landing.
We only generate *valid* sequences of API calls. To do this, we keep track of
what objects we've already created in earlier API calls via the `Scope` struct.
To generate even-more-pathological sequences of API calls, we use [swarm
testing]:
> In swarm testing, the usual practice of potentially including all features
> in every test case is abandoned. Rather, a large “swarm” of randomly
> generated configurations, each of which omits some features, is used, with
> configurations receiving equal resources.
[swarm testing]: https://www.cs.utah.edu/~regehr/papers/swarm12.pdf
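A rough sketch of how the `Scope` bookkeeping and a swarm configuration
combine to generate only valid calls (everything here other than the
`Scope` name is an illustrative stand-in for the real generator in the
fuzzing crate):
```
use arbitrary::{Arbitrary, Unstructured};

/// Illustrative subset of the API calls the fuzzer can emit.
#[derive(Debug)]
enum ApiCall {
    EngineNew,
    ModuleNew { engine_id: usize },
    InstanceNew { module_id: usize },
}

/// Which kinds of calls this particular "swarm" is allowed to use.
#[derive(Arbitrary, Debug)]
struct Swarm {
    allow_module_new: bool,
    allow_instance_new: bool,
}

/// Tracks what earlier calls have created, so later calls only reference
/// objects that actually exist.
#[derive(Default)]
struct Scope {
    engines: usize,
    modules: usize,
}

impl Scope {
    fn arbitrary_call(
        &mut self,
        swarm: &Swarm,
        u: &mut Unstructured,
    ) -> arbitrary::Result<ApiCall> {
        // Only propose calls whose prerequisites are in scope and which this
        // swarm's configuration has enabled.
        if swarm.allow_instance_new && self.modules > 0 && u.arbitrary::<bool>()? {
            return Ok(ApiCall::InstanceNew {
                module_id: u.int_in_range(0..=self.modules - 1)?,
            });
        }
        if swarm.allow_module_new && self.engines > 0 && u.arbitrary::<bool>()? {
            self.modules += 1;
            return Ok(ApiCall::ModuleNew {
                engine_id: u.int_in_range(0..=self.engines - 1)?,
            });
        }
        self.engines += 1;
        Ok(ApiCall::EngineNew)
    }
}
```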
There are more public APIs and instance introspection APIs than this
fuzzer exercises right now. We will need a better generator of valid Wasm
than `wasm-opt -ttf` to really get the most out of those currently-unexercised
APIs, since the Wasm modules generated by `wasm-opt -ttf` don't import and
export a huge variety of things.
When the test case that causes the failure can successfully be disassembled to
WAT, we get logs like this:
```
[2019-11-26T18:48:46Z INFO wasmtime_fuzzing] Wrote WAT disassembly to: /home/fitzgen/wasmtime/crates/fuzzing/target/scratch/8437-0.wat
[2019-11-26T18:48:46Z INFO wasmtime_fuzzing] If this fuzz test fails, copy `/home/fitzgen/wasmtime/crates/fuzzing/target/scratch/8437-0.wat` to `wasmtime/crates/fuzzing/tests/regressions/my-regression.wat` and add the following test to `wasmtime/crates/fuzzing/tests/regressions.rs`:
```
```
#[test]
fn my_fuzzing_regression_test() {
    let data = wat::parse_str(
        include_str!("./regressions/my-regression.wat")
    ).unwrap();
    oracles::instantiate(data, CompilationStrategy::Auto)
}
```
If the test case cannot be disassembled to WAT, then we get logs like this:
```
[2019-11-26T18:48:46Z INFO wasmtime_fuzzing] Wrote Wasm test case to: /home/fitzgen/wasmtime/crates/fuzzing/target/scratch/8437-0.wasm
[2019-11-26T18:48:46Z INFO wasmtime_fuzzing] Failed to disassemble Wasm into WAT:
Bad magic number (at offset 0)
Stack backtrace:
Run with RUST_LIB_BACKTRACE=1 env variable to display a backtrace
[2019-11-26T18:48:46Z INFO wasmtime_fuzzing] If this fuzz test fails, copy `/home/fitzgen/wasmtime/crates/fuzzing/target/scratch/8437-0.wasm` to `wasmtime/crates/fuzzing/tests/regressions/my-regression.wasm` and add the following test to `wasmtime/crates/fuzzing/tests/regressions.rs`:
```
```
#[test]
fn my_fuzzing_regression_test() {
    let data = include_bytes!("./regressions/my-regression.wasm");
    oracles::instantiate(data, CompilationStrategy::Auto)
}
```