* Delete historical interruptable support in Wasmtime
This commit removes the `Config::interruptable` configuration along with
the `InterruptHandle` type from the `wasmtime` crate. The original
support for adding interruption to WebAssembly was added pretty early on
in the history of Wasmtime when there was no other method to prevent an
infinite loop from the host. Nowadays, however, there are alternative
methods for interruption such as fuel or epoch-based interruption.
One of the major downsides of `Config::interruptable` is that even when
it's not enabled it forces an atomic swap to happen when entering
WebAssembly code. This could technically be a non-atomic swap when the
option isn't enabled, but that would produce even more branchy code on
entry into WebAssembly, a path we already try to optimize. Calling into
WebAssembly takes on the order of dozens of nanoseconds at this time,
and an atomic swap, even uncontended, can add up to 5ns on some
platforms.
The main goal of this PR is to remove this atomic swap on entry into
WebAssembly. This is done by removing the `Config::interruptable` field
entirely, moving all existing consumers to epochs instead which are
suitable for the same purposes. This means that the stack overflow check
is no longer entangled with the interruption check and perhaps one day
we could continue to optimize that further as well.
Some consequences of this change are:
* Epochs are now the only method of remote-thread interruption.
* There are no more Wasmtime traps that produce the `Interrupted` trap
  code, although we may wish to move future traps to this so I left it
  in place.
* The C API support for interrupt handles was also removed and bindings
for epoch methods were added.
* Function-entry checks for interruption are a tiny bit less efficient
  since one check is performed for the stack limit and a second is
  performed for the epoch, as opposed to the `Config::interruptable`
  style of bundling the stack limit and the interrupt check into one.
  It's expected, though, that this difference is unlikely to be
  measurable.
* The old `VMInterrupts` structure is renamed to `VMRuntimeLimits`.
* Implement runtime checks for compilation settings
This commit fills out a few FIXME annotations by implementing run-time
checks that when a `Module` is created it has compatible codegen
settings for the current host (as `Module` is proof of "this code can
run"). This is done by implementing new `Engine`-level methods which
validate compiler settings. These settings are validated on
`Module::new` as well as when loading serialized modules.
Settings are split into two categories, one for "shared" top-level
settings and one for ISA-specific settings. Both categories now have
allow-lists hardcoded into `Engine` which indicate the acceptable values
for each setting (if applicable). ISA-specific settings are checked with
the Rust standard library's `std::is_x86_feature_detected!` macro. Other
macros for other platforms are not stable at this time but can be added
here if necessary.
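As a rough sketch of the kind of host check involved here (not the
actual `Engine` internals; the setting names and the mapping below are
illustrative assumptions), an ISA-specific setting can be compared
against the host like so:
```
#[cfg(target_arch = "x86_64")]
fn host_has_feature(setting: &str) -> Option<bool> {
    // Illustrative mapping of a few Cranelift x86 settings onto the
    // standard library's runtime feature detection; unknown settings
    // are left for the caller to reject.
    Some(match setting {
        "has_sse3" => std::is_x86_feature_detected!("sse3"),
        "has_ssse3" => std::is_x86_feature_detected!("ssse3"),
        "has_sse41" => std::is_x86_feature_detected!("sse4.1"),
        "has_avx" => std::is_x86_feature_detected!("avx"),
        "has_avx2" => std::is_x86_feature_detected!("avx2"),
        _ => return None,
    })
}
```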
Closes #3897
* Fix fall-through logic to actually be correct
* Use a `OnceCell`, not an `AtomicBool`
* Fix some broken tests
This commit fixes calling the call hooks configured in a store for host
functions defined with `Func::new_unchecked` or similar. I believe that
this was just an accidental oversight and there's no fundamental reason
to not support this.
This commit aims to achieve the goal of being able to run the test suite
on Windows with `--test-threads 1`, or more notably allowing Wasmtime's
defaults to work better with the main thread on Windows, which appears
to have a smaller default stack than on Linux. In decreasing the default
wasm stack size, a test is also updated to probe for less stack so that
it works on Windows' main thread by default, ideally allowing the full
test suite to work with `--test-threads 1` (although this isn't added to
CI as it's not really critical).
Closes #3857
* Remove the `ModuleLimits` pooling configuration structure
This commit is an attempt to improve the usability of the pooling
allocator by removing the need to configure a `ModuleLimits` structure.
Internally this structure has limits on all forms of wasm constructs but
this largely bottoms out in the size of an allocation for an instance in
the instance pooling allocator. Maintaining this list of limits can be
cumbersome as modules may get tweaked over time, and there's otherwise
no real reason to limit the number of globals in a module: the main goal
is to limit the memory consumption of a `VMContext`, which can be done
with a memory allocation limit rather than fine-tuned control over each
maximum and minimum.
The new approach taken in this commit is to remove `ModuleLimits`. Some
fields, such as `tables`, `table_elements`, `memories`, and
`memory_pages` are moved to `InstanceLimits` since they're still
enforced at runtime. A new field `size` is added to `InstanceLimits`
which indicates, in bytes, the maximum size of the `VMContext`
allocation. If the size of a `VMContext` for a module exceeds this value
then instantiation will fail.
This involved adding a few more checks to `{Table, Memory}::new_static`
to ensure that the minimum size is able to fit in the allocation, since
previously it was validated at module compile time that everything fit,
and that validation no longer happens there (it happens at runtime
instead).
A consequence of this commit is that Wasmtime will have no built-in way
to reject modules at compile time if they'll fail to be instantiated
within a particular pooling allocator configuration. Instead a module
must attempt instantiation to see if a failure happens.
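For illustration, configuring the pooling allocator under the new scheme
might look roughly like the following sketch; the precise shape of
`InstanceLimits` and `InstanceAllocationStrategy` beyond the fields
described above is assumed here, not taken from this commit.
```
use wasmtime::{
    Config, Engine, InstanceAllocationStrategy, InstanceLimits, PoolingAllocationStrategy,
};

fn main() -> anyhow::Result<()> {
    // Runtime-enforced limits plus the new `size` field, which caps the
    // bytes of each instance's `VMContext` allocation.
    let limits = InstanceLimits {
        count: 100,
        size: 1 << 20, // maximum VMContext allocation per instance, in bytes
        tables: 1,
        table_elements: 10_000,
        memories: 1,
        memory_pages: 160,
        ..Default::default()
    };
    let mut config = Config::new();
    config.allocation_strategy(InstanceAllocationStrategy::Pooling {
        strategy: PoolingAllocationStrategy::NextAvailable,
        instance_limits: limits,
    });
    // A module whose `VMContext` exceeds `size` now fails at
    // instantiation time rather than being rejected up front.
    let _engine = Engine::new(&config)?;
    Ok(())
}
```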
* Fix benchmark compiles
* Fix some doc links
* Fix a panic by ensuring modules have limited tables/memories
* Review comments
* Add back validation at `Module`-creation time that instantiation is possible
This allows for getting an early signal at compile time that a module
will never be instantiable in an engine with matching settings.
* Provide a better error message when sizes are exceeded
Improve the error message when an instance size exceeds the maximum by
providing a breakdown of where the bytes are all going and why the large
size is being requested.
* Try to fix test in qemu
* Flag new test as 64-bit only
Sizes are all specific to 64-bit right now
* Port fix for `CVE-2022-23636` to `main`.
This commit ports the fix for `CVE-2022-23636` to `main`, but performs a
refactoring that makes it unnecessary for the instance itself to track if it
has been initialized; such a change was not targeted enough for a security
patch.
The pooling allocator will now only initialize an instance if all of its
associated resource creation succeeds. If the resource creation fails, no
instance is dropped as none was initialized.
Also updates `RELEASES.md` to include the related patch releases.
* Add `Instance::new_at` to fully initialize an instance.
Added `Instance::new_at` to fully initialize an instance at a given address.
This will hopefully prevent the possibility that an `Instance` structure
doesn't have an initialized `VMContext` when it is dropped.
This commit corrects a few places where `MemoryIndex` was used and treated like
a `DefinedMemoryIndex` in the pooling instance allocator.
When the unstable `multi-memory` proposal is enabled, it is possible to cause a
newly allocated instance to use an incorrect base address for any defined
memories by having the module being instantiated also import a memory.
This requires enabling the unstable `multi-memory` proposal, configuring the
use of the pooling instance allocator (not the default), and then configuring
the module limits to allow imported memories (also not the default).
The fix is to replace all uses of `MemoryIndex` with `DefinedMemoryIndex` in
the pooling instance allocator.
Several `debug_assert!` have also been updated to `assert!` to sanity check the
state of the pooling allocator even in release builds.
* Consolidate methods of memory initialization
This commit consolidates the few locations that we have which are
performing memory initialization. Namely the uffd logic for creating
paged memory as well as the memfd logic for creating a memory image now
share an implementation to avoid duplicating bounds-checks or other
validation conditions. The main purpose of this commit is to fix a
fuzz-bug where a multiplication overflowed. The overflow itself was
benign but it seemed better to fix the overflow in only one place
instead of multiple.
The overflow in question is specifically when an initializer is checked
to be statically out-of-bounds and multiplies a memory's minimum size by
the wasm page size, returning the result as a `u64`. For
memory64-memories of size `1 << 48` this multiplication will overflow.
This was actually a preexisting bug with the `try_paged_init` function
which was copied for memfd, but cropped up here since memfd is used more
often than paged initialization. The fix here is to skip validation of
the `end` index if the byte size of the memory is `1 << 64`, since if
the `end` index can be represented as a `u64` then it's in-bounds. This
is somewhat of an esoteric case, though, since a memory with a minimum
byte size of `1 << 64` can't ever actually exist (we can't even ask the
OS for that much memory, and even if we could the request would fail).
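To make the arithmetic concrete, a sketch of this style of check (not
the consolidated initializer code itself) could lean on checked
multiplication, where an overflow signals a memory whose byte size is at
least `1 << 64`:
```
const WASM_PAGE_SIZE: u64 = 64 * 1024;

/// Sketch only: is an initializer ending at byte offset `end` statically
/// in-bounds of a memory with `minimum_pages` pages?
fn statically_in_bounds(minimum_pages: u64, end: u64) -> bool {
    match minimum_pages.checked_mul(WASM_PAGE_SIZE) {
        // The byte size fits in a `u64`, so compare directly.
        Some(byte_size) => end <= byte_size,
        // The multiplication overflowed, e.g. a memory64 memory with a
        // minimum of `1 << 48` pages, so any `u64` end index is in
        // bounds and the check can be skipped.
        None => true,
    }
}
```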
* Fix memfd test
* Fix some tests
* Remove InitMemory enum
* Add an `is_segmented` helper method
* More clear variable name
* Make arguments to `init_memory` more descriptive
This fixes a bug in the memfd-related management of a linear memory
where, for dynamic memories, memfd wasn't informed of the extra room
that the dynamic memory could grow into, only the actual size of linear
memory, which ended up tripping an assert once the memory was grown. The
fix here is pretty simple: factor in this extra space when passing the
allocation size to the creation of the `MemFdSlot`.
This PR introduces a new way of performing cooperative timeslicing that
is intended to replace the "fuel" mechanism. The tradeoff is that this
mechanism interrupts with less precision: not at deterministic points
where fuel runs out, but rather when the Engine enters a new epoch. The
generated code instrumentation is substantially faster, however, because
it does not need to do as much work as when tracking fuel; it only loads
the global "epoch counter" and does a compare-and-branch at backedges
and function prologues.
This change has been measured as ~twice as fast as fuel-based
timeslicing for some workloads, especially control-flow-intensive
workloads such as the SpiderMonkey JS interpreter on Wasm/WASI.
The intended interface is that the embedder of the `Engine` performs an
`engine.increment_epoch()` call periodically, e.g. once per millisecond.
An async invocation of a Wasm guest on a `Store` can specify a number of
epoch-ticks that are allowed before an async yield back to the
executor's event loop. (The initial amount and automatic "refills" are
configured on the `Store`, just as for fuel.) This call does only
signal-safe work (it increments an `AtomicU64`) so could be invoked from
a periodic signal, or from a thread that wakes up once per period.
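A minimal sketch of that embedding flow with the public epoch APIs
(`Config::epoch_interruption`, `Store::set_epoch_deadline`, and
`Engine::increment_epoch`); the one-millisecond ticker thread and the
trap-on-deadline behavior shown here are illustrative choices rather
than part of this PR's text:
```
use std::{thread, time::Duration};
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // Compile with epoch instrumentation enabled.
    let mut config = Config::new();
    config.epoch_interruption(true);
    let engine = Engine::new(&config)?;

    // A guest export that loops forever.
    let module = Module::new(&engine, r#"(module (func (export "run") (loop (br 0))))"#)?;

    let mut store = Store::new(&engine, ());
    // Interrupt the guest once the engine's epoch has advanced one tick
    // past the current epoch.
    store.set_epoch_deadline(1);

    // The embedder bumps the epoch periodically; this only increments an
    // atomic, so it could also run from a periodic signal handler.
    let ticker = engine.clone();
    thread::spawn(move || loop {
        thread::sleep(Duration::from_millis(1));
        ticker.increment_epoch();
    });

    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_func(&mut store, "run").unwrap();
    // The infinite loop is interrupted with a trap shortly after the
    // first epoch tick.
    assert!(run.call(&mut store, &[], &mut []).is_err());
    Ok(())
}
```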
* fuzz: Refactor Wasmtime's fuzz targets
A recently-found fuzz bug is related to timing out when compiling a
module. This timeout, however, is predominantly because Cranelift's
debug verifier is enabled and taking up over half the compilation time.
I wanted to fix this by disabling the verifier when input modules might
have a lot of functions, but this was pretty difficult to implement.
Over time we've grown a number of various fuzzers. Most are
`wasm-smith`-based at this point but there's various entry points for
configuring the wasm-smith module, the wasmtime configuration, etc. I've
historically gotten quite lost in trying to change defaults and feeling
like I have to touch a lot of different places. This is the motivation
for this commit, simplifying fuzzer default configuration.
This commit removes the ability to create a default `Config` for
fuzzing, instead only supporting generating a configuration via
`Arbitrary`. This then involved refactoring all targets and fuzzers to
ensure that configuration is generated through `Arbitrary`. This should
actually expand the coverage of some existing fuzz targets since
`Arbitrary for Config` will tweak options that don't affect runtime,
such as memory configuration or jump veneers.
All existing fuzz targets are refactored to use this new method of
configuration. Some fuzz targets were also shuffled around or
reimplemented:
* `compile` - this now directly calls `Module::new` to skip all the
fuzzing infrastructure. This is mostly done because this fuzz target
isn't too interesting and is largely just seeing what happens when
things are thrown at the wall for Wasmtime.
* `instantiate-maybe-invalid` - this fuzz target now skips instantiation
and instead simply goes into `Module::new` like the `compile` target.
The rationale behind this is that most modules won't instantiate
anyway and this fuzz target is primarily fuzzing the compiler. This
skips having to generate arbitrary configuration since
wasm-smith-generated-modules (or valid ones at least) aren't used
here.
* `instantiate` - this fuzz target was removed. In general this fuzz
target isn't too interesting in isolation. Almost everything it deals
with likely won't pass compilation and is covered by the `compile`
fuzz target, and otherwise interesting modules being instantiated can
all theoretically be created by `wasm-smith` anyway.
* `instantiate-wasm-smith` and `instantiate-swarm` - these were both merged
into a new `instantiate` target (replacing the old one from above).
There wasn't really much need to keep these separate since they really
only differed at this point in methods of timeout. Otherwise we much
more heavily use `SwarmConfig` than wasm-smith's built-in options.
The intention is that we should still have basically the same coverage
of fuzzing as before, if not better because configuration is now
possible on some targets. Additionally there is one centralized point of
configuration for fuzzing for wasmtime, `Arbitrary for ModuleConfig`.
This internally creates an arbitrary `SwarmConfig` from `wasm-smith` and
then further tweaks it for Wasmtime's needs, such as enabling various
wasm proposals by default. In the future enabling a wasm proposal on
fuzzing should largely just be modifying this one trait implementation.
* fuzz: Sometimes disable the cranelift debug verifier
This commit disables the cranelift debug verifier if the input wasm
module might be "large", where "large" is defined as "more than 10
functions".
While fuzzing we disable threads (setting the count to 1) and enable the
cranelift debug verifier. Coupled with a 20-30x slowdown this means that
a module with the maximum number of functions, 100, gets a per-function
compilation budget of roughly:
60s (the fuzz timeout) / 100 functions / 30x slowdown = 20ms
With only 20 milliseconds per function this is even further halved by
the `differential` fuzz target compiling a module twice, which means
that, when compiling with a normal release mode Wasmtime, if any
function takes more than 10ms to compile then it's a candidate for
timing out while fuzzing. Given that the cranelift debug verifier can
more than double compilation time in fuzzing mode this actually means
that the real time budget for function compilation is more like 4ms.
The `wasm-smith` crate can pretty easily generate a large function that
takes 4ms to compile, and then when that function is multiplied 100x in
the `differential` fuzz target we trivially time out the fuzz target.
The hope of this commit is to buy back half our budget by disabling the
debug verifier for modules that may have many functions. Further
refinements can be implemented in the future such as limiting functions
for just the differential target as well.
* Fix the single-function-module fuzz configuration
* Tweak how features work in differential fuzzing
* Disable everything for baseline differential fuzzing
* Enable selectively for each engine afterwards
* Also forcibly enable reference types and bulk memory for spec tests
* Log wasms when compiling
* Add reference types support to v8 fuzzer
* Fix timeouts via fuel
The default store has "infinite" fuel so that needs to be consumed
before fuel is added back in.
* Remove fuzzing-specific tests
These no longer compile and also haven't been added to in a long time.
Most of the time a reduced form of the original fuzz test case is added
when a fuzz bug is fixed.
This patch makes spillslot allocation, spilling and reloading all based
on register class only. Hence when we have a 32- or 64-bit value in a
128-bit XMM register on x86-64 or vector register on aarch64, this
results in larger spillslots and spills/restores.
Why make this change, if it results in less efficient stack-frame usage?
Simply put, it is safer: there is always a risk when allocating
spillslots or spilling/reloading that we get the wrong type and make the
spillslot or the store/load too small. This was one contributing factor
to CVE-2021-32629, and is now the source of a fuzzbug in SIMD code that
puns an arbitrary user-controlled vector constant over another
stackslot. (If this were a pointer, that could result in RCE. SIMD is
not yet on by default in a release, fortunately.)
In particular, we have not been particularly careful about using moves
between values of different types, for example with `raw_bitcast` or
with certain SIMD operations, and such moves indicate to regalloc.rs
that vregs are in equivalence classes and some arbitrary vreg in the
class is provided when allocating the spillslot or spilling/reloading.
Since regalloc.rs does not track actual type, and since we haven't been
careful about moves, we can't really trust this "arbitrary vreg in
equivalence class" to provide accurate type information.
In the fix to CVE-2021-32629 we fixed this for integer registers by
always spilling/reloading 64 bits; this fix can be seen as the analogous
change for FP/vector regs.
* Add a compilation section to disable address maps
This commit adds a new `Config::generate_address_map` compilation
setting which is used to disable emission of the `.wasmtime.addrmap`
section of compiled artifacts. This section is currently around the size
of the entire `.text` section itself, unfortunately, so for size reasons
embedders may wish to omit it. Functionality-wise, all that is lost is
knowledge of the precise wasm module offset of a faulting instruction or
of the frames in a backtrace. This also means that if the module has
DWARF debugging information available with it, Wasmtime isn't able to
produce a filename and line number in the backtrace.
This option remains enabled by default. It may perhaps not be needed in
the future with #3547, but in the meantime it seems reasonable enough to
support a configuration mode where the section is entirely omitted if
the smallest module possible is desired.
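As a sketch of how an embedding might opt out of the section (the
serialization call is just to show where the size savings would appear):
```
use wasmtime::{Config, Engine, Module};

fn main() -> anyhow::Result<()> {
    // Skip emitting `.wasmtime.addrmap`, trading precise wasm offsets
    // (and filename/line info) in trap backtraces for a smaller artifact.
    let mut config = Config::new();
    config.generate_address_map(false);
    let engine = Engine::new(&config)?;

    let module = Module::new(&engine, "(module)")?;
    // The serialized artifact no longer carries the address map section.
    let _bytes = module.serialize()?;
    Ok(())
}
```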
* Fix some CI issues
* Update tests/all/traps.rs
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
* Do less work in compilation for address maps
But only when disabled
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
Following up on WebAssembly/wasi-sdk#210, this makes the trap message
for `unreachable` traps more descriptive of what actually caused the
trap, so that it doesn't sound like maybe Wasmtime itself executed an
`unreachable!()` macro in Rust.
Before:
```
wasm trap: unreachable
wasm backtrace:
[...]
```
After:
```
wasm trap: wasm `unreachable` instruction executed
wasm backtrace:
[...]
```
As reported in #3173, the `select` instruction fails an assertion when it is given `v128` types as operands. This change relaxes the assertion to allow the same type of XMM move that occurs for the f32 and f64 types. This fixes #3173 in the old `lower.rs` code temporarily until the relatively complex `select` lowering can be ported to ISLE.
This commit adds a test from #3337 which is an issue that was fixed
in #3506 due to moving `imul` lowering rules to ISLE which fixed the
underlying issue of accidentally not falling through to the necessary
case for general `i64x2.mul` multiplication.
Closes #3337
This also fixes a bug where `movsd` was incorrectly used with a memory
operand for `insertlane`, causing it to actually zero the upper bits
instead of preserving them.
Note that the insertlane logic still exists in `lower.rs` because it's
used as a helper for a few other instruction lowerings which aren't
migrated to ISLE yet. This commit also adds a helper in ISLE itself for
those other lowerings to use when they get implemented.
Closes#3216
This was my first attempt at transitioning code to ISLE to originally
fix #3327 but that fix has since landed on `main`, so this is instead
now just porting a few operations to ISLE.
Closes #3336
This pulls in a fix for Android, where Android's seccomp policy on older
versions is to make `openat2` irrecoverably crash the process, so we have
to do a version check up front rather than relying on `ENOSYS` to
determine if `openat2` is supported.
And it pulls in the fix for the link errors when multiple versions of
rsix/rustix are linked in.
And it has updates for two crate renamings: rsix has been renamed to
rustix, and unsafe-io has been renamed to io-extras.
When encountering a subprogram that is dead code (as indicated by the
dead code proposal
https://dwarfstd.org/ShowIssue.php?issue=200609.1), don't generate debug
output for the subprogram or any of its children.
This commit fixes a panic which can happen on a module with an invalid
name section where one of the functions named has the index `u32::MAX`.
Previously Wasmtime would create a new `FuncIndex` with the indices
found in the name section but the sentinel `u32::MAX` causes a panic.
Cranelift otherwise limits the number of functions through `wasmparser`
which has a hard limit (lower than `u32::MAX`) so this commit applies a
fix of only recording function names for function indices that are
actually present in the module.
* Add a configuration option to force "static" memories
In poking around at some things earlier today I realized that one
configuration option for memories we haven't exposed from embeddings
like the CLI is to forcibly limit the size of memory growth and force
using a static memory style. This means that the CLI, for example, can't
limit memory growth by default and memories are only limited in size by
what the OS can give and the wasm's own memory type. This configuration
option means that the CLI can artificially limit the size of wasm linear
memories.
Additionally, another motivation for this is testing out the various
codegen ramifications of static vs. dynamic memories. For example, this
is the only way to force a static memory for wasm64 memories that list
no maximum size, which are not static by default.
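A sketch of what using this from an embedding could look like, assuming
the option is exposed on `Config` as `static_memory_forced` alongside
the existing `static_memory_maximum_size` knob (the method name and the
1 GiB cap here are assumptions, not taken from this commit):
```
use wasmtime::{Config, Engine};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Force every linear memory to be allocated in the "static" style,
    // bounded by the configured maximum, instead of a movable dynamic
    // allocation.
    config.static_memory_forced(true);
    config.static_memory_maximum_size(1 << 30); // assumed 1 GiB cap
    let _engine = Engine::new(&config)?;
    Ok(())
}
```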
* Review feedback
When there is a linking error caused by an undefined instance, list all
of the instance's exports in the error message. This will clarify errors for
undefined two-level imports that get desugared to one-level instance
imports under the module-linking proposal.
This commit fixes an issue in Cranelift where legalization of
`heap_addr` instructions (used by wasm to represent heap accesses) could
be off-by-two where loads that should be valid were actually treated as
invalid. The bug here happened in an optimization where tests against
odd constants were being altered to tests against even constants by
subtracting one from the limit instead of adding one to the limit. The
comment around this area has also been updated with a bit more of the
underlying math to help future readers.
* Update the spec reference testsuite submodule
This commit brings in recent updates to the spec test suite. Most of the
changes here were already fixed in `wasmparser` with some tweaks to
esoteric modules, but Wasmtime also gets a bug fix where import
matching for the size of tables/memories is based on the current runtime
size of the table/memory rather than the original type of the
table/memory. This means that during type matching the actual value is
consulted for its size rather than using the minimum size listed in its
type.
* Fix now-missing directories in build script
* Optimize `Func::call` and its C API
This commit is an alternative to #3298 which achieves effectively the
same goal of optimizing the `Func::call` API as well as its C API
sibling of `wasmtime_func_call`. The strategy taken here is different
than #3298 though where a new API isn't created, rather a small tweak to
an existing API is done. Specifically this commit handles the major
sources of slowness with `Func::call` with:
* Looking up the type of a function, used to typecheck the arguments and
  to guide how the results should be loaded, no longer hits the rwlock
  in the `Engine`; instead each `Func` contains its own `FuncType`. This
  can be an unnecessary allocation for funcs not used
with `Func::call`, so this is a downside of this implementation
relative to #3298. A mitigating factor, though, is that instance
exports are loaded lazily into the `Store` and in theory not too many
funcs are active in the store as `Func` objects.
* Temporary storage is amortized with a long-lived `Vec` in the `Store`
rather than allocating a new vector on each call. This is basically
the same strategy as #3294 only applied to different types in
different places. Specifically `wasmtime::Store` now retains a
`Vec<u128>` for `Func::call`, and the C API retains a `Vec<Val>` for
calling `Func::call`.
* Finally, an API breaking change is made to `Func::call` and its type
signature (as well as `Func::call_async`). Instead of returning
`Box<[Val]>` as it did before this function now takes a
`results: &mut [Val]` parameter. This allows the caller to manage the
allocation and we can amortize-remove it in `wasmtime_func_call` by
using space after the parameters in the `Vec<Val>` we're passing in.
This change is naturally a breaking change and we'll want to consider
it carefully, but mitigating factors are that most embeddings are
likely using `TypedFunc::call` instead and this signature taking a
mutable slice better aligns with `Func::new` which receives a mutable
slice for the results.
Overall this change, in the benchmark of "call a nop function from the C
API", is not quite as good as #3298. It's still a bit slower, on the
order of 15ns, because there are lots of capacity checks around vectors
and the type checks are slightly less optimized than before. Overall
though this is still significantly better than today because allocations
and the rwlock to acquire the type information are both avoided. I
personally feel that this change is the best to do because it has less
of an API impact than #3298.
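Concretely, the reworked `Func::call` looks something like this from an
embedder's perspective (a sketch against the post-change signature; the
module and values are illustrative):
```
use wasmtime::{Engine, Instance, Module, Store, Val};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::new(
        &engine,
        r#"(module
             (func (export "add") (param i32 i32) (result i32)
               local.get 0
               local.get 1
               i32.add))"#,
    )?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let add = instance.get_func(&mut store, "add").unwrap();

    // The caller now owns (and can reuse) the results storage instead of
    // receiving a freshly allocated `Box<[Val]>` on every call.
    let mut results = [Val::I32(0)];
    add.call(&mut store, &[Val::I32(1), Val::I32(2)], &mut results)?;
    assert_eq!(results[0].i32(), Some(3));
    Ok(())
}
```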
* Rebase issues
* Use rsix to make system calls in Wasmtime.
`rsix` is a system call wrapper crate that we use in `wasi-common`,
which can provide the following advantages in the rest of Wasmtime:
- It eliminates some `unsafe` blocks in Wasmtime's code. There's
still an `unsafe` block in the library, but this way, the `unsafe`
is factored out and clearly scoped.
- And, it makes error handling more consistent, factoring out code for
checking return values and `io::Error::last_os_error()`, and code that
does `errno::set_errno(0)`.
This doesn't cover *all* system calls; `rsix` doesn't implement
signal-handling APIs, and this doesn't cover calls made through `std` or
crates like `userfaultfd`, `rand`, and `region`.
This test uses `rlimit` which can't be executed in parallel with other
tests. Previously this used `libc::fork` but the call afterwards to
`libc::wait` was racing all other child subprocesses since it would wait
for any child instead of the specific child we were interested in. There
was also difficulty getting the child's output back to the parent on
failure, so this commit simplifies the situation by moving the test to
its own executable where it's the only test.