* Consume fuel during function execution
This commit adds the codegen infrastructure necessary to instrument wasm
code to consume fuel as it executes. Nothing is done with the fuel yet;
that will come in later commits. The focus of this commit is consuming
fuel and accounting for the fuel consumed correctly.
* Periodically check remaining fuel in wasm JIT code
This commit enables wasm code to periodically check whether fuel has run
out. When fuel runs out an intrinsic is called which can do whatever is
needed in that event. For now a trap is raised to give synchronous stores
at least some semantics, but another planned use for this feature is for
asynchronous stores to periodically yield back to the host when fuel runs
out.
Checks for remaining fuel happen in the same locations as interrupt
checks, which is to say the start of the function as well as loop
headers.
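As a rough sketch of the intended end-to-end behavior (the `Config::consume_fuel` and `Store::add_fuel` names and signatures here are assumptions based on the API this series adds, not a definitive reference):

```rust
use wasmtime::*;

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.consume_fuel(true); // assumed knob that turns on fuel instrumentation
    let engine = Engine::new(&config);
    let store = Store::new(&engine);
    store.add_fuel(10_000)?; // assumed API granting the store a finite budget

    // An intentionally infinite loop: the fuel check at the loop header
    // eventually sees that fuel ran out and the call traps instead of hanging.
    let module = Module::new(
        &engine,
        r#"(module (func (export "run") (loop (br 0))))"#,
    )?;
    let instance = Instance::new(&store, &module, &[])?;
    let run = instance.get_func("run").unwrap();
    assert!(run.call(&[]).is_err());
    Ok(())
}
```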
* Improve codegen by caching `*const VMInterrupts`
The shared interrupt value and fuel value are reached through a double
indirection on the vmctx (a load through the vmctx and then a load
through that pointer). The second pointer in this chain, however, never
changes, so we can alter codegen to account for this, removing some
extraneous load instructions and hopefully reducing register pressure as
well.
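Conceptually (with illustrative struct definitions, not the real `VMContext`/`VMInterrupts` layout or actual codegen), the change amounts to hoisting the second load:

```rust
// Illustrative stand-ins only; field names are assumptions for this sketch.
struct VMInterrupts {
    fuel_consumed: i64,
}

struct VMContext {
    interrupts: *mut VMInterrupts,
}

unsafe fn charge_fuel(vmctx: *mut VMContext, cost: i64) {
    // The `interrupts` pointer never changes for a given instance, so it can
    // be loaded once per function and reused, turning each fuel update into a
    // single load/store through the cached pointer.
    let interrupts = (*vmctx).interrupts;
    (*interrupts).fuel_consumed += cost;
}
```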
* Add tests that fuel can abort infinite loops
* More fuzzing with fuel
Use fuel, in addition to wall-clock time, to time out modules, with the
fuzz input deciding which mechanism to use.
* Update docs on trapping instructions
* Fix doc links
* Fix a fuzz test
* Change setting fuel to adding fuel
* Fix a doc link
* Squelch some rustdoc warnings
With `Module::{serialize,deserialize}` it should be possible to share
wasmtime modules across machines or CPUs. Serialization, however, embeds
a hash of all configuration values, including cranelift compilation
settings. By default wasmtime's selection of the native ISA would enable
ISA flags according to CPU features available on the host, but the same
CPU features may not be available across two machines.
This commit adds a `Config::cranelift_clear_cpu_flags` method which
allows clearing the target-specific ISA flags that are automatically
inferred by default for the native CPU. Options can then be
incrementally built back up as desired with the `cranelift_other_flag`
method.
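A hedged sketch of how this is intended to be used when sharing a serialized module across machines (the method names are those mentioned above; the exact signatures, and the `Module::deserialize` call, are assumptions):

```rust
use wasmtime::{Config, Engine, Module};

fn main() -> anyhow::Result<()> {
    // Drop the automatically-inferred host CPU flags so serialization doesn't
    // bake in features that another machine may lack.
    let mut config = Config::new();
    config.cranelift_clear_cpu_flags();
    // Specific features can then be re-enabled as desired, e.g.:
    // config.cranelift_other_flag("has_sse3", "true")?;

    let engine = Engine::new(&config);
    let module = Module::new(&engine, "(module)")?;

    // These bytes can be shipped to another machine configured the same way.
    let bytes = module.serialize()?;
    let _roundtrip = Module::deserialize(&engine, &bytes)?;
    Ok(())
}
```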
This commit is intended to be the first of many in implementing the
module linking proposal. At this time this builds on #2059 so it
shouldn't land yet. The goal of this commit is to compile bare-bones
modules which use module linking, e.g. those with nested modules.
My hope with module linking is that almost everything in wasmtime only
needs mild refactorings to handle it. The goal is that all per-module
structures are still per-module and at the top level there's just a
`Vec` containing a bunch of modules. That's currently implemented by
having `wasmtime::Module` contain an `Arc<[CompiledModule]>` plus the
index of the entry it points to. This should enable
serialization/deserialization of any module in a nested-modules
scenario, no matter how you got it.
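A conceptual sketch of that layout (stand-in types, not the actual definitions):

```rust
use std::sync::Arc;

struct CompiledModule {
    /* compiled code, type information, ... */
}

struct Module {
    // Every module produced by one compilation, including nested modules.
    modules: Arc<[CompiledModule]>,
    // The entry this particular handle refers to.
    index: usize,
}

impl Module {
    fn compiled(&self) -> &CompiledModule {
        &self.modules[self.index]
    }
}
```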
Tons of features of the module linking proposal are missing from this
commit. For example instantiation flat out doesn't work, nor does
import/export of modules or instances. That'll be coming as future
commits, but the purpose here is to start laying groundwork in Wasmtime
for handling lots of modules in lots of places.
`funcref`s are implemented as `NonNull<VMCallerCheckedAnyfunc>`.
This should be more efficient than using a `VMExternRef` that points at a
`VMCallerCheckedAnyfunc` because it gets rid of an indirection, dynamic
allocation, and some reference counting.
Note that the null function reference is *NOT* a null pointer; it is a
`VMCallerCheckedAnyfunc` that has a null `func_ptr` member.
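A sketch of that representation (the struct here is abbreviated and illustrative, not the real definition):

```rust
use std::ptr::NonNull;

#[repr(C)]
struct VMCallerCheckedAnyfunc {
    func_ptr: *const u8,
    // ... type index, vmctx, etc. in the real type
}

// A funcref is always a valid, non-null pointer to an anyfunc...
type FuncRef = NonNull<VMCallerCheckedAnyfunc>;

// ...and the "null" function reference is an anyfunc whose `func_ptr` is null.
fn is_null_funcref(r: FuncRef) -> bool {
    unsafe { r.as_ref().func_ptr.is_null() }
}
```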
Part of #929
For host VM code, we use plain reference counting, where cloning increments
the reference count, and dropping decrements it. Thanks to Rust's
ownership and borrowing system, we can avoid many of the on-stack
increment/decrement operations that typically plague the performance of
reference counting.
Moving a `VMExternRef` avoids mutating its reference count, and borrowing it
either avoids the reference count increment or delays it until if/when the
`VMExternRef` is cloned.
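For example (with a stand-in type in place of `VMExternRef`):

```rust
use std::sync::Arc;

// Stand-in: cloning bumps a count, dropping decrements it.
#[derive(Clone)]
struct ExternRef(Arc<()>);

fn borrows(_r: &ExternRef) {}
fn takes_ownership(_r: ExternRef) {}

fn example(r: ExternRef) {
    borrows(&r);        // a borrow: no reference-count traffic at all
    takes_ownership(r); // a move: still no count mutation
    // Only an explicit `.clone()` would have incremented the count, and that
    // clone can be delayed until it is actually needed.
}
```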
When passing a `VMExternRef` into compiled Wasm code, we don't want to do
reference count mutations for every compiled `local.{get,set}`, nor for
every function call. Therefore, we use a variation of **deferred reference
counting**, where we only mutate reference counts when storing
`VMExternRef`s somewhere that outlives the activation: into a global or
table. Simultaneously, we over-approximate the set of `VMExternRef`s that
are inside Wasm function activations. Periodically, we walk the stack at GC
safe points, and use stack map information to precisely identify the set of
`VMExternRef`s inside Wasm activations. Then we take the difference between
this precise set and our over-approximation, and decrement the reference
count for each of the `VMExternRef`s that are in our over-approximation but
not in the precise set. Finally, the over-approximation is replaced with the
precise set.
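A rough sketch of one GC step under this scheme (stand-in types; the real implementation recovers the precise set by walking the stack with stack maps rather than receiving it as an argument):

```rust
use std::collections::HashSet;

// Stand-in for `VMExternRef`, identified here by an opaque id.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ExternRef(usize);

impl ExternRef {
    fn decrement_ref_count(&self) {
        // In the real type this would decrement the heap object's count and
        // free it if the count reaches zero.
    }
}

fn gc(over_approximation: &mut HashSet<ExternRef>, precise_on_stack: HashSet<ExternRef>) {
    // Anything we conservatively assumed might be live, but which no Wasm
    // activation actually references, gets its deferred decrement now.
    for stale in over_approximation.difference(&precise_on_stack) {
        stale.decrement_ref_count();
    }
    // The precise set becomes the new over-approximation.
    *over_approximation = precise_on_stack;
}
```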
The `VMExternRefActivationsTable` implements the over-approximated set of
`VMExternRef`s referenced by Wasm activations. Calling a Wasm function and
passing it a `VMExternRef` moves the `VMExternRef` into the table, and the
compiled Wasm function logically "borrows" the `VMExternRef` from the
table. Similarly, `global.get` and `table.get` operations clone the
retrieved `VMExternRef` into the `VMExternRefActivationsTable` and then
"borrow" the reference out of the table.
When a `VMExternRef` is returned to host code from a Wasm function, the host
increments the reference count (because the reference is logically
"borrowed" from the `VMExternRefActivationsTable` and the reference count
from the table will be dropped at the next GC).
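A sketch of those two paths with stand-in types (the real `VMExternRefActivationsTable` is more sophisticated than a `Vec`):

```rust
use std::sync::Arc;

#[derive(Clone)]
struct ExternRef(Arc<()>);

struct ActivationsTable {
    refs: Vec<ExternRef>,
}

impl ActivationsTable {
    // Passing a reference into Wasm: the table takes ownership of the count,
    // and compiled code logically borrows the reference from here.
    fn insert(&mut self, r: ExternRef) -> &ExternRef {
        self.refs.push(r);
        self.refs.last().unwrap()
    }
}

// Receiving a reference back on the host side: clone (increment) right away,
// because the table's reference will be dropped at the next GC.
fn return_to_host(borrowed_from_table: &ExternRef) -> ExternRef {
    borrowed_from_table.clone()
}
```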
For more general information on deferred reference counting, see *An
Examination of Deferred Reference Counting and Cycle Detection* by Quinane:
https://openresearch-repository.anu.edu.au/bitstream/1885/42030/2/hon-thesis.pdf
cc #929
Fixes #1804
* Revamp memory management of `InstanceHandle`
This commit fixes a known bug in Wasmtime where an instance could still
be used after it was freed. Unfortunately the fix here is a bit of a
hammer, but it's the best that we can do for now. The changes made in
this commit are:
* A `Store` now stores all `InstanceHandle` objects it ever creates.
This keeps all instances alive unconditionally (along with all host
functions and such) until the `Store` is itself dropped. Note that a
`Store` is reference counted, so basically everything has to be dropped
before anything is deallocated; there's no longer any partial deallocation
of instances.
* The `InstanceHandle` type's own reference counting has been removed.
This is largely redundant with what's already happening in `Store`, so
there's no need to manage two reference counts.
* Each `InstanceHandle` no longer tracks its dependencies in terms of
instance handles. This set was actually inaccurate due to dynamic
updates to tables and such, so we needed to revamp it anyway.
* Initialization of an `InstanceHandle` is now deferred until after
`InstanceHandle::new`. This allows storing the `InstanceHandle` before
side-effectful initialization, such as copying element segments or
running the start function, to ensure that regardless of the result of
instantiation the underlying `InstanceHandle` is still available to
persist in storage (see the sketch below).
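A conceptual sketch of that last point, with stand-in types rather than the real internals:

```rust
use std::sync::{Arc, Mutex};

struct Instance;

#[derive(Clone)]
struct InstanceHandle(Arc<Instance>);

impl InstanceHandle {
    fn new() -> InstanceHandle {
        // Allocation only; no side-effectful initialization happens here.
        InstanceHandle(Arc::new(Instance))
    }

    fn initialize(&self) -> Result<(), ()> {
        // Element-segment copies, the start function, etc. would run here and
        // may trap or fail.
        Ok(())
    }
}

#[derive(Default)]
struct Store {
    instances: Mutex<Vec<InstanceHandle>>,
}

fn instantiate(store: &Store) -> Result<InstanceHandle, ()> {
    let handle = InstanceHandle::new();
    // Register the handle *before* initialization so that even a failed or
    // trapping start function leaves the handle owned (and kept alive) by the
    // `Store`.
    store.instances.lock().unwrap().push(handle.clone());
    handle.initialize()?;
    Ok(handle)
}
```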
Overall this should fix a known possible way to safely segfault Wasmtime
today (yay!) and it should also fix some flakiness I've seen on CI. It
turns out one of the spec tests
(bulk-memory-operations/partial-init-table-segment.wast) exercises this
functionality and we were hitting a sporadic use-after-free, but only on
Windows.
* Shuffle some APIs around
* Comment weak cycle
* Move most wasmtime tests into one test suite
This commit moves most wasmtime tests into a single test suite which
gets compiled into one executable instead of having lots of test
executables. The goal here is to reduce disk space on CI, and this
should be achieved by having fewer executables which means fewer copies
of `libwasmtime.rlib` linked across binaries on the system. More
importantly, though, this means that DWARF debug information should only
be in one executable rather than duplicated across many.
* Share more build caches
Globally set `RUSTFLAGS` to `-Dwarnings` instead of individually so all
build steps share the same value.
* Allow some dead code in cranelift-codegen
Prevents having to fix all warnings for all possible feature
combinations, only the main ones which come up.
* Update some debug file paths