* Implement defining host functions at the Config level.
This commit introduces defining host functions on the `Config` rather than with
a `Func` tied to a `Store`.
The intention here is to enable a host to define all of its functions once on a
`Config` and then use a `Linker` (or `Store::get_host_func` directly) to
reference those functions when instantiating a module.
This should help improve the performance of use cases where a `Store` is
short-lived and redefining the functions at every module instantiation is a
noticeable performance hit.
This commit adds `add_to_config` to the code generation for Wasmtime's `Wasi`
type.
The new method adds the WASI functions to the given config as host functions.
This commit adds context functions to `Store`: `get` to get a context of a
particular type and `set` to set the context on the store.
For safety, `set` cannot replace an existing context value of the same type.
`Wasi::set_context` was added to set the WASI context for a `Store` when using
`Wasi::add_to_config`.
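A minimal sketch of the intended flow, assuming the names quoted above plus a
config-level `wrap_host_func` analogue of `Func::wrap`; exact signatures may
differ from any released API:

```rust
use anyhow::anyhow;
use wasmtime::{Config, Engine, Store};
use wasmtime_wasi::{Wasi, WasiCtxBuilder};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();

    // Define a host function once, on the config, instead of per-`Store`.
    config.wrap_host_func("host", "double", |x: i32| x * 2);

    // Add the WASI functions to the config as host functions.
    Wasi::add_to_config(&mut config);

    let engine = Engine::new(&config)?;

    // Each short-lived store reuses the functions defined above...
    let store = Store::new(&engine);

    // ...and gets its own WASI context; `set` refuses to replace an
    // existing context of the same type.
    Wasi::set_context(&store, WasiCtxBuilder::new().build()?)
        .map_err(|_| anyhow!("WASI context was already set on this store"))?;

    // Host functions are resolvable from the store when instantiating.
    assert!(store.get_host_func("host", "double").is_some());
    Ok(())
}
```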
* Add `Config::define_host_func_async`.
* Make config "async" rather than store.
This commit moves the concept of "async-ness" to `Config` rather than `Store`.
Note: this is a breaking API change for anyone that's already adopted the new
async support in Wasmtime.
Now `Config::new_async` is used to create an "async" config and any `Store`
associated with that config is inherently "async".
This is needed so that async shared host functions have a sanity check during
their execution (async host functions, like "async" `Func`s, need to be called
with the "async" variants).
* Update async function tests to smoke-test async shared host functions.
This commit updates the async function tests to also smoke-test the shared host
functions, plus `Func::wrap0_async`.
This also changes the "wrap async" method names on `Config` to
`wrap$N_host_func_async` to slightly better match what is on `Func`.
* Move the instance allocator into `Engine`.
This commit moves the instantiated instance allocator from `Config` into
`Engine`.
This makes certain settings in `Config` no longer order-dependent, which is how
`Config` should ideally be.
This also removes the confusing concept of the "default" instance allocator,
instead opting to construct the on-demand instance allocator when needed.
This does alter the semantics of the instance allocator as now each `Engine`
gets its own instance allocator rather than sharing a single one between all
engines created from a configuration.
* Make `Engine::new` return `Result`.
This is a breaking API change for anyone using `Engine::new`.
As creating the pooling instance allocator may fail (the likely cause being
insufficient memory for the provided limits), `Engine::new` now returns a
`Result` instead of panicking during `Engine` creation.
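For illustration, engine creation is now fallible at the call site:

```rust
use wasmtime::{Config, Engine};

fn make_engine(config: &Config) -> anyhow::Result<Engine> {
    // Creating the engine (and thus the pooling instance allocator, if
    // configured) can fail, so the error propagates instead of panicking.
    let engine = Engine::new(config)?;
    Ok(engine)
}
```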
* Remove `Config::new_async`.
This commit removes `Config::new_async` in favor of treating "async support" as
any other setting on `Config`.
The setting is `Config::async_support`.
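A small sketch of the resulting shape:

```rust
use wasmtime::{Config, Engine, Store};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // A plain setting, replacing the removed `Config::new_async` constructor.
    config.async_support(true);
    let engine = Engine::new(&config)?;
    // Any store from this engine is "async": use the `_async` call variants.
    let _store = Store::new(&engine);
    Ok(())
}
```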
* Remove order dependency when defining async host functions in `Config`.
This commit removes the order dependency where async support must be enabled on
the `Config` prior to defining async host functions.
The check is now delayed to when an `Engine` is created from the config.
* Update WASI example to use shared `Wasi::add_to_config`.
This commit updates the WASI example to use `Wasi::add_to_config`.
As only a single store and instance are used in the example, it has no semantic
difference from the previous example, but the intention is to steer users
towards defining WASI on the config and only using `Wasi::add_to_linker` when
more explicit scoping of the WASI context is required.
This commit implements the pooling instance allocator.
The allocation strategy can be set with `Config::with_allocation_strategy`.
The pooling strategy uses the pooling instance allocator to preallocate a
contiguous region of memory for instantiating modules that adhere to various
limits.
The intention of the pooling instance allocator is to reserve as much of the
host address space needed for instantiating modules ahead of time and to reuse
committed memory pages wherever possible.
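A hedged sketch of opting in; the method name follows this log and the
strategy/limit types shown are assumptions that may not match a released API
exactly:

```rust
use wasmtime::{
    Config, Engine, InstanceAllocationStrategy, InstanceLimits, ModuleLimits,
    PoolingAllocationStrategy,
};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Preallocate a contiguous region sized by these limits; modules that
    // exceed the limits will fail to instantiate rather than grow the pool.
    config.with_allocation_strategy(InstanceAllocationStrategy::Pooling {
        strategy: PoolingAllocationStrategy::NextAvailable,
        module_limits: ModuleLimits::default(),
        instance_limits: InstanceLimits::default(),
    });
    // The up-front address-space reservation happens here and can fail.
    let _engine = Engine::new(&config)?;
    Ok(())
}
```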
* Implement support for `async` functions in Wasmtime
This is an implementation of [RFC 2] in Wasmtime, which adds support for
`async`-defined host functions. At a high level support is added by
executing WebAssembly code that might invoke an asynchronous host
function on a separate native stack. When the host function's future is
not ready we switch back to the main native stack to continue execution.
There's a whole bunch of details in this commit, and it's a bit much to
go over them all here in this commit message. The most important changes
here are:
* A new `wasmtime-fiber` crate has been written to manage the low-level
details of stack-switching. Unixes use `mmap` to allocate a stack and
Windows uses the native fibers implementation. We'll surely want to
refactor this to move stack allocation elsewhere in the future. Fibers
are intended to be relatively general with a lot of type parameters to
fling values back and forth across suspension points. The whole crate
is a giant wad of `unsafe` unfortunately and involves handwritten
assembly with custom DWARF CFI directives to boot. Definitely deserves
a close eye in review!
* The `Store` type has two new methods -- `block_on` and `on_fiber`
which bridge between the async and non-async worlds. Lots of unsafe
fiddly bits here as we're trying to communicate context pointers
between disparate portions of the code. Extra eyes and care in review
is greatly appreciated.
* The APIs for binding `async` functions are unfortunately pretty ugly
in `Func`. This is mostly due to language limitations and compiler
bugs (I believe) in Rust. Instead of `Func::wrap` we have a
`Func::wrapN_async` family of methods, and we've also got a whole
bunch of `Func::getN_async` methods now too. It may be worth
rethinking the API of `Func` to try to make the documentation page
actually grok'able.
This isn't super heavily tested but the various tests should suffice to
exercise hopefully nearly all the infrastructure in one form or another.
This is just the start though!
[RFC 2]: https://github.com/bytecodealliance/rfcs/pull/2
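Very roughly, the shape is as sketched below; the exact closure signature of
the `wrapN_async` family (whether it threads extra state, and the boxed-future
return type) is an assumption here:

```rust
use wasmtime::{Caller, Config, Engine, Func, Store};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.async_support(true);
    let engine = Engine::new(&config)?;
    let store = Store::new(&engine);

    // An async host import: while the returned future is pending, the wasm
    // running on its own fiber stack stays suspended along with it.
    let _yield_now = Func::wrap0_async(&store, |_caller: Caller<'_>| {
        Box::new(async {
            // A real host function could await I/O, timers, etc. here.
        })
    });

    // Wasm that might hit an async import must itself be entered through
    // the async variants (e.g. a `call_async`-style method).
    Ok(())
}
```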
* Add wasmtime-fiber to publish script
* Save vector/float registers on ARM too.
* Fix a typo
* Update lock file
* Implement periodically yielding with fuel consumption
This commit implements APIs on `Store` to periodically yield execution
of futures through the consumption of fuel. When fuel runs out a
future's execution is yielded back to the caller, and then upon
resumption fuel is re-injected. The goal of this is to allow cooperative
multi-tasking with futures.
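A sketch using the names described in this log (`add_fuel` per the later
"setting to adding" change; the argument types of `out_of_fuel_async_yield`
are an assumption):

```rust
use wasmtime::{Engine, Store};

fn configure_yielding(engine: &Engine) -> anyhow::Result<Store> {
    // Assumes the engine's config enabled `consume_fuel` and async support.
    let store = Store::new(engine);
    store.add_fuel(10_000)?;
    // Each time 10_000 units of fuel are consumed, yield the future back to
    // the caller and re-inject fuel on resumption; after 100 injections the
    // fuel genuinely runs out, so execution can't spin forever.
    store.out_of_fuel_async_yield(100, 10_000);
    Ok(store)
}
```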
* Fix compile without async
* Save/restore the frame pointer in fiber switching
Turns out this is another callee-saved register!
* Simplify x86_64 fiber asm
Take a leaf out of aarch64's playbook and don't use extra memory to
load/store these arguments; instead, leverage how `wasmtime_fiber_switch`
already loads a bunch of data into registers which we can then
immediately start using on a fiber's start without any extra memory
accesses.
* Add x86 support to wasmtime-fiber
* Add ARM32 support to fiber crate
* Make fiber build file probing more flexible
* Use CreateFiberEx on Windows
* Remove a stray no-longer-used trait declaration
* Don't reach into `Caller` internals
* Tweak async fuel to eventually run out.
With fuel it's probably best to not provide any way to inject infinite
fuel.
* Fix some typos
* Cleanup asm a bit
* Use a shared header file to deduplicate some directives
* Guarantee hidden visibility for functions
* Enable gc-sections on macOS x86_64
* Add `.type` annotations for ARM
* Update lock file
* Fix compile error
* Review comments
* Consume fuel during function execution
This commit adds codegen infrastructure necessary to instrument wasm
code to consume fuel as it executes. Currently nothing is really done
with the fuel, but that'll come in later commits.
The focus of this commit is to implement the codegen infrastructure
necessary to consume fuel and account for fuel consumed correctly.
* Periodically check remaining fuel in wasm JIT code
This commit enables wasm code to periodically check to see if fuel has
run out. When fuel runs out an intrinsic is called which can do what it
needs to do in the event of fuel running out. For now a trap is thrown
to have at least some semantics in synchronous stores, but another
planned use for this feature is for asynchronous stores to periodically
yield back to the host based on fuel running out.
Checks for remaining fuel happen in the same locations as interrupt
checks, which is to say the start of the function as well as loop
headers.
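A small end-to-end sketch of the synchronous-store behavior (API names as
used elsewhere in this log):

```rust
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.consume_fuel(true); // instrument generated code to consume fuel
    let engine = Engine::new(&config)?;
    let store = Store::new(&engine);
    store.add_fuel(10_000)?;

    // An infinite loop: the fuel check at the loop header fires once the
    // injected fuel is exhausted.
    let module = Module::new(
        &engine,
        r#"(module (func (export "spin") (loop (br 0))))"#,
    )?;
    let instance = Instance::new(&store, &module, &[])?;
    let spin = instance.get_typed_func::<(), ()>("spin")?;

    // In a synchronous store, running out of fuel traps.
    assert!(spin.call(()).is_err());
    Ok(())
}
```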
* Improve codegen by caching `*const VMInterrupts`
The shared interrupt value and fuel value are reached through a double
indirection on the vmctx (load through the vmctx and then load through
that pointer). The second pointer in this chain, however, never changes,
so we can alter codegen to account for this, removing some extraneous
load instructions and hopefully even reducing some register pressure.
* Add tests that fuel can abort infinite loops
* More fuzzing with fuel
Use fuel to time out modules in addition to wall-clock time, using the fuzz
input to figure out which.
* Update docs on trapping instructions
* Fix doc links
* Fix a fuzz test
* Change setting fuel to adding fuel
* Fix a doc link
* Squelch some rustdoc warnings
With `Module::{serialize,deserialize}` it should be possible to share
wasmtime modules across machines or CPUs. Serialization, however, embeds
a hash of all configuration values, including cranelift compilation
settings. By default wasmtime's selection of the native ISA would enable
ISA flags according to CPU features available on the host, but the same
CPU features may not be available across two machines.
This commit adds a `Config::cranelift_clear_cpu_flags` method which
allows clearing the target-specific ISA flags that are automatically
inferred by default for the native CPU. Options can then be
incrementally built back up as desired with the `cranelift_other_flag`
method.
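For illustration, a hedged sketch; whether the flag-setting call is fallible
(and the concrete flag name) are assumptions:

```rust
use wasmtime::Config;

fn portable_config() -> anyhow::Result<Config> {
    let mut config = Config::new();
    // Drop the host-inferred ISA flags so a serialized module's config hash
    // doesn't depend on this machine's CPU features...
    config.cranelift_clear_cpu_flags();
    // ...then opt back in to a baseline both machines are known to share.
    config.cranelift_other_flag("has_sse3", "true")?;
    Ok(config)
}
```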
This commit is intended to be the first of many in implementing the
module linking proposal. At this time this builds on #2059 so it
shouldn't land yet. The goal of this commit is to compile bare-bones
modules which use module linking, e.g. those with nested modules.
My hope with module linking is that almost everything in wasmtime only
needs mild refactorings to handle it. The goal is that all per-module
structures are still per-module and at the top level there's just a
`Vec` containing a bunch of modules. That's implemented currently where
`wasmtime::Module` contains `Arc<[CompiledModule]>` and an index of
which one it's pointing to. This should enable
serialization/deserialization of any module in a nested modules
scenario, no matter how you got it.
Tons of features of the module linking proposal are missing from this
commit. For example instantiation flat out doesn't work, nor does
import/export of modules or instances. That'll be coming as future
commits, but the purpose here is to start laying groundwork in Wasmtime
for handling lots of modules in lots of places.
`funcref`s are implemented as `NonNull<VMCallerCheckedAnyfunc>`.
This should be more efficient than using a `VMExternRef` that points at a
`VMCallerCheckedAnyfunc` because it gets rid of an indirection, dynamic
allocation, and some reference counting.
Note that the null function reference is *NOT* a null pointer; it is a
`VMCallerCheckedAnyfunc` that has a null `func_ptr` member.
Part of #929
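A simplified model of that representation (field set abridged):

```rust
use std::ptr::NonNull;

// Abridged: the real struct also carries a type index, a vmctx, etc.
#[repr(C)]
struct VMCallerCheckedAnyfunc {
    func_ptr: *const u8,
}

// A funcref is always a valid pointer to an anyfunc...
type FuncRef = NonNull<VMCallerCheckedAnyfunc>;

// ...so the *null* funcref is encoded by a null `func_ptr`, not by a null
// `FuncRef` pointer.
fn is_null_funcref(f: FuncRef) -> bool {
    unsafe { f.as_ref().func_ptr.is_null() }
}

fn main() {
    let null_fn = VMCallerCheckedAnyfunc { func_ptr: std::ptr::null() };
    assert!(is_null_funcref(NonNull::from(&null_fn)));
}
```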
For host VM code, we use plain reference counting, where cloning increments
the reference count, and dropping decrements it. We can avoid many of the
on-stack increment/decrement operations that typically plague the
performance of reference counting via Rust's ownership and borrowing system.
Moving a `VMExternRef` avoids mutating its reference count, and borrowing it
either avoids the reference count increment or delays it until if/when the
`VMExternRef` is cloned.
When passing a `VMExternRef` into compiled Wasm code, we don't want to do
reference count mutations for every compiled `local.{get,set}`, nor for
every function call. Therefore, we use a variation of **deferred reference
counting**, where we only mutate reference counts when storing
`VMExternRef`s somewhere that outlives the activation: into a global or
table. Simultaneously, we over-approximate the set of `VMExternRef`s that
are inside Wasm function activations. Periodically, we walk the stack at GC
safe points, and use stack map information to precisely identify the set of
`VMExternRef`s inside Wasm activations. Then we take the difference between
this precise set and our over-approximation, and decrement the reference
count for each of the `VMExternRef`s that are in our over-approximation but
not in the precise set. Finally, the over-approximation is replaced with the
precise set.
The `VMExternRefActivationsTable` implements the over-approximated set of
`VMExternRef`s referenced by Wasm activations. Calling a Wasm function and
passing it a `VMExternRef` moves the `VMExternRef` into the table, and the
compiled Wasm function logically "borrows" the `VMExternRef` from the
table. Similarly, `global.get` and `table.get` operations clone the gotten
`VMExternRef` into the `VMExternRefActivationsTable` and then "borrow" the
reference out of the table.
When a `VMExternRef` is returned to host code from a Wasm function, the host
increments the reference count (because the reference is logically
"borrowed" from the `VMExternRefActivationsTable` and the reference count
from the table will be dropped at the next GC).
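A toy model of that GC step (not Wasmtime's actual code): `table` is the
over-approximation, `precise` is what the stack maps report at the safe point:

```rust
use std::collections::{HashMap, HashSet};

// Reference counts keyed by a toy "object id".
fn gc_step(
    table: &mut HashSet<u32>,
    precise: &HashSet<u32>,
    refcounts: &mut HashMap<u32, u32>,
) {
    // Decrement the count for anything we over-approximated as live but
    // that no Wasm activation actually references anymore.
    for id in table.iter() {
        if !precise.contains(id) {
            *refcounts.get_mut(id).unwrap() -= 1;
        }
    }
    // The precise set becomes the new over-approximation.
    *table = precise.clone();
}

fn main() {
    let mut table: HashSet<u32> = [1, 2, 3].into_iter().collect();
    let precise: HashSet<u32> = [2].into_iter().collect();
    let mut refcounts: HashMap<u32, u32> =
        [(1, 1), (2, 2), (3, 1)].into_iter().collect();
    gc_step(&mut table, &precise, &mut refcounts);
    assert_eq!(refcounts[&1], 0); // 1 and 3 were not actually live on-stack
    assert_eq!(refcounts[&2], 2); // 2 stays logically borrowed from the table
}
```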
For more general information on deferred reference counting, see *An
Examination of Deferred Reference Counting and Cycle Detection* by Quinane:
https://openresearch-repository.anu.edu.au/bitstream/1885/42030/2/hon-thesis.pdf
cc #929
Fixes #1804
* Revamp memory management of `InstanceHandle`
This commit fixes a known bug in Wasmtime where an instance could still
be used after it was freed. Unfortunately the fix here is a bit of a
hammer, but it's the best that we can do for now. The changes made in
this commit are:
* A `Store` now stores all `InstanceHandle` objects it ever creates.
This keeps all instances alive unconditionally (along with all host
functions and such) until the `Store` is itself dropped. Note that a
`Store` is reference counted, so basically everything has to be dropped to
drop anything; there's no longer any partial deallocation of instances.
* The `InstanceHandle` type's own reference counting has been removed.
This is largely redundant with what's already happening in `Store`, so
there's no need to manage two reference counts.
* Each `InstanceHandle` no longer tracks its dependencies in terms of
instance handles. This set was actually inaccurate due to dynamic
updates to tables and such, so we needed to revamp it anyway.
* Initialization of an `InstanceHandle` is now deferred until after
`InstanceHandle::new`. This allows storing the `InstanceHandle` before
side-effectful initialization, such as copying element segments or
running the start function, to ensure that regardless of the result of
instantiation the underlying `InstanceHandle` is still available to
persist in storage.
Overall this should fix a known possible way to safely segfault Wasmtime
today (yay!) and it should also fix some flakiness I've seen on CI.
Turns out one of the spec tests
(bulk-memory-operations/partial-init-table-segment.wast) exercises this
functionality and we were hitting a sporadic use-after-free, but only on
Windows.
* Shuffle some APIs around
* Comment weak cycle
* Move most wasmtime tests into one test suite
This commit moves most wasmtime tests into a single test suite which
gets compiled into one executable instead of having lots of test
executables. The goal here is to reduce disk space on CI, and this
should be achieved by having fewer executables which means fewer copies
of `libwasmtime.rlib` linked across binaries on the system. More
importantly though this means that DWARF debug information should only
be in one executable rather than duplicated across many.
* Share more build caches
Globally set `RUSTFLAGS` to `-Dwarnings` instead of individually so all
build steps share the same value.
* Allow some dead code in cranelift-codegen
Prevents having to fix all warnings for all possible feature
combinations, only the main ones which come up.
* Update some debug file paths