This commit is intended to be the first of many in implementing the
module linking proposal. At this time this builds on #2059 so it
shouldn't land yet. The goal of this commit is to compile bare-bones
modules which use module linking, e.g. those with nested modules.
My hope with module linking is that almost everything in wasmtime only
needs mild refactorings to handle it. The goal is that all per-module
structures are still per-module and at the top level there's just a
`Vec` containing a bunch of modules. That's currently implemented by having
`wasmtime::Module` contain an `Arc<[CompiledModule]>` and an index of
which module it points to. This should enable
serialization/deserialization of any module in a nested modules
scenario, no matter how you got it.
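As a rough illustration of that shape (simplified types, not wasmtime's
literal definitions):

```rust
// Sketch only; field and type names are simplified from the real wasmtime
// internals. One compilation produces every nested module's artifacts, and a
// `Module` handle is just a shared pointer plus an index into that list.
use std::sync::Arc;

struct CompiledModule {
    /* per-module compilation results: code, trampolines, metadata, ... */
}

#[derive(Clone)]
struct Module {
    // All modules produced by one compilation, shared between handles.
    modules: Arc<[CompiledModule]>,
    // Which of those modules this handle refers to.
    index: usize,
}

impl Module {
    fn compiled(&self) -> &CompiledModule {
        &self.modules[self.index]
    }
}
```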
Tons of features of the module linking proposal are missing from this
commit. For example instantiation flat out doesn't work, nor does
import/export of modules or instances. That'll be coming as future
commits, but the purpose here is to start laying groundwork in Wasmtime
for handling lots of modules in lots of places.
* Validate modules while translating
This commit is a change to cranelift-wasm to validate each function body
as it is translated. Additionally top-level module translation functions
will perform module validation. This commit builds on changes in
wasmparser to perform module validation intertwined with parsing and
translation. This will be necessary for future wasm features such as
module linking, where the type behind a function index, for example, can
live far away in another module. This also brings a nice benefit: the
binary is parsed only once (instead of having an up-front serial
validation step) and validation can happen in parallel for each function.
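As a rough sketch of the parallel, validate-while-translating shape (using
rayon for the parallelism; `FunctionBody` and `validate_and_translate` are
illustrative stand-ins, not the real cranelift-wasm/wasmparser APIs):

```rust
use rayon::prelude::*;

// Illustrative stand-ins for the real types in cranelift-wasm/wasmparser.
struct FunctionBody { bytes: Vec<u8> }
struct CompiledFunction { /* code, relocations, traps, ... */ }

// Hypothetical per-function entry point: validates the body as it is
// translated, so no separate up-front validation pass over the module runs.
fn validate_and_translate(body: &FunctionBody) -> Result<CompiledFunction, String> {
    let _ = &body.bytes;
    Ok(CompiledFunction {})
}

fn compile_all(bodies: &[FunctionBody]) -> Result<Vec<CompiledFunction>, String> {
    // Each function body is validated and translated independently, in
    // parallel, instead of validating the whole module serially first.
    bodies.par_iter().map(validate_and_translate).collect()
}
```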
Most of the changes in this commit are plumbing to make sure everything
lines up right. The major functional change here is that module
compilation should be faster by validating in parallel (or skipping
function validation entirely in the case of a cache hit). Otherwise from
a user-facing perspective nothing should be that different.
This commit does mean that cranelift's translation now inherently
validates the input wasm module. This means that the SpiderMonkey
integration of cranelift-wasm will also be validating the function as
it's being translated with cranelift. The associated PR for wasmparser
(bytecodealliance/wasmparser#62) provides the necessary tools to create
a `FuncValidator` for Gecko, but this is something I'll want careful
review for before landing!
* Read function operators until EOF
This way we can let the validator take care of any issues with
mismatched `end` instructions and/or trailing operators/bytes.
This commit extracts the two implementations of `Compiler` into two
separate crates, `wasmtime-cranelift` and `wasmtime-lightbeam`. The
`wasmtime-jit` crate then depends on these two and instantiates them
appropriately. The goal here is to start reducing the weight of the
`wasmtime-environ` crate, which currently serves as a common set of
types between all `wasmtime-*` crates. Long-term I'd like to remove the
dependency on Cranelift from `wasmtime-environ`, but that's going to
take a lot more work.
In the meantime I figure it's a good way to get started by separating
out the lightbeam/cranelift function compilers from the
`wasmtime-environ` crate. We can continue to iterate on moving things
out in the future, too.
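Conceptually the split looks something like this (a simplified sketch; the
real trait in wasmtime has a richer signature):

```rust
// Sketch: the function-compiler interface that both backend crates implement.
// Names and signatures are simplified for illustration.
struct FunctionBodyData<'a> { body: &'a [u8] }
struct CompiledFunction { /* machine code + metadata */ }

trait Compiler {
    fn compile(&self, func: FunctionBodyData<'_>) -> Result<CompiledFunction, String>;
}

// `wasmtime-cranelift` provides one implementation...
struct CraneliftCompiler;
impl Compiler for CraneliftCompiler {
    fn compile(&self, _func: FunctionBodyData<'_>) -> Result<CompiledFunction, String> {
        Ok(CompiledFunction {})
    }
}

// ...and `wasmtime-lightbeam` the other; `wasmtime-jit` picks between them.
struct LightbeamCompiler;
impl Compiler for LightbeamCompiler {
    fn compile(&self, _func: FunctionBodyData<'_>) -> Result<CompiledFunction, String> {
        Ok(CompiledFunction {})
    }
}
```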
* Refactor where results of compilation are stored
This commit refactors the internals of compilation in Wasmtime to change
where results of individual function compilation are stored. Previously
compilation resulted in many maps being returned, and compilation
results generally held all these maps together. This commit instead
switches this to have all metadata stored in a `CompiledFunction`
instead of having a separate map for each item that can be stored.
The motivation for this is primarily to help out with future
module-linking-related PRs. What exactly "module level" is depends on
how we interpret modules and how many modules are in play, so it's a bit
easier for operations in wasmtime to work at the function level where
possible. This means that we don't have to pass around multiple
different maps and a function index, but instead just one map or just
one entry representing a compiled function.
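To illustrate the shape of the change (simplified, not the literal wasmtime
types):

```rust
use std::collections::BTreeMap;

type FuncIndex = u32;

// Before (roughly): one map per kind of metadata, all keyed by function index.
struct CompilationBefore {
    code: BTreeMap<FuncIndex, Vec<u8>>,
    unwind_info: BTreeMap<FuncIndex, Vec<u8>>,
    traps: BTreeMap<FuncIndex, Vec<u8>>,
}

// After (roughly): everything about one function lives in one place, so
// callers can pass a single `CompiledFunction` (or one map) around instead
// of many maps plus an index.
struct CompiledFunction {
    code: Vec<u8>,
    unwind_info: Vec<u8>,
    traps: Vec<u8>,
}

struct CompilationAfter {
    functions: BTreeMap<FuncIndex, CompiledFunction>,
}
```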
Additionally this change updates where the parallelism of compilation
happens, pushing it into `wasmtime-jit` instead of `wasmtime-environ`.
This furthers the goal of having `wasmtime-jit` know more about
module-level pieces once module linking is in play. From a user-facing
perspective parallel compilation should behave the same, though.
The ultimate goal of this refactoring is to make it easier for the
results of compilation to actually be a set of wasm modules. This means
we won't be able to have a map-per-metadata where the primary key is the
function index, because there will be many modules within one "object
file".
* Don't clear out fields, just don't store them
Persist a smaller set of fields in `CompilationArtifacts` instead of
trying to clear fields out and dynamically not accessing them.
* Don't re-parse wasm for debuginfo
This commit updates debuginfo parsing to happen during the main
translation of the original wasm module. This avoids parsing the wasm
module twice (at least the section-level headers). Additionally this
ties debuginfo directly to a `ModuleTranslation` which makes it easier
to process debuginfo for nested modules in the upcoming module linking
proposal.
The changes here boil down to taking the `read_debuginfo` function and
merging it into the main, cranelift-driven module translation. Some new
hooks were added to the module environment trait to support this, but most
of the work was integrating with existing hooks.
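Roughly speaking, the kind of hook involved looks like this (an illustrative
sketch, not the literal cranelift-wasm trait or method name):

```rust
// Illustrative sketch of the kind of hook involved; not the literal trait.
trait ModuleEnvironment<'data> {
    /// Called during translation for custom sections such as `.debug_info`,
    /// letting the embedder accumulate DWARF alongside translation instead
    /// of re-walking the binary afterwards.
    fn custom_section(&mut self, name: &'data str, data: &'data [u8]) {
        let _ = (name, data); // default: ignore debug info
    }
}
```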
* Fix tests in debug crate
* move caching to the CompilationArtifacts
* mv cache_config from Compiler to CompiledModule
* hash isa flags
* no cache for wasm2obj
* mv caching to wasmtime crate
* account for each Compiler field when hashing
For host VM code, we use plain reference counting, where cloning increments
the reference count, and dropping decrements it. We can avoid many of the
on-stack increment/decrement operations that typically plague the
performance of reference counting via Rust's ownership and borrowing system.
Moving a `VMExternRef` avoids mutating its reference count, and borrowing it
either avoids the reference count increment or delays it until if/when the
`VMExternRef` is cloned.
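A minimal sketch of that idea (a simplified stand-in for `VMExternRef`, not
the real type): the count only changes on `clone` and `drop`, so moves and
borrows are free:

```rust
use std::cell::Cell;

// Simplified stand-in for a refcounted host value; the count lives in some
// heap allocation that the real `VMExternRef` points at.
struct ExternRef<'heap> {
    count: &'heap Cell<usize>,
}

impl Clone for ExternRef<'_> {
    fn clone(&self) -> Self {
        // Cloning is the only operation that bumps the count ...
        self.count.set(self.count.get() + 1);
        ExternRef { count: self.count }
    }
}

impl Drop for ExternRef<'_> {
    fn drop(&mut self) {
        // ... and dropping is the only one that decrements it.
        self.count.set(self.count.get() - 1);
    }
}

// Passing by value *moves* the reference: no count traffic at all.
fn take(r: ExternRef<'_>) { drop(r); }

// Passing by shared borrow also avoids count traffic; the callee only pays
// for an increment if it decides to `clone` and keep the reference.
fn peek(r: &ExternRef<'_>) { let _ = r; }
```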
When passing a `VMExternRef` into compiled Wasm code, we don't want to do
reference count mutations for every compiled `local.{get,set}`, nor for
every function call. Therefore, we use a variation of **deferred reference
counting**, where we only mutate reference counts when storing
`VMExternRef`s somewhere that outlives the activation: into a global or
table. Simultaneously, we over-approximate the set of `VMExternRef`s that
are inside Wasm function activations. Periodically, we walk the stack at GC
safe points, and use stack map information to precisely identify the set of
`VMExternRef`s inside Wasm activations. Then we take the difference between
this precise set and our over-approximation, and decrement the reference
count for each of the `VMExternRef`s that are in our over-approximation but
not in the precise set. Finally, the over-approximation is replaced with the
precise set.
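In pseudocode-ish Rust, one GC at a safe point looks roughly like this (set
types and names are illustrative):

```rust
use std::collections::HashSet;

type ExternRefId = usize; // stand-in for a `VMExternRef` pointer identity

// One GC at a safe point: reconcile the over-approximation kept in the
// activations table with the precise set recovered from stack maps.
fn gc(
    over_approximation: &mut HashSet<ExternRefId>,
    precise_on_stack: HashSet<ExternRefId>,
    dec_ref: &mut dyn FnMut(ExternRefId),
) {
    // Anything we thought might be live in a Wasm activation, but which the
    // stack walk proved is not, gets its deferred decrement now.
    for id in over_approximation.difference(&precise_on_stack) {
        dec_ref(*id);
    }
    // The precise set becomes the new over-approximation until the next GC.
    *over_approximation = precise_on_stack;
}
```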
The `VMExternRefActivationsTable` implements the over-approximated set of
`VMExternRef`s referenced by Wasm activations. Calling a Wasm function and
passing it a `VMExternRef` moves the `VMExternRef` into the table, and the
compiled Wasm function logically "borrows" the `VMExternRef` from the
table. Similarly, `global.get` and `table.get` operations clone the gotten
`VMExternRef` into the `VMExternRefActivationsTable` and then "borrow" the
reference out of the table.
When a `VMExternRef` is returned to host code from a Wasm function, the host
increments the reference count (because the reference is logically
"borrowed" from the `VMExternRefActivationsTable` and the reference count
from the table will be dropped at the next GC).
For more general information on deferred reference counting, see *An
Examination of Deferred Reference Counting and Cycle Detection* by Quinane:
https://openresearch-repository.anu.edu.au/bitstream/1885/42030/2/hon-thesis.pdf
cc #929. Fixes #1804.
* Refactor how relocs are stored and handled
* refactor CompiledModule::instantiate and link_module
* Refactor DWARF creation: split generation and serialization
* Separate DWARF data transform from instantiation
* rm LinkContext
This commit makes the following changes to unwind information generation in
Cranelift:
* Remove frame layout change implementation in favor of processing the prologue
and epilogue instructions when unwind information is requested. This also
means this work is no longer performed for Windows, which didn't utilize it.
It also helps simplify the prologue and epilogue generation code.
* Remove the unwind sink implementation that required each unwind information
to be represented in final form. For FDEs, this meant writing a
complete frame table per function, which wastes 20 bytes or so for each
function with duplicate CIEs. This also enables Cranelift users to collect the
unwind information and write it as a single frame table.
* For the System V calling convention, the unwind information is no longer stored
in code memory (only the Windows ABI requires that). This allows
for more compact code memory for modules with a lot of functions.
* Delete some duplicate code relating to frame table generation. Users can
now simply use gimli to create a single frame table from each function's
unwind information (see the sketch below).
Fixes #1181.
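Conceptually the new flow looks like the following sketch (hand-rolled types
for illustration; the real code uses gimli's frame-table writing support
rather than these types):

```rust
// Sketch: each function hands back its unwind info, and the embedder emits a
// single frame table with one shared CIE, instead of each function carrying
// a complete table (and duplicated CIE) of its own.
struct UnwindInfo { /* per-function CFI instructions, code range, ... */ }

struct FrameTable {
    shared_cie_emitted: bool,
    fdes: Vec<UnwindInfo>,
}

impl FrameTable {
    fn add_function(&mut self, info: UnwindInfo) {
        // The CIE is shared across all functions; only a small FDE is added
        // per function.
        self.shared_cie_emitted = true;
        self.fdes.push(info);
    }
}
```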
* wasmtime: Pass around more contexts instead of fields
This commit refactors some wasmtime internals to pass around more
context-style structures rather than individual fields of each
structure. The intention here is to make the addition of fields to a
structure easier to plumb throughout the internals of wasmtime.
Currently you need to edit lots of functions to pass lots of parameters,
but ideally after this you'll only need to edit one or two struct fields
and then relevant locations have access to the information already.
Updates in this commit are:
* `debug_info` configuration is now folded into `Tunables`. Additionally
a `wasmtime::Config` now holds a `Tunables` directly and is passed
into an internal `Compiler`. Eventually this should allow for direct
configuration of the `Tunables` attributes from the `wasmtime` API,
but no new configuration is exposed at this time.
* `ModuleTranslation` is now passed around as a whole rather than
passing individual components to allow access to all the fields,
including `Tunables`.
This was motivated by investigating what it would take to optionally
allow loops and such to be interrupted. That sort of codegen setting was
previously relatively difficult to plumb all the way through; after this
change the hope is that it's largely just another addition to `Tunables`
(sketched below).
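For illustration, the rough shape is (field names simplified, not the exact
set in wasmtime):

```rust
// Sketch: a single bag of codegen/runtime knobs that flows from the public
// `Config` down into the compiler, so adding a knob means adding a field
// here rather than threading a new parameter through many functions.
#[derive(Clone, Hash)]
struct Tunables {
    debug_info: bool,
    // Future knobs (e.g. an "interruptable loops" switch) would slot in here.
}

struct Config {
    tunables: Tunables,
}

struct Compiler {
    tunables: Tunables,
}

impl Config {
    fn build_compiler(&self) -> Compiler {
        Compiler { tunables: self.tunables.clone() }
    }
}
```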
* Fix lightbeam compile
* Improve robustness of cache loading/storing
Today wasmtime incorrectly loads compiled modules from the
global cache when toggling settings such as optimizations. For example
if you execute `wasmtime foo.wasm` that will cache globally an
unoptimized version of the wasm module. If you then execute `wasmtime -O
foo.wasm` it would then reload the unoptimized version from cache, not
realizing the compilation settings were different, and use that instead.
This can lead to very surprising behavior naturally!
This commit updates how the cache is managed in an attempt to make it
much more robust against these sorts of issues. This takes a leaf out of
rustc's playbook and models the cache with a function that looks like:
    fn load<T: Hash>(
        &self,
        data: T,
        compute: fn(T) -> CacheEntry,
    ) -> CacheEntry;
The goal here is that it guarantees that all the `data` necessary to
`compute` the result of the cache entry is hashable and stored into the
hash key entry. This was previously open-coded and manually managed
where items were hashed explicitly, but this construction guarantees
that everything reasonable `compute` could use to compile the module is
stored in `data`, which is itself hashable.
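A minimal sketch of the idea using standard-library hashing (not the cache's
actual key format):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Everything `compute` could depend on is bundled into `data`, and `data`'s
// hash becomes the cache key, so changing any compilation input (module,
// flags, tunables, ...) naturally produces a different cache entry.
fn cache_key<T: Hash>(data: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}
```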
This refactoring then resulted in a few workarounds and a few fixes,
including the original issue:
* The `Module` type was split into `Module` and `ModuleLocal` where only
the latter is hashed. The previous hash function for a `Module` left
out items like the `start_func` and didn't hash items like the imports
of the module. Omitting the `start_func` was fine since compilation
didn't actually use it, but omitting imports seemed uncomfortable
because while compilation didn't use the import values it did use the
*number* of imports, which seems like it should then be put into the
cache key. The `ModuleLocal` type now derives `Hash` to guarantee that
all of its contents affect the hash key.
* The `ModuleTranslationState` from `cranelift-wasm` doesn't implement
`Hash` which means that we have a manual wrapper to work around that.
This will be fixed with an upstream implementation, since this state
affects the generated wasm code. Currently this is just a map of
signatures, which is present in `Module` anyway, so we should be good
for the time being.
* Hashing `dyn TargetIsa` was also added, where previously it was not
fully hashed. Previously only the target name was used as part of the
cache key, but crucially the flags of compilation were omitted (for
example the optimization flags). Unfortunately the trait object itself
is not hashable, so we still have to manually write a wrapper to hash
it (a sketch of the idea is below), but we likely want to add upstream
some utilities to hash isa
objects into cranelift itself. For now though we can continue to add
hashed fields as necessary.
Overall the goal here was to use the compiler to expose what we're not
hashing, and then make sure we organize data and write the right code to
ensure everything is hashed, and nothing more.
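As an illustration of the wrapper workaround for non-`Hash` items like
`dyn TargetIsa` (the fields below are stand-ins, not the exact ones wasmtime
hashes):

```rust
use std::hash::{Hash, Hasher};

// A manual wrapper so that the relevant parts of an ISA end up in the cache
// key even though the trait object itself doesn't implement `Hash`.
struct HashedIsa<'a> {
    target_name: &'a str,
    // Textual form of the compilation flags (e.g. opt level) so that
    // changing them busts the cache.
    flags: &'a str,
}

impl Hash for HashedIsa<'_> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.target_name.hash(state);
        self.flags.hash(state);
    }
}
```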
* Update crates/environ/src/module.rs
Co-Authored-By: Peter Huene <peterhuene@protonmail.com>
* Fix lightbeam
* Fix compilation of tests
* Update the expected structure of the cache
* Revert "Update the expected structure of the cache"
This reverts commit 2b53fee426a4e411c313d8c1e424841ba304a9cd.
* Separate the cache dir a bit
* Add a test the cache is busted with opt levels
* rustfmt
Co-authored-by: Peter Huene <peterhuene@protonmail.com>
* Optimize generated code via the CLI by default
This commit updates the behavior of the CLI and adds a new flag. It
first enables the `--optimize` flag by default, ensuring that usage of
the `wasmtime` CLI will enable cranelift optimizations by default. Next
it also adds a `--opt-level` flag which is similar to Rust's
`-Copt-level` in that it takes a string argument describing how to
optimize. It supports 0/1/2/s, where 1 is currently the same as 2 but is
accepted for consistency with other compilers. The default setting is
`--opt-level=2`.
When the `-O` flag is not passed the `--opt-level` flag is used; otherwise
`-O` takes precedence in the sense that it implies `--opt-level=2`, the
highest optimization level. The thinking is that, in general, the highest
optimization level specified by these flags ends up as the final
optimization level.
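Roughly, the mapping from the CLI string to an optimization setting looks
like this sketch (the real CLI parses flags through its argument parser;
`OptLevel` here just mirrors the idea of wasmtime's optimization-level
configuration):

```rust
// Sketch of mapping `--opt-level` values onto optimization settings; `-O`
// simply forces the "2" case.
enum OptLevel { None, Speed, SpeedAndSize }

fn parse_opt_level(s: &str) -> Result<OptLevel, String> {
    match s {
        "0" => Ok(OptLevel::None),
        // "1" is currently treated the same as "2", and only exists for
        // consistency with other compilers' flags.
        "1" | "2" => Ok(OptLevel::Speed),
        "s" => Ok(OptLevel::SpeedAndSize),
        other => Err(format!("unknown opt level: {}", other)),
    }
}
```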
* Add inline docs
* fix a test