Commit Graph

12 Commits

Author SHA1 Message Date
Nick Fitzgerald
f30ce1fe97 externref: implement stack map-based garbage collection
For host VM code, we use plain reference counting, where cloning increments
the reference count and dropping decrements it. Thanks to Rust's ownership
and borrowing system, we can avoid many of the on-stack increment/decrement
operations that typically plague the performance of reference counting.
Moving a `VMExternRef` avoids mutating its reference count, and borrowing it
either avoids the reference count increment entirely or delays it until the
`VMExternRef` is cloned, if it ever is.
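
To illustrate the ownership point, here is a minimal sketch that uses
`std::rc::Rc` as a loose stand-in for `VMExternRef` (an analogy only, not the
actual implementation):

    use std::rc::Rc;

    // `ExternRefLike` stands in for `VMExternRef`: a handle wrapping a
    // reference-counted, host-provided object.
    #[derive(Clone)]
    struct ExternRefLike(Rc<String>);

    fn consume(r: ExternRefLike) -> usize { r.0.len() } // takes ownership (a move)
    fn inspect(r: &ExternRefLike) -> usize { r.0.len() } // borrows

    fn main() {
        let r = ExternRefLike(Rc::new("hello".to_string()));

        // Borrowing never touches the reference count.
        let _ = inspect(&r);

        // Moving transfers ownership without an increment/decrement pair.
        let _ = consume(r);

        // Only an explicit clone bumps the count, and the matching drop
        // lowers it again.
        let a = ExternRefLike(Rc::new("world".to_string()));
        let b = a.clone();
        assert_eq!(Rc::strong_count(&a.0), 2);
        drop(b);
        assert_eq!(Rc::strong_count(&a.0), 1);
    }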

When passing a `VMExternRef` into compiled Wasm code, we don't want to do
reference count mutations for every compiled `local.{get,set}`, nor for
every function call. Therefore, we use a variation of **deferred reference
counting**, where we only mutate reference counts when storing
`VMExternRef`s somewhere that outlives the activation: into a global or
table. Simultaneously, we over-approximate the set of `VMExternRef`s that
are inside Wasm function activations. Periodically, we walk the stack at GC
safe points, and use stack map information to precisely identify the set of
`VMExternRef`s inside Wasm activations. Then we take the difference between
this precise set and our over-approximation, and decrement the reference
count for each of the `VMExternRef`s that are in our over-approximation but
not in the precise set. Finally, the over-approximation is replaced with the
precise set.
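
A rough sketch of that GC step, using stand-in types rather than the real
`VMExternRef` pointers and stack-map machinery:

    use std::collections::{HashMap, HashSet};

    // Illustrative only: objects are identified by an id and their reference
    // counts live in a side table.
    type ObjId = usize;

    fn gc(
        ref_counts: &mut HashMap<ObjId, usize>,
        over_approximation: &mut HashSet<ObjId>,
        precise_roots: HashSet<ObjId>, // found via stack maps at a GC safe point
    ) {
        // References we conservatively assumed were live inside Wasm
        // activations, but which the stack walk did not actually find, give
        // back the count the table was holding on their behalf.
        for stale in over_approximation.difference(&precise_roots) {
            if let Some(count) = ref_counts.get_mut(stale) {
                *count -= 1; // the object is deallocated once this reaches zero
            }
        }

        // The precise set becomes the new over-approximation until the next GC.
        *over_approximation = precise_roots;
    }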

The `VMExternRefActivationsTable` implements the over-approximated set of
`VMExternRef`s referenced by Wasm activations. Calling a Wasm function and
passing it a `VMExternRef` moves the `VMExternRef` into the table, and the
compiled Wasm function logically "borrows" the `VMExternRef` from the
table. Similarly, `global.get` and `table.get` operations clone the retrieved
`VMExternRef` into the `VMExternRefActivationsTable` and then "borrow" the
reference out of the table.

When a `VMExternRef` is returned to host code from a Wasm function, the host
increments the reference count (because the reference is logically
"borrowed" from the `VMExternRefActivationsTable` and the reference count
from the table will be dropped at the next GC).
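
A heavily simplified sketch of this discipline, with made-up names standing in
for the real `VMExternRefActivationsTable` and `VMExternRef` pointer types:

    use std::collections::HashSet;

    type ExternRefPtr = usize; // pretend this is a raw `VMExternRef` pointer

    // The real table is a bump-allocated region plus a spill-over set, and it
    // triggers a GC when it fills up; this is only the shape of the idea.
    #[derive(Default)]
    struct ActivationsTable {
        over_approximation: HashSet<ExternRefPtr>,
    }

    impl ActivationsTable {
        /// Passing a reference into Wasm (or `global.get`/`table.get`
        /// producing one): the table takes over a reference count and the
        /// compiled code only "borrows" the raw pointer from the table.
        fn insert(&mut self, externref: ExternRefPtr) -> ExternRefPtr {
            self.over_approximation.insert(externref);
            externref
        }

        /// Returning a reference from Wasm to the host: the host clones it
        /// (incrementing the count) because the count held here on its
        /// behalf will be released at the next GC.
        fn hand_back_to_host(&self, externref: ExternRefPtr) -> ExternRefPtr {
            debug_assert!(self.over_approximation.contains(&externref));
            externref // the real code bumps the reference count here
        }
    }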

For more general information on deferred reference counting, see *An
Examination of Deferred Reference Counting and Cycle Detection* by Quinane:
https://openresearch-repository.anu.edu.au/bitstream/1885/42030/2/hon-thesis.pdf

cc #929

Fixes #1804
2020-06-15 09:39:37 -07:00
bjorn3
9788b02dd5 Bump object to 0.19.0 (#1767)
* Bump object to 0.19.0
2020-06-12 15:37:04 -05:00
Yury Delendik
e5b81bbc28 Migrating code to object (from faerie) (#1848)
* Using the "object" library everywhere in wasmtime.
* scroll_derive
2020-06-10 11:27:00 -05:00
Yury Delendik
4ebbcb82a9 Refactor CompiledModule to separate compile and linking stages (#1831)
* Refactor how relocs are stored and handled

* Refactor `CompiledModule::instantiate` and `link_module`

* Refactor DWARF creation: split generation and serialization

* Separate DWARF data transform from instantiation

* Remove `LinkContext`
2020-06-09 15:09:48 -05:00
Peter Huene
f7e9f86ba9 Refactor unwind generation in Cranelift.
This commit makes the following changes to unwind information generation in
Cranelift:

* Remove the frame layout change implementation in favor of processing the
  prologue and epilogue instructions when unwind information is requested.
  This also means this work is no longer performed for Windows, which didn't
  utilize it, and it simplifies the prologue and epilogue generation code.

* Remove the unwind sink implementation that required each function's unwind
  information to be represented in its final form. For FDEs, this meant
  writing a complete frame table per function, wasting roughly 20 bytes per
  function on duplicate CIEs. This also enables Cranelift users to collect
  the unwind information and write it as a single frame table.

* For the System V calling convention, unwind information is no longer stored
  in code memory (only the Windows ABI requires that). This allows for more
  compact code memory for modules with many functions.

* Delete some duplicate code relating to frame table generation. Users can
  now simply use gimli to create a frame table from each function's unwind
  information (see the sketch below).
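
As a rough illustration of that last point, the sketch below builds a single
frame table from several functions' code ranges using gimli's `write` API.
The alignment factors, register number, and exact gimli method names are
assumptions here; the real integration fills each FDE with call-frame
instructions derived from Cranelift's unwind information.

    use gimli::write::{
        Address, CommonInformationEntry, EhFrame, EndianVec,
        FrameDescriptionEntry, FrameTable,
    };
    use gimli::{Encoding, Format, LittleEndian, Register};

    /// Build one `.eh_frame` section covering every function, instead of a
    /// complete frame table per function.
    fn build_frame_table(
        functions: &[(u64, u32)], // (start address, length) per function
    ) -> Result<Vec<u8>, gimli::write::Error> {
        let encoding = Encoding {
            format: Format::Dwarf32,
            version: 1,      // CIE version used in .eh_frame
            address_size: 8, // x86-64
        };

        // One shared CIE; these are the usual x86-64 System V values.
        let cie = CommonInformationEntry::new(encoding, 1, -8, Register(16));

        let mut table = FrameTable::default();
        let cie_id = table.add_cie(cie);

        // Each function contributes one FDE referencing the shared CIE.
        for &(start, len) in functions {
            let fde = FrameDescriptionEntry::new(Address::Constant(start), len);
            table.add_fde(cie_id, fde);
        }

        // Serialize the whole table as a single `.eh_frame` section.
        let mut eh_frame = EhFrame(EndianVec::new(LittleEndian));
        table.write_eh_frame(&mut eh_frame)?;
        Ok(eh_frame.0.into_vec())
    }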

Fixes #1181.
2020-04-16 11:15:32 -07:00
Alex Crichton
c4e90f729c wasmtime: Pass around more contexts instead of fields (#1486)
* wasmtime: Pass around more contexts instead of fields

This commit refactors some wasmtime internals to pass around more
context-style structures rather than individual fields of each
structure. The intention here is to make the addition of fields to a
structure easier to plumb throughout the internals of wasmtime.
Currently you need to edit lots of functions to pass lots of parameters;
ideally, after this change, you only need to add a field to one or two
structs and the relevant locations already have access to the information.

Updates in this commit are:

* `debug_info` configuration is now folded into `Tunables`. Additionally
  a `wasmtime::Config` now holds a `Tunables` directly and is passed
  into an internal `Compiler`. Eventually this should allow for direct
  configuration of the `Tunables` attributes from the `wasmtime` API,
  but no new configuration is exposed at this time.

* `ModuleTranslation` is now passed around as a whole rather than
  passing individual components to allow access to all the fields,
  including `Tunables`.

This was motivated by investigating what it would take to optionally allow
loops and similar constructs to be interrupted. That kind of codegen setting
was previously relatively difficult to plumb all the way through; after this
change it is hoped to be largely just an addition to `Tunables`.
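
A trimmed-down illustration of the context-passing pattern; the field and
type names here are examples, not the real `Tunables` or internal `Compiler`
definitions:

    // A single context struct is threaded through instead of a growing list
    // of individual parameters.
    #[derive(Clone, Debug)]
    struct Tunables {
        debug_info: bool,
        interruptable: bool, // a future knob is just one more field here
    }

    struct Compiler {
        tunables: Tunables,
    }

    impl Compiler {
        fn new(tunables: Tunables) -> Self {
            Compiler { tunables }
        }

        fn compile(&self, wasm: &[u8]) -> Vec<u8> {
            // Every stage consults `self.tunables` instead of having each
            // setting plumbed through its own parameter.
            let _ = (wasm, self.tunables.debug_info, self.tunables.interruptable);
            Vec::new()
        }
    }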

* Fix lightbeam compile
2020-04-08 19:02:49 -05:00
Yury Delendik
f76b36f737 Write .debug_frame information (#53)
* Write .debug_frame information

* mv map_reg
2020-03-11 10:22:51 -05:00
Yury Delendik
ba1f10f4d4 Removes panic! from the debug crate. (#1261) 2020-03-09 12:25:38 -05:00
Andrew Brown
1d15054310 Remove the debug crate's hard-coded dependency on register ordering 2020-03-06 10:53:22 -08:00
Alex Crichton
c8ab1e293e Improve robustness of cache loading/storing (#974)
* Improve robustness of cache loading/storing

Today wasmtime incorrectly loads compiled modules from the global cache when
toggling settings such as optimizations. For example, executing
`wasmtime foo.wasm` caches an unoptimized version of the wasm module
globally. If you then execute `wasmtime -O foo.wasm`, it reloads the
unoptimized version from the cache, not realizing the compilation settings
are different, and uses that instead. Naturally, this can lead to very
surprising behavior!

This commit updates how the cache is managed in an attempt to make it
much more robust against these sorts of issues. This takes a leaf out of
rustc's playbook and models the cache with a function that looks like:

    fn load<T: Hash>(
        &self,
        data: T,
        compute: fn(T) -> CacheEntry,
    ) -> CacheEntry;

The goal is to guarantee that all the `data` necessary for `compute` to
produce the cache entry is hashable and becomes part of the hash key.
Previously this was open-coded and manually managed, with items hashed
explicitly; this construction instead guarantees that everything `compute`
could reasonably use to compile the module is stored in `data`, which is
itself hashable.
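
A simplified sketch of such a `load` function, with an in-memory map (and
`&mut self`) standing in for the real on-disk cache:

    use std::collections::hash_map::DefaultHasher;
    use std::collections::HashMap;
    use std::hash::{Hash, Hasher};

    // `CacheEntry` and the in-memory map are placeholders; the real cache
    // serializes entries to disk under a path derived from the key.
    #[derive(Clone)]
    struct CacheEntry(Vec<u8>);

    #[derive(Default)]
    struct Cache {
        entries: HashMap<u64, CacheEntry>,
    }

    impl Cache {
        fn load<T: Hash>(&mut self, data: T, compute: fn(T) -> CacheEntry) -> CacheEntry {
            // Everything `compute` may consult must be reachable through
            // `data`, so hashing `data` yields a key that changes whenever
            // any compilation input (module bytes, settings, ...) changes.
            let mut hasher = DefaultHasher::new();
            data.hash(&mut hasher);
            let key = hasher.finish();

            self.entries
                .entry(key)
                .or_insert_with(|| compute(data))
                .clone()
        }
    }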

This refactoring then resulted in a few workarounds and a few fixes,
including the original issue:

* The `Module` type was split into `Module` and `ModuleLocal`, where only
  the latter is hashed. The previous hash function for a `Module` left out
  items like the `start_func` and didn't hash items like the module's
  imports. Omitting the `start_func` was fine since compilation didn't
  actually use it, but omitting imports was questionable: while compilation
  didn't use the import values, it did use the *number* of imports, which
  should therefore be part of the cache key. The `ModuleLocal` type now
  derives `Hash` to guarantee that all of its contents affect the hash key.

* The `ModuleTranslationState` from `cranelift-wasm` doesn't implement
  `Hash` which means that we have a manual wrapper to work around that.
  This will be fixed with an upstream implementation, since this state
  affects the generated wasm code. Currently this is just a map of
  signatures, which is present in `Module` anyway, so we should be good
  for the time being.

* Hashing `dyn TargetIsa` was also added; previously it was not fully hashed.
  Only the target name was used as part of the cache key, and crucially the
  compilation flags (for example the optimization flags) were omitted.
  Unfortunately the trait object itself is not hashable, so we still have to
  manually write a wrapper to hash it (see the sketch just after this list),
  but we likely want to add some utilities upstream in cranelift itself for
  hashing isa objects. For now, though, we can continue to add hashed fields
  as necessary.
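
A sketch of that manual-wrapper pattern, with a made-up `Isa` type standing
in for `dyn TargetIsa`:

    use std::hash::{Hash, Hasher};

    // `Isa` is a stand-in for `dyn TargetIsa`, which does not implement
    // `Hash` itself.
    struct Isa {
        name: &'static str,
        flags: Vec<(String, String)>, // e.g. ("opt_level", "speed")
    }

    // The manual wrapper hashes the target name *and* the compilation flags,
    // so changing, say, the optimization level changes the cache key.
    struct HashedIsa<'a>(&'a Isa);

    impl Hash for HashedIsa<'_> {
        fn hash<H: Hasher>(&self, state: &mut H) {
            self.0.name.hash(state);
            for (flag, value) in &self.0.flags {
                flag.hash(state);
                value.hash(state);
            }
        }
    }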

Overall the goal was to use the compiler to expose what we're not hashing,
and then organize the data and write the code so that everything relevant is
hashed, and nothing more.

* Update crates/environ/src/module.rs

Co-Authored-By: Peter Huene <peterhuene@protonmail.com>

* Fix lightbeam

* Fix compilation of tests

* Update the expected structure of the cache

* Revert "Update the expected structure of the cache"

This reverts commit 2b53fee426a4e411c313d8c1e424841ba304a9cd.

* Separate the cache dir a bit

* Add a test the cache is busted with opt levels

* rustfmt

Co-authored-by: Peter Huene <peterhuene@protonmail.com>
2020-02-26 16:18:02 -06:00
Alex Crichton
d4fcd32cdc Optimize generated code via the CLI by default (#973)
* Optimize generated code via the CLI by default

This commit updates the behavior of the CLI and adds a new flag. It first
enables the `--optimize` flag by default, ensuring that usage of the
`wasmtime` CLI enables Cranelift optimizations by default. It also adds a
`--opt-level` flag, similar to Rust's `-Copt-level`, which takes a string
argument describing how to optimize. It supports 0/1/2/s, where 1 is
currently the same as 2 but is accepted for consistency with other
compilers. The default setting is `--opt-level=2`.

When the `-O` flag is not passed, the `--opt-level` flag is used; otherwise
`-O` takes precedence in the sense that it implies `--opt-level=2`, the
highest optimization level. The intent is that these flags select the
highest optimization level specified as the final optimization level.
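
A small sketch of that precedence rule; the enum loosely mirrors Cranelift's
optimization levels, and the real CLI wires the result into Cranelift's
`opt_level` setting rather than using this type:

    #[derive(Clone, Copy, Debug, PartialEq)]
    enum OptLevel {
        None,
        Speed,        // -O, --opt-level=1, --opt-level=2
        SpeedAndSize, // --opt-level=s
    }

    fn resolve_opt_level(dash_o: bool, opt_level: Option<&str>) -> OptLevel {
        if dash_o {
            // `-O` takes precedence and implies `--opt-level=2`.
            return OptLevel::Speed;
        }
        // `--opt-level` defaults to 2; 1 is currently treated the same as 2.
        match opt_level.unwrap_or("2") {
            "0" => OptLevel::None,
            "1" | "2" => OptLevel::Speed,
            "s" => OptLevel::SpeedAndSize,
            other => panic!("unsupported optimization level: {}", other),
        }
    }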

* Add inline docs

* fix a test
2020-02-24 15:18:08 -06:00
Yury Delendik
b96b53eafb Test basic DWARF generation (#931)
* Add obj generation with debug info
* Add simple transform check
2020-02-20 11:42:36 -06:00