Commit Graph

19 Commits

Alex Crichton
7a1b7cdf92 Implement RFC 11: Redesigning Wasmtime's APIs (#2897)
Implement Wasmtime's new API as designed by RFC 11. This is quite a large commit that has seen a lot of external discussion, so for more information it's best to read the RFC thread and the PR thread.
2021-06-03 09:10:53 -05:00
Peter Huene
b775b68cfb Make module information lookup from runtime safe.
This commit uses a two-phase lookup of stack map information from modules
rather than giving back raw pointers to stack maps.

First, the runtime looks up information about a module from a pc value; this
returns an `Arc` that the runtime keeps a reference to while completing the
stack map lookup.

Second, it queries the module information for the stack map from a pc value,
getting a reference to the stack map (which is now safe to hold because of the
`Arc` retained by the runtime).
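
To make the shape concrete, here is a minimal sketch of the two-phase lookup,
assuming illustrative type and method names rather than the real
`wasmtime-runtime` definitions:

```rust
use std::sync::Arc;

// Illustrative stand-ins; the real types live in wasmtime-runtime.
struct StackMap {} // offsets of live GC refs at a given pc

struct ModuleInfo {
    stack_maps: Vec<(usize, StackMap)>, // (pc, stack map) pairs
}

impl ModuleInfo {
    // Phase two: resolve the stack map for a pc within this module.
    fn lookup_stack_map(&self, pc: usize) -> Option<&StackMap> {
        self.stack_maps.iter().find(|(p, _)| *p == pc).map(|(_, m)| m)
    }
}

// Phase one: resolve the module for a pc. The caller holds the returned
// `Arc` for as long as it borrows a stack map, which keeps that borrow safe.
fn lookup_module(_pc: usize) -> Option<Arc<ModuleInfo>> {
    None // registry lookup elided in this sketch
}
```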
2021-04-16 12:30:10 -07:00
Peter Huene
ea72c621f0 Remove the stack map registry.
This commit removes the stack map registry and instead uses the existing
information from the store's module registry to lookup stack maps.

A trait is now used to pass the lookup context to the runtime, implemented by
`Store` to do the lookup.

With this change, module registration in `Store` is now entirely limited to
inserting the module into the module registry.
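
A hedged sketch of that trait-based indirection, with invented names standing
in for the real ones:

```rust
use std::sync::Arc;

struct ModuleInfo {} // stands in for per-module metadata (see the sketch above)

// Invented name; the real trait in wasmtime differs in its details.
trait ModuleInfoLookup {
    fn lookup(&self, pc: usize) -> Option<Arc<ModuleInfo>>;
}

// `Store` owns the module registry, so it implements the trait and hands a
// `&dyn ModuleInfoLookup` to the runtime, keeping the runtime registry-agnostic.
```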
2021-04-16 11:08:21 -07:00
Alex Crichton
c91e14d83f Precompute fields in VMOffsets
This commit updates the implementation of `VMOffsets` to frontload all
checked arithmetic on construction of the `VMOffsets` which allows
eliding all checked arithmetic when accessing the fields of `VMOffsets`.
For testing purposes this also adds a new constructor from a new
`VMOffsetsFields` structure, which is a clone of the old definition.

This should help speed up some profile hot spots I've been seeing, where all
the checked arithmetic on field sizes was slowing down the various accessors
during instantiation (which uses `VMOffsets` to initialize various fields of
the `VMContext`).
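
A conceptual sketch of the frontloading, with made-up fields (the real
`VMOffsets` has many more):

```rust
// Illustrative only; field names and sizes are assumptions.
struct VMOffsetsFields {
    num_defined_globals: u32,
}

struct VMOffsets {
    globals_begin: u32,
    memories_begin: u32,
}

impl VMOffsets {
    fn new(fields: VMOffsetsFields) -> VMOffsets {
        // All checked arithmetic happens once, at construction time.
        let globals_begin = 0u32;
        let memories_begin = globals_begin
            .checked_add(fields.num_defined_globals.checked_mul(16).unwrap())
            .unwrap();
        VMOffsets { globals_begin, memories_begin }
    }

    // Accessors are then plain field reads, with no checks on the hot path.
    #[inline]
    fn vmctx_memories_begin(&self) -> u32 {
        self.memories_begin
    }
}
```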
2021-04-08 12:46:17 -07:00
Alex Crichton
c77ea0c5c7 Add some more #[inline] annotations for trivial functions (#2817)
Looking at some profiles, these or their related functions were all
showing up, so this commit adds `#[inline]` to allow cross-crate
inlining by default.
2021-04-08 12:23:54 -05:00
Nick Fitzgerald
ed31f28158 wasmtime-environ: Mark all VM offset functions as #[inline]
Otherwise they won't get inlined across crates unless we enable LTO, and much of
the usage of these functions is across crates (e.g. from the `wasmtime-runtime`
crate).
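
For context, a tiny illustration (not one of the actual functions touched) of
why the annotation matters:

```rust
// Without LTO, calls to this from another crate stay real function calls;
// `#[inline]` makes the body available for cross-crate inlining.
#[inline]
pub fn vmctx_globals_begin(base: u32) -> u32 {
    base // trivial accessor: inlining removes the call overhead entirely
}
```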
2021-04-07 16:17:26 -07:00
Peter Huene
e71ccbf9bc Implement the pooling instance allocator.
This commit implements the pooling instance allocator.

The allocation strategy can be set with `Config::with_allocation_strategy`.

The pooling strategy uses the pooling instance allocator to preallocate a
contiguous region of memory for instantiating modules that adhere to various
limits.

The intention of the pooling instance allocator is to reserve, ahead of time,
as much of the host address space as is needed for instantiating modules, and
to reuse committed memory pages wherever possible.
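
A hedged usage sketch; the strategy and limit type names below are assumptions
about this era of the API and may differ from the real ones:

```rust
use wasmtime::*;

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // `InstanceAllocationStrategy` and the limit types are assumed names,
    // as is the `Result` return of the builder method.
    config.with_allocation_strategy(InstanceAllocationStrategy::Pooling {
        strategy: PoolingAllocationStrategy::NextAvailable,
        module_limits: ModuleLimits::default(),
        instance_limits: InstanceLimits::default(),
    })?;
    let _engine = Engine::new(&config);
    Ok(())
}
```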
2021-03-04 18:18:51 -08:00
Alex Crichton
0e41861662 Implement limiting WebAssembly execution with fuel (#2611)
* Consume fuel during function execution

This commit adds codegen infrastructure necessary to instrument wasm
code to consume fuel as it executes. Currently nothing is really done
with the fuel, but that'll come in later commits.

The focus of this commit is to implement the codegen infrastructure
necessary to consume fuel and account for fuel consumed correctly.

* Periodically check remaining fuel in wasm JIT code

This commit enables wasm code to periodically check whether fuel has
run out. When fuel runs out, an intrinsic is called which can do whatever
is needed in the event of exhaustion. For now a trap is raised to provide
at least some semantics in synchronous stores, but another planned use
for this feature is for asynchronous stores to periodically yield back
to the host when fuel runs out. (A usage sketch follows the change list
below.)

Checks for remaining fuel happen in the same locations as interrupt
checks, which is to say the start of the function as well as loop
headers.

* Improve codegen by caching `*const VMInterrupts`

The location of the shared interrupt value and fuel value is through a
double-indirection on the vmctx (load through the vmctx and then load
through that pointer). The second pointer in this chain, however, never
changes, so we can alter codegen to account for this, removing some
extraneous load instructions and hopefully even reducing some register
pressure.

* Add tests that fuel can abort infinite loops

* More fuzzing with fuel

Use fuel, in addition to wall-clock time, to time out fuzzed modules, with
the fuzz input deciding which mechanism to use.

* Update docs on trapping instructions

* Fix doc links

* Fix a fuzz test

* Change setting fuel to adding fuel

* Fix a doc link

* Squelch some rustdoc warnings
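
A hedged usage sketch of fuel as described above, using API names from this
era of the crate (`consume_fuel`, `add_fuel`), which may have since changed:

```rust
use wasmtime::*;

fn main() -> anyhow::Result<()> {
    // Enabling fuel metering makes codegen insert fuel checks at function
    // entries and loop headers, mirroring the interrupt checks.
    let engine = Engine::new(Config::new().consume_fuel(true));
    let store = Store::new(&engine);
    store.add_fuel(10_000)?;

    let module = Module::new(&engine, r#"(module (func (export "f") (loop (br 0))))"#)?;
    let instance = Instance::new(&store, &module, &[])?;
    let f = instance.get_func("f").unwrap();

    // The infinite loop traps once the store's fuel is exhausted.
    assert!(f.call(&[]).is_err());
    Ok(())
}
```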
2021-01-29 08:57:17 -06:00
Alex Crichton
9ac7d01288 Implement the module linking alias section (#2451)
This commit is intended to do almost everything necessary for processing
the alias section of module linking. Most of this is internal
refactoring, the highlights being:

* Type contents are now stored separately from a `wasmtime_environ::Module`.
  Given that modules can freely alias types and have them used all over
  the place, it seemed best to have one canonical location to type
  storage which everywhere else points to (with indices). A new
  `TypeTables` structure is produced during compilation which is shared
  amongst all member modules in a wasm blob.

* Instantiation is heavily refactored to account for module linking. The
  main gotcha here is that imports are now listed as "initializers". We
  have a sort of pseudo-bytecode-interpreter which interprets the
  initialization of a module. This is more complicated than just
  matching imports at this point because in the module linking proposal
  the module, alias, import, and instance sections may all be
  interleaved. This means that imports aren't guaranteed to show up at
  the beginning of the address space for modules/instances.
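
A conceptual sketch of the initializer idea, with invented names (the real
`wasmtime-environ` types differ):

```rust
// Invented for illustration; not the actual definitions.
enum Initializer {
    // Pull in an item provided by the embedder.
    Import { module: String, field: Option<String> },
    // Alias an export out of a previously created instance.
    AliasInstanceExport { instance: u32, export: String },
    // Instantiate a nested module, with previously resolved arguments.
    Instantiate { module: u32, args: Vec<u32> },
}

// Instantiation runs the initializer list in order, like a tiny bytecode
// interpreter, so interleaved module/alias/import/instance sections are
// handled uniformly.
```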

Otherwise most of the changes here largely fell out from these two
design points. Aliases are recorded as initializers in this scheme.
Copying around type information and/or just knowing type information
during compilation is also pretty easy since everything is just a
pointer into a `TypeTables` and we don't have to actually copy any types
themselves. Lots of various refactorings were necessary to accommodate
these changes.

Tests are hoped to cover a breadth of functionality here, but not
necessarily a depth. There's still one more piece of the module linking
proposal missing which is exporting instances/modules, which will come
in a future PR.

It's also worth noting that there's one large TODO which isn't
implemented in this change and that I plan on opening an issue for.
With module linking, when a set of modules comes back from compilation,
each module has all the trampolines for the entire set of modules. This
is quite a lot of duplicate trampolines across module-linking modules.
We'll want to refactor this at some point to instead have only one set
of trampolines per set of module linking modules and have them shared
from there. I figured it was best to separate out this change, however,
since it's purely related to resource usage, and doesn't impact
non-module-linking modules at all.

cc #2094
2020-12-02 17:24:06 -06:00
Alex Crichton
3d2e0e55f2 Remove the local field of Module (#2091)
This was added long ago at this point to assist with caching, but
caching has moved to a different level such that this wonky second level
of a `Module` isn't necessary. This commit removes the `ModuleLocal`
type to simplify accessors and generally make it easier to work with.
2020-08-04 12:29:16 -05:00
Nick Fitzgerald
58bb5dd953 wasmtime: Add support for func.ref and table.grow with funcrefs
`funcref`s are implemented as `NonNull<VMCallerCheckedAnyfunc>`.

This should be more efficient than using a `VMExternRef` that points at a
`VMCallerCheckedAnyfunc` because it gets rid of an indirection, dynamic
allocation, and some reference counting.

Note that the null function reference is *NOT* a null pointer; it is a
`VMCallerCheckedAnyfunc` that has a null `func_ptr` member.
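
A hedged sketch of that representation (field types abbreviated from the real
definitions):

```rust
use std::ptr::NonNull;

#[repr(C)]
struct VMCallerCheckedAnyfunc {
    func_ptr: *const u8, // null here encodes the null funcref
    type_index: u32,     // signature index checked by call_indirect
    vmctx: *mut u8,      // the owning instance's context pointer
}

// A funcref is a non-null pointer to the anyfunc, with no extra indirection,
// allocation, or reference counting.
type Funcref = NonNull<VMCallerCheckedAnyfunc>;
```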

Part of #929
2020-06-24 10:08:13 -07:00
Nick Fitzgerald
f30ce1fe97 externref: implement stack map-based garbage collection
For host VM code, we use plain reference counting, where cloning increments
the reference count, and dropping decrements it. We can avoid many of the
on-stack increment/decrement operations that typically plague the
performance of reference counting via Rust's ownership and borrowing system.
Moving a `VMExternRef` avoids mutating its reference count, and borrowing it
either avoids the reference count increment or delays it until if/when the
`VMExternRef` is cloned.

When passing a `VMExternRef` into compiled Wasm code, we don't want to do
reference count mutations for every compiled `local.{get,set}`, nor for
every function call. Therefore, we use a variation of **deferred reference
counting**, where we only mutate reference counts when storing
`VMExternRef`s somewhere that outlives the activation: into a global or
table. Simultaneously, we over-approximate the set of `VMExternRef`s that
are inside Wasm function activations. Periodically, we walk the stack at GC
safe points, and use stack map information to precisely identify the set of
`VMExternRef`s inside Wasm activations. Then we take the difference between
this precise set and our over-approximation, and decrement the reference
count for each of the `VMExternRef`s that are in our over-approximation but
not in the precise set. Finally, the over-approximation is replaced with the
precise set.

The `VMExternRefActivationsTable` implements the over-approximated set of
`VMExternRef`s referenced by Wasm activations. Calling a Wasm function and
passing it a `VMExternRef` moves the `VMExternRef` into the table, and the
compiled Wasm function logically "borrows" the `VMExternRef` from the
table. Similarly, `global.get` and `table.get` operations clone the gotten
`VMExternRef` into the `VMExternRefActivationsTable` and then "borrow" the
reference out of the table.

When a `VMExternRef` is returned to host code from a Wasm function, the host
increments the reference count (because the reference is logically
"borrowed" from the `VMExternRefActivationsTable` and the reference count
from the table will be dropped at the next GC).
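
A toy model of the GC step described above, with `usize` ids standing in for
`VMExternRef` pointers:

```rust
use std::collections::HashSet;

// Not the real wasmtime-runtime code; a sketch of the set difference only.
fn gc_step(
    over_approximation: &mut HashSet<usize>,
    precise: HashSet<usize>,
    ref_counts: &mut [u32],
) {
    // Decrement the count for refs we conservatively assumed were live in
    // Wasm activations but which the stack maps say are not.
    for id in over_approximation.difference(&precise) {
        ref_counts[*id] -= 1; // in the real system, hitting zero deallocates
    }
    // Replace the over-approximation with the precise set.
    *over_approximation = precise;
}
```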

For more general information on deferred reference counting, see *An
Examination of Deferred Reference Counting and Cycle Detection* by Quinane:
https://openresearch-repository.anu.edu.au/bitstream/1885/42030/2/hon-thesis.pdf

cc #929

Fixes #1804
2020-06-15 09:39:37 -07:00
Nick Fitzgerald
1898d52966 runtime: Introduce VMExternRef and VMExternData
`VMExternRef` is a reference-counted box for any kind of data that is
external and opaque to running Wasm. Sometimes it might hold a Wasmtime
thing, other times it might hold something from a Wasmtime embedder and is
opaque even to us. It is morally equivalent to `Rc<dyn Any>` in Rust, but
additionally always fits in a pointer-sized word. `VMExternRef` is
non-nullable, and `Option<VMExternRef>` represents `None` as a null pointer.

The one part of `VMExternRef` that can't ever be opaque to us is the
reference count. Even when we don't know what's inside a `VMExternRef`, we
need to be able to manipulate its reference count as we add and remove
references to it. And we need to do this from compiled Wasm code, so it must
be `repr(C)`!

`VMExternRef` itself is just a pointer to a `VMExternData`, which holds the
opaque, boxed value, its reference count, and its vtable pointer.

The `VMExternData` struct is *preceded* by the dynamically-sized value boxed
up and referenced by one or more `VMExternRef`s:

```ignore
     ,-------------------------------------------------------.
     |                                                       |
     V                                                       |
    +----------------------------+-----------+-----------+   |
    | dynamically-sized value... | ref_count | value_ptr |---'
    +----------------------------+-----------+-----------+
                                 | VMExternData          |
                                 +-----------------------+
                                  ^
+-------------+                   |
| VMExternRef |-------------------+
+-------------+                   |
                                  |
+-------------+                   |
| VMExternRef |-------------------+
+-------------+                   |
                                  |
  ...                            ===
                                  |
+-------------+                   |
| VMExternRef |-------------------'
+-------------+
```

The `value_ptr` member always points backwards to the start of the
dynamically-sized value (which is also the start of the heap allocation for
this value-and-`VMExternData` pair). Because it is a `dyn` pointer, it is
fat, and also points to the value's `Any` vtable.

The boxed value and the `VMExternData` footer are held in a single heap
allocation. The layout described above is used to make satisfying the
value's alignment easy: we just need to ensure that the heap allocation used
to hold everything satisfies its alignment. It also ensures that we don't
need a ton of excess padding between the `VMExternData` and the value for
values with large alignment.
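
A hedged Rust sketch of the layout in the diagram; the real definitions add
interior mutability and allocation plumbing:

```rust
use std::any::Any;
use std::ptr::NonNull;

#[repr(C)]
struct VMExternData {
    ref_count: usize,
    // Fat pointer back to the start of the allocation; its metadata is the
    // boxed value's `Any` vtable.
    value_ptr: NonNull<dyn Any>,
}

// One pointer wide, so `Option<VMExternRef>` can use null to represent `None`.
#[repr(transparent)]
struct VMExternRef(NonNull<VMExternData>);
```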
2020-06-01 14:53:10 -07:00
Alex Crichton
c9a0ba81a0 Implement interrupting wasm code, reimplement stack overflow (#1490)
* Implement interrupting wasm code, reimplement stack overflow

This commit is a relatively large change for wasmtime with two main
goals:

* Primarily this enables interrupting executing wasm code with a trap,
  preventing infinite loops in wasm code. Note that resumption of the
  wasm code is not a goal of this commit.

* Additionally this commit reimplements how we handle stack overflow to
  ensure that host functions always have a reasonable amount of stack to
  run on. This fixes an issue where we might longjmp out of a host
  function, skipping destructors.

Lots of various odds and ends end up falling out in this commit once the
two goals above were implemented. The strategy for implementing this was
also lifted from Spidermonkey and existing functionality inside of
Cranelift. I've tried to write up thorough documentation of how this all
works in `crates/environ/src/cranelift.rs` where gnarly-ish bits are.

A brief summary of how this works is that each function and each loop
header now checks to see if they're interrupted. Interrupts and the
stack overflow check are actually folded into one now, where function
headers check to see if they've run out of stack and the sentinel value
used to indicate an interrupt, checked in loop headers, tricks functions
into thinking they're out of stack. An interrupt is basically just
writing a value to a location which is read by JIT code.

When interrupts are delivered and what triggers them has been left up to
embedders of the `wasmtime` crate. The `wasmtime::Store` type has a
method to acquire an `InterruptHandle`, where `InterruptHandle` is a
`Send` and `Sync` type which can travel to other threads (or perhaps
even a signal handler) to get notified from. It's intended that this
provides a good degree of flexibility when interrupting wasm code. Note
though that this does have a large caveat where interrupts don't work
when you're interrupting host code, so if you've got a host import
blocking for a long time an interrupt won't actually be received until
the wasm starts running again.
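
A hedged usage sketch based on the description above (API names from this era
of the crate):

```rust
use std::thread;
use std::time::Duration;
use wasmtime::*;

fn main() -> anyhow::Result<()> {
    let engine = Engine::new(Config::new().interruptable(true));
    let store = Store::new(&engine);
    let handle = store.interrupt_handle()?;

    // `InterruptHandle` is Send + Sync, so another thread (or a signal
    // handler) can deliver the interrupt; JIT code observes it at function
    // entries and loop headers.
    thread::spawn(move || {
        thread::sleep(Duration::from_secs(1));
        handle.interrupt();
    });

    // ... run a possibly-infinite Wasm function here; it traps when interrupted.
    Ok(())
}
```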

Some fallout included from this change is:

* Unix signal handlers are no longer registered with `SA_ONSTACK`.
  Instead they run on the native stack the thread was already using.
  This is possible since stack overflow isn't handled by hitting the
  guard page, but rather it's explicitly checked for in wasm now. Native
  stack overflow will continue to abort the process as usual.

* Unix sigaltstack management is now no longer necessary since we don't
  use it any more.

* Windows no longer has any need to reset guard pages since we no longer
  try to recover from faults on guard pages.

* On all targets probestack intrinsics are disabled since we use a
  different mechanism for catching stack overflow.

* The C API has been updated with interrupts handles. An example has
  also been added which shows off how to interrupt a module.

Closes #139
Closes #860
Closes #900

* Update comment about magical interrupt value

* Store stack limit as a global value, not a closure

* Run rustfmt

* Handle review comments

* Add a comment about SA_ONSTACK

* Use `usize` for type of `INTERRUPTED`

* Parse human-readable durations

* Bring back sigaltstack handling

Allows libstd to print out stack overflow on failure still.

* Add parsing and emission of stack limit-via-preamble

* Fix new example for new apis

* Fix host segfault test in release mode

* Fix new doc example
2020-04-21 11:03:28 -07:00
Alex Crichton
c8ab1e293e Improve robustness of cache loading/storing (#974)
* Improve robustness of cache loading/storing

Today wasmtime incorrectly loads compiled modules from the
global cache when toggling settings such as optimizations. For example
if you execute `wasmtime foo.wasm` that will cache globally an
unoptimized version of the wasm module. If you then execute `wasmtime -O
foo.wasm` it would then reload the unoptimized version from cache, not
realizing the compilation settings were different, and use that instead.
This can lead to very surprising behavior naturally!

This commit updates how the cache is managed in an attempt to make it
much more robust against these sorts of issues. This takes a leaf out of
rustc's playbook and models the cache with a function that looks like:

    fn load<T: Hash>(
        &self,
        data: T,
        compute: fn(T) -> CacheEntry,
    ) -> CacheEntry;

The goal here is that it guarantees that all the `data` necessary to
`compute` the result of the cache entry is hashable and stored into the
hash key entry. This was previously open-coded and manually managed
where items were hashed explicitly, but this construction guarantees
that everything `compute` could reasonably use to compile the module is
stored in `data`, which is itself hashable.
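
A toy model of that shape (not the real wasmtime cache code), showing how the
key is derived from everything `compute` can observe:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

struct Cache {
    entries: HashMap<u64, Vec<u8>>,
}

impl Cache {
    // The key hashes *all* of `data`; since `compute` is a plain fn, it can
    // only use what `data` carries, so every compilation input ends up hashed.
    fn load<T: Hash>(&mut self, data: T, compute: fn(&T) -> Vec<u8>) -> Vec<u8> {
        let mut h = DefaultHasher::new();
        data.hash(&mut h);
        let key = h.finish();
        self.entries
            .entry(key)
            .or_insert_with(|| compute(&data))
            .clone()
    }
}
```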

This refactoring then resulted in a few workarounds and a few fixes,
including the original issue:

* The `Module` type was split into `Module` and `ModuleLocal` where only
  the latter is hashed. The previous hash function for a `Module` left
  out items like the `start_func` and didn't hash items like the imports
  of the module. Omitting the `start_func` was fine since compilation
  didn't actually use it, but omitting imports seemed uncomfortable
  because while compilation didn't use the import values it did use the
  *number* of imports, which seems like it should then be put into the
  cache key. The `ModuleLocal` type now derives `Hash` to guarantee that
  all of its contents affect the hash key.

* The `ModuleTranslationState` from `cranelift-wasm` doesn't implement
  `Hash` which means that we have a manual wrapper to work around that.
  This will be fixed with an upstream implementation, since this state
  affects the generated wasm code. Currently this is just a map of
  signatures, which is present in `Module` anyway, so we should be good
  for the time being.

* Hashing `dyn TargetIsa` was also added, where previously it was not
  fully hashed. Previously only the target name was used as part of the
  cache key, but crucially the flags of compilation were omitted (for
  example the optimization flags). Unfortunately the trait object itself
  is not hashable so we still have to manually write a wrapper to hash
  it, but we likely want to add upstream some utilities to hash isa
  objects into cranelift itself. For now though we can continue to add
  hashed fields as necessary.

Overall the goal here was to use the compiler to expose what we're not
hashing, and then make sure we organize data and write the right code to
ensure everything is hashed, and nothing more.

* Update crates/environ/src/module.rs

Co-Authored-By: Peter Huene <peterhuene@protonmail.com>

* Fix lightbeam

* Fix compilation of tests

* Update the expected structure of the cache

* Revert "Update the expected structure of the cache"

This reverts commit 2b53fee426a4e411c313d8c1e424841ba304a9cd.

* Separate the cache dir a bit

* Add a test the cache is busted with opt levels

* rustfmt

Co-authored-by: Peter Huene <peterhuene@protonmail.com>
2020-02-26 16:18:02 -06:00
XAMPPRocky
907e7aac01 Clippy fixes (#692) 2019-12-24 12:50:07 -08:00
Alex Crichton
39e57e3e9a Migrate back to std:: stylistically (#554)
* Migrate back to `std::` stylistically

This commit moves away from idioms such as `alloc::` and `core::` as
imports of standard data structures and types. Instead it migrates all
crates to uniformly use `std::` for importing standard data structures
and types. This also removes the `std` and `core` features from all
crates to and removes any conditional checking for `feature = "std"`

All of this support was previously added in #407 in an effort to make
wasmtime/cranelift "`no_std` compatible". Unfortunately though this
change comes at a cost:

* The usage of `alloc` and `core` isn't idiomatic. Especially trying to
  dual between types like `HashMap` from `std` as well as from
  `hashbrown` causes imports to be surprising in some cases.
* Unfortunately there was no CI check that crates were `no_std`, so none
  of them actually were. Many crates still imported from `std` or
  depended on crates that used `std`.

It's important to note, however, that **this does not mean that wasmtime
will not run in embedded environments**. Rust's style and idioms today
aren't ready to support this degree of multiplexing, and maintaining it
made it somewhat difficult to keep up with development of `wasmtime`.
Instead it's intended that embedded runtime support will be added as
necessary. Currently only `std` is necessary to build `wasmtime`, and
platforms that natively need to execute `wasmtime` will need to use a
Rust target that supports `std`. Note though that not all of `std` needs
to be supported, but instead much of it could be configured off to
return errors, and `wasmtime` would be configured to gracefully handle
errors.

The goal of this PR is to move `wasmtime` back to idiomatic usage of
features/`std`/imports/etc and help development in the short-term.
Long-term when platform concerns arise (if any) they can be addressed by
moving back to `no_std` crates (but fixing the issues mentioned above)
or ensuring that the target in Rust has `std` available.

* Start filling out platform support doc
2019-11-18 22:04:06 -08:00
Dan Gohman
1a0ed6e388 Use the more-asserts crate in more places.
This provides `assert_le!`, `assert_lt!`, and so on, which print the
values of the operands on failure.
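
For example:

```rust
use more_asserts::{assert_le, assert_lt};

fn main() {
    let (len, capacity) = (3, 8);
    // Unlike a bare `assert!(len <= capacity)`, a failure here prints both
    // operand values, which makes the panic message actionable.
    assert_le!(len, capacity);
    assert_lt!(len, capacity);
}
```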
2019-11-08 15:24:53 -08:00
Dan Gohman
22641de629 Initial reorg.
This is largely the same as #305, but updated for the current tree.
2019-11-08 06:35:40 -08:00