Commit Graph

2155 Commits

Alex Crichton
2af358dd9c Add a VMComponentContext type and create it on instantiation (#4215)
* Add a `VMComponentContext` type and create it on instantiation

This commit fills out the `wasmtime-runtime` crate's support for
`VMComponentContext` and creates it as part of the instantiation
process. This moves a few maps that were temporarily allocated in an
`InstanceData` into the `VMComponentContext` and additionally reads the
canonical options data from there instead.

This type still won't be used in its "full glory" until the lowering of
host functions is completely implemented, however, which will be coming
in a future commit.

* Remove `DerefMut` implementation

* Rebase conflicts
2022-06-03 13:34:50 -05:00
Alex Crichton
4c1339a8fa Refactor lifting/lowering to not require a Func (#4216)
When lifting and lowering for component host imports there won't be a
`Func` available to represent the options and such for the lowering.
That means that the current construction of the `ComponentValue` trait
won't be sufficient for host imports. This commit instead refactors the
traits to instead work with an `Options` type where the `Options` type
can be manufactured from thin air out of the arguments passed to the
component host trampolines.

This new `Options` type is also suitable for storing in `WasmStr` and
`WasmList` to continue to be used to refer back to memory after
these lifted values have been given back to the embedder.

Overall this should largely just be shuffling code around and renaming
`func: &Func` to `options: &Options`.
2022-06-03 12:37:59 -05:00
Alex Crichton
3ed6fae7b3 Add trampoline compilation support for lowered imports (#4206)
* Add trampoline compilation support for lowered imports

This commit adds support to the component model implementation for
compiling trampolines suitable for calling host imports. Currently this
is purely just the compilation side of things, modifying the
wasmtime-cranelift crate and additionally filling out a new
`VMComponentOffsets` type (similar to `VMOffsets`). The actual creation
of a `VMComponentContext` is still not performed and will be a
subsequent PR.

Internally, though, some tests are now possible: we can at least assert
that compiling a component and creating everything in-memory doesn't
panic or trip any assertions, so some tests are added here for that as
well.

* Fix some test errors
2022-06-03 10:01:42 -05:00
Alex Crichton
b49c5c878e Implement module imports into components (#4208)
* Implement module imports into components

As a step towards implementing function imports into a component this
commit implements importing modules into a component. This fills out
missing pieces of functionality such as exporting modules as well. The
previous translation code had initial support for translating imported
modules but some of the AST type information was restructured with
feedback from this implementation, namely splitting the
`InstantiateModule` initializer into separate upvar/import variants to
clarify that the item orderings for imports are resolved differently at
runtime.

Much of this commit is also adding infrastructure for any imports at all
into a component. For example, a `Linker` type (analogous to
`wasmtime::Linker`) was added here as well. For now this type is quite
limited due to the inability to define host functions (it can only work
with instances and instances-of-modules) but it's enough to start
writing `*.wast` tests which exercise lots of module-related functionality.

* Fix a warning
2022-06-03 09:33:18 -05:00
Alex Crichton
9f5f978baa Fix double-counting imports in VMOffsets calculations (#4209)
* Fix double-counting imports in `VMOffsets` calculations

This fixes an oversight in the initial creation of `VMOffsets` for a
module to avoid double-counting imported globals, tables, and memories
for calculating the size of the `VMContext`. Prior to this PR imported
items are accidentally also counted as defined items for sizing
calculations meaning that when a memory is imported but not defined, for
example, the `VMContext` will have a space for an inline
`VMMemoryDefinition` when it doesn't need to.
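
To illustrate the corrected arithmetic, here is a minimal sketch (the type and field names are illustrative, not Wasmtime's actual `VMOffsets` internals): only items the module defines itself get an inline definition in the `VMContext`.

```rust
/// Illustrative per-module counts (not the real `VMOffsets` layout).
struct MemoryCounts {
    /// Total memories in the index space: imported + defined.
    total: u32,
    /// Memories satisfied by imports; they live behind a pointer elsewhere,
    /// not inline in this module's `VMContext`.
    imported: u32,
}

impl MemoryCounts {
    /// Only locally defined memories need an inline `VMMemoryDefinition`
    /// slot, so imported ones are subtracted rather than double-counted.
    fn defined(&self) -> u32 {
        self.total - self.imported
    }
}
```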

Auditing everywhere this is used, it appears that the only issue from
this mistake is that `VMContext` is a bit larger than it would otherwise
need to be. Extra slots are uninitialized memory but nothing in Wasmtime
ever actually accesses the memory either, so it should be harmless to
have extra space here. Nevertheless it seems better to shrink the size
as much as possible to avoid wasting space where we can.

* Fix tests
2022-06-02 13:39:38 -05:00
Alex Crichton
0cf0230432 Add dataflow processing to component translation for imports (#4205)
This commit enhances the processing of components to track all the
dataflow for the processing of `canon.lower`'d functions. At the same
time this fills out a few other missing details to component processing
such as aliasing from some kinds of component instances and similar.

The major changes contained within this are the updates to the `info`
submodule which has the AST of component type information. This has been
significantly refactored to prepare for representing lowered functions
and implementing those. The major change is from an `Instantiation` list
to an `Initializer` list which abstractly represents a few other
initialization actions.

This work is split off from my main work to implement component imports
of host functions. This is incomplete in the sense that it doesn't
actually finish everything necessary to define host functions and import
them into components. Instead this is only the changes necessary at the
translation layer (so far). Consequently this commit does not have tests
and also namely doesn't actually include the `VMComponentContext`
initialization and usage. The full body of work is still a bit too messy
to PR just yet so I'm hoping that this is a slimmed-down-enough piece to
adequately be reviewed.
2022-06-01 16:27:49 -05:00
Alex Crichton
f638b390b6 Refactor some internals of wasmtime-cranelift (#4202)
* Split `wasm_to_host_trampoline` into pieces

In the upcoming component model support for imports my plan is to reuse
some of these pieces but not the entirety of the current
`wasm_to_host_trampoline`. In an effort to make that diff smaller this
commit splits up the function preemptively into pieces to get reused
later.

* Delete unused `for_each_libcall` macros

Came across this when working in the object support for cranelift.

* Refactor some object creation details

This commit refactors some of the internals around creating an object
file in the wasmtime-cranelift integration. The old `ObjectBuilder` is
now named `ModuleTextBuilder` and is only used to create the text
section rather than other sections too. This helps maintain the
invariant that the unwind information section is placed directly after
the text section without having an odd API for doing this.

Additionally the unwind information creation is moved externally from
the `ModuleTextBuilder` to a standalone structure. This separate
structure is currently in use in the component model work I'm doing
although I may change that to using the `ModuleTextBuilder` instead. In
any case it seemed nice to encapsulate all of the unwinding information
into one standalone structure.

Finally, the insertion of native debug information has been refactored
to happen in a new `append_dwarf` method to keep all the dwarf-related
stuff together in one place as much as possible.

* Fix a doctest

* Fix a typo
2022-06-01 15:39:53 -05:00
Alex Crichton
d5ce51e8d1 Redesign interface type value representation (#4198)
Prior to this PR a major feature of calling component exports (#4039)
was the usage of the `Value<T>` type. This type represents a value
stored in wasm linear memory (the type `T` stored there). This
implementation had a number of drawbacks though:

* When returning a value it's ABI-specific whether you use `T` or
  `Value<T>` as a return value. If `T` is represented with one wasm
  primitive then you have to return `T`, otherwise the return value must
  be `Value<T>`. This is somewhat non-obvious and leaks ABI-details into
  the API which is unfortunate.

* The `T` in `Value<T>` was somewhat non-obvious. For example a
  wasm-owned string was `Value<String>`. Using `Value<&str>` didn't
  work.

* Working with `Value<T>` was unergonomic in the sense that you had to
  first "pair" it with a `&Store<U>` to get a `Cursor<T>` and then you
  could start reading the value.

* Custom structs and enums, while not implemented yet, were planned to
  be quite wonky where when you had `Cursor<MyStruct>` then you would
  have to import a `CursorMyStructExt` trait generated by a proc-macro
  (think a `#[derive]` on the definition of `MyStruct`) which would
  enable field accessors, returning cursors of all the fields.

* In general there was no "generic way" to load a `T` from memory. Other
  operations like lift/lower/store all had methods in the
  `ComponentValue` trait but load had no equivalent.

None of these drawbacks were deal-breakers per se. When I started
to implement imported functions, though, the `Value<T>` type no longer
worked. The major difference between imports and exports is that when
receiving values from wasm an export returns at most one wasm primitive
whereas an import can yield (through arguments) up to 16 wasm primitives.
This means that if an export returned a string it would always be
`Value<String>` but if an import took a string as an argument there was
actually no way to represent this with `Value<String>` since the value
wasn't actually stored in memory but rather the pointer/length pair is
received as arguments. Overall this meant that `Value<T>` couldn't be
used for arguments-to-imports, which means that altogether something new
would be required.

This PR completely removes the `Value<T>` and `Cursor<T>` types in favor
of a different implementation. The inspiration for this comes from the
fact that all primitives can be both lifted and lowered into wasm, while
only some types can go just one direction. For example
`String` can be lowered into wasm but can't be lifted from wasm. Instead
some sort of "view" into wasm needs to be created during lifting.

One of the realizations from #4039 was that we could leverage
run-time-type-checking to reject static constructions that don't make
sense. For example if an embedder asserts that a wasm function returns a
Rust `String` we can reject that at typechecking time because it's
impossible for a wasm module to ever do that.

The new system of imports/exports in this PR now looks like:

* Type-checking takes into account an `Op` operation which indicates
  whether we'll be lifting or lowering the type. This means that we can
  allow the lowering operation for `String` but disallow the lifting
  operation. While we can't statically rule out an embedder saying that
  a component returns a `String` we can now reject it at runtime and
  disallow it from being called.

* The `ComponentValue` trait now sports a new `load` function. This
  function will load an instance of `Self` from the byte array
  provided. This is implemented for all types but only ever actually
  executed when the `lift` operation is allowed during type-checking.

* The `Lift` associated type is removed since it's now expected that the
  lift operation returns `Self`.

* The `ComponentReturn` trait is now no longer necessary and is removed.
  Instead returns are bounded by `ComponentValue`. During type-checking
  it's required that the return value can be lifted, disallowing, for
  example, returning a `String` or `&str`.

* With `Value` gone there's no need to specify the ABI details of the
  return value, or whether it's communicated through memory or not. This
  means that handling return values through memory is transparently
  handled by Wasmtime.

* Validation is in a sense more eagerly performed now. Whenever a value
  `T` is loaded the entire immediate structure of `T` is loaded and
  validated. Note that recursive validation through memory still does
  not happen, so the contents of lists or strings aren't validated; it's
  just validated that the pointers are in-bounds.

Overall this felt like a much clearer system to work with and should be
much easier to integrate with imported functions as well. The new
`WasmStr` and `WasmList<T>` types can be used in import arguments and
lifted from the immediate arguments provided rather than forcing them to
always be stored in memory.
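
A much-simplified sketch of that direction, under assumed names rather than Wasmtime's actual definitions: lowering is available for every value, lifting/loading returns `Self` directly, and "view" types like `WasmStr` simply remember where the data lives in linear memory.

```rust
/// Greatly simplified stand-in for the real `ComponentValue` trait.
trait ComponentValue: Sized {
    /// Lowering a host value into wasm's representation is always possible.
    fn lower(&self, dst: &mut Vec<u8>);

    /// Lifting/loading produces `Self` directly (no separate `Lift` type);
    /// for types like `String` this direction is rejected at type-check time.
    fn load(bytes: &[u8]) -> Result<Self, String>;
}

/// A "view" into wasm memory standing in for the real `WasmStr`: the string
/// bytes stay in linear memory and are only read (and validated) on demand.
struct WasmStr {
    ptr: u32,
    len: u32,
}
```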
2022-06-01 15:38:36 -05:00
Alex Crichton
704db02e00 Add a first-class StoreId type to Wasmtime (#4204)
* Add a first-class `StoreId` type to Wasmtime

This commit adds a `StoreId` type to uniquely identify a store
internally within Wasmtime. This hasn't been created previously as it
was never really needed but I've run across a case for its usage in the
component model so I've gone ahead and split out a commit to add this type.

While I was here in this file I opted to improve some other
miscellaneous things as well:

* Notes were added to the `Index` impls that unchecked indexing could be
  used in theory if we ever need it one day.
* The check in `Index` for the same store should now be a bit lighter on
  codegen where instead of having a `panic!()` in the codegen for each
  `Index` there's now an out-of-line version which is `#[cold]` (see the
  sketch below). This should improve codegen as calling a function with no
  arguments is slightly more efficient than calling the panic macro with
  one string argument.
* An `assert!` guarded with a `cfg(debug_assertions)` was changed to a
  `debug_assert!`.
* Allocation of a `StoreId` was refactored to a method on the `StoreId`
  itself.
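
A rough sketch of the two patterns above, with assumed names rather than the actual Wasmtime code: id allocation from a global atomic counter as a method on `StoreId`, and a `#[cold]` out-of-line panic so the hot path is just a comparison and a no-argument call.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Illustrative stand-in for the real `StoreId`.
#[derive(Copy, Clone, PartialEq, Eq)]
struct StoreId(u64);

static NEXT_ID: AtomicU64 = AtomicU64::new(0);

impl StoreId {
    /// Allocation lives on `StoreId` itself rather than at each call site.
    fn allocate() -> StoreId {
        StoreId(NEXT_ID.fetch_add(1, Ordering::Relaxed))
    }

    /// Cheap check on the hot path; the panic and its message formatting
    /// stay out of line in the `#[cold]` helper below.
    fn assert_belongs_to(self, store: StoreId) {
        if self != store {
            wrong_store();
        }
    }
}

#[cold]
fn wrong_store() -> ! {
    panic!("object used with the wrong store");
}
```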

* Review comments

* Fix an ordering
2022-06-01 14:46:21 -05:00
Alex Crichton
2a4851ad2b Change some VMContext pointers to () pointers (#4190)
* Change some `VMContext` pointers to `()` pointers

This commit is motivated by my work on the component model
implementation for imported functions. Currently all context pointers in
wasm are `*mut VMContext` but with the component model my plan is to
make some pointers instead along the lines of `*mut VMComponentContext`.
In doing this though one worry I have is breaking what has otherwise
been a core invariant of Wasmtime for quite some time, subtly
introducing bugs by accident.

To help assuage my worry I've opted here to erase knowledge of
`*mut VMContext` where possible. Instead where applicable a context
pointer is simply known as `*mut ()` and the embedder doesn't actually
know anything about this context beyond the value of the pointer. This
will help prevent Wasmtime from accidentally ever trying to interpret
this context pointer as an actual `VMContext` when it might instead be a
`VMComponentContext`.

Overall this was a pretty smooth transition. The main change here is
that the `VMTrampoline` (now sporting more docs) has its first argument
changed to `*mut ()`. The second argument, the caller context, is still
configured as `*mut VMContext` though because all functions are always
called from wasm still. Eventually for component-to-component calls I
think we'll probably "fake" the second argument as the same as the first
argument, losing track of the original caller, as an intentional way of
isolating components from each other.

Along the way there are a few host locations which do actually assume
that the first argument is indeed a `VMContext`. These are valid
assumptions that are upheld by a correct implementation, but I opted
to add a "magic" field to `VMContext` to assert this in debug mode. This
new "magic" field is initialized during normal vmcontext initialization
and it's checked whenever a `VMContext` is reinterpreted as an
`Instance` (but only in debug mode). My hope here is to catch any future
accidental mistakes, if ever.
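
A hedged sketch of what such a debug-mode check can look like (the names, layout, and magic value here are illustrative, not the actual Wasmtime definitions): the magic is written during normal context initialization and re-checked whenever an opaque context pointer is reinterpreted as a core `VMContext`.

```rust
/// Illustrative magic value; the real constant is whatever Wasmtime chose.
const VMCONTEXT_MAGIC: u32 = 0x636f_7265;

#[repr(C)]
struct VMContext {
    /// First field so it can be checked before trusting the rest.
    magic: u32,
    // ... the real per-instance state follows here ...
}

/// Reinterpret an opaque context pointer as a core `VMContext`, catching
/// (in debug builds) a `VMComponentContext` passed here by accident.
unsafe fn vmctx_from_opaque(opaque: *mut ()) -> *mut VMContext {
    let vmctx = opaque as *mut VMContext;
    unsafe {
        debug_assert_eq!((*vmctx).magic, VMCONTEXT_MAGIC);
    }
    vmctx
}
```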

* Use a VMOpaqueContext wrapper

* Fix typos
2022-06-01 11:00:43 -05:00
Alex Crichton
f4b9020913 Change wasm-to-host trampolines to take the values_vec size (#4192)
* Change wasm-to-host trampolines to take the values_vec size

This commit changes the ABI of wasm-to-host trampolines, which are
only used right now for functions created with `Func::new`, to pass
along the size of the `values_vec` argument. Previously the trampoline
simply received `*mut ValRaw` and assumed that it was the appropriate
size. By receiving a size as well we can thread through `&mut [ValRaw]`
internally instead of `*mut ValRaw`.
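
As a simplified sketch of the shape of this ABI change (names are illustrative, and the real trampolines also receive context pointers): the callee is told how many `ValRaw` slots the pointer covers, so the host side can build a bounds-checked slice.

```rust
/// Stand-in for Wasmtime's `ValRaw`; the real type holds every core wasm
/// value representation in one slot.
#[derive(Copy, Clone)]
#[repr(C)]
struct ValRaw(u64);

/// Simplified wasm-to-host trampoline: the new `len` argument is what this
/// change threads through alongside the existing `*mut ValRaw` pointer.
unsafe extern "C" fn host_trampoline(values: *mut ValRaw, len: usize) {
    // Internal code can now hand around `&mut [ValRaw]` with bounds checks
    // instead of indexing a bare raw pointer of assumed size.
    let values: &mut [ValRaw] = unsafe { std::slice::from_raw_parts_mut(values, len) };
    let _ = values;
}
```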

The original motivation for this is that I'm planning to leverage these
trampolines for the component model for host-defined functions. Out of
an abundance of caution of making sure that everything lines up I wanted
to be able to write down asserts about the size received at runtime
compared to the size expected. This overall led me to the desire to
thread this size parameter through on the assumption that it would not
impact performance all that much.

I ran two benchmarks locally from the `call.rs` benchmark and got:

* `sync/no-hook/wasm-to-host - nop - unchecked` - no change
* `sync/no-hook/wasm-to-host - nop-params-and-results - unchecked` - 5%
  slower

This is what I roughly expected in that if nothing actually reads the
new parameter (e.g. no arguments) then threading through the parameter
is effectively free. Otherwise, though, accesses to the `ValRaw`
storage are now bounds-checked internally in Wasmtime instead of assumed
valid, leading to the 5% slowdown (~9.6ns to ~10.3ns). If this
becomes a performance bottleneck for a particular use case then we should
be fine to remove the bounds checking here or to only bounds-check in
debug mode; otherwise I plan on leaving this as-is.

Of particular note this also changes the C API for `*_unchecked`
functions where the C callback now receives the size of the array as
well.

* Add docs
2022-06-01 09:05:37 -05:00
Alex Crichton
4d9e10dae1 Fix panics in the C API related to trap frames (#4196)
The `wasmtime-cpp` test suite uncovered an issue where asking for the
frames of a trap would fail immediately after the trap was created. In
addition to fixing this issue I've also updated the documentation of
`Trap::frames` to indicate when it returns `None`.
2022-05-31 10:39:11 -05:00
Alex Crichton
7d3639522e Capture unresolved backtraces on traps (#4193)
I was running tests recently and was surprised that the `--test all`
test was taking more than a minute to run when I didn't recall it ever
taking more than a minute historically. A bisection pointed out #4183 as
the cause and after re-reviewing I realized I forgot that we capture
unresolved backtraces by default (and don't actually resolve them
anywhere yet but that's a problem for another day) rather than resolved
backtraces. This means that it's intended that we use
`Backtrace::new_unresolved` instead of `Backtrace::new` in the
traphandlers crate.

The reason that tests were running so slowly is that the tests which
deal with deep stacks (e.g. stack overflow) would take forever in
testing as the Rust-based decoding of DWARF information is egregiously
slow in unoptimized mode. I did discover independently that optimizing
these dependencies makes the tests ~6x faster, but that's irrelevant if
we're not symbolicating in the first place.
2022-05-31 09:56:56 -05:00
Pat Hickey
bffce37050 make backtrace collection a Config field rather than a cargo feature (#4183)
* sorta working in runtime

* wasmtime-runtime: get rid of wasm-backtrace feature

* wasmtime: factor to make backtraces recording optional. not configurable yet

* get rid of wasm-backtrace features

* trap tests: now a Trap optionally contains backtrace

* eliminate wasm-backtrace feature

* code review fixes

* ci: no more wasm-backtrace feature

* c_api: backtraces always enabled

* config: unwind required by backtraces and ref types

* plumbed

* test that disabling backtraces works

* code review comments

* fuzzing generator: wasm_backtrace is a runtime config now

* doc fix
2022-05-25 12:25:50 -07:00
Alex Crichton
a02a609528 Make ValRaw fields private (#4186)
* Make `ValRaw` fields private

Force access to go through constructors and accessors to localize the
knowledge about little-endianness. This was spawned by a mistake I made
in #4039 about endianness.

* Fix some tests

* Component model changes
2022-05-24 19:14:29 -05:00
Alex Crichton
140b83597b components: Implement the ability to call component exports (#4039)
* components: Implement the ability to call component exports

This commit is an implementation of the typed method of calling
component exports. This is intended to represent the most efficient way
of calling a component in Wasmtime, similar to what `TypedFunc`
represents today for core wasm.

Internally this contains all the traits and implementations necessary to
invoke component exports with any type signature (e.g. arbitrary
parameters and/or results). The expectation is that for imports we'll
reuse all of this infrastructure except in reverse (arguments and
results will be swapped when defining imports).

Some features of this implementation are:

* Arbitrary type hierarchies are supported
* The Rust-standard `Option`, `Result`, `String`, `Vec<T>`, and tuple
  types all map down to the corresponding type in the component model.
* Basic utf-16 string support is implemented as proof-of-concept to show
  what handling might look like. This will need further testing and
  benchmarking.
* Arguments can be behind "smart pointers", so for example
  `&Rc<Arc<[u8]>>` corresponds to `list<u8>` in interface types.
* Bulk copies from linear memory never happen unless explicitly
  instructed to do so.

The goal of this commit is to create the ability to actually invoke wasm
components. This represents what is expected to be the most performant
way to make these calls, where how WebAssembly is invoked should ideally
be optimal. One major missing piece of this is a `#[derive]`
of some sort to generate Rust types for arbitrary `*.wit` types such as
custom records, variants, flags, unions, etc. The current trait impls
for tuples and `Result<T, E>` are expected to have fleshed out most of
what such a derive would look like.

There are some downsides and missing pieces to this commit and method of
calling components, however, such as:

* Passing `&[u8]` to WebAssembly is currently not optimal. Ideally this
  compiles down to a `memcpy`-equivalent somewhere but that currently
  doesn't happen due to all the bounds checks of copying data into
  memory. I have been unsuccessful so far at getting these bounds checks
  to be removed.
* There is no finalization at this time (the "post return" functionality
  in the canonical ABI). Implementing this should be relatively
  straightforward but at this time requires `wasmparser` changes to
  catch up with the current canonical ABI.
* There is no guarantee that results of a wasm function will be
  validated. As results are consumed they are validated but this means
  that if a function returns an invalid string which the host doesn't look
  at then no trap will be generated. This is probably not the intended
  semantics of hosts in the component model.
* At this time there's no support for memory64 memories, just a bunch of
  `FIXME`s to get around to. It's expected that this won't be too
  onerous, however. Some extra care will need to ensure that the various
  methods related to size/alignment all optimize to the same thing they
  do today (e.g. constants).
* The return value of a typed component function is either `T` or
  `Value<T>`, and it depends on the ABI details of `T` and whether it
  takes up more than one return value slot or not. This is an
  ABI-implementation detail which is being forced through to the API
  layer which is pretty unfortunate. For example if you say the return
  value of a function is `(u8, u32)` then it's a runtime type-checking
  error. I don't know of a great way to solve this at this time.

Overall I'm feeling optimistic about this trajectory of implementing
value lifting/lowering in Wasmtime. While there are a number of
downsides none seem completely insurmountable. There's naturally still a
good deal of work with the component model but this should be a
significant step up towards implementing and testing the component model.

* Review comments

* Write tests for calling functions

This commit adds a new test file for actually executing functions and
testing their results. This is not written as a `*.wast` test yet since
it's not 100% clear if that's the best way to do that for now (given
that dynamic signatures aren't supported yet). The tests themselves
could all largely be translated to `*.wast` testing in the future,
though, if supported.

Along the way a number of minor issues with lowerings were fixed as
the bugs were exposed here.

* Fix an endian mistake

* Fix a typo and the `memory.fill` instruction
2022-05-24 17:02:31 -05:00
Benjamin Bouvier
3a7910ecb0 Reuse Cranelift codegen contexts across wasmtime compilations (#4181) 2022-05-24 11:03:01 +02:00
Benjamin Bouvier
6e828df632 Remove unused SourceLoc in many Mach data structures (#4180)
* Remove unused srcloc in MachReloc

* Remove unused srcloc in MachTrap

* Use `into_iter` on array in bench code to suppress a warning

* Remove unused srcloc in MachCallSite
2022-05-23 09:27:28 -07:00
Alex Crichton
fcf6208750 Initial skeleton of some component model processing (#4005)
* Initial skeleton of some component model processing

This commit is the first of what will likely be many to implement the
component model proposal in Wasmtime. This will be structured as a
series of incremental commits, most of which haven't been written yet.
My hope is to make this incremental and over time to make this easier to
review and easier to test each step in isolation.

Here much of the skeleton of how components are going to work in
Wasmtime is sketched out. This is not a complete implementation of the
component model so it's not all that useful yet, but some things you can
do are:

* Process the type section into a representation amenable for working
  with in Wasmtime.
* Process the module section and register core wasm modules.
* Process the instance section for core wasm modules.
* Process core wasm module imports.
* Process core wasm instance aliasing.
* Ability to compile a component with core wasm embedded.
* Ability to instantiate a component with no imports.
* Ability to get functions from this component.

This is already starting to diverge from the previous module linking
representation where a `Component` will try to avoid unnecessary
metadata about the component and instead internally only have the bare
minimum necessary to instantiate the module. My hope is we can avoid
constructing most of the index spaces during instantiation only for it
to all get thrown away. Additionally I'm predicting that we'll need to
see through processing where possible to know how to generate adapters
and where they are fused.

At this time you can't actually call a component's functions, and that's
the next PR that I would like to make.

* Add tests for the component model support

This commit uses the recently updated wasm-tools crates to add tests for
the component model added in the previous commit. This involved updating
the `wasmtime-wast` crate for component-model changes. Currently the
component support there is quite primitive, but enough to at least
instantiate components and verify the internals of Wasmtime are all
working correctly. Additionally some simple tests for the embedding API
have also been added.
2022-05-20 15:33:18 -05:00
Alex Crichton
a75f383f96 Improve the wasmtime crate's README (#4174)
* Improve the `wasmtime` crate's README

This commit is me finally getting back to #2688 and improving the README
of the `wasmtime` crate. Currently we have a [pretty drab README][drab]
that doesn't really convey what we want about Wasmtime.

While I was doing this I opted to update the feature list of Wasmtime as
well in the main README (which is mirrored into the crate readme),
namely adding a bullet point for "secure" which I felt was missing
relative to how we think about Wasmtime.

Naturally there's a lot of ways to paint this shed, so feedback is of
course welcome on this! (I'm not the best writer myself)

[drab]: https://crates.io/crates/wasmtime/0.37.0

* Expand the "Fast" bullet a bit more

* Reference the book from the wasmtime crate

* Update more security docs

Also merge the sandboxing security page with the main security page to
avoid the empty security page.
2022-05-20 15:33:00 -05:00
Chris Fallin
0824abbae4 Add a basic alias analysis with redundant-load elim and store-to-load forwarding opts. (#4163)
This PR adds a basic *alias analysis*, and optimizations that use it.
This is a "mid-end optimization": it operates on CLIF, the
machine-independent IR, before lowering occurs.

The alias analysis (or maybe more properly, a sort of memory-value
analysis) determines when it can prove that the contents of a particular
memory location are equal to a given SSA value, and when it can, it
replaces any loads of that location with that value.

This subsumes two common optimizations:

* Redundant load elimination: when the same memory address is loaded two
  times, and it can be proven that no intervening operations will write
  to that memory, then the second load is *redundant* and its result
  must be the same as the first. We can use the first load's result and
  remove the second load.

* Store-to-load forwarding: when a load can be proven to access exactly
  the memory written by a preceding store, we can replace the load's
  result with the store's data operand, and remove the load.

Both of these optimizations rely on a "last store" analysis that is a
sort of coloring mechanism, split across disjoint categories of abstract
state. The basic idea is that every memory-accessing operation is put
into one of N disjoint categories; it is disallowed for memory to ever
be accessed by an op in one category and later accessed by an op in
another category. (The frontend must ensure this.)

Then, given this, we scan the code and determine, for each
memory-accessing op, when a single prior instruction is a store to the
same category. This "colors" the instruction: it is, in a sense, a
static name for that version of memory.

This analysis provides an important invariant: if two operations access
memory with the same last-store, then *no other store can alias* in the
time between that last store and these operations. This must-not-alias
property, together with a check that the accessed address is *exactly
the same* (same SSA value and offset), and other attributes of the
access (type, extension mode) are the same, lets us prove that the
results are the same.

Given last-store info, we scan the instructions and build a table from
"memory location" key (last store, address, offset, type, extension) to
known SSA value stored in that location. A store inserts a new mapping.
A load may also insert a new mapping, if we didn't already have one.
Then when a load occurs and an entry already exists for its "location",
we can reuse the value. This will be either RLE or St-to-Ld depending on
where the value came from.
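
A toy sketch of that table, under assumed names (the real pass works over CLIF instructions and also checks dominance before reusing a value): a store installs a mapping for its location key, and a later access with the identical key can reuse the known value, covering both redundant-load elimination and store-to-load forwarding.

```rust
use std::collections::HashMap;

/// Toy "memory location" key: same last store, same address SSA value,
/// same offset and access type means the loaded bits must be identical.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct MemLoc {
    last_store: u32, // "color" from the last-store analysis
    address: u32,    // SSA value of the accessed address
    offset: i32,
    ty: u8,          // access type / extension mode, collapsed to a tag here
}

/// SSA value id, stood in for by a plain integer.
type Value = u32;

#[derive(Default)]
struct MemValues(HashMap<MemLoc, Value>);

impl MemValues {
    /// A store defines what its location holds.
    fn record_store(&mut self, loc: MemLoc, data: Value) {
        self.0.insert(loc, data);
    }

    /// A load either reuses an already-known value (RLE or store-to-load
    /// forwarding, depending on where that value came from) or records its
    /// own result so later redundant loads can reuse it.
    fn process_load(&mut self, loc: MemLoc, result: Value) -> Option<Value> {
        match self.0.get(&loc) {
            Some(&known) => Some(known),
            None => {
                self.0.insert(loc, result);
                None
            }
        }
    }
}
```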

Note that this *does* work across basic blocks: the last-store analysis
is a full iterative dataflow pass, and we are careful to check dominance
of a previously-defined value before aliasing to it at a potentially
redundant load. So we will do the right thing if we only have a
"partially redundant" load (loaded already but only in one predecessor
block), but we will also correctly reuse a value if there is a store or
load above a loop and a redundant load of that value within the loop, as
long as no potentially-aliasing stores happen within the loop.
2022-05-20 13:19:32 -07:00
Alex Crichton
985ed07c3f Improve documentation around ResourceLimiter (#4173)
* Improve documentation around `ResourceLimiter`

This commit takes a pass through the `Store::limiter` method and related
types/traits to improve the documentation with an example and soup up
any recent developments in the documentation.

Closes #4138

* Fix a broken doc link
2022-05-20 12:06:11 -05:00
Alex Crichton
6cf4c95585 Ensure simd is enabled for spectest fuzzing (#4172)
This is required now that the simd specification has been merged into
the upstream specification, so to run the spec tests this must always be
enabled instead of being left to the whims of the fuzzer about whether
to enable it or not.
2022-05-20 09:57:56 -05:00
Alex Crichton
89ccc56e46 Update the wasm-tools family of crates (#4165)
* Update the wasm-tools family of crates

This commit updates these crates as used by Wasmtime for the recently
published versions to pull in changes necessary to support the component
model. I've split this out from #4005 to make it clear what's impacted
here and #4005 can simply rebase on top of this to pick up the necessary
changes.

* More test fixes
2022-05-19 14:13:04 -05:00
Alex Crichton
0a0c232a14 Fix CI for Rust 1.61.0 (#4164)
A new version of rustc was released this morning and we have a few small
breakages on our CI which need fixing:

* A new warning was coming out of the c-api crate about an unneeded
  `unsafe` block.
* The panic message of a test in `cranelift-object` needed updating
  since the standard library changed how it formats strings with the nul
  byte.
2022-05-19 10:44:45 -05:00
Jonathan Coates
f19d8cc851 Run a callback when the interruption epoch is reached (#4152)
* Run a callback when the interruption epoch is reached

Adds Store::epoch_deadline_callback. This accepts a callback which, when
invoked, can mutate the store's contents. The callback can either return
an error (in which case we trap) or return a delta which we'll use to
set the new epoch deadline.

* Add a basic test for epoch interruption callback

* Some small nits

 - Remove use of &mut in the pattern match
 - Return both yields and state from run_and_count_yields_or_trap in
   test code and assert on them separately.
 - Add a test for trapping on a state failure.
2022-05-16 07:28:23 -05:00
Olexiy Kulchitskiy
8d7bccefcb Expose cranelift nan canonicalization config via C API (#4154)
* Add cranelift_nan_canonicalization to c api header

* Add cranelift_nan_canonicalization to capi/config.rs

* Fix func name
2022-05-14 11:28:49 -07:00
Conrad Watt
d3087487ea enable multi-value in spec interpreter fuzzing (#4118) 2022-05-10 10:33:07 -05:00
Saúl Cabrera
52524d258c Expose TrapCode::Interrupt on epoch based interruption (#4105) 2022-05-10 10:27:30 -05:00
Conrad Watt
4e6f3ea899 bump spec interpreter commit to address performance issues (#4113) 2022-05-09 11:09:42 -05:00
Alex Crichton
ccf834b473 Fix an issue where massive memory images are created (#4112)
This commit fixes an issue introduced in #4046 where the checks for
ensuring that the memory initialization image for a module was
constrained in its size failed to trigger and a very small module could
produce an arbitrarily large memory image.

The bug in question was that if a module only had empty data segments at
arbitrarily small and large addresses then the loop which checks whether
or not the image is allowed was skipped entirely since it was seen that
the memory had no data size. The fix here is to skip segments that are
empty to ensure that if the validation loop is skipped then no data
segments will be processed to create the image (and the module won't end
up having an image in the end).
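
A simplified sketch of the fix with illustrative types: empty segments are filtered out up front, so a module whose data segments are all empty neither influences the extent used for the size checks nor produces an image at all.

```rust
/// Illustrative data segment: an offset into linear memory plus its bytes.
struct Segment {
    offset: u64,
    data: Vec<u8>,
}

/// Compute the [start, end) range an initialization image would have to
/// cover, or `None` if no image should be built.
fn image_extent(segments: &[Segment]) -> Option<(u64, u64)> {
    let mut extent: Option<(u64, u64)> = None;
    // Skip empty segments so they can neither inflate the extent nor force
    // an image for a module that writes no actual data.
    for seg in segments.iter().filter(|s| !s.data.is_empty()) {
        let end = seg.offset + seg.data.len() as u64;
        extent = Some(match extent {
            None => (seg.offset, end),
            Some((lo, hi)) => (lo.min(seg.offset), hi.max(end)),
        });
    }
    extent
}
```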
2022-05-09 11:04:56 -05:00
wasmtime-publish
9a6854456d Bump Wasmtime to 0.38.0 (#4103)
Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com>
2022-05-05 13:43:02 -05:00
Ulrich Weigand
e1f7b50a12 Add ISA flag detection for s390x (#4101)
Adds support for s390x to check_compatible_with_isa_flag,
which fixes running the test suite on z15 and later.
2022-05-05 11:26:19 -05:00
Alex Crichton
7fdc616368 Remove the Paged memory initialization variant (#4046)
* Remove the `Paged` memory initialization variant

This commit simplifies the `MemoryInitialization` enum by removing the
`Paged` variant. The `Paged` variant was originally added for uffd, but
that support has now been removed in #4040. This is no longer necessary
but is still used as an intermediate step of becoming a `Static` variant
of initialized memory (which copy-on-write uses). As a result this
commit largely modifies the static initialization of memory steps and
folds the two methods together.

* Apply suggestions from code review

Co-authored-by: Peter Huene <peter@huene.dev>

Co-authored-by: Peter Huene <peter@huene.dev>
2022-05-05 09:44:48 -05:00
Andrew Brown
5c3642fcb1 bench-api: configure execution with a flags string (#4096)
As discussed previously, we need a way to be able to configure Wasmtime when running it in the Sightglass benchmark infrastructure. The easiest way to do this seemed to be to pass a string from Sightglass to the `bench-api` library and parse this in the same way that Wasmtime parses its CLI flags. The structure that contains these flags is `CommonOptions`, so it has been moved to its own crate to be depended on by both `wasmtime-cli` and `wasmtime-bench-api`. Also, this change adds an externally-visible function for parsing a string into `CommonOptions`, which is used for configuring an engine.
2022-05-04 16:30:39 -07:00
Chris Fallin
61dc38c065 Implement Spectre mitigations for table accesses and br_tables. (#4092)
Currently, we have partial Spectre mitigation: we protect heap accesses
with dynamic bounds checks. Specifically, we guard against errant
accesses on the misspeculated path beyond the bounds-check conditional
branch by adding a conditional move that is also dependent on the
bounds-check condition. This data dependency on the condition is not
speculated and thus will always pick the "safe" value (in the heap case,
a NULL address) on the misspeculated path, until the pipeline flushes
and recovers onto the correct path.

This PR uses the same technique both for table accesses -- used to
implement Wasm tables -- and for jumptables, used to implement Wasm
`br_table` instructions.

In the case of Wasm tables, the cmove picks the table base address on
the misspeculated path. This is equivalent to reading the first table
entry. This prevents loads of arbitrary data addresses on the
misspeculated path.

In the case of `br_table`, the cmove picks index 0 on the misspeculated
path. This is safer than allowing a branch to an address loaded from an
index under misspeculation (i.e., it preserves control-flow integrity
even under misspeculation).
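
Purely as a conceptual sketch of which value gets picked (the real mitigation is a conditional move emitted during lowering/legalization, not Rust source, and a cmove has no branch to misspeculate past):

```rust
/// Conceptual only: the address used for a table access is made dependent
/// on the bounds-check condition, so a misspeculated out-of-bounds index
/// still resolves to the table base (i.e. reading entry 0) rather than an
/// arbitrary address.
fn guarded_table_addr(base: usize, index: usize, bound: usize, entry_size: usize) -> usize {
    let in_bounds = index < bound; // the real code also branches/traps on this
    let computed = base + index * entry_size;
    if in_bounds { computed } else { base } // emitted as a cmove, not a branch
}
```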

The table mitigation is controlled by a Cranelift setting, on by
default. The br_table mitigation is always on, because it is part of the
single lowering pseudoinstruction. In both cases, the impact should be
minimal: a single extra cmove in a (relatively) rarely-used operation.

The table mitigation is architecture-independent (happens during
legalization); the br_table mitigation has been implemented for both x64
and aarch64. (I don't know enough about s390x to implement this
confidently there, but would happily review a PR to do the same on that
platform.)
2022-05-02 11:19:16 -07:00
Andrew Brown
3dbdcfa220 runtime: refactor Memory to always use Box<dyn RuntimeLinearMemory> (#4086)
While working with the runtime `Memory` object, it became clear that
some refactoring was needed. In order to implement shared memory from
the threads proposal, we must be able to atomically change the memory
size. Previously, the split into variants, `Memory::Static` and
`Memory::Dynamic`, meant that any attempt to lock forced us to duplicate
logic in various places.

This change moves `enum Memory { Static..., Dynamic... }` to simply
`struct Memory(Box<dyn RuntimeLinearMemory>)`. A new type,
`ExternalMemory`, takes the place of `Memory::Static` and also
implements the `RuntimeLinearMemory` trait, allowing `Memory` to contain
the same two options as before: `MmapMemory` for `Memory::Dynamic` and
`ExternalMemory` for `Memory::Static`. To interface with the
`PoolingAllocator`, this change also required the ability to downcast to
the internal representation.
2022-04-29 08:12:38 -07:00
Alex Crichton
5fe06f7345 Update to clap 3.* (#4082)
* Update to clap 3.0

This commit migrates all CLI commands internally used in this project
from structopt/clap2 to clap 3. The intent here is to ensure that we're
using maintained versions of the dependencies as structopt and clap 2
are less maintained nowadays. Most transitions were pretty
straightforward and mostly dealing with structopt/clap3 differences.

* Fix a number of `cargo deny` errors

This commit fixes a few errors around duplicate dependencies which
arose from the prior update to clap3. This also uses a new feature in
`deny.toml`, `skip-tree`, which allows having a bit more targeted
ignores for skips of duplicate version checks. This showed a few more
locations in Wasmtime itself where we could update some dependencies.
2022-04-28 12:47:12 -05:00
Alex Crichton
871a9d93f2 Update some dependencies in Cargo.lock (#4081)
* Run a `cargo update` over our dependencies

This'll notably fix a `cargo audit` error where we have a pinned version
of the `regex` crate which has a CVE assigned to it.

* Update to `object` and `hashbrown` crates

Prune some duplicate versions showing up from the previous `cargo update`
2022-04-28 11:12:58 -05:00
Anton Kirilov
a1e4b4b521 Enable AArch64 processor feature detection unconditionally (#4034)
std::arch::is_aarch64_feature_detected!() is now part of stable
Rust, so we can always use it.

Copyright (c) 2022, Arm Limited.
2022-04-28 09:27:32 -05:00
Dan Gohman
321124ad21 Update to rustix 0.33.7. (#4052)
This pulls in the fix for bytecodealliance/rustix#285, which fixes a
failure in the WASI `time` APIs on powerpc64.
2022-04-19 16:27:56 -07:00
Alex Crichton
90791a0e32 Reduce contention on the global module rwlock (#4041)
* Reduce contention on the global module rwlock

This commit intends to close #4025 by reducing contention on the global
rwlock Wasmtime has for module information during instantiation and
dropping a store. Currently registration of a module into this global
map happens during instantiation, but this can be a hot path as
embeddings may want to, in parallel, instantiate modules.

Instead this switches to a strategy of inserting into the global module
map when a `Module` is created and then removing it from the map when
the `Module` is dropped. Registration in a `Store` now preserves the
entire `Module` within the store as opposed to trying to only save it
piecemeal. In reality the only piece that wasn't saved within a store
was the `TypeTables` which was pretty inconsequential for core wasm
modules anyway.

This means that instantiation should now clone a singular `Arc` into a
`Store` per `Module` (previously it cloned two) with zero management on
the global rwlock as that happened at `Module` creation time.
Additionally dropping a `Store` again involves zero rwlock management
and only a single `Arc` drop per-instantiated module (previously it was
two).

In the process of doing this I also went ahead and removed the
`Module::new_with_name` API. This has been difficult to support
historically with various variations on the internals of `ModuleInner`
because it involves mutating a `Module` after it's been created. My hope
is that this API is pretty rarely used and/or isn't super important, so
it's ok to remove.

Finally this change removes some internal `Arc` layerings that are no
longer necessary, attempting to use either `T` or `&T` where possible
without dealing with the overhead of an `Arc`.

Closes #4025

* Move back to a `BTreeMap` in `ModuleRegistry`
2022-04-19 15:13:47 -05:00
Alex Crichton
3394c2bb91 Reduce clones of Arc<HostFunc> during instantiation (#4051)
This commit implements an optimization to help improve concurrently
creating instances of a module on many threads simultaneously. One
bottleneck measured here has been the reference count modification on
`Arc<HostFunc>`. Each host function stored within a `Linker<T>` is
wrapped in an `Arc<HostFunc>` structure, and when any of those host
functions are inserted into a store the reference count is incremented.
When the store is dropped the reference count is then decremented.

This ends up meaning that when a module imports N functions it ends up
doing 2N atomic modifications over the lifetime of the instance. For
embeddings where the `Linker<T>` is rarely modified but instances are
frequently created this can be a surprising bottleneck to creating many
instances.

A change implemented here is to optimize the instantiation process when
using an `InstancePre<T>`. An `InstancePre` serves as an opportunity to
take the list of items used to instantiate a module and wrap them all up
in an `Arc<[T]>`. Everything is going to get cloned into a `Store<T>`
anyway so to optimize this the `Arc<[T]>` is cloned at the top-level and
then nothing else is cloned internally. This continues to, however,
preserve a strong reference count for all contained items to prevent
them from being deallocated.
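
A stripped-down illustration of the idea, with placeholder types rather than the real `InstancePre` internals: the pre-resolved import list is wrapped once in an `Arc<[T]>`, so instantiation performs a single reference-count increment no matter how many imports the module has, while every contained item is still kept alive.

```rust
use std::sync::Arc;

/// Placeholder for an `Arc<HostFunc>`-style item stored in a linker.
#[derive(Clone)]
struct Item(Arc<String>);

/// Placeholder `InstancePre`: imports are resolved ahead of time and shared
/// behind one `Arc<[Item]>`.
struct InstancePre {
    items: Arc<[Item]>,
}

impl InstancePre {
    fn new(items: Vec<Item>) -> InstancePre {
        InstancePre { items: items.into() }
    }

    /// One atomic increment per instantiation instead of one per import;
    /// the strong count on the slice keeps all items from being deallocated.
    fn items_for_store(&self) -> Arc<[Item]> {
        self.items.clone()
    }
}
```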

A new variant of `FuncKind` was added for host functions which is
effectively stored via `*mut HostFunc`. This variant is unsafe to create
and manage and has been documented internally.

Performance-wise the overall impact of this change is somewhat minor.
It's already a bit esoteric if this atomic increment and decrement are a
bottleneck due to the number of concurrent instances being created. In
my measurements I've seen that this can reduce instantiation time by up
to 10% for a module that imports two dozen functions. For larger modules
with more imports this is expected to have a larger win.
2022-04-19 14:23:36 -05:00
Piotr Sikora
19fe0878cb c-api: add missing bcrypt.lib dependency in docs. (#4049)
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
2022-04-19 08:58:31 -05:00
Chris Fallin
0af8737ec3 Add support for running the regalloc2 checker. (#4043)
With these fixes, all this PR has to do is instantiate and run the
checker on the `regalloc2::Output`. This is off by default, and is
enabled by setting the `regalloc_checker` Cranelift option.

This restores the old functionality provided by e.g. the
`backtracking_checked` regalloc algorithm setting rather than
`backtracking` when we were still on regalloc.rs.
2022-04-18 14:06:07 -07:00
Alex Crichton
3f3afb455e Remove support for userfaultfd (#4040)
This commit removes support for the `userfaultfd` or "uffd" syscall on
Linux. This support was originally added for users migrating from Lucet
to Wasmtime, but the recent developments of kernel-supported
copy-on-write support for memory initialization wound up being more
appropriate for these use cases than usefaultfd. The main reason for
moving to copy-on-write initialization are:

* The `userfaultfd` feature was never necessarily intended for this
  style of use case with wasm and was susceptible to subtle and rare
  bugs that were extremely difficult to track down. We were never 100%
  certain that there were kernel bugs related to userfaultfd but the
  suspicion never went away.

* Handling faults with userfaultfd was always slow and single-threaded.
  Only one thread could handle faults and traveling to user-space to
  handle faults is inherently slower than handling them all in the
  kernel. The single-threaded aspect in particular presented a
  significant scaling bottleneck for embeddings that want to run many
  wasm instances in parallel.

* One of the major benefits of userfaultfd was lazy initialization of
  wasm linear memory which is also achieved with the copy-on-write
  initialization support we have right now.

* One of the suspected benefits of userfaultfd was less frobbing of the
  kernel vma structures when wasm modules are instantiated. Currently
  the copy-on-write support has a mitigation where we attempt to reuse
  the memory images where possible to avoid changing vma structures.
  When comparing this to userfaultfd's performance it was found that
  kernel modifications of vmas aren't a worrisome bottleneck so
  copy-on-write is suitable for this as well.

Overall there are no remaining benefits that userfaultfd gives that
copy-on-write doesn't, and copy-on-write solves a major downside of
userfaultfd, the scaling issue with a single faulting thread.
Additionally copy-on-write support seems much more robust in terms of
kernel implementation since it's only using standard memory-management
syscalls which are heavily exercised. Finally copy-on-write support
provides a new bonus where read-only memory in WebAssembly can be mapped
directly to the same kernel cache page, even amongst many wasm instances
of the same module, which was never possible with userfaultfd.

In light of all this it's expected that all users of userfaultfd should
migrate to the copy-on-write initialization of Wasmtime (which is
enabled by default).
2022-04-18 12:42:26 -05:00
Alex Crichton
51d82aebfd Store the ValRaw type in little-endian format (#4035)
* Store the `ValRaw` type in little-endian format

This commit changes the internal representation of the `ValRaw` type to
an unconditionally little-endian format instead of its current
native-endian format. The documentation and various accessors here have
been updated as well as the associated trampolines that read `ValRaw`
to always work with little-endian values, converting to the host
endianness as necessary.

The motivation for this change originally comes from the implementation
of the component model that I'm working on. One aspect of the component
model's canonical ABI is how variants are passed to functions as
immediate arguments. For example for a component model function:

```
foo: function(x: expected<i32, f64>)
```

This translates to a core wasm function:

```wasm
(module
  (func (export "foo") (param i32 i64)
    ;; ...
  )
)
```

The first `i32` parameter to the core wasm function is the discriminant
of whether the result is an "ok" or an "err". The second `i64`, however,
is the "join" operation on the `i32` and `f64` payloads. Essentially
these two types are unioned into one type to get passed into the function.

Currently in the implementation of the component model my plan is to
construct a `*mut [ValRaw]` to pass through to WebAssembly, always
invoking component exports through host trampolines. This means that the
implementation for `Result<T, E>` needs to do the correct "join"
operation here when encoding a particular case into the corresponding
`ValRaw`.

I personally found this particularly tricky to do structurally. The
solution that I settled on with fitzgen was that if `ValRaw` was always
stored in a little endian format then we could employ a trick where when
encoding a variant we first set all the `ValRaw` slots to zero, then
encode the associated case we have. Afterwards the `ValRaw` values are
already encoded into the correct format as if they'd been "join"ed.

For example if we were to encode `Ok(1i32)` then this would produce
`ValRaw { i32: 1 }`, which memory-wise is equivalent to `ValRaw { i64: 1 }`
if the other bytes in the `ValRaw` are guaranteed to be zero. Similarly
storing `ValRaw { f64 }` is equivalent to the storage required for
`ValRaw { i64 }` here in the join operation.
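
A hedged sketch of that zero-then-write trick with a stand-in slot type (the real `ValRaw` constructors/accessors are what perform the little-endian conversion): because every slot starts out all-zero and payloads are stored little-endian, writing only one case's payload leaves the memory identical to the "join"ed `i64` representation.

```rust
/// Stand-in for one little-endian `ValRaw` slot: 8 bytes of raw storage.
#[derive(Copy, Clone)]
struct RawSlot([u8; 8]);

impl RawSlot {
    fn zeroed() -> RawSlot {
        RawSlot([0; 8])
    }
    fn set_i32(&mut self, v: i32) {
        self.0[..4].copy_from_slice(&v.to_le_bytes());
    }
    fn set_f64(&mut self, v: f64) {
        self.0 = v.to_le_bytes();
    }
    fn get_i64(self) -> i64 {
        i64::from_le_bytes(self.0)
    }
}

/// Encode `expected<i32, f64>` into two slots: discriminant + joined payload.
fn encode_expected(slots: &mut [RawSlot; 2], value: Result<i32, f64>) {
    *slots = [RawSlot::zeroed(); 2]; // zero everything first ...
    match value {
        Ok(v) => {
            slots[0].set_i32(0); // discriminant: "ok"
            slots[1].set_i32(v); // ... then write only this case's payload
        }
        Err(v) => {
            slots[0].set_i32(1); // discriminant: "err"
            slots[1].set_f64(v);
        }
    }
    // Thanks to zeroing plus little-endian storage, reading the payload slot
    // as an i64 already yields the correct "join"ed value for either case.
    let _joined: i64 = slots[1].get_i64();
}
```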

Note, though, that this equivalence relies on everything being
little-endian. Otherwise the in-memory representations of `ValRaw { i32: 1 }`
and `ValRaw { i64: 1 }` are different.

That motivation is what leads to this change. It's expected that this is
a low-to-zero cost change in the sense that little-endian platforms will
see no change and big-endian platforms are already required to
efficiently byte-swap loads/stores as WebAssembly requires that.
Additionally the `ValRaw` type is an esoteric niche use case primarily
used for accelerating the C API right now, so it's expected that not
many users will have to update for this change.

* Track down some more endianness conversions
2022-04-14 13:09:32 -05:00
Yang Hau
bfae6384aa fix typo (#4030) 2022-04-14 09:35:53 -05:00
Dan Gohman
ade04c92c2 Update to rustix 0.33.6. (#4022)
Relevant to Wasmtime, this fixes undefined references to `utimensat` and
`futimens` on macOS 10.12 and earlier. See bytecodealliance/rustix#157
for details.

It also contains a fix for s390x which isn't currently needed by Wasmtime
itself, but which is needed to make rustix's own testsuite pass on s390x,
which helps people packaging rustix for use in Wasmtime. See
bytecodealliance/rustix#277 for details.
2022-04-13 11:51:57 -05:00
Nick Fitzgerald
54aa720506 fuzzing: Refactor TableOps fuzz generator to allow GC with refs on the stack (#4016)
This makes the generator more similar to `wasm-smith` where it is keeping track
of what is on the stack and making choices about what instructions are valid to
generate given the current stack state. This should in theory allow the
generator to emit GC calls while there are live refs on the stack.

Fixes #3917
2022-04-11 14:33:27 -07:00