Commit Graph

122 Commits

Author SHA1 Message Date
wasmtime-publish
55946704cb Bump Wasmtime to 0.39.0 (#4225)
Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com>
2022-06-06 09:12:47 -05:00
Alex Crichton
3ed6fae7b3 Add trampoline compilation support for lowered imports (#4206)
* Add trampoline compilation support for lowered imports

This commit adds support to the component model implementation for
compiling trampolines suitable for calling host imports. Currently this
is purely just the compilation side of things, modifying the
wasmtime-cranelift crate and additionally filling out a new
`VMComponentOffsets` type (similar to `VMOffsets`). The actual creation
of a `VMComponentContext` is still not performed and will be a
subsequent PR.

Internally, though, some tests are now possible: we can at least assert
that compiling a component and creating everything in-memory doesn't
panic or trip any assertions, so some tests are added here for that as
well.

* Fix some test errors
2022-06-03 10:01:42 -05:00
Alex Crichton
f638b390b6 Refactor some internals of wasmtime-cranelift (#4202)
* Split `wasm_to_host_trampoline` into pieces

In the upcoming component model support for imports my plan is to reuse
some of these pieces but not the entirety of the current
`wasm_to_host_trampoline`. In an effort to make that diff smaller this
commit splits up the function preemptively into pieces to get reused
later.

* Delete unused `for_each_libcall` macros

Came across this when working on the object support for Cranelift.

* Refactor some object creation details

This commit refactors some of the internals around creating an object
file in the wasmtime-cranelift integration. The old `ObjectBuilder` is
now named `ModuleTextBuilder` and is only used to create the text
section rather than other sections too. This helps maintain the
invariant that the unwind information section is placed directly after
the text section without having an odd API for doing this.

Additionally the unwind information creation is moved externally from
the `ModuleTextBuilder` to a standalone structure. This separate
structure is currently in use in the component model work I'm doing
although I may change that to using the `ModuleTextBuilder` instead. In
any case it seemed nice to encapsulate all of the unwinding information
into one standalone structure.

Finally, the insertion of native debug information has been refactored
to happen in a new `append_dwarf` method to keep all the dwarf-related
stuff together in one place as much as possible.

* Fix a doctest

* Fix a typo
2022-06-01 15:39:53 -05:00
Alex Crichton
f4b9020913 Change wasm-to-host trampolines to take the values_vec size (#4192)
* Change wasm-to-host trampolines to take the values_vec size

This commit changes the ABI of wasm-to-host trampolines, which are
only used right now for functions created with `Func::new`, to pass
along the size of the `values_vec` argument. Previously the trampoline
simply received `*mut ValRaw` and assumed that it was the appropriate
size. By receiving a size as well we can thread through `&mut [ValRaw]`
internally instead of `*mut ValRaw`.
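As a rough sketch (with a stand-in `ValRaw` type and a hypothetical shim
name, not Wasmtime's exact internals), the new ABI lets the host side
build a bounds-checked slice from the pointer/length pair:

```rust
#[derive(Clone, Copy)]
#[repr(C)]
struct ValRaw([u8; 16]); // stand-in for the real 16-byte union of wasm values

unsafe extern "C" fn host_shim(values_vec: *mut ValRaw, values_len: usize) {
    // Previously only `values_vec` was passed and the host had to trust its size.
    let values: &mut [ValRaw] = std::slice::from_raw_parts_mut(values_vec, values_len);
    // ... decode arguments from `values` and write results back in place ...
    let _ = values;
}
```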

The original motivation for this is that I'm planning to leverage these
trampolines for the component model for host-defined functions. Out of
an abundance of caution, to make sure that everything lines up, I wanted
to be able to write down asserts about the size received at runtime
compared to the size expected. This overall led me to the desire to
thread this size parameter through on the assumption that it would not
impact performance all that much.

I ran two benchmarks locally from the `call.rs` benchmark and got:

* `sync/no-hook/wasm-to-host - nop - unchecked` - no change
* `sync/no-hook/wasm-to-host - nop-params-and-results - unchecked` - 5%
  slower

This is what I roughly expected in that if nothing actually reads the
new parameter (e.g. no arguments) then threading through the parameter
is effectively otherwise free. Otherwise though accesses to the `ValRaw`
storage is now bounds-checked internally in Wasmtime instead of assuming
it's valid, leading to the 5% slowdown (~9.6ns to ~10.3ns). If this
becomes a performance bottleneck for a particular use case then we should
be fine to remove the bounds checking here or only bounds check in debug
mode; otherwise I plan on leaving this as-is.

Of particular note this also changes the C API for `*_unchecked`
functions where the C callback now receives the size of the array as
well.

* Add docs
2022-06-01 09:05:37 -05:00
Benjamin Bouvier
3a7910ecb0 Reuse Cranelift codegen contexts across wasmtime compilations (#4181) 2022-05-24 11:03:01 +02:00
Benjamin Bouvier
6e828df632 Remove unused SourceLoc in many Mach data structures (#4180)
* Remove unused srcloc in MachReloc

* Remove unused srcloc in MachTrap

* Use `into_iter` on array in bench code to suppress a warning

* Remove unused srcloc in MachCallSite
2022-05-23 09:27:28 -07:00
Alex Crichton
fcf6208750 Initial skeleton of some component model processing (#4005)
* Initial skeleton of some component model processing

This commit is the first of what will likely be many to implement the
component model proposal in Wasmtime. This will be structured as a
series of incremental commits, most of which haven't been written yet.
My hope is to make this incremental and over time to make this easier to
review and easier to test each step in isolation.

Here much of the skeleton of how components are going to work in
Wasmtime is sketched out. This is not a complete implementation of the
component model so it's not all that useful yet, but some things you can
do are:

* Process the type section into a representation amenable for working
  with in Wasmtime.
* Process the module section and register core wasm modules.
* Process the instance section for core wasm modules.
* Process core wasm module imports.
* Process core wasm instance aliasing.
* Ability to compile a component with core wasm embedded.
* Ability to instantiate a component with no imports.
* Ability to get functions from this component.

This is already starting to diverge from the previous module linking
representation where a `Component` will try to avoid unnecessary
metadata about the component and instead internally only have the bare
minimum necessary to instantiate the module. My hope is we can avoid
constructing most of the index spaces during instantiation only for it
to all get thrown away. Additionally I'm predicting that we'll need to
see through processing where possible to know how to generate adapters
and where they are fused.

At this time you can't actually call a component's functions, and that's
the next PR that I would like to make.

* Add tests for the component model support

This commit uses the recently updated wasm-tools crates to add tests for
the component model added in the previous commit. This involved updating
the `wasmtime-wast` crate for component-model changes. Currently the
component support there is quite primitive, but enough to at least
instantiate components and verify the internals of Wasmtime are all
working correctly. Additionally some simple tests for the embedding API
have also been added.
2022-05-20 15:33:18 -05:00
Chris Fallin
0824abbae4 Add a basic alias analysis with redundant-load elim and store-to-load forwarding opts. (#4163)
This PR adds a basic *alias analysis*, and optimizations that use it.
This is a "mid-end optimization": it operates on CLIF, the
machine-independent IR, before lowering occurs.

The alias analysis (or maybe more properly, a sort of memory-value
analysis) determines when it can prove that a particular memory
location is equal to a given SSA value, and when it can, it replaces
loads of that location with that value.

This subsumes two common optimizations:

* Redundant load elimination: when the same memory address is loaded two
  times, and it can be proven that no intervening operations will write
  to that memory, then the second load is *redundant* and its result
  must be the same as the first. We can use the first load's result and
  remove the second load.

* Store-to-load forwarding: when a load can be proven to access exactly
  the memory written by a preceding store, we can replace the load's
  result with the store's data operand, and remove the load.

Both of these optimizations rely on a "last store" analysis that is a
sort of coloring mechanism, split across disjoint categories of abstract
state. The basic idea is that every memory-accessing operation is put
into one of N disjoint categories; it is disallowed for memory to ever
be accessed by an op in one category and later accessed by an op in
another category. (The frontend must ensure this.)

Then, given this, we scan the code and determine, for each
memory-accessing op, which single prior instruction, if any, was the
last store to the same category. This "colors" the instruction: it is, in a sense, a
static name for that version of memory.

This analysis provides an important invariant: if two operations access
memory with the same last-store, then *no other store can alias* in the
time between that last store and these operations. This must-not-alias
property, together with a check that the accessed address is *exactly
the same* (same SSA value and offset) and that other attributes of the
access (type, extension mode) are the same, lets us prove that the
results are the same.

Given last-store info, we scan the instructions and build a table from
"memory location" key (last store, address, offset, type, extension) to
known SSA value stored in that location. A store inserts a new mapping.
A load may also insert a new mapping, if we didn't already have one.
Then when a load occurs and an entry already exists for its "location",
we can reuse the value. This will be either RLE or St-to-Ld depending on
where the value came from.
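A rough sketch of that table, with invented types rather than
Cranelift's actual data structures:

```rust
use std::collections::HashMap;

// Key identifying an abstract memory location: the coloring "last store"
// plus the address, offset, and access type/extension mode.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct MemoryLoc {
    last_store: u32, // instruction that last stored to this category (the "color")
    address: u32,    // SSA value number of the address operand
    offset: i32,
    ty: u8,          // access type and extension mode
}

#[derive(Default)]
struct MemValues {
    known: HashMap<MemoryLoc, u32>, // location -> SSA value known to be stored there
}

impl MemValues {
    // A store defines the value at its location.
    fn record_store(&mut self, loc: MemoryLoc, data: u32) {
        self.known.insert(loc, data);
    }

    // A load either reuses a known value (RLE / store-to-load forwarding)
    // or records its own result for later loads to reuse.
    fn process_load(&mut self, loc: MemoryLoc, result: u32) -> Option<u32> {
        if let Some(&v) = self.known.get(&loc) {
            Some(v)
        } else {
            self.known.insert(loc, result);
            None
        }
    }
}
```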

Note that this *does* work across basic blocks: the last-store analysis
is a full iterative dataflow pass, and we are careful to check dominance
of a previously-defined value before aliasing to it at a potentially
redundant load. So we will do the right thing if we only have a
"partially redundant" load (loaded already but only in one predecessor
block), but we will also correctly reuse a value if there is a store or
load above a loop and a redundant load of that value within the loop, as
long as no potentially-aliasing stores happen within the loop.
2022-05-20 13:19:32 -07:00
Alex Crichton
89ccc56e46 Update the wasm-tools family of crates (#4165)
* Update the wasm-tools family of crates

This commit updates these crates as used by Wasmtime for the recently
published versions to pull in changes necessary to support the component
model. I've split this out from #4005 to make it clear what's impacted
here and #4005 can simply rebase on top of this to pick up the necessary
changes.

* More test fixes
2022-05-19 14:13:04 -05:00
Saúl Cabrera
52524d258c Expose TrapCode::Interrupt on epoch based interruption (#4105) 2022-05-10 10:27:30 -05:00
wasmtime-publish
9a6854456d Bump Wasmtime to 0.38.0 (#4103)
Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com>
2022-05-05 13:43:02 -05:00
Alex Crichton
871a9d93f2 Update some dependencies in Cargo.lock (#4081)
* Run a `cargo update` over our dependencies

This'll notably fix a `cargo audit` error where we have a pinned version
of the `regex` crate which has a CVE assigned to it.

* Update to `object` and `hashbrown` crates

Prune some duplicate versions showing up from the previous `cargo update`
2022-04-28 11:12:58 -05:00
Alex Crichton
51d82aebfd Store the ValRaw type in little-endian format (#4035)
* Store the `ValRaw` type in little-endian format

This commit changes the internal representation of the `ValRaw` type to
an unconditionally little-endian format instead of its current
native-endian format. The documentation and various accessors here have
been updated as well as the associated trampolines that read `ValRaw`
to always work with little-endian values, converting to the host
endianness as necessary.

The motivation for this change originally comes from the implementation
of the component model that I'm working on. One aspect of the component
model's canonical ABI is how variants are passed to functions as
immediate arguments. For example for a component model function:

```
foo: function(x: expected<i32, f64>)
```

This translates to a core wasm function:

```wasm
(module
  (func (export "foo") (param i32 i64)
    ;; ...
  )
)
```

The first `i32` parameter to the core wasm function is the discriminant
of whether the result is an "ok" or an "err". The second `i64`, however,
is the "join" operation on the `i32` and `f64` payloads. Essentially
these two types are unioned into one type to get passed into the function.

Currently in the implementation of the component model my plan is to
construct a `*mut [ValRaw]` to pass through to WebAssembly, always
invoking component exports through host trampolines. This means that the
implementation for `Result<T, E>` needs to do the correct "join"
operation here when encoding a particular case into the corresponding
`ValRaw`.

I personally found this particularly tricky to do structurally. The
solution that I settled on with fitzgen was that if `ValRaw` was always
stored in a little-endian format then we could employ a trick: when
encoding a variant we first set all the `ValRaw` slots to zero and then
encode the particular case we have. Afterwards the `ValRaw` values are
already in the correct format, as if they'd been "join"ed.

For example if we were to encode `Ok(1i32)` then this would produce
`ValRaw { i32: 1 }`, which memory-wise is equivalent to `ValRaw { i64: 1 }`
if the other bytes in the `ValRaw` are guaranteed to be zero. Similarly
storing `ValRaw { f64 }` is equivalent to the storage required for
`ValRaw { i64 }` here in the join operation.

Note, though, that this equivalence relies on everything being
little-endian. Otherwise the in-memory representations of `ValRaw { i32: 1 }`
and `ValRaw { i64: 1 }` are different.
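A small illustration of that equivalence with a stand-in slot type (not
the real `ValRaw` definition): zero the slot, write the payload
little-endian, and the joined `i64` read comes out right.

```rust
#[derive(Clone, Copy)]
struct RawSlot([u8; 8]); // stand-in for one ValRaw payload slot

impl RawSlot {
    fn zeroed() -> RawSlot {
        RawSlot([0; 8])
    }
    fn set_i32(&mut self, x: i32) {
        // Always stored little-endian, regardless of the host's endianness.
        self.0[..4].copy_from_slice(&x.to_le_bytes());
    }
    fn get_i64(&self) -> i64 {
        i64::from_le_bytes(self.0)
    }
}

fn main() {
    // Encoding the payload of `Ok(1i32)`: zero first, then write the i32 case.
    let mut slot = RawSlot::zeroed();
    slot.set_i32(1);
    // Reading the slot as the joined i64 yields 1, as the canonical ABI expects.
    assert_eq!(slot.get_i64(), 1);
}
```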

That motivation is what leads to this change. It's expected that this is
a low-to-zero cost change in the sense that little-endian platforms will
see no change and big-endian platforms are already required to
efficiently byte-swap loads/stores as WebAssembly requires that.
Additionally the `ValRaw` type is an esoteric niche use case primarily
used for accelerating the C API right now, so it's expected that not
many users will have to update for this change.

* Track down some more endianness conversions
2022-04-14 13:09:32 -05:00
Alex Crichton
d147802d51 Update wasm-tools crates (#3997)
* Update wasm-tools crates

This commit updates the wasm-tools family of crates as used in Wasmtime.
Notably this brings in the update which removes module linking support
as well as a number of internal refactorings around names and such
within wasmparser itself. This updates all of the wasm translation
support which binds to wasmparser as appropriate.

Other crates all had API-compatible changes for at least what Wasmtime
used so no further changes were necessary beyond updating version
requirements.

* Update a test expectation
2022-04-05 14:32:33 -05:00
wasmtime-publish
78a595ac88 Bump Wasmtime to 0.37.0 (#3994)
Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com>
2022-04-05 09:24:28 -05:00
Alex Crichton
c89dc55108 Add a two-week delay to Wasmtime's release process (#3955)
* Bump to 0.36.0

* Add a two-week delay to Wasmtime's release process

This commit is a proposal to update Wasmtime's release process with a
two-week delay from branching a release until it's actually officially
released. We've had two issues lately that came up which led to this proposal:

* In #3915 it was realized that changes just before the 0.35.0 release
  weren't enough for an embedding use case, but the PR didn't meet the
  expectations for a full patch release.

* At Fastly we were about to start rolling out a new version of Wasmtime
  when over the weekend the fuzz bug #3951 was found. This led to the
  desire internally to have a "must have been fuzzed for this long"
  period of time for Wasmtime changes which we felt were better
  reflected in the release process itself rather than something about
  Fastly's own integration with Wasmtime.

This commit updates the automation for releases to unconditionally
create a `release-X.Y.Z` branch on the 5th of every month. The actual
release from this branch is then performed on the 20th of every month,
roughly two weeks later. This should provide a period of time to ensure
that all changes in a release are fuzzed for at least two weeks and
avoid any further surprises. This should also help with any last-minute
changes made just before a release if they need tweaking since
backporting to a not-yet-released branch is much easier.

Overall there are some new properties about Wasmtime with this proposal
as well:

* The `main` branch will always have a section in `RELEASES.md` which is
  listed as "Unreleased" for us to fill out.
* The `main` branch will always be a version ahead of the latest
release. For example it will be bumped pre-emptively as part of the
  release process on the 5th where if `release-2.0.0` was created then
  the `main` branch will have 3.0.0 Wasmtime.
* Dates for major versions are automatically updated in the
  `RELEASES.md` notes.

The associated documentation for our release process is updated and the
various scripts should all be updated now as well with this commit.

* Add notes on a security patch

* Clarify security fixes shouldn't be previewed early on CI
2022-04-01 13:11:10 -05:00
Alex Crichton
d1d10dc8da Refactor the TypeTables type (#3971)
* Remove duplicate `TypeTables` type

This was once needed historically but it is no longer needed.

* Make the internals of `TypeTables` private

Instead of reaching internally for the `wasm_signatures` map an `Index`
implementation now exists to indirect accesses through the type of the
index being accessed. For the component model this table of types will
grow a number of other tables and this'll assist in consuming sites not
having to worry so much about which map they're reaching into.
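A minimal sketch of the indexing pattern described (the exact
definitions here are assumptions; only the `wasm_signatures` map and the
`Index` implementation are named in this commit):

```rust
use std::ops::Index;

struct WasmFuncType; // stand-in for the real signature type

#[derive(Clone, Copy)]
struct SignatureIndex(u32);

struct TypeTables {
    wasm_signatures: Vec<WasmFuncType>, // now private; reached only via indexing
}

impl Index<SignatureIndex> for TypeTables {
    type Output = WasmFuncType;
    fn index(&self, idx: SignatureIndex) -> &WasmFuncType {
        &self.wasm_signatures[idx.0 as usize]
    }
}
```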
2022-03-30 13:51:25 -05:00
Alex Crichton
3f9bff17c8 Support disabling backtraces at compile time (#3932)
* Support disabling backtraces at compile time

This commit adds support to Wasmtime to disable, at compile time, the
gathering of backtraces on traps. The `wasmtime` crate now sports a
`wasm-backtrace` feature which, when disabled, will mean that backtraces
are never collected at runtime nor are unwinding tables inserted
into compiled objects.

The motivation for this commit stems from the fact that generating a
backtrace is quite a slow operation. Currently backtrace generation is
done with libunwind and `_Unwind_Backtrace` typically found in glibc or
other system libraries. When thousands of modules are loaded into the
same process though this means that the initial backtrace can take
nearly half a second and all subsequent backtraces can take upwards of
hundreds of milliseconds. Relative to all other operations in Wasmtime
this is extremely expensive at this time. In the future we'd like to
implement a more performant backtrace scheme but such an implementation
would require coordination with Cranelift and is a big chunk of work
that may take some time, so in the meantime if embedders don't need a
backtrace they can still use this option to disable backtraces at
compile time and avoid the performance pitfalls of collecting
backtraces.

In general I originally tried to make this a runtime configuration
option but ended up opting for a compile-time option because `Trap::new`
otherwise has no arguments and always captures a backtrace. By making
this a compile-time option it was possible to configure, statically, the
behavior of `Trap::new`. Additionally I also tried to minimize the
amount of `#[cfg]` necessary by largely only having it at the producer
and consumer sites.

Also a noteworthy restriction of this implementation is that if
backtrace support is disabled at compile time then reference types
support will be unconditionally disabled at runtime. With backtrace
support disabled there's no way to trace the stack of wasm frames which
means that GC can't happen given our current implementation.

* Always enable backtraces for the C API
2022-03-16 09:18:16 -05:00
Alex Crichton
c22033bf93 Delete historical interruptable support in Wasmtime (#3925)
* Delete historical interruptable support in Wasmtime

This commit removes the `Config::interruptable` configuration along with
the `InterruptHandle` type from the `wasmtime` crate. The original
support for adding interruption to WebAssembly was added pretty early on
in the history of Wasmtime when there was no other method to prevent an
infinite loop from the host. Nowadays, however, there are alternative
methods for interruption such as fuel or epoch-based interruption.

One of the major downsides of `Config::interruptable` is that even when
it's not enabled it forces an atomic swap to happen when entering
WebAssembly code. This technically could be a non-atomic swap if the
configuration option isn't enabled but that produces even more branch-y
code on entry into WebAssembly which is already something we try to
optimize. Calling into WebAssembly is on the order of dozens of
nanoseconds at this time and an atomic swap, even uncontended, can add
up to 5ns on some platforms.

The main goal of this PR is to remove this atomic swap on entry into
WebAssembly. This is done by removing the `Config::interruptable` field
entirely, moving all existing consumers to epochs instead which are
suitable for the same purposes. This means that the stack overflow check
is no longer entangled with the interruption check and perhaps one day
we could continue to optimize that further as well.

Some consequences of this change are:

* Epochs are now the only method of remote-thread interruption.
* There are no more Wasmtime traps that produce the `Interrupted` trap
  code, although we may wish to move future traps to this so I left it
  in place.
* The C API support for interrupt handles was also removed and bindings
  for epoch methods were added.
* Function-entry checks for interruption are a tiny bit less efficient
  since one check is performed for the stack limit and a second is
  performed for the epoch as opposed to the `Config::interruptable`
  style of bundling the stack limit and the interrupt check in one. It's
  expected though that this is likely to not really be measurable.
* The old `VMInterrupts` structure is renamed to `VMRuntimeLimits`.
2022-03-14 15:25:11 -05:00
Alex Crichton
884ca1f75b Remove more dead relocation handling code (#3924)
Forgotten from #3905 I now realized.
2022-03-14 12:29:45 -05:00
Alex Crichton
4d404c90b4 Ensure functions are aligned properly on AArch64 (#3908)
Previously (as in an hour ago) #3905 landed a new ability for fuzzing to
arbitrarily insert padding between functions. Running some fuzzers
locally though this instantly hit a lot of problems on AArch64 because
the arbitrary padding isn't aligned to 4 bytes like all other functions
are. To fix this issue appending functions now correctly aligns the
output as appropriate for the platform. The alignment argument for
appending was switched to `None` where `None` means "use the platform
default" and otherwise and explicit alignment can be specified for
inserting other data (like arbitrary padding or Windows unwind tables).
2022-03-09 15:45:30 -06:00
Alex Crichton
f21aa98ccb Fuzz-code-coverage motivated improvements (#3905)
* fuzz: Fuzz padding between compiled functions

This commit hooks the custom
`wasmtime_linkopt_padding_between_functions` configuration option for
the Cranelift compiler into the fuzz configuration, enabling us to ensure
that randomly inserting a moderate amount of padding between functions
shouldn't tamper with any results.

* fuzz: Fuzz the `Config::generate_address_map` option

This commit adds fuzz configuration where `generate_address_map` is
either enabled or disabled, unlike how it's always enabled for fuzzing
today.

* Remove unnecessary handling of relocations

This commit removes a number of bits and pieces all related to handling
relocations in JIT code generated by Wasmtime. None of this is necessary
nowadays that the "old backend" has been removed (quite some time ago)
and relocations are no longer expected to be in the JIT code at all.
Additionally with the minimum x86_64 features required to run wasm code
it should be expected that no libcalls are required either for
Wasmtime-based JIT code.
2022-03-09 10:58:27 -08:00
wasmtime-publish
9137b4a50e Bump Wasmtime to 0.35.0 (#3885)
[automatically-tag-and-release-this-commit]

Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com>
2022-03-07 15:18:34 -06:00
Alex Crichton
2a6969d2bd Shrink the size of the anyfunc table in VMContext (#3850)
* Shrink the size of the anyfunc table in `VMContext`

This commit shrinks the size of the `VMCallerCheckedAnyfunc` table
allocated into a `VMContext` to be the size of the number of "escaped"
functions in a module rather than the number of functions in a module.
Escaped functions include exports, table elements, etc, and are
typically an order of magnitude smaller than the number of functions in
general. This should greatly shrink the `VMContext` for some modules;
while we aren't necessarily having any problems with its size today,
this should help avoid problems in the future.

The original motivation for this was that this came up during the recent
lazy-table-initialization work and while it no longer has a direct
performance benefit since tables aren't initialized at all on
instantiation it should still improve long-running instances
theoretically with smaller `VMContext` allocations as well as better
locality between anyfuncs.

* Fix some tests

* Remove redundant hash set

* Use a helper for pushing function type information

* Use a more descriptive `is_escaping` method

* Clarify a comment

* Fix condition
2022-02-28 10:11:04 -06:00
Nick Fitzgerald
bad9a35418 wasm-mutate fuzz targets (#3836)
* fuzzing: Add a custom mutator based on `wasm-mutate`

* fuzz: Add a version of the `compile` fuzz target that uses `wasm-mutate`

* Update `wasmparser` dependencies
2022-02-23 12:14:11 -08:00
bjorn3
bbd52772de Make VMOffset calculation more readable (#3793)
* Fix typo

* Move vmoffset field size and field name together

The previous code was quite confusing about what applied to which field.
The new code also makes it easier to move fields around and insert and
delete fields.

* Move builtin_functions before all variable sized fields

This allows the offset to be calculated at compile time

* Add cadd and cmul convenience functions

* Remove comment

* Change fields! syntax as per review

* Add implicit u32::from to fields!
2022-02-22 09:48:53 -06:00
Chris Fallin
1c014d129a Cranelift: ensure ISA level needed for SIMD is present when SIMD is enabled. (#3816)
Addresses #3809: when we are asked to create a Cranelift backend with
shared flags that indicate support for SIMD, we should check that the
ISA level needed for our SIMD lowerings is present.
2022-02-16 17:29:30 -08:00
Alex Crichton
b438617e12 Further minor optimizations to instantiation (#3791)
* Shrink the size of `FuncData`

Before this commit on a 64-bit system the `FuncData` type had a size of
88 bytes and after this commit it has a size of 32 bytes. A `FuncData`
is required for all host functions in a store, including those inserted
from a `Linker` into a store used during linking. This means that
instantiation ends up creating a nontrivial number of these types and
pushing them into the store. Looking at some profiles there were some
surprisingly expensive movements of `FuncData` from the stack to a
vector for moves-by-value generated by Rust. Shrinking this type enables
more efficient code to be generated and additionally means less storage
is needed in a store's function array.

For instantiating the spidermonkey and rustpython modules this improves
instantiation by 10% since they each import a fair number of host
functions and the speedup here is relative to the number of items
imported.

* Use `ptr::copy_nonoverlapping` during initialization

Previously `ptr::copy` was used for copying imports into place which
translates to `memmove`, but `ptr::copy_nonoverlapping` can be used here
since it's statically known these areas don't overlap. While this
doesn't end up having a performance difference it's something I kept
noticing while looking at the disassembly of `initialize_vmcontext` so I
figured I'd go ahead and implement it.

* Indirect shared signature ids in the VMContext

This commit is a small improvement for the instantiation time of modules
by avoiding copying a list of `VMSharedSignatureIndex` entries into each
`VMContext`, instead building one inside of a module and sharing that
amongst all instances. This involves less lookups at instantiation time
and less movement of data during instantiation. The downside is that
type-checks on `call_indirect` now involve an additional load, but I'm
assuming that these are somewhat pessimized enough as-is that the
runtime impact won't be much there.

For instantiation performance this is a 5-10% win with
rustpython/spidermonkey instantiation. This should also reduce the size of
each `VMContext` for an instantiation since signatures are no longer
stored inline but shared amongst all instances with one module.

Note that one subtle change here is that the array of
`VMSharedSignatureIndex` was previously indexed by `TypeIndex`, and now
it's indexed by `SignatureIndex` which is a deduplicated form of
`TypeIndex`. This is done because we already had a list of those lying
around in `Module`, so it was easier to reuse that than to build a
separate array and store it somewhere.

* Reserve space in `Store<T>` with `InstancePre`

This commit updates the instantiation process to reserve space in a
`Store<T>` for the functions that an `InstancePre<T>`, as part of
instantiation, will insert into it. Using an `InstancePre<T>` to
instantiate allows pre-computing the number of host functions that will
be inserted into a store, and by pre-reserving space we can avoid costly
reallocations during instantiation by ensuring the function vector has
enough space to fit everything during the instantiation process.

Overall this makes instantiation of rustpython/spidermonkey about 8%
faster locally.

* Fix tests

* Use checked arithmetic
2022-02-11 09:55:08 -06:00
Alex Crichton
c0c368d151 Use mmap'd *.cwasm as a source for memory initialization images (#3787)
* Skip memfd creation with precompiled modules

This commit updates the memfd support internally to not actually use a
memfd if a compiled module originally came from disk via the
`wasmtime::Module::deserialize_file` API. In this situation we already
have a file descriptor open and there's no need to copy a module's heap
image to a new file descriptor.
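A short usage sketch of that path (the file name is illustrative and
error handling via the `anyhow` crate is assumed):

```rust
use wasmtime::{Engine, Module};

fn load_precompiled(engine: &Engine) -> anyhow::Result<Module> {
    // Unsafe because the artifact must be trusted: it has to come from a
    // compatible, previous `Module::serialize` / `wasmtime compile` run.
    let module = unsafe { Module::deserialize_file(engine, "app.cwasm")? };
    Ok(module)
}
```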

To facilitate a new source of `mmap` the currently-memfd-specific-logic
of creating a heap image is generalized to a new form of
`MemoryInitialization` which is attempted for all modules at
module-compile-time. This means that the serialized artifact to disk
will have the memory image in its entirety waiting for us. Furthermore
the memory image is ensured to be padded and aligned carefully to the
target system's page size, notably meaning that the data section in the
final object file is page-aligned and the size of the data section is
also page aligned.

This means that when a precompiled module is mapped from disk we can
reuse the underlying `File` to mmap all initial memory images. This
means that the offset-within-the-memory-mapped-file can differ for
memfd-vs-not, but that's just another piece of state to track in the
memfd implementation.

In the limit this waters down the term "memfd" for this technique of
quickly initializing memory because we no longer use memfd
unconditionally (only when the backing file isn't available).
This does however open up an avenue in the future to porting this
support to other OSes because while `memfd_create` is Linux-specific
both macOS and Windows support mapping a file with copy-on-write. This
porting isn't done in this PR and is left for a future refactoring.

Closes #3758

* Enable "memfd" support on all unix systems

Cordon off the Linux-specific bits and enable the memfd support to
compile and run on platforms like macOS which have a Linux-like `mmap`.
This only works if a module is mapped from a precompiled module file on
disk, but that's better than not supporting it at all!

* Fix linux compile

* Use `Arc<File>` instead of `MmapVecFileBacking`

* Use a named struct instead of mysterious tuples

* Comment about unsafety in `Module::deserialize_file`

* Fix tests

* Fix uffd compile

* Always align data segments

No need to have conditional alignment since their sizes are all aligned
anyway

* Update comment in build.rs

* Use rustix, not `region`

* Fix some confusing logic/names around memory indexes

These functions all work with memory indexes, not specifically defined
memory indexes.
2022-02-10 15:40:40 -06:00
Alex Crichton
520a7f26d7 Move function names out of Module (#3789)
* Move function names out of `Module`

This commit moves function names in a module out of the
`wasmtime_environ::Module` type and into separate sections stored in the
final compiled artifact. Spurred on by #3787 to look at module load
times I noticed that a huge amount of time was spent in deserializing
this map. The `spidermonkey.wasm` file, for example, has a 3MB name
section which is a lot of unnecessary data to deserialize at module load
time.

The names of functions are now split out into their own dedicated
section of the compiled artifact and metadata about them is stored in a
more compact format at runtime by avoiding a `BTreeMap` and instead
using a sorted array. Overall this improves deserialize times by up to
80% for modules with large name sections since the name section is no
longer deserialized at load time and it's lazily paged in as names are
actually referenced.
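A minimal sketch of the lookup this enables (the layout is invented for
illustration): a binary search over a sorted array of entries pointing
into the lazily-mapped name section.

```rust
struct NameEntry {
    func_index: u32,
    offset: u32, // byte offset of the name within the name section
    len: u32,
}

// `entries` is sorted by `func_index`; `section` is the raw, mmap'd name data.
fn func_name<'a>(entries: &[NameEntry], section: &'a [u8], func: u32) -> Option<&'a str> {
    let i = entries.binary_search_by_key(&func, |e| e.func_index).ok()?;
    let e = &entries[i];
    let bytes = section.get(e.offset as usize..)?.get(..e.len as usize)?;
    std::str::from_utf8(bytes).ok()
}
```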

* Fix a typo

* Fix compiled module determinism

We need to sort not only afterwards but also first to ensure the data of
the name section is consistent.
2022-02-10 14:34:48 -06:00
Chris Fallin
39a52ceb4f Implement lazy funcref table and anyfunc initialization. (#3733)
During instance initialization, we build two sorts of arrays eagerly:

- We create an "anyfunc" (a `VMCallerCheckedAnyfunc`) for every function
  in an instance.

- We initialize every element of a funcref table with an initializer to
  a pointer to one of these anyfuncs.

Most instances will not touch (via call_indirect or table.get) all
funcref table elements. And most anyfuncs will never be referenced,
because most functions are never placed in tables or used with
`ref.func`. Thus, both of these initialization tasks are quite wasteful.
Profiling shows that a significant fraction of the remaining
instance-initialization time after our other recent optimizations is
going into these two tasks.

This PR implements two basic ideas:

- The anyfunc array can be lazily initialized as long as we retain the
  information needed to do so. For now, in this PR, we just recreate the
  anyfunc whenever a pointer is taken to it, because doing so is fast
  enough; in the future we could keep some state to know whether the
  anyfunc has been written yet and skip this work if redundant.

  This technique allows us to leave the anyfunc array as uninitialized
  memory, which can be a significant savings. Filling it with
  initialized anyfuncs is very expensive, but even zeroing it is
  expensive: e.g. in a large module, it can be >500KB.

- A funcref table can be lazily initialized as long as we retain a link
  to its corresponding instance and function index for each element. A
  zero in a table element means "uninitialized", and a slowpath does the
  initialization.

Funcref tables are a little tricky because funcrefs can be null. We need
to distinguish "element was initially non-null, but user stored explicit
null later" from "element never touched" (ie the lazy init should not
blow away an explicitly stored null). We solve this by stealing the LSB
from every funcref (anyfunc pointer): when the LSB is set, the funcref
is initialized and we don't hit the lazy-init slowpath. We insert the
bit on storing to the table and mask it off after loading.
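A tiny sketch of that bit-stealing scheme (the helper names are
hypothetical, not the actual runtime functions):

```rust
const FUNCREF_INIT_BIT: usize = 1;

// Called when storing a funcref into the table: even a genuine null (0) gets
// the bit set, so "explicitly stored null" (1) differs from "never touched" (0).
fn tag_for_store(anyfunc_ptr: usize) -> usize {
    anyfunc_ptr | FUNCREF_INIT_BIT
}

// Called after loading a table slot: a raw 0 means the lazy-init slowpath must
// run; otherwise mask the bit off to recover the real anyfunc pointer.
fn untag_after_load(slot: usize) -> Option<usize> {
    if slot & FUNCREF_INIT_BIT == 0 {
        None
    } else {
        Some(slot & !FUNCREF_INIT_BIT)
    }
}
```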

We do have to set up a precomputed array of `FuncIndex`s for the table
in order for this to work. We do this as part of the module compilation.

This PR also refactors the way that the runtime crate gains access to
information computed during module compilation.

Performance effect measured with in-tree benches/instantiation.rs, using
SpiderMonkey built for WASI, and with memfd enabled:

```
BEFORE:

sequential/default/spidermonkey.wasm
                        time:   [68.569 us 68.696 us 68.856 us]
sequential/pooling/spidermonkey.wasm
                        time:   [69.406 us 69.435 us 69.465 us]

parallel/default/spidermonkey.wasm: with 1 background thread
                        time:   [69.444 us 69.470 us 69.497 us]
parallel/default/spidermonkey.wasm: with 16 background threads
                        time:   [183.72 us 184.31 us 184.89 us]
parallel/pooling/spidermonkey.wasm: with 1 background thread
                        time:   [69.018 us 69.070 us 69.136 us]
parallel/pooling/spidermonkey.wasm: with 16 background threads
                        time:   [326.81 us 337.32 us 347.01 us]

WITH THIS PR:

sequential/default/spidermonkey.wasm
                        time:   [6.7821 us 6.8096 us 6.8397 us]
                        change: [-90.245% -90.193% -90.142%] (p = 0.00 < 0.05)
                        Performance has improved.
sequential/pooling/spidermonkey.wasm
                        time:   [3.0410 us 3.0558 us 3.0724 us]
                        change: [-95.566% -95.552% -95.537%] (p = 0.00 < 0.05)
                        Performance has improved.

parallel/default/spidermonkey.wasm: with 1 background thread
                        time:   [7.2643 us 7.2689 us 7.2735 us]
                        change: [-89.541% -89.533% -89.525%] (p = 0.00 < 0.05)
                        Performance has improved.
parallel/default/spidermonkey.wasm: with 16 background threads
                        time:   [147.36 us 148.99 us 150.74 us]
                        change: [-18.997% -18.081% -17.285%] (p = 0.00 < 0.05)
                        Performance has improved.
parallel/pooling/spidermonkey.wasm: with 1 background thread
                        time:   [3.1009 us 3.1021 us 3.1033 us]
                        change: [-95.517% -95.511% -95.506%] (p = 0.00 < 0.05)
                        Performance has improved.
parallel/pooling/spidermonkey.wasm: with 16 background threads
                        time:   [49.449 us 50.475 us 51.540 us]
                        change: [-85.423% -84.964% -84.465%] (p = 0.00 < 0.05)
                        Performance has improved.
```

So an improvement of something like 80-95% for a very large module (7420
functions in its one funcref table, 31928 functions total).
2022-02-09 13:56:53 -08:00
wasmtime-publish
39b88e4e9e Release Wasmtime 0.34.0 (#3768)
* Bump Wasmtime to 0.34.0

[automatically-tag-and-release-this-commit]

* Add release notes for 0.34.0

* Update release date to today

Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
2022-02-07 19:16:26 -06:00
Alex Crichton
65486a0680 Update wasm-tools crates
Nothing major here, just a routine update with a few extra things to
handle here-and-there.
2022-02-02 09:50:08 -08:00
Alex Crichton
a25f7bdba5 Don't copy VMBuiltinFunctionsArray into each VMContext (#3741)
* Don't copy `VMBuiltinFunctionsArray` into each `VMContext`

This is another PR along the lines of "let's squeeze all possible
performance we can out of instantiation". Before this PR we would copy,
by value, the contents of `VMBuiltinFunctionsArray` into each
`VMContext` allocated. This array of function pointers is modestly-sized
but growing over time as we add various intrinsics. Additionally it's
the exact same for all `VMContext` allocations.

This PR attempts to speed up instantiation slightly by instead storing
an indirection to the function array. This means that calling a builtin
intrinsic is a tad bit slower since it requires two loads instead of one
(one to get the base pointer, another to get the actual address).
Otherwise though `VMContext` initialization is now simply setting one
pointer instead of doing a `memcpy` from one location to another.
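Schematically (not the real struct layouts), the change looks like this:

```rust
// Shared table of runtime intrinsics; built once and reused everywhere.
struct VMBuiltinFunctionsArray {
    ptrs: [usize; 32], // stand-in for the array of builtin function pointers
}

// Before: the whole array was copied by value into every VMContext.
struct VMContextBefore {
    builtin_functions: VMBuiltinFunctionsArray,
}

// After: each VMContext stores a single pointer to the shared array, so
// initialization is one pointer store, at the cost of one extra load per call.
struct VMContextAfter {
    builtin_functions: *const VMBuiltinFunctionsArray,
}
```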

With some macro-magic this commit also replaces the previous
implementation with one that's more `const`-friendly which also gets us
compile-time type-checks of libcalls as well as compile-time
verification that all libcalls are defined.

Overall, as with #3739, the win is very modest here. Locally I measured
the time to instantiate an empty module with one function drop from
1.9us to 1.7us. While small at these scales it's still a 10% improvement!

* Review comments
2022-01-28 16:24:34 -06:00
Chris Fallin
8a55b5c563 Add epoch-based interruption for cooperative async timeslicing.
This PR introduces a new way of performing cooperative timeslicing that
is intended to replace the "fuel" mechanism. The tradeoff is that this
mechanism interrupts with less precision: not at deterministic points
where fuel runs out, but rather when the Engine enters a new epoch. The
generated code instrumentation is substantially faster, however, because
it does not need to do as much work as when tracking fuel; it only loads
the global "epoch counter" and does a compare-and-branch at backedges
and function prologues.

This change has been measured as ~twice as fast as fuel-based
timeslicing for some workloads, especially control-flow-intensive
workloads such as the SpiderMonkey JS interpreter on Wasm/WASI.

The intended interface is that the embedder of the `Engine` performs an
`engine.increment_epoch()` call periodically, e.g. once per millisecond.
An async invocation of a Wasm guest on a `Store` can specify a number of
epoch-ticks that are allowed before an async yield back to the
executor's event loop. (The initial amount and automatic "refills" are
configured on the `Store`, just as for fuel.) This call does only
signal-safe work (it increments an `AtomicU64`) so could be invoked from
a periodic signal, or from a thread that wakes up once per period.
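A rough embedding sketch of that flow (API names as described here;
treat the exact signatures as approximate for any given Wasmtime
version):

```rust
use std::{thread, time::Duration};
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() {
    let mut config = Config::new();
    config.epoch_interruption(true); // compile in the epoch checks
    let engine = Engine::new(&config).unwrap();

    // The embedder ticks the epoch periodically, e.g. once per millisecond.
    let ticker = engine.clone();
    thread::spawn(move || loop {
        thread::sleep(Duration::from_millis(1));
        ticker.increment_epoch(); // signal-safe: just bumps an AtomicU64
    });

    let module = Module::new(&engine, "(module)").unwrap();
    let mut store = Store::new(&engine, ());
    // Allow one epoch tick before the guest is interrupted (trap by default;
    // async stores can instead be configured to yield back to the executor).
    store.set_epoch_deadline(1);
    let _instance = Instance::new(&mut store, &module, &[]).unwrap();
}
```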
2022-01-20 13:58:17 -08:00
bjorn3
17021bc77a Extract helper functions 2022-01-12 17:19:34 +01:00
bjorn3
f0e821b9e0 Remove all Sink traits 2022-01-11 19:03:10 +01:00
bjorn3
b803514d55 Remove sink arguments from compile_and_emit
The data can be accessed after the fact using `context.mach_compile_result`.
2022-01-11 18:17:29 +01:00
bjorn3
58c25d9e24 Add text_section_builder method to TargetIsa 2022-01-06 14:39:50 +01:00
wasmtime-publish
8043c1f919 Release Wasmtime 0.33.0 (#3648)
* Bump Wasmtime to 0.33.0

[automatically-tag-and-release-this-commit]

* Update relnotes for 0.33.0

* Wordsmithing relnotes

Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
2022-01-05 13:26:50 -06:00
bjorn3
e98a85e1e2 Make get_mach_backend non-optional 2022-01-04 15:48:19 +01:00
Alex Crichton
f1225dfd93 Add a compilation section to disable address maps (#3598)
* Add a compilation section to disable address maps

This commit adds a new `Config::generate_address_map` compilation
setting which is used to disable emission of the `.wasmtime.addrmap`
section of compiled artifacts. This section is currently around the size
of the entire `.text` section itself unfortunately and for size reasons
may wish to be omitted. Functionality-wise all that is lost is knowing
the precise wasm module offset address of a faulting instruction or of
the instructions in a backtrace. This also means that if the module has
DWARF debugging information available with it, Wasmtime isn't able to produce a
filename and line number in the backtrace.

This option remains enabled by default. This option may not be needed in
the future with #3547 perhaps, but in the meantime it seems reasonable
enough to support a configuration mode where the section is entirely
omitted if the smallest module possible is desired.
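A short usage sketch of the new knob (error handling via the `anyhow`
crate is assumed):

```rust
use wasmtime::{Config, Engine};

fn engine_without_addrmap() -> anyhow::Result<Engine> {
    let mut config = Config::new();
    // Omit the `.wasmtime.addrmap` section from compiled artifacts to shrink
    // them, at the cost of precise wasm offsets in traps and backtraces.
    config.generate_address_map(false);
    Engine::new(&config)
}
```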

* Fix some CI issues

* Update tests/all/traps.rs

Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>

* Do less work in compilation for address maps

But only when disabled

Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
2021-12-13 13:48:05 -06:00
wasmtime-publish
c1c4c59670 Release Wasmtime 0.32.0 (#3589)
* Bump Wasmtime to 0.32.0

[automatically-tag-and-release-this-commit]

* Update release notes for 0.32.0

Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
2021-12-13 13:47:30 -06:00
Alex Crichton
0e90d4b903 Update addr2line and gimli deps (#3580)
Just a routine update, figured it was good to stay close to their most
recent versions
2021-12-01 15:48:36 -06:00
Adam Bratschi-Kaye
12bfbdfaca Skip generating DWARF info for dead code (#3498)
When encountering a subprogram that is dead code (as indicated by the
dead code proposal
https://dwarfstd.org/ShowIssue.php?issue=200609.1), don't generate debug
output for the subprogram or any of its children.
2021-11-08 09:31:04 -06:00
wasmtime-publish
c1a6a0523d Release Wasmtime 0.31.0 (#3489)
* Bump Wasmtime to 0.31.0

[automatically-tag-and-release-this-commit]

* Update 0.31.0 release notes

Co-authored-by: Wasmtime Publish <wasmtime-publish@users.noreply.github.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
2021-10-29 09:09:35 -05:00
Alex Crichton
490d49a768 Adjust dependency directives between crates (#3420)
* Adjust dependency directives between crates

This commit is a preparation for the release process for Wasmtime. The
specific changes here are to delineate which crates are "public", and
all version requirements on non-public crates will now be done with
`=A.B.C` version requirements instead of today's `A.B.C` version
requirements.

The purpose for doing this is to assist with patch releases that might
happen in the future. Patch releases of wasmtime are already required to
not break the APIs of "public" crates, but no such guarantee is given
about "internal" crates. This means that a patch release runs the risk,
for example, of breaking an internal API. In doing so though we would
also need to release a new major version of the internal crate, but we
wouldn't have a gap left in the major-version numbering scheme to do
so. By using `=A.B.C` requirements for internal crates it means we can
safely ignore strict semver-compatibility between releases of internal
crates for patch releases, since the only consumers of the crate will be
the corresponding patch release of the `wasmtime` crate itself (or other
public crates).

The `publish.rs` script has been updated with a check to verify that
dependencies on internal crates are all specified with an `=`
dependency, and dependencies on all public crates are without a `=`
dependency. This will hopefully make it so we don't have to worry about
what to use where, we just let CI tell us what to do. Using this
modification all version dependency declarations have been updated.

Note that some crates were adjusted to simply remove their `version`
requirement in cases where the crate wasn't published anyway (`publish
= false` was specified) or where it's in the `dev-dependencies` section which
doesn't need version specifiers for path dependencies.

* Switch to normal semver deps for cranelift dependencies

These crates will now all be considered "public" where in patch releases
they will be guaranteed to not have breaking changes.
2021-10-26 09:06:03 -05:00
Alex Crichton
e2a724ce18 Update the object crate to 0.27.0 (#3465)
Mostly just keeping us up to date with changes there since we somewhat
heavily rely on it now.
2021-10-20 10:52:06 -05:00
Alex Crichton
9c6884e28d Update the spec reference testsuite submodule (#3450)
* Update the spec reference testsuite submodule

This commit brings in recent updates to the spec test suite. Most of the
changes here were already fixed in `wasmparser` with some tweaks to
esoteric modules, but Wasmtime also gets a bug fix where import
matching for the size of tables/memories is based on the current runtime
size of the table/memory rather than the original type of the
table/memory. This means that during type matching the actual value is
consulted for its size rather than using the minimum size listed in its
type.

* Fix now-missing directories in build script
2021-10-13 16:14:12 -05:00
Alex Crichton
713ce07d35 Add some debug logging for timing in module compiles (#3417)
* Add some debug logging for timing in module compiles

This is sometimes helpful when debugging slow compiles from fuzz bugs or
similar.

* Fix total duration calculation to not double-count
2021-10-11 12:50:15 -05:00