Commit Graph

346 Commits

Chris Fallin
0ff8f6ab20 Make build-config magic use memfd by default. 2022-01-31 22:39:20 -08:00
Chris Fallin
ccfa245261 Optimization: only mprotect the *new* bit of heap, not all of it.
(This was not a correctness bug, but is an obvious performance bug...)
2022-01-31 21:25:40 -08:00
Chris Fallin
982df2f2e5 Review feedback. 2022-01-31 16:40:14 -08:00
Chris Fallin
570dee63f3 Use MemFdSlot in the on-demand allocator as well. 2022-01-31 13:59:51 -08:00
Chris Fallin
3702e81d30 Remove ftruncate-trick for heap growth with memfd backend.
Testing so far with recent Wasmtime has not been able to show the need
for avoiding the process-wide mmap lock in real-world use-cases. As
such, the technique of using an anonymous file and ftruncate() to extend
it seems unnecessary; instead, memfd can always use anonymous zeroed
memory for heap backing where the CoW image is not present, and
mprotect() to extend the heap limit by changing page protections.
2022-01-31 12:53:22 -08:00
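Purely for illustration, a minimal sketch of growing a heap by flipping page protections with `mprotect`, as the two commits above describe, written against the `libc` crate; the names and error handling are made up and this is not Wasmtime's code.

```rust
use std::io::Error;

/// Reserve the maximum slot size up front as inaccessible memory.
unsafe fn reserve_heap_slot(max_size: usize) -> *mut u8 {
    let ptr = libc::mmap(
        std::ptr::null_mut(),
        max_size,
        libc::PROT_NONE, // nothing accessible yet
        libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
        -1,
        0,
    );
    assert_ne!(ptr, libc::MAP_FAILED, "mmap failed: {}", Error::last_os_error());
    ptr.cast()
}

/// Grow the accessible region from `old_len` to `new_len` bytes by changing
/// protections on only the *new* pages; the mapping itself never moves.
unsafe fn grow_heap(base: *mut u8, old_len: usize, new_len: usize) {
    let rc = libc::mprotect(
        base.add(old_len).cast(),
        new_len - old_len,
        libc::PROT_READ | libc::PROT_WRITE,
    );
    assert_eq!(rc, 0, "mprotect failed: {}", Error::last_os_error());
}
```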
Chris Fallin
b73ac83c37 Add a pooling allocator mode based on copy-on-write mappings of memfds.
As first suggested by Jan on the Zulip here [1], a cheap and effective
way to obtain copy-on-write semantics of a "backing image" for a Wasm
memory is to mmap a file with `MAP_PRIVATE`. The `memfd` mechanism
provided by the Linux kernel allows us to create anonymous,
in-memory-only files that we can use for this mapping, so we can
construct the image contents on-the-fly then effectively create a CoW
overlay. Furthermore, and importantly, `madvise(MADV_DONTNEED, ...)`
will discard the CoW overlay, returning the mapping to its original
state.

By itself this is almost enough for a very fast
instantiation-termination loop of the same image over and over,
without changing the address space mapping at all (which is
expensive). The only missing bit is how to implement
heap *growth*. But here memfds can help us again: if we create another
anonymous file and map it where the extended parts of the heap would
go, we can take advantage of the fact that a `mmap()` mapping can
be *larger than the file itself*, with accesses beyond the end
generating a `SIGBUS`, and the fact that we can cheaply resize the
file with `ftruncate`, even after a mapping exists. So we can map the
"heap extension" file once with the maximum memory-slot size and grow
the memfd itself as `memory.grow` operations occur.

The above CoW technique and heap-growth technique together allow us a
fastpath of `madvise()` and `ftruncate()` only when we re-instantiate
the same module over and over, as long as we can reuse the same
slot. This fastpath avoids all whole-process address-space locks in
the Linux kernel, which should mean it is highly scalable. It also
avoids the cost of copying data on read, as the `uffd` heap backend
does when servicing pagefaults; the kernel's own optimized CoW
logic (same as used by all file mmaps) is used instead.

[1] https://bytecodealliance.zulipchat.com/#narrow/stream/206238-general/topic/Copy.20on.20write.20based.20instance.20reuse/near/266657772
2022-01-31 12:53:18 -08:00
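As a rough illustration of the CoW-overlay technique described above (not Wasmtime's actual implementation), assuming Linux and the `libc` crate:

```rust
/// Build the "backing image" once in an anonymous, in-memory-only file and
/// map it copy-on-write.
unsafe fn map_cow_image(image: &[u8]) -> *mut u8 {
    let fd = libc::memfd_create(b"wasm-image\0".as_ptr().cast(), 0);
    assert!(fd >= 0);
    let written = libc::write(fd, image.as_ptr().cast(), image.len());
    assert_eq!(written, image.len() as isize);

    // MAP_PRIVATE gives a copy-on-write overlay: reads see the image, writes
    // land in private pages belonging to this mapping only.
    let ptr = libc::mmap(
        std::ptr::null_mut(),
        image.len(),
        libc::PROT_READ | libc::PROT_WRITE,
        libc::MAP_PRIVATE,
        fd,
        0,
    );
    assert_ne!(ptr, libc::MAP_FAILED);
    ptr.cast()
}

/// Discard the CoW overlay, returning the mapping to the pristine image
/// without changing the address-space layout at all.
unsafe fn reset_to_image(ptr: *mut u8, len: usize) {
    assert_eq!(libc::madvise(ptr.cast(), len, libc::MADV_DONTNEED), 0);
}
```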
Alex Crichton
a25f7bdba5 Don't copy VMBuiltinFunctionsArray into each VMContext (#3741)
* Don't copy `VMBuiltinFunctionsArray` into each `VMContext`

This is another PR along the lines of "let's squeeze all possible
performance we can out of instantiation". Before this PR we would copy,
by value, the contents of `VMBuiltinFunctionsArray` into each
`VMContext` allocated. This array of function pointers is modestly-sized
but growing over time as we add various intrinsics. Additionally it's
the exact same for all `VMContext` allocations.

This PR attempts to speed up instantiation slightly by instead storing
an indirection to the function array. This means that calling a builtin
intrinsic is a tad bit slower since it requires two loads instead of one
(one to get the base pointer, another to get the actual address).
Otherwise though `VMContext` initialization is now simply setting one
pointer instead of doing a `memcpy` from one location to another.

With some macro-magic this commit also replaces the previous
implementation with one that's more `const`-friendly which also gets us
compile-time type-checks of libcalls as well as compile-time
verification that all libcalls are defined.

Overall, as with #3739, the win is very modest here. Locally I measured
a speedup from 1.9us to 1.7us taken to instantiate an empty module with
one function. While small at these scales it's still a 10% improvement!

* Review comments
2022-01-28 16:24:34 -06:00
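A minimal sketch of the indirection being described; the types here are illustrative stand-ins, not the real `VMContext`/`VMBuiltinFunctionsArray` definitions:

```rust
type Libcall = unsafe extern "C" fn();

unsafe extern "C" fn memory32_grow_stub() {}
unsafe extern "C" fn table_grow_stub() {}

// One shared copy of the builtin function-pointer table for the whole process...
static BUILTINS: [Libcall; 2] = [memory32_grow_stub, table_grow_stub];

struct VmContext {
    // ...and each context stores a single pointer to it. Initialization is one
    // pointer store instead of a memcpy of the whole array; calling a builtin
    // now costs two loads (base pointer, then the entry) instead of one.
    builtins: &'static [Libcall; 2],
}

fn init_vmctx() -> VmContext {
    VmContext { builtins: &BUILTINS }
}
```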
Alex Crichton
2f494240f8 Lazily allocate the bump-alloc chunk in the externref table (#3739)
This commit updates the allocation of a `VMExternRefActivationsTable`
structure to perform zero malloc memory allocations. Previously it would
allocate a page-sized `chunk` plus some space in hash sets for future
insertions. The main trick implemented here is that the fast chunk is only
allocated and configured during the slow path of the first GC.

The motivation for this PR is that given our recent work to further
refine and optimize the instantiation process this allocation started to
show up in a nontrivial fashion. Most modules today never touch this
table anyway as almost none of them use reference types, so the time
spent allocating and deallocating the table per-store was largely wasted
time.

Concretely on a microbenchmark this PR speeds up instantiation of a
module with one function by 30%, decreasing the instantiation cost from
1.8us to 1.2us. Overall a pretty minor win but when the instantiation
times we're measuring start being in the single-digit microseconds this
win ends up getting magnified!
2022-01-28 16:10:05 -06:00
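A simplified sketch of the lazy-allocation idea; the real `VMExternRefActivationsTable` is considerably more involved:

```rust
struct ActivationsTable {
    chunk: Vec<usize>, // stand-in for the bump-allocation chunk
    next: usize,
}

impl ActivationsTable {
    /// Creating a table no longer mallocs: an empty Vec does not allocate.
    fn new() -> Self {
        ActivationsTable { chunk: Vec::new(), next: 0 }
    }

    /// Slow path, reached on the first insertion (and whenever the chunk is
    /// full). The real implementation GCs here; afterwards the page-sized
    /// chunk is allocated and configured, paying that cost only once and only
    /// for stores that actually use externrefs.
    fn insert_slow(&mut self, value: usize) {
        if self.chunk.is_empty() {
            self.chunk = vec![0; 4096 / std::mem::size_of::<usize>()];
            self.next = 0;
        }
        self.chunk[self.next] = value;
        self.next += 1;
    }
}
```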
Nick Fitzgerald
19f8d94959 Expand on activations table invariants comment in libcalls.rs 2022-01-28 09:47:05 -08:00
Nick Fitzgerald
cbc6f6071f Fix a debug assertion in externref garbage collections
When we GC, we assert the invariant that all `externref`s we find on the stack
have a corresponding entry in the `VMExternRefActivationsTable`. However, we
also might be in code that is in the process of fixing up this invariant and
adding an entry to the table, but the table's bump chunk is full, and so we do a
GC and then add the entry into the table. This will ultimately maintain our
desired invariant, but there is a moment in time when we are doing the GC where
the invariant is relaxed, which is okay because the reference will be in the
table before we return to Wasm or do anything else. This isn't a possible UAF,
in other words. To make it so that the assertion won't trip, we explicitly
insert the reference into the table *before* we GC, so that the invariant is not
relaxed across a possibly-GCing operation (even though it would be safe in this
particular case).
2022-01-27 14:46:01 -08:00
Dan Gohman
881c19473d Use ptr::cast instead of as casts in several places. (#3507)
`ptr::cast` has the advantage of being unable to silently cast
`*const T` to `*mut T`. This turned up several places that were
performing such casts, which this PR also fixes.
2022-01-21 13:03:17 -08:00
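For example, the difference between the two kinds of cast:

```rust
fn example(p: *const u32) {
    // An `as` cast can silently change mutability along with the pointee type:
    let _oops = p as *mut u8; // compiles, quietly turned *const into *mut

    // `ptr::cast` only changes the pointee type; constness is preserved, so a
    // *const -> *mut conversion has to be written out explicitly.
    let _ok: *const u8 = p.cast::<u8>();
    // let _err: *mut u8 = p.cast::<u8>(); // this would not compile
}
```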
Chris Fallin
8a55b5c563 Add epoch-based interruption for cooperative async timeslicing.
This PR introduces a new way of performing cooperative timeslicing that
is intended to replace the "fuel" mechanism. The tradeoff is that this
mechanism interrupts with less precision: not at deterministic points
where fuel runs out, but rather when the Engine enters a new epoch. The
generated code instrumentation is substantially faster, however, because
it does not need to do as much work as when tracking fuel; it only loads
the global "epoch counter" and does a compare-and-branch at backedges
and function prologues.

This change has been measured as ~twice as fast as fuel-based
timeslicing for some workloads, especially control-flow-intensive
workloads such as the SpiderMonkey JS interpreter on Wasm/WASI.

The intended interface is that the embedder of the `Engine` performs an
`engine.increment_epoch()` call periodically, e.g. once per millisecond.
An async invocation of a Wasm guest on a `Store` can specify a number of
epoch-ticks that are allowed before an async yield back to the
executor's event loop. (The initial amount and automatic "refills" are
configured on the `Store`, just as for fuel.) This call does only
signal-safe work (it increments an `AtomicU64`) so could be invoked from
a periodic signal, or from a thread that wakes up once per period.
2022-01-20 13:58:17 -08:00
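A sketch of how an embedder might drive this, assuming the `wasmtime` and `anyhow` crates; `increment_epoch` is the call named above, while the other `Config`/`Store` method names are taken from current Wasmtime and may differ from this commit:

```rust
use std::time::Duration;
use wasmtime::{Config, Engine, Store};

fn main() -> anyhow::Result<()> {
    // Compile with the epoch check at function prologues and loop backedges.
    let mut config = Config::new();
    config.epoch_interruption(true);
    let engine = Engine::new(&config)?;

    // The embedder ticks the epoch periodically, e.g. once per millisecond.
    // increment_epoch only bumps an AtomicU64, so it is cheap and could also
    // be driven from a periodic signal.
    let ticker = engine.clone();
    std::thread::spawn(move || loop {
        std::thread::sleep(Duration::from_millis(1));
        ticker.increment_epoch();
    });

    // Each store is given a budget of epoch ticks; when it is exhausted the
    // guest traps (or, for async stores, yields back to the executor).
    let mut store: Store<()> = Store::new(&engine, ());
    store.set_epoch_deadline(10);
    // ... instantiate and run guest code against `store` ...
    Ok(())
}
```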
Mrmaxmeier
2afd6900f4 runtime: expose DefaultMemoryCreator (#3670) 2022-01-18 09:17:33 -06:00
Piotr Sikora
642102e699 Fix build with clang on s390x. (#3673)
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
2022-01-10 09:27:27 -06:00
Dan Gohman
ea0cb971fb Update to rustix 0.26.2. (#3521)
This pulls in a fix for Android, where Android's seccomp policy on older
versions is to make `openat2` irrecoverably crash the process, so we have
to do a version check up front rather than relying on `ENOSYS` to
determine if `openat2` is supported.

And it pulls in the fix for the link errors when multiple versions of
rsix/rustix are linked in.

And it has updates for two crate renamings: rsix has been renamed to
rustix, and unsafe-io has been renamed to io-extras.
2021-11-15 10:21:13 -08:00
Peter Huene
58aab85680 Add the pooling-allocator feature.
This commit adds the `pooling-allocator` feature to both the `wasmtime` and
`wasmtime-runtime` crates.

The feature controls whether or not the pooling allocator implementation is
built into the runtime and exposed as a supported instance allocation strategy
in the wasmtime API.

The feature is on by default for the `wasmtime` crate.

Closes #3513.
2021-11-10 13:25:55 -08:00
Alex Crichton
6bcee7f5f7 Add a configuration option to force "static" memories (#3503)
* Add a configuration option to force "static" memories

In poking around at some things earlier today I realized that one
configuration option for memories we haven't exposed from embeddings
like the CLI is to forcibly limit the size of memory growth and force
using a static memory style. This means that the CLI, for example, can't
limit memory growth by default and memories are only limited in size by
what the OS can give and the wasm's own memory type. This configuration
option means that the CLI can artificially limit the size of wasm linear
memories.

Another motivation for this is testing out various codegen ramifications
of static/dynamic memories. For example, this is the only way to force a
static memory for a wasm64 memory with no maximum size listed, which
otherwise defaults to dynamic.

* Review feedback
2021-11-03 16:50:49 -05:00
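A sketch of using such a knob; `static_memory_forced` and `static_memory_maximum_size` are the names in current Wasmtime and are assumed to correspond to the option this PR adds:

```rust
use wasmtime::{Config, Engine};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Force every linear memory to use the "static" style, so growth is capped
    // by the static reservation even when the wasm memory type has no maximum.
    config.static_memory_forced(true);
    // Reserve (and cap growth at) 256 MiB per memory.
    config.static_memory_maximum_size(256 << 20);
    let _engine = Engine::new(&config)?;
    Ok(())
}
```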
Alex Crichton
2f2c5231b4 Add Alex's solution for null handling in TlsRestore 2021-10-25 10:04:31 -07:00
Pat Hickey
b00d811e83 code review 2021-10-22 10:43:46 -07:00
Pat Hickey
52542b6c01 mock enough of the store to pass the uffd test 2021-10-22 08:56:13 -07:00
Pat Hickey
efef0769fe make uffd test compile, but not pass 2021-10-22 08:39:00 -07:00
Pat Hickey
0370d5c1a2 code review suggestion 2021-10-21 16:46:31 -07:00
Pat Hickey
a1301f8dae add table_grow_failed 2021-10-21 15:07:40 -07:00
Pat Hickey
5aef8f47c8 catch panic in libcalls for memory and table grow 2021-10-21 12:15:00 -07:00
Pat Hickey
6c70b81ff5 review feedback 2021-10-21 12:10:03 -07:00
Pat Hickey
a5007f318f runtime: use anyhow::Error instead of Box<dyn std::error::Error...> 2021-10-21 12:10:03 -07:00
Pat Hickey
67a6c27e22 pooling needs the store earlier 2021-10-21 12:10:03 -07:00
Pat Hickey
147c8f8ed7 rename 2021-10-21 12:10:03 -07:00
Pat Hickey
18a355e092 give synchronous ResourceLimiter an async alternative 2021-10-21 12:10:03 -07:00
Steve
807619a874 as requested: cargo fmt 2021-10-11 19:57:07 +01:00
Steve
92a10d1ace Added resolve_vmctx_memory function to enable debuggers to resolve sandbox pointers. Required because the sandbox 'this' pointer cannot be resolved by lldb any other way, as lldb expects "this" and "self" to be standard pointers, not sandbox handles. 2021-10-11 09:08:14 +01:00
Alex Crichton
5b3b459ad5 Fix some nightly dead code warnings (#3404)
* Fix some nightly dead code warnings

Looks like the "struct field not used" lint has improved on nightly and
caught a few more instances of fields that were never actually read.

* Fix windows
2021-10-01 14:26:30 -05:00
Dan Gohman
e5ebef1b94 Use empty() instead of NONE with rsix flags types.
`empty()` is provided by all `bitflags` types, so it's more idiomatic
than having `NONE` values.
2021-09-30 08:14:13 -07:00
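For example, with the `bitflags` 1.x macro:

```rust
use bitflags::bitflags;

bitflags! {
    struct OpenFlags: u32 {
        const CREATE = 0b01;
        const APPEND = 0b10;
    }
}

fn main() {
    // Every bitflags type already provides `empty()`, so a bespoke `NONE`
    // constant adds nothing over the built-in idiom.
    let flags = OpenFlags::empty();
    assert!(flags.is_empty());
    assert_eq!(flags | OpenFlags::CREATE, OpenFlags::CREATE);
}
```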
Alex Crichton
bfdbd10a13 Add *_unchecked variants of Func APIs for the C API (#3350)
* Add `*_unchecked` variants of `Func` APIs for the C API

This commit is what is hopefully going to be my last installment within
the saga of optimizing function calls in/out of WebAssembly modules in
the C API. This is yet another alternative approach to #3345 (sorry) but
also contains everything necessary to make the C API fast. As in #3345
the general idea is just moving checks out of the call path in the same
style of `TypedFunc`.

This new strategy takes inspiration from previous attempts and effectively
"just" exposes how we already passed `*mut u128` through
trampolines for arguments/results. This storage format is formalized
through a new `ValRaw` union that is exposed from the `wasmtime` crate.
By doing this it made it relatively easy to expose two new APIs:

* `Func::new_unchecked`
* `Func::call_unchecked`

These are the same as their checked equivalents except that they're
`unsafe` and they work with `*mut ValRaw` rather than safe slices of
`Val`. Working with these eschews type checks and such and requires
callers/embedders to do the right thing.

These two new functions are then exposed via the C API with new
functions, enabling C to have a fast-path of calling/defining functions.
This fast path is akin to `Func::wrap` in Rust, although that API can't
be built in C due to C not having generics in the same way that Rust
has.

As for benchmarks, the ones measured here are:

* `nop` - Call a wasm function from the host that does nothing and
  returns nothing.
* `i64` - Call a wasm function from the host, the wasm function calls a
  host function, and the host function returns an `i64` all the way out to
  the original caller.
* `many` - Call a wasm function from the host, the wasm calls a
   host function with 5 `i32` parameters, and then an `i64` result is
   returned back to the original host caller.
* `i64` host - just the overhead of the wasm calling the host, so the
  wasm calls the host function in a loop.
* `many` host - same as `i64` host, but calling the `many` host function.

All numbers in this table are in nanoseconds, and this is just one
measurement as well so there's bound to be some variation in the precise
numbers here.

| Name      | Rust | C (before) | C (after) |
|-----------|------|------------|-----------|
| nop       | 19   | 112        | 25        |
| i64       | 22   | 207        | 32        |
| many      | 27   | 189        | 34        |
| i64 host  | 2    | 38         | 5         |
| many host | 7    | 75         | 8         |

The main conclusion here is that the C API is significantly faster than
before when using the `*_unchecked` variants of APIs. The Rust
implementation is still the ceiling (or floor I guess?) for performance.
The main reason that C is slower than Rust is that a little bit more has
to travel through memory, whereas on the Rust side of things we can
monomorphize and inline a bit more to get rid of that. Overall though
the costs are way way down from where they were originally and I don't
plan on doing a whole lot more myself at this time. There are various
things I've considered that we theoretically could do, but
implementation-wise I think they'll be much more weighty.

* Tweak `wasmtime_externref_t` API comments
2021-09-24 14:05:45 -05:00
Advance Software
a8467d0824 Exports symbols to be shared with external GDB/JIT debugging interface tools (#3373)
* Exports symbols to be shared with external GDB/JIT debugging interface tools.
Windows O/S specific requirement.

* Moved comments into platform specific compiler directive sections.
2021-09-20 12:33:20 -05:00
Dan Gohman
87ff24a4aa Use __builtin_setjmp instead of sigsetjmp. (#3360)
* Use `__builtin_setjmp` instead of `sigsetjmp`.

Use [`__builtin_setjmp`] instead of `sigsetjmp`, as it is implemented in
the compiler, performed inline, and saves much less state. This speeds up
calls into wasm by about 8% on my machine.

[`__builtin_setjmp`]: https://gcc.gnu.org/onlinedocs/gcc/Nonlocal-Gotos.html

* Add a comment confirming that 5 really is the documented size.

* Add a comment about callee-saved state and __builtin_setjmp.

* On clang on aarch64, use sigsetjmp.

* Fix a stray `#endif`.
2021-09-20 09:14:52 -05:00
Dan Gohman
47490b4383 Use rsix to make system calls in Wasmtime. (#3355)
* Use rsix to make system calls in Wasmtime.

`rsix` is a system call wrapper crate that we use in `wasi-common`,
which can provide the following advantages in the rest of Wasmtime:

 - It eliminates some `unsafe` blocks in Wasmtime's code. There's
   still an `unsafe` block in the library, but this way, the `unsafe`
   is factored out and clearly scoped.

 - And, it makes error handling more consistent, factoring out code for
   checking return values and `io::Error::last_os_error()`, and code that
   does `errno::set_errno(0)`.

This doesn't cover *all* system calls; `rsix` doesn't implement
signal-handling APIs, and this doesn't cover calls made through `std` or
crates like `userfaultfd`, `rand`, and `region`.
2021-09-17 15:28:56 -07:00
Nick Fitzgerald
101998733b Merge pull request from GHSA-v4cp-h94r-m7xf
Fix a use-after-free bug when passing `ExternRef`s to Wasm
2021-09-17 10:27:29 -07:00
Pat Hickey
bb7f58d936 add a hook to ResourceLimiter to detect memory grow failure.
* allow the ResourceLimiter to reject a memory grow before the
memory's own maximum.
* add a hook so a ResourceLimiter can detect any reason that
a memory grow fails, including if the OS denies additional memory
* add tests for this new functionality. I only took the time to
test the OS denial on Linux; it should be possible on macOS
as well, but I don't have a test setup. I have no idea how to
do this on Windows.
2021-09-15 11:50:23 -07:00
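A hypothetical sketch of the shape of these hooks, assuming the `anyhow` crate; the real `wasmtime::ResourceLimiter` trait differs in its details and has changed across versions:

```rust
use anyhow::Error;

trait Limiter {
    /// Approve or reject a grow request, possibly below the memory's own maximum.
    fn memory_growing(&mut self, current: usize, desired: usize) -> bool;
    /// Observe any failed grow, including the OS denying additional memory.
    fn memory_grow_failed(&mut self, error: &Error);
}

struct LogFailures;

impl Limiter for LogFailures {
    fn memory_growing(&mut self, _current: usize, desired: usize) -> bool {
        desired <= 1 << 31 // e.g. cap at 2 GiB regardless of the wasm maximum
    }
    fn memory_grow_failed(&mut self, error: &Error) {
        eprintln!("memory.grow failed: {error:#}");
    }
}
```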
Nick Fitzgerald
d2ce1ac753 Fix a use-after-free bug when passing ExternRefs to Wasm
We _must not_ trigger a GC when moving refs from host code into
Wasm (e.g. returned from a host function or passed as arguments to a Wasm
function). After insertion into the table, this reference is no longer
rooted. If multiple references are being sent from the host into Wasm and we
allowed GCs during insertion, then the following events could happen:

* Reference A is inserted into the activations table. This does not trigger a
  GC, but does fill the table to capacity.

* The caller's reference to A is removed. Now the only reference to A is from
  the activations table.

* Reference B is inserted into the activations table. Because the table is at
  capacity, a GC is triggered.

* A is reclaimed because the only reference keeping it alive was the activation
  table's reference (it isn't inside any Wasm frames on the stack yet, so stack
  scanning and stack maps don't increment its reference count).

* We transfer control to Wasm, giving it A and B. Wasm uses A. That's a use
  after free.

To prevent uses after free, we cannot GC when moving refs into the
`VMExternRefActivationsTable` because we are passing them from the host to Wasm.

On the other hand, when we are *cloning* -- as opposed to moving -- refs from
the host to Wasm, then it is fine to GC while inserting into the activations
table, because the original referent that we are cloning from is still alive and
rooting the ref.
2021-09-14 14:23:42 -07:00
Alex Crichton
c8f55ed688 Optimize codegen slightly calling wasm functions
Currently wasm-calls work with `Result<T, Trap>` internally but `Trap`
is an enum defined in `wasmtime-runtime` which is actually quite large.
Since traps are supposed to be rare this commit changes these functions
to return a `Box<Trap>` which is un-boxed later up in the `wasmtime`
crate within a `#[cold]` function.
2021-09-02 07:26:03 -07:00
Dan Gohman
05d113148d Use std::alloc::alloc instead of libc::posix_memalign.
This makes Cranelift use the Rust `alloc` API for its allocations,
rather than directly calling into `libc`, which makes it respect
the `#[global_allocator]` configuration.

Also, use `region::page::ceil` instead of having our own copies of
that logic.
2021-08-31 15:49:50 -07:00
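For illustration, an aligned allocation through the Rust allocator API, which respects `#[global_allocator]` rather than bypassing it the way a direct `posix_memalign` call does:

```rust
use std::alloc::{alloc, dealloc, Layout};

fn main() {
    let page_size = 4096; // stand-in for a queried page size
    let layout = Layout::from_size_align(2 * page_size, page_size).unwrap();

    // Goes through the configured global allocator, not straight to libc.
    let ptr = unsafe { alloc(layout) };
    assert!(!ptr.is_null());
    assert_eq!(ptr as usize % page_size, 0); // page-aligned as requested

    unsafe { dealloc(ptr, layout) };
}
```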
Alex Crichton
9e0c910023 Add a Module::deserialize_file method (#3266)
* Add a `Module::deserialize_file` method

This commit adds a new method to the `wasmtime::Module` type,
`deserialize_file`. This is intended to be the same as the `deserialize`
method except for the serialized module is present as an on-disk file.
This enables Wasmtime to internally use `mmap` to avoid copying bytes
around and generally makes loading a module much faster.

A C API is added in this commit as well for various bindings to use this
accelerated path now as well. Another option perhaps for a Rust-based
API is to have an API taking a `File` itself to allow for a custom file
descriptor in one way or another, but for now that's left for a possible
future refactoring if we find a use case.

* Fix compat with main - handle readdonly mmap

* wip

* Try to fix Windows support
2021-08-31 13:05:51 -05:00
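A sketch of the intended usage, written against current Wasmtime (where deserialization is `unsafe` because the input must be a trusted, compatible artifact); details may have differed at the time of this commit:

```rust
use wasmtime::{Engine, Module};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // Ahead of time: compile a module and write the serialized artifact out.
    let bytes = Module::new(&engine, "(module)")?.serialize()?;
    std::fs::write("empty.cwasm", &bytes)?;

    // Later: load it back from the on-disk file, letting Wasmtime mmap the
    // artifact instead of copying the bytes through memory first.
    let _module = unsafe { Module::deserialize_file(&engine, "empty.cwasm")? };
    Ok(())
}
```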
Alex Crichton
4376cf2609 Add differential fuzzing against V8 (#3264)
* Add differential fuzzing against V8

This commit adds a differential fuzzing target to Wasmtime along the
lines of the wasmi and spec interpreters we already have, but with V8
instead. The intention here is that wasmi is unlikely to receive updates
over time (e.g. for SIMD), and the spec interpreter is not suitable for
fuzzing against in general due to its performance characteristics. The
hope is that V8 is indeed appropriate to fuzz against because it's
naturally receiving updates and it also is expected to have good
performance.

Here the `rusty_v8` crate is used which provides bindings to V8 as well
as precompiled binaries by default. This matches exactly the use case we
need and at least for now I think the `rusty_v8` crate will be
maintained by the Deno folks as they continue to develop it. If
maintaining it becomes an issue, though, we can evaluate other options to
differentially fuzz against.

For now this commit enables the SIMD and bulk-memory features of
fuzz-target generation, which should enable them to get differentially
fuzzed with V8 in addition to the compilation fuzzing we're already
getting.

* Use weak linkage for GDB jit helpers

This should help us deduplicate our symbol with other JIT runtimes, if
any. For now this leans on some C helpers to define the weak linkage
since Rust doesn't support that on stable yet.

* Don't use rusty_v8 on MinGW

They don't have precompiled libraries there.

* Fix msvc build

* Comment about execution
2021-08-31 09:34:55 -05:00
Alex Crichton
a237e73b5a Remove some allocations in CodeMemory (#3253)
* Remove some allocations in `CodeMemory`

This commit removes the `FinishedFunctions` type as well as allocations
associated with trampolines when allocating inside of a `CodeMemory`.
The main goal of this commit is to improve the time spent in
`CodeMemory` where currently today a good portion of time is spent
simply parsing symbol names and trying to extract function indices from
them. Instead this commit implements a new strategy (different from #3236)
where compilation records offset/length information for all
functions/trampolines so this doesn't need to be re-learned from the
object file later.

A consequence of this commit is that this offset information will be
decoded/encoded through `bincode` unconditionally, but we can also
optimize that later if necessary as well.

Internally this involved quite a bit of refactoring since the previous
map for `FinishedFunctions` was relatively heavily relied upon.

* comments
2021-08-30 10:35:17 -05:00
Alex Crichton
a662f5361d Move wasm data sections out of wasmtime_environ::Module (#3231)
* Reduce indentation in `to_paged`

Use a few early-returns from `match` to avoid lots of extra indentation.

* Move wasm data sections out of `wasmtime_environ::Module`

This is the first step down the road of #3230. The long-term goal is
that `Module` is always `bincode`-decoded, but wasm data segments are a
possibly very large portion of a module which we don't want to shove
through bincode. This refactors the internals of wasmtime
to be ok with this data living separately from the `Module` itself,
providing access at necessary locations.

Wasm data segments are now extracted from a wasm module and
concatenated directly. Data sections then describe ranges within this
concatenated list of data, and passive data works the same way. This
implementation does not lend itself to eventually optimizing the case
where passive data is dropped and no longer needed. That's left for a
future PR.
2021-08-24 14:04:03 -05:00
Alex Crichton
f3977f1d97 Fix determinism of compiled modules (#3229)
* Fix determinism of compiled modules

Currently wasmtime's compilation artifacts are not deterministic due to
the usage of `HashMap` during serialization which has randomized order
of its elements. This commit fixes that by switching to a sorted
`BTreeMap` for various maps. A test is also added to ensure determinism.

If in the future the performance of `BTreeMap` is not as good as
`HashMap` for some of these cases we can implement a fancier
`serialize_with`-style solution where we sort keys during serialization,
but only during serialization and otherwise use a `HashMap`.

* fix lightbeam
2021-08-23 17:08:19 -05:00
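A small standalone example of why `BTreeMap` gives deterministic output where `HashMap` does not:

```rust
use std::collections::{BTreeMap, HashMap};

fn main() {
    // HashMap iteration order is randomized, so serializing it naively can
    // produce different bytes for identical contents across runs.
    let hashed = HashMap::from([("b", 2), ("a", 1), ("c", 3)]);
    let _unstable: Vec<_> = hashed.keys().collect(); // order varies

    // BTreeMap always iterates in sorted key order, so the serialized form is
    // the same every time for the same contents.
    let sorted = BTreeMap::from([("b", 2), ("a", 1), ("c", 3)]);
    let stable: Vec<_> = sorted.keys().collect();
    assert_eq!(stable, ["a", "b", "c"].iter().collect::<Vec<_>>());
}
```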
Alex Crichton
f5041dd362 Implement a setting for reserved dynamic memory growth (#3215)
* Implement a setting for reserved dynamic memory growth

Dynamic memories aren't really that heavily used in Wasmtime right now
because for most 32-bit memories they're classified as "static" which
means they reserve 4gb of address space and never move. Growth of a
static memory is simply making pages accessible, so it's quite fast.

With the memory64 feature, however, this is no longer true since all
memory64 memories are classified as "dynamic" at this time. Previous to
this commit growth of a dynamic memory unconditionally moved the entire
linear memory in the host's address space, always resulting in a new
`Mmap` allocation. This behavior is causing fuzzers to time out when
working with 64-bit memories because incrementally growing a memory by 1
page at a time can incur a quadratic time complexity as bytes are
constantly moved.

This commit implements a scheme where there is now a tunable setting for
memory to be reserved at the end of a dynamic memory to grow into. This
means that dynamic memory growth is ideally amortized as most calls to
`memory.grow` will be able to grow into the pre-reserved space. Some
calls, though, will still need to copy the memory around.

This helps enable a commented out test for 64-bit memories now that it's
fast enough to run in debug mode. This is because the growth of memory
in the test no longer needs to copy 4gb of zeros.

* Test fixes & review comments

* More comments
2021-08-20 10:54:23 -05:00
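A sketch of configuring that reservation; `dynamic_memory_reserved_for_growth` is the method name in current Wasmtime and is assumed to correspond to the tunable this commit introduces:

```rust
use wasmtime::{Config, Engine};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Reserve extra address space after each dynamic linear memory so that
    // most `memory.grow` calls can grow in place instead of moving (and
    // copying) the whole allocation into a fresh mmap.
    config.dynamic_memory_reserved_for_growth(128 << 20); // 128 MiB of headroom
    let _engine = Engine::new(&config)?;
    Ok(())
}
```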
Alex Crichton
87c33c2969 Remove wasmtime-environ's dependency on cranelift-codegen (#3199)
* Move `CompiledFunction` into wasmtime-cranelift

This commit moves the `wasmtime_environ::CompiledFunction` type into the
`wasmtime-cranelift` crate. This type has lots of Cranelift-specific
pieces of compilation and doesn't need to be generated by all Wasmtime
compilers. This replaces the usage in the `Compiler` trait with a
`Box<Any>` type that each compiler can select. Each compiler must still
produce a `FunctionInfo`, however, which is shared information we'll
deserialize for each module.

The `wasmtime-debug` crate is also folded into the `wasmtime-cranelift`
crate as a result of this commit. One possibility was to move the
`CompiledFunction` commit into its own crate and have `wasmtime-debug`
depend on that, but since `wasmtime-debug` is Cranelift-specific at this
time it didn't seem too necessary to keep it separate.
If `wasmtime-debug` supports other backends in the future we can
recreate a new crate, perhaps with it refactored to not depend on
Cranelift.

* Move wasmtime_environ::reference_type

This now belongs in wasmtime-cranelift and nowhere else

* Remove `Type` reexport in wasmtime-environ

One less dependency on `cranelift-codegen`!

* Remove `types` reexport from `wasmtime-environ`

Less cranelift!

* Remove `SourceLoc` from wasmtime-environ

Change the `srcloc`, `start_srcloc`, and `end_srcloc` fields to a custom
`FilePos` type instead of `ir::SourceLoc`. These are only used in a few
places so there's not much to lose from an extra abstraction for these
leaf use cases outside of cranelift.

* Remove wasmtime-environ's dep on cranelift's `StackMap`

This commit "clones" the `StackMap` data structure in to
`wasmtime-environ` to have an independent representation from the one
chosen by Cranelift. This allows Wasmtime to decouple this runtime
dependency of stack map information and let the two evolve
independently, if necessary.

An alternative would be to refactor cranelift's implementation into a
separate crate and have wasmtime depend on that but it seemed a bit like
overkill to do so and easier to clone just a few lines for this.

* Define code offsets in wasmtime-environ with `u32`

Don't use Cranelift's `binemit::CodeOffset` alias to define this field
type since the `wasmtime-environ` crate will be losing the
`cranelift-codegen` dependency soon.

* Commit to using `cranelift-entity` in Wasmtime

This commit removes the reexport of `cranelift-entity` from the
`wasmtime-environ` crate and instead directly depends on the
`cranelift-entity` crate in all referencing crates. The original reason
for the reexport was to make cranelift version bumps easier since it's
less versions to change, but nowadays we have a script to do that.
Otherwise this encourages crates to use whatever they want from
`cranelift-entity` since we'll always depend on the whole crate.

It's expected that the `cranelift-entity` crate will continue to be a
lean crate in dependencies and suitable for use at both runtime and
compile time. Consequently there's no need to avoid its usage in
Wasmtime at runtime, since "remove Cranelift at compile time" is
primarily about the `cranelift-codegen` crate.

* Remove most uses of `cranelift-codegen` in `wasmtime-environ`

There's only one final use remaining, which is the reexport of
`TrapCode`, which will get handled later.

* Limit the glob-reexport of `cranelift_wasm`

This commit removes the glob reexport of `cranelift-wasm` from the
`wasmtime-environ` crate. This is intended to explicitly define what
we're reexporting and is a transitionary step to curtail the amount of
dependencies taken on `cranelift-wasm` throughout the codebase. For
example some functions used by debuginfo mapping are better imported
directly from the crate since they're Cranelift-specific. Note that
this is intended to be a temporary state of affairs; soon this reexport
will be gone entirely.

Additionally this commit reduces imports from `cranelift_wasm` and also
primarily imports from `crate::wasm` within `wasmtime-environ` to get a
better sense of what's imported from where and what will need to be
shared.

* Extract types from cranelift-wasm to cranelift-wasm-types

This commit creates a new crate called `cranelift-wasm-types` and
extracts type definitions from the `cranelift-wasm` crate into this new
crate. The purpose of this crate is to be a shared definition of wasm
types that can be shared both by compilers (like Cranelift) as well as
wasm runtimes (e.g. Wasmtime). This new `cranelift-wasm-types` crate
doesn't depend on `cranelift-codegen` and is the final step in severing
the unconditional dependency from Wasmtime to `cranelift-codegen`.

The final refactoring in this commit is to then reexport this crate from
`wasmtime-environ`, delete the `cranelift-codegen` dependency, and then
update all `use` paths to point to these new types.

The main change of substance here is that the `TrapCode` enum is
mirrored from Cranelift into this `cranelift-wasm-types` crate. While
this unfortunately results in three definitions (one more which is
non-exhaustive in Wasmtime itself) it's hopefully not too onerous and
ideally something we can patch up in the future.

* Get lightbeam compiling

* Remove unnecessary dependency

* Fix compile with uffd

* Update publish script

* Fix more uffd tests

* Rename cranelift-wasm-types to wasmtime-types

This reflects the purpose a bit more where it's types specifically
intended for Wasmtime and its support.

* Fix publish script
2021-08-18 13:14:52 -05:00
Alex Crichton
02ecfed7a0 Print more error info on sigaltstack failures (#3204)
A meager but hopefully somewhat useful attempt to further debugging of #3203
2021-08-18 12:33:06 -05:00