* Store the `ValRaw` type in little-endian format
This commit changes the internal representation of the `ValRaw` type to
an unconditionally little-endian format instead of its current
native-endian format. The documentation and various accessors here have
been updated as well as the associated trampolines that read `ValRaw`
to always work with little-endian values, converting to the host
endianness as necessary.
The motivation for this change originally comes from the implementation
of the component model that I'm working on. One aspect of the component
model's canonical ABI is how variants are passed to functions as
immediate arguments. For example for a component model function:
```
foo: function(x: expected<i32, f64>)
```
This translates to a core wasm function:
```wasm
(module
  (func (export "foo") (param i32 i64)
    ;; ...
  )
)
```
The first `i32` parameter to the core wasm function is the discriminant
of whether the result is an "ok" or an "err". The second `i64`, however,
is the "join" operation on the `i32` and `f64` payloads. Essentially
these two types are unioned into one type to get passed into the function.
Currently in the implementation of the component model my plan is to
construct a `*mut [ValRaw]` to pass through to WebAssembly, always
invoking component exports through host trampolines. This means that the
implementation for `Result<T, E>` needs to do the correct "join"
operation here when encoding a particular case into the corresponding
`ValRaw`.
I personally found this particularly tricky to do structurally. The
solution that I settled on with fitzgen was that if `ValRaw` is always
stored in a little-endian format then we can employ a trick: when
encoding a variant we first set all the `ValRaw` slots to zero, then
encode the particular case we have. Afterwards the `ValRaw` values are
already encoded into the correct format as if they'd been "join"ed.
For example if we were to encode `Ok(1i32)` then this would produce
`ValRaw { i32: 1 }`, which memory-wise is equivalent to `ValRaw { i64: 1 }`
if the other bytes in the `ValRaw` are guaranteed to be zero. Similarly
storing `ValRaw { f64 }` is equivalent to the storage required for
`ValRaw { i64 }` here in the join operation.
Note, though, that this equivalence relies on everything being
little-endian. Otherwise the in-memory representations of `ValRaw { i32: 1 }`
and `ValRaw { i64: 1 }` are different.
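As a concrete illustration, here's a minimal Rust sketch of the trick
for the `expected<i32, f64>` example above. The union layout and the
two-slot encoding are illustrative assumptions, not Wasmtime's actual
`ValRaw` definition:
```rust
#[derive(Copy, Clone)]
#[repr(C)]
union ValRaw {
    i32: i32,
    i64: i64,
    f64: u64, // f64 payloads stored as raw little-endian bits
}

// Encode `Ok(payload)` for `expected<i32, f64>` into two ABI slots.
fn encode_expected_ok(slots: &mut [ValRaw; 2], payload: i32) {
    // 1. Zero every slot so all bytes have a known value.
    *slots = [ValRaw { i64: 0 }; 2];
    // 2. Encode the case: discriminant 0 ("ok") and the i32 payload,
    //    both stored little-endian regardless of host endianness.
    slots[0] = ValRaw { i32: 0i32.to_le() };
    slots[1] = ValRaw { i32: payload.to_le() };
    // The bytes of slots[1] now equal those of
    // `ValRaw { i64: i64::from(payload as u32).to_le() }`: the upper
    // four bytes are still zero, so the i32 store doubles as the "join".
}
```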
That motivation is what leads to this change. It's expected that this is
a low-to-zero cost change in the sense that little-endian platforms will
see no change and big-endian platforms already have to byte-swap
loads/stores efficiently since WebAssembly itself mandates
little-endian memory.
Additionally the `ValRaw` type is an esoteric niche use case primarily
used for accelerating the C API right now, so it's expected that not
many users will have to update for this change.
* Track down some more endianness conversions
* Update wasm-tools crates
This commit updates the wasm-tools family of crates as used in Wasmtime.
Notably this brings in the update which removes module linking support
as well as a number of internal refactorings around names and such
within wasmparser itself. This updates all of the wasm translation
support which binds to wasmparser as appropriate.
Other crates all had API-compatible changes for at least what Wasmtime
used so no further changes were necessary beyond updating version
requirements.
* Update a test expectation
* Remove duplicate `TypeTables` type
This was needed historically, but it is no longer.
* Make the internals of `TypeTables` private
Instead of reaching internally for the `wasm_signatures` map, an `Index`
implementation now exists to indirect accesses through the type of the
index being accessed. For the component model this table of types will
grow a number of other tables, and this'll help consuming sites not
have to worry so much about which map they're reaching into.
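A sketch of the shape of this change, using hypothetical stand-in types
since the real definitions live in `wasmtime-environ`:
```rust
use std::ops::Index;

// Stand-ins for illustration only.
#[derive(Copy, Clone)]
pub struct SignatureIndex(u32);
pub struct WasmFuncType; // params/results elided

pub struct TypeTables {
    // Now private: consumers index the table instead.
    wasm_signatures: Vec<WasmFuncType>,
}

impl Index<SignatureIndex> for TypeTables {
    type Output = WasmFuncType;

    // The index type selects which internal map is consulted, so call
    // sites write `types[sig_index]` without naming `wasm_signatures`.
    fn index(&self, idx: SignatureIndex) -> &WasmFuncType {
        &self.wasm_signatures[idx.0 as usize]
    }
}
```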
* Support disabling backtraces at compile time
This commit adds support to Wasmtime to disable, at compile time, the
gathering of backtraces on traps. The `wasmtime` crate now sports a
`wasm-backtrace` feature which, when disabled, will mean that backtraces
are never collected at runtime, nor are unwinding tables inserted
into compiled objects.
The motivation for this commit stems from the fact that generating a
backtrace is quite a slow operation. Currently backtrace generation is
done with libunwind and `_Unwind_Backtrace` typically found in glibc or
other system libraries. When thousands of modules are loaded into the
same process though this means that the initial backtrace can take
nearly half a second and all subsequent backtraces can take upwards of
hundreds of milliseconds. Relative to all other operations in Wasmtime
this is extremely expensive at this time. In the future we'd like to
implement a more performant backtrace scheme but such an implementation
would require coordination with Cranelift and is a big chunk of work
that may take some time, so in the meantime if embedders don't need a
backtrace they can still use this option to disable backtraces at
compile time and avoid the performance pitfalls of collecting
backtraces.
I originally tried to make this a runtime configuration option but
ended up opting for a compile-time option because `Trap::new`
otherwise has no arguments and always captures a backtrace. By making
this a compile-time option it was possible to configure, statically, the
behavior of `Trap::new`. Additionally I also tried to minimize the
amount of `#[cfg]` necessary by largely only having it at the producer
and consumer sites.
Also a noteworthy restriction of this implementation is that if
backtrace support is disabled at compile time then reference types
support will be unconditionally disabled at runtime. With backtrace
support disabled there's no way to trace the stack of wasm frames which
means that GC can't happen given our current implementation.
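A hedged sketch of how a compile-time feature statically configures
something shaped like `Trap::new` (the type and field names here are
illustrative, not Wasmtime's internals):
```rust
struct Backtrace; // stand-in for a captured wasm backtrace

pub struct Trap {
    message: String,
    backtrace: Option<Backtrace>,
}

impl Trap {
    pub fn new(message: impl Into<String>) -> Trap {
        // Selected at compile time: the disabled arm doesn't exist in
        // the final binary, so there's no runtime branch or capture cost.
        #[cfg(feature = "wasm-backtrace")]
        let backtrace = Some(Backtrace);
        #[cfg(not(feature = "wasm-backtrace"))]
        let backtrace = None;
        Trap { message: message.into(), backtrace }
    }
}
```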
* Always enable backtraces for the C API
* Delete historical interruptable support in Wasmtime
This commit removes the `Config::interruptable` configuration along with
the `InterruptHandle` type from the `wasmtime` crate. The original
support for adding interruption to WebAssembly was added pretty early on
in the history of Wasmtime when there was no other method to prevent an
infinite loop from the host. Nowadays, however, there are alternative
methods for interruption such as fuel or epoch-based interruption.
One of the major downsides of `Config::interruptable` is that even when
it's not enabled it forces an atomic swap to happen when entering
WebAssembly code. This technically could be a non-atomic swap if the
configuration option isn't enabled but that produces even more branch-y
code on entry into WebAssembly which is already something we try to
optimize. Calling into WebAssembly takes on the order of dozens of
nanoseconds at this time and an atomic swap, even uncontended, can add
up to 5ns on some platforms.
The main goal of this PR is to remove this atomic swap on entry into
WebAssembly. This is done by removing the `Config::interruptable` field
entirely, moving all existing consumers to epochs instead which are
suitable for the same purposes. This means that the stack overflow check
is no longer entangled with the interruption check and perhaps one day
we could continue to optimize that further as well.
Some consequences of this change are:
* Epochs are now the only method of remote-thread interruption.
* There are no more Wasmtime traps that produce the `Interrupted` trap
code, although we may wish to move future traps to this so I left it
in place.
* The C API support for interrupt handles was also removed and bindings
for epoch methods were added.
* Function-entry checks for interruption are a tiny bit less efficient
since one check is performed for the stack limit and a second is
performed for the epoch as opposed to the `Config::interruptable`
style of bundling the stack limit and the interrupt check in one. It's
expected though that this is likely to not really be measurable.
* The old `VMInterrupts` structure is renamed to `VMRuntimeLimits`.
Previously (as in an hour ago) #3905 landed a new ability for fuzzing to
arbitrarily insert padding between functions. Running some fuzzers
locally though this instantly hit a lot of problems on AArch64 because
the arbitrary padding isn't aligned to 4 bytes like all other functions
are. To fix this issue appending functions now correctly aligns the
output as appropriate for the platform. The alignment argument for
appending was switched to an `Option`, where `None` means "use the
platform default" and otherwise an explicit alignment can be specified for
inserting other data (like arbitrary padding or Windows unwind tables).
* fuzz: Fuzz padding between compiled functions
This commit hooks up the custom
`wasmtime_linkopt_padding_between_functions` configuration option for
the cranelift compiler into the fuzz configuration, enabling us to ensure
that randomly inserting a moderate amount of padding between functions
shouldn't tamper with any results.
* fuzz: Fuzz the `Config::generate_address_map` option
This commit adds fuzz configuration where `generate_address_map` is
either enabled or disabled, unlike how it's always enabled for fuzzing
today.
* Remove unnecessary handling of relocations
This commit removes a number of bits and pieces all related to handling
relocations in JIT code generated by Wasmtime. None of this is necessary
nowadays that the "old backend" has been removed (quite some time ago)
and relocations are no longer expected to be in the JIT code at all.
Additionally with the minimum x86_64 features required to run wasm code
it should be expected that no libcalls are required either for
Wasmtime-based JIT code.
* Shrink the size of the anyfunc table in `VMContext`
This commit shrinks the size of the `VMCallerCheckedAnyfunc` table
allocated into a `VMContext` to be the size of the number of "escaped"
functions in a module rather than the number of functions in a module.
Escaped functions include exports, table elements, etc, and are
typically an order of magnitude smaller than the number of functions in
general. This should greatly shrink the `VMContext` for some modules;
while its size isn't causing any problems today, this should head off
any problems in the future.
This originally came up during the recent lazy-table-initialization
work. While it no longer has a direct performance benefit, since tables
aren't initialized at all on instantiation, it should still
theoretically improve long-running instances via smaller `VMContext`
allocations as well as better locality between anyfuncs.
* Fix some tests
* Remove redundant hash set
* Use a helper for pushing function type information
* Use a more descriptive `is_escaping` method
* Clarify a comment
* Fix condition
* Fix typo
* Move vmoffset field size and field name together
The previous code was quite confusing about what applied to which field.
The new code also makes it easier to move fields around and insert and
delete fields.
* Move builtin_functions before all variable sized fields
This allows the offset to be calculated at compile time
* Add cadd and cmul convenience functions
* Remove comment
* Change fields! syntax as per review
* Add implicit u32::from to fields!
Addresses #3809: when we are asked to create a Cranelift backend with
shared flags that indicate support for SIMD, we should check that the
ISA level needed for our SIMD lowerings is present.
* Shrink the size of `FuncData`
Before this commit on a 64-bit system the `FuncData` type had a size of
88 bytes and after this commit it has a size of 32 bytes. A `FuncData`
is required for all host functions in a store, including those inserted
from a `Linker` into a store used during linking. This means that
instantiation ends up creating a nontrivial number of these types and
pushing them into the store. Looking at some profiles there were some
surprisingly expensive movements of `FuncData` from the stack to a
vector for moves-by-value generated by Rust. Shrinking this type enables
more efficient code to be generated and additionally means less storage
is needed in a store's function array.
For instantiating the spidermonkey and rustpython modules this improves
instantiation by 10% since they each import a fair number of host
functions and the speedup here is relative to the number of items
imported.
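A runnable illustration of the general technique (not `FuncData`'s
literal definition): boxing the bulky payload shrinks every by-value
move down to a single pointer:
```rust
use std::mem::size_of;

struct Bulky([u64; 10]);          // stand-in for the old inline payload
struct FuncDataBefore(Bulky);     // ~80 bytes moved on every push
struct FuncDataAfter(Box<Bulky>); // 8 bytes moved; payload stays put

fn main() {
    assert!(size_of::<FuncDataAfter>() < size_of::<FuncDataBefore>());
    println!("{} -> {}", size_of::<FuncDataBefore>(), size_of::<FuncDataAfter>());
}
```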
* Use `ptr::copy_nonoverlapping` during initialization
Previously `ptr::copy` was used for copying imports into place which
translates to `memmove`, but `ptr::copy_nonoverlapping` can be used here
since it's statically known these areas don't overlap. While this
doesn't end up having a performance difference it's something I kept
noticing while looking at the disassembly of `initialize_vmcontext` so I
figured I'd go ahead and implement.
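For illustration, the change boils down to the following sketch (not
the exact `initialize_vmcontext` code); the caller must guarantee the
two regions are disjoint:
```rust
/// Copy `len` imports into the new VMContext's import array.
///
/// Safety: `src` and `dst` must be valid for `len` elements and must
/// not overlap.
unsafe fn copy_imports(dst: *mut usize, src: *const usize, len: usize) {
    // Before: std::ptr::copy(src, dst, len);     // lowers to memmove
    std::ptr::copy_nonoverlapping(src, dst, len); // lowers to memcpy
}
```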
* Indirect shared signature ids in the VMContext
This commit is a small improvement for the instantiation time of modules
by avoiding copying a list of `VMSharedSignatureIndex` entries into each
`VMContext`, instead building one inside of a module and sharing that
amongst all instances. This involves less lookups at instantiation time
and less movement of data during instantiation. The downside is that
type-checks on `call_indirect` now involve an additional load, but I'm
assuming that these are somewhat pessimized enough as-is that the
runtime impact won't be much there.
For instantiation performance this is a 5-10% win with
rustpython/spidermonkey instantiation. This should also reduce the size of
each `VMContext` for an instantiation since signatures are no longer
stored inline but shared amongst all instances with one module.
Note that one subtle change here is that the array of
`VMSharedSignatureIndex` was previously indexed by `TypeIndex`, and now
it's indexed by `SignatureIndex` which is a deduplicated form of
`TypeIndex`. This is done because we already had a list of those lying
around in `Module`, so it was easier to reuse that than to build a
separate array and store it somewhere.
* Reserve space in `Store<T>` with `InstancePre`
This commit updates the instantiation process to reserve space in a
`Store<T>` for the functions that an `InstancePre<T>`, as part of
instantiation, will insert into it. Using an `InstancePre<T>` to
instantiate allows pre-computing the number of host functions that will
be inserted into a store, and by pre-reserving space we can avoid costly
reallocations during instantiation by ensuring the function vector has
enough space to fit everything during the instantiation process.
Overall this makes instantiation of rustpython/spidermonkey about 8%
faster locally.
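A hedged usage sketch of the `InstancePre<T>` path; note that
`instantiate_pre`'s exact signature has varied across Wasmtime versions:
```rust
use anyhow::Result;
use wasmtime::{Engine, Linker, Module, Store};

fn run(engine: &Engine, module: &Module) -> Result<()> {
    let linker: Linker<()> = Linker::new(engine);
    // ... define host functions on `linker` here ...
    let mut store = Store::new(engine, ());
    // Creating the InstancePre once pre-computes, among other things,
    // how many host functions instantiation will push into a store.
    let pre = linker.instantiate_pre(&mut store, module)?;
    // Each instantiation can now reserve that space up front.
    let _instance = pre.instantiate(&mut store)?;
    Ok(())
}
```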
* Fix tests
* Use checked arithmetic
* Skip memfd creation with precompiled modules
This commit updates the memfd support internally to not actually use a
memfd if a compiled module originally came from disk via the
`wasmtime::Module::deserialize_file` API. In this situation we already
have a file descriptor open and there's no need to copy a module's heap
image to a new file descriptor.
To facilitate a new source of `mmap` the currently-memfd-specific-logic
of creating a heap image is generalized to a new form of
`MemoryInitialization` which is attempted for all modules at
module-compile-time. This means that the serialized artifact to disk
will have the memory image in its entirety waiting for us. Furthermore
the memory image is ensured to be padded and aligned carefully to the
target system's page size, notably meaning that the data section in the
final object file is page-aligned and the size of the data section is
also page aligned.
This means that when a precompiled module is mapped from disk we can
reuse the underlying `File` to mmap all initial memory images. This
means that the offset-within-the-memory-mapped-file can differ for
memfd-vs-not, but that's just another piece of state to track in the
memfd implementation.
In the limit this waters down the term "memfd" for this technique of
quickly initializing memory because we no longer use memfd
unconditionally (only when the backing file isn't available).
This does however open up an avenue in the future to porting this
support to other OSes because while `memfd_create` is Linux-specific
both macOS and Windows support mapping a file with copy-on-write. This
porting isn't done in this PR and is left for a future refactoring.
Closes #3758
* Enable "memfd" support on all unix systems
Cordon off the Linux-specific bits and enable the memfd support to
compile and run on platforms like macOS which have a Linux-like `mmap`.
This only works if a module is mapped from a precompiled module file on
disk, but that's better than not supporting it at all!
* Fix linux compile
* Use `Arc<File>` instead of `MmapVecFileBacking`
* Use a named struct instead of mysterious tuples
* Comment about unsafety in `Module::deserialize_file`
* Fix tests
* Fix uffd compile
* Always align data segments
No need to have conditional alignment since their sizes are all aligned
anyway
* Update comment in build.rs
* Use rustix, not `region`
* Fix some confusing logic/names around memory indexes
These functions all work with memory indexes, not specifically defined
memory indexes.
* Move function names out of `Module`
This commit moves function names in a module out of the
`wasmtime_environ::Module` type and into separate sections stored in the
final compiled artifact. Spurred on by #3787 to look at module load
times I noticed that a huge amount of time was spent in deserializing
this map. The `spidermonkey.wasm` file, for example, has a 3MB name
section which is a lot of unnecessary data to deserialize at module load
time.
The names of functions are now split out into their own dedicated
section of the compiled artifact and metadata about them is stored in a
more compact format at runtime by avoiding a `BTreeMap` and instead
using a sorted array. Overall this improves deserialize times by up to
80% for modules with large name sections since the name section is no
longer deserialized at load time and it's lazily paged in as names are
actually referenced.
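A sketch of the runtime side of that layout, with illustrative field
names: a sorted array searched on demand instead of a `BTreeMap`
deserialized up front:
```rust
/// One entry per named function, sorted by `func_index`.
struct NameEntry {
    func_index: u32,
    offset: u32, // byte offset of the name within the name section
    len: u32,
}

fn func_name<'a>(entries: &[NameEntry], section: &'a [u8], func: u32) -> Option<&'a str> {
    // O(log n) lookup straight out of the mapped section: nothing is
    // decoded until a name is actually requested.
    let i = entries.binary_search_by_key(&func, |e| e.func_index).ok()?;
    let e = &entries[i];
    let bytes = section.get(e.offset as usize..e.offset as usize + e.len as usize)?;
    std::str::from_utf8(bytes).ok()
}
```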
* Fix a typo
* Fix compiled module determinism
The names need to be sorted not only afterwards but also up front to
ensure the data of the name section is consistent.
During instance initialization, we build two sorts of arrays eagerly:
- We create an "anyfunc" (a `VMCallerCheckedAnyfunc`) for every function
in an instance.
- We initialize every element of a funcref table with an initializer to
a pointer to one of these anyfuncs.
Most instances will not touch (via call_indirect or table.get) all
funcref table elements. And most anyfuncs will never be referenced,
because most functions are never placed in tables or used with
`ref.func`. Thus, both of these initialization tasks are quite wasteful.
Profiling shows that a significant fraction of the remaining
instance-initialization time after our other recent optimizations is
going into these two tasks.
This PR implements two basic ideas:
- The anyfunc array can be lazily initialized as long as we retain the
information needed to do so. For now, in this PR, we just recreate the
anyfunc whenever a pointer is taken to it, because doing so is fast
enough; in the future we could keep some state to know whether the
anyfunc has been written yet and skip this work if redundant.
This technique allows us to leave the anyfunc array as uninitialized
memory, which can be a significant savings. Filling it with
initialized anyfuncs is very expensive, but even zeroing it is
expensive: e.g. in a large module, it can be >500KB.
- A funcref table can be lazily initialized as long as we retain a link
to its corresponding instance and function index for each element. A
zero in a table element means "uninitialized", and a slowpath does the
initialization.
Funcref tables are a little tricky because funcrefs can be null. We need
to distinguish "element was initially non-null, but user stored explicit
null later" from "element never touched" (ie the lazy init should not
blow away an explicitly stored null). We solve this by stealing the LSB
from every funcref (anyfunc pointer): when the LSB is set, the funcref
is initialized and we don't hit the lazy-init slowpath. We insert the
bit on storing to the table and mask it off after loading.
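A minimal sketch of the tagging scheme; the real code works with
`*mut VMCallerCheckedAnyfunc` pointers, which are at least 2-byte
aligned so the low bit is free:
```rust
const INITIALIZED_BIT: usize = 1;

/// Store a funcref (possibly null) into a table slot.
fn table_store(slot: &mut usize, anyfunc: usize) {
    // An explicit null becomes 0b1, distinct from the 0b0 sentinel
    // that means "never initialized".
    *slot = anyfunc | INITIALIZED_BIT;
}

/// Load a funcref from a table slot; `None` means "take the slowpath".
fn table_load(slot: usize) -> Option<usize> {
    if slot & INITIALIZED_BIT == 0 {
        None // untouched element: lazily initialize it now
    } else {
        Some(slot & !INITIALIZED_BIT) // mask the tag back off
    }
}
```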
We do have to set up a precomputed array of `FuncIndex`s for the table
in order for this to work. We do this as part of the module compilation.
This PR also refactors the way that the runtime crate gains access to
information computed during module compilation.
Performance effect measured with in-tree benches/instantiation.rs, using
SpiderMonkey built for WASI, and with memfd enabled:
```
BEFORE:
sequential/default/spidermonkey.wasm
time: [68.569 us 68.696 us 68.856 us]
sequential/pooling/spidermonkey.wasm
time: [69.406 us 69.435 us 69.465 us]
parallel/default/spidermonkey.wasm: with 1 background thread
time: [69.444 us 69.470 us 69.497 us]
parallel/default/spidermonkey.wasm: with 16 background threads
time: [183.72 us 184.31 us 184.89 us]
parallel/pooling/spidermonkey.wasm: with 1 background thread
time: [69.018 us 69.070 us 69.136 us]
parallel/pooling/spidermonkey.wasm: with 16 background threads
time: [326.81 us 337.32 us 347.01 us]
WITH THIS PR:
sequential/default/spidermonkey.wasm
time: [6.7821 us 6.8096 us 6.8397 us]
change: [-90.245% -90.193% -90.142%] (p = 0.00 < 0.05)
Performance has improved.
sequential/pooling/spidermonkey.wasm
time: [3.0410 us 3.0558 us 3.0724 us]
change: [-95.566% -95.552% -95.537%] (p = 0.00 < 0.05)
Performance has improved.
parallel/default/spidermonkey.wasm: with 1 background thread
time: [7.2643 us 7.2689 us 7.2735 us]
change: [-89.541% -89.533% -89.525%] (p = 0.00 < 0.05)
Performance has improved.
parallel/default/spidermonkey.wasm: with 16 background threads
time: [147.36 us 148.99 us 150.74 us]
change: [-18.997% -18.081% -17.285%] (p = 0.00 < 0.05)
Performance has improved.
parallel/pooling/spidermonkey.wasm: with 1 background thread
time: [3.1009 us 3.1021 us 3.1033 us]
change: [-95.517% -95.511% -95.506%] (p = 0.00 < 0.05)
Performance has improved.
parallel/pooling/spidermonkey.wasm: with 16 background threads
time: [49.449 us 50.475 us 51.540 us]
change: [-85.423% -84.964% -84.465%] (p = 0.00 < 0.05)
Performance has improved.
```
So an improvement of something like 80-95% for a very large module (7420
functions in its one funcref table, 31928 functions total).
* Don't copy `VMBuiltinFunctionsArray` into each `VMContext`
This is another PR along the lines of "let's squeeze all possible
performance we can out of instantiation". Before this PR we would copy,
by value, the contents of `VMBuiltinFunctionsArray` into each
`VMContext` allocated. This array of function pointers is modestly-sized
but growing over time as we add various intrinsics. Additionally it's
the exact same for all `VMContext` allocations.
This PR attempts to speed up instantiation slightly by instead storing
an indirection to the function array. This means that calling a builtin
intrinsic is a tad bit slower since it requires two loads instead of one
(one to get the base pointer, another to get the actual address).
Otherwise though `VMContext` initialization is now simply setting one
pointer instead of doing a `memcpy` from one location to another.
With some macro-magic this commit also replaces the previous
implementation with one that's more `const`-friendly which also gets us
compile-time type-checks of libcalls as well as compile-time
verification that all libcalls are defined.
Overall, as with #3739, the win is very modest here. Locally I measured
the time taken to instantiate an empty module with one function drop
from 1.9us to 1.7us. While small at these scales it's still a 10%
improvement!
* Review comments
This PR introduces a new way of performing cooperative timeslicing that
is intended to replace the "fuel" mechanism. The tradeoff is that this
mechanism interrupts with less precision: not at deterministic points
where fuel runs out, but rather when the Engine enters a new epoch. The
generated code instrumentation is substantially faster, however, because
it does not need to do as much work as when tracking fuel; it only loads
the global "epoch counter" and does a compare-and-branch at backedges
and function prologues.
This change has been measured as ~twice as fast as fuel-based
timeslicing for some workloads, especially control-flow-intensive
workloads such as the SpiderMonkey JS interpreter on Wasm/WASI.
The intended interface is that the embedder of the `Engine` performs an
`engine.increment_epoch()` call periodically, e.g. once per millisecond.
An async invocation of a Wasm guest on a `Store` can specify a number of
epoch-ticks that are allowed before an async yield back to the
executor's event loop. (The initial amount and automatic "refills" are
configured on the `Store`, just as for fuel.) This call does only
signal-safe work (it increments an `AtomicU64`) so could be invoked from
a periodic signal, or from a thread that wakes up once per period.
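A hedged sketch of that embedding pattern using the epoch APIs this
change introduces on the `wasmtime` crate (consult the current docs for
exact signatures):
```rust
use std::{thread, time::Duration};
use wasmtime::{Config, Engine, Store};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.epoch_interruption(true);
    let engine = Engine::new(&config)?;

    // The embedder ticks the epoch periodically; the increment is just
    // an atomic add, so it's cheap and signal-safe.
    let ticker = engine.clone();
    thread::spawn(move || loop {
        thread::sleep(Duration::from_millis(1));
        ticker.increment_epoch();
    });

    let mut store = Store::new(&engine, ());
    // Wasm in this store traps (or yields, for async stores) one epoch
    // tick after the deadline is reached.
    store.set_epoch_deadline(1);
    // ... instantiate and call wasm here ...
    Ok(())
}
```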
* Add a compilation section to disable address maps
This commit adds a new `Config::generate_address_map` compilation
setting which is used to disable emission of the `.wasmtime.addrmap`
section of compiled artifacts. This section is unfortunately currently
around the size of the entire `.text` section itself, and for size
reasons it may be desirable to omit it. Functionality-wise all that is
lost is knowing
the precise wasm module offset address of a faulting instruction or in a
backtrace of instructions. This also means that if the module has DWARF
debugging information available with it Wasmtime isn't able to produce a
filename and line number in the backtrace.
This option remains enabled by default. This option may not be needed in
the future with #3547 perhaps, but in the meantime it seems reasonable
enough to support a configuration mode where the section is entirely
omitted if the smallest module possible is desired.
* Fix some CI issues
* Update tests/all/traps.rs
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
* Do less work in compilation for address maps
But only when disabled
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
When encountering a subprogram that is dead code (as indicated by the
dead code proposal
https://dwarfstd.org/ShowIssue.php?issue=200609.1), don't generate debug
output for the subprogram or any of its children.
* Add some debug logging for timing in module compiles
This is sometimes helpful when debugging slow compiles from fuzz bugs or
similar.
* Fix total duration calculation to not double-count
This also paves the way for unifying TargetIsa and MachBackend, since now they map one to one. In theory the two traits could be merged, which would be nice to limit the number of total concepts. Also they have quite different responsibilities, so it might be fine to keep them separate.
Interestingly, this PR started as removing RegInfo from the TargetIsa trait since the adapter returned a dummy value there. From the fallout, I noticed that all the Display implementations didn't need an ISA anymore (since these were only used to render ISA-specific registers). Also the whole family of RegInfo / ValueLoc / RegUnit was exclusively used by the old backend, and these could be removed. Notably, some IR instructions needed to be removed because they were using RegUnit too: these were the oddballs regfill / regmove / regspill / copy_special, IR instructions inserted by the old regalloc. Fare thee well!
We _must not_ trigger a GC when moving refs from host code into
Wasm (e.g. returned from a host function or passed as arguments to a Wasm
function). After insertion into the table, this reference is no longer
rooted. If multiple references are being sent from the host into Wasm and we
allowed GCs during insertion, then the following events could happen:
* Reference A is inserted into the activations table. This does not trigger a
GC, but does fill the table to capacity.
* The caller's reference to A is removed. Now the only reference to A is from
the activations table.
* Reference B is inserted into the activations table. Because the table is at
capacity, a GC is triggered.
* A is reclaimed because the only reference keeping it alive was the activation
table's reference (it isn't inside any Wasm frames on the stack yet, so stack
scanning and stack maps don't increment its reference count).
* We transfer control to Wasm, giving it A and B. Wasm uses A. That's a use
after free.
To prevent uses after free, we cannot GC when moving refs into the
`VMExternRefActivationsTable` because we are passing them from the host to Wasm.
On the other hand, when we are *cloning* -- as opposed to moving -- refs from
the host to Wasm, then it is fine to GC while inserting into the activations
table, because the original referent that we are cloning from is still alive and
rooting the ref.
When debug info was enabled, we would put the debug info sections in between the
text section and the unwind info section. But the unwind info is encoded in a
position-independent manner (so that we don't need relocs for it) that relies on
it directly following the text section. The result of the misplacement was some
crashes inside the unwinder.
We were previously using `_wasmtime_eh_frame` but there is no good reason to
add the Wasmtime-specific prefix. Using the standard name allows for
better inspection with standard tools like `dwarfdump`.
* Use relative `call` instructions between wasm functions
This commit is a relatively major change to the way that Wasmtime
generates code for Wasm modules and how functions call each other.
Prior to this commit all function calls between functions, even if they
were defined in the same module, were done indirectly through a
register. To implement this the backend would emit an absolute 8-byte
relocation near all function calls, load that address into a register,
and then call it. While this technique is simple to implement and easy
to get right, it has two primary downsides associated with it:
* Function calls are always indirect which means they are more difficult
to predict, resulting in worse performance.
* Generating a relocation-per-function call requires expensive
relocation resolution at module-load time, which can be a large
contributing factor to how long it takes to load a precompiled module.
To fix these issues, while also somewhat compromising on the previously
simple implementation technique, this commit switches wasm calls within
a module to using the `colocated` flag enabled in Cranelift-speak, which
basically means that a relative call instruction is used with a
relocation that's resolved relative to the pc of the call instruction
itself.
When switching the `colocated` flag to `true` this commit is also then
able to move much of the relocation resolution from `wasmtime_jit::link`
into `wasmtime_cranelift::obj` during object-construction time. This
frontloads all relocation work which means that there's actually no
relocations related to function calls in the final image, solving both
of our points above.
The main gotcha in implementing this technique is that there are
hardware limitations to relative function calls which mean we can't
simply blindly use them. AArch64, for example, can only go +/- 64 MB
from the `bl` instruction to the target, which means that if the
function we're calling is a greater distance away then we would fail to
resolve that relocation. On x86_64 the limits are +/- 2GB which are much
larger, but theoretically still feasible to hit. Consequently the main
increase in implementation complexity is fixing this issue.
This issue is actually already present in Cranelift itself, and is
internally one of the invariants handled by the `MachBuffer` type. When
generating a function relative jumps between basic blocks have similar
restrictions. This commit adds new methods for the `MachBackend` trait
and updates the implementation of `MachBuffer` to account for all these
new branches. Specifically the changes to `MachBuffer` are:
* For AArch64 the `LabelUse::Branch26` value now supports veneers, and
AArch64 calls use this to resolve relocations.
* The `emit_island` function has been rewritten internally to handle
some cases which didn't come up before, such as:
* When emitting an island the deadline is now recalculated, where
previously it was always set to infinitely far in the future. This was ok
prior since only a `Branch19` supported veneers and once it was
promoted no veneers were supported, so without multiple layers of
promotion the lack of a new deadline was ok.
* When emitting an island all pending fixups had veneers forced if
their branch target wasn't known yet. This was generally ok for
19-bit fixups since the only kind getting a veneer was a 19-bit
fixup, but with mixed kinds it's a bit odd to force veneers for a
26-bit fixup just because a nearby 19-bit fixup needed a veneer.
Instead fixups are now re-enqueued unless they're known to be
out-of-bounds. This may run the risk of generating more islands for
19-bit branches but it should also reduce the number of islands for
between-function calls.
* Otherwise the internal logic was tweaked to ideally be a bit more
simple, but that's a pretty subjective criteria in compilers...
I've added some simple testing of this for now. A synthetic compiler
option was created to simply add padding of 0s between functions, and test
cases implement various forms of calls that at least need veneers. A
test is also included for x86_64, but it is unfortunately pretty slow
because it requires generating 2GB of output. I'm hoping for now it's
not too bad, but we can disable the test if it's prohibitive and
otherwise just comment the necessary portions to be sure to run the
ignored test if these parts of the code have changed.
The final end-result of this commit is that for a large module I'm
working with the number of relocations dropped to zero, meaning that
nothing actually needs to be done to the text section when it's loaded
into memory (yay!). I haven't run final benchmarks yet but this is the
last remaining source of significant slowdown when loading modules,
after I land a number of other PRs both active and ones that I only have
locally for now.
* Fix arm32
* Review comments
* Don't copy executable code into a `CodeMemory`
This commit removes the copy of compiled artifacts into a `CodeMemory`. In
general this commit drastically changes the meaning of a `CodeMemory`.
Previously it was an iteratively-pushed-on structure that would
accumulate executable code over time. Afterwards, however, it's a
manager for an `MmapVec` which updates the permissions on text section
to ensure that the pages are executable.
By taking ownership of an `MmapVec` within a `CodeMemory` there's no
need to copy any data around, which means that the `.text` section in
the ELF image produced by Wasmtime is usable as-is after placement in
memory and relocations have been resolved. This moves Wasmtime one step
closer to being able to directly use a module after it's `mmap`'d into
memory, optimizing when a module is loaded.
* Fix windows section alignment
* Review comments
* Remove some allocations in `CodeMemory`
This commit removes the `FinishedFunctions` type as well as allocations
associated with trampolines when allocating inside of a `CodeMemory`.
The main goal of this commit is to improve the time spent in
`CodeMemory` where currently today a good portion of time is spent
simply parsing symbol names and trying to extract function indices from
them. Instead this commit implements a new strategy (different from #3236)
where compilation records offset/length information for all
functions/trampolines so this doesn't need to be re-learned from the
object file later.
A consequence of this commit is that this offset information will be
decoded/encoded through `bincode` unconditionally, but we can also
optimize that later if necessary as well.
Internally this involved quite a bit of refactoring since the previous
map for `FinishedFunctions` was relatively heavily relied upon.
* comments
This commit moves the `traps` field of `FunctionInfo` into a section of
the compiled artifact produced by Cranelift. This section is quite large
and when previously encoded/decoded with `bincode` this can take quite
some time to process. Traps are expected to be relatively rare and it's
not necessarily the right tradeoff to spend so much time
serializing/deserializing this data, so this commit offloads the section
into a custom-encoded binary format located elsewhere in the compiled image.
This is similar to #3240 in its goal which is to move very large pieces
of metadata to their own sections to avoid decoding anything when we
load a precompiled module. This also has the small benefit that it's
slightly more efficient storage for the trap information too, but that's
a negligible benefit.
This is part of #3230 to make loading modules fast.
This commit moves the `address_map` field of `FunctionInfo` into a
custom-encoded section of the executable. The goal of this commit is, as
previous commits, to push less data through `bincode`. The `address_map`
field is actually extremely large and has huge benefits of not being
decoded when we load a module. This data is only used for traps and
suchlike, so it's not overly important that it's massaged into a
precise form that the runtime can access extremely speedily.
The `FunctionInfo` type does retain a tiny bit of information about the
function itself (its start source location), but other than that the
`FunctionAddressMap` structure is moved from `wasmtime-environ` to
`wasmtime-cranelift` since it's now no longer needed outside of that
context.
* Replace some cfg(debug) with cfg(debug_assertions)
Neither Cargo nor rustc ever sets `cfg(debug)` automatically, so it's expected
that these usages were intended to be `cfg(debug_assertions)`.
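For reference, the behavioral difference between the two cfgs (this is
standard Cargo/rustc behavior, nothing Wasmtime-specific):
```rust
// Never compiled: no build profile defines a plain `debug` cfg, so
// this item silently vanishes in every build.
#[cfg(debug)]
fn never_included() {}

// Compiled in dev builds, compiled out with `--release`:
#[cfg(debug_assertions)]
fn debug_only_checks() {}

fn main() {
    #[cfg(debug_assertions)]
    debug_only_checks();
}
```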
* Fix MachBuffer debug-assertion invariant checks.
We should only check invariants when we expect them to be true --
specifically, before the branch-simplification algorithm runs. At other
times, they may be temporarily violated: e.g., after
`add_{cond,uncond}_branch()` but before emitting the branch bytes. This
is the expected sequence, and the rest of the code is consistent with
that.
Some of the checks also were not quite right (w.r.t. the written
invariants); specifically, we should not check validity of a label's
offset when the label has been aliased to another label.
It seems that this is an unfortunate consequence of leftover
debug-assertions that weren't actually being run, so weren't kept
up-to-date. Should no longer happen now that we actually check these!
Co-authored-by: Chris Fallin <chris@cfallin.org>
* Move wasm data/debuginfo into the ELF compilation image
This commit moves existing allocations of `Box<[u8]>` stored separately
from compilation's final ELF image into the ELF image itself. The goal
of this commit is to reduce the amount of data which `bincode` will need
to process in the future. DWARF debugging information and wasm data
segments can be quite large, and they're relatively rarely read, so
there's typically no need to copy them around. Instead by moving them
into the ELF image this opens up the opportunity in the future to
eliminate copies and use data directly as-found in the image itself.
For information accessed possibly-multiple times, such as the wasm data
ranges, the indexes of the data within the ELF image are computed when
a `CompiledModule` is created. These indexes are then used to directly
index into the image without having to root around in the ELF file each
time they're accessed.
One other change located here is that the symbolication context
previously cloned the debug information into it to adhere to the
`'static` lifetime safely, but this isn't actually ever used in
`wasmtime` right now so the unsafety around this has been removed and
instead borrowed data is returned (no more clones, yay!).
* Fix lightbeam
* Fix determinism of compiled modules
Currently wasmtime's compilation artifacts are not deterministic due to
the usage of `HashMap` during serialization which has randomized order
of its elements. This commit fixes that by switching to a sorted
`BTreeMap` for various maps. A test is also added to ensure determinism.
If in the future the performance of `BTreeMap` is not as good as
`HashMap` for some of these cases we can implement a fancier
`serialize_with`-style solution where we sort keys during serialization,
but only during serialization and otherwise use a `HashMap`.
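A small illustration of why the map choice matters for byte-identical
artifacts (a sketch, not the actual serialization code):
```rust
use std::collections::BTreeMap;

// A BTreeMap iterates in key order, so this produces identical bytes
// on every run; the same loop over a HashMap would not, because
// HashMap's iteration order is randomized per process.
fn serialize(map: &BTreeMap<String, u32>) -> Vec<u8> {
    let mut out = Vec::new();
    for (key, value) in map {
        out.extend_from_slice(key.as_bytes());
        out.extend_from_slice(&value.to_le_bytes());
    }
    out
}
```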
* fix lightbeam
This was historically defined in `wasmtime-environ` but it's only used
in `wasmtime-cranelift`, so this commit moves the definition to the
`debug` module where it's primarily used.
* Implement a setting for reserved dynamic memory growth
Dynamic memories aren't really that heavily used in Wasmtime right now
because for most 32-bit memories they're classified as "static" which
means they reserve 4gb of address space and never move. Growth of a
static memory is simply making pages accessible, so it's quite fast.
With the memory64 feature, however, this is no longer true since all
memory64 memories are classified as "dynamic" at this time. Previous to
this commit growth of a dynamic memory unconditionally moved the entire
linear memory in the host's address space, always resulting in a new
`Mmap` allocation. This behavior is causing fuzzers to time out when
working with 64-bit memories because incrementally growing a memory by 1
page at a time can incur a quadratic time complexity as bytes are
constantly moved.
This commit implements a scheme where there is now a tunable setting for
memory to be reserved at the end of a dynamic memory to grow into. This
means that dynamic memory growth is ideally amortized as most calls to
`memory.grow` will be able to grow into the pre-reserved space. Some
calls, though, will still need to copy the memory around.
This helps enable a commented out test for 64-bit memories now that it's
fast enough to run in debug mode. This is because the growth of memory
in the test no longer needs to copy 4gb of zeros.
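An illustrative sketch of the amortization; field names and the copy
step are assumptions rather than Wasmtime's implementation:
```rust
struct DynamicMemory {
    accessible: usize, // bytes currently usable by wasm
    mapped: usize,     // bytes actually reserved in the allocation
}

impl DynamicMemory {
    /// Grow by `delta` bytes, keeping `extra` bytes reserved past the
    /// new size so most growths avoid moving the memory.
    fn grow(&mut self, delta: usize, extra: usize) -> Result<(), ()> {
        let new_size = self.accessible.checked_add(delta).ok_or(())?;
        if new_size > self.mapped {
            // Slow path: allocate a larger mapping (with fresh
            // reserved space) and copy the old contents over.
            self.mapped = new_size.checked_add(extra).ok_or(())?;
            // ... mmap the new region and memcpy the old bytes here ...
        }
        // Fast path (and the tail of the slow path): just make the new
        // pages accessible.
        self.accessible = new_size;
        Ok(())
    }
}
```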
* Test fixes & review comments
* More comments
* Move `CompiledFunction` into wasmtime-cranelift
This commit moves the `wasmtime_environ::CompiledFunction` type into the
`wasmtime-cranelift` crate. This type has lots of Cranelift-specific
pieces of compilation and doesn't need to be generated by all Wasmtime
compilers. This replaces the usage in the `Compiler` trait with a
`Box<Any>` type that each compiler can select. Each compiler must still
produce a `FunctionInfo`, however, which is shared information we'll
deserialize for each module.
The `wasmtime-debug` crate is also folded into the `wasmtime-cranelift`
crate as a result of this commit. One possibility was to move the
`CompiledFunction` commit into its own crate and have `wasmtime-debug`
depend on that, but since `wasmtime-debug` is Cranelift-specific at this
time it didn't seem like it was too necessary to keep it separate.
If `wasmtime-debug` supports other backends in the future we can
recreate a new crate, perhaps with it refactored to not depend on
Cranelift.
* Move wasmtime_environ::reference_type
This now belongs in wasmtime-cranelift and nowhere else
* Remove `Type` reexport in wasmtime-environ
One less dependency on `cranelift-codegen`!
* Remove `types` reexport from `wasmtime-environ`
Less cranelift!
* Remove `SourceLoc` from wasmtime-environ
Change the `srcloc`, `start_srcloc`, and `end_srcloc` fields to a custom
`FilePos` type instead of `ir::SourceLoc`. These are only used in a few
places so there's not much to lose from an extra abstraction for these
leaf use cases outside of cranelift.
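A sketch of the new leaf abstraction; the exact representation and
sentinel value here are assumptions:
```rust
/// A small, Cranelift-free replacement for `ir::SourceLoc` at these
/// leaf use sites: just a byte offset into the original wasm file.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub struct FilePos(u32);

impl FilePos {
    /// Reserve `u32::MAX` as "no position recorded".
    pub const UNKNOWN: FilePos = FilePos(u32::MAX);

    pub fn file_offset(self) -> Option<u32> {
        if self.0 == u32::MAX { None } else { Some(self.0) }
    }
}
```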
* Remove wasmtime-environ's dep on cranelift's `StackMap`
This commit "clones" the `StackMap` data structure in to
`wasmtime-environ` to have an independent representation that that
chosen by Cranelift. This allows Wasmtime to decouple this runtime
dependency of stack map information and let the two evolve
independently, if necessary.
An alternative would be to refactor cranelift's implementation into a
separate crate and have wasmtime depend on that but it seemed a bit like
overkill to do so and easier to clone just a few lines for this.
* Define code offsets in wasmtime-environ with `u32`
Don't use Cranelift's `binemit::CodeOffset` alias to define this field
type since the `wasmtime-environ` crate will be losing the
`cranelift-codegen` dependency soon.
* Commit to using `cranelift-entity` in Wasmtime
This commit removes the reexport of `cranelift-entity` from the
`wasmtime-environ` crate and instead directly depends on the
`cranelift-entity` crate in all referencing crates. The original reason
for the reexport was to make cranelift version bumps easier since it's
less versions to change, but nowadays we have a script to do that.
Otherwise this encourages crates to use whatever they want from
`cranelift-entity` since we'll always depend on the whole crate.
It's expected that the `cranelift-entity` crate will continue to be a
lean crate in dependencies and suitable for use at both runtime and
compile time. Consequently there's no need to avoid its usage in
Wasmtime at runtime, since "remove Cranelift at compile time" is
primarily about the `cranelift-codegen` crate.
* Remove most uses of `cranelift-codegen` in `wasmtime-environ`
There's only one final use remaining, which is the reexport of
`TrapCode`, which will get handled later.
* Limit the glob-reexport of `cranelift_wasm`
This commit removes the glob reexport of `cranelift-wasm` from the
`wasmtime-environ` crate. This is intended to explicitly define what
we're reexporting and is a transitional step to curtail the amount of
dependencies taken on `cranelift-wasm` throughout the codebase. For
example some functions used by debuginfo mapping are better imported
directly from the crate since they're Cranelift-specific. Note that
this is intended to be a temporary state of affairs; soon this reexport
will be gone entirely.
Additionally this commit reduces imports from `cranelift_wasm` and also
primarily imports from `crate::wasm` within `wasmtime-environ` to get a
better sense of what's imported from where and what will need to be
shared.
* Extract types from cranelift-wasm to cranelift-wasm-types
This commit creates a new crate called `cranelift-wasm-types` and
extracts type definitions from the `cranelift-wasm` crate into this new
crate. The purpose of this crate is to be a shared definition of wasm
types that can be shared both by compilers (like Cranelift) as well as
wasm runtimes (e.g. Wasmtime). This new `cranelift-wasm-types` crate
doesn't depend on `cranelift-codegen` and is the final step in severing
the unconditional dependency from Wasmtime to `cranelift-codegen`.
The final refactoring in this commit is to then reexport this crate from
`wasmtime-environ`, delete the `cranelift-codegen` dependency, and then
update all `use` paths to point to these new types.
The main change of substance here is that the `TrapCode` enum is
mirrored from Cranelift into this `cranelift-wasm-types` crate. While
this unfortunately results in three definitions (one more which is
non-exhaustive in Wasmtime itself) it's hopefully not too onerous and
ideally something we can patch up in the future.
* Get lightbeam compiling
* Remove unnecessary dependency
* Fix compile with uffd
* Update publish script
* Fix more uffd tests
* Rename cranelift-wasm-types to wasmtime-types
This reflects the purpose a bit more where it's types specifically
intended for Wasmtime and its support.
* Fix publish script
* Reimplement how unwind information is stored
This commit is a major refactoring of how unwind information is stored
after compilation of a function has finished. Previously we would store
the raw `UnwindInfo` as a result of compilation and this would get
serialized/deserialized alongside the rest of the ELF object that
compilation creates. Whenever functions were registered with
`CodeMemory` this would also result in registering unwinding information
dynamically at runtime, which in the case of Unix, for example, would
dynamically created FDE/CIE entries on-the-fly.
Eventually I'd like to support compiling Wasmtime without Cranelift, but
this means that `UnwindInfo` wouldn't be easily available to decode into
and create unwinding information from. To solve this I've changed the
ELF object created to have the unwinding information encoded into it
ahead-of-time so loading code into memory no longer needs to create
unwinding tables. This change has two different implementations for
Windows/Unix:
* On Windows the implementation was much easier. The unwinding
information on Windows is already stored after the function itself in
the text section. This was actually slightly duplicated in object
building and in code memory allocation. Now the object building
continues to do the same, recording unwinding information after
functions, and code memory no longer manually tracks this.
Additionally Wasmtime will emit a special custom section in the object
file with unwinding information which is the list of
`RUNTIME_FUNCTION` structures that `RtlAddFunctionTable` expects. This
means that the object file has all the information precompiled into it
and registration at runtime is simply passing a few pointers around to
the runtime.
* Unix was a little bit more difficult than Windows. Today a `.eh_frame`
section is created on-the-fly with offsets in FDEs specified as the
absolute address that functions are loaded at. This absolute
address hindered the ability to precompile the FDE into the object
file itself. I've switched how addresses are encoded, though, to using
`DW_EH_PE_pcrel` which means that FDE addresses are now specified
relative to the FDE itself. This means that we can maintain a fixed
offset between the `.eh_frame` loaded in memory and the beginning of
code memory. When doing so this enables precompiling the `.eh_frame`
section into the object file and at runtime when loading an object no
further construction of unwinding information is needed.
The overall result of this commit is that unwinding information is no
longer stored in its cranelift-data-structure form on disk. This means
that this unwinding information format is only present during
compilation, which will make it that much easier to compile out
cranelift in the future.
This commit also significantly refactors `CodeMemory` since the way
unwinding information is handled is now much different from before.
Previously `CodeMemory` was suitable for incrementally adding more and
more functions to it, but nowadays a `CodeMemory` either lives per
module (in which case all functions are known up front) or it's created
once-per-`Func::new` with two trampolines. In both cases we know all
functions up front so the functionality of incrementally adding more and
more segments is no longer needed. This commit removes the ability to
add a function-at-a-time in `CodeMemory` and instead it can now only
load objects in their entirety. A small helper function is added to
build a small object file for trampolines in `Func::new` to handle
allocation there.
Finally, this commit also folds the `wasmtime-obj` crate directly into
the `wasmtime-cranelift` crate and its builder structure to be more
amenable to this strategy of managing unwinding tables.
It is not intentional to have any real functional change as a result of
this commit. This might accelerate loading a module from cache slightly
since less work is needed to manage the unwinding information, but
that's just a side benefit from the main goal of this commit which is to
remove the dependence on cranelift unwinding information being available
at runtime.
* Remove isa reexport from wasmtime-environ
* Trim down reexports of `cranelift-codegen`
Remove everything non-essential so that only the bits which will need to
be refactored out of cranelift remain.
* Fix debug tests
* Review comments
This commit started off by deleting the `cranelift_codegen::settings`
reexport in the `wasmtime-environ` crate and then basically played
whack-a-mole until everything compiled again. The main result of this is
that the `wasmtime-*` family of crates have generally less of a
dependency on the `TargetIsa` trait and type from Cranelift. While the
dependency isn't entirely severed yet this is at least a significant
start.
This commit is intended to be largely refactorings, no functional
changes are intended here. The refactorings are:
* A `CompilerBuilder` trait has been added to `wasmtime_environ` which
serves as an abstraction used to create compilers and configure them
in a uniform fashion. The `wasmtime::Config` type now uses this
instead of cranelift-specific settings. The `wasmtime-jit` crate
exports the ability to create a compiler builder from a
`CompilationStrategy`, which only works for Cranelift right now. In a
cranelift-less build of Wasmtime this is expected to return a trait
object that fails all requests to compile.
* The `Compiler` trait in the `wasmtime_environ` crate has been souped
up with a number of methods that Wasmtime and other crates needed.
* The `wasmtime-debug` crate is now moved entirely behind the
`wasmtime-cranelift` crate.
* The `wasmtime-cranelift` crate is now only depended on by the
`wasmtime-jit` crate.
* Wasm types in `cranelift-wasm` no longer contain their IR type,
instead they only contain the `WasmType`. This is required to get
everything to align correctly but will also be required in a future
refactoring where the types used by `cranelift-wasm` will be extracted
to a separate crate.
* I moved around a fair bit of code in `wasmtime-cranelift`.
* Some gdb-specific jit-specific code has moved from `wasmtime-debug` to
`wasmtime-jit`.
* Move all trampoline compilation to `wasmtime-cranelift`
This commit moves compilation of all the trampolines used in wasmtime
behind the `Compiler` trait object to live in `wasmtime-cranelift`. The
long-term goal of this is to enable depending on cranelift *only* from
the `wasmtime-cranelift` crate, so by moving these dependencies we
should make that a little more flexible.
* Fix windows build
* Implement the memory64 proposal in Wasmtime
This commit implements the WebAssembly [memory64 proposal][proposal] in
both Wasmtime and Cranelift. In terms of work done Cranelift ended up
needing very little work here since most of it was already prepared for
64-bit memories at one point or another. Most of the work in Wasmtime is
largely refactoring, changing a bunch of `u32` values to something else.
A number of internal and public interfaces are changing as a result of
this commit, for example:
* Accessors on `wasmtime::Memory` that work with pages now all return
`u64` unconditionally rather than `u32`. This makes it possible to
accommodate 64-bit memories with this API, but we may also want to
consider `usize` here at some point since the host can't grow past
`usize`-limited pages anyway.
* The `wasmtime::Limits` structure is removed in favor of
minimum/maximum methods on table/memory types.
* Many libcall intrinsics called by jit code now unconditionally take
`u64` arguments instead of `u32`. Return values are `usize`, however,
since the return value, if successful, is always bounded by host
memory while arguments can come from any guest.
* The `heap_addr` clif instruction now takes a 64-bit offset argument
instead of a 32-bit one. It turns out that the legalization of
`heap_addr` already worked with 64-bit offsets, so this change was
fairly trivial to make.
* The runtime implementation of mmap-based linear memories has changed
to largely work in `usize` quantities in its API and in bytes instead
of pages. This simplifies various aspects and reflects that
mmap-memories are always bound by `usize` since that's what the host
is using to address things, and additionally most calculations care
about bytes rather than pages except for the very edge where we're
going to/from wasm.
Overall I've tried to minimize the amount of `as` casts as possible,
using checked `try_from` and checked arithmetic with either error
handling or explicit `unwrap()` calls to tell us about bugs in the
future. Most locations have relatively obvious things to do with various
implications on various hosts, and I think they should all be roughly of
the right shape but time will tell. I mostly relied on the compiler
complaining that various types weren't aligned to figure out
type-casting, and I manually audited some of the more obvious locations.
I suspect we have a number of hidden locations that will panic on 32-bit
hosts if 64-bit modules try to run there, but otherwise I think we
should be generally ok (famous last words). In any case I wouldn't want
to enable this by default naturally until we've fuzzed it for some time.
In terms of the actual underlying implementation, no one should expect
memory64 to be all that fast. Right now it's implemented with
"dynamic" heaps which have a few consequences:
* All memory accesses are bounds-checked. I'm not sure how aggressively
Cranelift tries to optimize out bounds checks, but I suspect not a ton
since we haven't stressed this much historically.
* Heaps are always precisely sized. This means that every call to
`memory.grow` will incur a `memcpy` of memory from the old heap to the
new. We probably want to at least look into `mremap` on Linux and
otherwise try to implement schemes where dynamic heaps have some
reserved pages to grow into to help amortize the cost of
`memory.grow`.
The memory64 spec test suite is scheduled to now run on CI, but as with
all the other spec test suites it's really not all that comprehensive.
I've tried adding more tests for basic things as I've had to implement
guards for them, but I wouldn't really consider the testing adequate
from just this PR itself. I did try to take care in one test to actually
allocate a 4gb+ heap and then avoid running that in the pooling
allocator or in emulation because otherwise that may fail or take
excessively long.
[proposal]: https://github.com/WebAssembly/memory64/blob/master/proposals/memory64/Overview.md
* Fix some tests
* More test fixes
* Fix wasmtime tests
* Fix doctests
* Revert to 32-bit immediate offsets in `heap_addr`
This commit updates the generation of addresses in wasm code to always
use 32-bit offsets for `heap_addr`, and if the calculated offset is
bigger than 32-bits we emit a manual add with an overflow check.
* Disable memory64 for spectest fuzzing
* Fix wrong offset being added to heap addr
* More comments!
* Clarify bytes/pages