Currently the lowering for `iconst` will sign-extend the payload value
of the `iconst` instruction itself, but the payload is already
sign-extended, so this isn't necessary. This commit removes the redundant
sign extension.
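As a standalone illustration of why the extra extension is redundant (a sketch, not the actual lowering code): sign extension is idempotent, so re-extending an already-extended payload cannot change it.

```rust
/// Sign-extend the low 32 bits of `x` to 64 bits.
fn sext32(x: i64) -> i64 {
    (x as i32) as i64
}

fn main() {
    // The instruction payload is stored already sign-extended to 64 bits, so
    // applying the same extension again is a no-op.
    let payload = sext32(-5);
    assert_eq!(sext32(payload), payload);
}
```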
Currently, the `build.rs` script that generates Rust source from the
ISLE DSL will only do this generation if the `rebuild-isle` Cargo
feature is specified. By default, it is not. This is based on the
principle that we (the build script) do not modify the source tree as
managed by git; git-managed files are strictly a human-managed and
human-edited resource. By adding the opt-in Cargo feature, a developer
is requesting the build script to perform an explicit action. (In my
understanding at least, this principle comes from the general philosophy
of hermetic builds: the output should be a pure function of the input,
and part of this is that the input is read-only. If we modify the source
tree, then all bets are off.)
Unfortunately, requiring the opt-in feature also creates a footgun that
is easy to hit: if a developer modifies the ISLE DSL source, but forgets
to specify the Cargo feature, then the compiler will silently be built
successfully with stale source, and will silently exclude any changes
that were made.
The generated source is checked into git for a good reason: we want the
DSL compiler not to affect build times for the overwhelmingly common case
that Cranelift is used as a dependency but the backends are not being
actively developed. (This overhead comes mainly from building `islec`
itself.)
So, what to do? This PR implements a middle ground first described in
[this conversation](https://github.com/bytecodealliance/wasmtime/pull/3506#discussion_r743113351), in which we:
- Generate a hash (SHA-512) of the ISLE DSL source and produce a
"manifest" of ISLE inputs alongside the generated source; and
- Always read the ISLE DSL source, and see if the manifest is still
valid, on builds that do not have the opt-in "rebuild" feature.
This allows us to know whether the ISLE compiler output would have been
the same (modulo changes to the DSL compiler itself, which are
out-of-scope here), without actually building the ISLE compiler and
running it.
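As a minimal sketch of this check (assuming the `sha2` crate and an illustrative `path = hex-digest` manifest format; the real build script's format and paths may differ):

```rust
use sha2::{Digest, Sha512};
use std::fs;

/// Hex-encode the SHA-512 digest of one ISLE source file.
fn isle_digest(path: &str) -> std::io::Result<String> {
    let contents = fs::read(path)?;
    let digest = Sha512::digest(&contents);
    Ok(digest.iter().map(|b| format!("{:02x}", b)).collect())
}

/// Returns `true` if every `path = digest` line in the manifest still matches
/// the ISLE sources on disk, i.e. the checked-in generated code is up to date.
fn manifest_is_valid(manifest: &str) -> std::io::Result<bool> {
    for line in manifest.lines().filter(|l| !l.trim().is_empty()) {
        let (path, expected) = line
            .split_once(" = ")
            .expect("malformed manifest line");
        if isle_digest(path.trim())? != expected.trim() {
            return Ok(false);
        }
    }
    Ok(true)
}
```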
If the compiler-backend developer modifies an ISLE source file and then
tries to build `cranelift-codegen` without adding the `rebuild-isle`
Cargo feature, they get the following output:
```text
Error: the ISLE source files that resulted in the generated Rust source
* src/isa/x64/lower/isle/generated_code.rs
have changed but the generated source was not rebuilt! These ISLE source
files are:
* src/clif.isle
* src/prelude.isle
* src/isa/x64/inst.isle
* src/isa/x64/lower.isle
Please add `--features rebuild-isle` to your `cargo build` command
if you wish to rebuild the generated source, then include these changes
in any git commits you make that include the changes to the ISLE.
For example:
$ cargo build -p cranelift-codegen --features rebuild-isle
(This build script cannot do this for you by default because we cannot
modify checked-into-git source without your explicit opt-in.)
```
which will tell them exactly what they need to do to fix the problem!
Note that I am leaving the "Rebuild ISLE" CI job alone for now, because
otherwise, we are trusting whoever submits a PR to generate the correct
generated source. In other words, the manifest is a communication from
the checked-in tree to the developer, but we still need to verify that
the checked-in generated Rust source and the manifest are correct with
respect to the checked-in ISLE source.
This moves the `f32const` and `f64const` instructions from `lower.rs` to
ISLE. I was originally going to add something else, but `isle.rs`'s
manual use of `constructor_imm(..)` necessitated filling
out the `imm` cases for f32/f64 constants, so I figured I'd go ahead and
move these instructions as well.
The special case for zero constants, which uses `xorp{s,d}`, is preserved
from today's `lower.rs`, but a special case isn't added for the all-ones
constant. The existing comment says to use `cmpeqp{s,d}`, but as
discovered in other recent PRs that's not quite sufficient: comparing a
register against itself doesn't work if the register happens to hold a
NaN, so something else will be required (perhaps `pcmpeq` or similar? I
figured I'd defer that to later).
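A tiny illustration of why the compare-with-itself trick is insufficient (IEEE 754 equality involving NaN is always false, so `cmpeqp{s,d}` of a register against itself yields zeros, not all-ones, whenever that register holds a NaN):

```rust
fn main() {
    // NaN compares unequal to everything, including itself.
    assert!(f32::NAN != f32::NAN);
    assert!(f64::NAN != f64::NAN);
}
```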
This was my first attempt at transitioning code to ISLE, originally to
fix #3327, but that fix has since landed on `main`, so this is instead
now just porting a few operations to ISLE.
Closes #3336
On the build side, this commit introduces two things:
1. The automatic generation of various ISLE definitions for working with
CLIF. Specifically, it generates extern type definitions for clif opcodes and
the clif instruction data `enum`, as well as extractors for matching each clif
instruction. This happens inside the `cranelift-codegen-meta` crate.
2. The compilation of ISLE DSL sources to Rust code that can be included in the
main `cranelift-codegen` compilation.
Next, this commit introduces the integration glue code required to get
ISLE-generated Rust code hooked up in clif-to-x64 lowering. When lowering a clif
instruction, we first try to use the ISLE code path. If it succeeds, then we are
done lowering this instruction. If it fails, then we proceed along the existing
hand-written code path for lowering.
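The dispatch looks roughly like the following self-contained sketch, where the types and function names are placeholders rather than the actual Cranelift API:

```rust
// Placeholder types standing in for the real lowering context and CLIF
// instruction handles.
struct LoweringContext;
#[derive(Clone, Copy)]
struct ClifInst(u32);

/// Stand-in for the ISLE-generated entry point: returns `Some(())` if some
/// ISLE rule matched and lowered the instruction, `None` otherwise.
fn isle_lower(_ctx: &mut LoweringContext, inst: ClifInst) -> Option<()> {
    if inst.0 % 2 == 0 { Some(()) } else { None }
}

/// Stand-in for the existing hand-written lowering path.
fn fallback_lower(_ctx: &mut LoweringContext, _inst: ClifInst) {
    // ... open-coded Rust lowering ...
}

fn lower(ctx: &mut LoweringContext, inst: ClifInst) {
    // Try the ISLE code path first; if it declines the instruction, fall back.
    if isle_lower(ctx, inst).is_none() {
        fallback_lower(ctx, inst);
    }
}

fn main() {
    let mut ctx = LoweringContext;
    lower(&mut ctx, ClifInst(1));
    lower(&mut ctx, ClifInst(2));
}
```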
Finally, this commit ports many lowering rules over from hand-written,
open-coded Rust to ISLE.
In the process of supporting ISLE, this commit also makes the x64 `Inst` capable
of expressing SSA by supporting 3-operand forms for all of the existing
instructions that only have a 2-operand form encoding:
dst = src1 op src2
Rather than only the typical x86-64 2-operand form:
dst = dst op src
This allows `MachInst` to be in SSA form, since `dst` and `src1` are
disentangled.
("3-operand" and "2-operand" are a little bit of a misnomer since not all
operations are binary operations, but we do the same thing for, e.g., unary
operations by disentangling the sole operand from the result.)
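As an illustrative sketch of bridging the two forms (stand-in types, not the actual x64 `Inst`; in the real backend the tied-operand constraint is resolved later, during register allocation and emission), the 3-operand form can be expanded into a move plus the 2-operand encoding:

```rust
// Stand-in register and instruction types for illustration only.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Reg(u8);

#[derive(Debug)]
enum X64Op {
    Mov { dst: Reg, src: Reg },
    // Real x86-64 2-operand encoding: dst = dst op src.
    Add2 { dst: Reg, src: Reg },
}

/// Expand the SSA-style `dst = src1 + src2` into `mov dst, src1; add dst, src2`.
fn expand_add3(dst: Reg, src1: Reg, src2: Reg) -> Vec<X64Op> {
    let mut ops = Vec::new();
    if dst != src1 {
        ops.push(X64Op::Mov { dst, src: src1 });
    }
    ops.push(X64Op::Add2 { dst, src: src2 });
    ops
}

fn main() {
    println!("{:?}", expand_add3(Reg(0), Reg(1), Reg(2)));
}
```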
There are two motivations for this change:
1. To allow ISLE lowering code to have value-equivalence semantics. We want ISLE
lowering to translate a CLIF expression that evaluates to some value into a
`MachInst` expression that evaluates to the same value. We want both the
lowering itself and the resulting `MachInst` to be pure and referentially
transparent. This is both a nice paradigm for compiler writers that are
authoring and maintaining lowering rules and is a prerequisite to any sort of
formal verification of our lowering rules in the future.
2. Better align `MachInst` with `regalloc2`'s API, which requires that the input
be in SSA form.
This opcode was removed as part of the old-backend cleanup in #3446.
While this opcode will definitely go away eventually, it is
unfortunately still used today in Lucet (as we just discovered while
working to upgrade Lucet's pinned Cranelift version). Lucet is
deprecated and slated to eventually be completely sunset in favor of
Wasmtime; but until that happens, we need to keep this opcode.
This eagerly evaluates the `format!` and produces a `String` with a heap
allocation, regardless of whether `foo` is `Some`/`Ok` or `None`/`Err`. Using
`foo.unwrap_or_else(|| panic!(...))` makes it so that the error message
formatting is only evaluated if `foo` is `None`/`Err`.
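For example (an illustrative call site, not the original one):

```rust
fn lookup(name: &str) -> Option<u32> {
    if name == "known" { Some(1) } else { None }
}

fn main() {
    let foo = lookup("known");

    // Eager: the format! runs and allocates a String even though `foo` is Some.
    let _v = foo.expect(&format!("no entry for {:?}", "known"));

    // Lazy: the message is only formatted (and the panic only happens) if
    // `foo` is actually None.
    let _v = foo.unwrap_or_else(|| panic!("no entry for {:?}", "known"));
}
```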
Some platforms such as AArch64 Linux support different memory page
sizes, so we need to be conservative when choosing the code section
alignment (which is equal to the page size) by using the maximum.
Copyright (c) 2021, Arm Limited.
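As a hedged illustration of the alignment choice described above (the page sizes listed are common AArch64 Linux configurations; how the real code obtains them may differ):

```rust
/// Choose a code-section alignment that is safe for every page size the
/// platform might be configured with.
fn code_section_alignment(possible_page_sizes: &[u64]) -> u64 {
    possible_page_sizes.iter().copied().max().unwrap_or(0x1000)
}

fn main() {
    // AArch64 Linux kernels can be built with 4 KiB, 16 KiB, or 64 KiB pages,
    // so the conservative choice is 64 KiB.
    assert_eq!(code_section_alignment(&[0x1000, 0x4000, 0x1_0000]), 0x1_0000);
}
```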
This also paves the way for unifying TargetIsa and MachBackend, since they now map one to one. In theory the two traits could be merged, which would be nice to limit the number of total concepts. On the other hand, they have quite different responsibilities, so it might be fine to keep them separate.
Interestingly, this PR started as removing RegInfo from the TargetIsa trait, since the adapter returned a dummy value there. From the fallout, I noticed that the Display implementations no longer needed an ISA (since it was only used to render ISA-specific registers). The whole family of RegInfo / ValueLoc / RegUnit was also used exclusively by the old backend, so these could be removed. Notably, some IR instructions had to be removed because they used RegUnit too: the oddballs regfill / regmove / regspill / copy_special, which were IR instructions inserted by the old register allocator. Fare thee well!
* Support full set of ADD LOGICAL / SUBTRACT LOGICAL instructions
* Full implementation of IaddIfcout lowering
* Enable most memory64 tests (except simd and threads)
This commit removes the Lightbeam backend from Wasmtime as per [RFC 14].
This backend hasn't received maintenance in quite some time, and as [RFC
14] indicates this doesn't meet the threshold for keeping the code
in-tree, so this commit removes it.
A fast "baseline" compiler may still be added in the future. The
addition of such a backend should be in line with [RFC 14], though, with
the principles we now have for stable releases of Wasmtime. I'll close
out Lightbeam-related issues once this is merged.
[RFC 14]: https://github.com/bytecodealliance/rfcs/pull/14
- Add relocation handling needed after PR #3275
- Fix incorrect handling of signed constants detected by PR #3056 test
- Fix LabelUse max pos/neg ranges; fix overflow in buffers.rs
- Disable fuzzing tests that require pre-built v8 binaries
- Disable cranelift test that depends on i128
- Temporarily disable memory64 tests
Rename the existing AtomicRMW to AtomicRMWLoop, and lower atomic_rmw
operations directly, without a loop, when LSE support is available.
Copyright (c) 2021, Arm Limited
There were cases where the AArch64 backend assumed that an IR
operation would always operate on certain types (the most likely
reason being that the corresponding WebAssembly instruction did
not cover anything else), even though the definition of the IR
operation imposed no constraints like that.
Copyright (c) 2021, Arm Limited.
* Use relative `call` instructions between wasm functions
This commit is a relatively major change to the way that Wasmtime
generates code for Wasm modules and how functions call each other.
Prior to this commit all function calls between functions, even if they
were defined in the same module, were done indirectly through a
register. To implement this the backend would emit an absolute 8-byte
relocation near all function calls, load that address into a register,
and then call it. While this technique is simple to implement and easy
to get right, it has two primary downsides associated with it:
* Function calls are always indirect, which means they are more difficult
to predict, resulting in worse performance.
* Generating a relocation per function call requires expensive
relocation resolution at module-load time, which can be a large
contributing factor to how long it takes to load a precompiled module.
To fix these issues, while also somewhat compromising on the previously
simple implementation technique, this commit switches wasm calls within
a module to using the `colocated` flag enabled in Cranelift-speak, which
basically means that a relative call instruction is used with a
relocation that's resolved relative to the pc of the call instruction
itself.
When switching the `colocated` flag to `true` this commit is also then
able to move much of the relocation resolution from `wasmtime_jit::link`
into `wasmtime_cranelift::obj` during object-construction time. This
frontloads all relocation work which means that there's actually no
relocations related to function calls in the final image, solving both
of our points above.
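A minimal sketch of the kind of PC-relative fixup that can now be resolved while building the object (the actual relocation kinds and addend conventions in `wasmtime_cranelift::obj` are not shown; this just illustrates an x86-64 `call rel32` patch):

```rust
/// Patch an x86-64 `call rel32` site: the 32-bit displacement is relative to
/// the end of the 4-byte displacement field, which the `-4` accounts for.
fn resolve_call_rel32(text: &mut [u8], reloc_offset: usize, target_offset: u64) {
    let disp = target_offset as i64 - 4 - reloc_offset as i64;
    let disp32 = i32::try_from(disp).expect("call target out of +/- 2 GiB range");
    text[reloc_offset..reloc_offset + 4].copy_from_slice(&disp32.to_le_bytes());
}

fn main() {
    // A small dummy text section with a call displacement field at offset 6
    // and a callee that starts at offset 12.
    let mut text = [0u8; 16];
    resolve_call_rel32(&mut text, 6, 12);
    assert_eq!(&text[6..10], &2i32.to_le_bytes());
}
```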
The main gotcha in implementing this technique is that there are
hardware limitations to relative function calls which mean we can't
simply blindly use them. AArch64, for example, can only go +/- 64 MB
from the `bl` instruction to the target, which means that if the
function we're calling is a greater distance away then we would fail to
resolve that relocation. On x86_64 the limit is +/- 2 GB, which is much
larger but still theoretically possible to hit. Consequently the main
increase in implementation complexity is fixing this issue.
This issue is actually already present in Cranelift itself, and is
internally one of the invariants handled by the `MachBuffer` type. When
generating a function, relative jumps between basic blocks have similar
restrictions. This commit adds new methods for the `MachBackend` trait
and updates the implementation of `MachBuffer` to account for all these
new branches. Specifically the changes to `MachBuffer` are:
* For AArch64 the `LabelUse::Branch26` value now supports veneers, and
AArch64 calls use this to resolve relocations.
* The `emit_island` function has been rewritten internally to handle
some cases which didn't come up before, such as:
* When emitting an island the deadline is now recalculated, where
previously it was always set infinitely far in the future. This was
fine before since only a `Branch19` supported veneers, and once it was
promoted no veneers were supported, so without multiple layers of
promotion the lack of a new deadline was ok.
* When emitting an island all pending fixups had veneers forced if
their branch target wasn't known yet. This was generally ok for
19-bit fixups since the only kind getting a veneer was a 19-bit
fixup, but with mixed kinds it's a bit odd to force veneers for a
26-bit fixup just because a nearby 19-bit fixup needed a veneer.
Instead, fixups are now re-enqueued unless they're known to be
out-of-bounds. This may run the risk of generating more islands for
19-bit branches but it should also reduce the number of islands for
between-function calls.
* Otherwise the internal logic was tweaked to ideally be a bit
simpler, but that's a pretty subjective criterion in compilers...
I've added some simple testing of this for now. A synthetic compiler
option was created to simply add padded 0s between functions and test
cases implement various forms of calls that at least need veneers. A
test is also included for x86_64, but it is unfortunately pretty slow
because it requires generating 2GB of output. I'm hoping for now it's
not too bad, but we can disable the test if it's prohibitive and
otherwise just comment the necessary portions to be sure to run the
ignored test if these parts of the code have changed.
The final end-result of this commit is that for a large module I'm
working with the number of relocations dropped to zero, meaning that
nothing actually needs to be done to the text section when it's loaded
into memory (yay!). I haven't run final benchmarks yet but this is the
last remaining source of significant slowdown when loading modules,
after I land a number of other PRs both active and ones that I only have
locally for now.
* Fix arm32
* Review comments