This patch makes spillslot allocation, spilling and reloading all based
on register class only. Hence when we have a 32- or 64-bit value in a
128-bit XMM register on x86-64 or vector register on aarch64, this
results in larger spillslots and spills/restores.
Why make this change, if it results in less efficient stack-frame usage?
Simply put, it is safer: there is always a risk when allocating
spillslots or spilling/reloading that we get the wrong type and make the
spillslot or the store/load too small. This was one contributing factor
to CVE-2021-32629, and is now the source of a fuzzbug in SIMD code that
puns an arbitrary user-controlled vector constant over another
stackslot. (If this were a pointer, that could result in RCE. SIMD is
not yet on by default in a release, fortunately.)
Specifically, we have not been particularly careful about using moves
between values of different types, for example with `raw_bitcast` or
with certain SIMD operations, and such moves indicate to regalloc.rs
that vregs are in equivalence classes and some arbitrary vreg in the
class is provided when allocating the spillslot or spilling/reloading.
Since regalloc.rs does not track actual type, and since we haven't been
careful about moves, we can't really trust this "arbitrary vreg in
equivalence class" to provide accurate type information.
In the fix to CVE-2021-32629 we fixed this for integer registers by
always spilling/reloading 64 bits; this fix can be seen as the analogous
change for FP/vector regs.
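A minimal sketch of the idea, with hypothetical names and sizes (the
real change lives in the spillslot-allocation and spill/reload paths):

    // Hedged sketch: size spillslots by register class alone, always
    // reserving the full width of the class regardless of the value's type.
    enum RegClass {
        Int,   // x86-64 GPRs / aarch64 integer registers
        Float, // x86-64 XMM / aarch64 vector registers
    }

    fn spillslot_size_bytes(class: RegClass) -> u32 {
        match class {
            // Always spill/reload the full 64 bits (the CVE-2021-32629 fix).
            RegClass::Int => 8,
            // Always spill/reload the full 128 bits, even for a 32- or
            // 64-bit scalar that happens to live in an XMM/vector register.
            RegClass::Float => 16,
        }
    }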
* aarch64: Migrate ishl/ushr/sshr to ISLE
This commit migrates the `ishl`, `ushr`, and `sshr` instructions to
ISLE. These involve special cases for almost all types of integers
(including vectors) and helper functions for the i128 lowerings since
the i128 lowerings look to be used for other instructions as well. This
doesn't delete the i128 lowerings in the Rust code just yet because
they're still used by other Rust lowerings, but they should be
deletable in due time once those lowerings are translated to ISLE.
* Use more descriptive names for i128 lowerings
* Use a with_flags-lookalike for csel
* Use existing `with_flags_*`
* Comment backwards order
* Update generated code
Uncovered by @bjorn3 (thanks!): 8- and 16-bit rotates were not working
properly in recent versions of Cranelift with part of the lowering
migrated to ISLE.
This PR fixes a few issues:
- 8- and 16-bit rotate-left needs to mask a constant amount, if any,
  because we use a 32-bit rotate instruction and so don't get the
  appropriate shift-amount masking for free from x86 semantics (see the
  sketch after this list).
- `operand_size_from_type` was incorrect: it only handled 32- and 64-bit
  types and silently returned `OperandSize::Size32` for everything else.
  It now uses the `OperandSize::from_ty(ty)` helper, as the pre-ISLE code
  did.
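A hedged sketch of the masking fix (illustrative helper, not the actual
ISLE code): a 32-bit rotate masks its amount to 5 bits, but an 8-bit
rotl must wrap at 8, so constant amounts are masked explicitly:

    // Mask a constant rotate amount for a narrow (8- or 16-bit) type.
    fn mask_const_rotate_amount(ty_bits: u64, amount: u64) -> u64 {
        debug_assert!(ty_bits.is_power_of_two() && ty_bits <= 16);
        amount & (ty_bits - 1) // e.g. i8: amount & 7; i16: amount & 15
    }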
Our test coverage for narrow value types is not great; this PR adds some
runtests for rotl/rotr but more would always be better!
This commit migrates the sign/zero extension instructions from
`lower_inst.rs` to ISLE. There's actually a fair amount going on in this
migration since a few other pieces needed touching up along the way as
well:
* First is the actual migration of `uextend` and `sextend`. These
instructions are relatively simple but end up having a number of special
cases. I've attempted to replicate all of the cases here, but
double-checking would be good.
* This commit also fixes a few issues where sign/zero-extending the
result of a vector extraction into i128 panics in the current backend.
* This commit adds exhaustive testing that extending the result of a
vector extraction is a no-op with respect to the extraction itself.
* A bugfix around ISLE glue was required to get this commit working,
notably the case where the `RegMapper` implementation was trying to
map an input to an output (meaning ISLE was passing through an input
unmodified to the output) wasn't working. This requires a `mov`
instruction to be generated and this commit updates the glue to do
this. At the same time this commit updates the ISLE glue to share more
infrastructure between x64 and aarch64 so both backends get this fix
instead of just aarch64.
Overall I think that the translation to ISLE was a net benefit for these
instructions. It's relatively obvious what all the cases are now, unlike
before where it took a few reads of the code and some boolean switches
to figure out which path was taken for each flavor of input. I think
there's still room for improvement here: for example, the
`put_in_reg_{s,z}ext64` helpers don't use this logic, so technically
they could also pattern-match the cases where atomic loads and vector
extractions already perform the extension for us. That's a possible
future improvement (and shouldn't be too hard with some ISLE
refactoring).
* aarch64: Migrate {s,u}{div,rem} to ISLE
This commit migrates four different instructions at once to ISLE:
* `sdiv`
* `udiv`
* `srem`
* `urem`
These all share similar codegen and center around the `div` instruction
used internally. The main feature of these was to model the manual
traps since the `div` instruction doesn't trap on overflow, instead
requiring manual checks to adhere to the semantics of the instruction
itself.
While I was here I went ahead and implemented an optimization for these
instructions when the right-hand-side is a constant with a known value.
For `udiv`, `srem`, and `urem` if the right-hand-side is a nonzero
constant then the checks for traps can be skipped entirely. For `sdiv`
if the constant is not 0 and not -1 then additionally all checks can be
elided. Finally, if the right-hand-side of `sdiv` is -1, the zero-check is
elided, but it still needs a check for `i64::MIN` on the left-hand-side
and currently there's a TODO where `-1` is still checked too.
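A hedged sketch of the constant-divisor logic for `sdiv` (illustrative,
not the actual ISLE rules; `udiv`/`srem`/`urem` simply skip their checks
for any nonzero constant):

    // Which trap checks remain for `sdiv` given a constant right-hand side?
    fn sdiv_checks_for_const_rhs(c: i64) -> (bool, bool) {
        // (needs_zero_check, needs_i64_min_overflow_check)
        match c {
            0 => (true, true),   // keeps the checks (and will trap)
            -1 => (false, true), // zero-check elided; i64::MIN check remains
            _ => (false, false), // nonzero and not -1: all checks elided
        }
    }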
* Rebasing and review conflicts
This register is not initialized, but we protect against its being used
by never allowing an iflags/fflags-typed value to be used with
`put_value_in_regs`. All `iflags`/`fflags` usages should be handled by
pattern-matching: e.g., `trapif` explicitly matches an `iadd_ifcout`
input.
Eventually (#3249) we need to simplify this by removing
iflags/fflags-typed values and using bool flags instead,
pattern-matching to get the same efficient lowerings as today. For now,
this allows the ISLE assertions to pass.
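A hedged sketch of the guard, with stand-in types:

    // Flags-typed values must never be materialized into a register; they
    // are only consumed by pattern-matching (e.g. `trapif` matching an
    // `iadd_ifcout` input).
    #[derive(PartialEq)]
    enum Type { I64, IFlags, FFlags }

    fn put_value_in_regs(ty: Type) {
        assert!(
            ty != Type::IFlags && ty != Type::FFlags,
            "flags-typed values must be consumed by pattern-matching"
        );
        // ... normal path: materialize the value into registers ...
    }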
This starts moving over some sign/zero-extend helpers also present in
lowering in Rust. Otherwise this is a relatively unsurprising transition
with the various cases of the instructions mapping well to ISLE
utilities.
This commit migrates the `imul` clif instruction lowering for AArch64 to
ISLE. This is a relatively complicated instruction with lots of special
cases due to the simd proposal for wasm. Like x64, however, the special
casing lends itself to ISLE quite well and the lowerings here in theory
are pretty straightforward.
The main gotcha of this commit is that this encounters a unique
situation which hasn't been encountered yet with other lowerings, namely
the `Umlal32` instruction used in the implementation of `i64x2.mul` is
unique in the `VecRRRLongOp` class of instructions in that it both reads
and writes the destination register (`use_mod` instead of simply
`use_def`). This meant that I needed to add another helper in ISLE for
creating a `vec_rrrr_long` instruction (despite this enum variant not
actually existing) which implicitly moves the first operand into the
destination before issuing the actual `VecRRRLong` instruction.
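A hedged sketch of that helper, with stand-in types:

    #[derive(Clone, Copy)]
    struct Reg(u8);

    enum Inst {
        FpuMove128 { dst: Reg, src: Reg },
        VecRRRLong { op: &'static str, dst: Reg, rn: Reg, rm: Reg },
    }

    // Model the read-modify-write (`use_mod`) `Umlal32` by initializing the
    // destination with the accumulator before the actual instruction.
    fn vec_rrrr_long(insts: &mut Vec<Inst>, dst: Reg, acc: Reg, rn: Reg, rm: Reg) {
        insts.push(Inst::FpuMove128 { dst, src: acc });
        insts.push(Inst::VecRRRLong { op: "umlal32", dst, rn, rm });
    }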
This commit is the first "meaty" instruction added to ISLE for the
AArch64 backend. I chose to pick the first two in the current lowering's
`match` statement, `isub` and `iadd`. These two turned out to be
particularly interesting for a few reasons:
* Both had clearly migratable-to-ISLE behavior along the lines of
special-casing per type. For example 128-bit and vector arithmetic
were both easily translatable.
* The `iadd` instruction has special cases for fusing with a
multiplication to generate `madd`, which is expressed pretty easily in
ISLE (see the sketch after this list).
* Otherwise both instructions had a number of forms where they attempted
to interpret the RHS as various forms of constants, extends, or
shifts. There's a bit of a design space of how best to represent this
in ISLE and what I settled on was to have a special case for each form
of instruction, and the special cases are somewhat duplicated between
`iadd` and `isub`. There are custom "extractors" for the special cases
and instructions that support these special cases will have an
`rule`-per-case.
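A hedged sketch of the `madd` fusion shape (illustrative, not the actual
extractors):

    #[derive(Clone, Copy)]
    enum Operand {
        Reg(u32),
        Mul(u32, u32), // this operand is produced by an `imul` of two regs
    }

    // madd rd, rn, rm, ra computes ra + rn * rm, so iadd(x, imul(y, z))
    // (with the multiply on either side) becomes a single instruction.
    fn lower_iadd(lhs: Operand, rhs: Operand) -> String {
        match (lhs, rhs) {
            (Operand::Reg(a), Operand::Mul(b, c))
            | (Operand::Mul(b, c), Operand::Reg(a)) => {
                format!("madd rd, x{b}, x{c}, x{a}")
            }
            (Operand::Reg(a), Operand::Reg(b)) => format!("add rd, x{a}, x{b}"),
            (Operand::Mul(..), Operand::Mul(..)) => {
                unimplemented!("pick one side to fuse")
            }
        }
    }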
Overall I think the transition to ISLE went pretty well. I don't think that
the aarch64 backend is going to follow the x64 backend super closely,
though. For example the x64 backend has a helper-per-instruction at the
moment, but with AArch64 it seems to make more sense to only have
a helper-per-enum-variant-of-`MInst`. This is because the same
instruction (e.g. `ALUOp::Sub32`) can be expressed with multiple
different forms depending on the payload.
It's worth noting that the ISLE looks like it's a good deal larger than
the code actually being removed from lowering as part of this commit. I
think this is deceptive though because a lot of the logic in
`put_input_in_rse_imm12_maybe_negated` and `alu_inst_imm12` is being
inlined into the ISLE definitions for each instruction instead of having
it all packed into the helper functions. Some of the "boilerplate" here
is the addition of various ISLE utilities as well.
This also fixes a bug where `movsd` was incorrectly used with a memory
operand for `insertlane`, causing it to actually zero the upper bits
instead of preserving them.
Note that the insertlane logic still exists in `lower.rs` because it's
used as a helper for a few other instruction lowerings which aren't
migrated to ISLE yet. This commit also adds a helper in ISLE itself for
those other lowerings to use when they get implemented.
Closes #3216
* aarch64: Initial work to transition backend to ISLE
This commit is what is hoped to be the initial commit towards migrating
the aarch64 backend to ISLE. There's seemingly a lot of changes here but
it's intended to largely be code motion. The current thinking is to
closely follow the x64 backend for how all this is handled and
organized.
Major changes in this PR are:
* The `Inst` enum is now defined in ISLE. This avoids having to define
it in two places (once in Rust and once in ISLE). I've preserved all
the comments in the ISLE and otherwise this isn't actually a
functional change from the Rust perspective, it's still the same enum
according to Rust.
* Lots of little enums and things were moved to ISLE as well. As with
`Inst` their definitions didn't change, only where they're defined.
This will give future ISLE PRs access to all these operations.
* Initial code for lowering `iconst`, `null`, and `bconst` is
implemented. Ironically none of this is actually used right now
because constant lowering is handled in `put_input_in_regs` which
specially handles constants. Nonetheless I wanted to get at least
something simple working which shows off how to special case various
things that are specific to AArch64. In a future PR I plan to hook up
const-lowering in ISLE to this path so even though
`iconst`-the-clif-instruction is never lowered this should use the
const lowering defined in ISLE rather than elsewhere in the backend
(eventually leading to the deletion of the non-ISLE lowering).
* The `IsleContext` skeleton is created and set up for future additions.
* Some code for ISLE that's shared across all backends now lives in
`isle_prelude_methods!()` and is deduplicated between the AArch64
backend and the x64 backend.
* Register mapping is tweaked to do the same thing for AArch64 that it
does for x64. Namely, virtual-to-virtual register mapping is now
supported instead of just virtual-to-machine register mapping.
My main goal with this PR was to get AArch64 into a place where new
instructions can be added with relative ease. Additionally I'm hoping to
figure out as part of this change how much to share for ISLE between
AArch64 and x64 (and other backends).
* Don't use priorities with rules
* Update .gitattributes with concise syntax
* Deduplicate some type definitions
* Rebuild ISLE
* Move isa::isle to machinst::isle
This commit disables the `MachBuffer::check_label_branch_invariants`
debug check on the fuzzers due to it causing timeouts with the test case
from #3441. Fuzzing leads to a 20-30x slowdown of executed code, and
locally, disabling this function drops the time it takes to instantiate
#3441 from 3 minutes to 6 seconds. Note that this check should still be
executed during our testing on CI since it's still enabled for debug
assertions.
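A hedged sketch of the gating (illustrative helper name):

    // Run the expensive invariant check only in debug builds that are not
    // fuzzing; under the fuzzers' 20-30x slowdown it causes timeouts.
    fn maybe_check_label_branch_invariants() {
        if cfg!(debug_assertions) && !cfg!(fuzzing) {
            // ... walk pending fixups and verify the branch invariants ...
        }
    }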
There were a few previous code paths that attempted to handle this, but this new
check handles it for all callers.
Rematerializing constants, rather than assigning and reusing a register, allows
for lower register pressure.
Fixes #3468.
If a program has many instances of the pattern "goto next; next:" in a
row (i.e., no-op branches to the fallthrough address), the branch
simplification in `MachBuffer` would remove them all, as expected.
However, in order to work correctly, the algorithm needs to track all
labels that alias the current buffer tail, so that they can be adjusted
later if another branch chomp occurs.
When many thousands of this branch-to-next pattern occur, many thousands
of labels will reference the current buffer tail, and this list of
thousands of labels will be shuffled between the branch metadata struct
and the "labels at tail" struct as branches are appended and then
chomped immediately.
It's possible that with smarter data structure design, we could somehow
share the list of labels -- e.g., a single array of all labels, in order
they are bound, with ranges of indices in this array used to represent
lists of labels (actually, that seems like a better design in general);
but let's leave that to future optimization work.
For now, we can avoid the quadratic behavior by just "giving up" if the
list is too long; it's always valid to not optimize a branch. It is very
unlikely that the "normal" case will have more than 100 "goto next"
branches in a row, so this should not have any perf impact; if it does,
we will leave 1 out of every 100 such branches un-optimized in a long
sequence of thousands.
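A hedged sketch of the guard (threshold and names illustrative):

    const MAX_LABELS_AT_TAIL: usize = 100;

    // Give up on chomping this branch if too many labels alias the buffer
    // tail; leaving a branch unoptimized is always valid.
    fn may_chomp_branch(labels_at_tail_count: usize) -> bool {
        labels_at_tail_count <= MAX_LABELS_AT_TAIL
    }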
This takes total compilation time down on my machine from ~300ms to
~72ms for the `foo.wasm` case in #3441. For reference, the old backend
(now removed), built from arbitrarily-chosen-1-year-old commit
`c7fcc344`, takes 158ms, so we're ~twice as fast, which is what I would
expect.
On the build side, this commit introduces two things:
1. The automatic generation of various ISLE definitions for working with
CLIF. Specifically, it generates extern type definitions for clif opcodes and
the clif instruction data `enum`, as well as extractors for matching each clif
instruction. This happens inside the `cranelift-codegen-meta` crate.
2. The compilation of ISLE DSL sources to Rust code that can be included in the
main `cranelift-codegen` compilation.
Next, this commit introduces the integration glue code required to get
ISLE-generated Rust code hooked up in clif-to-x64 lowering. When lowering a clif
instruction, we first try to use the ISLE code path. If it succeeds, then we are
done lowering this instruction. If it fails, then we proceed along the existing
hand-written code path for lowering.
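A hedged sketch of that dispatch, with stand-in types (the real glue
differs):

    struct Ctx; // stand-in for the lowering context

    fn isle_lower(_ctx: &mut Ctx, _insn: u32) -> Option<()> {
        None // the ISLE-generated matcher returns Some(()) if a rule fired
    }

    fn hand_written_lower(_ctx: &mut Ctx, _insn: u32) {
        // the existing open-coded lowering path
    }

    fn lower(ctx: &mut Ctx, insn: u32) {
        // Try ISLE first; fall back to the hand-written path on failure.
        if isle_lower(ctx, insn).is_none() {
            hand_written_lower(ctx, insn);
        }
    }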
Finally, this commit ports many lowering rules over from hand-written,
open-coded Rust to ISLE.
In the process of supporting ISLE, this commit also makes the x64 `Inst` capable
of expressing SSA by supporting 3-operand forms for all of the existing
instructions that only have a 2-operand form encoding:
dst = src1 op src2
Rather than only the typical x86-64 2-operand form:
dst = dst op src
This allows `MachInst` to be in SSA form, since `dst` and `src1` are
disentangled.
("3-operand" and "2-operand" are a little bit of a misnomer since not all
operations are binary operations, but we do the same thing for, e.g., unary
operations by disentangling the sole operand from the result.)
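A hedged sketch of the 3-operand to 2-operand expansion, with stand-in
types: `dst = src1 op src2` becomes `mov dst, src1; op dst, src2`, and
the move is typically coalesced away by the register allocator when
`dst` and `src1` can share a register:

    #[derive(Clone, Copy)]
    struct Reg(u8);

    enum X64Inst {
        MovRR { dst: Reg, src: Reg }, // dst = src
        AddRR { dst: Reg, src: Reg }, // dst = dst + src (2-operand encoding)
    }

    fn lower_add3(dst: Reg, src1: Reg, src2: Reg) -> Vec<X64Inst> {
        vec![
            X64Inst::MovRR { dst, src: src1 }, // disentangle dst from src1
            X64Inst::AddRR { dst, src: src2 }, // dst = src1 + src2
        ]
    }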
There are two motivations for this change:
1. To allow ISLE lowering code to have value-equivalence semantics. We want ISLE
lowering to translate a CLIF expression that evaluates to some value into a
`MachInst` expression that evaluates to the same value. We want both the
lowering itself and the resulting `MachInst` to be pure and referentially
transparent. This is both a nice paradigm for compiler writers that are
authoring and maintaining lowering rules and is a prerequisite to any sort of
formal verification of our lowering rules in the future.
2. Better align `MachInst` with `regalloc2`'s API, which requires that the input
be in SSA form.
This also paves the way for unifying TargetIsa and MachBackend, since they now map one-to-one. In theory the two traits could be merged, which would be nice to limit the total number of concepts; on the other hand, they have quite different responsibilities, so it might be fine to keep them separate.
Interestingly, this PR started as removing RegInfo from the TargetIsa trait, since the adapter returned a dummy value there. From the fallout, I noticed that the Display implementations didn't need an ISA anymore (since these were only used to render ISA-specific registers). Also, the whole family of RegInfo / ValueLoc / RegUnit was used exclusively by the old backend, so these could be removed. Notably, some IR instructions had to be removed because they used RegUnit too: these were the oddballs regfill / regmove / regspill / copy_special, IR instructions inserted by the old regalloc. Fare thee well!
- Add relocation handling needed after PR #3275
- Fix incorrect handling of signed constants detected by PR #3056 test
- Fix LabelUse max pos/neg ranges; fix overflow in buffers.rs
- Disable fuzzing tests that require pre-built v8 binaries
- Disable cranelift test that depends on i128
- Temporarily disable memory64 tests
* Use relative `call` instructions between wasm functions
This commit is a relatively major change to the way that Wasmtime
generates code for Wasm modules and how functions call each other.
Prior to this commit all function calls between functions, even if they
were defined in the same module, were done indirectly through a
register. To implement this the backend would emit an absolute 8-byte
relocation near all function calls, load that address into a register,
and then call it. While this technique is simple to implement and easy
to get right, it has two primary downsides associated with it:
* Function calls are always indirect which means they are more difficult
to predict, resulting in worse performance.
* Generating a relocation per function call requires expensive
  relocation resolution at module-load time, which can be a large
  contributing factor in how long it takes to load a precompiled module.
To fix these issues, while also somewhat compromising on the previously
simple implementation technique, this commit switches wasm calls within
a module to use the `colocated` flag in Cranelift-speak, which
basically means that a relative call instruction is used with a
relocation that's resolved relative to the pc of the call instruction
itself.
When switching the `colocated` flag to `true` this commit is also then
able to move much of the relocation resolution from `wasmtime_jit::link`
into `wasmtime_cranelift::obj` during object-construction time. This
frontloads all relocation work which means that there's actually no
relocations related to function calls in the final image, solving both
of our points above.
The main gotcha in implementing this technique is that there are
hardware limitations to relative function calls which mean we can't
simply blindly use them. AArch64, for example, can only go +/- 128 MB
from the `bl` instruction to the target, which means that if the
function we're calling is farther away than that, we would fail to
resolve the relocation. On x86_64 the limits are +/- 2 GB, which are much
larger, but theoretically still feasible to hit. Consequently the main
increase in implementation complexity is fixing this issue.
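A hedged sketch of the reachability question (illustrative; the real
logic lives in the `LabelUse` range handling):

    // Can a relative `bl` at `call_offset` reach `target_offset`, or is a
    // veneer/island required? AArch64 `bl` takes a signed 26-bit word
    // offset, i.e. +/- 128 MB in bytes.
    fn bl_target_reachable(call_offset: i64, target_offset: i64) -> bool {
        const RANGE: i64 = 1 << 27;
        let delta = target_offset - call_offset;
        delta % 4 == 0 && delta >= -RANGE && delta < RANGE
    }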
This issue is actually already present in Cranelift itself, and is
internally one of the invariants handled by the `MachBuffer` type. When
generating a function, relative jumps between basic blocks have similar
restrictions. This commit adds new methods for the `MachBackend` trait
and updates the implementation of `MachBuffer` to account for all these
new branches. Specifically the changes to `MachBuffer` are:
* For AArch64 the `LabelUse::Branch26` value now supports veneers, and
AArch64 calls use this to resolve relocations.
* The `emit_island` function has been rewritten internally to handle
some cases which previously didn't come up before, such as:
* When emitting an island the deadline is now recalculated, where
previously it was always set infinitely far in the future. This was ok
prior since only a `Branch19` supported veneers and once it was
promoted no veneers were supported, so without multiple layers of
promotion the lack of a new deadline was ok.
* When emitting an island all pending fixups had veneers forced if
their branch target wasn't known yet. This was generally ok for
19-bit fixups since the only kind getting a veneer was a 19-bit
fixup, but with mixed kinds it's a bit odd to force veneers for a
26-bit fixup just because a nearby 19-bit fixup needed a veneer.
Instead fixups are now re-enqueued unless they're known to be
out-of-bounds. This may run the risk of generating more islands for
19-bit branches but it should also reduce the number of islands for
between-function calls.
* Otherwise the internal logic was tweaked to ideally be a bit simpler,
  but that's a pretty subjective criterion in compilers...
I've added some simple testing of this for now. A synthetic compiler
option was created to simply add padding zeroes between functions, and test
cases implement various forms of calls that at least need veneers. A
test is also included for x86_64, but it is unfortunately pretty slow
because it requires generating 2GB of output. I'm hoping for now it's
not too bad, but we can disable the test if it's prohibitive and
otherwise just comment the necessary portions to be sure to run the
ignored test if these parts of the code have changed.
The final end-result of this commit is that for a large module I'm
working with the number of relocations dropped to zero, meaning that
nothing actually needs to be done to the text section when it's loaded
into memory (yay!). I haven't run final benchmarks yet but this is the
last remaining source of significant slowdown when loading modules,
after I land a number of other PRs both active and ones that I only have
locally for now.
* Fix arm32
* Review comments
* Cranelift AArch64: Simplify leaf functions that do not use the stack
Leaf functions that do not use the stack (e.g. do not clobber any
callee-saved registers) do not need a frame record.
Copyright (c) 2021, Arm Limited.
* Replace some cfg(debug) with cfg(debug_assertions)
Neither Cargo nor rustc ever sets `cfg(debug)` automatically, so it's expected
that these usages were intended to be `cfg(debug_assertions)`.
* Fix MachBuffer debug-assertion invariant checks.
We should only check invariants when we expect them to be true --
specifically, before the branch-simplification algorithm runs. At other
times, they may be temporarily violated: e.g., after
`add_{cond,uncond}_branch()` but before emitting the branch bytes. This
is the expected sequence, and the rest of the code is consistent with
that.
Some of the checks also were not quite right (w.r.t. the written
invariants); specifically, we should not check validity of a label's
offset when the label has been aliased to another label.
It seems that this is an unfortunate consequence of leftover
debug-assertions that weren't actually being run, so weren't kept
up-to-date. This should no longer happen now that we actually run these checks!
Co-authored-by: Chris Fallin <chris@cfallin.org>
Cranelift crates have historically been much more verbose with debug-level
logging than most other crates in the Rust ecosystem. We log things like how
many parameters a basic block has, the color of virtual registers during
regalloc, etc. Even for Cranelift hackers, these things are largely only useful
when hacking specifically on Cranelift and looking at a particular test case,
not even when using some Cranelift embedding (such as Wasmtime).
Most of the time, when people want logging for their Rust programs, they do
something like:
RUST_LOG=debug cargo run
This means that they get all that mostly not useful debug logging out of
Cranelift. So they might want to disable logging for Cranelift, or change it to
a higher log level:
RUST_LOG=debug,cranelift=info cargo run
The problem is that this is already more annoying to type than `RUST_LOG=debug`,
and that Cranelift isn't one single crate, so you actually have to play
whack-a-mole with naming all the Cranelift crates off the top of your head,
something more like this:
RUST_LOG=debug,cranelift=info,cranelift_codegen=info,cranelift_wasm=info,...
Therefore, we're changing most of the `debug!` logs into `trace!` logs: anything
that is very Cranelift-internal, unlikely to be useful/meaningful to the
"average" Cranelift embedder, or prints a message for each instruction visited
during a pass. On the other hand, things that just report a one line statistic
for a whole pass, for example, are left as `debug!`. The more verbose the log
messages are, the higher the bar they must clear to be `debug!` rather than
`trace!`.
The `truncate_last_branch` function removes an instruction that has
already been added to the buffer, and must update various bookkeeping.
However, the update to the "srclocs" field was incorrect: if a srclocs
entry spans both the removed branch *and some previous instruction*,
the whole entry was removed, leaving those previous instructions
uncovered by any srclocs record. This can cause subsequent problems,
e.g. if one of those instructions traps.
Fixed by just truncating instead of fully removing the srclocs
record in this case.
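A hedged sketch of the fix, with illustrative fields:

    struct SrcLocEntry { start: u32, end: u32 }

    // Truncate the srcloc records to `new_end` (the buffer tail after the
    // branch is removed).
    fn truncate_srclocs(srclocs: &mut Vec<SrcLocEntry>, new_end: u32) {
        // Drop records that lie entirely within the removed range...
        while srclocs.last().map_or(false, |e| e.start >= new_end) {
            srclocs.pop();
        }
        // ...but a record spanning the cut is truncated, not removed, so
        // earlier instructions keep their source-location coverage.
        if let Some(last) = srclocs.last_mut() {
            if last.end > new_end {
                last.end = new_end;
            }
        }
    }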
The previous choice to use the WasmtimeSystemV calling convention for
apple-aarch64 devices was incorrect: padding of arguments was
incorrectly computed. So we have to use some flavor of the apple-aarch64
ABI there.
Since we want to support the wasmtime custom convention for multiple
returns on apple-aarch64 too, a new custom Wasmtime calling convention
was introduced to support this.
This is sometimes useful when performing analyses on the generated
machine code: for example, some kinds of code verifiers will want to do
a control-flow analysis, and it is much easier to do this if one does
not have to recover the CFG from the machine code (doing so requires
heavyweight analysis when indirect branches are involved). If one trusts
the control-flow lowering and only needs to verify other properties of
the code, this can be very useful.