Commit Graph

337 Commits

Author SHA1 Message Date
Chris Fallin
e2b37a57dc Merge pull request #3639 from bjorn3/machinst_cleanups
Various cleanups around machinst
2022-01-05 10:01:27 -08:00
Chris Fallin
833ebeed76 Fix spillslot size bug in SIMD by removing type-dependent spillslot allocation.
This patch makes spillslot allocation, spilling and reloading all based
on register class only. Hence when we have a 32- or 64-bit value in a
128-bit XMM register on x86-64 or vector register on aarch64, this
results in larger spillslots and spills/restores.

Why make this change, if it results in less efficient stack-frame usage?
Simply put, it is safer: there is always a risk when allocating
spillslots or spilling/reloading that we get the wrong type and make the
spillslot or the store/load too small. This was one contributing factor
to CVE-2021-32629, and is now the source of a fuzzbug in SIMD code that
puns an arbitrary user-controlled vector constant over another
stackslot. (If this were a pointer, that could result in RCE. SIMD is
not yet on by default in a release, fortunately.)

In particular, we have not been particularly careful about using moves
between values of different types, for example with `raw_bitcast` or
with certain SIMD operations, and such moves indicate to regalloc.rs
that vregs are in equivalence classes and some arbitrary vreg in the
class is provided when allocating the spillslot or spilling/reloading.
Since regalloc.rs does not track actual type, and since we haven't been
careful about moves, we can't really trust this "arbitrary vreg in
equivalence class" to provide accurate type information.

In the fix to CVE-2021-32629 we fixed this for integer registers by
always spilling/reloading 64 bits; this fix can be seen as the analogous
change for FP/vector regs.
2022-01-04 13:24:40 -08:00
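A minimal Rust sketch of the policy described above (`RegClass` and the byte sizes are illustrative stand-ins, not the actual Cranelift types): the spillslot size is derived from the register class alone, never from the value's type.
```
#[derive(Clone, Copy)]
enum RegClass {
    Int,   // general-purpose registers
    Float, // FP/vector registers
}

/// Spillslot size in bytes, based only on register class.
fn spillslot_size(class: RegClass) -> u32 {
    match class {
        RegClass::Int => 8,
        // Always reserve the full vector width, even for a 32- or 64-bit
        // value, so a store/load can never be too small for the vreg.
        RegClass::Float => 16,
    }
}

fn main() {
    // A 32-bit float spilled from a vector register still gets a 16-byte slot.
    assert_eq!(spillslot_size(RegClass::Float), 16);
    assert_eq!(spillslot_size(RegClass::Int), 8);
}
```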
bjorn3
17c3c1813f Remove MachInstEmitInfo 2022-01-04 18:06:01 +01:00
bjorn3
8d1fc75b6b Make MachBackend::triple return &Triple
This avoids an unnecessary clone
2022-01-04 18:06:01 +01:00
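A minimal sketch of the signature change, with `Triple` and `Backend` as hypothetical stand-ins for `target_lexicon::Triple` and a `MachBackend` implementation:
```
#[derive(Clone, Debug, PartialEq)]
struct Triple(String);

struct Backend {
    triple: Triple,
}

impl Backend {
    // Before: `fn triple(&self) -> Triple { self.triple.clone() }`
    // After: returning a reference avoids cloning on every call.
    fn triple(&self) -> &Triple {
        &self.triple
    }
}

fn main() {
    let backend = Backend {
        triple: Triple("aarch64-unknown-linux-gnu".to_string()),
    };
    assert_eq!(backend.triple().0, "aarch64-unknown-linux-gnu");
}
```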
Alex Crichton
546e901d32 aarch64: Use smaller instruction helpers in ISLE (#3618)
* aarch64: Use smaller instruction helpers in ISLE

This commit moves the aarch64 backend's ISLE to be more similar to the
x64 backend's ISLE where one-liner instruction builders are used for
various forms of instructions instead of always using the
constructor-per-variant-of-`Inst`. Overall I think this change worked
out quite well and sets up some naming idioms as well for various forms
of instructions.

* rebase conflict
2021-12-17 17:28:52 -06:00
Chris Fallin
5233175b06 Use SipHasher rather than SHA-512 for ISLE manifest.
Fixes #3609. It turns out that `sha2` is a nontrivial dependency for
Cranelift in many contexts, partly because it pulls in a number of other
crates as well.

One option is to remove the hash check under certain circumstances, as
implemented in #3616. However, this is undesirable for other reasons:
having different dependency options in Wasmtime for crates.io vs. local
builds is not really possible, so either we keep the higher build cost
in Wasmtime, or we turn off the checks
by default, which goes against the original intent of ensuring developer
safety (no mysterious stale-source bugs).

This PR uses `SipHasher` instead, which is built into the standard
library. `SipHasher` is deprecated, but it's fixed and deterministic
(across runs and across Rust versions), which is what we need, unlike
the suggested replacement `std::collections::hash_map::DefaultHasher`.
The result is only 64 bits, and is not cryptographically secure, but we
never needed that; we just need a simple check to indicate when we
forget a `rebuild-isle`.
2021-12-17 12:11:05 -08:00
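A rough sketch of the approach, assuming the manifest simply records a 64-bit SipHash of the ISLE source text; the ISLE snippet hashed below is made up, and this is not the actual manifest code.
```
#![allow(deprecated)] // `SipHasher` is deprecated but deterministic across Rust versions.

use std::hash::{Hasher, SipHasher};

/// Hash ISLE source text down to the 64-bit value stored in the manifest.
fn source_hash(contents: &str) -> u64 {
    let mut hasher = SipHasher::new();
    hasher.write(contents.as_bytes());
    hasher.finish()
}

fn main() {
    let recorded = source_hash("(decl example (Inst) InstOutput)\n");
    let current = source_hash("(decl example (Inst) InstOutput)\n");
    // A mismatch would mean the generated Rust is stale: rerun `rebuild-isle`.
    assert_eq!(recorded, current);
}
```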
Alex Crichton
e94ebc2263 aarch64: Translate rot{r,l} to ISLE (#3614)
This commit translates the `rotl` and `rotr` lowerings already existing
to ISLE. The port was relatively straightforward, with the biggest
change being the instructions generated around i128 rotl/rotr,
primarily due to register changes.
2021-12-17 12:37:17 -06:00
Alex Crichton
d8974ce6bc aarch64: Migrate ishl/ushr/sshr to ISLE (#3608)
* aarch64: Migrate ishl/ushr/sshr to ISLE

This commit migrates the `ishl`, `ushr`, and `sshr` instructions to
ISLE. These involve special cases for almost all types of integers
(including vectors) and helper functions for the i128 lowerings since
the i128 lowerings look to be used for other instructions as well. This
doesn't delete the i128 lowerings in the Rust code just yet because
they're still used by Rust lowerings, but they should be deletable in
due time once those lowerings are translated to ISLE.

* Use more descriptive names for i128 lowerings

* Use a with_flags-lookalike for csel

* Use existing `with_flags_*`

* Comment backwards order

* Update generated code
2021-12-16 17:37:53 -06:00
Chris Fallin
1323ae417e Fix some 16- and 8-bit behavior in x64 backend related to rotates.
Uncovered by @bjorn3 (thanks!): 8- and 16-bit rotates were not working
properly in recent versions of Cranelift with part of the lowering
migrated to ISLE.

This PR fixes a few issues:

- 8- and 16-bit rotate-left needs to mask a constant amount, if any,
  because we use a 32-bit rotate instruction and so don't get the
  appropriate shift-amount masking for free from x86 semantics.

- `operand_size_from_type` was incorrect: it only handled 32- and 64-bit
  types and silently returned `OperandSize::Size32` for everything else.
  Now uses the `OperandSize::from_ty(ty)` helper as the pre-ISLE code
  did.

Our test coverage for narrow value types is not great; this PR adds some
runtests for rotl/rotr but more would always be better!
2021-12-16 11:34:24 -08:00
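A plain-Rust sketch of why the masking matters (illustrative, not the backend lowering): an 8-bit rotate built on a wider rotate or shift has to mask the amount modulo 8 itself, because the hardware only masks modulo the wider width.
```
/// Rotate an 8-bit value left using wider operations, the way a 32-bit
/// rotate-based lowering effectively does.
fn rotl8_via_wider_ops(x: u8, amt: u32) -> u8 {
    // Mask to the 8-bit width first; a 32-bit rotate instruction only masks
    // the amount modulo 32, which is wrong for an 8-bit value.
    let amt = amt % 8;
    let wide = x as u32;
    (((wide << amt) | (wide >> ((8 - amt) % 8))) & 0xff) as u8
}

fn main() {
    // Rotating an 8-bit value by 9 must behave like rotating by 1.
    assert_eq!(rotl8_via_wider_ops(0b1000_0001, 9), 0b0000_0011);
    assert_eq!(rotl8_via_wider_ops(0b1000_0001, 9), 0b1000_0001u8.rotate_left(9));
}
```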
Alex Crichton
4236319a53 aarch64: Migrate some bit-ops to ISLE (#3602)
* aarch64: Migrate some bit-ops to ISLE

This commit migrates these instructions to ISLE:

* `bnot`
* `band`
* `bor`
* `bxor`
* `band_not`
* `bor_not`
* `bxor_not`

The translations were relatively straightforward but the interesting
part here was trying to reduce the duplication between all these
instructions. I opted for a route that's similar to what the lowering
does today, having a `decl` which takes the `ALUOp` and then performs
further pattern matching internally. This enabled each instruction's
lowering to be pretty simple while we still get to handle all the fancy
cases of shifts, constants, etc, for each instruction.

* Actually delete previous lowerings

* Remove dead code
2021-12-15 10:41:36 -06:00
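A loose Rust analogy of that structure (all names here are invented for illustration; the real code is ISLE plus MInst builders): one shared helper takes the ALU opcode so that each instruction's lowering stays a thin wrapper.
```
#[derive(Clone, Copy)]
enum AluOp {
    And,
    Orr,
    Eor,
}

/// Shared helper: takes the opcode and handles the common work in one place.
fn alu_rrr(op: AluOp, a: u64, b: u64) -> u64 {
    match op {
        AluOp::And => a & b,
        AluOp::Orr => a | b,
        AluOp::Eor => a ^ b,
    }
}

// Each instruction's "lowering" is then a thin wrapper over the shared helper.
fn lower_band(a: u64, b: u64) -> u64 { alu_rrr(AluOp::And, a, b) }
fn lower_bor(a: u64, b: u64) -> u64 { alu_rrr(AluOp::Orr, a, b) }
fn lower_bxor(a: u64, b: u64) -> u64 { alu_rrr(AluOp::Eor, a, b) }

fn main() {
    assert_eq!(lower_band(0b1100, 0b1010), 0b1000);
    assert_eq!(lower_bor(0b1100, 0b1010), 0b1110);
    assert_eq!(lower_bxor(0b1100, 0b1010), 0b0110);
}
```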
Alex Crichton
d89410ec4e aarch64: Migrate uextend/sextend to ISLE
This commit migrates the sign/zero extension instructions from
`lower_inst.rs` to ISLE. There's actually a fair amount going on in this
migration since a few other pieces needed touching up along the way as
well:

* First is the actual migration of `uextend` and `sextend`. These
  instructions are relatively simple but end up having a number of special
  cases. I've attempted to replicate all the cases here but
  double-checks would be good.

* This commit also fixes a few issues where sign/zero-extending the
  result of a vector extraction into i128 would panic in the current
  backend.

* This commit adds exhaustive testing that an extension of a vector
  extraction is a no-op with respect to the extraction.

* A bugfix around ISLE glue was required to get this commit working,
  notably the case where the `RegMapper` implementation was trying to
  map an input to an output (meaning ISLE was passing through an input
  unmodified to the output) wasn't working. This requires a `mov`
  instruction to be generated and this commit updates the glue to do
  this. At the same time this commit updates the ISLE glue to share more
  infrastructure between x64 and aarch64 so both backends get this fix
  instead of just aarch64.

Overall I think that the translation to ISLE was a net benefit for these
instructions. It's relatively obvious what all the cases are now unlike
before where it took a few reads of the code and some boolean switches
to figure out which path was taken for each flavor of input. I think
there are still possible improvements here: for example, the
`put_in_reg_{s,z}ext64` helpers don't use this logic, so technically
they could also pattern match the "atomic loads and vector extractions
already do this for us" cases. That's a possible future improvement,
and it shouldn't be too hard with some ISLE refactoring.
2021-12-14 07:01:37 -08:00
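For reference, a tiny Rust sketch of the extension semantics involved (plain Rust casts, not backend or ISLE code): `uextend` pads with zero bits while `sextend` replicates the sign bit.
```
fn main() {
    let x: i8 = -1; // 0xFF as raw bits
    // uextend: pad with zero bits.
    assert_eq!(x as u8 as u32, 0x0000_00FF);
    // sextend: replicate the sign bit.
    assert_eq!(x as i32 as u32, 0xFFFF_FFFF);
}
```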
Alex Crichton
20e090b114 aarch64: Migrate {s,u}{div,rem} to ISLE (#3572)
* aarch64: Migrate {s,u}{div,rem} to ISLE

This commit migrates four different instructions at once to ISLE:

* `sdiv`
* `udiv`
* `srem`
* `urem`

These all share similar codegen and center around the `div` instruction
to use internally. The main feature of these was to model the manual
traps since the `div` instruction doesn't trap on overflow, instead
requiring manual checks to adhere to the semantics of the instruction
itself.

While I was here I went ahead and implemented an optimization for these
instructions when the right-hand-side is a constant with a known value.
For `udiv`, `srem`, and `urem` if the right-hand-side is a nonzero
constant then the checks for traps can be skipped entirely. For `sdiv`
if the constant is not 0 and not -1 then additionally all checks can be
elided. Finally if the right-hand-side of `sdiv` is -1 the zero-check is
elided, but it still needs a check for `i64::MIN` on the left-hand-side
and currently there's a TODO where `-1` is still checked too.

* Rebasing and review conflicts
2021-12-13 17:27:11 -06:00
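A hedged Rust sketch of the checks being modeled (names invented; the real lowering emits compares and trap instructions rather than returning a `Result`). AArch64's `sdiv` never traps on its own, so the lowering has to materialize these checks to match the CLIF semantics.
```
#[derive(Debug, PartialEq)]
enum Trap {
    IntegerDivisionByZero,
    IntegerOverflow,
}

/// The checks a general (non-constant-divisor) `sdiv` lowering must model.
fn sdiv_checked(lhs: i64, rhs: i64) -> Result<i64, Trap> {
    if rhs == 0 {
        return Err(Trap::IntegerDivisionByZero);
    }
    // i64::MIN / -1 overflows and must trap per the CLIF instruction.
    if rhs == -1 && lhs == i64::MIN {
        return Err(Trap::IntegerOverflow);
    }
    Ok(lhs / rhs)
}

fn main() {
    assert_eq!(sdiv_checked(10, 3), Ok(3));
    assert_eq!(sdiv_checked(1, 0), Err(Trap::IntegerDivisionByZero));
    assert_eq!(sdiv_checked(i64::MIN, -1), Err(Trap::IntegerOverflow));
    // With a constant divisor that is neither 0 nor -1, both checks are
    // statically known to pass and can be skipped entirely.
}
```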
Andrew Brown
86611d3bbc isle: expand enums in ISLE (#3586)
* x64: expand FloatCC enum in ISLE
* isle: regenerate manifests
* isle: generate all enum fields in `clif.isle`

This expands the `gen_isle` function to write all of the immediate
`enum`s out explicitly in `clif.isle`. Non-`enum` immediates are still
`extern primitive`.

* Only compile `enum_values` with `rebuild-isle` feature
* Only compile `gen_enum_isle` with `rebuild-isle` feature
2021-12-12 18:31:42 -08:00
Andrew Brown
acaa84068d aarch64: assert that temporary and destination registers match during renaming 2021-12-08 11:59:11 -08:00
Alex Crichton
25b380d5fc aarch64: Migrate {s,u}mulhi to ISLE
This starts moving over some sign/zero-extend helpers also present in
lowering in Rust. Otherwise this is a relatively unsurprising transition
with the various cases of the instructions mapping well to ISLE
utilities.
2021-11-29 18:11:42 -08:00
Alex Crichton
33dba07e6b aarch64: Migrate imul to ISLE
This commit migrates the `imul` clif instruction lowering for AArch64 to
ISLE. This is a relatively complicated instruction with lots of special
cases due to the simd proposal for wasm. Like x64, however, the special
casing lends itself to ISLE quite well and the lowerings here in theory
are pretty straightforward.

The main gotcha of this commit is that it runs into a situation not yet
encountered with other lowerings: the `Umlal32` instruction used in the
implementation of `i64x2.mul` is unique in the `VecRRRLongOp` class of
instructions in that it both reads and writes the destination register
(`use_mod` instead of simply `use_def`). This meant that I needed to add
another helper in ISLE for creating a `vec_rrrr_long` instruction
(despite this enum variant not actually existing) which implicitly moves
the first operand into the destination before issuing the actual
`VecRRRLong` instruction.
2021-11-29 16:05:57 -08:00
Alex Crichton
fa63e7de5a aarch64: Migrate ineg to ISLE
Needed a new `vec_misc` instruction construction helper but otherwise a
pretty straightforward translation.
2021-11-29 08:03:17 -08:00
Chris Fallin
bc0de464bc Update aarch64 backend's ISLE code to be rule-ordering-independent.
In [this
comment](https://github.com/bytecodealliance/wasmtime/pull/3545#discussion_r756284757)
I noted a potential subtle issue with the way that a few rules were
written that is fine now but could cause some unexpected pain when we
get around to verification.

Specifically, a set of rules of the form

```
    (rule (A (B _)) (C))
    (rule (A _) (D))
```

should, under any reasonable "default" rule ordering scheme, fire the
more specific rule `(A (B _))` when applicable, in preference to the
second "fallback" rule.

However, for future verification-specific applications of ISLE, we want
to ensure the property that a rule's meaning/validity is not dependent
on being overridden by more specific rules. In other words, if a rule
specifies a rewrite, that rewrite should always be correct; and choosing
a more specific rule can give a *better* compilation (better generated
code) but should not be necessary for correctness.

This is an admittedly under-documented part of the language, though in the
pending #3560 I added a note about rule ordering being a heuristic that
should hopefully make this slightly clearer. Ultimately I want to have
tests that choose non-default rule orderings and differentially fuzz in
order to be sure that we're following this principle; and of course once
we're actually doing verification, we'll catch issues like this upfront.

Apologies for the subtle footgun here and hopefully the reasoning is
clear enough :-)
2021-11-24 11:08:47 -08:00
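A loose Rust analogy of the principle (not ISLE; names invented): the fallback arm must be correct on its own, and the more specific arm should only improve the generated code, never be required for correctness.
```
/// Stand-in for a lowering with a specific rule and a fallback rule; both
/// arms must compute the same result.
fn lower_mul_by_const(x: i32, k: i32) -> i32 {
    match k {
        // More specific "rule": a power-of-two multiplier can use a shift.
        // Deleting this arm must not change the result, only code quality.
        k if k > 0 && (k as u32).is_power_of_two() => x << (k as u32).trailing_zeros(),
        // Fallback "rule": always correct on its own.
        _ => x.wrapping_mul(k),
    }
}

fn main() {
    assert_eq!(lower_mul_by_const(7, 8), 56);
    assert_eq!(lower_mul_by_const(7, 3), 21);
}
```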
Alex Crichton
ef8ea644f4 aarch64: Migrate {s,u}{sub,add}_sat to ISLE (#3551)
These were pretty straightforward! Only needed a single `rule` per
instruction with a new 128-bit vector type matcher.
2021-11-19 12:59:06 -06:00
Alex Crichton
cbf539abb8 Use with_flags for 128-bit arith in aarch64
Also move the `with_flags` bits and pieces to `prelude.isle` so it can
be shared between backends if necessary.
2021-11-19 07:08:08 -08:00
Alex Crichton
98ce029bbd Add commutative addition cases 2021-11-19 07:00:04 -08:00
Alex Crichton
7412461b7b Spelling 2021-11-19 06:53:31 -08:00
Alex Crichton
15d2542939 Rebuild ISLE 2021-11-19 06:52:01 -08:00
Alex Crichton
7d0f6ab90f aarch64: Migrate iadd and isub to ISLE
This commit is the first "meaty" set of instructions added to ISLE for
the AArch64 backend. I chose to pick the first two in the current lowering's
`match` statement, `isub` and `iadd`. These two turned out to be
particularly interesting for a few reasons:

* Both had clearly migratable-to-ISLE behavior along the lines of
  special-casing per type. For example 128-bit and vector arithmetic
  were both easily translatable.

* The `iadd` instruction has special cases for fusing with a
  multiplication to generate `madd` which is expressed pretty easily in
  ISLE.

* Otherwise both instructions had a number of forms where they attempted
  to interpret the RHS as various forms of constants, extends, or
  shifts. There's a bit of a design space of how best to represent this
  in ISLE and what I settled on was to have a special case for each form
  of instruction, and the special cases are somewhat duplicated between
  `iadd` and `isub`. There's custom "extractors" for the special cases
  and instructions that support these special cases will have an
  `rule`-per-case.

Overall I think the transition to ISLE went pretty well. I don't think
that the aarch64 backend is going to follow the x64 backend super
closely, though. For example the x64 backend has a helper-per-instruction
at the moment but with AArch64 it seems to make more sense to only have
a helper-per-enum-variant-of-`MInst`. This is because the same
instruction (e.g. `ALUOp::Sub32`) can be expressed with multiple
different forms depending on the payload.

It's worth noting that the ISLE looks like it's a good deal larger than
the code actually being removed from lowering as part of this commit. I
think this is deceptive though because a lot of the logic in
`put_input_in_rse_imm12_maybe_negated` and `alu_inst_imm12` is being
inlined into the ISLE definitions for each instruction instead of having
it all packed into the helper functions. Some of the "boilerplate" here
is the addition of various ISLE utilities as well.
2021-11-19 06:51:38 -08:00
Alex Crichton
352ee2b186 Move insertlane to ISLE (#3544)
This also fixes a bug where `movsd` was incorrectly used with a memory
operand for `insertlane`, causing it to actually zero the upper bits
instead of preserving them.

Note that the insertlane logic still exists in `lower.rs` because it's
used as a helper for a few other instruction lowerings which aren't
migrated to ISLE yet. This commit also adds a helper in ISLE itself for
those other lowerings to use when they get implemented.

Closes #3216
2021-11-18 13:48:11 -06:00
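A scalar Rust sketch of the semantics at stake (illustrative, not the x64 lowering): `insertlane` must preserve every lane other than the one being written, which is what the memory-operand `movsd` form got wrong by zeroing the upper lane.
```
/// Scalar model of `insertlane` on an f64x2 value.
fn insertlane_f64x2(mut vec: [f64; 2], lane: usize, value: f64) -> [f64; 2] {
    // Only the selected lane changes; every other lane must be preserved.
    vec[lane] = value;
    vec
}

fn main() {
    let v = [1.0, 2.0];
    // Lane 1 must keep its value rather than being zeroed.
    assert_eq!(insertlane_f64x2(v, 0, 9.0), [9.0, 2.0]);
}
```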
Alex Crichton
1141169ff8 aarch64: Initial work to transition backend to ISLE (#3541)
* aarch64: Initial work to transition backend to ISLE

This commit is what is hoped to be the initial commit towards migrating
the aarch64 backend to ISLE. There are seemingly a lot of changes here, but
it's intended to largely be code motion. The current thinking is to
closely follow the x64 backend for how all this is handled and
organized.

Major changes in this PR are:

* The `Inst` enum is now defined in ISLE. This avoids having to define
  it in two places (once in Rust and once in ISLE). I've preserved all
  the comments in the ISLE and otherwise this isn't actually a
  functional change from the Rust perspective, it's still the same enum
  according to Rust.

* Lots of little enums and things were moved to ISLE as well. As with
  `Inst` their definitions didn't change, only where they're defined.
  This will give future ISLE PRs access to all these operations.

* Initial code for lowering `iconst`, `null`, and `bconst` is
  implemented. Ironically none of this is actually used right now
  because constant lowering is handled specially in
  `put_input_in_regs`. Nonetheless I wanted to get at least
  something simple working which shows off how to special case various
  things that are specific to AArch64. In a future PR I plan to hook up
  const-lowering in ISLE to this path so even though
  `iconst`-the-clif-instruction is never lowered this should use the
  const lowering defined in ISLE rather than elsewhere in the backend
  (eventually leading to the deletion of the non-ISLE lowering).

* The `IsleContext` skeleton is created and set up for future additions.

* Some code for ISLE that's shared across all backends now lives in
  `isle_prelude_methods!()` and is deduplicated between the AArch64
  backend and the x64 backend.

* Register mapping is tweaked to do the same thing for AArch64 that it
  does for x64. Namely, mapping virtual registers to virtual registers
  is supported instead of just virtual-to-machine registers.

My main goal with this PR was to get AArch64 into a place where new
instructions can be added with relative ease. Additionally I'm hoping to
figure out as part of this change how much to share for ISLE between
AArch64 and x64 (and other backends).

* Don't use priorities with rules

* Update .gitattributes with concise syntax

* Deduplicate some type definitions

* Rebuild ISLE

* Move isa::isle to machinst::isle
2021-11-18 10:38:16 -06:00
Alex Crichton
f787ce433d aarch64: Remove manual sign extension in lowering (#3538)
Currently the lowering for `iconst` will sign-extend the payload value
of the `iconst` instruction itself, but the payload is already
sign-extended, so this isn't necessary. This commit removes the redundant
sign extension.
2021-11-16 16:47:12 -06:00
Chris Fallin
5e96a447f0 Add back the ifcmp_sp CLIF opcode.
This opcode was removed as part of the old-backend cleanup in #3446.
While this opcode will definitely go away eventually, it is
unfortunately still used today in Lucet (as we just discovered while
working to upgrade Lucet's pinned Cranelift version). Lucet is
deprecated and slated to eventually be completely sunset in favor of
Wasmtime; but until that happens, we need to keep this opcode.
2021-11-01 13:34:31 -07:00
bjorn3
a05bf2bf42 Remove instructions necessary for the old regalloc 2021-10-12 14:37:36 +02:00
bjorn3
1fd491dadd Remove fallthrough instruction 2021-10-12 14:22:07 +02:00
bjorn3
5b24e117ee Remove instructions used by old br_table legalization 2021-10-12 14:18:52 +02:00
bjorn3
8a8797b911 Remove the sarg_t type and dummy_sarg_t instruction
They are no longer necessary with the new-style backends
2021-10-10 14:38:35 +02:00
Benjamin Bouvier
43a86f14d5 Remove more old backend ISA concepts (#3402)
This also paves the way for unifying TargetIsa and MachBackend, since now they map one to one. In theory the two traits could be merged, which would be nice in order to limit the total number of concepts. That said, they have quite different responsibilities, so it might be fine to keep them separate.

Interestingly, this PR started as removing RegInfo from the TargetIsa trait, since the adapter returned a dummy value there. From the fallout, I noticed that the Display implementations didn't need an ISA anymore (since it was only used to render ISA-specific registers). Also, the whole family of RegInfo / ValueLoc / RegUnit was used exclusively by the old backend, so these could be removed. Notably, some IR instructions needed to be removed because they used RegUnit too: this was the oddball group of regfill / regmove / regspill / copy_special, which were IR instructions inserted by the old regalloc. Fare thee well!
2021-10-04 10:36:12 +02:00
bjorn3
9e34df33b9 Remove the old x86 backend 2021-09-29 16:13:46 +02:00
Chris Fallin
344a219245 Merge pull request #3383 from akirilov-arm/vany_true
Cranelift AArch64: Fix the VanyTrue implementation for 64-bit elements
2021-09-24 09:26:36 -07:00
Anton Kirilov
0fb3acfb94 Cranelift AArch64: Fix the VanyTrue implementation for 64-bit elements
Copyright (c) 2021, Arm Limited.
2021-09-23 20:39:46 +01:00
Anton Kirilov
930b1f17f0 Cranelift AArch64: Implement scalar FmaxPseudo and FminPseudo
Copyright (c) 2021, Arm Limited.
2021-09-23 15:11:01 +01:00
Chris Fallin
3474965ca6 Merge pull request #3322 from sparker-arm/aarch64-lse-ops
AArch64 LSE atomic_rmw support
2021-09-22 09:21:28 -07:00
Anton Kirilov
a8aec2e0e6 Cranelift AArch64: Avoid invalid encodings for some vector instructions
Copyright (c) 2021, Arm Limited.
2021-09-16 12:26:58 +01:00
Sam Parker
7da76f0601 cargo fmt 2021-09-15 16:01:51 +01:00
Sam Parker
80d596b055 AArch64 LSE atomic_rmw support
Rename the existing AtomicRMW to AtomicRMWLoop, and directly lower
atomic_rmw operations without a loop when LSE support is available.

Copyright (c) 2021, Arm Limited
2021-09-15 16:01:51 +01:00
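An illustrative sketch using Rust's standard atomics rather than backend instructions: the loop below models what `AtomicRMWLoop` does with exclusive load/stores, while on LSE-capable hardware a single instruction such as `ldaddal` can implement the whole RMW.
```
use std::sync::atomic::{AtomicU64, Ordering};

/// Loop form (what `AtomicRMWLoop` models): retry until the store wins.
fn rmw_add_loop(atom: &AtomicU64, value: u64) -> u64 {
    let mut old = atom.load(Ordering::Relaxed);
    loop {
        match atom.compare_exchange_weak(
            old,
            old.wrapping_add(value),
            Ordering::AcqRel,
            Ordering::Relaxed,
        ) {
            Ok(prev) => return prev,
            Err(actual) => old = actual,
        }
    }
}

fn main() {
    let atom = AtomicU64::new(5);
    // On LSE-capable hardware this typically compiles to a single `ldaddal`.
    assert_eq!(atom.fetch_add(3, Ordering::AcqRel), 5);
    assert_eq!(rmw_add_loop(&atom, 2), 8);
    assert_eq!(atom.load(Ordering::Relaxed), 10);
}
```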
Anton Kirilov
8805e25042 Cranelift AArch64: Improve the type checks for IR operations
There were cases where the AArch64 backend assumed that an IR
operation would always operate on certain types (the most likely
reason being that the corresponding WebAssembly instruction did
not cover anything else), even though the definition of the IR
operation imposed no constraints like that.

Copyright (c) 2021, Arm Limited.
2021-09-13 14:46:45 +01:00
Benjamin Bouvier
85ec11acb9 Aarch64: always generate the CFA directive indicating no pointer signing 2021-09-02 09:16:34 +02:00
Alex Crichton
1532516a36 Use relative call instructions between wasm functions (#3275)
* Use relative `call` instructions between wasm functions

This commit is a relatively major change to the way that Wasmtime
generates code for Wasm modules and how functions call each other.
Prior to this commit all function calls between functions, even if they
were defined in the same module, were done indirectly through a
register. To implement this the backend would emit an absolute 8-byte
relocation near all function calls, load that address into a register,
and then call it. While this technique is simple to implement and easy
to get right, it has two primary downsides associated with it:

* Function calls are always indirect which means they are more difficult
  to predict, resulting in worse performance.

* Generating a relocation-per-function call requires expensive
  relocation resolution at module-load time, which can be a large
  contributing factor to how long it takes to load a precompiled module.

To fix these issues, while also somewhat compromising on the previously
simple implementation technique, this commit switches wasm calls within
a module to using the `colocated` flag enabled in Cranelift-speak, which
basically means that a relative call instruction is used with a
relocation that's resolved relative to the pc of the call instruction
itself.

When switching the `colocated` flag to `true` this commit is also then
able to move much of the relocation resolution from `wasmtime_jit::link`
into `wasmtime_cranelift::obj` during object-construction time. This
frontloads all relocation work which means that there's actually no
relocations related to function calls in the final image, solving both
of our points above.

The main gotcha in implementing this technique is that there are
hardware limitations to relative function calls which mean we can't
simply blindly use them. AArch64, for example, can only go +/- 128 MB
from the `bl` instruction to the target, which means that if the
function we're calling is a greater distance away then we would fail to
resolve that relocation. On x86_64 the limits are +/- 2GB which are much
larger, but theoretically still feasible to hit. Consequently the main
increase in implementation complexity is fixing this issue.

This issue is actually already present in Cranelift itself, and is
internally one of the invariants handled by the `MachBuffer` type. When
generating a function relative jumps between basic blocks have similar
restrictions. This commit adds new methods for the `MachBackend` trait
and updates the implementation of `MachBuffer` to account for all these
new branches. Specifically the changes to `MachBuffer` are:

* For AArch64 the `LabelUse::Branch26` value now supports veneers, and
  AArch64 calls use this to resolve relocations.

* The `emit_island` function has been rewritten internally to handle
  some cases which previously didn't come up before, such as:

  * When emitting an island the deadline is now recalculated, where
    previously it was always set infinitely far in the future. This was ok
    before since only a `Branch19` supported veneers, and once it was
    promoted no veneers were supported, so without multiple layers of
    promotion the lack of a new deadline was ok.

  * When emitting an island all pending fixups had veneers forced if
    their branch target wasn't known yet. This was generally ok for
    19-bit fixups since the only kind getting a veneer was a 19-bit
    fixup, but with mixed kinds it's a bit odd to force veneers for a
    26-bit fixup just because a nearby 19-bit fixup needed a veneer.
    Instead fixups are now re-enqueued unless they're known to be
    out-of-bounds. This may run the risk of generating more islands for
    19-bit branches but it should also reduce the number of islands for
    between-function calls.

  * Otherwise the internal logic was tweaked to ideally be a bit
    simpler, but that's a pretty subjective criterion in compilers...

I've added some simple testing of this for now. A synthetic compiler
option was created to simply add zero padding between functions, and test
cases implement various forms of calls that at least need veneers. A
test is also included for x86_64, but it is unfortunately pretty slow
because it requires generating 2GB of output. I'm hoping for now it's
not too bad, but we can disable the test if it's prohibitive and
otherwise just comment the necessary portions to be sure to run the
ignored test if these parts of the code have changed.

The end result of this commit is that for a large module I'm
working with the number of relocations dropped to zero, meaning that
nothing actually needs to be done to the text section when it's loaded
into memory (yay!). I haven't run final benchmarks yet but this is the
last remaining source of significant slowdown when loading modules,
after I land a number of other PRs both active and ones that I only have
locally for now.

* Fix arm32

* Review comments
2021-09-01 13:27:38 -05:00
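A simplified sketch of the range check behind veneer insertion (this is not the actual `MachBuffer` logic; the limit comes from the architectural `bl` encoding, and alignment plus island deadlines are ignored here).
```
/// Reach of an AArch64 `bl`: a signed 26-bit word offset, i.e. +/- 128 MiB.
const BL_RANGE: i64 = 1 << 27;

/// Whether a direct `bl` from `call_site` can reach `target`, or whether a
/// veneer (an out-of-line indirect jump) must be emitted instead.
fn needs_veneer(call_site: u64, target: u64) -> bool {
    let offset = target as i64 - call_site as i64;
    offset >= BL_RANGE || offset < -BL_RANGE
}

fn main() {
    assert!(!needs_veneer(0x1000, 0x2000)); // nearby: direct call is fine
    assert!(needs_veneer(0x1000, 0x1000 + (1 << 30))); // too far: veneer required
}
```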
Anton Kirilov
7b98be1bee Cranelift: Simplify leaf functions that do not use the stack (#2960)
* Cranelift AArch64: Simplify leaf functions that do not use the stack

Leaf functions that do not use the stack (e.g. do not clobber any
callee-saved registers) do not need a frame record.

Copyright (c) 2021, Arm Limited.
2021-08-27 12:12:37 +02:00
Anton Kirilov
a1b39276e1 Enable more CLIF tests on AArch64
The tests for the SIMD floating-point maximum and minimum operations
require particular care because the handling of the NaN values is
non-deterministic and may vary between platforms. There is no way to
match several NaN values in a test, so the solution is to extract the
non-deterministic test cases into a separate file that is subsequently
replicated for every backend under test, with adjustments made to the
expected results.

Copyright (c) 2021, Arm Limited.
2021-08-17 13:27:58 +01:00
Sam Parker
b6f6ac116a Revert IR changes
Along with the x64 and s390x changes. Now pattern matching the
uextend(atomic_load) in the aarch64 backend.
2021-08-05 09:35:32 +01:00
Sam Parker
cbb7229457 Re-implement atomic load and stores
The AArch64 support was a bit broken and was using Armv7 style
barriers, which aren't required with Armv8 acquire-release
load/stores.

The fallback CAS loops and RMW, for AArch64, have also been updated
to use acquire-release, exclusive, instructions which, again, remove
the need for barriers. The CAS loop has also been further optimised
by using the extending form of the cmp instruction.

Copyright (c) 2021, Arm Limited.
2021-08-05 09:08:08 +01:00
Sam Parker
3bc2f0c701 Enable simd_X_extadd_pairwise_X for AArch64
Lower to [u|s]addlp for AArch64.

Copyright (c) 2021, Arm Limited.
2021-08-03 10:25:09 +01:00
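A scalar Rust sketch of the operation being lowered (illustrative, not backend code): each output lane is the widened sum of two adjacent input lanes, which is what `saddlp`/`uaddlp` compute.
```
/// Scalar model of `extadd_pairwise` from i16x8 to i32x4.
fn extadd_pairwise_i16x8(lanes: [i16; 8]) -> [i32; 4] {
    let mut out = [0i32; 4];
    for i in 0..4 {
        // Widen two adjacent narrow lanes and sum them.
        out[i] = lanes[2 * i] as i32 + lanes[2 * i + 1] as i32;
    }
    out
}

fn main() {
    assert_eq!(extadd_pairwise_i16x8([1, 2, 3, 4, 5, 6, 7, 8]), [3, 7, 11, 15]);
}
```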
Johnnie Birch
e519fca61c Refactor and turn on lowering for extend-add-pairwise 2021-07-31 10:52:39 -07:00