Commit Graph

986 Commits

Benjamin Bouvier
6bf6612d96 cranelift x64: use the TZCNT instruction for Ctz when it's available; 2021-01-29 19:41:01 +01:00
Benjamin Bouvier
d3acd9a283 cranelift x64: use the LZCNT instruction for Clz when it's available; 2021-01-29 19:41:01 +01:00
Kasey Carrothers
c6c5fe48b6 Add i128.icmp run tests for the x64 backend. 2021-01-25 13:02:21 -08:00
Kasey Carrothers
c55c5e0506 Add additional tests for icmp-i128. Fixes #1136.
Tests added:
  * eq with nonzero values
  * gt with nonzero values
  * ge with nonzero values
2021-01-25 13:02:20 -08:00
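
For reference, the comparisons these tests exercise reduce to 64-bit operations on the two halves of an i128. A minimal Rust sketch of that reference semantics (an illustration only, not the backend's actual lowering code):

```
/// Reference semantics for i128 comparisons in terms of 64-bit halves:
/// equality compares both halves, and unsigned less-than compares the
/// high halves first, falling back to the low halves on a tie.
fn i128_eq(a: u128, b: u128) -> bool {
    let (a_lo, a_hi) = (a as u64, (a >> 64) as u64);
    let (b_lo, b_hi) = (b as u64, (b >> 64) as u64);
    a_lo == b_lo && a_hi == b_hi
}

fn i128_ult(a: u128, b: u128) -> bool {
    let (a_lo, a_hi) = (a as u64, (a >> 64) as u64);
    let (b_lo, b_hi) = (b as u64, (b >> 64) as u64);
    a_hi < b_hi || (a_hi == b_hi && a_lo < b_lo)
}

fn main() {
    assert!(i128_eq(0x1_0000_0000_0000_0002, 0x1_0000_0000_0000_0002));
    assert!(i128_ult(u64::MAX as u128, 1u128 << 64));
    assert!(!i128_ult(1u128 << 64, u64::MAX as u128));
}
```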
Anton Kirilov
043a8434d2 Cranelift AArch64: Improve the Popcnt implementation
Now the backend uses the CNT instruction, which results in a major
simplification.

Copyright (c) 2021, Arm Limited.
2021-01-19 16:49:47 +00:00
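
For context, AArch64's CNT instruction counts the set bits in each byte of a vector register, so a full population count only needs a per-byte count followed by a horizontal sum. A scalar Rust sketch of that byte-wise strategy (illustration only, not the backend's code):

```
/// Byte-wise population count: count the bits in each byte, then sum the
/// per-byte counts -- the strategy that CNT makes cheap in hardware.
fn popcnt_bytewise(x: u64) -> u32 {
    x.to_le_bytes().iter().map(|b| b.count_ones()).sum()
}

fn main() {
    assert_eq!(popcnt_bytewise(0), 0);
    assert_eq!(popcnt_bytewise(0xFF00_FF00_FF00_FF00), 32);
    assert_eq!(popcnt_bytewise(u64::MAX), 64);
}
```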
Chris Fallin
c7de8f5efb Merge pull request #2541 from cfallin/struct-arg-ret
x64 and aarch64: allow StructArgument and StructReturn args.
2021-01-17 23:50:19 -08:00
Chris Fallin
456561f431 x64 and aarch64: allow StructArgument and StructReturn args.
The StructReturn ABI is fairly simple at the codegen/isel level: we only
need to take care to return the sret pointer as one of the return values
if that wasn't specified in the initial function signature.

Struct arguments are a little more complex. A struct argument is stored
as a chunk of memory in the stack-args space. However, the CLIF
semantics are slightly special: on the caller side, the parameter passed
in is a pointer to an arbitrary memory block, and we must memcpy this
data to the on-stack struct-argument; and on the callee side, we provide
a pointer to the passed-in struct-argument as the CLIF block param
value.

This is necessary to support various ABIs other than Wasm, such as that
of Rust (with the cg_clif codegen backend).
2021-01-17 23:11:45 -08:00
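
A small Rust illustration of the StructArgument contract described above (hypothetical helper names, not Cranelift code): the caller copies its arbitrary buffer into the stack-argument area, and the callee only ever sees a pointer to that copy.

```
/// Caller side: memcpy the caller's data into the on-stack
/// struct-argument slot before the call.
fn caller_side(stack_arg_area: &mut [u8], src: &[u8]) {
    stack_arg_area[..src.len()].copy_from_slice(src);
}

/// Callee side: the block parameter is effectively a pointer to the copy
/// living in the caller's stack-argument space.
fn callee_side(struct_arg: &[u8]) -> u8 {
    struct_arg[0]
}

fn main() {
    let original = [1u8, 2, 3, 4];
    let mut stack_slot = [0u8; 16]; // stands in for the stack-args space
    caller_side(&mut stack_slot, &original);
    assert_eq!(callee_side(&stack_slot[..4]), 1);
}
```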
Chris Fallin
0f563f786a Add ELF TLS support in new x64 backend.
This follows the implementation in the legacy x86 backend, including a
hardcoded sequence that is compatible with what the linker expects. We
could potentially do better here, but it is likely not necessary.

Thanks to @bjorn3 for a bugfix to an earlier version of this.
2021-01-17 22:48:51 -08:00
Chris Fallin
71ead6e31d x64 backend: implement 128-bit ops and misc fixes.
This implements all of the ops on I128 that are implemented by the
legacy x86 backend, and includes all that are required by at least one
major use-case (cg_clif rustc backend).

The sequences are open-coded where necessary; for e.g. the bit
operations, this can be somewhat complex, but these sequences have been
tested carefully. This PR also includes a drive-by fix of clz/ctz for 8-
and 16-bit cases where they were incorrect previously.

Also includes ride-along fixes developed while bringing up cg_clif
support; they are difficult to separate out completely because of other
refactors that occurred in this PR:

- fix REX prefix logic for some 8-bit instructions.

  When using an 8-bit register in 64-bit mode on x86-64, the REX prefix
  semantics are somewhat subtle: without a REX prefix, register numbers
  4--7 correspond to the second-lowest byte of the first four registers
  (AH, CH, DH, BH), whereas with a REX prefix these register numbers
  correspond to the usual encoding (SPL, BPL, SIL, DIL). We could always
  emit a REX byte for instructions with 8-bit forms (this is harmless
  even if unneeded), but that would unnecessarily inflate code size;
  instead, the usual approach is to emit it only for these registers
  (see the sketch after this entry).

  This logic was present in some cases but missing for some other
  instructions: divide, not, negate, shifts.

  Fixes #2508.

- avoid unaligned SSE loads on some f64 ops.

  The implementations of several FP ops, such as fabs/fneg, used SSE
  instructions. This is not a problem per se, except that load-op merging
  did not take *alignment* into account. Specifically, if an op on an f64
  loaded from memory happened to merge that load, and the instruction into
  which it was merged was an SSE instruction, then the SSE instruction
  imposed stricter (128-bit) alignment requirements than the load.f64 did.

  This PR simply forces any instruction lowerings that could use SSE
  instructions to implement non-SIMD operations to take inputs in
  registers only, and avoid load-op merging.

  Fixes #2507.

- two bugfixes exposed by cg_clif: urem/srem.i8, select.b1.

  - urem/srem.i8: the 8-bit form of the DIV instruction on x86-64 places
    the remainder in AH, not RDX, different from all the other width-forms
    of this instruction.

  - select.b1: we were not recognizing selects of boolean values as
    integer-typed operations, so we were generating XMM moves instead (!).
2021-01-14 13:45:50 -08:00
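
A sketch of the REX rule mentioned in the first ride-along fix above (a hypothetical standalone helper, not the backend's actual emission code):

```
/// For an 8-bit operand, hardware register encodings 4-7 name AH/CH/DH/BH
/// when no REX prefix is present and SPL/BPL/SIL/DIL when any REX prefix
/// (even an "empty" 0x40) is present; encodings 8-15 need REX regardless.
fn rex_needed_for_8bit_reg(hw_enc: u8) -> bool {
    match hw_enc {
        0..=3 => false, // AL, CL, DL, BL: same meaning with or without REX
        4..=7 => true,  // want SPL/BPL/SIL/DIL, so emit an empty REX (0x40)
        8..=15 => true, // r8b..r15b: the REX.B/REX.R bit is needed anyway
        _ => unreachable!("x86-64 has 16 integer registers"),
    }
}

fn main() {
    assert!(!rex_needed_for_8bit_reg(0)); // AL
    assert!(rex_needed_for_8bit_reg(4));  // SPL, not AH
    assert!(rex_needed_for_8bit_reg(12)); // r12b
}
```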
Chris Fallin
4638de673c x64 bugfix: prevent load-op fusion of cmp because it could be emitted multiple times.
On x64, the new backend generates `cmp` instructions at their use-sites
when possible (when the icmp that generates a boolean is known) so that
the condition flows directly through flags rather than a materialized
boolean. E.g., both `bint` (boolean to int) and `select` (conditional
select) instruction lowerings invoke `emit_cmp()` to do so.

Load-op fusion in `emit_cmp()` nominally allowed `cmp` to use its `cmp
reg, mem` form.

However, the mergeable-load condition (load has only single use) was not
adequately checked. Consider the sequence:

```
    v2 = load.i64 v1
    v3 = icmp eq v0, v2
    v4 = bint.i64 v3
    v5 = select.i64 v3, v0, v1
```

The load `v2` is only used in the `icmp` at `v3`. However, the cmp will
be separately codegen'd twice, once for the `bint` and once for the
`select`.

Prior to this fix, the above example would result in the load at `v2`
sinking to the `cmp` just above the `select`; we then emit another `cmp`
for the `bint`, but the load has already been used once so we do not
allow merging. We thus (i) expect the register for `v2` to contain the
loaded value, but (ii) skip the codegen for the load because it has been
sunk. This results in a regalloc error (unexpected livein) as the
unfilled register is upward-exposed to the entry point.

Because of this, we need to accept only the reg, reg form in
`emit_cmp()` (and the FP equivalent). We could get marginally better
code by tracking whether the `cmp` we are emitting comes from an
`icmp`/`fcmp` with only one use; but IMHO simplicity is a better rule
here when subtle interactions occur.
2021-01-13 09:48:51 -08:00
Chris Fallin
b4426be072 machinst lowering: update inst color when scanning across branch to allow more load-op merging.
A branch is considered side-effecting and so updates the instruction
color (which is our way of computing how far instructions can sink).
However, in the lowering loop, we did not update the current instruction
color when scanning backward across branches, which are side-effecting.
As a result, the color was stale and fewer load-op merges were permitted
than are actually possible.

Note that this would not have resulted in any correctness issues, as the
stale color is too high (so no merges are permitted that should have
been disallowed).

Fixes #2562.
2021-01-11 11:20:44 -08:00
Andrew Brown
bb2dd5b68b [machinst x64]: implement load*_zero for x64 2021-01-08 16:21:57 -08:00
Nick Fitzgerald
5ad82de3c5 Bump Wasmtime to 0.22.0; Cranelift to 0.69.0 2021-01-07 14:51:12 -08:00
Nick Fitzgerald
6317290a1d Merge pull request #2548 from cfallin/fix-aarch64-sp
aarch64: fix reg/imm `sub` insts that read `SP`, not the zero register.
2021-01-05 16:38:25 -08:00
Chris Fallin
aac3751025 aarch64: fix reg/imm sub insts that read SP, not the zero register.
On AArch64, the zero register (xzr) and the stack pointer (xsp) share
the same machine-code index, `31`; which of the two is meant depends on
context. In particular, in the reg-reg-immediate ALU instruction form,
add/subtract will use the stack pointer, not the zero register, if index
31 is given for the first (register) source arg.

In a few places, we were emitting subtract instructions with the zero
register as an argument and a reg/immediate as the second argument. When
an immediate could be incorporated directly (we have the `iconst`
definition visible), this would result in incorrect code being
generated.

This issue was found in `ineg` and in the sequence for vector
right-shifts.

Reported by Ian Cullinan; thanks!
2021-01-05 15:48:07 -08:00
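
A tiny model of the encoding quirk described above (illustration only; the enum and helper are hypothetical): register index 31 names SP in the reg-imm add/sub form but XZR in the shifted-register form, so "subtract from the zero register" cannot be expressed with the immediate form.

```
#[derive(Debug, PartialEq)]
enum Reg31Meaning {
    StackPointer,
    ZeroRegister,
}

/// What register index 31 means in the Rn slot of an add/sub instruction.
fn reg31_in_rn(is_reg_imm_form: bool) -> Reg31Meaning {
    if is_reg_imm_form {
        Reg31Meaning::StackPointer // add/sub (immediate): Rn = 31 is SP
    } else {
        Reg31Meaning::ZeroRegister // add/sub (shifted register): Rn = 31 is XZR
    }
}

fn main() {
    assert_eq!(reg31_in_rn(true), Reg31Meaning::StackPointer);
    assert_eq!(reg31_in_rn(false), Reg31Meaning::ZeroRegister);
}
```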
Chris Fallin
dbd2241b60 x64: handle tests of b1 values correctly (only LSB is defined).
Previously, `select` and `brz`/`brnz` instructions, when given a `b1`
boolean argument, would test whether that boolean argument was nonzero,
rather than whether its LSB was nonzero. Since our invariant for mapping
CLIF state to machine state is that bits beyond the width of a value are
undefined, the proper lowering is to test only the LSB.

(aarch64 does not have the same issue because its `Extend` pseudoinst
already properly handles masking of b1 values when a zero-extend is
requested, as it is for select/brz/brnz.)

Found by Nathan Ringo on Zulip [1] (thanks!).

[1]
https://bytecodealliance.zulipchat.com/#narrow/stream/217117-cranelift/topic/bnot.20on.20b1s
2021-01-05 14:45:46 -08:00
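
A minimal Rust illustration of the invariant described above (not Cranelift code): only the least-significant bit of a b1 is defined, so a truthiness test must mask to the LSB rather than compare the whole register against zero.

```
/// Correct b1 test: look only at the least-significant bit.
fn b1_is_true(machine_word: u64) -> bool {
    machine_word & 1 != 0
}

fn main() {
    // Upper bits are undefined and may hold garbage; they must not
    // influence the result.
    assert!(!b1_is_true(0xABCD_EF00));          // LSB is 0 => false
    assert!(b1_is_true(0xFFFF_FFFF_0000_0001)); // LSB is 1 => true
}
```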
Yury Delendik
2964023a77 [SIMD][x86_64] Add encoding for PMADDWD (#2530)
* [SIMD][x86_64] Add encoding for PMADDWD

* also for "experimental_x64"
2020-12-24 07:52:50 -06:00
bjorn3
8f7f8ee0b4 Fix iconst.i8 0 miscompilation 2020-12-12 09:44:05 +01:00
Chris Fallin
267d4a8bdb Merge pull request #2490 from cfallin/fix-popcnt-load-width
x64 lowering fix: i32.popcnt should not merge load and make it 64-bit.
2020-12-08 22:28:41 -08:00
Y-Nak
855a6374dd Fix missing modification of jump table in licm 2020-12-09 11:13:33 +09:00
Chris Fallin
6632c45c01 x64 lowering fix: i32.popcnt should not merge load and make it 64-bit.
As a subtle consequence of the recent load-op fusion, a popcnt of a
value that came from a load.i32 was compiling into a 64-bit load. This
is a result of the way x86 determines the width of a load: the width
comes from the instruction containing the memory reference, not from the
memory reference itself. So the `input_to_reg_mem()` helper (which
converts an instruction input into a register or memory reference) was
providing the appropriate memory reference for the result of a load.i32,
but never encoded the assumption that it would only be used in a 32-bit
instruction. It turns out that popcnt.i32 uses a 64-bit instruction to
load this RM operand, hence widening a 32-bit load to a 64-bit load
(which is problematic when the offset is (memory_length - 4)).

Separately, popcnt was using the RM operand twice, resulting in two
loads if we merged a load. This isn't a correctness bug in practice
because only a racy sequence (store interleaving between the loads)
would produce incorrect results, but we decided earlier to treat loads
as effectful for now, neither reordering nor duplicating them, to
deliberately reduce complexity.

Because of the second issue, the fix is just to force the operand into a
register always, so any source load will not be merged.

Discovered via fuzzing with oss-fuzz.
2020-12-08 12:24:34 -08:00
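
A small Rust illustration of why the widened load is problematic (not Cranelift code): a 4-byte access at offset `memory_length - 4` is exactly in bounds, while an 8-byte access at the same offset overruns the region.

```
/// A load of `width` bytes at `offset` is in bounds only if the whole
/// access fits inside the accessible region.
fn load(mem: &[u8], offset: usize, width: usize) -> Option<&[u8]> {
    mem.get(offset..offset.checked_add(width)?)
}

fn main() {
    let mem = vec![0u8; 64];
    let last_word = mem.len() - 4;
    assert!(load(&mem, last_word, 4).is_some()); // 32-bit load: in bounds
    assert!(load(&mem, last_word, 8).is_none()); // widened 64-bit load: overruns
}
```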
Chris Fallin
2cec20aa57 Merge pull request #2486 from cfallin/fix-probestack
Two Lucet-related fixes to stack overflow handling.
2020-12-07 16:47:37 -08:00
Chris Fallin
3a01d14712 Two Lucet-related fixes to stack overflow handling.
Lucet uses stack probes rather than the explicit stack-limit checks that
Wasmtime uses. In bytecodealliance/lucet#616, I discovered that I had not
previously been running some Lucet runtime tests with the new backend, so
I was missing some test failures caused by missing pieces in the new
backend.

This PR adds (i) calls to probestack, when enabled, in the prologue of
every function with a stack frame larger than one page (configurable via
flags); and (ii) trap metadata for every instruction on x86-64 that can
access the stack and hence could be the first point at which a stack
overflow is detected when the stack pointer is decremented.
2020-12-07 16:08:53 -08:00
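
A rough sketch of the prologue decision described in point (i) above (hypothetical names and a 4 KiB default page size assumed; not the backend's actual code):

```
/// Default probe threshold: one page, overridable via a flag.
const DEFAULT_PROBE_THRESHOLD: u32 = 4096;

/// Emit a probestack call only for frames larger than the threshold.
fn prologue_needs_probestack(frame_size: u32, threshold_flag: Option<u32>) -> bool {
    frame_size > threshold_flag.unwrap_or(DEFAULT_PROBE_THRESHOLD)
}

fn main() {
    assert!(!prologue_needs_probestack(256, None));      // small leaf frame
    assert!(prologue_needs_probestack(64 * 1024, None)); // large frame: probe
}
```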
Chris Fallin
1dddba649a x64 regalloc register order: put caller-saves (volatiles) first.
The x64 backend currently builds the `RealRegUniverse` in a way that
generates somewhat suboptimal code. In many blocks, we see uses of
callee-save (non-volatile) registers (r12, r13, r14, rbx) first, even in
very short leaf functions where there are plenty of volatiles to use.
This leads to unnecessary spills/reloads.

On one (local) test program, a medium-sized C benchmark compiled to Wasm
and run on Wasmtime, I am seeing a ~10% performance improvement with
this change; it will be less pronounced in programs with high register
pressure (there we are likely to use all registers regardless, so the
prologue/epilogue will save/restore all callee-saves), or in programs
with fewer calls, but this is a clear win for small functions and in
many cases removes prologue/epilogue clobber-saves altogether.

Separately, I think the RA's coalescing is tripping up a bit in some
cases; see e.g. the filetest touched by this commit that loads a value
into %rsi then moves to %rax and returns immediately. This is an
orthogonal issue, though, and should be addressed (if worthwhile) in
regalloc.rs.
2020-12-06 22:37:43 -08:00
bjorn3
411ec3a857 Rename SimpleJIT to JIT as it isn't simple anymore 2020-12-04 13:21:13 -08:00
Julian Seward
8f34d2dc59 aarch64 isel: collect_address_addends: correctly handle ExtendOp::UXTW(negative immediate).
The current code doesn't correctly handle the case where `ExtendOp::UXTW`
has as its source a constant-producing insn that produces a negative
(32-bit) value. The value is then incorrectly sign-extended to 64 bits (in
fact, this has already been done by `ctx.get_constant(insn)`), whereas it
needs to be zero-extended. The obvious fix, done here, is just to force
bits 63:32 of the extension to zero, hence zero-extending it.
2020-12-04 19:21:40 +01:00
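
A Rust illustration of the fix described above (not the Cranelift code): for UXTW, a negative 32-bit constant must have bits 63:32 forced to zero rather than being sign-extended.

```
/// Correct UXTW of a 32-bit value: keep only the low 32 bits.
fn uxtw(value: i32) -> u64 {
    (value as u32) as u64
}

/// What the buggy path effectively produced for negative constants.
fn sext_w(value: i32) -> u64 {
    value as i64 as u64
}

fn main() {
    assert_eq!(uxtw(-1), 0x0000_0000_FFFF_FFFF);
    assert_eq!(sext_w(-1), 0xFFFF_FFFF_FFFF_FFFF);
    // Equivalently, the fix can be viewed as masking bits 63:32 to zero:
    assert_eq!(sext_w(-1) & 0xFFFF_FFFF, uxtw(-1));
}
```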
Chris Fallin
3e516e784b Fix lowering instruction-sinking (load-merging) bug.
This fixes a subtle corner case exposed during fuzzing. If we have a bit
of CLIF like:

```
    v0 = load.i64 ...
    v1 = iadd.i64 v0, ...
    v2 = do_other_thing v1
    v3 = load.i64 v1
```

and if this is lowered using a machine backend that can merge loads into
ALU ops, *and* that has an addressing mode that can look through add
ops, then the following can happen:

1. We lower the load at `v3`. This looks backward at the address
   operand tree and finds that `v1` is `v0` plus other things; it has an
   addressing mode that can add `v0`'s register and the other things
   directly; so it calls `put_value_in_reg(v0)` and uses its register in
   the amode. At this point, the add producing `v1` has no references,
   so it will not (yet) be codegen'd.
2. We lower `do_other_thing`, which puts `v1` in a register and uses it.
   The `iadd` now has a reference.
3. We reach the `iadd` and, because it has a reference, lower it. Our
   machine has the ability to merge a load into an ALU operation.
   Crucially, *we think the load at `v0` is mergeable* because it has
   only one user, the add at `v1` (!). So we merge it.
4. We reach the `load` at `v0` and because it has been merged into the
   `iadd`, we do not separately codegen it. The register that holds `v0`
   is thus never written, and the use of this register by the final load
   (Step 1) will see an undefined value.

The logic error here is that in the presence of pattern matching that
looks through pure ops, we can end up with multiple uses of a value that
originally had a single use (because we allow lookthrough of pure ops in
all cases). In other words, the multiple-use-ness of `v1` "passes
through" in some sense to `v0`. However, the load sinking logic is not
aware of this.

The fix, I think, is pretty simple: we disallow an effectful instruction
from sinking/merging if it already has some other use when we look back
at it.

If we disallowed lookthrough of *any* op that had multiple uses, even
pure ones, then we would avoid this scenario; but earlier experiments
showed that to have a non-negligible performance impact, so (given that
we've worked out the logic above) I think this complexity is worth it.
2020-12-03 14:59:12 -08:00
Anton Kirilov
f59b274d22 Cranelift AArch64: Further vector constant improvements
Introduce support for MOVI/MVNI with 16-, 32-, and 64-bit elements,
and the vector variant of FMOV.

Copyright (c) 2020, Arm Limited.
2020-12-03 15:30:24 +00:00
Chris Fallin
c9a81f008d x64 backend: fix condition-code used for part of explicit heap check.
A dynamic heap address computation may create up to two conditional
branches: the usual bounds-check, but also (in some cases) an
offset-addition overflow check.

The x64 backend had reversed the condition code for this check,
resulting in an always-trapping execution for a valid offset. I'm
somewhat surprised this has existed so long, but I suppose the
particular conditions (large offset, small offset guard, dynamic heap)
have been somewhat rare in our testing so far.

Found via fuzzing in #2453.
2020-12-02 10:40:53 -08:00
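
A sketch of the two checks a dynamic heap access can require, as described above (hypothetical names and layout; illustration only): an overflow check on the offset addition, then the bounds check whose condition code was reversed.

```
/// Returns true if an access of `access_size` bytes at `index + offset`
/// lies entirely within a heap of `heap_bound` bytes.
fn heap_access_in_bounds(index: u64, offset: u64, access_size: u64, heap_bound: u64) -> bool {
    // Overflow check: if computing the end address wraps, the access traps.
    let end = match index
        .checked_add(offset)
        .and_then(|v| v.checked_add(access_size))
    {
        Some(end) => end,
        None => return false,
    };
    // Bounds check: the whole access must lie inside the heap.
    end <= heap_bound
}

fn main() {
    assert!(heap_access_in_bounds(0, 8, 4, 64 * 1024));
    assert!(!heap_access_in_bounds(64 * 1024, 0, 4, 64 * 1024));
    assert!(!heap_access_in_bounds(u64::MAX, 16, 4, 64 * 1024)); // overflow path
}
```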
Chris Fallin
d413b907b4 Merge pull request #2414 from jgouly/extend-refactor
arm64: Refactor Inst::Extend handling
2020-11-25 17:22:07 -08:00
Chris Fallin
b97f07b405 x64 backend: merge loads into ALU ops when appropriate.
This PR makes use of the support in #2366 for sinking effectful
instructions and merging them with consumers. In particular, on x86, we
want to make use of the ability of many instructions to load one operand
directly from memory. That is, instead of this:

```
    movq 0(%rdi), %rax
    addq %rax, %rbx
```

we want to generate this:

```
    addq 0(%rdi), %rbx
```

As described in more detail in #2366, sinking and merging the load is
only possible under certain conditions. In particular, we need to ensure
that the use is the *only* use (otherwise the load happens more than
once), and we need to ensure that it does not move across other
effectful ops (see #2366 for how we ensure this).

This change is actually fairly simple, given that all the framework is
in place: we simply pattern-match a load on one operand of an ALU
instruction that takes an RMI (reg, mem, or immediate) operand, and
generate the mem form when we match.

Also makes a drive-by improvement in the x64 backend to use
statically-monomorphized `LowerCtx` types rather than a `&mut dyn
LowerCtx`.

On `bz2.wasm`, this results in ~1% instruction-count reduction. More is
likely possible by following up with other instructions that can merge
memory loads as well.
2020-11-17 11:06:46 -08:00
Chris Fallin
712ff22492 AArch64 SIMD: pattern-match load+splat into LD1R instruction. 2020-11-16 15:59:28 -08:00
Joey Gouly
70cbc4ca7c arm64: Refactor Inst::Extend handling
This refactors the handling of Inst::Extend and simplifies the lowering
of Bextend and Bmask, which allows the use of SBFX instructions for
extensions from 1-bit booleans. Other extensions use aliases of BFM,
and the code was changed to reflect that, rather than hard-coding bit
patterns. Also, ImmLogic is now implemented, so another hard-coded
instruction can be removed.

As part of looking at boolean handling, `normalize_boolean_result` was
changed to `materialize_boolean_result`, such that it can use either
CSET or CSETM. Using CSETM saves an instruction (previously CSET + SUB)
for booleans bigger than 1-bit.

Copyright (c) 2020, Arm Limited.
2020-11-13 16:17:25 +00:00
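
For context, CSET materializes a condition as 0/1 while CSETM materializes it as 0/all-ones, which is exactly the representation wanted for booleans wider than one bit. A Rust illustration of the two (not Cranelift code):

```
/// CSET: condition as 0 or 1.
fn cset(cond: bool) -> u64 {
    if cond { 1 } else { 0 }
}

/// CSETM: condition as 0 or all-ones, matching the representation of
/// wide boolean types and so avoiding a follow-up instruction.
fn csetm(cond: bool) -> u64 {
    if cond { u64::MAX } else { 0 }
}

fn main() {
    assert_eq!(cset(true), 1);
    assert_eq!(csetm(true), 0xFFFF_FFFF_FFFF_FFFF);
    assert_eq!(csetm(false), 0);
}
```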
Chris Fallin
113d061129 Merge pull request #2369 from akirilov-arm/move_fix
Cranelift AArch64: Various small fixes
2020-11-12 14:59:10 -08:00
Andrew Brown
bd93e69eb4 [machinst x64]: implement packed shifts 2020-11-12 14:21:45 -08:00
Chris Fallin
89dbc4590d Merge pull request #2363 from cfallin/extend-only-if-abi
Do value-extensions at ABI boundaries only when ABI requires it.
2020-11-12 12:26:20 -08:00
Chris Fallin
fd6433aaf5 Merge pull request #2395 from cfallin/lucet-x64-support
Add support for brff/brif and ifcmp_sp to new x64 backend to support Lucet.
2020-11-12 12:10:52 -08:00
Anton Kirilov
edaada3f57 Cranelift AArch64: Various small fixes
* Use FMOV to move 64-bit FP registers and SIMD vectors.
* Add support for additional vector load types.
* Fix the printing of Inst::LoadAddr.

Copyright (c) 2020, Arm Limited.
2020-11-12 13:54:05 +00:00
Chris Fallin
5df8840483 Add support for brff/brif and ifcmp_sp to new x64 backend to support Lucet.
`lucetc` currently *almost*, but not quite, works with the new x64
backend; the only missing piece is support for the particular
instructions emitted as part of its prologue stack-check.

We do not normally see `brff`, `brif`, or `ifcmp_sp` in CLIF generated by
`cranelift-wasm` without the old-backend legalization rules, so these
were not supported in the new x64 backend as they were not necessary for
Wasm MVP support. Using them resulted in an `unimplemented!()` panic.

This PR adds support for `brff` and `brif` analogously to how AArch64
implements them, by pattern-matching the `ifcmp` / `ffcmp` directly.
Then `ifcmp_sp` is a straightforward variant of `ifcmp`.

Along the way, this also removes the notion of "fallthrough block" from
the branch-group lowering method; instead, `fallthrough` instructions
are handled as normal branches to their explicitly-provided targets,
which (in the original CLIF) match the fallthrough block. The reason for
this is that the block reordering done as part of lowering can change
the fallthrough block. We were not using `fallthrough` instructions in
the output produced by `cranelift-wasm`, so this, too, was not
previously caught.

With these changes, the `lucetc` crate in Lucet passes all tests with
the `x64` feature-flag added to its `cranelift-codegen` dependency.
2020-11-11 13:43:39 -08:00
Chris Fallin
997b654235 Merge pull request #2393 from jgouly/constant-addend
arm64: Fold some constants into load instructions
2020-11-11 11:23:21 -08:00
Pat Hickey
aa259ff92a Merge pull request #2390 from bjorn3/more_simplejit_refactors
More SimpleJIT refactorings
2020-11-11 11:16:04 -08:00
Joey Gouly
a5011e8212 arm64: Fold some constants into load instructions
This changes the following:
  mov x0, #4
  ldr x0, [x1, x0]

Into:
  ldr x0, [x1, #4]

I noticed this pattern (but with #0) in a benchmark.

Copyright (c) 2020, Arm Limited.
2020-11-11 18:47:43 +00:00
Julian Seward
41e87a2f99 Support wasm select instruction with V128-typed operands on AArch64.
* This requires upgrading to wasmparser 0.67.0.

* There are no CLIF side changes because the CLIF `select` instruction is
  polymorphic enough.

* On aarch64, there is unfortunately no conditional-move (csel) instruction on
  vectors.  This patch adds a synthetic instruction `VecCSel` which *does*
  behave like that.  At emit time, this is emitted as an if-then-else diamond
  (4 insns).

* The aarch64 implementation is otherwise straightforward.
2020-11-11 18:45:24 +01:00
bjorn3
b7a93c2321 Remove reloc_block
It is never called, and all existing reloc sinks either ignore it or
would panic if it were called.
2020-11-11 12:36:17 +01:00
Andrew Brown
c9e8889d47 Update clippy annotation to use latest version (#2375) 2020-11-09 09:24:59 -06:00
Yury Delendik
f60c0f3ec3 cranelift: refactor unwind logic to accommodate multiple backends (#2357)
*    Make cranelift_codegen::isa::unwind::input public
*    Move UnwindCode's common offset field out of the structure
*    Make MachCompileResult::unwind_info more generic
*    Record initial stack pointer offset
2020-11-05 16:57:40 -06:00
Andrew Brown
83f182b390 Implement initial emission of constants
This approach suffers from some compile-time memory bloat, caused by de-duplicating the emitted constants in order to reduce runtime memory size. As a first step, though, it provides an end-to-end mechanism for constants to be emitted in the MachBuffer islands.
2020-11-05 14:25:02 -08:00
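
A rough Rust sketch of the de-duplication idea mentioned above (a hypothetical standalone structure, not the MachBuffer API): each distinct constant is stored once and every use refers to its offset, at the cost of keeping a map alive during compilation.

```
use std::collections::HashMap;

#[derive(Default)]
struct ConstantPool {
    data: Vec<u8>,                    // bytes emitted into the island
    offsets: HashMap<Vec<u8>, usize>, // constant -> offset, for de-duplication
}

impl ConstantPool {
    /// Return the offset of `bytes`, inserting it only if unseen.
    fn intern(&mut self, bytes: &[u8]) -> usize {
        if let Some(&off) = self.offsets.get(bytes) {
            return off;
        }
        let off = self.data.len();
        self.data.extend_from_slice(bytes);
        self.offsets.insert(bytes.to_vec(), off);
        off
    }
}

fn main() {
    let mut pool = ConstantPool::default();
    let a = pool.intern(&[0u8; 16]);
    let b = pool.intern(&[0u8; 16]); // duplicate: same offset, no new bytes
    assert_eq!(a, b);
    assert_eq!(pool.data.len(), 16);
}
```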
Chris Fallin
a2bbb198de Do value-extensions at ABI boundaries only when ABI requires it.
There has been some confusion over the meaning of the "sign-extend"
(`sext`) and "zero-extend" (`uext`) attributes on parameters and return
values in signatures. According to the three implemented backends, these
attributes indicate that a value narrower than a full register should
always be extended in the way specified. However, they are much more
useful if they mean "extend in this way if the ABI requires extending":
only the ABI backend knows whether or not a particular ABI (e.g., x64
SysV vs. x64 Baldrdash) requires extensions, while only the frontend
(CLIF generator) knows whether or not a value is signed, so the two have
to work in concert.

This is the result of some very helpful discussion in #2354 (thanks to
@uweigand for raising the issue and @bjorn3 for helping to reason about
it).

This change respects the extension attributes in the above way, rather
than unconditionally extending, to avoid potential performance
degradation as we introduce more extension attributes on signatures.
2020-11-05 11:54:35 -08:00
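
A small Rust sketch of the rule described above (hypothetical types and names, not Cranelift's ABI code): the extension attribute on a narrow value is honored only when the ABI in use actually requires narrow values to be extended.

```
#[derive(Clone, Copy)]
enum ExtAttr {
    None,
    Uext,
    Sext,
}

/// Decide which extension, if any, to apply at an ABI boundary.
fn extension_to_apply(attr: ExtAttr, value_bits: u8, abi_requires_ext: bool) -> Option<ExtAttr> {
    if !abi_requires_ext || value_bits >= 64 {
        return None; // leave the value as-is
    }
    match attr {
        ExtAttr::None => None,
        ext => Some(ext), // extend in the way the frontend asked for
    }
}

fn main() {
    assert!(extension_to_apply(ExtAttr::Uext, 8, true).is_some());
    assert!(extension_to_apply(ExtAttr::Uext, 8, false).is_none());
    assert!(extension_to_apply(ExtAttr::Sext, 64, true).is_none());
    assert!(extension_to_apply(ExtAttr::None, 8, true).is_none());
}
```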
Alex Crichton
e4c3fc5cf2 Update immediate and transitive dependencies
I don't think this has happened in a while, so I've run a `cargo update`
and also trimmed some of the duplicate/older dependencies in
`Cargo.lock` by updating some of our immediate dependencies.
2020-11-05 08:34:09 -08:00
Alex Crichton
ab1958434a Bump to 0.21.0 (#2359) 2020-11-05 09:39:53 -06:00