Commit Graph

319 Commits

Chris Fallin
5e96a447f0 Add back the ifcmp_sp CLIF opcode.
This opcode was removed as part of the old-backend cleanup in #3446.
While this opcode will definitely go away eventually, it is
unfortunately still used today in Lucet (as we just discovered while
working to upgrade Lucet's pinned Cranelift version). Lucet is
deprecated and slated to eventually be completely sunset in favor of
Wasmtime; but until that happens, we need to keep this opcode.
2021-11-01 13:34:31 -07:00
Nick Fitzgerald
e3a423c3e8 Remove some uses of foo.expect(&format!(...)) pattern
This eagerly evaluates the `format!` and produces a `String` with a heap
allocation, regardless of whether `foo` is `Some`/`Ok` or `None`/`Err`. Using
`foo.unwrap_or_else(|| panic!(...))` makes it so that the error message
formatting is only evaluated if `foo` is `None`/`Err`.
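As a sketch of the difference (illustrative code, not from this commit; the map and key are hypothetical):

    use std::collections::HashMap;

    fn describe(map: &HashMap<u32, String>, key: u32) -> &str {
        // Eager: `format!` runs and heap-allocates even when the entry exists:
        //     map.get(&key).expect(&format!("no entry for key {}", key))
        // Lazy: the message is only built on the `None` path.
        map.get(&key)
            .unwrap_or_else(|| panic!("no entry for key {}", key))
    }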
2021-10-25 17:35:23 -07:00
bjorn3
a05bf2bf42 Remove instructions necessary for the old regalloc 2021-10-12 14:37:36 +02:00
bjorn3
1fd491dadd Remove fallthrough instruction 2021-10-12 14:22:07 +02:00
bjorn3
5b24e117ee Remove instructions used by old br_table legalization 2021-10-12 14:18:52 +02:00
Pat Hickey
b7375817b1 Merge pull request #3431 from bjorn3/remove_sarg_t
Remove the sarg_t type and dummy_sarg_t instruction
2021-10-10 09:58:14 -07:00
bjorn3
8a8797b911 Remove the sarg_t type and dummy_sarg_t instruction
They are no longer necessary with the new-style backends.
2021-10-10 14:38:35 +02:00
bjorn3
2b89b13c57 Move condcodes from cranelift-codegen-shared to cranelift-codegen 2021-10-10 14:23:35 +02:00
Benjamin Bouvier
43a86f14d5 Remove more old backend ISA concepts (#3402)
This also paves the way for unifying TargetIsa and MachBackend, since they now map one-to-one. In theory the two traits could be merged, which would be nice to limit the total number of concepts. That said, they have quite different responsibilities, so it might be fine to keep them separate.

Interestingly, this PR started as removing RegInfo from the TargetIsa trait, since the adapter returned a dummy value there. From the fallout, I noticed that the Display implementations didn't need an ISA anymore (since it was only used to render ISA-specific registers). Also, the whole RegInfo / ValueLoc / RegUnit family was used exclusively by the old backend, so these could be removed. Notably, some IR instructions had to be removed because they used RegUnit too: these were the oddballs regfill / regmove / regspill / copy_special, IR instructions inserted by the old regalloc. Fare thee well!
2021-10-04 10:36:12 +02:00
bjorn3
9e34df33b9 Remove the old x86 backend 2021-09-29 16:13:46 +02:00
Alex Crichton
1ee2af0098 Remove the lightbeam backend (#3390)
This commit removes the Lightbeam backend from Wasmtime as per [RFC 14].
This backend hasn't received maintenance in quite some time, and as [RFC
14] indicates this doesn't meet the threshold for keeping the code
in-tree, so this commit removes it.

A fast "baseline" compiler may still be added in the future. The
addition of such a backend should be in line with [RFC 14], though, with
the principles we now have for stable releases of Wasmtime. I'll close
out Lightbeam-related issues once this is merged.

[RFC 14]: https://github.com/bytecodealliance/rfcs/pull/14
2021-09-27 12:27:19 -05:00
Afonso Bordado
f9ada24bcf cranelift: Fix br_table for i64 inputs
We still only support a maximum of u32::MAX entries; however, we no
longer crash when compiling 64-bit indexes.

Fixes #3100
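In plain-Rust terms the intended semantics look roughly like this (an illustrative helper, not Cranelift's lowering code):

    // Bounds-check the full 64-bit index before truncating it; an
    // out-of-range index falls through to the default target.
    fn br_table_target(index: u64, targets: &[usize], default: usize) -> usize {
        if index < targets.len() as u64 {
            targets[index as usize]
        } else {
            default
        }
    }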
2021-09-02 15:31:48 +01:00
Alex Crichton
1532516a36 Use relative call instructions between wasm functions (#3275)
* Use relative `call` instructions between wasm functions

This commit is a relatively major change to the way that Wasmtime
generates code for Wasm modules and how functions call each other.
Prior to this commit all function calls between functions, even if they
were defined in the same module, were done indirectly through a
register. To implement this the backend would emit an absolute 8-byte
relocation near all function calls, load that address into a register,
and then call it. While this technique is simple to implement and easy
to get right, it has two primary downsides associated with it:

* Function calls are always indirect, which means they are more difficult
  to predict, resulting in worse performance.

* Generating a relocation-per-function call requires expensive
  relocation resolution at module-load time, which can be a large
  contributing factor to how long it takes to load a precompiled module.

To fix these issues, while also somewhat compromising on the previously
simple implementation technique, this commit switches wasm calls within
a module to using the `colocated` flag enabled in Cranelift-speak, which
basically means that a relative call instruction is used with a
relocation that's resolved relative to the pc of the call instruction
itself.
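In embedder terms this corresponds to the `colocated` bit on Cranelift's external function references; a minimal sketch (namespace/index values hypothetical):

    use cranelift_codegen::ir::{ExtFuncData, ExternalName, FuncRef, Function, SigRef};

    fn import_colocated(func: &mut Function, sig: SigRef) -> FuncRef {
        func.import_function(ExtFuncData {
            name: ExternalName::user(0, 42), // hypothetical namespace/index
            signature: sig,
            // `colocated: true` promises the callee is laid out in the same
            // object, allowing a PC-relative call whose relocation is
            // resolved at object-construction time rather than at load time.
            colocated: true,
        })
    }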

When switching the `colocated` flag to `true` this commit is also then
able to move much of the relocation resolution from `wasmtime_jit::link`
into `wasmtime_cranelift::obj` during object-construction time. This
frontloads all relocation work which means that there's actually no
relocations related to function calls in the final image, solving both
of our points above.

The main gotcha in implementing this technique is that there are
hardware limitations to relative function calls which mean we can't
simply blindly use them. AArch64, for example, can only go +/- 128 MB
from the `bl` instruction to the target, which means that if the
function we're calling is a greater distance away then we would fail to
resolve that relocation. On x86_64 the limits are +/- 2GB which are much
larger, but theoretically still feasible to hit. Consequently the main
increase in implementation complexity is fixing this issue.

This issue is actually already present in Cranelift itself, and is
internally one of the invariants handled by the `MachBuffer` type. When
generating a function, relative jumps between basic blocks have similar
restrictions. This commit adds new methods for the `MachBackend` trait
and updates the implementation of `MachBuffer` to account for all these
new branches. Specifically the changes to `MachBuffer` are:

* For AArch64 the `LabelUse::Branch26` value now supports veneers, and
  AArch64 calls use this to resolve relocations.

* The `emit_island` function has been rewritten internally to handle
  some cases which previously didn't come up before, such as:

  * When emitting an island the deadline is now recalculated, where
    previously it was always set infinitely far in the future. This was
    ok before, since only a `Branch19` supported veneers and once it was
    promoted no veneers were supported, so without multiple layers of
    promotion the lack of a new deadline was ok.

  * When emitting an island all pending fixups had veneers forced if
    their branch target wasn't known yet. This was generally ok for
    19-bit fixups since the only kind getting a veneer was a 19-bit
    fixup, but with mixed kinds it's a bit odd to force veneers for a
    26-bit fixup just because a nearby 19-bit fixup needed a veneer.
    Instead fixups are now re-enqueued unless they're known to be
    out-of-bounds. This may run the risk of generating more islands for
    19-bit branches but it should also reduce the number of islands for
    between-function calls.

  * Otherwise the internal logic was tweaked to ideally be a bit
    simpler, but that's a pretty subjective criterion in compilers...

I've added some simple testing of this for now. A synthetic compiler
option was created to simply add padding zeroes between functions, and test
cases implement various forms of calls that at least need veneers. A
test is also included for x86_64, but it is unfortunately pretty slow
because it requires generating 2GB of output. I'm hoping for now it's
not too bad, but we can disable the test if it's prohibitive and
otherwise just comment the necessary portions to be sure to run the
ignored test if these parts of the code have changed.

The final end-result of this commit is that for a large module I'm
working with the number of relocations dropped to zero, meaning that
nothing actually needs to be done to the text section when it's loaded
into memory (yay!). I haven't run final benchmarks yet but this is the
last remaining source of significant slowdown when loading modules,
after I land a number of other PRs both active and ones that I only have
locally for now.

* Fix arm32

* Review comments
2021-09-01 13:27:38 -05:00
bjorn3
690ea640b3 Implement fmin_pseudo and fmax_pseudo for scalars 2021-08-27 16:59:47 +02:00
Anton Kirilov
7b98be1bee Cranelift: Simplify leaf functions that do not use the stack (#2960)
* Cranelift AArch64: Simplify leaf functions that do not use the stack

Leaf functions that do not use the stack (e.g. do not clobber any
callee-saved registers) do not need a frame record.

Copyright (c) 2021, Arm Limited.
2021-08-27 12:12:37 +02:00
Johnnie Birch
e3aae9e498 Refactor to avoid an overly strict assertion. Fix for 3160 and 3161.
The assertion was intended for the SIMD lowering of F64x2ConvertLowI32x4U.
2021-08-24 19:39:47 -07:00
Johnnie Birch
e519fca61c Refactor and turn on lowering for extend-add-pairwise 2021-07-31 10:52:39 -07:00
Johnnie Birch
e373ddfe1b Add extend-add-pairwise instructions x64 2021-07-30 15:06:58 -07:00
Johnnie Birch
4f601edc36 Add x64 support for remaining int-to-int extend simd instructions
Adds remaining support for int-to-int extend SIMD instructions.
Specifically, adds support for the remaining I32x4->I64x2 instructions.
2021-07-28 23:33:42 -07:00
Johnnie Birch
500f530322 Add support for i32x4_trunc_sat_f64x2_s for x64 2021-07-26 22:24:30 -07:00
Johnnie Birch
23290f0450 Add support for i32x4_trunc_sat_f64x2_u for x64 2021-07-26 22:24:30 -07:00
Johnnie Birch
5deda27977 Add support for Saturating Rounding Q-format Multiplication for x64 2021-07-26 20:32:46 -07:00
Johnnie Birch
ffec1f9b41 Fix for 3089: x64 ext_mul_i8x16 has incorrect lowering
Also factors out an unnecessary temp register
2021-07-26 20:06:43 -07:00
Andrew Brown
6b86984c41 x64: avoid load-coalescing SIMD operations with non-aligned loads
Fixes #2943, though not as optimally as may be desired. With x64 SIMD
instructions, the memory operand must be aligned--this change adds that
check. There are cases, however, where we can do better--see #3106.
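The rule itself is simple; a hedged sketch (the real lowering is more involved):

    use cranelift_codegen::ir::MemFlags;

    // An x64 SSE memory operand must be 16-byte aligned, so a load may only
    // be merged into the consuming SIMD instruction when the IR's memory
    // flags guarantee alignment; otherwise it is loaded into a register first.
    fn can_fuse_simd_load(flags: MemFlags) -> bool {
        flags.aligned()
    }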
2021-07-26 13:39:16 -07:00
Nick Fitzgerald
4283d2116d cranelift: Move most debug-level logs to the trace level
Cranelift crates have historically been much more verbose with debug-level
logging than most other crates in the Rust ecosystem. We log things like how
many parameters a basic block has, the color of virtual registers during
regalloc, etc. Even for Cranelift hackers, these things are largely only useful
when hacking specifically on Cranelift and looking at a particular test case,
not even when using some Cranelift embedding (such as Wasmtime).

Most of the time, when people want logging for their Rust programs, they do
something like:

    RUST_LOG=debug cargo run

This means that they get all that mostly not useful debug logging out of
Cranelift. So they might want to disable logging for Cranelift, or change it to
a higher log level:

    RUST_LOG=debug,cranelift=info cargo run

The problem is that this is already more annoying to type than `RUST_LOG=debug`,
and that Cranelift isn't one single crate, so you actually have to play
whack-a-mole with naming all the Cranelift crates off the top of your head,
something more like this:

    RUST_LOG=debug,cranelift=info,cranelift_codegen=info,cranelift_wasm=info,...

Therefore, we're changing most of the `debug!` logs into `trace!` logs: anything
that is very Cranelift-internal, unlikely to be useful/meaningful to the
"average" Cranelift embedder, or prints a message for each instruction visited
during a pass. On the other hand, things that just report a one line statistic
for a whole pass, for example, are left as `debug!`. The more verbose the log
messages are, the higher the bar they must clear to be `debug!` rather than
`trace!`.
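As a sketch of the resulting convention (illustrative messages, not actual Cranelift log statements):

    use log::{debug, trace};

    fn log_pass(num_spills: usize, insts: &[String]) {
        // A one-line, whole-pass statistic stays at debug level...
        debug!("regalloc finished: {} spills inserted", num_spills);
        // ...while per-instruction chatter drops down to trace level.
        for (i, inst) in insts.iter().enumerate() {
            trace!("visiting inst {}: {}", i, inst);
        }
    }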
2021-07-26 11:50:16 -07:00
Johnnie Birch
2452a4cd74 Refactor lowering structure for ext_mul on x64 and add comments 2021-07-15 01:07:52 -07:00
Johnnie Birch
e5b6bee968 Add emit tests to ext_mul_* instructions 2021-07-15 01:07:52 -07:00
Johnnie Birch
6fbe0b72bd Add simd_extmul_* support for x64 2021-07-15 01:07:52 -07:00
Johnnie Birch
d8e813204e Fold fcvt_low_from_uint into previously existing clif instructions 2021-07-09 10:39:05 -07:00
Johnnie Birch
2d676d838f Implements f64x2.convert_low_i32x4_u for x64 2021-07-09 10:39:05 -07:00
Anton Kirilov
330f02aa09 Enable the simd_i32x4_trunc_sat_f64x2 test for AArch64
Also, reorganize the AArch64-specific VCode instructions for unary
narrowing and widening vector operations, so that they are more
straightforward to use.

Copyright (c) 2021, Arm Limited.
2021-06-30 12:17:53 +01:00
Anton Kirilov
98f1ac789e Enable the simd_i16x8_q15mulr_sat_s test on AArch64
Copyright (c) 2021, Arm Limited.
2021-06-28 12:24:31 +01:00
Chris Fallin
d42c8692bc x64 backend: be explicit about unimplemented opcodes.
As discussed in #3035, most backends have explicit
`unimplemented!(...)` match-arms for opcode lowering cases that are not
yet implemented; this allows the backend maintainer to easily see what
is not yet implemented, and avoiding a catch-all wildcard arm is less
error-prone as opcodes are added in the future.

However, the x64 backend was the exception: as @akirilov-arm pointed
out, it had a wildcard match arm. This fixes the issue by explicitly
listing all opcodes the x64 backend does not yet implement.

As per our tests, these opcodes are not used or needed by Wasm lowering,
but it is good to know that they exist, so that we can eventually
either support or remove them.

This was a good exercise for me as I wasn't aware of a few of these in
particular: e.g., aarch64 supports `bmask` while x64 does not, and there
isn't a good reason why x64 shouldn't, especially if others hope to use
Cranelift as a SIMD-capable general codegen in the future.

The `unimplemented!()` cases are separate from `panic!()` ones: my
convention here was to split out those that are logically just *missing*
from those that should be *impossible*, mostly due to expected removal
by legalization before we reach the lowering step.
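A toy sketch of that convention (a stand-in enum, not the real `Opcode` type or lowering code):

    enum Op {
        Iadd,
        Bmask,
        IfcmpSp,
    }

    fn lower(op: Op) {
        match op {
            Op::Iadd => { /* emit an add */ }
            // Logically *missing*: listed explicitly so the gap stays visible.
            Op::Bmask => unimplemented!("bmask has no x64 lowering yet"),
            // Logically *impossible*: legalization should have removed it.
            Op::IfcmpSp => panic!("ifcmp_sp should not reach x64 lowering"),
        }
    }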
2021-06-26 15:29:14 -07:00
Chris Fallin
efe3930215 Fix bint on x64, and make bextend consistent with bool representation.
There has been occasional confusion with the representation that we use
for bool-typed values in registers, at least when these are wider than
one bit. Does a `b8` store `true` as 1, or as all-ones (`0xff`)?

We've settled on the latter because of some use-cases where the wide
bool becomes a mask -- see #2058 for more on this.

This is fine, and transparent, to most operations within CLIF, because
the bool-typed value still has only two semantically-visible states,
namely `true` and `false`.

However, we have to be careful with bool-to-int conversions. `bint` on
aarch64 correctly masked the all-ones value down to 0 or 1, as required
by the instruction specification, but on x64 it did not. This PR fixes
that bug and makes x64 consistent with aarch64.

While staring at this code I realized that `bextend` was also not
consistent with the all-ones invariant: it should do a sign-extend, not
a zero-extend as it previously did. This is also rectified and tested.
(Aarch64 also already had this case implemented correctly.)

Fixes #3003.
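The invariant, sketched in plain Rust with `u8`/`u32` standing in for `b8`/`b32` register values:

    fn main() {
        let b8_true: u8 = 0xff; // a wide bool's `true` is all-ones, not 1
        // `bint` must produce exactly 0 or 1, so it masks the low bit:
        assert_eq!((b8_true & 1) as u32, 1);
        // `bextend` must preserve all-ones, i.e. sign-extend, not zero-extend:
        assert_eq!((b8_true as i8) as i32 as u32, 0xffff_ffff);
    }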
2021-06-22 10:56:56 -07:00
Johnnie Birch
1770880e19 x64: add support for packed promote and demote (#2783)
* Add support for x64 packed promote low

* Add support for x64 packed floating point demote

* Update vector promote low and demote by adding constraints

Also does some renaming and minor refactoring
2021-06-04 15:59:20 -07:00
Andrew Brown
8dc4cc9fe3 x64: fix AVX512 flag checks
Previously, the multiple flags for certain AVX512 instructions were
checked using `OR`: e.g., if the CPU has AVX512VL `OR` AVX512DQ,
emit `VPMULLQ`. This is incorrect--the logic should be `AND`. The Intel
Software Developer Manual, vol. 1, sec. 15.4, has more information on
this (notably, there is the suggestion to check with `XGETBV` that the OS
is allowing the use of the XMM registers--but that is a separate issue).
This change switches to `AND` logic in the new backend.
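The corrected predicate, as a sketch (flag accessors simplified to plain booleans):

    // Both feature sets must be present before emitting VPMULLQ.
    fn can_emit_vpmullq(has_avx512vl: bool, has_avx512dq: bool) -> bool {
        has_avx512vl && has_avx512dq // AND, not OR
    }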
2021-06-01 11:41:16 -07:00
Andrew Brown
2a9f458ea3 x64: lower i8x16.shuffle to VPERMI2B when possible
When shuffling values from two different registers, the x64 lowering for
`i8x16.shuffle` must first shuffle each register separately and then OR
the results with SSE instructions. With `VPERMI2B`, available in
AVX512VL + AVX512VBMI, this can be done in a single instruction after
the shuffle mask has been moved into the destination register. This
change uses `VPERMI2B` for that case when the CPU supports it.
2021-06-01 11:40:53 -07:00
Benjamin Bouvier
51edea9e57 cranelift: introduce a new WasmtimeAppleAarch64 calling convention
The previous choice to use the WasmtimeSystemV calling convention for
apple-aarch64 devices was incorrect: padding of arguments was
incorrectly computed. So we have to use some flavor of the apple-aarch64
ABI there.

Since we want to support the wasmtime custom convention for multiple
returns on apple-aarch64 too, a new custom Wasmtime calling convention
was introduced to support this.
2021-06-01 17:29:12 +02:00
Andrew Brown
459fce3467 x64: lower i8x16.popcnt to VPOPCNTB when possible
When AVX512VL and AVX512BITALG are available, Wasm SIMD's `popcnt`
instruction can be lowered to a single x64 instruction, `VPOPCNTB`,
instead of 8+ instructions.
2021-05-25 12:16:25 -07:00
Chris Fallin
11a2ef01e7 Provide BB layout info externally in terms of code offsets.
This is sometimes useful when performing analyses on the generated
machine code: for example, some kinds of code verifiers will want to do
a control-flow analysis, and it is much easier to do this if one does
not have to recover the CFG from the machine code (doing so requires
heavyweight analysis when indirect branches are involved). If one trusts
the control-flow lowering and only needs to verify other properties of
the code, this can be very useful.
2021-05-24 09:18:06 -07:00
Johnnie Birch
9a5c9607e1 Vpopcnt for x64 2021-05-21 19:23:26 -07:00
Chris Fallin
95559c01aa Merge pull request from GHSA-hpqh-2wqx-7qp5
Fix spillslot reload of narrow values: zero-extend, don't sign-extend. Release v0.74.0 as security-patch release.
2021-05-21 12:01:55 -07:00
Andrew Brown
54b45d28a3 x64: lower fcvt_from_uint to VCVTUDQ2PS when possible
When AVX512VL and AVX512F are available, use a single instruction
(`VCVTUDQ2PS`) instead of a lengthy 9-instruction sequence. This
optimization is a port from the legacy x86 backend.
2021-05-19 12:20:11 -07:00
Chris Fallin
a1c9b06cea Fix spillslot reload of narrow values: zero-extend, don't sign-extend.
Previously, the x64 backend's ABI code would generate a sign-extending
load when loading a less-than-64-bit integer from a spillslot. This is
incorrect: e.g., for i32s > 0x80000000, this would result in all high
bits set.

This interacts poorly with another optimization. Normally, the invariant
is that the high bits of a register holding a value of a certain type,
beyond that type's bits, are undefined. However, as an optimization, we
recognize and use the fact that on x86-64, 32-bit instructions zero the
upper 32 bits. This allows us to elide a 32-to-64-bit zero-extend op
(turning it into just a move, which can then sometimes disappear
entirely due to register coalescing).

If a spill and reload happen between the production of a 32-bit value
from an instruction known to zero the upper bits and its use, then we
will rely on zero upper bits that might actually be set by a
sign-extend. This will result in incorrect execution.

As a fix, we stick to a simple invariant: we always spill and reload a
full 64 bits when handling integer registers on x64. This ensures that
no bits are mangled.
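A plain-Rust sketch of how a sign-extending reload breaks the zero-upper-bits assumption (arithmetic only, not backend code):

    fn main() {
        let v: u32 = 0x8000_0001; // from a 32-bit op: upper 32 bits are zero
        // Old, buggy reload: sign-extend the 32-bit spill slot into 64 bits.
        let bad = (v as i32) as i64 as u64; // 0xffff_ffff_8000_0001
        // Fixed behavior: spill/reload all 64 bits, so the zeros survive.
        let good = v as u64; // 0x0000_0000_8000_0001
        assert_ne!(bad, good);
    }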
2021-05-19 12:19:19 -07:00
Andrew Brown
7ef3ae2903 x64: implement vselect with variable blend instructions
This change implements `vselect` using SSE4.1's `BLENDVPS`, `BLENDVPD`,
and `PBLENDVB`. `vselect` is a lane-selecting instruction that is used
by
[simple_preopt.rs](fa1faf5d22/cranelift/codegen/src/simple_preopt.rs (L947-L999))
to lower `bitselect` to a single x86 instruction when the condition mask
is known to be boolean (all 1s or 0s, e.g., from a conversion). This is
better than `bitselect` in general, which lowers to 4-5 instructions.
The old backend had the `vselect` lowering; this simply introduces it to
the new backend.
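A scalar sketch of the two operations (plain Rust on a single `u32` lane, not the SIMD lowering):

    // `bitselect` chooses individual *bits*: bits of `a` where the mask is 1,
    // bits of `b` where it is 0.
    fn bitselect(mask: u32, a: u32, b: u32) -> u32 {
        (a & mask) | (b & !mask)
    }

    // When every lane's mask is known to be all-ones or all-zeros, bitselect
    // degenerates into a whole-lane choice -- exactly what the variable blend
    // instructions implement.
    fn vselect(mask_is_all_ones: bool, a: u32, b: u32) -> u32 {
        if mask_is_all_ones { a } else { b }
    }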
2021-05-17 11:23:33 -07:00
Andrew Brown
e676589b0c x64: lower i64x2.imul to VPMULLQ when possible
This adds the machinery to encode the VPMULLQ instruction which is
available in AVX512VL and AVX512DQ. When these feature sets are
available, we use this instruction instead of a lengthy 12-instruction
sequence.
2021-05-13 20:14:05 -07:00
Andrew Brown
c982d2be65 x64: move multiplication lowering
Since the lowering of `imul` complicated the other ALU operations it was
matched with, and since future commits will alter the multiplication
lowering further, this change moves the `imul` lowering to its own match
block.
2021-05-13 20:14:05 -07:00
Andrew Brown
02796fc670 x64: move encodings to a separate module
In order to benchmark the encoding code with criterion, the functions
and structures must be public. Moving this code to its own module
(instead of keeping as a submodule to `inst`), allows `inst` to remain
private. This avoids having to expose and document (or ignore
documenting) the numerous instruction variants in `inst` while allowing
access to the encoding code. This commit changes no functionality.
2021-05-13 10:46:08 -07:00
Afonso Bordado
e021995323 Allow i128 amount operands on shift instructions in the x64 backend
Fixes #2727.
2021-05-10 18:32:20 +01:00
Andrew Brown
0acc1451ea x64: lower iabs.i64x2 using a single AVX512 instruction when possible (#2819)
* x64: add EVEX encoding mechanism

Also, includes an empty stub module for the VEX encoding.

* x64: lower abs.i64x2 to VPABSQ when available

* x64: refactor EVEX encodings to use `EvexInstruction`

This change replaces the `encode_evex` function with a builder-style struct, `EvexInstruction`. This approach clarifies the code, adds documentation, and results in slight speedups when benchmarked.

* x64: rename encoding CodeSink to ByteSink
2021-04-15 11:53:58 -07:00