Cranelift has only one SIMD instruction lowering that depends on SSE4.2, so
this commit adds a lowering rule for `pcmpgtq` which doesn't use SSE4.2,
making it possible to lower the baseline requirement for SIMD support from
SSE4.2 to SSE4.1.
The `has_sse42` setting is no longer enabled by default for Cranelift.
Additionally `enable_simd` no longer requires `has_sse42` on x64.
Finally, the fuzz generator for Wasmtime codegen settings now allows
flipping the `has_sse42` setting instead of unconditionally setting it
to `true`.
The specific lowering for `pcmpgtq` is copied from LLVM's lowering of
this instruction.
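For reference, a scalar model of the trick (the actual lowering operates on
packed 32-bit halves with `pcmpgtd`/`pcmpeqd` plus shuffles; this sketch only
illustrates the comparison identity it relies on):

```rust
// Signed 64-bit `a > b` expressed with 32-bit compares: compare the
// signed high halves, and on a tie compare the low halves unsigned.
fn sgt64_via_32bit_halves(a: i64, b: i64) -> bool {
    let (a_hi, a_lo) = ((a >> 32) as i32, a as u32);
    let (b_hi, b_lo) = ((b >> 32) as i32, b as u32);
    a_hi > b_hi || (a_hi == b_hi && a_lo > b_lo)
}
```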
* Handle signature() for more libcalls
This is necessary to be able to call them in the interpreter. All the
remaining libcalls that `signature()` doesn't handle are never used in
CLIF IR, only in code compiled by a backend.
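As a rough sketch of what `signature()` has to produce, here's how a
signature for a unary f32 libcall such as `CeilF32` could be built with the
`cranelift-codegen` IR types (illustrative; the actual construction in the
crate may differ):

```rust
use cranelift_codegen::ir::{types, AbiParam, Signature};
use cranelift_codegen::isa::CallConv;

// Hypothetical helper: a ceilf-style libcall takes one f32 and
// returns one f32.
fn ceilf32_sig(call_conv: CallConv) -> Signature {
    let mut sig = Signature::new(call_conv);
    sig.params.push(AbiParam::new(types::F32));
    sig.returns.push(AbiParam::new(types::F32));
    sig
}
```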
* Fix libcall declarations in cranelift-frontend
* Add function signatures
* Use correct pointer type instead of I64
* fix typo
* add test to check that Option<EntityRef> is twice as large as EntityRef
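A minimal sketch of the property being tested, using a hypothetical entity
type (entity references are plain `u32` newtypes with no niche, so wrapping
one in `Option` doubles its size, which is why `PackedOption` exists):

```rust
use core::mem::size_of;

// Stand-in for a cranelift-entity reference type: a plain u32 newtype.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Entity(u32);

#[test]
fn option_entity_is_twice_as_large() {
    assert_eq!(size_of::<Option<Entity>>(), 2 * size_of::<Entity>());
}
```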
* grammar
* grammar
* reverse snakecase -- Not sure if folks want this type of change
I noticed recently that for the `ImmRegRegShift` addressing mode
Cranelift will unconditionally emit at least a 1-byte immediate for the
offset to be added to the register addition computation, even when the
offset is zero. In that case, though, the instruction encoding can drop the
immediate and be one byte more compact. This commit started off by
applying this optimization, which resulted in the `*.clif` test changes
in this commit.
Reading further into this code, however, I personally found it quite hard
to follow what was happening with all the various branches and ModRM/SIB
bits. I reviewed these encodings in the x64 architecture manual and
attempted to improve the encoding logic here. The new version in this
commit is intended to be functionally equivalent to the prior version,
with dropping a zero offset from the `ImmRegRegShift` variant as the only
change.
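A condensed rendering of the mod-bits rule involved, under my reading of the
x64 manual (not the actual Cranelift emission code): with a SIB byte,
mod=0b00 means no displacement, except that a base whose low three bits are
0b101 (RBP/R13) reuses that encoding for "disp32, no base" and so still
needs a zero disp8:

```rust
// Returns the ModRM mod bits and the number of displacement bytes for
// a base+index*scale address with the given offset.
fn sib_mod_bits(offset: i32, base_enc: u8) -> (u8, usize) {
    let base_is_rbp_like = (base_enc & 0b111) == 0b101;
    if offset == 0 && !base_is_rbp_like {
        (0b00, 0) // no displacement byte at all: one byte shorter
    } else if i8::try_from(offset).is_ok() {
        (0b01, 1) // 8-bit displacement
    } else {
        (0b10, 4) // 32-bit displacement
    }
}
```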
`FunctionBuilder::create_stackslot` was split into `create_sized_stack_slot`
and `create_dynamic_stack_slot`. This updates the `StackBuilder` docstring
to refer to the new methods.
Fixes #5838.
* Cranelift: remove non-egraphs optimization pipeline and `use_egraphs` option.
This PR removes the LICM, GVN, and preopt passes, and associated support
pieces, from `cranelift-codegen`. Not to worry, we still have
optimizations: the egraph framework subsumes all of these, and has been
on by default since #5181.
A few decision points:
- Filetests for the legacy LICM, GVN and simple_preopt were removed too.
As we built optimizations in the egraph framework we wrote new tests
for the equivalent functionality, and many of the old tests were
testing specific behaviors in the old implementations that may not be
relevant anymore. However, if folks prefer, I could take a different
approach here and try to port over all of the tests.
- The corresponding filetest modes (commands) were deleted too. The
`test alias_analysis` mode remains, but no longer invokes a separate
GVN first (since there is no separate GVN that will not also do alias
analysis), so the tests were tweaked slightly to work with that. The
egraph testsuite also covers alias analysis.
- The `divconst_magic_numbers` module is removed since it's unused
without `simple_preopt`, though this is the one remaining optimization
we still need to build in the egraphs framework, pending #5908. The
magic numbers will live forever in git history so removing this in the
meantime is not a major issue IMHO.
- The `use_egraphs` setting itself was removed at both the Cranelift and
Wasmtime levels. It has been marked deprecated for a few releases now
(Wasmtime 6.0, 7.0, upcoming 8.0, and corresponding Cranelift
versions) so I think this is probably OK. As an alternative if anyone
feels strongly, we could leave the setting and make it a no-op.
* Update test outputs for remaining test differences.
This commit adds new lowerings to the AArch64 backend of the
element-based `fmla` and `fmls` instructions. These instructions have
one of the multiplicands as an implicit broadcast of a single lane of
another register and can help remove `shuffle` or `dup` instructions
that would otherwise be used to implement them.
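A scalar model of the by-element form's semantics (illustrative only, not
backend code): for `fmla v0.4s, v1.4s, v2.s[lane]`, one multiplicand is the
same lane of `v2` broadcast across all elements, so no separate `dup` or
`shuffle` is needed:

```rust
// acc[i] += a[i] * b[lane], fused, for every lane i.
fn fmla_by_element(acc: &mut [f32; 4], a: [f32; 4], b: [f32; 4], lane: usize) {
    for i in 0..4 {
        acc[i] = a[i].mul_add(b[lane], acc[i]);
    }
}
```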
This commit adds constant-propagation optimizations for
`splat`-of-constant to produce a `vconst` node. This should later help
hoist these constants out of loops if the pattern shows up in wasm.
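For example, a `splat` of a 32-bit constant onto an `i32x4` type can be
folded to a `vconst` whose 16 bytes are the immediate repeated four times.
A sketch of that byte computation (the real rule is written in ISLE):

```rust
// Replicate a 32-bit immediate into the 16-byte vconst payload.
fn splat_i32x4_bytes(k: u32) -> [u8; 16] {
    let lane = k.to_le_bytes();
    let mut bytes = [0u8; 16];
    for chunk in bytes.chunks_exact_mut(4) {
        chunk.copy_from_slice(&lane);
    }
    bytes
}
```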
* simple_gvn: recognize commutative operators
Normalize instructions with commutative opcodes by sorting the arguments. This
means instructions like `iadd v0, v1` and `iadd v1, v0` will be considered
identical by GVN and deduplicated.
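Conceptually the normalization is just a sort of the argument pair (the real
code works on `InstructionData`; names here are illustrative):

```rust
// Give commutative instructions a canonical argument order so that
// `iadd v0, v1` and `iadd v1, v0` hash and compare as equal.
fn normalize_commutative(args: &mut [u32; 2], is_commutative: bool) {
    if is_commutative && args[0] > args[1] {
        args.swap(0, 1);
    }
}
```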
* Remove `UsubSat` and `SsubSat` from `is_commutative`
They are not actually commutative
* Remove `TODO`s
* Move InstructionData normalization into helper fn
* Add normalization of commutative instructions in the egraph implementation
* Handle reflexive icmp/fcmps in GVN
* Change formatting of `normalize_in_place`
* suggestions from code review
* ISLE: move `icmp` rewrites to separate file.
Move `icmp`-related rewrite rules from `algebraic.isle` to `icmp.isle`.
Also move `icmp`-related tests from `algebraic.clif` to `icmp.clif`.
* Put parameterized and unparameterized `icmp` tests in separate files
* Undo refactoring of (ir)reflexivity rewrites
* Fix `icmp-parameterised.clif`
* Undo formatting/comment changes
* x64: Add AVX encodings of `vcvt{ss2sd,sd2ss}`
Additionally update the instruction helpers to take an `XmmMem` argument
to allow load sinking into the instruction.
* x64: Add AVX encoding of `sqrts{s,d}`
* x64: Add AVX support for `rounds{s,d}`
* x64: Deduplicate fcmp emission logic
The `select`-of-`fcmp` lowering duplicated a good deal of `FloatCC`
lowering logic that was already done by `emit_fcmp`, so this commit
refactors these lowering rules to instead delegate to `emit_fcmp` and
then handle that result.
* Swap order of condition codes
Shouldn't affect the correctness of this operation and it's a bit more
natural to write the lowering rule this way.
* Swap the order of comparison operands
No need to swap `a b`; only the `x y` needs swapping.
* Fix x64 printing of `XmmCmove`
* Implement TLS on Aarch64 Mach-O
* Add aarch64 macho TLS filetest
* Address review comments
- `Aarch64` instead of `AArch64` in comments
- Remove unnecessary guard in tls_value lowering
- Remove unnecessary regalloc metadata in emission
* Use x1 as temporary register in emission
- Instead of passing in a temporary register to use when emitting
the TLS code, just use `x1`, as it's already in the clobber set.
This also keeps the size of `aarch64::inst::Inst` at 32 bytes.
- Update filetest accordingly
* Update aarch64 mach-o TLS filetest
Following up on the discussion in
https://github.com/bytecodealliance/wasmtime/pull/6011
this adds an improved implementation of TrapIf for s390x
using a single conditional branch instruction.
If the trap condition is true, we branch into the middle of
the branch instruction itself - those middle two bytes are zero,
which matches the encoding of the trap instruction.
In addition, show the trap code for Trap and TrapIf
instructions in assembler output.
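A hedged sketch of the byte layout, from my reading of this commit against
the z/Architecture BRCL format (not the actual emission code): a 6-byte BRCL
with a halfword offset of +1 branches to its own third byte, and the two
bytes found there are 0x0000, the trap encoding:

```rust
// Assemble the self-trapping conditional branch. `cond_mask` is the
// 4-bit BRCL condition mask (0..=15).
fn self_trapping_brcl(cond_mask: u8) -> [u8; 6] {
    let ri2: i32 = 1; // offset in halfwords: target = this insn + 2 bytes
    let [b0, b1, b2, b3] = ri2.to_be_bytes();
    // +0: BRCL opcode/mask nibbles; +2..+4 land on 0x00 0x00, the trap.
    [0xC0, (cond_mask << 4) | 0x04, b0, b1, b2, b3]
}
```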
* x64: Add instruction helpers for `mov{d,q}`
These will soon grow AVX-equivalents so move them to instruction helpers
to have clauses for AVX in the future.
* x64: Don't auto-convert between RegMemImm and XmmMemImm
The previous conversion, `mov_rmi_to_xmm`, would move from GPR registers
to XMM registers which isn't what many of the other `convert` statements
between these newtypes do. This seemed like a possible footgun so I've
removed the auto-conversion and added an explicit helper to go from a
`u32` to an `XmmMemImm`.
* x64: Add AVX encodings of some more GPR-related insns
This commit adds some more support for AVX instructions where GPRs are
mixed in with XMM registers. This required a few more variants of
`Inst` to handle the new instructions.
* Fix vpmovmskb encoding
* Fix xmm-to-gpr encoding of vmovd/vmovq
* Fix typo
* Fix rebase conflict
* Fix rebase conflict with tests
* cranelift: Add extra runtests for `clz`/`ctz`
* riscv64: Restrict lowering rules for `ctz`/`clz`
* cranelift: Add `u64` isle helpers
* riscv64: Improve `ctz` codegen
* riscv64: Improve `clz` codegen
* riscv64: Improve `cls` codegen
* riscv64: Improve `clz.i128` codegen
Instead of checking whether the top half has 64 leading zeros, check
whether it *is* 0; that way we avoid loading the `64` constant.
* riscv64: Improve `ctz.i128` codegen
Instead of checking whether the bottom half has 64 trailing zeros, check
whether it *is* 0; that way we avoid loading the `64` constant.
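A scalar model of both improved i128 lowerings (illustrative, not the ISLE
rules): testing whether the relevant half *is* zero can use the zero
register, whereas comparing a count against 64 would have to materialize
that constant:

```rust
fn clz128(hi: u64, lo: u64) -> u32 {
    if hi == 0 { 64 + lo.leading_zeros() } else { hi.leading_zeros() }
}

fn ctz128(hi: u64, lo: u64) -> u32 {
    if lo == 0 { 64 + hi.trailing_zeros() } else { lo.trailing_zeros() }
}
```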
* riscv64: Use extended value in `lower_cls`
* riscv64: Use pattern matches on `bseti`
* Restrict the types for isplit and iconcat to match backends
* Admit unimplemented bitwidths to isplit/iconcat
* Modify the NarrowInt type instead of shadowing it
* Fix filetest failures
* Add a `MachBuffer::defer_trap` method
This commit adds a new method to `MachBuffer` to defer trap opcodes to
the end of a function in a similar manner to how constants are deferred
to the end of the function. This is useful for backends which frequently
use `TrapIf`-style opcodes. Currently a jump is emitted which skips the
next instruction, a trap, and then execution continues normally. While
there isn't any pressing problem with this construction, the trap opcode
sits in the middle of the instruction stream, as opposed to "off on the
side", despite rarely being taken.
With this method in place all the backends (except riscv64 since I
couldn't figure it out easily enough) have a new lowering of their
`TrapIf` opcode. Now a trap is deferred, which returns a label, and then
that label is jumped to when executing the trap. A fixup is then
recorded in `MachBuffer` to get patched later on during emission, or at
the end of the function. Subsequently all `TrapIf` instructions
translate to a single branch plus a single trap at the end of the
function.
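A minimal model of the idea (hypothetical, simplified types; the real API is
`MachBuffer::defer_trap` and differs in detail):

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
struct Label(u32);

#[derive(Clone, Copy)]
enum TrapCode {
    HeapOutOfBounds,
    IntegerDivisionByZero,
}

#[derive(Default)]
struct TrapDeferral {
    next_label: u32,
    pending: Vec<(Label, TrapCode)>,
}

impl TrapDeferral {
    /// Record a trap for later emission; the lowered `TrapIf` branches
    /// to the returned label when its condition holds.
    fn defer_trap(&mut self, code: TrapCode) -> Label {
        let label = Label(self.next_label);
        self.next_label += 1;
        self.pending.push((label, code));
        label
    }

    /// At the end of the function, bind each label and emit its trap.
    fn emit_all(&mut self, mut emit: impl FnMut(Label, TrapCode)) {
        for (label, code) in self.pending.drain(..) {
            emit(label, code);
        }
    }
}
```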
I've additionally further updated some more lowerings in the x64 backend
which were explicitly using traps to instead use `TrapIf` where
applicable to avoid jumping over traps mid-function. Other backends
didn't appear to have many jump-over-the-next-trap patterns.
Lots of tests have had their expectations updated here which should
reflect all the traps being sunk to the end of functions.
* Print trap code on all platforms
* Emit traps before constants
* Preserve source location information for traps
* Fix test expectations
* Attempt to fix s390x
The MachBuffer was registering trap codes with the first byte of the
trap, but the SIGILL handler was expecting it to be registered with the
last byte of the trap. The fix exploits the fact that SIGILL is always
generated by a 2-byte instruction: march back 2 bytes for SIGILL, while
continuing to march back 1 byte for SIGFPE-generating instructions.
* Back out s390x changes
* Back out more s390x bits
* Review comments
Previously the sequence number could affect the `PartialEq` and `Hash`
impls. Ignoring the sequence number in `PartialEq` and `Hash` allows us
to not renumber all blocks in the incremental cache.
* x64: Fix vbroadcastss with AVX2 and without AVX
This commit fixes a corner case in the emission of the
`vbroadcasts{s,d}` instructions. The memory-to-xmm form of these
instructions was available with the AVX instruction set, but the
xmm-to-xmm form of these instructions wasn't available until AVX2.
The instruction requirement for these is listed as AVX, but the lowering
rules are annotated to use either AVX2 or AVX as appropriate.
While this should work in practice, it didn't satisfy the assertion
about enabled features for each instruction. The `vbroadcastss`
instruction was listed as requiring AVX but could get emitted when AVX2
was enabled (due to the reg-to-reg form being available). This caused an
issue for the fuzzer where AVX2 was enabled but AVX was disabled.
One possible fix would be to add more opcodes, one for reg-to-reg and
one for mem-to-reg. That seemed like overkill for a pretty niche
situation that shouldn't actually come up in practice anywhere.
Instead, this commit changes all the `has_avx` accessors to the
`use_avx_simd` predicate already available in the target flags. The
`use_avx2_simd` predicate was then updated to additionally require
`has_avx`, so if AVX2 is enabled but AVX is disabled the
`vbroadcastss` instruction won't get emitted any more.
Closes #6059
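Restated as plain predicates (illustrative, not the settings-generated
code):

```rust
// AVX lowerings require AVX; AVX2 lowerings now also require AVX, so a
// fuzzed configuration with AVX2 but not AVX selects neither.
fn use_avx_simd(has_avx: bool) -> bool {
    has_avx
}

fn use_avx2_simd(has_avx: bool, has_avx2: bool) -> bool {
    has_avx2 && has_avx
}
```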
* Pass `enable_simd` on a few more files
* Only allow pp_cmp within a single block
Block order shouldn't matter for codegen and restricting pp_cmp to a
single block will allow making instruction sequence numbers local to a
block.
* Make sequence numbers local to instructions
This allows renumbering to be localized to a single block, where previously
it could affect the entire function. It also saves 32 bits of overhead per
block.