x64: Add rudimentary support for some AVX instructions (#5795)
* x64: Add rudimentary support for some AVX instructions

I was poking around SpiderMonkey's wasm backend and saw that the various assembler functions used are all `v*`-prefixed, which looks like they're intended for use with AVX instructions. I looked at Cranelift and it currently doesn't have support for many AVX-based instructions, so I figured I'd take a crack at it!

The support added here is a bit of a mishmash when viewed alone, but my general goal was to take a single instruction from the SIMD proposal for WebAssembly and migrate all of its component instructions to AVX. By random chance I picked a fairly complicated instruction, `f32x4.min`. This wasm instruction is implemented on x64 with 4 unique SSE instructions and ended up being a pretty good candidate. Further digging into AVX-vs-SSE shows that there should be two major benefits to using AVX over SSE:

* Primarily, AVX instructions largely use a three-operand form where two input registers are read and a separate output register is written. This is in contrast to SSE's predominant one-register-is-both-input-and-output pattern. This should free up the register allocator a bit and additionally remove the need for moves between registers.

* As #4767 notes, the memory operands of VEX-encoded instructions (aka AVX instructions) do not have strict alignment requirements, which means we would be able to sink loads and stores into individual instructions instead of emitting them separately.

So I set out on my journey to implement the instructions used by `f32x4.min`. The first few were fairly easy. The machinst backends are already of the shape "take these inputs and compute the output", where the x86 requirement of a register being both input and output is postprocessed in. This means that the `inst.isle` creation helpers for SSE instructions were already of the correct form to use for AVX. I chose to add new `rule` branches to the instruction creation helpers, for example `x64_andnps`. The new `rule` only runs if AVX is enabled and emits an AVX instruction instead of an SSE instruction to achieve the same goal. This means that no lowerings of clif instructions were modified; instead, just new instructions are generated (a sketch of this pattern is shown below).

The VEX encoding was previously not heavily used in Cranelift. The only current users are the FMA-style instructions that Cranelift has at this time. These FMA instructions have one more operand than `vandnps`, for example, so I split the existing `XmmRmRVex` into a few more variants to fit the shapes of the instructions that needed generating for `f32x4.min`. This was accompanied by more AVX opcode definitions, more emission support, etc.

Upon implementing all of this it turned out that the test suite was failing on my machine because the memory-operand encodings of VEX instructions were not supported. I didn't explicitly add those myself, but some preexisting RIP-relative addressing was leaking into the new instructions through existing tests. I opted to go ahead and fill out the memory addressing modes of the VEX encoding to get the tests passing again.

All in all, this PR adds new instructions to the x64 backend for a number of AVX instructions, updates 5 existing instruction producers to use AVX instructions conditionally, implements VEX memory operands, and adds some simple tests for the new output of `f32x4.min`.
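To make that pattern concrete, here's a sketch of the shape these updated helpers take, modeled on the `x64_andnps` change in the diff below (the explanatory comments are mine):

```
;; Helper for creating `andnps` instructions.
(decl x64_andnps (Xmm XmmMem) Xmm)

;; Priority-0 rule: the existing SSE lowering, kept as the fallback.
(rule 0 (x64_andnps src1 src2)
      (xmm_rm_r (SseOpcode.Andnps) src1 src2))

;; Priority-1 rule: only applies when `has_avx` is true, emitting the
;; three-operand VEX-encoded `vandnps` instead.
(rule 1 (x64_andnps src1 src2)
      (if-let $true (has_avx))
      (xmm_rmir_vex (AvxOpcode.Vandnps) src1 src2))
```

Because both rules have the same signature and semantics, callers of `x64_andnps` never need to know which encoding was selected.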
The existing runtest for `f32x4.min` caught a few intermediate bugs along the way, and I additionally added a plain `target x86_64` to that runtest to ensure that it executes both with and without AVX, exercising the various lowerings. I'll also note that this, and future support, should be well-fuzzed through Wasmtime's fuzzing, which may explicitly disable AVX support despite the machine having access to AVX, so non-AVX lowerings should stay well-tested into the future.

It's also worth mentioning that I am not an AVX or VEX or x64 expert. Implementing the memory-operand support for VEX was the hardest part of this PR, and while I think it should be good, someone else should definitely double-check me. Additionally, I haven't added many instructions to the x64 backend yet, so I may have missed obvious places to add tests or the like; I'm happy to follow up with anything to be more thorough if necessary.

Finally, I should note that this is just the tip of the iceberg when it comes to AVX. My hope is to get some of the idioms sorted out to make it easier for future PRs to add one-off instruction lowerings and the like.

* Review feedback
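As an aside, the runtest mentioned above has roughly this shape — a sketch from memory rather than the exact file in-tree, assuming the usual filetest syntax for target flags and vector float literals:

```
test run
target x86_64
target x86_64 has_avx

function %fmin_f32x4(f32x4, f32x4) -> f32x4 {
block0(v0: f32x4, v1: f32x4):
    v2 = fmin v0, v1
    return v2
}
; run: %fmin_f32x4([0x1.0 0x2.0 0x3.0 0x4.0], [0x4.0 0x3.0 0x2.0 0x1.0]) == [0x1.0 0x2.0 0x2.0 0x1.0]
```

With both `target` lines present, the same function is compiled and run under the plain-SSE configuration and the AVX-enabled configuration, so a regression in either lowering shows up in the same test.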
@@ -227,8 +227,24 @@
                (mask Xmm)
                (dst WritableXmm))

-    ;; XMM (scalar or vector) binary op that relies on the VEX prefix.
-    (XmmRmRVex (op AvxOpcode)
+    ;; XMM (scalar or vector) binary op that relies on the VEX prefix and
+    ;; has two inputs.
+    (XmmRmiRVex (op AvxOpcode)
+                (src1 Xmm)
+                (src2 XmmMemImm)
+                (dst WritableXmm))
+
+    ;; XMM (scalar or vector) ternary op that relies on the VEX prefix and
+    ;; has two dynamic inputs plus one immediate input.
+    (XmmRmRImmVex (op AvxOpcode)
+                  (src1 Xmm)
+                  (src2 XmmMem)
+                  (dst WritableXmm)
+                  (imm u8))
+
+    ;; XMM (scalar or vector) ternary op that relies on the VEX prefix and
+    ;; has three dynamic inputs.
+    (XmmRmRVex3 (op AvxOpcode)
                (src1 Xmm)
                (src2 Xmm)
                (src3 XmmMem)
@@ -1132,11 +1148,16 @@
 (decl cc_nz_or_z (CC) CC)
 (extern extractor cc_nz_or_z cc_nz_or_z)

-(type AvxOpcode extern
+(type AvxOpcode
       (enum Vfmadd213ss
             Vfmadd213sd
             Vfmadd213ps
-            Vfmadd213pd))
+            Vfmadd213pd
+            Vminps
+            Vorps
+            Vandnps
+            Vcmpps
+            Vpsrld))

 (type Avx512Opcode extern
       (enum Vcvtudq2ps
@@ -1226,6 +1247,10 @@
 (decl xmm_to_xmm_mem_imm (Xmm) XmmMemImm)
 (extern constructor xmm_to_xmm_mem_imm xmm_to_xmm_mem_imm)
+
+;; Convert an `XmmMem` into an `XmmMemImm`.
+(decl xmm_mem_to_xmm_mem_imm (XmmMem) XmmMemImm)
+(extern constructor xmm_mem_to_xmm_mem_imm xmm_mem_to_xmm_mem_imm)

 ;; Allocate a new temporary GPR register.
 (decl temp_writable_gpr () WritableGpr)
 (extern constructor temp_writable_gpr temp_writable_gpr)
@@ -1438,6 +1463,9 @@
 (decl use_sse41 (bool) Type)
 (extern extractor infallible use_sse41 use_sse41)

+(decl pure has_avx () bool)
+(extern constructor has_avx has_avx)
+
 ;;;; Helpers for Merging and Sinking Immediates/Loads ;;;;;;;;;;;;;;;;;;;;;;;;;

 ;; Extract a constant `Imm8Reg.Imm8` from a value operand.
@@ -2285,8 +2313,11 @@

 ;; Helper for creating `orps` instructions.
 (decl x64_orps (Xmm XmmMem) Xmm)
-(rule (x64_orps src1 src2)
+(rule 0 (x64_orps src1 src2)
       (xmm_rm_r (SseOpcode.Orps) src1 src2))
+(rule 1 (x64_orps src1 src2)
+      (if-let $true (has_avx))
+      (xmm_rmir_vex (AvxOpcode.Vorps) src1 src2))

 ;; Helper for creating `orpd` instructions.
 (decl x64_orpd (Xmm XmmMem) Xmm)
@@ -2360,8 +2391,11 @@

 ;; Helper for creating `andnps` instructions.
 (decl x64_andnps (Xmm XmmMem) Xmm)
-(rule (x64_andnps src1 src2)
+(rule 0 (x64_andnps src1 src2)
       (xmm_rm_r (SseOpcode.Andnps) src1 src2))
+(rule 1 (x64_andnps src1 src2)
+      (if-let $true (has_avx))
+      (xmm_rmir_vex (AvxOpcode.Vandnps) src1 src2))

 ;; Helper for creating `andnpd` instructions.
 (decl x64_andnpd (Xmm XmmMem) Xmm)
@@ -2602,12 +2636,18 @@
 (rule (x64_cmpp $F64X2 x y imm) (x64_cmppd x y imm))

 (decl x64_cmpps (Xmm XmmMem FcmpImm) Xmm)
-(rule (x64_cmpps src1 src2 imm)
+(rule 0 (x64_cmpps src1 src2 imm)
       (xmm_rm_r_imm (SseOpcode.Cmpps)
                     src1
                     src2
                     (encode_fcmp_imm imm)
                     (OperandSize.Size32)))
+(rule 1 (x64_cmpps src1 src2 imm)
+      (if-let $true (has_avx))
+      (xmm_rmr_imm_vex (AvxOpcode.Vcmpps)
+                       src1
+                       src2
+                       (encode_fcmp_imm imm)))

 ;; Note that `Size32` is intentional despite this being used for 64-bit
 ;; operations, since this presumably induces the correct encoding of the
@@ -2858,8 +2898,11 @@

 ;; Helper for creating `psrld` instructions.
 (decl x64_psrld (Xmm XmmMemImm) Xmm)
-(rule (x64_psrld src1 src2)
+(rule 0 (x64_psrld src1 src2)
       (xmm_rmi_xmm (SseOpcode.Psrld) src1 src2))
+(rule 1 (x64_psrld src1 src2)
+      (if-let $true (has_avx))
+      (xmm_rmir_vex (AvxOpcode.Vpsrld) src1 src2))

 ;; Helper for creating `psrlq` instructions.
 (decl x64_psrlq (Xmm XmmMemImm) Xmm)
@@ -3070,10 +3113,11 @@

 ;; Helper for creating `minps` instructions.
 (decl x64_minps (Xmm Xmm) Xmm)
-(rule (x64_minps x y)
-      (let ((dst WritableXmm (temp_writable_xmm))
-            (_ Unit (emit (MInst.XmmRmR (SseOpcode.Minps) x y dst))))
-        dst))
+(rule 0 (x64_minps x y)
+      (xmm_rm_r (SseOpcode.Minps) x y))
+(rule 1 (x64_minps x y)
+      (if-let $true (has_avx))
+      (xmm_rmir_vex (AvxOpcode.Vminps) x y))

 ;; Helper for creating `minpd` instructions.
 (decl x64_minpd (Xmm Xmm) Xmm)
@@ -3101,15 +3145,25 @@
       (xmm_rm_r (SseOpcode.Maxpd) x y))


-;; Helper for creating `MInst.XmmRmRVex` instructions.
-(decl xmm_rmr_vex (AvxOpcode Xmm Xmm XmmMem) Xmm)
-(rule (xmm_rmr_vex op src1 src2 src3)
+;; Helper for creating `MInst.XmmRmiRVex` instructions.
+(decl xmm_rmir_vex (AvxOpcode Xmm XmmMemImm) Xmm)
+(rule (xmm_rmir_vex op src1 src2)
       (let ((dst WritableXmm (temp_writable_xmm))
-            (_ Unit (emit (MInst.XmmRmRVex op
-                                           src1
-                                           src2
-                                           src3
-                                           dst))))
+            (_ Unit (emit (MInst.XmmRmiRVex op src1 src2 dst))))
         dst))
+
+;; Helper for creating `MInst.XmmRmRImmVex` instructions.
+(decl xmm_rmr_imm_vex (AvxOpcode Xmm XmmMem u8) Xmm)
+(rule (xmm_rmr_imm_vex op src1 src2 imm)
+      (let ((dst WritableXmm (temp_writable_xmm))
+            (_ Unit (emit (MInst.XmmRmRImmVex op src1 src2 dst imm))))
+        dst))
+
+;; Helper for creating `MInst.XmmRmRVex3` instructions.
+(decl xmm_rmr_vex3 (AvxOpcode Xmm Xmm XmmMem) Xmm)
+(rule (xmm_rmr_vex3 op src1 src2 src3)
+      (let ((dst WritableXmm (temp_writable_xmm))
+            (_ Unit (emit (MInst.XmmRmRVex3 op src1 src2 src3 dst))))
+        dst))

 ;; Helper for creating `vfmadd213ss` instructions.
@@ -3117,28 +3171,28 @@
 ; but we don't support VEX memory encodings yet
 (decl x64_vfmadd213ss (Xmm Xmm Xmm) Xmm)
 (rule (x64_vfmadd213ss x y z)
-      (xmm_rmr_vex (AvxOpcode.Vfmadd213ss) x y z))
+      (xmm_rmr_vex3 (AvxOpcode.Vfmadd213ss) x y z))

 ;; Helper for creating `vfmadd213sd` instructions.
 ; TODO: This should have the (Xmm Xmm XmmMem) signature
 ; but we don't support VEX memory encodings yet
 (decl x64_vfmadd213sd (Xmm Xmm Xmm) Xmm)
 (rule (x64_vfmadd213sd x y z)
-      (xmm_rmr_vex (AvxOpcode.Vfmadd213sd) x y z))
+      (xmm_rmr_vex3 (AvxOpcode.Vfmadd213sd) x y z))

 ;; Helper for creating `vfmadd213ps` instructions.
 ; TODO: This should have the (Xmm Xmm XmmMem) signature
 ; but we don't support VEX memory encodings yet
 (decl x64_vfmadd213ps (Xmm Xmm Xmm) Xmm)
 (rule (x64_vfmadd213ps x y z)
-      (xmm_rmr_vex (AvxOpcode.Vfmadd213ps) x y z))
+      (xmm_rmr_vex3 (AvxOpcode.Vfmadd213ps) x y z))

 ;; Helper for creating `vfmadd213pd` instructions.
 ; TODO: This should have the (Xmm Xmm XmmMem) signature
 ; but we don't support VEX memory encodings yet
 (decl x64_vfmadd213pd (Xmm Xmm Xmm) Xmm)
 (rule (x64_vfmadd213pd x y z)
-      (xmm_rmr_vex (AvxOpcode.Vfmadd213pd) x y z))
+      (xmm_rmr_vex3 (AvxOpcode.Vfmadd213pd) x y z))


 ;; Helper for creating `sqrtss` instructions.
@@ -3836,6 +3890,7 @@
 (convert RegMemImm XmmMemImm mov_rmi_to_xmm)
 (convert Xmm XmmMem xmm_to_xmm_mem)
 (convert Xmm XmmMemImm xmm_to_xmm_mem_imm)
+(convert XmmMem XmmMemImm xmm_mem_to_xmm_mem_imm)
 (convert XmmMem RegMem xmm_mem_to_reg_mem)
 (convert WritableXmm Xmm writable_xmm_to_xmm)
 (convert WritableXmm WritableReg writable_xmm_to_reg)