x64: Enable load-coalescing for SSE/AVX instructions (#5841)
* x64: Enable load-coalescing for SSE/AVX instructions

  This commit unlocks the ability to fold loads into operands of SSE and AVX
  instructions. When it happens this is beneficial both for code size and for
  reducing register pressure.

  Previously this was not done because most SSE instructions require their
  memory operands to be aligned. AVX instructions, however, have no alignment
  requirements. The solution implemented here is one recommended by Chris:
  add a new `XmmMemAligned` newtype wrapper around `XmmMem`. All SSE
  instructions are now annotated as requiring an `XmmMemAligned` operand,
  except for a few new instruction styles used specifically for instructions
  that don't require alignment (e.g. `movdqu` and the `*sd` and `*ss`
  instructions). All existing instruction helpers continue to take `XmmMem`,
  however. This way, if an AVX lowering is chosen, the operand can be used
  as-is. If an SSE lowering is chosen, an automatic conversion from `XmmMem`
  to `XmmMemAligned` kicks in. This conversion only fails for unaligned
  addresses, in which case a load instruction is emitted and the operand
  becomes a temporary register instead. A number of prior `Xmm` arguments
  have also been converted to `XmmMem`.

  One change from this commit is that loading an unaligned operand for an SSE
  instruction previously used the "correct type" of load, e.g. `movups` for
  f32x4 or `movupd` for f64x2, but the load now happens in a context without
  type information, so a `movdqu` instruction is generated instead. According
  to [this Stack Overflow question][question] modern processors don't
  penalize this "wrong" choice of type when the operand is then used by
  f32- or f64-oriented instructions.

  Finally, this commit improves reuse of logic in the `put_in_*_mem*` helper,
  sharing code with `sinkable_load` to avoid duplication. With this in place
  various ISLE rules have been updated as well. In the tests it can be seen
  that AVX instructions are now automatically load-coalesced and use memory
  operands in a few cases.

  [question]: https://stackoverflow.com/questions/40854819/is-there-any-situation-where-using-movdqu-and-movupd-is-better-than-movups

* Fix tests

* Fix move-and-extend to be unaligned

  Like the instructions above, these don't have the alignment requirements of
  other xmm instructions. Additionally, add some ISA tests to ensure that
  their output is checked.

* Review comments
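As a rough illustration of the `XmmMem` to `XmmMemAligned` fallback described in the commit message, here is a minimal, self-contained Rust sketch. It is not Cranelift's actual implementation: the types, the helper name `put_in_xmm_mem_aligned`, and the register choices below are hypothetical stand-ins for the real lowering machinery.

// Minimal sketch of the XmmMem -> XmmMemAligned conversion described above.
// Illustrative only: these types and helper names are hypothetical and do
// not match Cranelift's actual lowering APIs.

/// An operand that is either already in an xmm register or a memory address.
enum XmmMem {
    Reg(String),                      // e.g. "xmm3"
    Mem { addr: String, align: u32 }, // address with its known alignment
}

/// An operand that is safe to use directly with an SSE instruction, which
/// requires 16-byte-aligned memory operands.
enum XmmMemAligned {
    Reg(String),
    Mem(String),
}

/// Convert an `XmmMem` into an `XmmMemAligned`. If the address isn't known
/// to be 16-byte aligned, emit an unaligned load (`movdqu`) into a temporary
/// register and use that register instead, mirroring the fallback the commit
/// message describes.
fn put_in_xmm_mem_aligned(op: XmmMem, emitted: &mut Vec<String>) -> XmmMemAligned {
    match op {
        XmmMem::Reg(r) => XmmMemAligned::Reg(r),
        XmmMem::Mem { addr, align } if align >= 16 => XmmMemAligned::Mem(addr),
        XmmMem::Mem { addr, .. } => {
            // Unaligned: load into a temporary register first.
            let tmp = "xmm15".to_string(); // stand-in for a fresh temporary
            emitted.push(format!("movdqu {tmp}, [{addr}]"));
            XmmMemAligned::Reg(tmp)
        }
    }
}

fn main() {
    let mut emitted = Vec::new();
    // An SSE lowering goes through the conversion; an operand only known to
    // be 4-byte aligned therefore gets loaded into a temporary first.
    let operand = XmmMem::Mem { addr: "rdi + 4".to_string(), align: 4 };
    match put_in_xmm_mem_aligned(operand, &mut emitted) {
        XmmMemAligned::Reg(r) => emitted.push(format!("addps xmm0, {r}")),
        XmmMemAligned::Mem(m) => emitted.push(format!("addps xmm0, [{m}]")),
    }
    for insn in &emitted {
        println!("{insn}");
    }
}

An AVX lowering would skip this conversion entirely and fold the memory address straight into the instruction, which is why the instruction helpers keep accepting `XmmMem`.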
@@ -2155,10 +2155,6 @@
 ;; Rules for `fadd` ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 
-;; N.B.: there are no load-op merging rules here. We can't guarantee
-;; the RHS (if a load) is 128-bit aligned, so we must avoid merging a
-;; load. Likewise for other ops below.
-
 (rule (lower (has_type $F32 (fadd x y)))
       (x64_addss x y))
 (rule (lower (has_type $F64 (fadd x y)))
       (x64_addsd x y))
@@ -2168,6 +2164,17 @@
 (rule (lower (has_type $F64X2 (fadd x y)))
       (x64_addpd x y))
 
+;; The above rules automatically sink loads for rhs operands, so additionally
+;; add rules for sinking loads with lhs operands.
+(rule 1 (lower (has_type $F32 (fadd (sinkable_load x) y)))
+      (x64_addss y (sink_load x)))
+(rule 1 (lower (has_type $F64 (fadd (sinkable_load x) y)))
+      (x64_addsd y (sink_load x)))
+(rule 1 (lower (has_type $F32X4 (fadd (sinkable_load x) y)))
+      (x64_addps y (sink_load x)))
+(rule 1 (lower (has_type $F64X2 (fadd (sinkable_load x) y)))
+      (x64_addpd y (sink_load x)))
+
 ;; Rules for `fsub` ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 
 (rule (lower (has_type $F32 (fsub x y)))
@@ -2190,6 +2197,17 @@
 (rule (lower (has_type $F64X2 (fmul x y)))
       (x64_mulpd x y))
 
+;; The above rules automatically sink loads for rhs operands, so additionally
+;; add rules for sinking loads with lhs operands.
+(rule 1 (lower (has_type $F32 (fmul (sinkable_load x) y)))
+      (x64_mulss y (sink_load x)))
+(rule 1 (lower (has_type $F64 (fmul (sinkable_load x) y)))
+      (x64_mulsd y (sink_load x)))
+(rule 1 (lower (has_type $F32X4 (fmul (sinkable_load x) y)))
+      (x64_mulps y (sink_load x)))
+(rule 1 (lower (has_type $F64X2 (fmul (sinkable_load x) y)))
+      (x64_mulpd y (sink_load x)))
+
 ;; Rules for `fdiv` ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 
 (rule (lower (has_type $F32 (fdiv x y)))
@@ -2983,7 +3001,7 @@
 (tmp Xmm (x64_pxor tmp dst))
 
 ;; Convert the packed float to packed doubleword.
-(dst Xmm (x64_cvttps2dq $F32X4 dst))
+(dst Xmm (x64_cvttps2dq dst))
 
 ;; Set top bit only if < 0
 (tmp Xmm (x64_pand dst tmp))
@@ -3064,7 +3082,7 @@
 ;; Overflow lanes greater than the maximum allowed signed value will
 ;; set to 0x80000000. Negative and NaN lanes will be 0x0
 (tmp1 Xmm dst)
-(dst Xmm (x64_cvttps2dq $F32X4 dst))
+(dst Xmm (x64_cvttps2dq dst))
 
 ;; Set lanes to src - max_signed_int
 (tmp1 Xmm (x64_subps tmp1 tmp2))
@@ -3074,7 +3092,7 @@
 (tmp2 Xmm (x64_cmpps tmp2 tmp1 (FcmpImm.LessThanOrEqual)))
 
 ;; Convert those set of lanes that have the max_signed_int factored out.
-(tmp1 Xmm (x64_cvttps2dq $F32X4 tmp1))
+(tmp1 Xmm (x64_cvttps2dq tmp1))
 
 ;; Prepare converted lanes by zeroing negative lanes and prepping lanes
 ;; that have positive overflow (based on the mask) by setting these lanes