The x64 backend currently builds the `RealRegUniverse` in a way that generates somewhat suboptimal code. In many blocks we see callee-saved (non-volatile) registers (r12, r13, r14, rbx) used first, even in very short leaf functions where there are plenty of volatiles available, and this leads to unnecessary spills/reloads. On one (local) test program, a medium-sized C benchmark compiled to Wasm and run on Wasmtime, I am seeing a ~10% performance improvement with this change. The win will be less pronounced in programs with high register pressure (there we are likely to use all registers regardless, so the prologue/epilogue will save/restore all callee-saves anyway) or in programs with fewer calls, but it is clear for small functions, and in many cases the change removes prologue/epilogue clobber-saves altogether.

Separately, I think the RA's coalescing is tripping up a bit in some cases; see e.g. the filetest touched by this commit that loads a value into %rsi, then moves it to %rax and returns immediately. This is an orthogonal issue, though, and should be addressed (if worthwhile) in regalloc.rs.
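For concreteness, here is a minimal, self-contained Rust sketch of the idea: allocatable GPRs are listed with caller-saved (volatile) registers ahead of callee-saved ones, so the allocator only reaches for r12/r13/r14/rbx under real register pressure. The `Gpr` enum and `gpr_allocation_order` helper are hypothetical illustrations, not the actual regalloc.rs `RealRegUniverse` API.

```rust
// Hypothetical sketch; names are illustrative, not the regalloc.rs API.
#[derive(Debug, Clone, Copy)]
enum Gpr {
    // Caller-saved (volatile) under the System V x64 ABI: free to use,
    // since the caller is responsible for preserving them.
    Rax, Rcx, Rdx, Rsi, Rdi, R8, R9, R10, R11,
    // Callee-saved (non-volatile): touching any of these forces a
    // save/restore pair in the prologue/epilogue.
    R12, R13, R14, Rbx,
}

/// Allocation order for general-purpose registers: volatiles first,
/// so short leaf functions never touch a callee-save at all.
fn gpr_allocation_order() -> Vec<Gpr> {
    use Gpr::*;
    let volatiles = [Rax, Rcx, Rdx, Rsi, Rdi, R8, R9, R10, R11];
    let callee_saves = [R12, R13, R14, Rbx];
    volatiles.iter().chain(callee_saves.iter()).copied().collect()
}

fn main() {
    // A function needing only a few registers is served entirely from
    // the volatile prefix, leaving the clobber-save set empty.
    println!("{:?}", gpr_allocation_order());
}
```

The filetest referenced above follows.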
```
test compile
set enable_simd
target x86_64 skylake
feature "experimental_x64"

function %bnot_b32x4(b32x4) -> b32x4 {
block0(v0: b32x4):
  v1 = bnot v0
  return v1
}
; check: pcmpeqd %xmm1, %xmm1
; nextln: pxor %xmm1, %xmm0

function %vany_true_b32x4(b32x4) -> b1 {
block0(v0: b32x4):
  v1 = vany_true v0
  return v1
}
; check: ptest %xmm0, %xmm0
; nextln: setnz %sil

function %vall_true_i64x2(i64x2) -> b1 {
block0(v0: i64x2):
  v1 = vall_true v0
  return v1
}
; check: pxor %xmm1, %xmm1
; nextln: pcmpeqq %xmm0, %xmm1
; nextln: ptest %xmm1, %xmm1
; nextln: setz %sil
```