x64: avoid load-coalescing SIMD operations with non-aligned loads
Fixes #2943, though not as optimally as might be desired. With x64 SIMD instructions, the memory operand must be aligned; this change adds that check. There are cases, however, where we can do better; see #3106.
@@ -153,6 +153,12 @@ fn is_mergeable_load<C: LowerCtx<I = Inst>>(
         return None;
     }
 
+    // SIMD instructions can only be load-coalesced when the loaded value comes
+    // from an aligned address.
+    if load_ty.is_vector() && !insn_data.memflags().map_or(false, |f| f.aligned()) {
+        return None;
+    }
+
     // Just testing the opcode is enough, because the width will always match if
     // the type does (and the type should match if the CLIF is properly
     // constructed).
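The guard's logic can be sketched in isolation. The following is a simplified, self-contained model (the `MemFlags` and `Type` structs here are stand-ins, not the real Cranelift types): a vector load is only mergeable into a SIMD instruction's memory operand when its memory flags mark the address as aligned; absent or unaligned flags force the load to stay separate, while scalar loads are unaffected by this check.

```rust
// Hypothetical, simplified model of the guard added in this commit.
// Real Cranelift uses ir::MemFlags and ir::Type; these are stand-ins.

#[derive(Clone, Copy)]
struct MemFlags {
    aligned: bool,
}

impl MemFlags {
    fn aligned(&self) -> bool {
        self.aligned
    }
}

#[derive(Clone, Copy)]
struct Type {
    vector: bool,
}

impl Type {
    fn is_vector(&self) -> bool {
        self.vector
    }
}

/// Mirrors the new check: a vector load with no memflags, or with
/// memflags that do not assert alignment, must not be merged.
fn can_merge_load(load_ty: Type, memflags: Option<MemFlags>) -> bool {
    !(load_ty.is_vector() && !memflags.map_or(false, |f| f.aligned()))
}

fn main() {
    // Aligned vector load: safe to coalesce into the SIMD instruction.
    assert!(can_merge_load(Type { vector: true }, Some(MemFlags { aligned: true })));
    // Unaligned or unknown-alignment vector load: keep the load separate.
    assert!(!can_merge_load(Type { vector: true }, Some(MemFlags { aligned: false })));
    assert!(!can_merge_load(Type { vector: true }, None));
    // Scalar loads are unaffected by this check.
    assert!(can_merge_load(Type { vector: false }, None));
    println!("ok");
}
```

The `map_or(false, ...)` default is the conservative choice: if the instruction carries no memory flags at all, alignment cannot be proven, so merging is refused.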
||||