Rework/simplify unwind infrastructure and implement Windows unwind.

Our previous implementation of unwind infrastructure was somewhat
complex and brittle: it parsed generated instructions in order to
reverse-engineer unwind info from prologues. It also relied on some
fragile linkage to communicate instruction-layout information that VCode
was not designed to provide.

A much simpler, more reliable, and easier-to-reason-about approach is to
embed unwind directives as pseudo-instructions in the prologue as we
generate it. That way, we can say what we mean and just emit it
directly.
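
To make the shape of this concrete, here is a small, self-contained
sketch (toy types such as `UnwindDirective` and `gen_prologue` are
stand-ins, not Cranelift's actual `Inst`/`UnwindInst`/ABI code) of what
"emit the directive inline while generating the prologue" means; the
directive variants mirror the `UnwindInst` enum added in this change.

```rust
/// Toy version of the unwind pseudo-instruction payload.
#[derive(Debug)]
enum UnwindDirective {
    PushFrameRegs { offset_upward_to_caller_sp: u32 },
    DefineNewFrame { offset_upward_to_caller_sp: u32, offset_downward_to_clobbers: u32 },
    SaveReg { clobber_offset: u32, reg: &'static str },
}

/// Toy VCode instruction: a "real" machine op, or an unwind pseudo-inst
/// that the emitter would simply forward to the unwind-info sink.
#[derive(Debug)]
enum Inst {
    Op(String),
    Unwind(UnwindDirective),
}

fn gen_prologue(clobbers: &[&'static str]) -> Vec<Inst> {
    let mut insts = Vec::new();
    // push rbp; tell the unwinder the caller's SP is 16 bytes above ours.
    insts.push(Inst::Op("push rbp".to_string()));
    insts.push(Inst::Unwind(UnwindDirective::PushFrameRegs {
        offset_upward_to_caller_sp: 16, // saved RBP + return address
    }));
    // mov rbp, rsp; define the new frame, anchored just below saved RBP/RIP.
    let clobber_size = 8 * clobbers.len() as u32;
    insts.push(Inst::Op("mov rbp, rsp".to_string()));
    insts.push(Inst::Unwind(UnwindDirective::DefineNewFrame {
        offset_upward_to_caller_sp: 16,
        offset_downward_to_clobbers: clobber_size,
    }));
    // Allocate space and save each clobber, describing each save as we
    // emit it -- no later reverse-engineering of the prologue needed.
    insts.push(Inst::Op(format!("sub rsp, {}", clobber_size)));
    for (i, reg) in clobbers.iter().copied().enumerate() {
        let off = 8 * i as u32;
        insts.push(Inst::Op(format!("mov [rsp + {}], {}", off, reg)));
        insts.push(Inst::Unwind(UnwindDirective::SaveReg {
            clobber_offset: off,
            reg,
        }));
    }
    insts
}

fn main() {
    for inst in gen_prologue(&["r12", "r13"]) {
        println!("{:?}", inst);
    }
}
```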

The usual reasoning that leads to the reverse-engineering approach is
that metadata is hard to keep in sync across optimization passes; but
here, (i) prologues are generated at the very end of the pipeline, and
(ii) if we ever do a post-prologue-gen optimization, we can treat unwind
directives as black boxes with unknown side-effects, just as we do for
some other pseudo-instructions today.

It turned out to be easier to just build this for both x64 and aarch64
(since they share a factored-out ABI implementation) and to wire up the
platform-specific unwind-info generation for both Windows and SystemV.
Now we have simpler unwind on all platforms, and we can delete the old
unwind infrastructure as soon as we remove the old backend.

Supporting Fastcall unwind in particular had a few consequences that
led to a refactor of the common ABI code. Windows only supports naming
clobbered-register save locations within 240 bytes of the frame-pointer
register, whichever register one chooses for that role (RSP or RBP). We
had previously saved clobbers below the fixed frame (and below
nominal-SP). The 240-byte range has to include the old RBP too, so we're
forced to place clobbers at the top of the frame, just below the saved
RBP/RIP pair. This is fine; we always keep a frame pointer anyway
because we use it to refer to stack args. It does mean that the offsets
of fixed-frame slots (spillslots, stackslots) from RBP are no longer
known before regalloc. So if we ever want to index these off of RBP
rather than nominal-SP (e.g., because we add support for `alloca` and
dynamic frame growth), we'll need a "nominal-BP" mode that is resolved
after regalloc and clobber-save code generation. I added a comment to
this effect in `abi_impl.rs`.
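
For a concrete view of the constraint, here is a self-contained sketch
of the addressing math under the layout described above (assumed toy
code, not Cranelift or Windows SDK code): a clobber-save slot lives at
`RBP - offset_downward_to_clobbers + clobber_offset`, and `UNWIND_INFO`
encodes that downward offset scaled by 16 with a maximum of 240, which
is why the clobber area must sit directly under the saved RBP/RIP.

```rust
/// `offset_downward_to_clobbers` plays the role of the UNWIND_INFO
/// frame-register offset: encoded scaled by 16, capped at 240 bytes.
fn clobber_slot_addr(rbp: u64, offset_downward_to_clobbers: u32, clobber_offset: u32) -> u64 {
    assert!(offset_downward_to_clobbers % 16 == 0);
    assert!(offset_downward_to_clobbers <= 240);
    rbp - u64::from(offset_downward_to_clobbers) + u64::from(clobber_offset)
}

fn main() {
    // Example: a 32-byte clobber area just below saved RBP/RIP; the second
    // saved register sits at clobber_offset 8.
    let rbp = 0x0000_7fff_ffff_0000u64;
    println!("{:#x}", clobber_slot_addr(rbp, 32, 8));
}
```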

The above refactor touched both x64 and aarch64 because of the shared
code. It had a further effect: the old aarch64 prologue generation
subtracted from `sp` once to allocate space, then used stores to `[sp,
offset]` to save clobbers. Unfortunately that offset is a scaled 7-bit
immediate (at most +504 bytes for 64-bit register pairs), so if there
are enough clobbered registers (and there can be -- aarch64 has 384
bytes of registers; at least one unit test hits this) the stores/loads
go out of range. I really don't want to synthesize large-offset
sequences here; better to go back to the simpler pre-indexed/post-indexed
`stp r1, r2, [sp, #-16]!` form that works just like a "push". It's
likely not much worse microarchitecturally (it adds a dependence chain
on SP, but oh well), and it actually saves an instruction if there's no
other frame to allocate. As a further advantage, it's much simpler to
understand; simpler is usually better.
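
As a sketch of the resulting save strategy (toy types again, not the
real `Inst::StoreP64`/`PairAMode` code): every integer clobber save is
the same pre-indexed pair store, so the immediate is always -16 no
matter how large the clobber area or the rest of the frame gets, and a
leftover odd register is paired with `xzr`, as the real clobber-save
loop in this change does.

```rust
#[derive(Debug)]
struct StpPreIndexed {
    rt: &'static str,
    rt2: &'static str,
    /// Always -16: `stp rt, rt2, [sp, #-16]!`, i.e. a "push" of one pair.
    imm: i32,
}

fn save_int_clobbers(clobbers: &[&'static str]) -> Vec<StpPreIndexed> {
    clobbers
        .chunks(2)
        .map(|pair| StpPreIndexed {
            rt: pair[0],
            // Pair a leftover odd register with the zero register.
            rt2: pair.get(1).copied().unwrap_or("xzr"),
            imm: -16,
        })
        .collect()
}

fn main() {
    for stp in save_int_clobbers(&["x19", "x20", "x21", "x22", "x23"]) {
        println!("{:?}", stp);
    }
}
```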

This PR adds the new backend on Windows to CI as well.
Author: Chris Fallin
Date:   2021-03-06 15:43:09 -08:00
parent  05688aa8f4
commit  2d5db92a9e
63 changed files with 905 additions and 1088 deletions


@@ -235,6 +235,18 @@ pub(crate) fn define() -> SettingGroup {
         false,
     );
+    settings.add_bool(
+        "unwind_info",
+        r#"
+            Generate unwind info. This increases metadata size and compile time,
+            but allows for the debugger to trace frames, is needed for GC tracing
+            that relies on libunwind (such as in Wasmtime), and is
+            unconditionally needed on certain platforms (such as Windows) that
+            must always be able to unwind.
+        "#,
+        true,
+    );
     // BaldrMonkey requires that not-yet-relocated function addresses be encoded
     // as all-ones bitpatterns.
     settings.add_bool(

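For embedders, the knob above is consumed through the usual shared
settings interface; a minimal usage sketch (assuming the
`cranelift-codegen` crate as of this change, with the generated
`unwind_info()` accessor):

```rust
use cranelift_codegen::settings::{self, Configurable};

fn main() {
    let mut builder = settings::builder();
    // Defaults to true per the hunk above; set explicitly here for clarity.
    builder.set("unwind_info", "true").unwrap();
    let flags = settings::Flags::new(builder);
    assert!(flags.unwind_info());
}
```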

@@ -8,6 +8,7 @@ use crate::ir::Opcode;
use crate::ir::{ExternalName, LibCall}; use crate::ir::{ExternalName, LibCall};
use crate::isa; use crate::isa;
use crate::isa::aarch64::{inst::EmitState, inst::*}; use crate::isa::aarch64::{inst::EmitState, inst::*};
use crate::isa::unwind::UnwindInst;
use crate::machinst::*; use crate::machinst::*;
use crate::settings; use crate::settings;
use crate::{CodegenError, CodegenResult}; use crate::{CodegenError, CodegenResult};
@@ -472,7 +473,7 @@ impl ABIMachineSpec for AArch64MachineDeps {
} }
} }
fn gen_prologue_frame_setup() -> SmallInstVec<Inst> { fn gen_prologue_frame_setup(flags: &settings::Flags) -> SmallInstVec<Inst> {
let mut insts = SmallVec::new(); let mut insts = SmallVec::new();
// stp fp (x29), lr (x30), [sp, #-16]! // stp fp (x29), lr (x30), [sp, #-16]!
insts.push(Inst::StoreP64 { insts.push(Inst::StoreP64 {
@@ -484,6 +485,15 @@ impl ABIMachineSpec for AArch64MachineDeps {
), ),
flags: MemFlags::trusted(), flags: MemFlags::trusted(),
}); });
if flags.unwind_info() {
insts.push(Inst::Unwind {
inst: UnwindInst::PushFrameRegs {
offset_upward_to_caller_sp: 16, // FP, LR
},
});
}
// mov fp (x29), sp. This uses the ADDI rd, rs, 0 form of `MOV` because // mov fp (x29), sp. This uses the ADDI rd, rs, 0 form of `MOV` because
// the usual encoding (`ORR`) does not work with SP. // the usual encoding (`ORR`) does not work with SP.
insts.push(Inst::AluRRImm12 { insts.push(Inst::AluRRImm12 {
@@ -498,20 +508,14 @@ impl ABIMachineSpec for AArch64MachineDeps {
insts insts
} }
fn gen_epilogue_frame_restore() -> SmallInstVec<Inst> { fn gen_epilogue_frame_restore(_: &settings::Flags) -> SmallInstVec<Inst> {
let mut insts = SmallVec::new(); let mut insts = SmallVec::new();
// MOV (alias of ORR) interprets x31 as XZR, so use an ADD here. // N.B.: sp is already adjusted to the appropriate place by the
// MOV to SP is an alias of ADD. // clobber-restore code (which also frees the fixed frame). Hence, there
insts.push(Inst::AluRRImm12 { // is no need for the usual `mov sp, fp` here.
alu_op: ALUOp::Add64,
rd: writable_stack_reg(), // `ldp fp, lr, [sp], #16`
rn: fp_reg(),
imm12: Imm12 {
bits: 0,
shift12: false,
},
});
insts.push(Inst::LoadP64 { insts.push(Inst::LoadP64 {
rt: writable_fp_reg(), rt: writable_fp_reg(),
rt2: writable_link_reg(), rt2: writable_link_reg(),
@@ -521,7 +525,6 @@ impl ABIMachineSpec for AArch64MachineDeps {
), ),
flags: MemFlags::trusted(), flags: MemFlags::trusted(),
}); });
insts insts
} }
@@ -535,21 +538,43 @@ impl ABIMachineSpec for AArch64MachineDeps {
// nominal SP offset; abi_impl generic code will do that. // nominal SP offset; abi_impl generic code will do that.
fn gen_clobber_save( fn gen_clobber_save(
call_conv: isa::CallConv, call_conv: isa::CallConv,
_: &settings::Flags, flags: &settings::Flags,
clobbers: &Set<Writable<RealReg>>, clobbers: &Set<Writable<RealReg>>,
fixed_frame_storage_size: u32, fixed_frame_storage_size: u32,
_outgoing_args_size: u32,
) -> (u64, SmallVec<[Inst; 16]>) { ) -> (u64, SmallVec<[Inst; 16]>) {
let mut insts = SmallVec::new(); let mut insts = SmallVec::new();
let (clobbered_int, clobbered_vec) = get_regs_saved_in_prologue(call_conv, clobbers); let (clobbered_int, clobbered_vec) = get_regs_saved_in_prologue(call_conv, clobbers);
let (int_save_bytes, vec_save_bytes) = saved_reg_stack_size(&clobbered_int, &clobbered_vec); let (int_save_bytes, vec_save_bytes) = saved_reg_stack_size(&clobbered_int, &clobbered_vec);
let total_save_bytes = (vec_save_bytes + int_save_bytes) as i32; let total_save_bytes = int_save_bytes + vec_save_bytes;
insts.extend(Self::gen_sp_reg_adjust( let clobber_size = total_save_bytes as i32;
-(total_save_bytes + fixed_frame_storage_size as i32),
));
for (i, reg_pair) in clobbered_int.chunks(2).enumerate() { if flags.unwind_info() {
// The *unwind* frame (but not the actual frame) starts at the
// clobbers, just below the saved FP/LR pair.
insts.push(Inst::Unwind {
inst: UnwindInst::DefineNewFrame {
offset_downward_to_clobbers: clobber_size as u32,
offset_upward_to_caller_sp: 16, // FP, LR
},
});
}
// We use pre-indexed addressing modes here, rather than the possibly
// more efficient "subtract sp once then used fixed offsets" scheme,
// because (i) we cannot necessarily guarantee that the offset of a
// clobber-save slot will be within a SImm7Scaled (+504-byte) offset
// range of the whole frame including other slots, it is more complex to
// conditionally generate a two-stage SP adjustment (clobbers then fixed
// frame) otherwise, and generally we just want to maintain simplicity
// here for maintainability. Because clobbers are at the top of the
// frame, just below FP, all that is necessary is to use the pre-indexed
// "push" `[sp, #-16]!` addressing mode.
//
// `frame_offset` tracks offset above start-of-clobbers for unwind-info
// purposes.
let mut clobber_offset = clobber_size as u32;
for reg_pair in clobbered_int.chunks(2) {
let (r1, r2) = if reg_pair.len() == 2 { let (r1, r2) = if reg_pair.len() == 2 {
// .to_reg().to_reg(): Writable<RealReg> --> RealReg --> Reg // .to_reg().to_reg(): Writable<RealReg> --> RealReg --> Reg
(reg_pair[0].to_reg().to_reg(), reg_pair[1].to_reg().to_reg()) (reg_pair[0].to_reg().to_reg(), reg_pair[1].to_reg().to_reg())
@@ -560,28 +585,56 @@ impl ABIMachineSpec for AArch64MachineDeps {
debug_assert!(r1.get_class() == RegClass::I64); debug_assert!(r1.get_class() == RegClass::I64);
debug_assert!(r2.get_class() == RegClass::I64); debug_assert!(r2.get_class() == RegClass::I64);
// stp r1, r2, [sp, #(i * #16)] // stp r1, r2, [sp, #-16]!
insts.push(Inst::StoreP64 { insts.push(Inst::StoreP64 {
rt: r1, rt: r1,
rt2: r2, rt2: r2,
mem: PairAMode::SignedOffset( mem: PairAMode::PreIndexed(
stack_reg(), writable_stack_reg(),
SImm7Scaled::maybe_from_i64((i * 16) as i64, types::I64).unwrap(), SImm7Scaled::maybe_from_i64(-16, types::I64).unwrap(),
), ),
flags: MemFlags::trusted(), flags: MemFlags::trusted(),
}); });
if flags.unwind_info() {
clobber_offset -= 8;
if r2 != zero_reg() {
insts.push(Inst::Unwind {
inst: UnwindInst::SaveReg {
clobber_offset,
reg: r2.to_real_reg(),
},
});
}
clobber_offset -= 8;
insts.push(Inst::Unwind {
inst: UnwindInst::SaveReg {
clobber_offset,
reg: r1.to_real_reg(),
},
});
}
} }
let vec_offset = int_save_bytes; for reg in clobbered_vec.iter() {
for (i, reg) in clobbered_vec.iter().enumerate() {
insts.push(Inst::FpuStore128 { insts.push(Inst::FpuStore128 {
rd: reg.to_reg().to_reg(), rd: reg.to_reg().to_reg(),
mem: AMode::Unscaled( mem: AMode::PreIndexed(writable_stack_reg(), SImm9::maybe_from_i64(-16).unwrap()),
stack_reg(),
SImm9::maybe_from_i64((vec_offset + (i * 16)) as i64).unwrap(),
),
flags: MemFlags::trusted(), flags: MemFlags::trusted(),
}); });
if flags.unwind_info() {
clobber_offset -= 16;
insts.push(Inst::Unwind {
inst: UnwindInst::SaveReg {
clobber_offset,
reg: reg.to_reg(),
},
});
}
}
// Allocate the fixed frame below the clobbers if necessary.
if fixed_frame_storage_size > 0 {
insts.extend(Self::gen_sp_reg_adjust(-(fixed_frame_storage_size as i32)));
} }
(total_save_bytes as u64, insts) (total_save_bytes as u64, insts)
@@ -591,14 +644,25 @@ impl ABIMachineSpec for AArch64MachineDeps {
call_conv: isa::CallConv, call_conv: isa::CallConv,
flags: &settings::Flags, flags: &settings::Flags,
clobbers: &Set<Writable<RealReg>>, clobbers: &Set<Writable<RealReg>>,
_fixed_frame_storage_size: u32, fixed_frame_storage_size: u32,
_outgoing_args_size: u32,
) -> SmallVec<[Inst; 16]> { ) -> SmallVec<[Inst; 16]> {
let mut insts = SmallVec::new(); let mut insts = SmallVec::new();
let (clobbered_int, clobbered_vec) = get_regs_saved_in_prologue(call_conv, clobbers); let (clobbered_int, clobbered_vec) = get_regs_saved_in_prologue(call_conv, clobbers);
let (int_save_bytes, vec_save_bytes) = saved_reg_stack_size(&clobbered_int, &clobbered_vec); // Free the fixed frame if necessary.
for (i, reg_pair) in clobbered_int.chunks(2).enumerate() { if fixed_frame_storage_size > 0 {
insts.extend(Self::gen_sp_reg_adjust(fixed_frame_storage_size as i32));
}
for reg in clobbered_vec.iter().rev() {
insts.push(Inst::FpuLoad128 {
rd: Writable::from_reg(reg.to_reg().to_reg()),
mem: AMode::PostIndexed(writable_stack_reg(), SImm9::maybe_from_i64(16).unwrap()),
flags: MemFlags::trusted(),
});
}
for reg_pair in clobbered_int.chunks(2).rev() {
let (r1, r2) = if reg_pair.len() == 2 { let (r1, r2) = if reg_pair.len() == 2 {
( (
reg_pair[0].map(|r| r.to_reg()), reg_pair[0].map(|r| r.to_reg()),
@@ -611,37 +675,18 @@ impl ABIMachineSpec for AArch64MachineDeps {
debug_assert!(r1.to_reg().get_class() == RegClass::I64); debug_assert!(r1.to_reg().get_class() == RegClass::I64);
debug_assert!(r2.to_reg().get_class() == RegClass::I64); debug_assert!(r2.to_reg().get_class() == RegClass::I64);
// ldp r1, r2, [sp, #(i * 16)] // ldp r1, r2, [sp], #16
insts.push(Inst::LoadP64 { insts.push(Inst::LoadP64 {
rt: r1, rt: r1,
rt2: r2, rt2: r2,
mem: PairAMode::SignedOffset( mem: PairAMode::PostIndexed(
stack_reg(), writable_stack_reg(),
SImm7Scaled::maybe_from_i64((i * 16) as i64, types::I64).unwrap(), SImm7Scaled::maybe_from_i64(16, I64).unwrap(),
), ),
flags: MemFlags::trusted(), flags: MemFlags::trusted(),
}); });
} }
for (i, reg) in clobbered_vec.iter().enumerate() {
insts.push(Inst::FpuLoad128 {
rd: Writable::from_reg(reg.to_reg().to_reg()),
mem: AMode::Unscaled(
stack_reg(),
SImm9::maybe_from_i64(((i * 16) + int_save_bytes) as i64).unwrap(),
),
flags: MemFlags::trusted(),
});
}
// For non-baldrdash calling conventions, the frame pointer
// will be moved into the stack pointer in the epilogue, so we
// can skip restoring the stack pointer value with this `add`.
if call_conv.extends_baldrdash() {
let total_save_bytes = (int_save_bytes + vec_save_bytes) as i32;
insts.extend(Self::gen_sp_reg_adjust(total_save_bytes));
}
// If this is Baldrdash-2020, restore the callee (i.e., our) TLS // If this is Baldrdash-2020, restore the callee (i.e., our) TLS
// register. We may have allocated it for something else and clobbered // register. We may have allocated it for something else and clobbered
// it, but the ABI expects us to leave the TLS register unchanged. // it, but the ABI expects us to leave the TLS register unchanged.


@@ -542,7 +542,6 @@ impl MachInstEmitInfo for EmitInfo {
impl MachInstEmit for Inst { impl MachInstEmit for Inst {
type State = EmitState; type State = EmitState;
type Info = EmitInfo; type Info = EmitInfo;
type UnwindInfo = super::unwind::AArch64UnwindInfo;
fn emit(&self, sink: &mut MachBuffer<Inst>, emit_info: &Self::Info, state: &mut EmitState) { fn emit(&self, sink: &mut MachBuffer<Inst>, emit_info: &Self::Info, state: &mut EmitState) {
// N.B.: we *must* not exceed the "worst-case size" used to compute // N.B.: we *must* not exceed the "worst-case size" used to compute
@@ -2379,6 +2378,10 @@ impl MachInstEmit for Inst {
&Inst::ValueLabelMarker { .. } => { &Inst::ValueLabelMarker { .. } => {
// Nothing; this is only used to compute debug info. // Nothing; this is only used to compute debug info.
} }
&Inst::Unwind { ref inst } => {
sink.add_unwind(inst.clone());
}
} }
let end_off = sink.cur_offset(); let end_off = sink.cur_offset();


@@ -8,6 +8,7 @@ use crate::ir::types::{
B1, B128, B16, B32, B64, B8, F32, F64, FFLAGS, I128, I16, I32, I64, I8, I8X16, IFLAGS, R32, R64, B1, B128, B16, B32, B64, B8, F32, F64, FFLAGS, I128, I16, I32, I64, I8, I8X16, IFLAGS, R32, R64,
}; };
use crate::ir::{ExternalName, MemFlags, Opcode, SourceLoc, TrapCode, Type, ValueLabel}; use crate::ir::{ExternalName, MemFlags, Opcode, SourceLoc, TrapCode, Type, ValueLabel};
use crate::isa::unwind::UnwindInst;
use crate::isa::CallConv; use crate::isa::CallConv;
use crate::machinst::*; use crate::machinst::*;
use crate::{settings, CodegenError, CodegenResult}; use crate::{settings, CodegenError, CodegenResult};
@@ -1216,6 +1217,11 @@ pub enum Inst {
reg: Reg, reg: Reg,
label: ValueLabel, label: ValueLabel,
}, },
/// An unwind pseudo-instruction.
Unwind {
inst: UnwindInst,
},
} }
fn count_zero_half_words(mut value: u64, num_half_words: u8) -> usize { fn count_zero_half_words(mut value: u64, num_half_words: u8) -> usize {
@@ -2026,6 +2032,7 @@ fn aarch64_get_regs(inst: &Inst, collector: &mut RegUsageCollector) {
&Inst::ValueLabelMarker { reg, .. } => { &Inst::ValueLabelMarker { reg, .. } => {
collector.add_use(reg); collector.add_use(reg);
} }
&Inst::Unwind { .. } => {}
&Inst::EmitIsland { .. } => {} &Inst::EmitIsland { .. } => {}
} }
} }
@@ -2779,6 +2786,7 @@ fn aarch64_map_regs<RUM: RegUsageMapper>(inst: &mut Inst, mapper: &RUM) {
&mut Inst::ValueLabelMarker { ref mut reg, .. } => { &mut Inst::ValueLabelMarker { ref mut reg, .. } => {
map_use(mapper, reg); map_use(mapper, reg);
} }
&mut Inst::Unwind { .. } => {}
} }
} }
@@ -4097,6 +4105,10 @@ impl Inst {
&Inst::ValueLabelMarker { label, reg } => { &Inst::ValueLabelMarker { label, reg } => {
format!("value_label {:?}, {}", label, reg.show_rru(mb_rru)) format!("value_label {:?}, {}", label, reg.show_rru(mb_rru))
} }
&Inst::Unwind { ref inst } => {
format!("unwind {:?}", inst)
}
} }
} }
} }


@@ -1,201 +1,2 @@
use super::*;
use crate::isa::aarch64::inst::{args::PairAMode, imms::Imm12, regs, ALUOp, Inst};
use crate::isa::unwind::input::{UnwindCode, UnwindInfo};
use crate::machinst::UnwindInfoContext;
use crate::result::CodegenResult;
use alloc::vec::Vec;
use regalloc::Reg;
#[cfg(feature = "unwind")] #[cfg(feature = "unwind")]
pub(crate) mod systemv; pub(crate) mod systemv;
pub struct AArch64UnwindInfo;
impl UnwindInfoGenerator<Inst> for AArch64UnwindInfo {
fn create_unwind_info(
context: UnwindInfoContext<Inst>,
) -> CodegenResult<Option<UnwindInfo<Reg>>> {
let word_size = 8u8;
let pair_size = word_size * 2;
let mut codes = Vec::new();
for i in context.prologue.clone() {
let i = i as usize;
let inst = &context.insts[i];
let offset = context.insts_layout[i];
match inst {
Inst::StoreP64 {
rt,
rt2,
mem: PairAMode::PreIndexed(rn, imm7),
..
} if *rt == regs::fp_reg()
&& *rt2 == regs::link_reg()
&& *rn == regs::writable_stack_reg()
&& imm7.value == -(pair_size as i16) =>
{
// stp fp (x29), lr (x30), [sp, #-16]!
codes.push((
offset,
UnwindCode::StackAlloc {
size: pair_size as u32,
},
));
codes.push((
offset,
UnwindCode::SaveRegister {
reg: *rt,
stack_offset: 0,
},
));
codes.push((
offset,
UnwindCode::SaveRegister {
reg: *rt2,
stack_offset: word_size as u32,
},
));
}
Inst::StoreP64 {
rt,
rt2,
mem: PairAMode::PreIndexed(rn, imm7),
..
} if rn.to_reg() == regs::stack_reg() && imm7.value % (pair_size as i16) == 0 => {
// stp r1, r2, [sp, #(i * #16)]
let stack_offset = imm7.value as u32;
codes.push((
offset,
UnwindCode::SaveRegister {
reg: *rt,
stack_offset,
},
));
if *rt2 != regs::zero_reg() {
codes.push((
offset,
UnwindCode::SaveRegister {
reg: *rt2,
stack_offset: stack_offset + word_size as u32,
},
));
}
}
Inst::AluRRImm12 {
alu_op: ALUOp::Add64,
rd,
rn,
imm12:
Imm12 {
bits: 0,
shift12: false,
},
} if *rd == regs::writable_fp_reg() && *rn == regs::stack_reg() => {
// mov fp (x29), sp.
codes.push((offset, UnwindCode::SetFramePointer { reg: rd.to_reg() }));
}
Inst::VirtualSPOffsetAdj { offset: adj } if offset > 0 => {
codes.push((offset, UnwindCode::StackAlloc { size: *adj as u32 }));
}
_ => {}
}
}
// TODO epilogues
let prologue_size = if context.prologue.len() == 0 {
0
} else {
context.insts_layout[context.prologue.end as usize - 1]
};
Ok(Some(UnwindInfo {
prologue_size,
prologue_unwind_codes: codes,
epilogues_unwind_codes: vec![],
function_size: context.len,
word_size,
initial_sp_offset: 0,
}))
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::cursor::{Cursor, FuncCursor};
use crate::ir::{ExternalName, Function, InstBuilder, Signature, StackSlotData, StackSlotKind};
use crate::isa::{lookup, CallConv};
use crate::settings::{builder, Flags};
use crate::Context;
use std::str::FromStr;
use target_lexicon::triple;
#[test]
fn test_simple_func() {
let isa = lookup(triple!("aarch64"))
.expect("expect aarch64 ISA")
.finish(Flags::new(builder()));
let mut context = Context::for_function(create_function(
CallConv::SystemV,
Some(StackSlotData::new(StackSlotKind::ExplicitSlot, 64)),
));
context.compile(&*isa).expect("expected compilation");
let result = context.mach_compile_result.unwrap();
let unwind_info = result.unwind_info.unwrap();
assert_eq!(
unwind_info,
UnwindInfo {
prologue_size: 12,
prologue_unwind_codes: vec![
(4, UnwindCode::StackAlloc { size: 16 }),
(
4,
UnwindCode::SaveRegister {
reg: regs::fp_reg(),
stack_offset: 0
}
),
(
4,
UnwindCode::SaveRegister {
reg: regs::link_reg(),
stack_offset: 8
}
),
(
8,
UnwindCode::SetFramePointer {
reg: regs::fp_reg()
}
)
],
epilogues_unwind_codes: vec![],
function_size: 24,
word_size: 8,
initial_sp_offset: 0,
}
);
}
fn create_function(call_conv: CallConv, stack_slot: Option<StackSlotData>) -> Function {
let mut func =
Function::with_name_signature(ExternalName::user(0, 0), Signature::new(call_conv));
let block0 = func.dfg.make_block();
let mut pos = FuncCursor::new(&mut func);
pos.insert_block(block0);
pos.ins().return_(&[]);
if let Some(stack_slot) = stack_slot {
func.stack_slots.push(stack_slot);
}
func
}
}


@@ -1,9 +1,7 @@
//! Unwind information for System V ABI (Aarch64). //! Unwind information for System V ABI (Aarch64).
use crate::isa::aarch64::inst::regs; use crate::isa::aarch64::inst::regs;
use crate::isa::unwind::input; use crate::isa::unwind::systemv::RegisterMappingError;
use crate::isa::unwind::systemv::{RegisterMappingError, UnwindInfo};
use crate::result::CodegenResult;
use gimli::{write::CommonInformationEntry, Encoding, Format, Register}; use gimli::{write::CommonInformationEntry, Encoding, Format, Register};
use regalloc::{Reg, RegClass}; use regalloc::{Reg, RegClass};
@@ -31,128 +29,40 @@ pub fn create_cie() -> CommonInformationEntry {
/// Map Cranelift registers to their corresponding Gimli registers. /// Map Cranelift registers to their corresponding Gimli registers.
pub fn map_reg(reg: Reg) -> Result<Register, RegisterMappingError> { pub fn map_reg(reg: Reg) -> Result<Register, RegisterMappingError> {
// For AArch64 DWARF register mappings, see:
//
// https://developer.arm.com/documentation/ihi0057/e/?lang=en#dwarf-register-names
//
// X0--X31 is 0--31; V0--V31 is 64--95.
match reg.get_class() { match reg.get_class() {
RegClass::I64 => Ok(Register(reg.get_hw_encoding().into())), RegClass::I64 => {
let reg = reg.get_hw_encoding() as u16;
Ok(Register(reg))
}
RegClass::V128 => {
let reg = reg.get_hw_encoding() as u16;
Ok(Register(64 + reg))
}
_ => Err(RegisterMappingError::UnsupportedRegisterBank("class?")), _ => Err(RegisterMappingError::UnsupportedRegisterBank("class?")),
} }
} }
pub(crate) fn create_unwind_info( pub(crate) struct RegisterMapper;
unwind: input::UnwindInfo<Reg>,
) -> CodegenResult<Option<UnwindInfo>> { impl crate::isa::unwind::systemv::RegisterMapper<Reg> for RegisterMapper {
struct RegisterMapper;
impl crate::isa::unwind::systemv::RegisterMapper<Reg> for RegisterMapper {
fn map(&self, reg: Reg) -> Result<u16, RegisterMappingError> { fn map(&self, reg: Reg) -> Result<u16, RegisterMappingError> {
Ok(map_reg(reg)?.0) Ok(map_reg(reg)?.0)
} }
fn sp(&self) -> u16 { fn sp(&self) -> u16 {
regs::stack_reg().get_hw_encoding().into() regs::stack_reg().get_hw_encoding().into()
} }
fn fp(&self) -> u16 {
regs::fp_reg().get_hw_encoding().into()
} }
let map = RegisterMapper; fn lr(&self) -> Option<u16> {
Ok(Some(UnwindInfo::build(unwind, &map)?)) Some(regs::link_reg().get_hw_encoding().into())
}
#[cfg(test)]
mod tests {
use crate::cursor::{Cursor, FuncCursor};
use crate::ir::{
types, AbiParam, ExternalName, Function, InstBuilder, Signature, StackSlotData,
StackSlotKind,
};
use crate::isa::{lookup, CallConv};
use crate::settings::{builder, Flags};
use crate::Context;
use gimli::write::Address;
use std::str::FromStr;
use target_lexicon::triple;
#[test]
fn test_simple_func() {
let isa = lookup(triple!("aarch64"))
.expect("expect aarch64 ISA")
.finish(Flags::new(builder()));
let mut context = Context::for_function(create_function(
CallConv::SystemV,
Some(StackSlotData::new(StackSlotKind::ExplicitSlot, 64)),
));
context.compile(&*isa).expect("expected compilation");
let fde = match context
.create_unwind_info(isa.as_ref())
.expect("can create unwind info")
{
Some(crate::isa::unwind::UnwindInfo::SystemV(info)) => {
info.to_fde(Address::Constant(1234))
} }
_ => panic!("expected unwind information"), fn lr_offset(&self) -> Option<u32> {
}; Some(8)
assert_eq!(format!("{:?}", fde), "FrameDescriptionEntry { address: Constant(1234), length: 24, lsda: None, instructions: [(4, CfaOffset(16)), (4, Offset(Register(29), -16)), (4, Offset(Register(30), -8)), (8, CfaRegister(Register(29)))] }");
}
fn create_function(call_conv: CallConv, stack_slot: Option<StackSlotData>) -> Function {
let mut func =
Function::with_name_signature(ExternalName::user(0, 0), Signature::new(call_conv));
let block0 = func.dfg.make_block();
let mut pos = FuncCursor::new(&mut func);
pos.insert_block(block0);
pos.ins().return_(&[]);
if let Some(stack_slot) = stack_slot {
func.stack_slots.push(stack_slot);
}
func
}
#[test]
fn test_multi_return_func() {
let isa = lookup(triple!("aarch64"))
.expect("expect aarch64 ISA")
.finish(Flags::new(builder()));
let mut context = Context::for_function(create_multi_return_function(CallConv::SystemV));
context.compile(&*isa).expect("expected compilation");
let fde = match context
.create_unwind_info(isa.as_ref())
.expect("can create unwind info")
{
Some(crate::isa::unwind::UnwindInfo::SystemV(info)) => {
info.to_fde(Address::Constant(4321))
}
_ => panic!("expected unwind information"),
};
assert_eq!(format!("{:?}", fde), "FrameDescriptionEntry { address: Constant(4321), length: 40, lsda: None, instructions: [(4, CfaOffset(16)), (4, Offset(Register(29), -16)), (4, Offset(Register(30), -8)), (8, CfaRegister(Register(29)))] }");
}
fn create_multi_return_function(call_conv: CallConv) -> Function {
let mut sig = Signature::new(call_conv);
sig.params.push(AbiParam::new(types::I32));
let mut func = Function::with_name_signature(ExternalName::user(0, 0), sig);
let block0 = func.dfg.make_block();
let v0 = func.dfg.append_block_param(block0, types::I32);
let block1 = func.dfg.make_block();
let block2 = func.dfg.make_block();
let mut pos = FuncCursor::new(&mut func);
pos.insert_block(block0);
pos.ins().brnz(v0, block2, &[]);
pos.ins().jump(block1, &[]);
pos.insert_block(block1);
pos.ins().return_(&[]);
pos.insert_block(block2);
pos.ins().return_(&[]);
func
} }
} }


@@ -65,7 +65,6 @@ impl MachBackend for AArch64Backend {
let buffer = vcode.emit(); let buffer = vcode.emit();
let frame_size = vcode.frame_size(); let frame_size = vcode.frame_size();
let unwind_info = vcode.unwind_info()?;
let stackslot_offsets = vcode.stackslot_offsets().clone(); let stackslot_offsets = vcode.stackslot_offsets().clone();
let disasm = if want_disasm { let disasm = if want_disasm {
@@ -80,7 +79,6 @@ impl MachBackend for AArch64Backend {
buffer, buffer,
frame_size, frame_size,
disasm, disasm,
unwind_info,
value_labels_ranges: Default::default(), value_labels_ranges: Default::default(),
stackslot_offsets, stackslot_offsets,
}) })
@@ -127,11 +125,18 @@ impl MachBackend for AArch64Backend {
) -> CodegenResult<Option<crate::isa::unwind::UnwindInfo>> { ) -> CodegenResult<Option<crate::isa::unwind::UnwindInfo>> {
use crate::isa::unwind::UnwindInfo; use crate::isa::unwind::UnwindInfo;
use crate::machinst::UnwindInfoKind; use crate::machinst::UnwindInfoKind;
Ok(match (result.unwind_info.as_ref(), kind) { Ok(match kind {
(Some(info), UnwindInfoKind::SystemV) => { UnwindInfoKind::SystemV => {
inst::unwind::systemv::create_unwind_info(info.clone())?.map(UnwindInfo::SystemV) let mapper = self::inst::unwind::systemv::RegisterMapper;
Some(UnwindInfo::SystemV(
crate::isa::unwind::systemv::create_unwind_info_from_insts(
&result.buffer.unwind_info[..],
result.buffer.data.len(),
&mapper,
)?,
))
} }
(Some(_info), UnwindInfoKind::Windows) => { UnwindInfoKind::Windows => {
// TODO: support Windows unwind info on AArch64 // TODO: support Windows unwind info on AArch64
None None
} }
@@ -200,12 +205,11 @@ mod test {
// mov x29, sp // mov x29, sp
// mov x1, #0x1234 // mov x1, #0x1234
// add w0, w0, w1 // add w0, w0, w1
// mov sp, x29
// ldp x29, x30, [sp], #16 // ldp x29, x30, [sp], #16
// ret // ret
let golden = vec![ let golden = vec![
0xfd, 0x7b, 0xbf, 0xa9, 0xfd, 0x03, 0x00, 0x91, 0x81, 0x46, 0x82, 0xd2, 0x00, 0x00, 0xfd, 0x7b, 0xbf, 0xa9, 0xfd, 0x03, 0x00, 0x91, 0x81, 0x46, 0x82, 0xd2, 0x00, 0x00,
0x01, 0x0b, 0xbf, 0x03, 0x00, 0x91, 0xfd, 0x7b, 0xc1, 0xa8, 0xc0, 0x03, 0x5f, 0xd6, 0x01, 0x0b, 0xfd, 0x7b, 0xc1, 0xa8, 0xc0, 0x03, 0x5f, 0xd6,
]; ];
assert_eq!(code, &golden[..]); assert_eq!(code, &golden[..]);
@@ -267,14 +271,13 @@ mod test {
// cbnz x1, 0x18 // cbnz x1, 0x18
// mov x1, #0x1234 // #4660 // mov x1, #0x1234 // #4660
// sub w0, w0, w1 // sub w0, w0, w1
// mov sp, x29
// ldp x29, x30, [sp], #16 // ldp x29, x30, [sp], #16
// ret // ret
let golden = vec![ let golden = vec![
253, 123, 191, 169, 253, 3, 0, 145, 129, 70, 130, 210, 0, 0, 1, 11, 225, 3, 0, 42, 161, 253, 123, 191, 169, 253, 3, 0, 145, 129, 70, 130, 210, 0, 0, 1, 11, 225, 3, 0, 42, 161,
0, 0, 181, 129, 70, 130, 210, 1, 0, 1, 11, 225, 3, 1, 42, 161, 255, 255, 181, 225, 3, 0, 0, 181, 129, 70, 130, 210, 1, 0, 1, 11, 225, 3, 1, 42, 161, 255, 255, 181, 225, 3,
0, 42, 97, 255, 255, 181, 129, 70, 130, 210, 0, 0, 1, 75, 191, 3, 0, 145, 253, 123, 0, 42, 97, 255, 255, 181, 129, 70, 130, 210, 0, 0, 1, 75, 253, 123, 193, 168, 192, 3,
193, 168, 192, 3, 95, 214, 95, 214,
]; ];
assert_eq!(code, &golden[..]); assert_eq!(code, &golden[..]);


@@ -284,7 +284,7 @@ impl ABIMachineSpec for Arm32MachineDeps {
Inst::VirtualSPOffsetAdj { offset } Inst::VirtualSPOffsetAdj { offset }
} }
fn gen_prologue_frame_setup() -> SmallInstVec<Inst> { fn gen_prologue_frame_setup(_: &settings::Flags) -> SmallInstVec<Inst> {
let mut ret = SmallVec::new(); let mut ret = SmallVec::new();
let reg_list = vec![fp_reg(), lr_reg()]; let reg_list = vec![fp_reg(), lr_reg()];
ret.push(Inst::Push { reg_list }); ret.push(Inst::Push { reg_list });
@@ -295,7 +295,7 @@ impl ABIMachineSpec for Arm32MachineDeps {
ret ret
} }
fn gen_epilogue_frame_restore() -> SmallInstVec<Inst> { fn gen_epilogue_frame_restore(_: &settings::Flags) -> SmallInstVec<Inst> {
let mut ret = SmallVec::new(); let mut ret = SmallVec::new();
ret.push(Inst::Mov { ret.push(Inst::Mov {
rd: writable_sp_reg(), rd: writable_sp_reg(),
@@ -319,7 +319,6 @@ impl ABIMachineSpec for Arm32MachineDeps {
_flags: &settings::Flags, _flags: &settings::Flags,
clobbers: &Set<Writable<RealReg>>, clobbers: &Set<Writable<RealReg>>,
fixed_frame_storage_size: u32, fixed_frame_storage_size: u32,
_outgoing_args_size: u32,
) -> (u64, SmallVec<[Inst; 16]>) { ) -> (u64, SmallVec<[Inst; 16]>) {
let mut insts = SmallVec::new(); let mut insts = SmallVec::new();
if fixed_frame_storage_size > 0 { if fixed_frame_storage_size > 0 {
@@ -349,7 +348,6 @@ impl ABIMachineSpec for Arm32MachineDeps {
_flags: &settings::Flags, _flags: &settings::Flags,
clobbers: &Set<Writable<RealReg>>, clobbers: &Set<Writable<RealReg>>,
_fixed_frame_storage_size: u32, _fixed_frame_storage_size: u32,
_outgoing_args_size: u32,
) -> SmallVec<[Inst; 16]> { ) -> SmallVec<[Inst; 16]> {
let mut insts = SmallVec::new(); let mut insts = SmallVec::new();
let clobbered_vec = get_callee_saves(clobbers); let clobbered_vec = get_callee_saves(clobbers);


@@ -286,7 +286,6 @@ impl MachInstEmitInfo for EmitInfo {
impl MachInstEmit for Inst { impl MachInstEmit for Inst {
type Info = EmitInfo; type Info = EmitInfo;
type State = EmitState; type State = EmitState;
type UnwindInfo = super::unwind::Arm32UnwindInfo;
fn emit(&self, sink: &mut MachBuffer<Inst>, emit_info: &Self::Info, state: &mut EmitState) { fn emit(&self, sink: &mut MachBuffer<Inst>, emit_info: &Self::Info, state: &mut EmitState) {
let start_off = sink.cur_offset(); let start_off = sink.cur_offset();


@@ -22,7 +22,6 @@ mod emit;
pub use self::emit::*; pub use self::emit::*;
mod regs; mod regs;
pub use self::regs::*; pub use self::regs::*;
pub mod unwind;
#[cfg(test)] #[cfg(test)]
mod emit_tests; mod emit_tests;


@@ -1,14 +0,0 @@
use super::*;
use crate::isa::unwind::input::UnwindInfo;
use crate::result::CodegenResult;
pub struct Arm32UnwindInfo;
impl UnwindInfoGenerator<Inst> for Arm32UnwindInfo {
fn create_unwind_info(
_context: UnwindInfoContext<Inst>,
) -> CodegenResult<Option<UnwindInfo<Reg>>> {
// TODO
Ok(None)
}
}


@@ -75,7 +75,6 @@ impl MachBackend for Arm32Backend {
buffer, buffer,
frame_size, frame_size,
disasm, disasm,
unwind_info: None,
value_labels_ranges: Default::default(), value_labels_ranges: Default::default(),
stackslot_offsets, stackslot_offsets,
}) })


@@ -1,4 +1,7 @@
//! Represents information relating to function unwinding. //! Represents information relating to function unwinding.
use regalloc::RealReg;
#[cfg(feature = "enable-serde")] #[cfg(feature = "enable-serde")]
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
@@ -86,3 +89,149 @@ pub mod input {
pub initial_sp_offset: u8, pub initial_sp_offset: u8,
} }
} }
/// Unwind pseudoinstruction used in VCode backends: represents that
/// at the present location, an action has just been taken.
///
/// VCode backends always emit unwind info that is relative to a frame
/// pointer, because we are planning to allow for dynamic frame allocation,
/// and because it makes the design quite a lot simpler in general: we don't
/// have to be precise about SP adjustments throughout the body of the function.
///
/// We include only unwind info for prologues at this time. Note that unwind
/// info for epilogues is only necessary if one expects to unwind while within
/// the last few instructions of the function (after FP has been restored) or
/// if one wishes to instruction-step through the epilogue and see a backtrace
/// at every point. This is not necessary for correct operation otherwise and so
/// we simplify the world a bit by omitting epilogue information. (Note that
/// some platforms also don't require or have a way to describe unwind
/// information for epilogues at all: for example, on Windows, the `UNWIND_INFO`
/// format only stores information for the function prologue.)
///
/// Because we are defining an abstraction over multiple unwind formats (at
/// least Windows/fastcall and System V) and multiple architectures (at least
/// x86-64 and aarch64), we have to be a little bit flexible in how we describe
/// the frame. However, it turns out that a least-common-denominator prologue
/// works for all of the cases we have to worry about today!
///
/// We assume the stack looks something like this:
///
///
/// ```plain
/// +----------------------------------------------+
/// | stack arg area, etc (according to ABI) |
/// | ... |
/// SP at call --> +----------------------------------------------+
/// | return address (pushed by HW or SW) |
/// +----------------------------------------------+
/// | old frame pointer (FP) |
/// FP in this --> +----------------------------------------------+
/// function | clobbered callee-save registers |
/// | ... |
/// start of --> +----------------------------------------------+
/// clobbers | (rest of function's frame, irrelevant here) |
/// | ... |
/// SP in this --> +----------------------------------------------+
/// function
/// ```
///
/// We assume that the prologue consists of:
///
/// * `PushFrameRegs`: A push operation that adds the old FP to the stack (and
/// maybe the link register, on architectures that do not push return addresses
/// in hardware)
/// * `DefineFrame`: An update that sets FP to SP to establish a new frame
/// * `SaveReg`: A number of stores or pushes to the stack to save clobbered registers
///
/// Each of these steps has a corresponding pseudo-instruction. At each step,
/// we need some information to determine where the current stack frame is
/// relative to SP or FP. When the `PushFrameRegs` occurs, we need to know how
/// much SP was decremented by, so we can allow the unwinder to continue to find
/// the caller's frame. When we define the new frame, we need to know where FP
/// is in relation to "SP at call" and also "start of clobbers", because
/// different unwind formats define one or the other of those as the anchor by
/// which we define the frame. Finally, when registers are saved, we need to
/// know which ones, and where.
///
/// Different unwind formats work differently; here is a whirlwind tour of how
/// they define frames to help understanding:
///
/// - Windows unwind information defines a frame that must start below the
/// clobber area, because all clobber-save offsets are non-negative. We set it
/// at the "start of clobbers" in the figure above. The `UNWIND_INFO` contains
/// a "frame pointer offset" field; when we define the new frame, the frame is
/// understood to be the value of FP (`RBP`) *minus* this offset. In other
/// words, the FP is *at the frame pointer offset* relative to the
/// start-of-clobber-frame. We use the "FP offset down to clobber area" offset
/// to generate this info.
///
/// - System V unwind information defines a frame in terms of the CFA
/// (call-frame address), which is equal to the "SP at call" above. SysV
/// allows negative offsets, so there is no issue defining clobber-save
/// locations in terms of CFA. The format allows us to define CFA flexibly in
/// terms of any register plus an offset; we define it in terms of FP plus
/// the clobber-to-caller-SP offset once FP is established.
///
/// Note that certain architectures impose limits on offsets: for example, on
/// Windows, the base of the clobber area must not be more than 240 bytes below
/// FP.
///
/// Unwind pseudoinstructions are emitted inline by ABI code as it generates
/// a prologue. Thus, for the usual case, a prologue might look like (using x64
/// as an example):
///
/// ```plain
/// push rbp
/// unwind UnwindInst::PushFrameRegs { offset_upward_to_caller_sp: 16 }
/// mov rbp, rsp
/// unwind UnwindInst::DefineNewFrame { offset_upward_to_caller_sp: 16,
/// offset_downward_to_clobbers: 16 }
/// sub rsp, 32
/// mov [rsp+16], r12
/// unwind UnwindInst::SaveReg { reg: R12, clobber_offset: 0 }
/// mov [rsp+24], r13
/// unwind UnwindInst::SaveReg { reg: R13, clobber_offset: 8 }
/// ...
/// ```
#[derive(Clone, Debug, PartialEq, Eq)]
#[cfg_attr(feature = "enable-serde", derive(Serialize, Deserialize))]
pub enum UnwindInst {
/// The frame-pointer register for this architecture has just been pushed to
/// the stack (and on architectures where return-addresses are not pushed by
/// hardware, the link register as well). The FP has not been set to this
/// frame yet. The current location of SP is such that
/// `offset_upward_to_caller_sp` is the distance to SP-at-callsite (our
/// caller's frame).
PushFrameRegs {
/// The offset from the current SP (after push) to the SP at
/// caller's callsite.
offset_upward_to_caller_sp: u32,
},
/// The frame-pointer register for this architecture has just been
/// set to the current stack location. We wish to define a new
/// frame that is anchored on this new FP value. Offsets are provided
/// upward to the caller's stack frame and downward toward the clobber
/// area. We expect this pseudo-op to come after `PushFrameRegs`.
DefineNewFrame {
/// The offset from the current SP and FP value upward to the value of
/// SP at the callsite that invoked us.
offset_upward_to_caller_sp: u32,
/// The offset from the current SP and FP value downward to the start of
/// the clobber area.
offset_downward_to_clobbers: u32,
},
/// The stack slot at the given offset from the clobber-area base has been
/// used to save the given register.
///
/// Given that `CreateFrame` has occurred first with some
/// `offset_downward_to_clobbers`, `SaveReg` with `clobber_offset` indicates
/// that the value of `reg` is saved on the stack at address `FP -
/// offset_downward_to_clobbers + clobber_offset`.
SaveReg {
/// The offset from the start of the clobber area to this register's
/// stack location.
clobber_offset: u32,
/// The saved register.
reg: RealReg,
},
}


@@ -1,6 +1,8 @@
//! System V ABI unwind information. //! System V ABI unwind information.
use crate::binemit::CodeOffset;
use crate::isa::unwind::input; use crate::isa::unwind::input;
use crate::isa::unwind::UnwindInst;
use crate::result::{CodegenError, CodegenResult}; use crate::result::{CodegenError, CodegenResult};
use alloc::vec::Vec; use alloc::vec::Vec;
use gimli::write::{Address, FrameDescriptionEntry}; use gimli::write::{Address, FrameDescriptionEntry};
@@ -100,6 +102,16 @@ pub(crate) trait RegisterMapper<Reg> {
fn map(&self, reg: Reg) -> Result<Register, RegisterMappingError>; fn map(&self, reg: Reg) -> Result<Register, RegisterMappingError>;
/// Gets stack pointer register. /// Gets stack pointer register.
fn sp(&self) -> Register; fn sp(&self) -> Register;
/// Gets the frame pointer register.
fn fp(&self) -> Register;
/// Gets the link register, if any.
fn lr(&self) -> Option<Register> {
None
}
/// What is the offset from saved FP to saved LR?
fn lr_offset(&self) -> Option<u32> {
None
}
} }
/// Represents unwind information for a single System V ABI function. /// Represents unwind information for a single System V ABI function.
@@ -112,7 +124,82 @@ pub struct UnwindInfo {
len: u32, len: u32,
} }
pub(crate) fn create_unwind_info_from_insts<MR: RegisterMapper<regalloc::Reg>>(
insts: &[(CodeOffset, UnwindInst)],
code_len: usize,
mr: &MR,
) -> CodegenResult<UnwindInfo> {
let mut instructions = vec![];
let mut clobber_offset_to_cfa = 0;
for &(instruction_offset, ref inst) in insts {
match inst {
&UnwindInst::PushFrameRegs {
offset_upward_to_caller_sp,
} => {
// Define CFA in terms of current SP (SP changed and we haven't
// set FP yet).
instructions.push((
instruction_offset,
CallFrameInstruction::CfaOffset(offset_upward_to_caller_sp as i32),
));
// Note that we saved the old FP value on the stack.
instructions.push((
instruction_offset,
CallFrameInstruction::Offset(mr.fp(), -(offset_upward_to_caller_sp as i32)),
));
// If there is a link register on this architecture, note that
// we saved it as well.
if let Some(lr) = mr.lr() {
instructions.push((
instruction_offset,
CallFrameInstruction::Offset(
lr,
-(offset_upward_to_caller_sp as i32)
+ mr.lr_offset().expect("LR offset not provided") as i32,
),
));
}
}
&UnwindInst::DefineNewFrame {
offset_upward_to_caller_sp,
offset_downward_to_clobbers,
} => {
// Define CFA in terms of FP. Note that we assume it was already
// defined correctly in terms of the current SP, and FP has just
// been set to the current SP, so we do not need to change the
// offset, only the register.
instructions.push((
instruction_offset,
CallFrameInstruction::CfaRegister(mr.fp()),
));
// Record distance from CFA downward to clobber area so we can
// express clobber offsets later in terms of CFA.
clobber_offset_to_cfa = offset_upward_to_caller_sp + offset_downward_to_clobbers;
}
&UnwindInst::SaveReg {
clobber_offset,
reg,
} => {
let reg = mr
.map(reg.to_reg())
.map_err(|e| CodegenError::RegisterMappingError(e))?;
let off = (clobber_offset as i32) - (clobber_offset_to_cfa as i32);
instructions.push((instruction_offset, CallFrameInstruction::Offset(reg, off)));
}
}
}
Ok(UnwindInfo {
instructions,
len: code_len as u32,
})
}
impl UnwindInfo { impl UnwindInfo {
// TODO: remove `build()` below when old backend is removed. The new backend uses a simpler
// approach in `create_unwind_info_from_insts()` above.
pub(crate) fn build<'b, Reg: PartialEq + Copy>( pub(crate) fn build<'b, Reg: PartialEq + Copy>(
unwind: input::UnwindInfo<Reg>, unwind: input::UnwindInfo<Reg>,
map_reg: &'b dyn RegisterMapper<Reg>, map_reg: &'b dyn RegisterMapper<Reg>,
@@ -179,6 +266,8 @@ impl UnwindInfo {
} }
} }
// TODO: delete the builder below when the old backend is removed.
struct InstructionBuilder<'a, Reg: PartialEq + Copy> { struct InstructionBuilder<'a, Reg: PartialEq + Copy> {
sp_offset: i32, sp_offset: i32,
frame_register: Option<Reg>, frame_register: Option<Reg>,


@@ -1,6 +1,6 @@
//! Windows x64 ABI unwind information. //! Windows x64 ABI unwind information.
use crate::isa::{unwind::input, RegUnit}; use crate::isa::unwind::input;
use crate::result::{CodegenError, CodegenResult}; use crate::result::{CodegenError, CodegenResult};
use alloc::vec::Vec; use alloc::vec::Vec;
use byteorder::{ByteOrder, LittleEndian}; use byteorder::{ByteOrder, LittleEndian};
@@ -8,6 +8,11 @@ use log::warn;
#[cfg(feature = "enable-serde")] #[cfg(feature = "enable-serde")]
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
#[cfg(feature = "x64")]
use crate::binemit::CodeOffset;
#[cfg(feature = "x64")]
use crate::isa::unwind::UnwindInst;
/// Maximum (inclusive) size of a "small" stack allocation /// Maximum (inclusive) size of a "small" stack allocation
const SMALL_ALLOC_MAX_SIZE: u32 = 128; const SMALL_ALLOC_MAX_SIZE: u32 = 128;
/// Maximum (inclusive) size of a "large" stack allocation that can represented in 16-bits /// Maximum (inclusive) size of a "large" stack allocation that can represented in 16-bits
@@ -44,22 +49,31 @@ impl<'a> Writer<'a> {
/// See: https://docs.microsoft.com/en-us/cpp/build/exception-handling-x64 /// See: https://docs.microsoft.com/en-us/cpp/build/exception-handling-x64
/// Only what is needed to describe the prologues generated by the Cranelift x86 ISA are represented here. /// Only what is needed to describe the prologues generated by the Cranelift x86 ISA are represented here.
/// Note: the Cranelift x86 ISA RU enum matches the Windows unwind GPR encoding values. /// Note: the Cranelift x86 ISA RU enum matches the Windows unwind GPR encoding values.
#[allow(dead_code)]
#[derive(Clone, Debug, PartialEq, Eq)] #[derive(Clone, Debug, PartialEq, Eq)]
#[cfg_attr(feature = "enable-serde", derive(Serialize, Deserialize))] #[cfg_attr(feature = "enable-serde", derive(Serialize, Deserialize))]
pub(crate) enum UnwindCode { pub(crate) enum UnwindCode {
PushRegister { PushRegister {
offset: u8, instruction_offset: u8,
reg: u8, reg: u8,
}, },
SaveReg {
instruction_offset: u8,
reg: u8,
stack_offset: u32,
},
SaveXmm { SaveXmm {
offset: u8, instruction_offset: u8,
reg: u8, reg: u8,
stack_offset: u32, stack_offset: u32,
}, },
StackAlloc { StackAlloc {
offset: u8, instruction_offset: u8,
size: u32, size: u32,
}, },
SetFPReg {
instruction_offset: u8,
},
} }
impl UnwindCode { impl UnwindCode {
@@ -68,37 +82,63 @@ impl UnwindCode {
PushNonvolatileRegister = 0, PushNonvolatileRegister = 0,
LargeStackAlloc = 1, LargeStackAlloc = 1,
SmallStackAlloc = 2, SmallStackAlloc = 2,
SetFPReg = 3,
SaveNonVolatileRegister = 4,
SaveNonVolatileRegisterFar = 5,
SaveXmm128 = 8, SaveXmm128 = 8,
SaveXmm128Far = 9, SaveXmm128Far = 9,
} }
match self { match self {
Self::PushRegister { offset, reg } => { Self::PushRegister {
writer.write_u8(*offset); instruction_offset,
reg,
} => {
writer.write_u8(*instruction_offset);
writer.write_u8((*reg << 4) | (UnwindOperation::PushNonvolatileRegister as u8)); writer.write_u8((*reg << 4) | (UnwindOperation::PushNonvolatileRegister as u8));
} }
Self::SaveXmm { Self::SaveReg {
offset, instruction_offset,
reg,
stack_offset,
}
| Self::SaveXmm {
instruction_offset,
reg, reg,
stack_offset, stack_offset,
} => { } => {
writer.write_u8(*offset); let is_xmm = match self {
Self::SaveXmm { .. } => true,
_ => false,
};
let (op_small, op_large) = if is_xmm {
(UnwindOperation::SaveXmm128, UnwindOperation::SaveXmm128Far)
} else {
(
UnwindOperation::SaveNonVolatileRegister,
UnwindOperation::SaveNonVolatileRegisterFar,
)
};
writer.write_u8(*instruction_offset);
let scaled_stack_offset = stack_offset / 16; let scaled_stack_offset = stack_offset / 16;
if scaled_stack_offset <= core::u16::MAX as u32 { if scaled_stack_offset <= core::u16::MAX as u32 {
writer.write_u8((*reg << 4) | (UnwindOperation::SaveXmm128 as u8)); writer.write_u8((*reg << 4) | (op_small as u8));
writer.write_u16::<LittleEndian>(scaled_stack_offset as u16); writer.write_u16::<LittleEndian>(scaled_stack_offset as u16);
} else { } else {
writer.write_u8((*reg << 4) | (UnwindOperation::SaveXmm128Far as u8)); writer.write_u8((*reg << 4) | (op_large as u8));
writer.write_u16::<LittleEndian>(*stack_offset as u16); writer.write_u16::<LittleEndian>(*stack_offset as u16);
writer.write_u16::<LittleEndian>((stack_offset >> 16) as u16); writer.write_u16::<LittleEndian>((stack_offset >> 16) as u16);
} }
} }
Self::StackAlloc { offset, size } => { Self::StackAlloc {
instruction_offset,
size,
} => {
// Stack allocations on Windows must be a multiple of 8 and be at least 1 slot // Stack allocations on Windows must be a multiple of 8 and be at least 1 slot
assert!(*size >= 8); assert!(*size >= 8);
assert!((*size % 8) == 0); assert!((*size % 8) == 0);
writer.write_u8(*offset); writer.write_u8(*instruction_offset);
if *size <= SMALL_ALLOC_MAX_SIZE { if *size <= SMALL_ALLOC_MAX_SIZE {
writer.write_u8( writer.write_u8(
((((*size - 8) / 8) as u8) << 4) | UnwindOperation::SmallStackAlloc as u8, ((((*size - 8) / 8) as u8) << 4) | UnwindOperation::SmallStackAlloc as u8,
@@ -111,7 +151,11 @@ impl UnwindCode {
writer.write_u32::<LittleEndian>(*size); writer.write_u32::<LittleEndian>(*size);
} }
} }
}; Self::SetFPReg { instruction_offset } => {
writer.write_u8(*instruction_offset);
writer.write_u8(UnwindOperation::SetFPReg as u8);
}
}
} }
fn node_count(&self) -> usize { fn node_count(&self) -> usize {
@@ -125,7 +169,7 @@ impl UnwindCode {
3 3
} }
} }
Self::SaveXmm { stack_offset, .. } => { Self::SaveXmm { stack_offset, .. } | Self::SaveReg { stack_offset, .. } => {
if *stack_offset <= core::u16::MAX as u32 { if *stack_offset <= core::u16::MAX as u32 {
2 2
} else { } else {
@@ -143,9 +187,9 @@ pub(crate) enum MappedRegister {
} }
/// Maps UnwindInfo register to Windows x64 unwind data. /// Maps UnwindInfo register to Windows x64 unwind data.
pub(crate) trait RegisterMapper { pub(crate) trait RegisterMapper<Reg> {
/// Maps RegUnit. /// Maps a Reg to a Windows unwind register number.
fn map(reg: RegUnit) -> MappedRegister; fn map(reg: Reg) -> MappedRegister;
} }
/// Represents Windows x64 unwind information. /// Represents Windows x64 unwind information.
@@ -219,8 +263,11 @@ impl UnwindInfo {
.fold(0, |nodes, c| nodes + c.node_count()) .fold(0, |nodes, c| nodes + c.node_count())
} }
pub(crate) fn build<MR: RegisterMapper>( // TODO: remove `build()` below when old backend is removed. The new backend uses
unwind: input::UnwindInfo<RegUnit>, // a simpler approach in `create_unwind_info_from_insts()` below.
pub(crate) fn build<Reg: PartialEq + Copy + std::fmt::Debug, MR: RegisterMapper<Reg>>(
unwind: input::UnwindInfo<Reg>,
) -> CodegenResult<Self> { ) -> CodegenResult<Self> {
use crate::isa::unwind::input::UnwindCode as InputUnwindCode; use crate::isa::unwind::input::UnwindCode as InputUnwindCode;
@@ -237,7 +284,7 @@ impl UnwindInfo {
// `StackAlloc { size = word_size }`, `SaveRegister { stack_offset: 0 }` // `StackAlloc { size = word_size }`, `SaveRegister { stack_offset: 0 }`
// to the shorter `UnwindCode::PushRegister`. // to the shorter `UnwindCode::PushRegister`.
let push_reg_sequence = if let Some(UnwindCode::StackAlloc { let push_reg_sequence = if let Some(UnwindCode::StackAlloc {
offset: alloc_offset, instruction_offset: alloc_offset,
size, size,
}) = unwind_codes.last() }) = unwind_codes.last()
{ {
@@ -246,19 +293,21 @@ impl UnwindInfo {
false false
}; };
if push_reg_sequence { if push_reg_sequence {
*unwind_codes.last_mut().unwrap() = *unwind_codes.last_mut().unwrap() = UnwindCode::PushRegister {
UnwindCode::PushRegister { offset, reg }; instruction_offset: offset,
reg,
};
} else { } else {
// TODO add `UnwindCode::SaveRegister` to handle multiple register unwind_codes.push(UnwindCode::SaveReg {
// pushes with single `UnwindCode::StackAlloc`. instruction_offset: offset,
return Err(CodegenError::Unsupported( reg,
"Unsupported UnwindCode::PushRegister sequence".into(), stack_offset: *stack_offset,
)); });
} }
} }
MappedRegister::Xmm(reg) => { MappedRegister::Xmm(reg) => {
unwind_codes.push(UnwindCode::SaveXmm { unwind_codes.push(UnwindCode::SaveXmm {
offset, instruction_offset: offset,
reg, reg,
stack_offset: *stack_offset, stack_offset: *stack_offset,
}); });
@@ -267,7 +316,7 @@ impl UnwindInfo {
} }
InputUnwindCode::StackAlloc { size } => { InputUnwindCode::StackAlloc { size } => {
unwind_codes.push(UnwindCode::StackAlloc { unwind_codes.push(UnwindCode::StackAlloc {
offset: ensure_unwind_offset(*offset)?, instruction_offset: ensure_unwind_offset(*offset)?,
size: *size, size: *size,
}); });
} }
@@ -285,6 +334,64 @@ impl UnwindInfo {
} }
} }
#[cfg(feature = "x64")]
const UNWIND_RBP_REG: u8 = 5;
#[cfg(feature = "x64")]
pub(crate) fn create_unwind_info_from_insts<MR: RegisterMapper<regalloc::Reg>>(
insts: &[(CodeOffset, UnwindInst)],
) -> CodegenResult<UnwindInfo> {
let mut unwind_codes = vec![];
let mut frame_register_offset = 0;
let mut max_unwind_offset = 0;
for &(instruction_offset, ref inst) in insts {
let instruction_offset = ensure_unwind_offset(instruction_offset)?;
match inst {
&UnwindInst::PushFrameRegs { .. } => {
unwind_codes.push(UnwindCode::PushRegister {
instruction_offset,
reg: UNWIND_RBP_REG,
});
}
&UnwindInst::DefineNewFrame {
offset_downward_to_clobbers,
..
} => {
frame_register_offset = ensure_unwind_offset(offset_downward_to_clobbers)?;
unwind_codes.push(UnwindCode::SetFPReg { instruction_offset });
}
&UnwindInst::SaveReg {
clobber_offset,
reg,
} => match MR::map(reg.to_reg()) {
MappedRegister::Int(reg) => {
unwind_codes.push(UnwindCode::SaveReg {
instruction_offset,
reg,
stack_offset: clobber_offset,
});
}
MappedRegister::Xmm(reg) => {
unwind_codes.push(UnwindCode::SaveXmm {
instruction_offset,
reg,
stack_offset: clobber_offset,
});
}
},
}
max_unwind_offset = instruction_offset;
}
Ok(UnwindInfo {
flags: 0,
prologue_size: max_unwind_offset,
frame_register: Some(UNWIND_RBP_REG),
frame_register_offset,
unwind_codes,
})
}
fn ensure_unwind_offset(offset: u32) -> CodegenResult<u8> { fn ensure_unwind_offset(offset: u32) -> CodegenResult<u8> {
if offset > 255 { if offset > 255 {
warn!("function prologues cannot exceed 255 bytes in size for Windows x64"); warn!("function prologues cannot exceed 255 bytes in size for Windows x64");


@@ -3,7 +3,7 @@
use crate::ir::types::*; use crate::ir::types::*;
use crate::ir::{self, types, ExternalName, LibCall, MemFlags, Opcode, TrapCode, Type}; use crate::ir::{self, types, ExternalName, LibCall, MemFlags, Opcode, TrapCode, Type};
use crate::isa; use crate::isa;
use crate::isa::{x64::inst::*, CallConv}; use crate::isa::{unwind::UnwindInst, x64::inst::*, CallConv};
use crate::machinst::abi_impl::*; use crate::machinst::abi_impl::*;
use crate::machinst::*; use crate::machinst::*;
use crate::settings; use crate::settings;
@@ -433,25 +433,38 @@ impl ABIMachineSpec for X64ABIMachineSpec {
} }
} }
fn gen_prologue_frame_setup() -> SmallInstVec<Self::I> { fn gen_prologue_frame_setup(flags: &settings::Flags) -> SmallInstVec<Self::I> {
let r_rsp = regs::rsp(); let r_rsp = regs::rsp();
let r_rbp = regs::rbp(); let r_rbp = regs::rbp();
let w_rbp = Writable::from_reg(r_rbp); let w_rbp = Writable::from_reg(r_rbp);
let mut insts = SmallVec::new(); let mut insts = SmallVec::new();
// `push %rbp`
// RSP before the call will be 0 % 16. So here, it is 8 % 16. // RSP before the call will be 0 % 16. So here, it is 8 % 16.
insts.push(Inst::push64(RegMemImm::reg(r_rbp))); insts.push(Inst::push64(RegMemImm::reg(r_rbp)));
if flags.unwind_info() {
insts.push(Inst::Unwind {
inst: UnwindInst::PushFrameRegs {
offset_upward_to_caller_sp: 16, // RBP, return address
},
});
}
// `mov %rsp, %rbp`
// RSP is now 0 % 16 // RSP is now 0 % 16
insts.push(Inst::mov_r_r(OperandSize::Size64, r_rsp, w_rbp)); insts.push(Inst::mov_r_r(OperandSize::Size64, r_rsp, w_rbp));
insts insts
} }
fn gen_epilogue_frame_restore() -> SmallInstVec<Self::I> { fn gen_epilogue_frame_restore(_: &settings::Flags) -> SmallInstVec<Self::I> {
let mut insts = SmallVec::new(); let mut insts = SmallVec::new();
// `mov %rbp, %rsp`
insts.push(Inst::mov_r_r( insts.push(Inst::mov_r_r(
OperandSize::Size64, OperandSize::Size64,
regs::rbp(), regs::rbp(),
Writable::from_reg(regs::rsp()), Writable::from_reg(regs::rsp()),
)); ));
// `pop %rbp`
insts.push(Inst::pop64(Writable::from_reg(regs::rbp()))); insts.push(Inst::pop64(Writable::from_reg(regs::rbp())));
insts insts
} }
@@ -474,22 +487,31 @@ impl ABIMachineSpec for X64ABIMachineSpec {
     fn gen_clobber_save(
         call_conv: isa::CallConv,
-        _: &settings::Flags,
+        flags: &settings::Flags,
         clobbers: &Set<Writable<RealReg>>,
         fixed_frame_storage_size: u32,
-        _outgoing_args_size: u32,
     ) -> (u64, SmallVec<[Self::I; 16]>) {
         let mut insts = SmallVec::new();
-        // Find all clobbered registers that are callee-save. These are only I64
-        // registers (all XMM registers are caller-save) so we can compute the
-        // total size of the needed stack space easily.
+        // Find all clobbered registers that are callee-save.
         let clobbered = get_callee_saves(&call_conv, clobbers);
-        let stack_size = compute_clobber_size(&clobbered) + fixed_frame_storage_size;
-        // Align to 16 bytes.
-        let stack_size = align_to(stack_size, 16);
-        let clobbered_size = stack_size - fixed_frame_storage_size;
-        // Adjust the stack pointer downward with one `sub rsp, IMM`
-        // instruction.
+        let clobbered_size = compute_clobber_size(&clobbered);
+        if flags.unwind_info() {
+            // Emit unwind info: start the frame. The frame (from unwind
+            // consumers' point of view) starts at clobbers, just below
+            // the FP and return address. Spill slots and stack slots are
+            // part of our actual frame but do not concern the unwinder.
+            insts.push(Inst::Unwind {
+                inst: UnwindInst::DefineNewFrame {
+                    offset_downward_to_clobbers: clobbered_size,
+                    offset_upward_to_caller_sp: 16, // RBP, return address
+                },
+            });
+        }
+        // Adjust the stack pointer downward for clobbers and the function fixed
+        // frame (spillslots and storage slots).
+        let stack_size = fixed_frame_storage_size + clobbered_size;
         if stack_size > 0 {
@@ -498,10 +520,12 @@ impl ABIMachineSpec for X64ABIMachineSpec {
                 Writable::from_reg(regs::rsp()),
             ));
         }
-        // Store each clobbered register in order at offsets from RSP.
-        let mut cur_offset = 0;
+        // Store each clobbered register in order at offsets from RSP,
+        // placing them above the fixed frame slots.
+        let mut cur_offset = fixed_frame_storage_size;
         for reg in &clobbered {
             let r_reg = reg.to_reg();
+            let off = cur_offset;
             match r_reg.get_class() {
                 RegClass::I64 => {
                     insts.push(Inst::store(
@@ -521,6 +545,14 @@ impl ABIMachineSpec for X64ABIMachineSpec {
                     cur_offset += 16;
                 }
                 _ => unreachable!(),
             };
+            if flags.unwind_info() {
+                insts.push(Inst::Unwind {
+                    inst: UnwindInst::SaveReg {
+                        clobber_offset: off - fixed_frame_storage_size,
+                        reg: r_reg,
+                    },
+                });
+            }
         }
     }
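A small worked sketch of the offset bookkeeping in the loop above (not part of the patch; `Kind` is a stand-in for regalloc's register classes): each save lands above the fixed frame, and the `clobber_offset` handed to the unwinder is relative to the start of the clobber area rather than to SP.

    #[derive(Clone, Copy)]
    enum Kind { Int, Vec }

    // Returns, for each clobber, (offset from post-`sub rsp` SP, clobber_offset
    // reported to the unwinder). Integer saves take 8 bytes, vector saves 16,
    // mirroring the cur_offset increments in the loop above.
    fn clobber_offsets(fixed_frame_storage_size: u32, clobbers: &[Kind]) -> Vec<(u32, u32)> {
        let mut cur_offset = fixed_frame_storage_size;
        let mut out = vec![];
        for k in clobbers {
            let off = cur_offset;
            out.push((off, off - fixed_frame_storage_size));
            cur_offset += match k {
                Kind::Int => 8,
                Kind::Vec => 16,
            };
        }
        out
    }

    // Example with made-up sizes: 32 bytes of spill/stack slots and clobbers
    // [Int, Int, Vec] gives saves at SP+32, SP+40, SP+48, and the unwinder
    // sees clobber offsets 0, 8 and 16 within the clobber area.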
@@ -531,17 +563,17 @@ impl ABIMachineSpec for X64ABIMachineSpec {
         call_conv: isa::CallConv,
         flags: &settings::Flags,
         clobbers: &Set<Writable<RealReg>>,
-        _fixed_frame_storage_size: u32,
-        _outgoing_args_size: u32,
+        fixed_frame_storage_size: u32,
     ) -> SmallVec<[Self::I; 16]> {
         let mut insts = SmallVec::new();
         let clobbered = get_callee_saves(&call_conv, clobbers);
-        let stack_size = compute_clobber_size(&clobbered);
-        let stack_size = align_to(stack_size, 16);
-        // Restore regs by loading from offsets of RSP.
-        let mut cur_offset = 0;
+        let stack_size = fixed_frame_storage_size + compute_clobber_size(&clobbered);
+        // Restore regs by loading from offsets of RSP. RSP will be
+        // returned to nominal-RSP at this point, so we can use the
+        // same offsets that we used when saving clobbers above.
+        let mut cur_offset = fixed_frame_storage_size;
         for reg in &clobbered {
             let rreg = reg.to_reg();
             match rreg.get_class() {
@@ -990,5 +1022,5 @@ fn compute_clobber_size(clobbers: &Vec<Writable<RealReg>>) -> u32 {
             _ => unreachable!(),
         }
     }
-    clobbered_size
+    align_to(clobbered_size, 16)
 }

@@ -3050,6 +3050,10 @@ pub(crate) fn emit(
         Inst::ValueLabelMarker { .. } => {
             // Nothing; this is only used to compute debug info.
         }
+        Inst::Unwind { ref inst } => {
+            sink.add_unwind(inst.clone());
+        }
     }
     state.clear_post_insn();


@@ -2,6 +2,7 @@
 use crate::binemit::{CodeOffset, StackMap};
 use crate::ir::{types, ExternalName, Opcode, SourceLoc, TrapCode, Type, ValueLabel};
+use crate::isa::unwind::UnwindInst;
 use crate::isa::x64::abi::X64ABIMachineSpec;
 use crate::isa::x64::settings as x64_settings;
 use crate::isa::CallConv;
@@ -488,6 +489,10 @@ pub enum Inst {
     /// A definition of a value label.
     ValueLabelMarker { reg: Reg, label: ValueLabel },
+    /// An unwind pseudoinstruction describing the state of the
+    /// machine at this program point.
+    Unwind { inst: UnwindInst },
 }
 pub(crate) fn low32_will_sign_extend_to_64(x: u64) -> bool {
@@ -548,7 +553,8 @@ impl Inst {
             | Inst::XmmUninitializedValue { .. }
             | Inst::ElfTlsGetAddr { .. }
             | Inst::MachOTlsGetAddr { .. }
-            | Inst::ValueLabelMarker { .. } => None,
+            | Inst::ValueLabelMarker { .. }
+            | Inst::Unwind { .. } => None,
             Inst::UnaryRmR { op, .. } => op.available_from(),
@@ -1787,6 +1793,10 @@ impl PrettyPrint for Inst {
             Inst::ValueLabelMarker { label, reg } => {
                 format!("value_label {:?}, {}", label, reg.show_rru(mb_rru))
             }
+            Inst::Unwind { inst } => {
+                format!("unwind {:?}", inst)
+            }
         }
     }
 }
@@ -2065,6 +2075,8 @@ fn x64_get_regs(inst: &Inst, collector: &mut RegUsageCollector) {
         Inst::ValueLabelMarker { reg, .. } => {
             collector.add_use(*reg);
         }
+        Inst::Unwind { .. } => {}
     }
 }
@@ -2459,7 +2471,8 @@ fn x64_map_regs<RUM: RegUsageMapper>(inst: &mut Inst, mapper: &RUM) {
         | Inst::AtomicRmwSeq { .. }
         | Inst::ElfTlsGetAddr { .. }
         | Inst::MachOTlsGetAddr { .. }
-        | Inst::Fence { .. } => {
+        | Inst::Fence { .. }
+        | Inst::Unwind { .. } => {
             // Instruction doesn't explicitly mention any regs, so it can't have any virtual
             // regs that we'd need to remap. Hence no action required.
         }
@@ -2776,7 +2789,6 @@ impl MachInstEmitInfo for EmitInfo {
 impl MachInstEmit for Inst {
     type State = EmitState;
     type Info = EmitInfo;
-    type UnwindInfo = unwind::X64UnwindInfo;
     fn emit(&self, sink: &mut MachBuffer<Inst>, info: &Self::Info, state: &mut Self::State) {
         emit::emit(self, sink, info, state);


@@ -1,125 +1,5 @@
use crate::isa::unwind::input::UnwindInfo;
use crate::isa::x64::inst::{
    args::{AluRmiROpcode, Amode, OperandSize, RegMemImm, SyntheticAmode},
    regs, Inst,
};
use crate::machinst::{UnwindInfoContext, UnwindInfoGenerator};
use crate::result::CodegenResult;
use alloc::vec::Vec;
use regalloc::Reg;
#[cfg(feature = "unwind")]
pub(crate) mod systemv;
pub struct X64UnwindInfo;
#[cfg(feature = "unwind")]
pub(crate) mod winx64;
impl UnwindInfoGenerator<Inst> for X64UnwindInfo {
fn create_unwind_info(
context: UnwindInfoContext<Inst>,
) -> CodegenResult<Option<UnwindInfo<Reg>>> {
use crate::isa::unwind::input::{self, UnwindCode};
let mut codes = Vec::new();
const WORD_SIZE: u8 = 8;
for i in context.prologue.clone() {
let i = i as usize;
let inst = &context.insts[i];
let offset = context.insts_layout[i];
match inst {
Inst::Push64 {
src: RegMemImm::Reg { reg },
} => {
codes.push((
offset,
UnwindCode::StackAlloc {
size: WORD_SIZE.into(),
},
));
codes.push((
offset,
UnwindCode::SaveRegister {
reg: *reg,
stack_offset: 0,
},
));
}
Inst::MovRR { src, dst, .. } => {
if *src == regs::rsp() {
codes.push((offset, UnwindCode::SetFramePointer { reg: dst.to_reg() }));
}
}
Inst::AluRmiR {
size: OperandSize::Size64,
op: AluRmiROpcode::Sub,
src: RegMemImm::Imm { simm32 },
dst,
..
} if dst.to_reg() == regs::rsp() => {
let imm = *simm32;
codes.push((offset, UnwindCode::StackAlloc { size: imm }));
}
Inst::MovRM {
src,
dst: SyntheticAmode::Real(Amode::ImmReg { simm32, base, .. }),
..
} if *base == regs::rsp() => {
// `mov reg, imm(rsp)`
let imm = *simm32;
codes.push((
offset,
UnwindCode::SaveRegister {
reg: *src,
stack_offset: imm,
},
));
}
Inst::AluRmiR {
size: OperandSize::Size64,
op: AluRmiROpcode::Add,
src: RegMemImm::Imm { simm32 },
dst,
..
} if dst.to_reg() == regs::rsp() => {
let imm = *simm32;
codes.push((offset, UnwindCode::StackDealloc { size: imm }));
}
_ => {}
}
}
let last_epilogue_end = context.len;
let epilogues_unwind_codes = context
.epilogues
.iter()
.map(|epilogue| {
// TODO add logic to process epilogue instruction instead of
// returning empty array.
let end = epilogue.end as usize - 1;
let end_offset = context.insts_layout[end];
if end_offset == last_epilogue_end {
// Do not remember/restore for very last epilogue.
return vec![];
}
let start = epilogue.start as usize;
let offset = context.insts_layout[start];
vec![
(offset, UnwindCode::RememberState),
// TODO epilogue instructions
(end_offset, UnwindCode::RestoreState),
]
})
.collect();
let prologue_size = context.insts_layout[context.prologue.end as usize];
Ok(Some(input::UnwindInfo {
prologue_size,
prologue_unwind_codes: codes,
epilogues_unwind_codes,
function_size: context.len,
word_size: WORD_SIZE,
initial_sp_offset: WORD_SIZE,
}))
}
}


@@ -1,8 +1,6 @@
 //! Unwind information for System V ABI (x86-64).
-use crate::isa::unwind::input;
-use crate::isa::unwind::systemv::{RegisterMappingError, UnwindInfo};
-use crate::result::CodegenResult;
+use crate::isa::unwind::systemv::RegisterMappingError;
 use gimli::{write::CommonInformationEntry, Encoding, Format, Register, X86_64};
 use regalloc::{Reg, RegClass};
@@ -82,21 +80,18 @@ pub fn map_reg(reg: Reg) -> Result<Register, RegisterMappingError> {
     }
 }
-pub(crate) fn create_unwind_info(
-    unwind: input::UnwindInfo<Reg>,
-) -> CodegenResult<Option<UnwindInfo>> {
-    struct RegisterMapper;
-    impl crate::isa::unwind::systemv::RegisterMapper<Reg> for RegisterMapper {
+pub(crate) struct RegisterMapper;
+impl crate::isa::unwind::systemv::RegisterMapper<Reg> for RegisterMapper {
     fn map(&self, reg: Reg) -> Result<u16, RegisterMappingError> {
         Ok(map_reg(reg)?.0)
     }
     fn sp(&self) -> u16 {
         X86_64::RSP.0
     }
+    fn fp(&self) -> u16 {
+        X86_64::RBP.0
+    }
-    let map = RegisterMapper;
-    Ok(Some(UnwindInfo::build(unwind, &map)?))
 }
 #[cfg(test)]
@@ -136,7 +131,7 @@ mod tests {
         _ => panic!("expected unwind information"),
     };
-    assert_eq!(format!("{:?}", fde), "FrameDescriptionEntry { address: Constant(1234), length: 13, lsda: None, instructions: [(1, CfaOffset(16)), (1, Offset(Register(6), -16)), (4, CfaRegister(Register(6)))] }");
+    assert_eq!(format!("{:?}", fde), "FrameDescriptionEntry { address: Constant(1234), length: 17, lsda: None, instructions: [(1, CfaOffset(16)), (1, Offset(Register(6), -16)), (4, CfaRegister(Register(6)))] }");
 }
 fn create_function(call_conv: CallConv, stack_slot: Option<StackSlotData>) -> Function {
@@ -175,7 +170,7 @@ mod tests {
         _ => panic!("expected unwind information"),
     };
-    assert_eq!(format!("{:?}", fde), "FrameDescriptionEntry { address: Constant(4321), length: 22, lsda: None, instructions: [(1, CfaOffset(16)), (1, Offset(Register(6), -16)), (4, CfaRegister(Register(6))), (15, RememberState), (17, RestoreState)] }");
+    assert_eq!(format!("{:?}", fde), "FrameDescriptionEntry { address: Constant(4321), length: 22, lsda: None, instructions: [(1, CfaOffset(16)), (1, Offset(Register(6), -16)), (4, CfaRegister(Register(6)))] }");
 }
 fn create_multi_return_function(call_conv: CallConv) -> Function {
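For reference, the register numbers this mapper hands to the CFI builder are gimli's DWARF x86-64 numbers, which is why the FDE assertions above show `Register(6)` (RBP) as the CFA register. A tiny stand-alone check against the gimli crate (not part of this patch):

    use gimli::X86_64;

    fn main() {
        // DWARF register numbering on x86-64: RBP is 6, RSP is 7.
        assert_eq!(X86_64::RBP.0, 6);
        assert_eq!(X86_64::RSP.0, 7);
        println!("rbp={}, rsp={}", X86_64::RBP.0, X86_64::RSP.0);
    }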


@@ -0,0 +1,16 @@
//! Unwind information for Windows x64 ABI.
use regalloc::{Reg, RegClass};
pub(crate) struct RegisterMapper;
impl crate::isa::unwind::winx64::RegisterMapper<Reg> for RegisterMapper {
fn map(reg: Reg) -> crate::isa::unwind::winx64::MappedRegister {
use crate::isa::unwind::winx64::MappedRegister;
match reg.get_class() {
RegClass::I64 => MappedRegister::Int(reg.get_hw_encoding()),
RegClass::V128 => MappedRegister::Xmm(reg.get_hw_encoding()),
_ => unreachable!(),
}
}
}


@@ -4,7 +4,6 @@ use self::inst::EmitInfo;
 use super::TargetIsa;
 use crate::ir::{condcodes::IntCC, Function};
-use crate::isa::unwind::systemv::RegisterMappingError;
 use crate::isa::x64::{inst::regs::create_reg_universe_systemv, settings as x64_settings};
 use crate::isa::Builder as IsaBuilder;
 use crate::machinst::{compile, MachBackend, MachCompileResult, TargetIsaAdapter, VCode};
@@ -15,6 +14,9 @@ use core::hash::{Hash, Hasher};
 use regalloc::{PrettyPrint, RealRegUniverse, Reg};
 use target_lexicon::Triple;
+#[cfg(feature = "unwind")]
+use crate::isa::unwind::systemv;
 mod abi;
 mod inst;
 mod lower;
@@ -61,7 +63,6 @@ impl MachBackend for X64Backend {
         let buffer = vcode.emit();
         let buffer = buffer.finish();
         let frame_size = vcode.frame_size();
-        let unwind_info = vcode.unwind_info()?;
         let value_labels_ranges = vcode.value_labels_ranges();
         let stackslot_offsets = vcode.stackslot_offsets().clone();
@@ -75,7 +76,6 @@ impl MachBackend for X64Backend {
             buffer,
             frame_size,
             disasm,
-            unwind_info,
             value_labels_ranges,
             stackslot_offsets,
         })
@@ -122,14 +122,22 @@ impl MachBackend for X64Backend {
     ) -> CodegenResult<Option<crate::isa::unwind::UnwindInfo>> {
         use crate::isa::unwind::UnwindInfo;
         use crate::machinst::UnwindInfoKind;
-        Ok(match (result.unwind_info.as_ref(), kind) {
-            (Some(info), UnwindInfoKind::SystemV) => {
-                inst::unwind::systemv::create_unwind_info(info.clone())?.map(UnwindInfo::SystemV)
-            }
-            (Some(_info), UnwindInfoKind::Windows) => {
-                //TODO inst::unwind::winx64::create_unwind_info(info.clone())?.map(|u| UnwindInfo::WindowsX64(u))
-                None
+        Ok(match kind {
+            UnwindInfoKind::SystemV => {
+                let mapper = self::inst::unwind::systemv::RegisterMapper;
+                Some(UnwindInfo::SystemV(
+                    crate::isa::unwind::systemv::create_unwind_info_from_insts(
+                        &result.buffer.unwind_info[..],
+                        result.buffer.data.len(),
+                        &mapper,
+                    )?,
+                ))
             }
+            UnwindInfoKind::Windows => Some(UnwindInfo::WindowsX64(
+                crate::isa::unwind::winx64::create_unwind_info_from_insts::<
+                    self::inst::unwind::winx64::RegisterMapper,
+                >(&result.buffer.unwind_info[..])?,
+            )),
             _ => None,
         })
     }
@@ -140,7 +148,7 @@ impl MachBackend for X64Backend {
     }
     #[cfg(feature = "unwind")]
-    fn map_reg_to_dwarf(&self, reg: Reg) -> Result<u16, RegisterMappingError> {
+    fn map_reg_to_dwarf(&self, reg: Reg) -> Result<u16, systemv::RegisterMappingError> {
         inst::unwind::systemv::map_reg(reg).map(|reg| reg.0)
     }
 }


@@ -121,6 +121,9 @@ pub(crate) fn create_unwind_info(
         fn sp(&self) -> u16 {
             X86_64::RSP.0
         }
+        fn fp(&self) -> u16 {
+            X86_64::RBP.0
+        }
     }
     let map = RegisterMapper(isa);


@@ -21,12 +21,12 @@ pub(crate) fn create_unwind_info(
         }
     };
-    Ok(Some(UnwindInfo::build::<RegisterMapper>(unwind)?))
+    Ok(Some(UnwindInfo::build::<RegUnit, RegisterMapper>(unwind)?))
 }
 struct RegisterMapper;
-impl crate::isa::unwind::winx64::RegisterMapper for RegisterMapper {
+impl crate::isa::unwind::winx64::RegisterMapper<RegUnit> for RegisterMapper {
     fn map(reg: RegUnit) -> crate::isa::unwind::winx64::MappedRegister {
         use crate::isa::unwind::winx64::MappedRegister;
         if GPR.contains(reg) {
@@ -94,11 +94,11 @@ mod tests {
             frame_register_offset: 0,
             unwind_codes: vec![
                 UnwindCode::PushRegister {
-                    offset: 2,
+                    instruction_offset: 2,
                     reg: GPR.index_of(RU::rbp.into()) as u8
                 },
                 UnwindCode::StackAlloc {
-                    offset: 9,
+                    instruction_offset: 9,
                     size: 64
                 }
             ]
@@ -151,11 +151,11 @@ mod tests {
             frame_register_offset: 0,
             unwind_codes: vec![
                 UnwindCode::PushRegister {
-                    offset: 2,
+                    instruction_offset: 2,
                     reg: GPR.index_of(RU::rbp.into()) as u8
                 },
                 UnwindCode::StackAlloc {
-                    offset: 27,
+                    instruction_offset: 27,
                     size: 10000
                 }
             ]
@@ -212,11 +212,11 @@ mod tests {
             frame_register_offset: 0,
             unwind_codes: vec![
                 UnwindCode::PushRegister {
-                    offset: 2,
+                    instruction_offset: 2,
                     reg: GPR.index_of(RU::rbp.into()) as u8
                 },
                 UnwindCode::StackAlloc {
-                    offset: 27,
+                    instruction_offset: 27,
                     size: 1000000
                 }
             ]


@@ -30,12 +30,6 @@ pub trait ABICallee {
     /// Access the (possibly legalized) signature.
     fn signature(&self) -> &Signature;
-    /// Accumulate outgoing arguments. This ensures that at least SIZE bytes
-    /// are allocated in the prologue to be available for use in function calls
-    /// to hold arguments and/or return values. If this function is called
-    /// multiple times, the maximum of all SIZE values will be available.
-    fn accumulate_outgoing_args_size(&mut self, size: u32);
     /// Get the settings controlling this function's compilation.
     fn flags(&self) -> &settings::Flags;
@@ -251,13 +245,6 @@ pub trait ABICaller {
     /// Emit code to post-adjust the stack, after call return and return-value copies.
     fn emit_stack_post_adjust<C: LowerCtx<I = Self::I>>(&self, ctx: &mut C);
-    /// Accumulate outgoing arguments. This ensures that the caller (as
-    /// identified via the CTX argument) allocates enough space in the
-    /// prologue to hold all arguments and return values for this call.
-    /// There is no code emitted at the call site, everything is done
-    /// in the caller's function prologue.
-    fn accumulate_outgoing_args_size<C: LowerCtx<I = Self::I>>(&self, ctx: &mut C);
     /// Emit the call itself.
     ///
     /// The returned instruction should have proper use- and def-sets according


@@ -22,26 +22,41 @@
 //! area on the stack, given by a hidden extra parameter.
 //!
 //! Note that the exact stack layout is up to us. We settled on the
-//! below design based on several requirements. In particular, we need to be
-//! able to generate instructions (or instruction sequences) to access
-//! arguments, stack slots, and spill slots before we know how many spill slots
-//! or clobber-saves there will be, because of our pass structure. We also
-//! prefer positive offsets to negative offsets because of an asymmetry in
-//! some machines' addressing modes (e.g., on AArch64, positive offsets have a
-//! larger possible range without a long-form sequence to synthesize an
-//! arbitrary offset). Finally, it is not allowed to access memory below the
-//! current SP value.
+//! below design based on several requirements. In particular, we need
+//! to be able to generate instructions (or instruction sequences) to
+//! access arguments, stack slots, and spill slots before we know how
+//! many spill slots or clobber-saves there will be, because of our
+//! pass structure. We also prefer positive offsets to negative
+//! offsets because of an asymmetry in some machines' addressing modes
+//! (e.g., on AArch64, positive offsets have a larger possible range
+//! without a long-form sequence to synthesize an arbitrary
+//! offset). We also need clobber-save registers to be "near" the
+//! frame pointer: Windows unwind information requires it to be within
+//! 240 bytes of RBP. Finally, it is not allowed to access memory
+//! below the current SP value.
 //!
-//! We assume that a prologue first pushes the frame pointer (and return address
-//! above that, if the machine does not do that in hardware). We set FP to point
-//! to this two-word frame record. We store all other frame slots below this
-//! two-word frame record, with the stack pointer remaining at or below this
-//! fixed frame storage for the rest of the function. We can then access frame
-//! storage slots using positive offsets from SP. In order to allow codegen for
-//! the latter before knowing how many clobber-saves we have, and also allow it
-//! while SP is being adjusted to set up a call, we implement a "nominal SP"
-//! tracking feature by which a fixup (distance between actual SP and a
-//! "nominal" SP) is known at each instruction.
+//! We assume that a prologue first pushes the frame pointer (and
+//! return address above that, if the machine does not do that in
+//! hardware). We set FP to point to this two-word frame record. We
+//! store all other frame slots below this two-word frame record, with
+//! the stack pointer remaining at or below this fixed frame storage
+//! for the rest of the function. We can then access frame storage
+//! slots using positive offsets from SP. In order to allow codegen
+//! for the latter before knowing how SP might be adjusted around
+//! callsites, we implement a "nominal SP" tracking feature by which a
+//! fixup (distance between actual SP and a "nominal" SP) is known at
+//! each instruction.
+//!
+//! Note that if we ever support dynamic stack-space allocation (for
+//! `alloca`), we will need a way to reference spill slots and stack
+//! slots without "nominal SP", because we will no longer be able to
+//! know a static offset from SP to the slots at any particular
+//! program point. Probably the best solution at that point will be to
+//! revert to using the frame pointer as the reference for all slots,
+//! and creating a "nominal FP" synthetic addressing mode (analogous
+//! to "nominal SP" today) to allow generating spill/reload and
+//! stackslot accesses before we know how large the clobber-saves will
+//! be.
 //!
 //! # Stack Layout
 //!
@@ -60,17 +75,17 @@
 //! FP after prologue --------> | FP (pushed by prologue)   |
 //!                             +---------------------------+
 //!                             |          ...              |
+//!                             | clobbered callee-saves    |
+//! unwind-frame base --------> | (pushed by prologue)      |
+//!                             +---------------------------+
+//!                             |          ...              |
 //!                             | spill slots               |
 //!                             | (accessed via nominal SP) |
 //!                             |          ...              |
 //!                             | stack slots               |
 //!                             | (accessed via nominal SP) |
 //! nominal SP ---------------> | (alloc'd by prologue)     |
-//!                             +---------------------------+
-//!                             |          ...              |
-//!                             | clobbered callee-saves    |
-//! SP at end of prologue ----> | (pushed by prologue)      |
-//!                             +---------------------------+
+//! (SP at end of prologue)     +---------------------------+
 //!                             | [alignment as needed]     |
 //!                             |          ...              |
 //!                             | args for call             |
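A worked example of the new layout, with made-up sizes (48 bytes of stack/spill slots, clobber saves padded to 32 bytes), showing where a slot lands relative to nominal SP and relative to RBP, and why the clobber saves stay within the unwinder's reach of the frame pointer:

    fn example_frame_offsets() {
        let fixed_frame_storage_size = 48u32; // stackslots + spillslots
        let clobbered_size = 32u32;           // clobber saves, 16-byte aligned

        // Total SP adjustment after `push rbp; mov rbp, rsp`.
        let stack_size = fixed_frame_storage_size + clobbered_size;
        assert_eq!(stack_size, 80);

        // A slot at nominal SP + 8 is at real SP + 8 right after the prologue,
        // and at RBP - (stack_size - 8) = RBP - 72 if addressed off the frame
        // pointer instead.
        let slot_nominal_sp_offset = 8u32;
        let slot_fp_offset = -((stack_size - slot_nominal_sp_offset) as i32);
        assert_eq!(slot_fp_offset, -72);

        // The clobber saves occupy [SP+48, SP+80), i.e. the top `clobbered_size`
        // bytes just below the saved RBP, well inside the 240-byte window that
        // Windows unwind info can describe.
        let first_clobber_sp_offset = fixed_frame_storage_size;
        assert_eq!(first_clobber_sp_offset, 48);
    }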
@@ -406,10 +421,10 @@ pub trait ABIMachineSpec {
     /// Generate the usual frame-setup sequence for this architecture: e.g.,
     /// `push rbp / mov rbp, rsp` on x86-64, or `stp fp, lr, [sp, #-16]!` on
     /// AArch64.
-    fn gen_prologue_frame_setup() -> SmallInstVec<Self::I>;
+    fn gen_prologue_frame_setup(flags: &settings::Flags) -> SmallInstVec<Self::I>;
     /// Generate the usual frame-restore sequence for this architecture.
-    fn gen_epilogue_frame_restore() -> SmallInstVec<Self::I>;
+    fn gen_epilogue_frame_restore(flags: &settings::Flags) -> SmallInstVec<Self::I>;
     /// Generate a probestack call.
     fn gen_probestack(_frame_size: u32) -> SmallInstVec<Self::I>;
@@ -429,7 +444,6 @@ pub trait ABIMachineSpec {
         flags: &settings::Flags,
         clobbers: &Set<Writable<RealReg>>,
         fixed_frame_storage_size: u32,
-        outgoing_args_size: u32,
     ) -> (u64, SmallVec<[Self::I; 16]>);
     /// Generate a clobber-restore sequence. This sequence should perform the
@@ -441,7 +455,6 @@ pub trait ABIMachineSpec {
         flags: &settings::Flags,
         clobbers: &Set<Writable<RealReg>>,
         fixed_frame_storage_size: u32,
-        outgoing_args_size: u32,
     ) -> SmallVec<[Self::I; 16]>;
     /// Generate a call instruction/sequence. This method is provided one
@@ -563,8 +576,6 @@ pub struct ABICalleeImpl<M: ABIMachineSpec> {
     stackslots: PrimaryMap<StackSlot, u32>,
     /// Total stack size of all stackslots.
     stackslots_size: u32,
-    /// Stack size to be reserved for outgoing arguments.
-    outgoing_args_size: u32,
     /// Clobbered registers, from regalloc.
     clobbered: Set<Writable<RealReg>>,
     /// Total number of spillslots, from regalloc.
@@ -678,7 +689,6 @@ impl<M: ABIMachineSpec> ABICalleeImpl<M> {
             sig,
             stackslots,
             stackslots_size: stack_offset,
-            outgoing_args_size: 0,
             clobbered: Set::empty(),
             spillslots: None,
             fixed_frame_storage_size: 0,
@@ -905,12 +915,6 @@ impl<M: ABIMachineSpec> ABICallee for ABICalleeImpl<M> {
         }
     }
-    fn accumulate_outgoing_args_size(&mut self, size: u32) {
-        if size > self.outgoing_args_size {
-            self.outgoing_args_size = size;
-        }
-    }
     fn flags(&self) -> &settings::Flags {
         &self.flags
     }
@@ -1244,7 +1248,7 @@ impl<M: ABIMachineSpec> ABICallee for ABICalleeImpl<M> {
         let mut insts = smallvec![];
         if !self.call_conv.extends_baldrdash() {
             // set up frame
-            insts.extend(M::gen_prologue_frame_setup().into_iter());
+            insts.extend(M::gen_prologue_frame_setup(&self.flags).into_iter());
         }
         let bytes = M::word_bytes();
@@ -1278,30 +1282,25 @@ impl<M: ABIMachineSpec> ABICallee for ABICalleeImpl<M> {
             }
         }
-        // N.B.: "nominal SP", which we use to refer to stackslots and
-        // spillslots, is defined to be equal to the stack pointer at this point
-        // in the prologue.
-        //
-        // If we push any clobbers below, we emit a virtual-SP adjustment
-        // meta-instruction so that the nominal SP references behave as if SP
-        // were still at this point. See documentation for
-        // [crate::machinst::abi_impl](this module) for more details on
-        // stackframe layout and nominal SP maintenance.
         // Save clobbered registers.
-        let (clobber_size, clobber_insts) = M::gen_clobber_save(
+        let (_, clobber_insts) = M::gen_clobber_save(
            self.call_conv,
             &self.flags,
             &self.clobbered,
             self.fixed_frame_storage_size,
-            self.outgoing_args_size,
         );
         insts.extend(clobber_insts);
-        let sp_adj = self.outgoing_args_size as i32 + clobber_size as i32;
-        if sp_adj > 0 {
-            insts.push(M::gen_nominal_sp_adj(sp_adj));
-        }
+        // N.B.: "nominal SP", which we use to refer to stackslots and
+        // spillslots, is defined to be equal to the stack pointer at this point
+        // in the prologue.
+        //
+        // If we push any further data onto the stack in the function
+        // body, we emit a virtual-SP adjustment meta-instruction so
+        // that the nominal SP references behave as if SP were still
+        // at this point. See documentation for
+        // [crate::machinst::abi_impl](this module) for more details
+        // on stackframe layout and nominal SP maintenance.
         self.total_frame_size = Some(total_stacksize);
         insts
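A tiny model of the nominal-SP bookkeeping described in the comment above (hypothetical helper names, not the crate's API): the emitter tracks how far the real SP has moved below nominal SP since the end of the prologue, and a nominal-SP-relative access adds that delta back in.

    fn real_sp_offset(nominal_sp_offset: i64, virtual_sp_offset: i64) -> i64 {
        // virtual_sp_offset is how many bytes SP currently sits below nominal SP
        // (0 for most of the body; nonzero e.g. while call arguments are pushed).
        nominal_sp_offset + virtual_sp_offset
    }

    fn example() {
        // A spill slot at nominal SP + 16, while 32 bytes of call arguments are
        // on the stack, is addressed as real SP + 48.
        assert_eq!(real_sp_offset(16, 32), 48);
    }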
@@ -1316,7 +1315,6 @@ impl<M: ABIMachineSpec> ABICallee for ABICalleeImpl<M> {
             &self.flags,
             &self.clobbered,
             self.fixed_frame_storage_size,
-            self.outgoing_args_size,
         ));
         // N.B.: we do *not* emit a nominal SP adjustment here, because (i) there will be no
@@ -1326,7 +1324,7 @@ impl<M: ABIMachineSpec> ABICallee for ABICalleeImpl<M> {
         // offset for the rest of the body.
         if !self.call_conv.extends_baldrdash() {
-            insts.extend(M::gen_epilogue_frame_restore());
+            insts.extend(M::gen_epilogue_frame_restore(&self.flags));
             insts.push(M::gen_ret());
         }
@@ -1531,11 +1529,6 @@ impl<M: ABIMachineSpec> ABICaller for ABICallerImpl<M> {
         }
     }
-    fn accumulate_outgoing_args_size<C: LowerCtx<I = Self::I>>(&self, ctx: &mut C) {
-        let off = self.sig.stack_arg_space + self.sig.stack_ret_space;
-        ctx.abi().accumulate_outgoing_args_size(off as u32);
-    }
     fn emit_stack_pre_adjust<C: LowerCtx<I = Self::I>>(&self, ctx: &mut C) {
         let off = self.sig.stack_arg_space + self.sig.stack_ret_space;
         adjust_stack_and_nominal_sp::<M, C>(ctx, off as i32, /* is_sub = */ true)


@@ -142,6 +142,7 @@
 use crate::binemit::{Addend, CodeOffset, CodeSink, Reloc, StackMap};
 use crate::ir::{ExternalName, Opcode, SourceLoc, TrapCode};
+use crate::isa::unwind::UnwindInst;
 use crate::machinst::{BlockIndex, MachInstLabelUse, VCodeConstant, VCodeConstants, VCodeInst};
 use crate::timing;
 use cranelift_entity::{entity_impl, SecondaryMap};
@@ -173,6 +174,8 @@ pub struct MachBuffer<I: VCodeInst> {
     srclocs: SmallVec<[MachSrcLoc; 64]>,
     /// Any stack maps referring to this code.
     stack_maps: SmallVec<[MachStackMap; 8]>,
+    /// Any unwind info at a given location.
+    unwind_info: SmallVec<[(CodeOffset, UnwindInst); 8]>,
     /// The current source location in progress (after `start_srcloc()` and
     /// before `end_srcloc()`). This is a (start_offset, src_loc) tuple.
     cur_srcloc: Option<(CodeOffset, SourceLoc)>,
@@ -240,6 +243,8 @@ pub struct MachBufferFinalized {
     srclocs: SmallVec<[MachSrcLoc; 64]>,
     /// Any stack maps referring to this code.
     stack_maps: SmallVec<[MachStackMap; 8]>,
+    /// Any unwind info at a given location.
+    pub unwind_info: SmallVec<[(CodeOffset, UnwindInst); 8]>,
 }
 static UNKNOWN_LABEL_OFFSET: CodeOffset = 0xffff_ffff;
@@ -299,6 +304,7 @@ impl<I: VCodeInst> MachBuffer<I> {
             call_sites: SmallVec::new(),
             srclocs: SmallVec::new(),
             stack_maps: SmallVec::new(),
+            unwind_info: SmallVec::new(),
             cur_srcloc: None,
             label_offsets: SmallVec::new(),
             label_aliases: SmallVec::new(),
@@ -1200,6 +1206,7 @@ impl<I: VCodeInst> MachBuffer<I> {
             call_sites: self.call_sites,
             srclocs,
             stack_maps: self.stack_maps,
+            unwind_info: self.unwind_info,
         }
     }
@@ -1239,6 +1246,11 @@ impl<I: VCodeInst> MachBuffer<I> {
         });
     }
+    /// Add an unwind record at the current offset.
+    pub fn add_unwind(&mut self, unwind: UnwindInst) {
+        self.unwind_info.push((self.cur_offset(), unwind));
+    }
     /// Set the `SourceLoc` for code from this offset until the offset at the
     /// next call to `end_srcloc()`.
     pub fn start_srcloc(&mut self, loc: SourceLoc) {
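To make the emission path concrete, here is a self-contained stand-in for the pattern (not the real `MachBuffer` API beyond what the diff shows): the emitter writes the prologue instruction, then records the unwind directive against the current buffer offset, so each directive describes the machine state just after the instruction it follows.

    struct Sink {
        data: Vec<u8>,
        unwind_info: Vec<(u32, &'static str)>,
    }

    impl Sink {
        fn cur_offset(&self) -> u32 {
            self.data.len() as u32
        }
        fn put_bytes(&mut self, bytes: &[u8]) {
            self.data.extend_from_slice(bytes);
        }
        fn add_unwind(&mut self, directive: &'static str) {
            // Tag the directive with the offset just past the instruction it describes.
            self.unwind_info.push((self.cur_offset(), directive));
        }
    }

    fn main() {
        let mut sink = Sink { data: vec![], unwind_info: vec![] };
        sink.put_bytes(&[0x55]); // push rbp
        sink.add_unwind("PushFrameRegs { offset_upward_to_caller_sp: 16 }");
        sink.put_bytes(&[0x48, 0x89, 0xe5]); // mov rbp, rsp
        assert_eq!(
            sink.unwind_info,
            vec![(1, "PushFrameRegs { offset_upward_to_caller_sp: 16 }")]
        );
    }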


@@ -63,14 +63,12 @@
use crate::binemit::{CodeInfo, CodeOffset, StackMap}; use crate::binemit::{CodeInfo, CodeOffset, StackMap};
use crate::ir::condcodes::IntCC; use crate::ir::condcodes::IntCC;
use crate::ir::{Function, SourceLoc, StackSlot, Type, ValueLabel}; use crate::ir::{Function, SourceLoc, StackSlot, Type, ValueLabel};
use crate::isa::unwind::input as unwind_input;
use crate::result::CodegenResult; use crate::result::CodegenResult;
use crate::settings::Flags; use crate::settings::Flags;
use crate::value_label::ValueLabelsRanges; use crate::value_label::ValueLabelsRanges;
use alloc::boxed::Box; use alloc::boxed::Box;
use alloc::vec::Vec; use alloc::vec::Vec;
use core::fmt::Debug; use core::fmt::Debug;
use core::ops::Range;
use cranelift_entity::PrimaryMap; use cranelift_entity::PrimaryMap;
use regalloc::RegUsageCollector; use regalloc::RegUsageCollector;
use regalloc::{ use regalloc::{
@@ -303,8 +301,6 @@ pub trait MachInstEmit: MachInst {
type State: MachInstEmitState<Self>; type State: MachInstEmitState<Self>;
/// Constant information used in `emit` invocations. /// Constant information used in `emit` invocations.
type Info: MachInstEmitInfo; type Info: MachInstEmitInfo;
/// Unwind info generator.
type UnwindInfo: UnwindInfoGenerator<Self>;
/// Emit the instruction. /// Emit the instruction.
fn emit(&self, code: &mut MachBuffer<Self>, info: &Self::Info, state: &mut Self::State); fn emit(&self, code: &mut MachBuffer<Self>, info: &Self::Info, state: &mut Self::State);
/// Pretty-print the instruction. /// Pretty-print the instruction.
@@ -340,8 +336,6 @@ pub struct MachCompileResult {
pub frame_size: u32, pub frame_size: u32,
/// Disassembly, if requested. /// Disassembly, if requested.
pub disasm: Option<String>, pub disasm: Option<String>,
/// Unwind info.
pub unwind_info: Option<unwind_input::UnwindInfo<Reg>>,
/// Debug info: value labels to registers/stackslots at code offsets. /// Debug info: value labels to registers/stackslots at code offsets.
pub value_labels_ranges: ValueLabelsRanges, pub value_labels_ranges: ValueLabelsRanges,
/// Debug info: stackslots to stack pointer offsets. /// Debug info: stackslots to stack pointer offsets.
@@ -433,29 +427,6 @@ pub enum UnwindInfoKind {
Windows, Windows,
} }
/// Input data for UnwindInfoGenerator.
pub struct UnwindInfoContext<'a, Inst: MachInstEmit> {
/// Function instructions.
pub insts: &'a [Inst],
/// Instruction layout: end offsets
pub insts_layout: &'a [CodeOffset],
/// Length of the function.
pub len: CodeOffset,
/// Prologue range.
pub prologue: Range<u32>,
/// Epilogue ranges.
pub epilogues: &'a [Range<u32>],
}
/// UnwindInfo generator/helper.
pub trait UnwindInfoGenerator<I: MachInstEmit> {
/// Creates unwind info based on function signature and
/// emitted instructions.
fn create_unwind_info(
context: UnwindInfoContext<I>,
) -> CodegenResult<Option<unwind_input::UnwindInfo<Reg>>>;
}
/// Info about an operation that loads or stores from/to the stack. /// Info about an operation that loads or stores from/to the stack.
#[derive(Clone, Copy, Debug)] #[derive(Clone, Copy, Debug)]
pub enum MachInstStackOpInfo { pub enum MachInstStackOpInfo {


@@ -106,9 +106,6 @@ pub struct VCode<I: VCodeInst> {
/// post-regalloc. /// post-regalloc.
safepoint_slots: Vec<Vec<SpillSlot>>, safepoint_slots: Vec<Vec<SpillSlot>>,
/// Ranges for prologue and epilogue instructions.
prologue_epilogue_ranges: Option<(InsnRange, Box<[InsnRange]>)>,
/// Do we generate debug info? /// Do we generate debug info?
generate_debug_info: bool, generate_debug_info: bool,
@@ -330,7 +327,6 @@ impl<I: VCodeInst> VCode<I> {
emit_info, emit_info,
safepoint_insns: vec![], safepoint_insns: vec![],
safepoint_slots: vec![], safepoint_slots: vec![],
prologue_epilogue_ranges: None,
generate_debug_info, generate_debug_info,
insts_layout: RefCell::new((vec![], vec![], 0)), insts_layout: RefCell::new((vec![], vec![], 0)),
constants, constants,
@@ -396,10 +392,6 @@ impl<I: VCodeInst> VCode<I> {
let mut final_safepoint_insns = vec![]; let mut final_safepoint_insns = vec![];
let mut safept_idx = 0; let mut safept_idx = 0;
let mut prologue_start = None;
let mut prologue_end = None;
let mut epilogue_islands = vec![];
assert!(result.target_map.elems().len() == self.num_blocks()); assert!(result.target_map.elems().len() == self.num_blocks());
for block in 0..self.num_blocks() { for block in 0..self.num_blocks() {
let start = result.target_map.elems()[block].get() as usize; let start = result.target_map.elems()[block].get() as usize;
@@ -412,13 +404,11 @@ impl<I: VCodeInst> VCode<I> {
let final_start = final_insns.len() as InsnIndex; let final_start = final_insns.len() as InsnIndex;
if block == self.entry { if block == self.entry {
prologue_start = Some(final_insns.len() as InsnIndex);
// Start with the prologue. // Start with the prologue.
let prologue = self.abi.gen_prologue(); let prologue = self.abi.gen_prologue();
let len = prologue.len(); let len = prologue.len();
final_insns.extend(prologue.into_iter()); final_insns.extend(prologue.into_iter());
final_srclocs.extend(iter::repeat(SourceLoc::default()).take(len)); final_srclocs.extend(iter::repeat(SourceLoc::default()).take(len));
prologue_end = Some(final_insns.len() as InsnIndex);
} }
for i in start..end { for i in start..end {
@@ -444,12 +434,10 @@ impl<I: VCodeInst> VCode<I> {
// with the epilogue. // with the epilogue.
let is_ret = insn.is_term() == MachTerminator::Ret; let is_ret = insn.is_term() == MachTerminator::Ret;
if is_ret { if is_ret {
let epilogue_start = final_insns.len() as InsnIndex;
let epilogue = self.abi.gen_epilogue(); let epilogue = self.abi.gen_epilogue();
let len = epilogue.len(); let len = epilogue.len();
final_insns.extend(epilogue.into_iter()); final_insns.extend(epilogue.into_iter());
final_srclocs.extend(iter::repeat(srcloc).take(len)); final_srclocs.extend(iter::repeat(srcloc).take(len));
epilogue_islands.push(epilogue_start..final_insns.len() as InsnIndex);
} else { } else {
final_insns.push(insn.clone()); final_insns.push(insn.clone());
final_srclocs.push(srcloc); final_srclocs.push(srcloc);
@@ -481,11 +469,6 @@ impl<I: VCodeInst> VCode<I> {
// for the machine backend during emission so that it can do // for the machine backend during emission so that it can do
// target-specific translations of slot numbers to stack offsets. // target-specific translations of slot numbers to stack offsets.
self.safepoint_slots = result.stackmaps; self.safepoint_slots = result.stackmaps;
self.prologue_epilogue_ranges = Some((
prologue_start.unwrap()..prologue_end.unwrap(),
epilogue_islands.into_boxed_slice(),
));
} }
/// Emit the instructions to a `MachBuffer`, containing fixed-up code and external /// Emit the instructions to a `MachBuffer`, containing fixed-up code and external
@@ -600,22 +583,6 @@ impl<I: VCodeInst> VCode<I> {
buffer buffer
} }
/// Generates unwind info.
pub fn unwind_info(
&self,
) -> crate::result::CodegenResult<Option<crate::isa::unwind::input::UnwindInfo<Reg>>> {
let layout = &self.insts_layout.borrow();
let (prologue, epilogues) = self.prologue_epilogue_ranges.as_ref().unwrap();
let context = UnwindInfoContext {
insts: &self.insts,
insts_layout: &layout.0,
len: layout.2,
prologue: prologue.clone(),
epilogues,
};
I::UnwindInfo::create_unwind_info(context)
}
/// Generates value-label ranges. /// Generates value-label ranges.
pub fn value_labels_ranges(&self) -> ValueLabelsRanges { pub fn value_labels_ranges(&self) -> ValueLabelsRanges {
if !self.has_value_labels { if !self.has_value_labels {


@@ -399,6 +399,7 @@ enable_simd = false
 enable_atomics = true
 enable_safepoints = false
 enable_llvm_abi_extensions = false
+unwind_info = true
 emit_all_ones_funcaddrs = false
 enable_probestack = true
 probestack_func_adjusts_sp = false


@@ -1,4 +1,5 @@
 test compile
+set unwind_info=false
 target aarch64
 function %f0(i64, i32) -> i32 {
@@ -11,7 +12,6 @@ block0(v0: i64, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr w0, [x0, w1, UXTW] ; nextln: ldr w0, [x0, w1, UXTW]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -25,7 +25,6 @@ block0(v0: i64, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr w0, [x0, w1, UXTW] ; nextln: ldr w0, [x0, w1, UXTW]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -39,7 +38,6 @@ block0(v0: i64, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr w0, [x0, w1, SXTW] ; nextln: ldr w0, [x0, w1, SXTW]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -53,7 +51,6 @@ block0(v0: i64, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr w0, [x0, w1, SXTW] ; nextln: ldr w0, [x0, w1, SXTW]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -68,7 +65,6 @@ block0(v0: i64, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr w0, [x0, w1, SXTW] ; nextln: ldr w0, [x0, w1, SXTW]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -83,7 +79,6 @@ block0(v0: i64, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr w0, [x0, w1, SXTW] ; nextln: ldr w0, [x0, w1, SXTW]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -100,7 +95,6 @@ block0(v0: i32, v1: i32):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: mov w0, w0 ; nextln: mov w0, w0
; nextln: ldr w0, [x0, w1, UXTW] ; nextln: ldr w0, [x0, w1, UXTW]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -124,7 +118,6 @@ block0(v0: i64, v1: i32):
; nextln: add x0, x2, x0 ; nextln: add x0, x2, x0
; nextln: add x0, x0, x1, SXTW ; nextln: add x0, x0, x1, SXTW
; nextln: ldr w0, [x0, w1, SXTW] ; nextln: ldr w0, [x0, w1, SXTW]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -145,7 +138,6 @@ block0(v0: i64, v1: i64, v2: i64):
; nextln: add x0, x0, x2 ; nextln: add x0, x0, x2
; nextln: add x0, x0, x1 ; nextln: add x0, x0, x1
; nextln: ldur w0, [x0, #48] ; nextln: ldur w0, [x0, #48]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -167,7 +159,6 @@ block0(v0: i64, v1: i64, v2: i64):
; nextln: add x1, x3, x1 ; nextln: add x1, x3, x1
; nextln: add x1, x1, x2 ; nextln: add x1, x1, x2
; nextln: ldr w0, [x1, x0] ; nextln: ldr w0, [x1, x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -184,7 +175,6 @@ block0:
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #1234 ; nextln: movz x0, #1234
; nextln: ldr w0, [x0] ; nextln: ldr w0, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -200,7 +190,6 @@ block0(v0: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: add x0, x0, #8388608 ; nextln: add x0, x0, #8388608
; nextln: ldr w0, [x0] ; nextln: ldr w0, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -216,7 +205,6 @@ block0(v0: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sub x0, x0, #4 ; nextln: sub x0, x0, #4
; nextln: ldr w0, [x0] ; nextln: ldr w0, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -234,7 +222,6 @@ block0(v0: i64):
; nextln: movk w1, #15258, LSL #16 ; nextln: movk w1, #15258, LSL #16
; nextln: add x0, x1, x0 ; nextln: add x0, x1, x0
; nextln: ldr w0, [x0] ; nextln: ldr w0, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -249,7 +236,6 @@ block0(v0: i32):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sxtw x0, w0 ; nextln: sxtw x0, w0
; nextln: ldr w0, [x0] ; nextln: ldr w0, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -266,7 +252,6 @@ block0(v0: i32, v1: i32):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sxtw x0, w0 ; nextln: sxtw x0, w0
; nextln: ldr w0, [x0, w1, SXTW] ; nextln: ldr w0, [x0, w1, SXTW]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -281,7 +266,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr w0, [x0] ; nextln: ldr w0, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -296,7 +280,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldur w0, [x0, #4] ; nextln: ldur w0, [x0, #4]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -311,7 +294,6 @@ block0(v0: i64, v1: i32):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr d0, [x0, w1, UXTW] ; nextln: ldr d0, [x0, w1, UXTW]
; nextln: sxtl v0.8h, v0.8b ; nextln: sxtl v0.8h, v0.8b
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -326,7 +308,6 @@ block0(v0: i64, v1: i64):
; nextln: add x0, x0, x1 ; nextln: add x0, x0, x1
; nextln: ldr d0, [x0, #8] ; nextln: ldr d0, [x0, #8]
; nextln: uxtl v0.4s, v0.4h ; nextln: uxtl v0.4s, v0.4h
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -341,7 +322,6 @@ block0(v0: i64, v1: i32):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr d0, [x0, w1, SXTW] ; nextln: ldr d0, [x0, w1, SXTW]
; nextln: uxtl v0.2d, v0.2s ; nextln: uxtl v0.2d, v0.2s
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -357,7 +337,6 @@ block0(v0: i64, v1: i64, v2: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movn w0, #4097 ; nextln: movn w0, #4097
; nextln: ldrsh x0, [x0] ; nextln: ldrsh x0, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -373,7 +352,6 @@ block0(v0: i64, v1: i64, v2: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #4098 ; nextln: movz x0, #4098
; nextln: ldrsh x0, [x0] ; nextln: ldrsh x0, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -390,7 +368,6 @@ block0(v0: i64, v1: i64, v2: i64):
; nextln: movn w0, #4097 ; nextln: movn w0, #4097
; nextln: sxtw x0, w0 ; nextln: sxtw x0, w0
; nextln: ldrsh x0, [x0] ; nextln: ldrsh x0, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -407,6 +384,5 @@ block0(v0: i64, v1: i64, v2: i64):
; nextln: movz x0, #4098 ; nextln: movz x0, #4098
; nextln: sxtw x0, w0 ; nextln: sxtw x0, w0
; nextln: ldrsh x0, [x0] ; nextln: ldrsh x0, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret


@@ -1,4 +1,5 @@
 test compile
+set unwind_info=false
 target aarch64
 function %f1(i64, i64) -> i64 {
@@ -10,7 +11,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: add x0, x0, x1 ; nextln: add x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -24,7 +24,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sub x0, x0, x1 ; nextln: sub x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -37,7 +36,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: madd x0, x0, x1, xzr ; nextln: madd x0, x0, x1, xzr
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -50,7 +48,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: umulh x0, x0, x1 ; nextln: umulh x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -63,7 +60,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: smulh x0, x0, x1 ; nextln: smulh x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -81,7 +77,6 @@ block0(v0: i64, v1: i64):
; nextln: ccmp x0, #1, #nzcv, eq ; nextln: ccmp x0, #1, #nzcv, eq
; nextln: b.vc 8 ; udf ; nextln: b.vc 8 ; udf
; nextln: mov x0, x2 ; nextln: mov x0, x2
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -101,7 +96,6 @@ block0(v0: i64):
; nextln: ccmp x0, #1, #nzcv, eq ; nextln: ccmp x0, #1, #nzcv, eq
; nextln: b.vc 8 ; udf ; nextln: b.vc 8 ; udf
; nextln: mov x0, x1 ; nextln: mov x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -115,7 +109,6 @@ block0(v0: i64, v1: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: udiv x0, x0, x1 ; nextln: udiv x0, x0, x1
; nextln: cbnz x1, 8 ; udf ; nextln: cbnz x1, 8 ; udf
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -131,7 +124,6 @@ block0(v0: i64):
; nextln: movz x1, #2 ; nextln: movz x1, #2
; nextln: udiv x0, x0, x1 ; nextln: udiv x0, x0, x1
; nextln: cbnz x1, 8 ; udf ; nextln: cbnz x1, 8 ; udf
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -146,7 +138,6 @@ block0(v0: i64, v1: i64):
; nextln: sdiv x2, x0, x1 ; nextln: sdiv x2, x0, x1
; nextln: cbnz x1, 8 ; udf ; nextln: cbnz x1, 8 ; udf
; nextln: msub x0, x2, x1, x0 ; nextln: msub x0, x2, x1, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -161,7 +152,6 @@ block0(v0: i64, v1: i64):
; nextln: udiv x2, x0, x1 ; nextln: udiv x2, x0, x1
; nextln: cbnz x1, 8 ; udf ; nextln: cbnz x1, 8 ; udf
; nextln: msub x0, x2, x1, x0 ; nextln: msub x0, x2, x1, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -181,7 +171,6 @@ block0(v0: i32, v1: i32):
; nextln: adds wzr, w2, #1 ; nextln: adds wzr, w2, #1
; nextln: ccmp w3, #1, #nzcv, eq ; nextln: ccmp w3, #1, #nzcv, eq
; nextln: b.vc 8 ; udf ; nextln: b.vc 8 ; udf
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -203,7 +192,6 @@ block0(v0: i32):
; nextln: ccmp w0, #1, #nzcv, eq ; nextln: ccmp w0, #1, #nzcv, eq
; nextln: b.vc 8 ; udf ; nextln: b.vc 8 ; udf
; nextln: mov x0, x1 ; nextln: mov x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -219,7 +207,6 @@ block0(v0: i32, v1: i32):
; nextln: mov w1, w1 ; nextln: mov w1, w1
; nextln: udiv x0, x0, x1 ; nextln: udiv x0, x0, x1
; nextln: cbnz x1, 8 ; udf ; nextln: cbnz x1, 8 ; udf
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -237,7 +224,6 @@ block0(v0: i32):
; nextln: movz x1, #2 ; nextln: movz x1, #2
; nextln: udiv x0, x0, x1 ; nextln: udiv x0, x0, x1
; nextln: cbnz x1, 8 ; udf ; nextln: cbnz x1, 8 ; udf
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -254,7 +240,6 @@ block0(v0: i32, v1: i32):
; nextln: sdiv x2, x0, x1 ; nextln: sdiv x2, x0, x1
; nextln: cbnz x1, 8 ; udf ; nextln: cbnz x1, 8 ; udf
; nextln: msub x0, x2, x1, x0 ; nextln: msub x0, x2, x1, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -271,7 +256,6 @@ block0(v0: i32, v1: i32):
; nextln: udiv x2, x0, x1 ; nextln: udiv x2, x0, x1
; nextln: cbnz x1, 8 ; udf ; nextln: cbnz x1, 8 ; udf
; nextln: msub x0, x2, x1, x0 ; nextln: msub x0, x2, x1, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -284,7 +268,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: and x0, x0, x1 ; nextln: and x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -297,7 +280,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: orr x0, x0, x1 ; nextln: orr x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -310,7 +292,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: eor x0, x0, x1 ; nextln: eor x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -323,7 +304,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: bic x0, x0, x1 ; nextln: bic x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -336,7 +316,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: orn x0, x0, x1 ; nextln: orn x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -349,7 +328,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: eon x0, x0, x1 ; nextln: eon x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -362,7 +340,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: orn x0, xzr, x0 ; nextln: orn x0, xzr, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -377,7 +354,6 @@ block0(v0: i32, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sub w0, w1, w0, LSL 21 ; nextln: sub w0, w1, w0, LSL 21
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -391,7 +367,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sub w0, w0, #1 ; nextln: sub w0, w0, #1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -405,7 +380,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: add w0, w0, #1 ; nextln: add w0, w0, #1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -419,7 +393,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: add x0, x0, #1 ; nextln: add x0, x0, #1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -434,7 +407,6 @@ block0(v0: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #1 ; nextln: movz x0, #1
; nextln: sub x0, xzr, x0 ; nextln: sub x0, xzr, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -451,6 +423,5 @@ block0(v0: i8x16):
; nextln: sub w0, wzr, w0 ; nextln: sub w0, wzr, w0
; nextln: dup v1.16b, w0 ; nextln: dup v1.16b, w0
; nextln: ushl v0.16b, v0.16b, v1.16b ; nextln: ushl v0.16b, v0.16b, v1.16b
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f(i32, i32) -> i32 { function %f(i32, i32) -> i32 {
@@ -8,7 +9,6 @@ block0(v0: i32, v1: i32):
v2 = iadd v0, v1 v2 = iadd v0, v1
; check: add w0, w0, w1 ; check: add w0, w0, w1
return v2 return v2
; check: mov sp, fp
; check: ldp fp, lr, [sp], #16 ; check: ldp fp, lr, [sp], #16
; check: ret ; check: ret
} }
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %a(i8) -> i8 { function %a(i8) -> i8 {
@@ -11,7 +12,6 @@ block0(v0: i8):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: rbit w0, w0 ; nextln: rbit w0, w0
; nextln: lsr w0, w0, #24 ; nextln: lsr w0, w0, #24
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -25,7 +25,6 @@ block0(v0: i16):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: rbit w0, w0 ; nextln: rbit w0, w0
; nextln: lsr w0, w0, #16 ; nextln: lsr w0, w0, #16
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -38,7 +37,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: rbit w0, w0 ; nextln: rbit w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -51,7 +49,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: rbit x0, x0 ; nextln: rbit x0, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -66,7 +63,6 @@ block0(v0: i8):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxtb w0, w0 ; nextln: uxtb w0, w0
; nextln: clz w0, w0 ; nextln: clz w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -80,7 +76,6 @@ block0(v0: i16):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxth w0, w0 ; nextln: uxth w0, w0
; nextln: clz w0, w0 ; nextln: clz w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -93,7 +88,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: clz w0, w0 ; nextln: clz w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -106,7 +100,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: clz x0, x0 ; nextln: clz x0, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -120,7 +113,6 @@ block0(v0: i8):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxtb w0, w0 ; nextln: uxtb w0, w0
; nextln: cls w0, w0 ; nextln: cls w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -134,7 +126,6 @@ block0(v0: i16):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxth w0, w0 ; nextln: uxth w0, w0
; nextln: cls w0, w0 ; nextln: cls w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -147,7 +138,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: cls w0, w0 ; nextln: cls w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -160,7 +150,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: cls x0, x0 ; nextln: cls x0, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -175,7 +164,6 @@ block0(v0: i8):
; nextln: rbit w0, w0 ; nextln: rbit w0, w0
; nextln: lsr w0, w0, #24 ; nextln: lsr w0, w0, #24
; nextln: clz w0, w0 ; nextln: clz w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -190,7 +178,6 @@ block0(v0: i16):
; nextln: rbit w0, w0 ; nextln: rbit w0, w0
; nextln: lsr w0, w0, #16 ; nextln: lsr w0, w0, #16
; nextln: clz w0, w0 ; nextln: clz w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -204,7 +191,6 @@ block0(v0: i32):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: rbit w0, w0 ; nextln: rbit w0, w0
; nextln: clz w0, w0 ; nextln: clz w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -218,7 +204,6 @@ block0(v0: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: rbit x0, x0 ; nextln: rbit x0, x0
; nextln: clz x0, x0 ; nextln: clz x0, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -234,7 +219,6 @@ block0(v0: i64):
; nextln: cnt v0.8b, v0.8b ; nextln: cnt v0.8b, v0.8b
; nextln: addv b0, v0.8b ; nextln: addv b0, v0.8b
; nextln: umov w0, v0.b[0] ; nextln: umov w0, v0.b[0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -250,7 +234,6 @@ block0(v0: i32):
; nextln: cnt v0.8b, v0.8b ; nextln: cnt v0.8b, v0.8b
; nextln: addv b0, v0.8b ; nextln: addv b0, v0.8b
; nextln: umov w0, v0.b[0] ; nextln: umov w0, v0.b[0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -266,7 +249,6 @@ block0(v0: i16):
; nextln: cnt v0.8b, v0.8b ; nextln: cnt v0.8b, v0.8b
; nextln: addp v0.8b, v0.8b, v0.8b ; nextln: addp v0.8b, v0.8b, v0.8b
; nextln: umov w0, v0.b[0] ; nextln: umov w0, v0.b[0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -281,7 +263,6 @@ block0(v0: i8):
; nextln: fmov s0, w0 ; nextln: fmov s0, w0
; nextln: cnt v0.8b, v0.8b ; nextln: cnt v0.8b, v0.8b
; nextln: umov w0, v0.b[0] ; nextln: umov w0, v0.b[0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -296,7 +277,6 @@ block0:
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #255 ; nextln: movz x0, #255
; nextln: sxtb w0, w0 ; nextln: sxtb w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -311,6 +291,5 @@ block0:
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #1 ; nextln: movz x0, #1
; nextln: sbfx w0, w0, #0, #1 ; nextln: sbfx w0, w0, #0, #1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f(i64, i64) -> i64 { function %f(i64, i64) -> i64 {
@@ -11,6 +12,5 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: blr x1 ; nextln: blr x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
set enable_probestack=false set enable_probestack=false
target aarch64 target aarch64
@@ -14,7 +15,6 @@ block0(v0: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr x1, 8 ; b 12 ; data ; nextln: ldr x1, 8 ; b 12 ; data
; nextln: blr x1 ; nextln: blr x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -31,8 +31,7 @@ block0(v0: i32):
; check: mov w0, w0 ; check: mov w0, w0
; nextln: ldr x1, 8 ; b 12 ; data ; nextln: ldr x1, 8 ; b 12 ; data
; nextln: blr x1 ; nextln: blr x1
; check: mov sp, fp ; check: ldp fp, lr, [sp], #16
; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
function %f3(i32) -> i32 uext baldrdash_system_v { function %f3(i32) -> i32 uext baldrdash_system_v {
@@ -55,8 +54,7 @@ block0(v0: i32):
; check: sxtw x0, w0 ; check: sxtw x0, w0
; nextln: ldr x1, 8 ; b 12 ; data ; nextln: ldr x1, 8 ; b 12 ; data
; nextln: blr x1 ; nextln: blr x1
; check: mov sp, fp ; check: ldp fp, lr, [sp], #16
; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
function %f5(i32) -> i32 sext baldrdash_system_v { function %f5(i32) -> i32 sext baldrdash_system_v {
@@ -93,7 +91,6 @@ block0(v0: i8):
; nextln: blr x8 ; nextln: blr x8
; nextln: add sp, sp, #16 ; nextln: add sp, sp, #16
; nextln: virtual_sp_offset_adjust -16 ; nextln: virtual_sp_offset_adjust -16
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -116,7 +113,6 @@ block0(v0: i8):
; nextln: movz x6, #42 ; nextln: movz x6, #42
; nextln: movz x7, #42 ; nextln: movz x7, #42
; nextln: sturb w9, [x8] ; nextln: sturb w9, [x8]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -161,7 +157,7 @@ block0:
; nextln: ldr d0, [sp, #16] ; nextln: ldr d0, [sp, #16]
; nextln: ldr x0, 8 ; b 12 ; data ; nextln: ldr x0, 8 ; b 12 ; data
; nextln: blr x0 ; nextln: blr x0
; nextln: mov sp, fp ; nextln: add sp, sp, #32
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -204,7 +200,7 @@ block0:
; nextln: ldr q0, [sp, #32] ; nextln: ldr q0, [sp, #32]
; nextln: ldr x0, 8 ; b 12 ; data ; nextln: ldr x0, 8 ; b 12 ; data
; nextln: blr x0 ; nextln: blr x0
; nextln: mov sp, fp ; nextln: add sp, sp, #48
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -251,6 +247,6 @@ block0:
; nextln: ldr q0, [sp, #16] ; nextln: ldr q0, [sp, #16]
; nextln: ldr x0, 8 ; b 12 ; data ; nextln: ldr x0, 8 ; b 12 ; data
; nextln: blr x0 ; nextln: blr x0
; nextln: mov sp, fp ; nextln: add sp, sp, #32
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f(i64, i64) -> b1 { function %f(i64, i64) -> b1 {
@@ -11,7 +12,6 @@ block0(v0: i64, v1: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: subs xzr, x0, x1 ; nextln: subs xzr, x0, x1
; nextln: cset x0, eq ; nextln: cset x0, eq
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -37,12 +37,10 @@ block2:
; nextln: b.eq label1 ; b label2 ; nextln: b.eq label1 ; b label2
; check: Block 1: ; check: Block 1:
; check: movz x0, #1 ; check: movz x0, #1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
; check: Block 2: ; check: Block 2:
; check: movz x0, #2 ; check: movz x0, #2
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -62,6 +60,5 @@ block1:
; nextln: subs xzr, x0, x1 ; nextln: subs xzr, x0, x1
; check: Block 1: ; check: Block 1:
; check: movz x0, #1 ; check: movz x0, #1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f(i8, i64, i64) -> i64 { function %f(i8, i64, i64) -> i64 {
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f() -> b8 { function %f() -> b8 {
@@ -10,7 +11,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #255 ; nextln: movz x0, #255
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -23,7 +23,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #0 ; nextln: movz x0, #0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -36,7 +35,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #0 ; nextln: movz x0, #0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -49,7 +47,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #65535 ; nextln: movz x0, #65535
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -62,7 +59,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #65535, LSL #16 ; nextln: movz x0, #65535, LSL #16
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -75,7 +71,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #65535, LSL #32 ; nextln: movz x0, #65535, LSL #32
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -88,7 +83,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #65535, LSL #48 ; nextln: movz x0, #65535, LSL #48
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -101,7 +95,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movn x0, #0 ; nextln: movn x0, #0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -114,7 +107,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movn x0, #65535 ; nextln: movn x0, #65535
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -127,7 +119,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movn x0, #65535, LSL #16 ; nextln: movn x0, #65535, LSL #16
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -140,7 +131,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movn x0, #65535, LSL #32 ; nextln: movn x0, #65535, LSL #32
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -153,7 +143,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movn x0, #65535, LSL #48 ; nextln: movn x0, #65535, LSL #48
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -169,7 +158,6 @@ block0:
; nextln: movk x0, #4626, LSL #16 ; nextln: movk x0, #4626, LSL #16
; nextln: movk x0, #61603, LSL #32 ; nextln: movk x0, #61603, LSL #32
; nextln: movk x0, #62283, LSL #48 ; nextln: movk x0, #62283, LSL #48
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -183,7 +171,6 @@ block0:
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #7924, LSL #16 ; nextln: movz x0, #7924, LSL #16
; nextln: movk x0, #4841, LSL #48 ; nextln: movk x0, #4841, LSL #48
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -197,7 +184,6 @@ block0:
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movn x0, #57611, LSL #16 ; nextln: movn x0, #57611, LSL #16
; nextln: movk x0, #4841, LSL #48 ; nextln: movk x0, #4841, LSL #48
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -210,7 +196,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: orr x0, xzr, #4294967295 ; nextln: orr x0, xzr, #4294967295
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -223,7 +208,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movn w0, #8 ; nextln: movn w0, #8
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -236,7 +220,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movn w0, #8 ; nextln: movn w0, #8
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -249,6 +232,5 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movn x0, #8 ; nextln: movn x0, #8
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f(i8) -> i64 { function %f(i8) -> i64 {
@@ -13,6 +14,5 @@ block0(v0: i8):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x1, #42 ; nextln: movz x1, #42
; nextln: add x0, x1, x0, SXTB ; nextln: add x0, x1, x0, SXTB
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function u0:0(i8) -> f32 { function u0:0(i8) -> f32 {
@@ -9,7 +10,6 @@ block0(v0: i8):
; check: uxtb w0, w0 ; check: uxtb w0, w0
; check: ucvtf s0, w0 ; check: ucvtf s0, w0
return v1 return v1
; check: mov sp, fp
; check: ldp fp, lr, [sp], #16 ; check: ldp fp, lr, [sp], #16
; check: ret ; check: ret
} }
@@ -22,7 +22,6 @@ block0(v0: i8):
; check: uxtb w0, w0 ; check: uxtb w0, w0
; check: ucvtf d0, w0 ; check: ucvtf d0, w0
return v1 return v1
; check: mov sp, fp
; check: ldp fp, lr, [sp], #16 ; check: ldp fp, lr, [sp], #16
; check: ret ; check: ret
} }
@@ -35,7 +34,6 @@ block0(v0: i16):
; check: uxth w0, w0 ; check: uxth w0, w0
; check: ucvtf s0, w0 ; check: ucvtf s0, w0
return v1 return v1
; check: mov sp, fp
; check: ldp fp, lr, [sp], #16 ; check: ldp fp, lr, [sp], #16
; check: ret ; check: ret
} }
@@ -48,7 +46,6 @@ block0(v0: i16):
; check: uxth w0, w0 ; check: uxth w0, w0
; check: ucvtf d0, w0 ; check: ucvtf d0, w0
return v1 return v1
; check: mov sp, fp
; check: ldp fp, lr, [sp], #16 ; check: ldp fp, lr, [sp], #16
; check: ret ; check: ret
} }
@@ -70,7 +67,6 @@ block0(v0: f32):
; check: b.mi 8 ; udf ; check: b.mi 8 ; udf
; check: fcvtzu w0, s0 ; check: fcvtzu w0, s0
return v1 return v1
; check: mov sp, fp
; check: ldp fp, lr, [sp], #16 ; check: ldp fp, lr, [sp], #16
; check: ret ; check: ret
} }
@@ -92,7 +88,6 @@ block0(v0: f64):
; check: b.mi 8 ; udf ; check: b.mi 8 ; udf
; check: fcvtzu w0, d0 ; check: fcvtzu w0, d0
return v1 return v1
; check: mov sp, fp
; check: ldp fp, lr, [sp], #16 ; check: ldp fp, lr, [sp], #16
; check: ret ; check: ret
} }
@@ -114,7 +109,6 @@ block0(v0: f32):
; check: b.mi 8 ; udf ; check: b.mi 8 ; udf
; check: fcvtzu w0, s0 ; check: fcvtzu w0, s0
return v1 return v1
; check: mov sp, fp
; check: ldp fp, lr, [sp], #16 ; check: ldp fp, lr, [sp], #16
; check: ret ; check: ret
} }
@@ -136,7 +130,6 @@ block0(v0: f64):
; check: b.mi 8 ; udf ; check: b.mi 8 ; udf
; check: fcvtzu w0, d0 ; check: fcvtzu w0, d0
return v1 return v1
; check: mov sp, fp
; check: ldp fp, lr, [sp], #16 ; check: ldp fp, lr, [sp], #16
; check: ret ; check: ret
} }
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f1(f32, f32) -> f32 { function %f1(f32, f32) -> f32 {
@@ -10,7 +11,6 @@ block0(v0: f32, v1: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fadd s0, s0, s1 ; nextln: fadd s0, s0, s1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -23,7 +23,6 @@ block0(v0: f64, v1: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fadd d0, d0, d1 ; nextln: fadd d0, d0, d1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -36,7 +35,6 @@ block0(v0: f32, v1: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fsub s0, s0, s1 ; nextln: fsub s0, s0, s1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -49,7 +47,6 @@ block0(v0: f64, v1: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fsub d0, d0, d1 ; nextln: fsub d0, d0, d1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -62,7 +59,6 @@ block0(v0: f32, v1: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fmul s0, s0, s1 ; nextln: fmul s0, s0, s1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -75,7 +71,6 @@ block0(v0: f64, v1: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fmul d0, d0, d1 ; nextln: fmul d0, d0, d1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -88,7 +83,6 @@ block0(v0: f32, v1: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fdiv s0, s0, s1 ; nextln: fdiv s0, s0, s1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -101,7 +95,6 @@ block0(v0: f64, v1: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fdiv d0, d0, d1 ; nextln: fdiv d0, d0, d1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -114,7 +107,6 @@ block0(v0: f32, v1: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fmin s0, s0, s1 ; nextln: fmin s0, s0, s1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -127,7 +119,6 @@ block0(v0: f64, v1: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fmin d0, d0, d1 ; nextln: fmin d0, d0, d1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -140,7 +131,6 @@ block0(v0: f32, v1: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fmax s0, s0, s1 ; nextln: fmax s0, s0, s1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -153,7 +143,6 @@ block0(v0: f64, v1: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fmax d0, d0, d1 ; nextln: fmax d0, d0, d1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -166,7 +155,6 @@ block0(v0: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fsqrt s0, s0 ; nextln: fsqrt s0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -179,7 +167,6 @@ block0(v0: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fsqrt d0, d0 ; nextln: fsqrt d0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -192,7 +179,6 @@ block0(v0: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fabs s0, s0 ; nextln: fabs s0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -205,7 +191,6 @@ block0(v0: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fabs d0, d0 ; nextln: fabs d0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -218,7 +203,6 @@ block0(v0: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fneg s0, s0 ; nextln: fneg s0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -231,7 +215,6 @@ block0(v0: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fneg d0, d0 ; nextln: fneg d0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -244,7 +227,6 @@ block0(v0: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fcvt d0, s0 ; nextln: fcvt d0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -257,7 +239,6 @@ block0(v0: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fcvt s0, d0 ; nextln: fcvt s0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -270,7 +251,6 @@ block0(v0: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: frintp s0, s0 ; nextln: frintp s0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -283,7 +263,6 @@ block0(v0: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: frintp d0, d0 ; nextln: frintp d0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -296,7 +275,6 @@ block0(v0: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: frintm s0, s0 ; nextln: frintm s0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -309,7 +287,6 @@ block0(v0: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: frintm d0, d0 ; nextln: frintm d0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -322,7 +299,6 @@ block0(v0: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: frintz s0, s0 ; nextln: frintz s0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -335,7 +311,6 @@ block0(v0: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: frintz d0, d0 ; nextln: frintz d0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -348,7 +323,6 @@ block0(v0: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: frintn s0, s0 ; nextln: frintn s0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -361,7 +335,6 @@ block0(v0: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: frintn d0, d0 ; nextln: frintn d0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -374,7 +347,6 @@ block0(v0: f32, v1: f32, v2: f32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fmadd s0, s0, s1, s2 ; nextln: fmadd s0, s0, s1, s2
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -387,7 +359,6 @@ block0(v0: f64, v1: f64, v2: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fmadd d0, d0, d1, d2 ; nextln: fmadd d0, d0, d1, d2
; nextln: mov sp, fp
function %f31(f32, f32) -> f32 { function %f31(f32, f32) -> f32 {
block0(v0: f32, v1: f32): block0(v0: f32, v1: f32):
@@ -399,7 +370,6 @@ block0(v0: f32, v1: f32):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ushr v1.2s, v1.2s, #31 ; nextln: ushr v1.2s, v1.2s, #31
; nextln: sli v0.2s, v1.2s, #31 ; nextln: sli v0.2s, v1.2s, #31
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -413,7 +383,6 @@ block0(v0: f64, v1: f64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ushr d1, d1, #63 ; nextln: ushr d1, d1, #63
; nextln: sli d0, d1, #63 ; nextln: sli d0, d1, #63
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -436,7 +405,6 @@ block0(v0: f32):
; nextln: fcmp s0, s1 ; nextln: fcmp s0, s1
; nextln: b.mi 8 ; udf ; nextln: b.mi 8 ; udf
; nextln: fcvtzu w0, s0 ; nextln: fcvtzu w0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -459,7 +427,6 @@ block0(v0: f32):
; nextln: fcmp s0, s1 ; nextln: fcmp s0, s1
; nextln: b.mi 8 ; udf ; nextln: b.mi 8 ; udf
; nextln: fcvtzs w0, s0 ; nextln: fcvtzs w0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -482,7 +449,6 @@ block0(v0: f32):
; nextln: fcmp s0, s1 ; nextln: fcmp s0, s1
; nextln: b.mi 8 ; udf ; nextln: b.mi 8 ; udf
; nextln: fcvtzu x0, s0 ; nextln: fcvtzu x0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -505,7 +471,6 @@ block0(v0: f32):
; nextln: fcmp s0, s1 ; nextln: fcmp s0, s1
; nextln: b.mi 8 ; udf ; nextln: b.mi 8 ; udf
; nextln: fcvtzs x0, s0 ; nextln: fcvtzs x0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -528,7 +493,6 @@ block0(v0: f64):
; nextln: fcmp d0, d1 ; nextln: fcmp d0, d1
; nextln: b.mi 8 ; udf ; nextln: b.mi 8 ; udf
; nextln: fcvtzu w0, d0 ; nextln: fcvtzu w0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -550,7 +514,6 @@ block0(v0: f64):
; nextln: fcmp d0, d1 ; nextln: fcmp d0, d1
; nextln: b.mi 8 ; udf ; nextln: b.mi 8 ; udf
; nextln: fcvtzs w0, d0 ; nextln: fcvtzs w0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -573,7 +536,6 @@ block0(v0: f64):
; nextln: fcmp d0, d1 ; nextln: fcmp d0, d1
; nextln: b.mi 8 ; udf ; nextln: b.mi 8 ; udf
; nextln: fcvtzu x0, d0 ; nextln: fcvtzu x0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -596,7 +558,6 @@ block0(v0: f64):
; nextln: fcmp d0, d1 ; nextln: fcmp d0, d1
; nextln: b.mi 8 ; udf ; nextln: b.mi 8 ; udf
; nextln: fcvtzs x0, d0 ; nextln: fcvtzs x0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -609,7 +570,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ucvtf s0, w0 ; nextln: ucvtf s0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -622,7 +582,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: scvtf s0, w0 ; nextln: scvtf s0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -635,7 +594,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ucvtf s0, x0 ; nextln: ucvtf s0, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -648,7 +606,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: scvtf s0, x0 ; nextln: scvtf s0, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -661,7 +618,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ucvtf d0, w0 ; nextln: ucvtf d0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -674,7 +630,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: scvtf d0, w0 ; nextln: scvtf d0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -687,7 +642,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ucvtf d0, x0 ; nextln: ucvtf d0, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -700,7 +654,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: scvtf d0, x0 ; nextln: scvtf d0, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -720,7 +673,6 @@ block0(v0: f32):
; nextln: fcmp s0, s0 ; nextln: fcmp s0, s0
; nextln: fcsel s0, s1, s2, ne ; nextln: fcsel s0, s1, s2, ne
; nextln: fcvtzu w0, s0 ; nextln: fcvtzu w0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -742,7 +694,6 @@ block0(v0: f32):
; nextln: fcmp s0, s0 ; nextln: fcmp s0, s0
; nextln: fcsel s0, s2, s1, ne ; nextln: fcsel s0, s2, s1, ne
; nextln: fcvtzs w0, s0 ; nextln: fcvtzs w0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -762,7 +713,6 @@ block0(v0: f32):
; nextln: fcmp s0, s0 ; nextln: fcmp s0, s0
; nextln: fcsel s0, s1, s2, ne ; nextln: fcsel s0, s1, s2, ne
; nextln: fcvtzu x0, s0 ; nextln: fcvtzu x0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -784,7 +734,6 @@ block0(v0: f32):
; nextln: fcmp s0, s0 ; nextln: fcmp s0, s0
; nextln: fcsel s0, s2, s1, ne ; nextln: fcsel s0, s2, s1, ne
; nextln: fcvtzs x0, s0 ; nextln: fcvtzs x0, s0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -803,7 +752,6 @@ block0(v0: f64):
; nextln: fcmp d0, d0 ; nextln: fcmp d0, d0
; nextln: fcsel d0, d1, d2, ne ; nextln: fcsel d0, d1, d2, ne
; nextln: fcvtzu w0, d0 ; nextln: fcvtzu w0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -824,7 +772,6 @@ block0(v0: f64):
; nextln: fcmp d0, d0 ; nextln: fcmp d0, d0
; nextln: fcsel d0, d2, d1, ne ; nextln: fcsel d0, d2, d1, ne
; nextln: fcvtzs w0, d0 ; nextln: fcvtzs w0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -844,7 +791,6 @@ block0(v0: f64):
; nextln: fcmp d0, d0 ; nextln: fcmp d0, d0
; nextln: fcsel d0, d1, d2, ne ; nextln: fcsel d0, d1, d2, ne
; nextln: fcvtzu x0, d0 ; nextln: fcvtzu x0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -866,6 +812,5 @@ block0(v0: f64):
; nextln: fcmp d0, d0 ; nextln: fcmp d0, d0
; nextln: fcsel d0, d2, d1, ne ; nextln: fcsel d0, d2, d1, ne
; nextln: fcvtzs x0, d0 ; nextln: fcvtzs x0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
set enable_heap_access_spectre_mitigation=true set enable_heap_access_spectre_mitigation=true
target aarch64 target aarch64
@@ -24,7 +25,6 @@ block0(v0: i64, v1: i32):
; nextln: subs wzr, w1, w2 ; nextln: subs wzr, w1, w2
; nextln: movz x1, #0 ; nextln: movz x1, #0
; nextln: csel x0, x1, x0, hi ; nextln: csel x0, x1, x0, hi
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
; check: Block 2: ; check: Block 2:
@@ -49,7 +49,6 @@ block0(v0: i64, v1: i32):
; nextln: subs wzr, w1, #65536 ; nextln: subs wzr, w1, #65536
; nextln: movz x1, #0 ; nextln: movz x1, #0
; nextln: csel x0, x1, x0, hi ; nextln: csel x0, x1, x0, hi
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
; check: Block 2: ; check: Block 2:
@@ -2,6 +2,7 @@
; would result in an out-of-bounds panic. (#2147) ; would result in an out-of-bounds panic. (#2147)
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function u0:0() -> i8 system_v { function u0:0() -> i8 system_v {
@@ -17,7 +18,7 @@ block0:
; nextln: Entry block: 0 ; nextln: Entry block: 0
; nextln: Block 0: ; nextln: Block 0:
; nextln: (original IR block: block0) ; nextln: (original IR block: block0)
; nextln: (instruction range: 0 .. 11) ; nextln: (instruction range: 0 .. 10)
; nextln: Inst 0: stp fp, lr, [sp, #-16]! ; nextln: Inst 0: stp fp, lr, [sp, #-16]!
; nextln: Inst 1: mov fp, sp ; nextln: Inst 1: mov fp, sp
; nextln: Inst 2: movz x0, #56780 ; nextln: Inst 2: movz x0, #56780
@@ -26,7 +27,6 @@ block0:
; nextln: Inst 5: subs wzr, w0, w1, UXTH ; nextln: Inst 5: subs wzr, w0, w1, UXTH
; nextln: Inst 6: cset x0, ne ; nextln: Inst 6: cset x0, ne
; nextln: Inst 7: and w0, w0, #1 ; nextln: Inst 7: and w0, w0, #1
; nextln: Inst 8: mov sp, fp ; nextln: Inst 8: ldp fp, lr, [sp], #16
; nextln: Inst 9: ldp fp, lr, [sp], #16 ; nextln: Inst 9: ret
; nextln: Inst 10: ret
; nextln: }} ; nextln: }}
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f(i64) -> i64 { function %f(i64) -> i64 {
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
;; Test default (non-SpiderMonkey) ABI. ;; Test default (non-SpiderMonkey) ABI.
@@ -13,6 +14,5 @@ block1:
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #1 ; nextln: movz x0, #1
; nextln: movz x1, #2 ; nextln: movz x1, #2
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %add8(i8, i8) -> i8 { function %add8(i8, i8) -> i8 {
@@ -10,7 +11,6 @@ block0(v0: i8, v1: i8):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: add w0, w0, w1 ; nextln: add w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -23,7 +23,6 @@ block0(v0: i16, v1: i16):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: add w0, w0, w1 ; nextln: add w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -36,7 +35,6 @@ block0(v0: i32, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: add w0, w0, w1 ; nextln: add w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -50,7 +48,6 @@ block0(v0: i32, v1: i8):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: add w0, w0, w1, SXTB ; nextln: add w0, w0, w1, SXTB
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -64,6 +61,5 @@ block0(v0: i64, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: add x0, x0, x1, SXTW ; nextln: add x0, x0, x1, SXTW
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f(f64) -> f64 { function %f(f64) -> f64 {
@@ -76,24 +77,22 @@ block0(v0: f64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sub sp, sp, #128 ; nextln: str q8, [sp, #-16]!
; nextln: str q8, [sp] ; nextln: str q9, [sp, #-16]!
; nextln: str q9, [sp, #16] ; nextln: str q10, [sp, #-16]!
; nextln: str q10, [sp, #32] ; nextln: str q11, [sp, #-16]!
; nextln: str q11, [sp, #48] ; nextln: str q12, [sp, #-16]!
; nextln: str q12, [sp, #64] ; nextln: str q13, [sp, #-16]!
; nextln: str q13, [sp, #80] ; nextln: str q14, [sp, #-16]!
; nextln: str q14, [sp, #96] ; nextln: str q15, [sp, #-16]!
; nextln: str q15, [sp, #112]
; check: ldr q8, [sp] ; check: ldr q15, [sp], #16
; nextln: ldr q9, [sp, #16] ; nextln: ldr q14, [sp], #16
; nextln: ldr q10, [sp, #32] ; nextln: ldr q13, [sp], #16
; nextln: ldr q11, [sp, #48] ; nextln: ldr q12, [sp], #16
; nextln: ldr q12, [sp, #64] ; nextln: ldr q11, [sp], #16
; nextln: ldr q13, [sp, #80] ; nextln: ldr q10, [sp], #16
; nextln: ldr q14, [sp, #96] ; nextln: ldr q9, [sp], #16
; nextln: ldr q15, [sp, #112] ; nextln: ldr q8, [sp], #16
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f0(r64) -> r64 { function %f0(r64) -> r64 {
@@ -8,7 +9,6 @@ block0(v0: r64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -22,7 +22,6 @@ block0(v0: r64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: subs xzr, x0, #0 ; nextln: subs xzr, x0, #0
; nextln: cset x0, eq ; nextln: cset x0, eq
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -36,7 +35,6 @@ block0(v0: r64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: adds xzr, x0, #1 ; nextln: adds xzr, x0, #1
; nextln: cset x0, eq ; nextln: cset x0, eq
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -49,7 +47,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #0 ; nextln: movz x0, #0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -77,21 +74,20 @@ block3(v7: r64, v8: r64):
; check: Block 0: ; check: Block 0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sub sp, sp, #48 ; nextln: stp x19, x20, [sp, #-16]!
; nextln: stp x19, x20, [sp] ; nextln: sub sp, sp, #32
; nextln: virtual_sp_offset_adjust 16
; nextln: mov x19, x0 ; nextln: mov x19, x0
; nextln: mov x20, x1 ; nextln: mov x20, x1
; nextln: mov x0, x19 ; nextln: mov x0, x19
; nextln: ldr x1, 8 ; b 12 ; data ; nextln: ldr x1, 8 ; b 12 ; data
; nextln: stur x0, [sp, #24] ; nextln: stur x0, [sp, #8]
; nextln: stur x19, [sp, #32] ; nextln: stur x19, [sp, #16]
; nextln: stur x20, [sp, #40] ; nextln: stur x20, [sp, #24]
; nextln: (safepoint: slots [S0, S1, S2] ; nextln: (safepoint: slots [S0, S1, S2]
; nextln: blr x1 ; nextln: blr x1
; nextln: ldur x19, [sp, #32] ; nextln: ldur x19, [sp, #16]
; nextln: ldur x20, [sp, #40] ; nextln: ldur x20, [sp, #24]
; nextln: add x1, sp, #16 ; nextln: mov x1, sp
; nextln: str x19, [x1] ; nextln: str x19, [x1]
; nextln: and w0, w0, #1 ; nextln: and w0, w0, #1
; nextln: cbz x0, label1 ; b label3 ; nextln: cbz x0, label1 ; b label3
@@ -107,11 +103,11 @@ block3(v7: r64, v8: r64):
; nextln: mov x19, x20 ; nextln: mov x19, x20
; nextln: b label5 ; nextln: b label5
; check: Block 5: ; check: Block 5:
; check: add x1, sp, #16 ; check: mov x1, sp
; nextln: ldr x1, [x1] ; nextln: ldr x1, [x1]
; nextln: mov x2, x1 ; nextln: mov x2, x1
; nextln: mov x1, x19 ; nextln: mov x1, x19
; nextln: ldp x19, x20, [sp] ; nextln: add sp, sp, #32
; nextln: mov sp, fp ; nextln: ldp x19, x20, [sp], #16
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %uaddsat64(i64, i64) -> i64 { function %uaddsat64(i64, i64) -> i64 {
@@ -13,7 +14,6 @@ block0(v0: i64, v1: i64):
; nextln: fmov d1, x1 ; nextln: fmov d1, x1
; nextln: uqadd d0, d0, d1 ; nextln: uqadd d0, d0, d1
; nextln: mov x0, v0.d[0] ; nextln: mov x0, v0.d[0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -31,6 +31,5 @@ block0(v0: i8, v1: i8):
; nextln: fmov d1, x1 ; nextln: fmov d1, x1
; nextln: uqadd d0, d0, d1 ; nextln: uqadd d0, d0, d1
; nextln: mov x0, v0.d[0] ; nextln: mov x0, v0.d[0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f(i64) -> i64 { function %f(i64) -> i64 {
@@ -12,7 +13,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: add x0, x0, x0, LSL 3 ; nextln: add x0, x0, x0, LSL 3
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -26,6 +26,5 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: lsl w0, w0, #21 ; nextln: lsl w0, w0, #21
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
@@ -14,7 +15,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ror x0, x0, x1 ; nextln: ror x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -27,7 +27,6 @@ block0(v0: i32, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ror w0, w0, w1 ; nextln: ror w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -46,7 +45,6 @@ block0(v0: i16, v1: i16):
; nextln: lsr w1, w0, w1 ; nextln: lsr w1, w0, w1
; nextln: lsl w0, w0, w2 ; nextln: lsl w0, w0, w2
; nextln: orr w0, w0, w1 ; nextln: orr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -65,7 +63,6 @@ block0(v0: i8, v1: i8):
; nextln: lsr w1, w0, w1 ; nextln: lsr w1, w0, w1
; nextln: lsl w0, w0, w2 ; nextln: lsl w0, w0, w2
; nextln: orr w0, w0, w1 ; nextln: orr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -83,7 +80,6 @@ block0(v0: i64, v1: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sub x1, xzr, x1 ; nextln: sub x1, xzr, x1
; nextln: ror x0, x0, x1 ; nextln: ror x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -97,7 +93,6 @@ block0(v0: i32, v1: i32):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sub w1, wzr, w1 ; nextln: sub w1, wzr, w1
; nextln: ror w0, w0, w1 ; nextln: ror w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -117,7 +112,6 @@ block0(v0: i16, v1: i16):
; nextln: lsr w1, w0, w1 ; nextln: lsr w1, w0, w1
; nextln: lsl w0, w0, w2 ; nextln: lsl w0, w0, w2
; nextln: orr w0, w0, w1 ; nextln: orr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -137,7 +131,6 @@ block0(v0: i8, v1: i8):
; nextln: lsr w1, w0, w1 ; nextln: lsr w1, w0, w1
; nextln: lsl w0, w0, w2 ; nextln: lsl w0, w0, w2
; nextln: orr w0, w0, w1 ; nextln: orr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -154,7 +147,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: lsr x0, x0, x1 ; nextln: lsr x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -167,7 +159,6 @@ block0(v0: i32, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: lsr w0, w0, w1 ; nextln: lsr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -181,7 +172,6 @@ block0(v0: i16, v1: i16):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxth w0, w0 ; nextln: uxth w0, w0
; nextln: lsr w0, w0, w1 ; nextln: lsr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -195,7 +185,6 @@ block0(v0: i8, v1: i8):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxtb w0, w0 ; nextln: uxtb w0, w0
; nextln: lsr w0, w0, w1 ; nextln: lsr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -212,7 +201,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: lsl x0, x0, x1 ; nextln: lsl x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -225,7 +213,6 @@ block0(v0: i32, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: lsl w0, w0, w1 ; nextln: lsl w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -238,7 +225,6 @@ block0(v0: i16, v1: i16):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: lsl w0, w0, w1 ; nextln: lsl w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -251,7 +237,6 @@ block0(v0: i8, v1: i8):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: lsl w0, w0, w1 ; nextln: lsl w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -268,7 +253,6 @@ block0(v0: i64, v1: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: asr x0, x0, x1 ; nextln: asr x0, x0, x1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -281,7 +265,6 @@ block0(v0: i32, v1: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: asr w0, w0, w1 ; nextln: asr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -295,7 +278,6 @@ block0(v0: i16, v1: i16):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sxth w0, w0 ; nextln: sxth w0, w0
; nextln: asr w0, w0, w1 ; nextln: asr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -309,7 +291,6 @@ block0(v0: i8, v1: i8):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sxtb w0, w0 ; nextln: sxtb w0, w0
; nextln: asr w0, w0, w1 ; nextln: asr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -327,7 +308,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ror x0, x0, #17 ; nextln: ror x0, x0, #17
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -341,7 +321,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ror x0, x0, #47 ; nextln: ror x0, x0, #47
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -355,7 +334,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ror w0, w0, #15 ; nextln: ror w0, w0, #15
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -372,7 +350,6 @@ block0(v0: i16):
; nextln: lsr w1, w0, #6 ; nextln: lsr w1, w0, #6
; nextln: lsl w0, w0, #10 ; nextln: lsl w0, w0, #10
; nextln: orr w0, w0, w1 ; nextln: orr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -389,7 +366,6 @@ block0(v0: i8):
; nextln: lsr w1, w0, #5 ; nextln: lsr w1, w0, #5
; nextln: lsl w0, w0, #3 ; nextln: lsl w0, w0, #3
; nextln: orr w0, w0, w1 ; nextln: orr w0, w0, w1
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -403,7 +379,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: lsr x0, x0, #17 ; nextln: lsr x0, x0, #17
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -417,7 +392,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: asr x0, x0, #17 ; nextln: asr x0, x0, #17
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -431,6 +405,5 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: lsl x0, x0, #17 ; nextln: lsl x0, x0, #17
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret


@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f1() -> i64x2 { function %f1() -> i64x2 {
@@ -13,7 +14,6 @@ block0:
; nextln: movz x0, #1 ; nextln: movz x0, #1
; nextln: movk x0, #1, LSL #48 ; nextln: movk x0, #1, LSL #48
; nextln: dup v0.2d, x0 ; nextln: dup v0.2d, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -29,7 +29,6 @@ block0:
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #42679 ; nextln: movz x0, #42679
; nextln: dup v0.8h, w0 ; nextln: dup v0.8h, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -44,7 +43,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movi v0.16b, #255 ; nextln: movi v0.16b, #255
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -58,7 +56,6 @@ block0(v0: i32, v1: i8x16, v2: i8x16):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: subs wzr, w0, wzr ; nextln: subs wzr, w0, wzr
; nextln: vcsel v0.16b, v0.16b, v1.16b, ne (if-then-else diamond) ; nextln: vcsel v0.16b, v0.16b, v1.16b, ne (if-then-else diamond)
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -72,7 +69,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ld1r { v0.16b }, [x0] ; nextln: ld1r { v0.16b }, [x0]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -89,7 +85,6 @@ block0(v0: i64, v1: i64):
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ld1r { v0.16b }, [x0] ; nextln: ld1r { v0.16b }, [x0]
; nextln: ld1r { v1.16b }, [x1] ; nextln: ld1r { v1.16b }, [x1]
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -107,7 +102,6 @@ block0(v0: i64, v1: i64):
; nextln: ldrb w0, [x0] ; nextln: ldrb w0, [x0]
; nextln: ld1r { v0.16b }, [x1] ; nextln: ld1r { v0.16b }, [x1]
; nextln: dup v1.16b, w0 ; nextln: dup v1.16b, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -124,7 +118,6 @@ block0(v0: i64, v1: i64):
; nextln: ldrb w0, [x0] ; nextln: ldrb w0, [x0]
; nextln: dup v0.16b, w0 ; nextln: dup v0.16b, w0
; nextln: dup v1.16b, w0 ; nextln: dup v1.16b, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -139,7 +132,6 @@ block0:
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movi v0.2d, #18374687579166474495 ; nextln: movi v0.2d, #18374687579166474495
; nextln: fmov d0, d0 ; nextln: fmov d0, d0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -153,7 +145,6 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: mvni v0.4s, #15, MSL #16 ; nextln: mvni v0.4s, #15, MSL #16
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -167,6 +158,5 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: fmov v0.4s, #1.3125 ; nextln: fmov v0.4s, #1.3125
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret


@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f1() -> i64x2 { function %f1() -> i64x2 {
@@ -13,7 +14,6 @@ block0:
; nextln: movz x0, #1 ; nextln: movz x0, #1
; nextln: movk x0, #1, LSL #48 ; nextln: movk x0, #1, LSL #48
; nextln: fmov d0, x0 ; nextln: fmov d0, x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -28,6 +28,5 @@ block0:
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: movz x0, #42679 ; nextln: movz x0, #42679
; nextln: fmov s0, w0 ; nextln: fmov s0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret


@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %foo() { function %foo() {
@@ -13,7 +14,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -28,7 +28,6 @@ block0(v0: i64):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -46,7 +45,6 @@ block0(v0: i64):
; nextln: b.hs 8 ; udf ; nextln: b.hs 8 ; udf
; nextln: ldr x0 ; nextln: ldr x0
; nextln: blr x0 ; nextln: blr x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -69,7 +67,6 @@ block0(v0: i64):
; nextln: b.hs 8 ; udf ; nextln: b.hs 8 ; udf
; nextln: ldr x0 ; nextln: ldr x0
; nextln: blr x0 ; nextln: blr x0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -86,7 +83,7 @@ block0(v0: i64):
; nextln: subs xzr, sp, x16 ; nextln: subs xzr, sp, x16
; nextln: b.hs 8 ; udf ; nextln: b.hs 8 ; udf
; nextln: sub sp, sp, #176 ; nextln: sub sp, sp, #176
; nextln: mov sp, fp ; nextln: add sp, sp, #176
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -108,7 +105,9 @@ block0(v0: i64):
; nextln: movz w16, #6784 ; nextln: movz w16, #6784
; nextln: movk w16, #6, LSL #16 ; nextln: movk w16, #6, LSL #16
; nextln: sub sp, sp, x16, UXTX ; nextln: sub sp, sp, x16, UXTX
; nextln: mov sp, fp ; nextln: movz w16, #6784
; nextln: movk w16, #6, LSL #16
; nextln: add sp, sp, x16, UXTX
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -130,7 +129,7 @@ block0(v0: i64):
; nextln: subs xzr, sp, x16 ; nextln: subs xzr, sp, x16
; nextln: b.hs 8 ; udf ; nextln: b.hs 8 ; udf
; nextln: sub sp, sp, #32 ; nextln: sub sp, sp, #32
; nextln: mov sp, fp ; nextln: add sp, sp, #32
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -158,7 +157,9 @@ block0(v0: i64):
; nextln: movz w16, #6784 ; nextln: movz w16, #6784
; nextln: movk w16, #6, LSL #16 ; nextln: movk w16, #6, LSL #16
; nextln: sub sp, sp, x16, UXTX ; nextln: sub sp, sp, x16, UXTX
; nextln: mov sp, fp ; nextln: movz w16, #6784
; nextln: movk w16, #6, LSL #16
; nextln: add sp, sp, x16, UXTX
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -178,6 +179,6 @@ block0(v0: i64):
; nextln: subs xzr, sp, x16 ; nextln: subs xzr, sp, x16
; nextln: b.hs 8 ; udf ; nextln: b.hs 8 ; udf
; nextln: sub sp, sp, #32 ; nextln: sub sp, sp, #32
; nextln: mov sp, fp ; nextln: add sp, sp, #32
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret


@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %stack_addr_small() -> i64 { function %stack_addr_small() -> i64 {
@@ -13,7 +14,7 @@ block0:
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sub sp, sp, #16 ; nextln: sub sp, sp, #16
; nextln: mov x0, sp ; nextln: mov x0, sp
; nextln: mov sp, fp ; nextln: add sp, sp, #16
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -33,7 +34,9 @@ block0:
; nextln: movk w16, #1, LSL #16 ; nextln: movk w16, #1, LSL #16
; nextln: sub sp, sp, x16, UXTX ; nextln: sub sp, sp, x16, UXTX
; nextln: mov x0, sp ; nextln: mov x0, sp
; nextln: mov sp, fp ; nextln: movz w16, #34480
; nextln: movk w16, #1, LSL #16
; nextln: add sp, sp, x16, UXTX
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -53,7 +56,7 @@ block0:
; nextln: sub sp, sp, #16 ; nextln: sub sp, sp, #16
; nextln: mov x0, sp ; nextln: mov x0, sp
; nextln: ldr x0, [x0] ; nextln: ldr x0, [x0]
; nextln: mov sp, fp ; nextln: add sp, sp, #16
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -74,7 +77,9 @@ block0:
; nextln: sub sp, sp, x16, UXTX ; nextln: sub sp, sp, x16, UXTX
; nextln: mov x0, sp ; nextln: mov x0, sp
; nextln: ldr x0, [x0] ; nextln: ldr x0, [x0]
; nextln: mov sp, fp ; nextln: movz w16, #34480
; nextln: movk w16, #1, LSL #16
; nextln: add sp, sp, x16, UXTX
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -92,7 +97,7 @@ block0(v0: i64):
; nextln: sub sp, sp, #16 ; nextln: sub sp, sp, #16
; nextln: mov x1, sp ; nextln: mov x1, sp
; nextln: str x0, [x1] ; nextln: str x0, [x1]
; nextln: mov sp, fp ; nextln: add sp, sp, #16
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -113,7 +118,9 @@ block0(v0: i64):
; nextln: sub sp, sp, x16, UXTX ; nextln: sub sp, sp, x16, UXTX
; nextln: mov x1, sp ; nextln: mov x1, sp
; nextln: str x0, [x1] ; nextln: str x0, [x1]
; nextln: mov sp, fp ; nextln: movz w16, #34480
; nextln: movk w16, #1, LSL #16
; nextln: add sp, sp, x16, UXTX
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret


@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f() -> i64 { function %f() -> i64 {
@@ -12,6 +13,5 @@ block0:
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: ldr x0, 8 ; b 12 ; data ; nextln: ldr x0, 8 ; b 12 ; data
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret


@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f() { function %f() {


@@ -1,4 +1,5 @@
test compile test compile
set unwind_info=false
target aarch64 target aarch64
function %f_u_8_64(i8) -> i64 { function %f_u_8_64(i8) -> i64 {
@@ -10,7 +11,6 @@ block0(v0: i8):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxtb w0, w0 ; nextln: uxtb w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -23,7 +23,6 @@ block0(v0: i8):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxtb w0, w0 ; nextln: uxtb w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -36,7 +35,6 @@ block0(v0: i8):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxtb w0, w0 ; nextln: uxtb w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -49,7 +47,6 @@ block0(v0: i8):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sxtb x0, w0 ; nextln: sxtb x0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -62,7 +59,6 @@ block0(v0: i8):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sxtb w0, w0 ; nextln: sxtb w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -75,7 +71,6 @@ block0(v0: i8):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sxtb w0, w0 ; nextln: sxtb w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -88,7 +83,6 @@ block0(v0: i16):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxth w0, w0 ; nextln: uxth w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -101,7 +95,6 @@ block0(v0: i16):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: uxth w0, w0 ; nextln: uxth w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -114,7 +107,6 @@ block0(v0: i16):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sxth x0, w0 ; nextln: sxth x0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -127,7 +119,6 @@ block0(v0: i16):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sxth w0, w0 ; nextln: sxth w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -140,7 +131,6 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: mov w0, w0 ; nextln: mov w0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret
@@ -153,6 +143,5 @@ block0(v0: i32):
; check: stp fp, lr, [sp, #-16]! ; check: stp fp, lr, [sp, #-16]!
; nextln: mov fp, sp ; nextln: mov fp, sp
; nextln: sxtw x0, w0 ; nextln: sxtw x0, w0
; nextln: mov sp, fp
; nextln: ldp fp, lr, [sp], #16 ; nextln: ldp fp, lr, [sp], #16
; nextln: ret ; nextln: ret


@@ -1,5 +1,6 @@
test compile test compile
set enable_llvm_abi_extensions=true set enable_llvm_abi_extensions=true
set unwind_info=true
target x86_64 target x86_64
feature "experimental_x64" feature "experimental_x64"
@@ -9,7 +10,9 @@ block0(v0: i64, v1: i64, v2: i64, v3: i64):
} }
; check: pushq %rbp ; check: pushq %rbp
; nextln: unwind PushFrameRegs { offset_upward_to_caller_sp: 16 }
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: unwind DefineNewFrame { offset_upward_to_caller_sp: 16, offset_downward_to_clobbers: 0 }
; nextln: movq %rcx, %rax ; nextln: movq %rcx, %rax
; nextln: movq %rbp, %rsp ; nextln: movq %rbp, %rsp
; nextln: popq %rbp ; nextln: popq %rbp
@@ -21,7 +24,9 @@ block0(v0: i64, v1: i64, v2: i64, v3: i64):
} }
; check: pushq %rbp ; check: pushq %rbp
; nextln: unwind PushFrameRegs { offset_upward_to_caller_sp: 16 }
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: unwind DefineNewFrame { offset_upward_to_caller_sp: 16, offset_downward_to_clobbers: 0 }
; nextln: movq %rdx, %rax ; nextln: movq %rdx, %rax
; nextln: movq %rbp, %rsp ; nextln: movq %rbp, %rsp
; nextln: popq %rbp ; nextln: popq %rbp
@@ -33,7 +38,9 @@ block0(v0: i64, v1: i64, v2: i64, v3: i64):
} }
; check: pushq %rbp ; check: pushq %rbp
; nextln: unwind PushFrameRegs { offset_upward_to_caller_sp: 16 }
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: unwind DefineNewFrame { offset_upward_to_caller_sp: 16, offset_downward_to_clobbers: 0 }
; nextln: movq %r8, %rax ; nextln: movq %r8, %rax
; nextln: movq %rbp, %rsp ; nextln: movq %rbp, %rsp
; nextln: popq %rbp ; nextln: popq %rbp
@@ -45,7 +52,9 @@ block0(v0: i64, v1: i64, v2: i64, v3: i64):
} }
; check: pushq %rbp ; check: pushq %rbp
; nextln: unwind PushFrameRegs { offset_upward_to_caller_sp: 16 }
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: unwind DefineNewFrame { offset_upward_to_caller_sp: 16, offset_downward_to_clobbers: 0 }
; nextln: movq %r9, %rax ; nextln: movq %r9, %rax
; nextln: movq %rbp, %rsp ; nextln: movq %rbp, %rsp
; nextln: popq %rbp ; nextln: popq %rbp
@@ -57,7 +66,9 @@ block0(v0: i64, v1: i64, v2: f64, v3: i64):
} }
; check: pushq %rbp ; check: pushq %rbp
; nextln: unwind PushFrameRegs { offset_upward_to_caller_sp: 16 }
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: unwind DefineNewFrame { offset_upward_to_caller_sp: 16, offset_downward_to_clobbers: 0 }
; nextln: movaps %xmm2, %xmm0 ; nextln: movaps %xmm2, %xmm0
; nextln: movq %rbp, %rsp ; nextln: movq %rbp, %rsp
; nextln: popq %rbp ; nextln: popq %rbp
@@ -69,7 +80,9 @@ block0(v0: i64, v1: i64, v2: f64, v3: i64):
} }
; check: pushq %rbp ; check: pushq %rbp
; nextln: unwind PushFrameRegs { offset_upward_to_caller_sp: 16 }
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: unwind DefineNewFrame { offset_upward_to_caller_sp: 16, offset_downward_to_clobbers: 0 }
; nextln: movq %r9, %rax ; nextln: movq %r9, %rax
; nextln: movq %rbp, %rsp ; nextln: movq %rbp, %rsp
; nextln: popq %rbp ; nextln: popq %rbp
@@ -91,10 +104,12 @@ block0(v0: i64, v1: i64, v2: i64, v3: i64, v4: i64, v5: i64):
;; TODO(#2704): fix regalloc's register priority ordering! ;; TODO(#2704): fix regalloc's register priority ordering!
; check: pushq %rbp ; check: pushq %rbp
; nextln: unwind PushFrameRegs { offset_upward_to_caller_sp: 16 }
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: unwind DefineNewFrame { offset_upward_to_caller_sp: 16, offset_downward_to_clobbers: 16 }
; nextln: subq $$16, %rsp ; nextln: subq $$16, %rsp
; nextln: movq %rsi, 0(%rsp) ; nextln: movq %rsi, 0(%rsp)
; nextln: virtual_sp_offset_adjust 16 ; nextln: unwind SaveReg { clobber_offset: 0, reg: r16J }
; nextln: movq 48(%rbp), %rsi ; nextln: movq 48(%rbp), %rsi
; nextln: movq 56(%rbp), %rsi ; nextln: movq 56(%rbp), %rsi
; nextln: movq %rsi, %rax ; nextln: movq %rsi, %rax
@@ -114,11 +129,14 @@ block0(v0: i128, v1: i64, v2: i128, v3: i128):
;; stack slot. ;; stack slot.
; check: pushq %rbp ; check: pushq %rbp
; nextln: unwind PushFrameRegs { offset_upward_to_caller_sp: 16 }
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: unwind DefineNewFrame { offset_upward_to_caller_sp: 16, offset_downward_to_clobbers: 16 }
; nextln: subq $$16, %rsp ; nextln: subq $$16, %rsp
; nextln: movq %rsi, 0(%rsp) ; nextln: movq %rsi, 0(%rsp)
; nextln: unwind SaveReg { clobber_offset: 0, reg: r16J }
; nextln: movq %rdi, 8(%rsp) ; nextln: movq %rdi, 8(%rsp)
; nextln: virtual_sp_offset_adjust 16 ; nextln: unwind SaveReg { clobber_offset: 8, reg: r17J }
; nextln: movq 48(%rbp), %rsi ; nextln: movq 48(%rbp), %rsi
; nextln: movq 56(%rbp), %rsi ; nextln: movq 56(%rbp), %rsi
; nextln: movq 64(%rbp), %rdi ; nextln: movq 64(%rbp), %rdi
@@ -142,10 +160,12 @@ block0(v0: i64):
} }
; check: pushq %rbp ; check: pushq %rbp
; nextln: unwind PushFrameRegs { offset_upward_to_caller_sp: 16 }
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: unwind DefineNewFrame { offset_upward_to_caller_sp: 16, offset_downward_to_clobbers: 16 }
; nextln: subq $$16, %rsp ; nextln: subq $$16, %rsp
; nextln: movq %rsi, 0(%rsp) ; nextln: movq %rsi, 0(%rsp)
; nextln: virtual_sp_offset_adjust 16 ; nextln: unwind SaveReg { clobber_offset: 0, reg: r16J }
; nextln: movq %rcx, %rsi ; nextln: movq %rcx, %rsi
; nextln: cvtsi2sd %rsi, %xmm3 ; nextln: cvtsi2sd %rsi, %xmm3
; nextln: subq $$48, %rsp ; nextln: subq $$48, %rsp
@@ -216,19 +236,30 @@ block0(v0: i64):
} }
; check: pushq %rbp ; check: pushq %rbp
; nextln: unwind PushFrameRegs { offset_upward_to_caller_sp: 16 }
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: unwind DefineNewFrame { offset_upward_to_caller_sp: 16, offset_downward_to_clobbers: 160 }
; nextln: subq $$208, %rsp ; nextln: subq $$208, %rsp
; nextln: movdqu %xmm6, 0(%rsp) ; nextln: movdqu %xmm6, 48(%rsp)
; nextln: movdqu %xmm7, 16(%rsp) ; nextln: unwind SaveReg { clobber_offset: 0, reg: r6V }
; nextln: movdqu %xmm8, 32(%rsp) ; nextln: movdqu %xmm7, 64(%rsp)
; nextln: movdqu %xmm9, 48(%rsp) ; nextln: unwind SaveReg { clobber_offset: 16, reg: r7V }
; nextln: movdqu %xmm10, 64(%rsp) ; nextln: movdqu %xmm8, 80(%rsp)
; nextln: movdqu %xmm11, 80(%rsp) ; nextln: unwind SaveReg { clobber_offset: 32, reg: r8V }
; nextln: movdqu %xmm12, 96(%rsp) ; nextln: movdqu %xmm9, 96(%rsp)
; nextln: movdqu %xmm13, 112(%rsp) ; nextln: unwind SaveReg { clobber_offset: 48, reg: r9V }
; nextln: movdqu %xmm14, 128(%rsp) ; nextln: movdqu %xmm10, 112(%rsp)
; nextln: movdqu %xmm15, 144(%rsp) ; nextln: unwind SaveReg { clobber_offset: 64, reg: r10V }
; nextln: virtual_sp_offset_adjust 160 ; nextln: movdqu %xmm11, 128(%rsp)
; nextln: unwind SaveReg { clobber_offset: 80, reg: r11V }
; nextln: movdqu %xmm12, 144(%rsp)
; nextln: unwind SaveReg { clobber_offset: 96, reg: r12V }
; nextln: movdqu %xmm13, 160(%rsp)
; nextln: unwind SaveReg { clobber_offset: 112, reg: r13V }
; nextln: movdqu %xmm14, 176(%rsp)
; nextln: unwind SaveReg { clobber_offset: 128, reg: r14V }
; nextln: movdqu %xmm15, 192(%rsp)
; nextln: unwind SaveReg { clobber_offset: 144, reg: r15V }
; nextln: movsd 0(%rcx), %xmm0 ; nextln: movsd 0(%rcx), %xmm0
; nextln: movsd %xmm0, rsp(16 + virtual offset) ; nextln: movsd %xmm0, rsp(16 + virtual offset)
; nextln: movsd 8(%rcx), %xmm1 ; nextln: movsd 8(%rcx), %xmm1
@@ -282,17 +313,17 @@ block0(v0: i64):
; nextln: addsd %xmm8, %xmm2 ; nextln: addsd %xmm8, %xmm2
; nextln: addsd %xmm3, %xmm2 ; nextln: addsd %xmm3, %xmm2
; nextln: movaps %xmm2, %xmm0 ; nextln: movaps %xmm2, %xmm0
; nextln: movdqu 0(%rsp), %xmm6 ; nextln: movdqu 48(%rsp), %xmm6
; nextln: movdqu 16(%rsp), %xmm7 ; nextln: movdqu 64(%rsp), %xmm7
; nextln: movdqu 32(%rsp), %xmm8 ; nextln: movdqu 80(%rsp), %xmm8
; nextln: movdqu 48(%rsp), %xmm9 ; nextln: movdqu 96(%rsp), %xmm9
; nextln: movdqu 64(%rsp), %xmm10 ; nextln: movdqu 112(%rsp), %xmm10
; nextln: movdqu 80(%rsp), %xmm11 ; nextln: movdqu 128(%rsp), %xmm11
; nextln: movdqu 96(%rsp), %xmm12 ; nextln: movdqu 144(%rsp), %xmm12
; nextln: movdqu 112(%rsp), %xmm13 ; nextln: movdqu 160(%rsp), %xmm13
; nextln: movdqu 128(%rsp), %xmm14 ; nextln: movdqu 176(%rsp), %xmm14
; nextln: movdqu 144(%rsp), %xmm15 ; nextln: movdqu 192(%rsp), %xmm15
; nextln: addq $$160, %rsp ; nextln: addq $$208, %rsp
; nextln: movq %rbp, %rsp ; nextln: movq %rbp, %rsp
; nextln: popq %rbp ; nextln: popq %rbp
; nextln: ret ; nextln: ret
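A note on the numbers in the fastcall test above (not part of the diff): with the clobber-save area placed at the top of the allocated frame, each `unwind SaveReg { clobber_offset, .. }` directive's offset (measured from the bottom of the clobber area) corresponds to an RSP-relative store at `frame_size - clobber_area_size + clobber_offset`. A small sketch of that relationship, with values taken from the expectations above (the helper name is made up for illustration):

```rust
// Illustrative only: relate the `clobber_offset` values in the unwind
// directives above to the RSP-relative store offsets, assuming the clobber
// area occupies the topmost `clobber_area_size` bytes of the allocated frame.
fn rsp_store_offset(frame_size: u32, clobber_area_size: u32, clobber_offset: u32) -> u32 {
    frame_size - clobber_area_size + clobber_offset
}

fn main() {
    // `subq $208, %rsp` with `offset_downward_to_clobbers: 160`:
    assert_eq!(rsp_store_offset(208, 160, 0), 48);    // movdqu %xmm6, 48(%rsp)
    assert_eq!(rsp_store_offset(208, 160, 16), 64);   // movdqu %xmm7, 64(%rsp)
    assert_eq!(rsp_store_offset(208, 160, 144), 192); // movdqu %xmm15, 192(%rsp)
}
```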


@@ -744,7 +744,6 @@ block0(v0: i128, v1: i128, v2: i64, v3: i128, v4: i128, v5: i128):
; nextln: subq $$16, %rsp ; nextln: subq $$16, %rsp
; nextln: movq %r12, 0(%rsp) ; nextln: movq %r12, 0(%rsp)
; nextln: movq %r13, 8(%rsp) ; nextln: movq %r13, 8(%rsp)
; nextln: virtual_sp_offset_adjust 16
; nextln: movq 16(%rbp), %r10 ; nextln: movq 16(%rbp), %r10
; nextln: movq 24(%rbp), %r12 ; nextln: movq 24(%rbp), %r12
; nextln: movq 32(%rbp), %r11 ; nextln: movq 32(%rbp), %r11
@@ -804,7 +803,6 @@ block0(v0: i128, v1: i128):
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: subq $$16, %rsp ; nextln: subq $$16, %rsp
; nextln: movq %r12, 0(%rsp) ; nextln: movq %r12, 0(%rsp)
; nextln: virtual_sp_offset_adjust 16
; nextln: movq %r8, %r12 ; nextln: movq %r8, %r12
; nextln: subq $$16, %rsp ; nextln: subq $$16, %rsp
; nextln: virtual_sp_offset_adjust 16 ; nextln: virtual_sp_offset_adjust 16


@@ -72,7 +72,6 @@ block0(v0: i64, v1: i64):
; nextln: movq %rsp, %rbp ; nextln: movq %rsp, %rbp
; nextln: subq $$16, %rsp ; nextln: subq $$16, %rsp
; nextln: movq %r12, 0(%rsp) ; nextln: movq %r12, 0(%rsp)
; nextln: virtual_sp_offset_adjust 16
; nextln: movq %rdi, %r12 ; nextln: movq %rdi, %r12
; nextln: subq $$64, %rsp ; nextln: subq $$64, %rsp
; nextln: virtual_sp_offset_adjust 64 ; nextln: virtual_sp_offset_adjust 64
@@ -121,7 +120,6 @@ block0(v0: i64, v1: i64, v2: i64):
; nextln: subq $$16, %rsp ; nextln: subq $$16, %rsp
; nextln: movq %r12, 0(%rsp) ; nextln: movq %r12, 0(%rsp)
; nextln: movq %r13, 8(%rsp) ; nextln: movq %r13, 8(%rsp)
; nextln: virtual_sp_offset_adjust 16
; nextln: movq %rdi, %r12 ; nextln: movq %rdi, %r12
; nextln: movq %rdx, %r13 ; nextln: movq %rdx, %r13
; nextln: subq $$192, %rsp ; nextln: subq $$192, %rsp


@@ -23,7 +23,7 @@ use cranelift_codegen::ir::{
}; };
use cranelift_codegen::isa::{self, CallConv, Encoding, RegUnit, TargetIsa}; use cranelift_codegen::isa::{self, CallConv, Encoding, RegUnit, TargetIsa};
use cranelift_codegen::packed_option::ReservedValue; use cranelift_codegen::packed_option::ReservedValue;
use cranelift_codegen::{settings, timing}; use cranelift_codegen::{settings, settings::Configurable, timing};
use smallvec::SmallVec; use smallvec::SmallVec;
use std::mem; use std::mem;
use std::str::FromStr; use std::str::FromStr;
@@ -50,6 +50,8 @@ pub struct ParseOptions<'a> {
pub target: Option<&'a str>, pub target: Option<&'a str>,
/// Default calling convention used when none is specified for a parsed function. /// Default calling convention used when none is specified for a parsed function.
pub default_calling_convention: CallConv, pub default_calling_convention: CallConv,
/// Default for unwind-info setting (enabled or disabled).
pub unwind_info: bool,
} }
impl Default for ParseOptions<'_> { impl Default for ParseOptions<'_> {
@@ -58,6 +60,7 @@ impl Default for ParseOptions<'_> {
passes: None, passes: None,
target: None, target: None,
default_calling_convention: CallConv::Fast, default_calling_convention: CallConv::Fast,
unwind_info: false,
} }
} }
} }
@@ -81,12 +84,12 @@ pub fn parse_test<'a>(text: &'a str, options: ParseOptions<'a>) -> ParseResult<T
Some(pass_vec) => { Some(pass_vec) => {
parser.parse_test_commands(); parser.parse_test_commands();
commands = parser.parse_cmdline_passes(pass_vec); commands = parser.parse_cmdline_passes(pass_vec);
parser.parse_target_specs()?; parser.parse_target_specs(&options)?;
isa_spec = parser.parse_cmdline_target(options.target)?; isa_spec = parser.parse_cmdline_target(options.target)?;
} }
None => { None => {
commands = parser.parse_test_commands(); commands = parser.parse_test_commands();
isa_spec = parser.parse_target_specs()?; isa_spec = parser.parse_target_specs(&options)?;
} }
}; };
let features = parser.parse_cranelift_features()?; let features = parser.parse_cranelift_features()?;
@@ -1189,7 +1192,7 @@ impl<'a> Parser<'a> {
/// ///
/// Accept a mix of `target` and `set` command lines. The `set` commands are cumulative. /// Accept a mix of `target` and `set` command lines. The `set` commands are cumulative.
/// ///
fn parse_target_specs(&mut self) -> ParseResult<isaspec::IsaSpec> { fn parse_target_specs(&mut self, options: &ParseOptions) -> ParseResult<isaspec::IsaSpec> {
// Were there any `target` commands? // Were there any `target` commands?
let mut seen_target = false; let mut seen_target = false;
// Location of last `set` command since the last `target`. // Location of last `set` command since the last `target`.
@@ -1198,6 +1201,11 @@ impl<'a> Parser<'a> {
let mut targets = Vec::new(); let mut targets = Vec::new();
let mut flag_builder = settings::builder(); let mut flag_builder = settings::builder();
let unwind_info = if options.unwind_info { "true" } else { "false" };
flag_builder
.set("unwind_info", unwind_info)
.expect("unwind_info option should be present");
while let Some(Token::Identifier(command)) = self.token() { while let Some(Token::Identifier(command)) = self.token() {
match command { match command {
"set" => { "set" => {