Rework/simplify unwind infrastructure and implement Windows unwind.

Our previous implementation of unwind infrastructure was somewhat
complex and brittle: it parsed generated instructions in order to
reverse-engineer unwind info from prologues. It also relied on some
fragile linkage to communicate instruction-layout information that VCode
was not designed to provide.

A much simpler, more reliable, and easier-to-reason-about approach is to
embed unwind directives as pseudo-instructions in the prologue as we
generate it. That way, we can say what we mean and just emit it
directly.
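
Concretely, the directives are carried by a new `Inst::Unwind` pseudo-instruction wrapping an `UnwindInst` payload. A sketch of its shape, with variant and field names as they appear in the diff below (the doc comments are my own gloss):

/// Unwind directive payload (names as used in the diff below).
pub enum UnwindInst {
    /// Emitted after saving the FP/LR (or RBP/return-address) pair:
    /// records the distance from the current SP up to the caller's SP.
    PushFrameRegs { offset_upward_to_caller_sp: u32 },
    /// Establishes the unwind frame at the clobber-save area, just
    /// below the saved frame registers.
    DefineNewFrame {
        offset_downward_to_clobbers: u32,
        offset_upward_to_caller_sp: u32,
    },
    /// Records that a clobbered register is saved at the given offset
    /// above the bottom of the clobber-save area.
    SaveReg { clobber_offset: u32, reg: RealReg },
}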

The usual reasoning that leads to the reverse-engineering approach is
that metadata is hard to keep in sync across optimization passes; but
here, (i) prologues are generated at the very end of the pipeline, and
(ii) if we ever do a post-prologue-gen optimization, we can treat unwind
directives as black boxes with unknown side-effects, just as we do for
some other pseudo-instructions today.
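
Concretely, the pseudo-instruction is inert as far as the rest of the backend is concerned: it reports no register uses or defs, and at emission time it produces no machine code, only recording its directive in the `MachBuffer` (this is the emit arm added below):

&Inst::Unwind { ref inst } => {
    // No machine bytes; just attach the directive to the buffer's
    // unwind-info list at the current code offset.
    sink.add_unwind(inst.clone());
}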

It turns out that it was easier to just build this for both x64 and
aarch64 (since they share a factored-out ABI implementation), and wire
up the platform-specific unwind-info generation for Windows and SystemV.
Now we have simpler unwind on all platforms and we can delete the old
unwind infra as soon as we remove the old backend.

There were a few consequences to supporting Fastcall unwind in
particular that led to a refactor of the common ABI. Windows only
supports naming clobbered-register save locations within 240 bytes of
the frame-pointer register, whatever one chooses that to be (RSP or
RBP). We had previously saved clobbers below the fixed frame (and below
nominal-SP). The 240-byte range has to include the old RBP too, so we're
forced to place clobbers at the top of the frame, just below saved
RBP/RIP. This is fine; we always keep a frame pointer anyway because we
use it to refer to stack args. It does mean that offsets of fixed-frame
slots (spillslots, stackslots) from RBP are no longer known before we do
regalloc, so if we ever want to index these off of RBP rather than
nominal-SP because we add support for `alloca` (dynamic frame growth),
then we'll need a "nominal-BP" mode that is resolved after regalloc and
clobber-save code is generated. I added a comment to this effect in
`abi_impl.rs`.
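
For reference, a rough sketch of the resulting frame layout as described above (x64 naming; aarch64 is analogous with FP/LR in place of RBP/RIP):

  caller SP -> +------------------------+
               | stack args             |
               +------------------------+
               | return address (RIP)   |
               | saved RBP              | <- RBP (frame pointer)
               +------------------------+
               | clobber saves          | <- must be within 240 bytes
               +------------------------+    of RBP for Fastcall unwind
               | spillslots, stackslots |
               | (fixed frame)          |
               +------------------------+ <- SP (nominal-SP)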

The above refactor touched both x64 and aarch64 because of shared code.
This had a further effect in that the old aarch64 prologue generation
subtracted from `sp` once to allocate space, then used stores to `[sp,
offset]` to save clobbers. Unfortunately the offset only has 7-bit
range, so if there are enough clobbered registers (and there can be --
aarch64 has 768 bytes of registers; at least one unit test hits this)
the stores/loads will be out of range. I really don't want to synthesize
large-offset sequences here; better to go back to the simpler
pre-index/post-index `stp r1, r2, [sp, #-16]!` form that works just like
a "push". It's likely not much worse microarchitecturally (dependence
chain on SP, but oh well) and it actually saves an instruction if
there's no other frame to allocate. As a further advantage, it's much
simpler to understand; simpler is usually better.
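
To make the tradeoff concrete, here are the two schemes side by side (registers and sizes illustrative only):

; old: subtract once, then fixed-offset pair stores; each offset must
; fit in the SImm7Scaled range (-512..+504 bytes for 64-bit pairs)
sub sp, sp, #total_frame_size
stp x19, x20, [sp, #0]
stp x21, x22, [sp, #16]

; new: pre-indexed "push"-style stores with SP writeback, so no
; per-slot offset can go out of range; the fixed frame is allocated
; afterward
stp x19, x20, [sp, #-16]!
stp x21, x22, [sp, #-16]!
sub sp, sp, #fixed_frame_size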

This PR adds the new backend on Windows to CI as well.
Chris Fallin
2021-03-06 15:43:09 -08:00
parent 05688aa8f4
commit 2d5db92a9e
63 changed files with 905 additions and 1088 deletions


@@ -8,6 +8,7 @@ use crate::ir::Opcode;
use crate::ir::{ExternalName, LibCall};
use crate::isa;
use crate::isa::aarch64::{inst::EmitState, inst::*};
use crate::isa::unwind::UnwindInst;
use crate::machinst::*;
use crate::settings;
use crate::{CodegenError, CodegenResult};
@@ -472,7 +473,7 @@ impl ABIMachineSpec for AArch64MachineDeps {
}
}
fn gen_prologue_frame_setup() -> SmallInstVec<Inst> {
fn gen_prologue_frame_setup(flags: &settings::Flags) -> SmallInstVec<Inst> {
let mut insts = SmallVec::new();
// stp fp (x29), lr (x30), [sp, #-16]!
insts.push(Inst::StoreP64 {
@@ -484,6 +485,15 @@ impl ABIMachineSpec for AArch64MachineDeps {
),
flags: MemFlags::trusted(),
});
if flags.unwind_info() {
insts.push(Inst::Unwind {
inst: UnwindInst::PushFrameRegs {
offset_upward_to_caller_sp: 16, // FP, LR
},
});
}
// mov fp (x29), sp. This uses the ADDI rd, rs, 0 form of `MOV` because
// the usual encoding (`ORR`) does not work with SP.
insts.push(Inst::AluRRImm12 {
@@ -498,20 +508,14 @@ impl ABIMachineSpec for AArch64MachineDeps {
insts
}
fn gen_epilogue_frame_restore() -> SmallInstVec<Inst> {
fn gen_epilogue_frame_restore(_: &settings::Flags) -> SmallInstVec<Inst> {
let mut insts = SmallVec::new();
// MOV (alias of ORR) interprets x31 as XZR, so use an ADD here.
// MOV to SP is an alias of ADD.
insts.push(Inst::AluRRImm12 {
alu_op: ALUOp::Add64,
rd: writable_stack_reg(),
rn: fp_reg(),
imm12: Imm12 {
bits: 0,
shift12: false,
},
});
// N.B.: sp is already adjusted to the appropriate place by the
// clobber-restore code (which also frees the fixed frame). Hence, there
// is no need for the usual `mov sp, fp` here.
// `ldp fp, lr, [sp], #16`
insts.push(Inst::LoadP64 {
rt: writable_fp_reg(),
rt2: writable_link_reg(),
@@ -521,7 +525,6 @@ impl ABIMachineSpec for AArch64MachineDeps {
),
flags: MemFlags::trusted(),
});
insts
}
@@ -535,21 +538,43 @@ impl ABIMachineSpec for AArch64MachineDeps {
// nominal SP offset; abi_impl generic code will do that.
fn gen_clobber_save(
call_conv: isa::CallConv,
_: &settings::Flags,
flags: &settings::Flags,
clobbers: &Set<Writable<RealReg>>,
fixed_frame_storage_size: u32,
_outgoing_args_size: u32,
) -> (u64, SmallVec<[Inst; 16]>) {
let mut insts = SmallVec::new();
let (clobbered_int, clobbered_vec) = get_regs_saved_in_prologue(call_conv, clobbers);
let (int_save_bytes, vec_save_bytes) = saved_reg_stack_size(&clobbered_int, &clobbered_vec);
let total_save_bytes = (vec_save_bytes + int_save_bytes) as i32;
insts.extend(Self::gen_sp_reg_adjust(
-(total_save_bytes + fixed_frame_storage_size as i32),
));
let total_save_bytes = int_save_bytes + vec_save_bytes;
let clobber_size = total_save_bytes as i32;
for (i, reg_pair) in clobbered_int.chunks(2).enumerate() {
if flags.unwind_info() {
// The *unwind* frame (but not the actual frame) starts at the
// clobbers, just below the saved FP/LR pair.
insts.push(Inst::Unwind {
inst: UnwindInst::DefineNewFrame {
offset_downward_to_clobbers: clobber_size as u32,
offset_upward_to_caller_sp: 16, // FP, LR
},
});
}
// We use pre-indexed addressing modes here, rather than the possibly
// more efficient "subtract sp once then use fixed offsets" scheme,
// because (i) we cannot necessarily guarantee that the offset of a
// clobber-save slot will be within a SImm7Scaled (+504-byte) offset
// range of the whole frame including other slots, (ii) it is more
// complex to conditionally generate a two-stage SP adjustment (clobbers
// then fixed frame) otherwise, and (iii) generally we just want to
// maintain simplicity here for maintainability. Because clobbers are at
// the top of the frame, just below FP, all that is necessary is to use
// the pre-indexed "push" `[sp, #-16]!` addressing mode.
//
// `clobber_offset` tracks offset above start-of-clobbers for
// unwind-info purposes.
let mut clobber_offset = clobber_size as u32;
for reg_pair in clobbered_int.chunks(2) {
let (r1, r2) = if reg_pair.len() == 2 {
// .to_reg().to_reg(): Writable<RealReg> --> RealReg --> Reg
(reg_pair[0].to_reg().to_reg(), reg_pair[1].to_reg().to_reg())
@@ -560,28 +585,56 @@ impl ABIMachineSpec for AArch64MachineDeps {
debug_assert!(r1.get_class() == RegClass::I64);
debug_assert!(r2.get_class() == RegClass::I64);
// stp r1, r2, [sp, #(i * #16)]
// stp r1, r2, [sp, #-16]!
insts.push(Inst::StoreP64 {
rt: r1,
rt2: r2,
mem: PairAMode::SignedOffset(
stack_reg(),
SImm7Scaled::maybe_from_i64((i * 16) as i64, types::I64).unwrap(),
mem: PairAMode::PreIndexed(
writable_stack_reg(),
SImm7Scaled::maybe_from_i64(-16, types::I64).unwrap(),
),
flags: MemFlags::trusted(),
});
if flags.unwind_info() {
clobber_offset -= 8;
if r2 != zero_reg() {
insts.push(Inst::Unwind {
inst: UnwindInst::SaveReg {
clobber_offset,
reg: r2.to_real_reg(),
},
});
}
clobber_offset -= 8;
insts.push(Inst::Unwind {
inst: UnwindInst::SaveReg {
clobber_offset,
reg: r1.to_real_reg(),
},
});
}
}
let vec_offset = int_save_bytes;
for (i, reg) in clobbered_vec.iter().enumerate() {
for reg in clobbered_vec.iter() {
insts.push(Inst::FpuStore128 {
rd: reg.to_reg().to_reg(),
mem: AMode::Unscaled(
stack_reg(),
SImm9::maybe_from_i64((vec_offset + (i * 16)) as i64).unwrap(),
),
mem: AMode::PreIndexed(writable_stack_reg(), SImm9::maybe_from_i64(-16).unwrap()),
flags: MemFlags::trusted(),
});
if flags.unwind_info() {
clobber_offset -= 16;
insts.push(Inst::Unwind {
inst: UnwindInst::SaveReg {
clobber_offset,
reg: reg.to_reg(),
},
});
}
}
// Allocate the fixed frame below the clobbers if necessary.
if fixed_frame_storage_size > 0 {
insts.extend(Self::gen_sp_reg_adjust(-(fixed_frame_storage_size as i32)));
}
(total_save_bytes as u64, insts)
@@ -591,14 +644,25 @@ impl ABIMachineSpec for AArch64MachineDeps {
call_conv: isa::CallConv,
flags: &settings::Flags,
clobbers: &Set<Writable<RealReg>>,
_fixed_frame_storage_size: u32,
_outgoing_args_size: u32,
fixed_frame_storage_size: u32,
) -> SmallVec<[Inst; 16]> {
let mut insts = SmallVec::new();
let (clobbered_int, clobbered_vec) = get_regs_saved_in_prologue(call_conv, clobbers);
let (int_save_bytes, vec_save_bytes) = saved_reg_stack_size(&clobbered_int, &clobbered_vec);
for (i, reg_pair) in clobbered_int.chunks(2).enumerate() {
// Free the fixed frame if necessary.
if fixed_frame_storage_size > 0 {
insts.extend(Self::gen_sp_reg_adjust(fixed_frame_storage_size as i32));
}
for reg in clobbered_vec.iter().rev() {
insts.push(Inst::FpuLoad128 {
rd: Writable::from_reg(reg.to_reg().to_reg()),
mem: AMode::PostIndexed(writable_stack_reg(), SImm9::maybe_from_i64(16).unwrap()),
flags: MemFlags::trusted(),
});
}
for reg_pair in clobbered_int.chunks(2).rev() {
let (r1, r2) = if reg_pair.len() == 2 {
(
reg_pair[0].map(|r| r.to_reg()),
@@ -611,37 +675,18 @@ impl ABIMachineSpec for AArch64MachineDeps {
debug_assert!(r1.to_reg().get_class() == RegClass::I64);
debug_assert!(r2.to_reg().get_class() == RegClass::I64);
// ldp r1, r2, [sp, #(i * 16)]
// ldp r1, r2, [sp], #16
insts.push(Inst::LoadP64 {
rt: r1,
rt2: r2,
mem: PairAMode::SignedOffset(
stack_reg(),
SImm7Scaled::maybe_from_i64((i * 16) as i64, types::I64).unwrap(),
mem: PairAMode::PostIndexed(
writable_stack_reg(),
SImm7Scaled::maybe_from_i64(16, I64).unwrap(),
),
flags: MemFlags::trusted(),
});
}
for (i, reg) in clobbered_vec.iter().enumerate() {
insts.push(Inst::FpuLoad128 {
rd: Writable::from_reg(reg.to_reg().to_reg()),
mem: AMode::Unscaled(
stack_reg(),
SImm9::maybe_from_i64(((i * 16) + int_save_bytes) as i64).unwrap(),
),
flags: MemFlags::trusted(),
});
}
// For non-baldrdash calling conventions, the frame pointer
// will be moved into the stack pointer in the epilogue, so we
// can skip restoring the stack pointer value with this `add`.
if call_conv.extends_baldrdash() {
let total_save_bytes = (int_save_bytes + vec_save_bytes) as i32;
insts.extend(Self::gen_sp_reg_adjust(total_save_bytes));
}
// If this is Baldrdash-2020, restore the callee (i.e., our) TLS
// register. We may have allocated it for something else and clobbered
// it, but the ABI expects us to leave the TLS register unchanged.


@@ -542,7 +542,6 @@ impl MachInstEmitInfo for EmitInfo {
impl MachInstEmit for Inst {
type State = EmitState;
type Info = EmitInfo;
type UnwindInfo = super::unwind::AArch64UnwindInfo;
fn emit(&self, sink: &mut MachBuffer<Inst>, emit_info: &Self::Info, state: &mut EmitState) {
// N.B.: we *must* not exceed the "worst-case size" used to compute
@@ -2379,6 +2378,10 @@ impl MachInstEmit for Inst {
&Inst::ValueLabelMarker { .. } => {
// Nothing; this is only used to compute debug info.
}
&Inst::Unwind { ref inst } => {
sink.add_unwind(inst.clone());
}
}
let end_off = sink.cur_offset();


@@ -8,6 +8,7 @@ use crate::ir::types::{
B1, B128, B16, B32, B64, B8, F32, F64, FFLAGS, I128, I16, I32, I64, I8, I8X16, IFLAGS, R32, R64,
};
use crate::ir::{ExternalName, MemFlags, Opcode, SourceLoc, TrapCode, Type, ValueLabel};
use crate::isa::unwind::UnwindInst;
use crate::isa::CallConv;
use crate::machinst::*;
use crate::{settings, CodegenError, CodegenResult};
@@ -1216,6 +1217,11 @@ pub enum Inst {
reg: Reg,
label: ValueLabel,
},
/// An unwind pseudo-instruction.
Unwind {
inst: UnwindInst,
},
}
fn count_zero_half_words(mut value: u64, num_half_words: u8) -> usize {
@@ -2026,6 +2032,7 @@ fn aarch64_get_regs(inst: &Inst, collector: &mut RegUsageCollector) {
&Inst::ValueLabelMarker { reg, .. } => {
collector.add_use(reg);
}
&Inst::Unwind { .. } => {}
&Inst::EmitIsland { .. } => {}
}
}
@@ -2779,6 +2786,7 @@ fn aarch64_map_regs<RUM: RegUsageMapper>(inst: &mut Inst, mapper: &RUM) {
&mut Inst::ValueLabelMarker { ref mut reg, .. } => {
map_use(mapper, reg);
}
&mut Inst::Unwind { .. } => {}
}
}
@@ -4097,6 +4105,10 @@ impl Inst {
&Inst::ValueLabelMarker { label, reg } => {
format!("value_label {:?}, {}", label, reg.show_rru(mb_rru))
}
&Inst::Unwind { ref inst } => {
format!("unwind {:?}", inst)
}
}
}
}


@@ -1,201 +1,2 @@
use super::*;
use crate::isa::aarch64::inst::{args::PairAMode, imms::Imm12, regs, ALUOp, Inst};
use crate::isa::unwind::input::{UnwindCode, UnwindInfo};
use crate::machinst::UnwindInfoContext;
use crate::result::CodegenResult;
use alloc::vec::Vec;
use regalloc::Reg;
#[cfg(feature = "unwind")]
pub(crate) mod systemv;
pub struct AArch64UnwindInfo;
impl UnwindInfoGenerator<Inst> for AArch64UnwindInfo {
fn create_unwind_info(
context: UnwindInfoContext<Inst>,
) -> CodegenResult<Option<UnwindInfo<Reg>>> {
let word_size = 8u8;
let pair_size = word_size * 2;
let mut codes = Vec::new();
for i in context.prologue.clone() {
let i = i as usize;
let inst = &context.insts[i];
let offset = context.insts_layout[i];
match inst {
Inst::StoreP64 {
rt,
rt2,
mem: PairAMode::PreIndexed(rn, imm7),
..
} if *rt == regs::fp_reg()
&& *rt2 == regs::link_reg()
&& *rn == regs::writable_stack_reg()
&& imm7.value == -(pair_size as i16) =>
{
// stp fp (x29), lr (x30), [sp, #-16]!
codes.push((
offset,
UnwindCode::StackAlloc {
size: pair_size as u32,
},
));
codes.push((
offset,
UnwindCode::SaveRegister {
reg: *rt,
stack_offset: 0,
},
));
codes.push((
offset,
UnwindCode::SaveRegister {
reg: *rt2,
stack_offset: word_size as u32,
},
));
}
Inst::StoreP64 {
rt,
rt2,
mem: PairAMode::PreIndexed(rn, imm7),
..
} if rn.to_reg() == regs::stack_reg() && imm7.value % (pair_size as i16) == 0 => {
// stp r1, r2, [sp, #(i * #16)]
let stack_offset = imm7.value as u32;
codes.push((
offset,
UnwindCode::SaveRegister {
reg: *rt,
stack_offset,
},
));
if *rt2 != regs::zero_reg() {
codes.push((
offset,
UnwindCode::SaveRegister {
reg: *rt2,
stack_offset: stack_offset + word_size as u32,
},
));
}
}
Inst::AluRRImm12 {
alu_op: ALUOp::Add64,
rd,
rn,
imm12:
Imm12 {
bits: 0,
shift12: false,
},
} if *rd == regs::writable_fp_reg() && *rn == regs::stack_reg() => {
// mov fp (x29), sp.
codes.push((offset, UnwindCode::SetFramePointer { reg: rd.to_reg() }));
}
Inst::VirtualSPOffsetAdj { offset: adj } if offset > 0 => {
codes.push((offset, UnwindCode::StackAlloc { size: *adj as u32 }));
}
_ => {}
}
}
// TODO epilogues
let prologue_size = if context.prologue.len() == 0 {
0
} else {
context.insts_layout[context.prologue.end as usize - 1]
};
Ok(Some(UnwindInfo {
prologue_size,
prologue_unwind_codes: codes,
epilogues_unwind_codes: vec![],
function_size: context.len,
word_size,
initial_sp_offset: 0,
}))
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::cursor::{Cursor, FuncCursor};
use crate::ir::{ExternalName, Function, InstBuilder, Signature, StackSlotData, StackSlotKind};
use crate::isa::{lookup, CallConv};
use crate::settings::{builder, Flags};
use crate::Context;
use std::str::FromStr;
use target_lexicon::triple;
#[test]
fn test_simple_func() {
let isa = lookup(triple!("aarch64"))
.expect("expect aarch64 ISA")
.finish(Flags::new(builder()));
let mut context = Context::for_function(create_function(
CallConv::SystemV,
Some(StackSlotData::new(StackSlotKind::ExplicitSlot, 64)),
));
context.compile(&*isa).expect("expected compilation");
let result = context.mach_compile_result.unwrap();
let unwind_info = result.unwind_info.unwrap();
assert_eq!(
unwind_info,
UnwindInfo {
prologue_size: 12,
prologue_unwind_codes: vec![
(4, UnwindCode::StackAlloc { size: 16 }),
(
4,
UnwindCode::SaveRegister {
reg: regs::fp_reg(),
stack_offset: 0
}
),
(
4,
UnwindCode::SaveRegister {
reg: regs::link_reg(),
stack_offset: 8
}
),
(
8,
UnwindCode::SetFramePointer {
reg: regs::fp_reg()
}
)
],
epilogues_unwind_codes: vec![],
function_size: 24,
word_size: 8,
initial_sp_offset: 0,
}
);
}
fn create_function(call_conv: CallConv, stack_slot: Option<StackSlotData>) -> Function {
let mut func =
Function::with_name_signature(ExternalName::user(0, 0), Signature::new(call_conv));
let block0 = func.dfg.make_block();
let mut pos = FuncCursor::new(&mut func);
pos.insert_block(block0);
pos.ins().return_(&[]);
if let Some(stack_slot) = stack_slot {
func.stack_slots.push(stack_slot);
}
func
}
}


@@ -1,9 +1,7 @@
//! Unwind information for System V ABI (Aarch64).
use crate::isa::aarch64::inst::regs;
use crate::isa::unwind::input;
use crate::isa::unwind::systemv::{RegisterMappingError, UnwindInfo};
use crate::result::CodegenResult;
use crate::isa::unwind::systemv::RegisterMappingError;
use gimli::{write::CommonInformationEntry, Encoding, Format, Register};
use regalloc::{Reg, RegClass};
@@ -31,128 +29,40 @@ pub fn create_cie() -> CommonInformationEntry {
/// Map Cranelift registers to their corresponding Gimli registers.
pub fn map_reg(reg: Reg) -> Result<Register, RegisterMappingError> {
// For AArch64 DWARF register mappings, see:
//
// https://developer.arm.com/documentation/ihi0057/e/?lang=en#dwarf-register-names
//
// X0--X31 is 0--31; V0--V31 is 64--95.
match reg.get_class() {
RegClass::I64 => Ok(Register(reg.get_hw_encoding().into())),
RegClass::I64 => {
let reg = reg.get_hw_encoding() as u16;
Ok(Register(reg))
}
RegClass::V128 => {
let reg = reg.get_hw_encoding() as u16;
Ok(Register(64 + reg))
}
_ => Err(RegisterMappingError::UnsupportedRegisterBank("class?")),
}
}
pub(crate) fn create_unwind_info(
unwind: input::UnwindInfo<Reg>,
) -> CodegenResult<Option<UnwindInfo>> {
struct RegisterMapper;
impl crate::isa::unwind::systemv::RegisterMapper<Reg> for RegisterMapper {
fn map(&self, reg: Reg) -> Result<u16, RegisterMappingError> {
Ok(map_reg(reg)?.0)
}
fn sp(&self) -> u16 {
regs::stack_reg().get_hw_encoding().into()
}
pub(crate) struct RegisterMapper;
impl crate::isa::unwind::systemv::RegisterMapper<Reg> for RegisterMapper {
fn map(&self, reg: Reg) -> Result<u16, RegisterMappingError> {
Ok(map_reg(reg)?.0)
}
let map = RegisterMapper;
Ok(Some(UnwindInfo::build(unwind, &map)?))
}
#[cfg(test)]
mod tests {
use crate::cursor::{Cursor, FuncCursor};
use crate::ir::{
types, AbiParam, ExternalName, Function, InstBuilder, Signature, StackSlotData,
StackSlotKind,
};
use crate::isa::{lookup, CallConv};
use crate::settings::{builder, Flags};
use crate::Context;
use gimli::write::Address;
use std::str::FromStr;
use target_lexicon::triple;
#[test]
fn test_simple_func() {
let isa = lookup(triple!("aarch64"))
.expect("expect aarch64 ISA")
.finish(Flags::new(builder()));
let mut context = Context::for_function(create_function(
CallConv::SystemV,
Some(StackSlotData::new(StackSlotKind::ExplicitSlot, 64)),
));
context.compile(&*isa).expect("expected compilation");
let fde = match context
.create_unwind_info(isa.as_ref())
.expect("can create unwind info")
{
Some(crate::isa::unwind::UnwindInfo::SystemV(info)) => {
info.to_fde(Address::Constant(1234))
}
_ => panic!("expected unwind information"),
};
assert_eq!(format!("{:?}", fde), "FrameDescriptionEntry { address: Constant(1234), length: 24, lsda: None, instructions: [(4, CfaOffset(16)), (4, Offset(Register(29), -16)), (4, Offset(Register(30), -8)), (8, CfaRegister(Register(29)))] }");
fn sp(&self) -> u16 {
regs::stack_reg().get_hw_encoding().into()
}
fn create_function(call_conv: CallConv, stack_slot: Option<StackSlotData>) -> Function {
let mut func =
Function::with_name_signature(ExternalName::user(0, 0), Signature::new(call_conv));
let block0 = func.dfg.make_block();
let mut pos = FuncCursor::new(&mut func);
pos.insert_block(block0);
pos.ins().return_(&[]);
if let Some(stack_slot) = stack_slot {
func.stack_slots.push(stack_slot);
}
func
fn fp(&self) -> u16 {
regs::fp_reg().get_hw_encoding().into()
}
#[test]
fn test_multi_return_func() {
let isa = lookup(triple!("aarch64"))
.expect("expect aarch64 ISA")
.finish(Flags::new(builder()));
let mut context = Context::for_function(create_multi_return_function(CallConv::SystemV));
context.compile(&*isa).expect("expected compilation");
let fde = match context
.create_unwind_info(isa.as_ref())
.expect("can create unwind info")
{
Some(crate::isa::unwind::UnwindInfo::SystemV(info)) => {
info.to_fde(Address::Constant(4321))
}
_ => panic!("expected unwind information"),
};
assert_eq!(format!("{:?}", fde), "FrameDescriptionEntry { address: Constant(4321), length: 40, lsda: None, instructions: [(4, CfaOffset(16)), (4, Offset(Register(29), -16)), (4, Offset(Register(30), -8)), (8, CfaRegister(Register(29)))] }");
fn lr(&self) -> Option<u16> {
Some(regs::link_reg().get_hw_encoding().into())
}
fn create_multi_return_function(call_conv: CallConv) -> Function {
let mut sig = Signature::new(call_conv);
sig.params.push(AbiParam::new(types::I32));
let mut func = Function::with_name_signature(ExternalName::user(0, 0), sig);
let block0 = func.dfg.make_block();
let v0 = func.dfg.append_block_param(block0, types::I32);
let block1 = func.dfg.make_block();
let block2 = func.dfg.make_block();
let mut pos = FuncCursor::new(&mut func);
pos.insert_block(block0);
pos.ins().brnz(v0, block2, &[]);
pos.ins().jump(block1, &[]);
pos.insert_block(block1);
pos.ins().return_(&[]);
pos.insert_block(block2);
pos.ins().return_(&[]);
func
fn lr_offset(&self) -> Option<u32> {
Some(8)
}
}


@@ -65,7 +65,6 @@ impl MachBackend for AArch64Backend {
let buffer = vcode.emit();
let frame_size = vcode.frame_size();
let unwind_info = vcode.unwind_info()?;
let stackslot_offsets = vcode.stackslot_offsets().clone();
let disasm = if want_disasm {
@@ -80,7 +79,6 @@ impl MachBackend for AArch64Backend {
buffer,
frame_size,
disasm,
unwind_info,
value_labels_ranges: Default::default(),
stackslot_offsets,
})
@@ -127,11 +125,18 @@ impl MachBackend for AArch64Backend {
) -> CodegenResult<Option<crate::isa::unwind::UnwindInfo>> {
use crate::isa::unwind::UnwindInfo;
use crate::machinst::UnwindInfoKind;
Ok(match (result.unwind_info.as_ref(), kind) {
(Some(info), UnwindInfoKind::SystemV) => {
inst::unwind::systemv::create_unwind_info(info.clone())?.map(UnwindInfo::SystemV)
Ok(match kind {
UnwindInfoKind::SystemV => {
let mapper = self::inst::unwind::systemv::RegisterMapper;
Some(UnwindInfo::SystemV(
crate::isa::unwind::systemv::create_unwind_info_from_insts(
&result.buffer.unwind_info[..],
result.buffer.data.len(),
&mapper,
)?,
))
}
(Some(_info), UnwindInfoKind::Windows) => {
UnwindInfoKind::Windows => {
// TODO: support Windows unwind info on AArch64
None
}
@@ -200,12 +205,11 @@ mod test {
// mov x29, sp
// mov x1, #0x1234
// add w0, w0, w1
// mov sp, x29
// ldp x29, x30, [sp], #16
// ret
let golden = vec![
0xfd, 0x7b, 0xbf, 0xa9, 0xfd, 0x03, 0x00, 0x91, 0x81, 0x46, 0x82, 0xd2, 0x00, 0x00,
0x01, 0x0b, 0xbf, 0x03, 0x00, 0x91, 0xfd, 0x7b, 0xc1, 0xa8, 0xc0, 0x03, 0x5f, 0xd6,
0x01, 0x0b, 0xfd, 0x7b, 0xc1, 0xa8, 0xc0, 0x03, 0x5f, 0xd6,
];
assert_eq!(code, &golden[..]);
@@ -267,14 +271,13 @@ mod test {
// cbnz x1, 0x18
// mov x1, #0x1234 // #4660
// sub w0, w0, w1
// mov sp, x29
// ldp x29, x30, [sp], #16
// ret
let golden = vec![
253, 123, 191, 169, 253, 3, 0, 145, 129, 70, 130, 210, 0, 0, 1, 11, 225, 3, 0, 42, 161,
0, 0, 181, 129, 70, 130, 210, 1, 0, 1, 11, 225, 3, 1, 42, 161, 255, 255, 181, 225, 3,
0, 42, 97, 255, 255, 181, 129, 70, 130, 210, 0, 0, 1, 75, 191, 3, 0, 145, 253, 123,
193, 168, 192, 3, 95, 214,
0, 42, 97, 255, 255, 181, 129, 70, 130, 210, 0, 0, 1, 75, 253, 123, 193, 168, 192, 3,
95, 214,
];
assert_eq!(code, &golden[..]);