ARM64 backend, part 3 / 11: MachInst infrastructure.

This patch adds the MachInst, or Machine Instruction, infrastructure.
This is the machine-independent portion of the new backend design. It
contains the implementation of the "vcode" (virtual-registerized code)
container, the top-level lowering algorithm and compilation pipeline,
and the trait definitions that the machine backends will fill in.
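As a rough illustration of the shape of such a trait, consider the sketch below. The names (`MachInstSketch`, `ToyInst`) are invented for this example and are not the actual interface added by this patch; the only method shown, `is_epilogue_placeholder`, is one the block-ordering code in this series actually queries.

```rust
/// Hypothetical sketch of the kind of trait a machine backend fills in.
/// Names are illustrative only; the real trait lives in the machinst module.
trait MachInstSketch {
    /// Is this instruction a placeholder marking the function epilogue?
    fn is_epilogue_placeholder(&self) -> bool;
}

/// A toy instruction type implementing the sketch.
enum ToyInst {
    Nop,
    EpiloguePlaceholder,
}

impl MachInstSketch for ToyInst {
    fn is_epilogue_placeholder(&self) -> bool {
        matches!(self, ToyInst::EpiloguePlaceholder)
    }
}

fn main() {
    // Machine-independent code can now reason about instructions
    // through the trait, without knowing the concrete ISA.
    assert!(ToyInst::EpiloguePlaceholder.is_epilogue_placeholder());
    assert!(!ToyInst::Nop.is_epilogue_placeholder());
}
```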

This backend infrastructure is included in the compilation of the
`codegen` crate, but it is not yet tied into the public APIs; that patch
will come last, after all the other pieces are filled in.

This patch contains code written by Julian Seward <jseward@acm.org> and
Benjamin Bouvier <public@benj.me>, originally developed on a side-branch
before rebasing and condensing into this patch series. See the `arm64`
branch at `https://github.com/cfallin/wasmtime` for original development
history.

Co-authored-by: Julian Seward <jseward@acm.org>
Co-authored-by: Benjamin Bouvier <public@benj.me>
Author: Chris Fallin
Date: 2020-04-09 12:27:26 -07:00
Parent: f80fe949c6
Commit: d83574261c
14 changed files with 2662 additions and 2 deletions


@@ -0,0 +1,59 @@
//! Computation of basic block order in emitted code.

use crate::machinst::*;

/// Simple reverse postorder-based block order emission.
///
/// TODO: use a proper algorithm, such as the bottom-up straight-line-section
/// construction algorithm.
struct BlockRPO {
    /// Whether each block (indexed by `BlockIndex`) has been visited yet.
    visited: Vec<bool>,
    /// Blocks in postorder, built up during the DFS.
    postorder: Vec<BlockIndex>,
    /// The block containing the epilogue placeholder, if found; it is
    /// deferred to the end of the final order.
    deferred_last: Option<BlockIndex>,
}

impl BlockRPO {
    fn new<I: VCodeInst>(vcode: &VCode<I>) -> BlockRPO {
        BlockRPO {
            visited: vec![false; vcode.num_blocks()],
            postorder: vec![],
            deferred_last: None,
        }
    }

    /// Depth-first search from `block`, pushing blocks in postorder.
    fn visit<I: VCodeInst>(&mut self, vcode: &VCode<I>, block: BlockIndex) {
        self.visited[block as usize] = true;
        for succ in vcode.succs(block) {
            if !self.visited[*succ as usize] {
                self.visit(vcode, *succ);
            }
        }

        // If this block contains the epilogue placeholder, defer it so that
        // it lands at the end of the final order rather than in RPO position.
        let (start, end) = &vcode.block_ranges[block as usize];
        for i in *start..*end {
            if vcode.insts[i as usize].is_epilogue_placeholder() {
                debug_assert!(self.deferred_last.is_none());
                self.deferred_last = Some(block);
                return;
            }
        }

        self.postorder.push(block);
    }

    /// Consume the visitor and produce the final reverse postorder.
    fn rpo(self) -> Vec<BlockIndex> {
        let mut rpo = self.postorder;
        rpo.reverse();
        if let Some(block) = self.deferred_last {
            rpo.push(block);
        }
        rpo
    }
}

/// Compute the final block order.
pub fn compute_final_block_order<I: VCodeInst>(vcode: &VCode<I>) -> Vec<BlockIndex> {
    let mut rpo = BlockRPO::new(vcode);
    rpo.visit(vcode, vcode.entry());
    rpo.rpo()
}
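For illustration, the same reverse-postorder traversal can be sketched over a plain adjacency list, independent of the `VCode` types above. This is not part of the patch; `reverse_postorder` is a hypothetical stand-in that mirrors `BlockRPO::visit`/`rpo`, minus the epilogue-placeholder deferral:

```rust
/// Reverse postorder over a small CFG given as an adjacency list.
/// Mirrors the DFS in `BlockRPO` above (without the epilogue deferral).
fn reverse_postorder(succs: &[Vec<usize>], entry: usize) -> Vec<usize> {
    fn visit(succs: &[Vec<usize>], block: usize, visited: &mut [bool], post: &mut Vec<usize>) {
        visited[block] = true;
        for &s in &succs[block] {
            if !visited[s] {
                visit(succs, s, visited, post);
            }
        }
        // Push after all successors: postorder.
        post.push(block);
    }

    let mut visited = vec![false; succs.len()];
    let mut post = vec![];
    visit(succs, entry, &mut visited, &mut post);
    post.reverse();
    post
}

fn main() {
    // Diamond CFG: 0 -> {1, 2}, 1 -> 3, 2 -> 3.
    let cfg = vec![vec![1, 2], vec![3], vec![3], vec![]];
    let rpo = reverse_postorder(&cfg, 0);
    // With this successor visit order: entry first, join block last.
    assert_eq!(rpo, vec![0, 2, 1, 3]);
}
```

Reverse postorder guarantees that every block appears before its successors except along back edges, which is why it is a reasonable placeholder until the straight-line-section algorithm mentioned in the TODO is implemented.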