Add a basic alias analysis with redundant-load elim and store-to-load forwarding opts. (#4163)

This PR adds a basic *alias analysis*, and optimizations that use it.
This is a "mid-end optimization": it operates on CLIF, the
machine-independent IR, before lowering occurs.

The alias analysis (or, perhaps more properly, a sort of memory-value
analysis) determines when it can prove that the value at a particular
memory location is equal to a given SSA value; when it can, it replaces
loads of that location with uses of that value.

This subsumes two common optimizations:

* Redundant load elimination: when the same memory address is loaded
  twice, and it can be proven that no intervening operation writes to
  that memory, the second load is *redundant*: its result must equal the
  first's, so we can reuse the first load's result and remove the second
  load.

* Store-to-load forwarding: when a load can be proven to access exactly
  the memory written by a preceding store, we can replace the load's
  result with the store's data operand, and remove the load.

Both of these optimizations rely on a "last store" analysis that acts as
a sort of coloring mechanism over disjoint categories of abstract memory
state. The basic idea is that every memory-accessing operation is placed
into one of N disjoint categories; the same memory must never be accessed
by an op in one category and later by an op in another category. (The
frontend must ensure this.)
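For instance, a Wasm-producing frontend can tag each access with one of
the mutually exclusive `MemFlags` bits added in this PR. The sketch below
shows the idea; which base flags a real frontend starts from (here,
`MemFlags::trusted()`) is illustrative:

```rust
use cranelift_codegen::ir::MemFlags;

fn access_flags() -> (MemFlags, MemFlags, MemFlags) {
    // Linear-memory ("heap"), table-element, and vmctx-struct accesses go
    // into three disjoint categories; anything else is the implicit "other"
    // part of abstract state.
    let heap = MemFlags::trusted().with_heap();
    let table = MemFlags::trusted().with_table();
    let vmctx = MemFlags::trusted().with_vmctx();
    // Setting two category bits on one access is a frontend bug: the setters
    // assert mutual exclusion (e.g. `with_heap().with_table()` would panic).
    (heap, table, vmctx)
}
```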

Then, given this, we scan the code and determine, for each
memory-accessing op, the single prior instruction that is the last store
to the same category. This "colors" the instruction: it is, in a sense, a
static name for that version of memory.
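A sketch of the per-category state such a scan might carry (field and
type names here are illustrative, not the PR's exact definitions):

```rust
use cranelift_codegen::ir::{Inst, MemFlags};

/// Last store seen in each disjoint category of abstract memory state.
#[derive(Clone, Copy, Default, PartialEq, Eq)]
struct LastStores {
    heap: Option<Inst>,
    table: Option<Inst>,
    vmctx: Option<Inst>,
    other: Option<Inst>,
}

impl LastStores {
    /// Record a store: it becomes the new "color" for its category.
    fn see_store(&mut self, inst: Inst, flags: MemFlags) {
        if flags.heap() {
            self.heap = Some(inst);
        } else if flags.table() {
            self.table = Some(inst);
        } else if flags.vmctx() {
            self.vmctx = Some(inst);
        } else {
            self.other = Some(inst);
        }
    }

    /// Look up the last store that could alias an access with these flags.
    fn last_store_for(&self, flags: MemFlags) -> Option<Inst> {
        if flags.heap() {
            self.heap
        } else if flags.table() {
            self.table
        } else if flags.vmctx() {
            self.vmctx
        } else {
            self.other
        }
    }
}
```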

This analysis provides an important invariant: if two operations access
memory with the same last store, then *no other store can alias* in the
time between that last store and these operations. This must-not-alias
property, together with checks that the accessed address is *exactly the
same* (same SSA value and offset) and that the other attributes of the
access (type, extension mode) match, lets us prove that the results are
the same.

Given last-store info, we scan the instructions and build a table from a
"memory location" key (last store, address, offset, type, extension) to
the SSA value known to be stored at that location. A store inserts a new
mapping. A load may also insert a new mapping, if we did not already have
one. Then, when a load occurs and an entry already exists for its
location, we can reuse that value. This is either redundant load
elimination or store-to-load forwarding, depending on whether the
existing entry came from a load or a store.
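A minimal sketch of that table (again with illustrative names; the real
pass uses Cranelift's entity types and interleaves this with the
last-store scan):

```rust
use std::collections::HashMap;

use cranelift_codegen::ir::{Inst, Type, Value};

/// Symbolic memory location: the same key implies the same stored value.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct MemoryLoc {
    last_store: Option<Inst>, // the "color" from the last-store analysis
    address: Value,           // SSA base address
    offset: i32,
    ty: Type,
    // extension mode omitted for brevity
}

#[derive(Default)]
struct KnownMemValues {
    values: HashMap<MemoryLoc, Value>,
}

impl KnownMemValues {
    /// A store defines the value now known to live at its location.
    fn handle_store(&mut self, loc: MemoryLoc, data: Value) {
        self.values.insert(loc, data);
    }

    /// A load reuses a known value if one exists (RLE or store-to-load
    /// forwarding); otherwise its own result becomes the known value.
    fn handle_load(&mut self, loc: MemoryLoc, result: Value) -> Option<Value> {
        if let Some(&known) = self.values.get(&loc) {
            Some(known) // caller rewrites uses of `result` to `known`
        } else {
            self.values.insert(loc, result);
            None
        }
    }
}
```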

Note that this *does* work across basic blocks: the last-store analysis
is a full iterative dataflow pass, and we are careful to check that a
previously-defined value dominates a potentially redundant load before
reusing it there. So we do the right thing if a load is only "partially
redundant" (the location was already loaded, but only in one predecessor
block), and we also correctly reuse a value when a store or load occurs
above a loop and a redundant load of that value occurs within the loop,
as long as no potentially-aliasing stores happen within the loop.
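The dominance guard mentioned above might look roughly like this (a
sketch only; the actual pass consults Cranelift's `DominatorTree` with
its own program-point handling):

```rust
use cranelift_codegen::dominator_tree::DominatorTree;
use cranelift_codegen::ir::{Function, Inst};

/// Only reuse a previously known value at `load` if its defining instruction
/// dominates the load; otherwise the value may not be available on all paths.
fn may_reuse(domtree: &DominatorTree, func: &Function, def: Inst, load: Inst) -> bool {
    domtree.dominates(def, load, &func.layout)
}
```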
Author: Chris Fallin
Date: 2022-05-20 13:19:32 -07:00 (committed by GitHub)
Parent: 08b7c87793
Commit: 0824abbae4
17 changed files with 900 additions and 57 deletions


@@ -11,9 +11,20 @@ enum FlagBit {
Readonly,
LittleEndian,
BigEndian,
/// Accesses only the "heap" part of abstract state. Used for
/// alias analysis. Mutually exclusive with "table" and "vmctx".
Heap,
/// Accesses only the "table" part of abstract state. Used for
/// alias analysis. Mutually exclusive with "heap" and "vmctx".
Table,
/// Accesses only the "vmctx" part of abstract state. Used for
/// alias analysis. Mutually exclusive with "heap" and "table".
Vmctx,
}
const NAMES: [&str; 5] = ["notrap", "aligned", "readonly", "little", "big"];
const NAMES: [&str; 8] = [
"notrap", "aligned", "readonly", "little", "big", "heap", "table", "vmctx",
];
/// Endianness of a memory access.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Hash)]
@@ -110,6 +121,12 @@ impl MemFlags {
assert!(!(self.read(FlagBit::LittleEndian) && self.read(FlagBit::BigEndian)));
}
/// Set endianness of the memory access, returning new flags.
pub fn with_endianness(mut self, endianness: Endianness) -> Self {
self.set_endianness(endianness);
self
}
/// Test if the `notrap` flag is set.
///
/// Normally, trapping is part of the semantics of a load/store operation. If the platform
@@ -128,6 +145,12 @@ impl MemFlags {
self.set(FlagBit::Notrap)
}
/// Set the `notrap` flag, returning new flags.
pub fn with_notrap(mut self) -> Self {
self.set_notrap();
self
}
/// Test if the `aligned` flag is set.
///
/// By default, Cranelift memory instructions work with any unaligned effective address. If the
@@ -142,6 +165,12 @@ impl MemFlags {
self.set(FlagBit::Aligned)
}
/// Set the `aligned` flag, returning new flags.
pub fn with_aligned(mut self) -> Self {
self.set_aligned();
self
}
/// Test if the `readonly` flag is set.
///
/// Loads with this flag have no memory dependencies.
@@ -155,6 +184,87 @@ impl MemFlags {
pub fn set_readonly(&mut self) {
self.set(FlagBit::Readonly)
}
/// Set the `readonly` flag, returning new flags.
pub fn with_readonly(mut self) -> Self {
self.set_readonly();
self
}
/// Test if the `heap` bit is set.
///
/// Loads and stores with this flag accesses the "heap" part of
/// abstract state. This is disjoint from the "table", "vmctx",
/// and "other" parts of abstract state. In concrete terms, this
/// means that behavior is undefined if the same memory is also
/// accessed by another load/store with one of the other
/// alias-analysis bits (`table`, `vmctx`) set, or `heap` not set.
pub fn heap(self) -> bool {
self.read(FlagBit::Heap)
}
/// Set the `heap` bit. See the notes about mutual exclusion with
/// other bits in `heap()`.
pub fn set_heap(&mut self) {
assert!(!self.table() && !self.vmctx());
self.set(FlagBit::Heap);
}
/// Set the `heap` bit, returning new flags.
pub fn with_heap(mut self) -> Self {
self.set_heap();
self
}
/// Test if the `table` bit is set.
///
/// Loads and stores with this flag accesses the "table" part of
/// abstract state. This is disjoint from the "heap", "vmctx",
/// and "other" parts of abstract state. In concrete terms, this
/// means that behavior is undefined if the same memory is also
/// accessed by another load/store with one of the other
/// alias-analysis bits (`heap`, `vmctx`) set, or `table` not set.
pub fn table(self) -> bool {
self.read(FlagBit::Table)
}
/// Set the `table` bit. See the notes about mutual exclusion with
/// other bits in `table()`.
pub fn set_table(&mut self) {
assert!(!self.heap() && !self.vmctx());
self.set(FlagBit::Table);
}
/// Set the `table` bit, returning new flags.
pub fn with_table(mut self) -> Self {
self.set_table();
self
}
/// Test if the `vmctx` bit is set.
///
/// Loads and stores with this flag accesses the "vmctx" part of
/// abstract state. This is disjoint from the "heap", "table",
/// and "other" parts of abstract state. In concrete terms, this
/// means that behavior is undefined if the same memory is also
/// accessed by another load/store with one of the other
/// alias-analysis bits (`heap`, `table`) set, or `vmctx` not set.
pub fn vmctx(self) -> bool {
self.read(FlagBit::Vmctx)
}
/// Set the `vmctx` bit. See the notes about mutual exclusion with
/// other bits in `vmctx()`.
pub fn set_vmctx(&mut self) {
assert!(!self.heap() && !self.table());
self.set(FlagBit::Vmctx);
}
/// Set the `vmctx` bit, returning new flags.
pub fn with_vmctx(mut self) -> Self {
self.set_vmctx();
self
}
}
impl fmt::Display for MemFlags {