This is trying to keep algorithms out of the ir module, which deals
with the intermediate representation.
Also give the layout_stack() function a Result return value so it can
report a soft error when the stack frame is too large instead of
asserting. Since local variables can be arbitrarily large, it is easy
enough to overflow the stack with even a small function.
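As a rough sketch of the resulting shape (the error type, the `limit`
parameter, and the slot-size slice below are illustrative assumptions,
not the actual Cretonne signature):

    #[derive(Debug)]
    struct StackLayoutError(String);

    fn layout_stack(slot_sizes: &[u32], limit: u32) -> Result<u32, StackLayoutError> {
        let mut frame_size: u32 = 0;
        for &size in slot_sizes {
            frame_size = frame_size
                .checked_add(size)
                .ok_or_else(|| StackLayoutError("stack frame too large".to_string()))?;
        }
        if frame_size > limit {
            return Err(StackLayoutError(format!(
                "frame size {} exceeds limit {}",
                frame_size, limit
            )));
        }
        Ok(frame_size)
    }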
A CallConv enum on every function signature makes it possible to
generate calls to functions with different calling conventions within
the same ISA, even within a single function.
The calling conventions also serve as a way of customizing Cretonne's
behavior when embedded inside a VM. As an example, the SpiderWASM
calling convention is used to compile WebAssembly functions that run
inside the SpiderMonkey virtual machine.
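A minimal sketch of the idea, with illustrative field names (the real
Signature type carries more information than this):

    #[derive(Copy, Clone, PartialEq, Eq, Debug)]
    pub enum CallConv {
        /// The ISA's default calling convention.
        Native,
        /// WebAssembly functions running inside SpiderMonkey.
        SpiderWASM,
    }

    /// Placeholder value type so the sketch stands alone.
    #[derive(Copy, Clone, Debug)]
    pub struct Type(u8);

    pub struct Signature {
        pub argument_types: Vec<Type>,
        pub return_types: Vec<Type>,
        pub call_conv: CallConv,
    }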
All function signatures must have a calling convention at the end, so
this changes the textual IL syntax.
Before:
sig1 = signature(i32, f64) -> f64
After:
sig1 = (i32, f64) -> f64 native
sig2 = (i32) spiderwasm
When printing functions, the calling convention goes after the return types:
function %r1() -> i32, f32 spiderwasm {
ebb1:
...
}
In the parser, this calling convention is optional and defaults to
"native". This is mostly to avoid updating all the existing test cases
under filetests/. When printing a function, the calling convention is
always included, even for "native" functions.
Add a StackSlots::layout() method which computes the total stack frame
size and assigns offsets to all spill slots and local variables so they
don't interfere with each other or with incoming or outgoing function
arguments.
Stack slots are given an ad hoc alignment that is the natural alignment
for power-of-two sized spill slots, up to the stack pointer alignment.
It is possible we need explicit stack slot alignment in the future, but
at least for spill slots, this scheme is likely to work for most ISAs.
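For illustration only, the alignment rule described above could look
like this (a sketch, not the code in layout()):

    fn spill_slot_alignment(size: u32, sp_align: u32) -> u32 {
        // The natural alignment of a power-of-two sized slot is its size,
        // capped at the stack pointer alignment.
        debug_assert!(size.is_power_of_two() && sp_align.is_power_of_two());
        size.min(sp_align)
    }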
* Avoid floating-point types in Ieee32::new and Ieee64::new.
This eliminates the need for unsafe code in code that uses Cretonne,
removes a few instances of unsafe code in Cretonne itself, and
eliminates the only use of floating point in Cretonne.
* Rename new to with_bits, and new_from_float to with_float.
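A sketch of the resulting constructors, assuming the immediate simply
wraps its raw bits (not the exact Cretonne code):

    #[derive(Copy, Clone, PartialEq, Eq, Debug)]
    pub struct Ieee32(u32);

    impl Ieee32 {
        /// Construct from a raw bit pattern; no floating point involved.
        pub fn with_bits(bits: u32) -> Self {
            Ieee32(bits)
        }

        /// Convenience constructor from an `f32` value.
        pub fn with_float(x: f32) -> Self {
            Ieee32(x.to_bits())
        }
    }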
When making an outgoing call, some arguments may have to be passed on
the stack. Allocate OutgoingArg stack slots for these arguments and
write them immediately before the outgoing call instruction.
Do the same for incoming function arguments on the stack, but use
IncomingArg stack slots instead. This was previously done in the
spiller, but we move it to the legalizer so it is done at the same time
as outgoing stack arguments.
These stack slot assignments are done in the legalizer before live
range analysis because the outgoing arguments are usually distinct
SSA values with their own short live ranges.
Stack slots for outgoing arguments can be reused between function calls.
Add a list of outgoing argument stack slots allocated so far, and
provide a `get_outgoing_arg()` method which will reuse any outgoing
stack slots with matching size and offset.
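A hedged sketch of the reuse rule, keyed on (offset, size); the
container and types below are illustrative, not the real StackSlots
internals:

    use std::collections::HashMap;

    #[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
    struct StackSlot(u32);

    #[derive(Default)]
    struct OutgoingArgs {
        slots: HashMap<(i32, u32), StackSlot>,
        next: u32,
    }

    impl OutgoingArgs {
        /// Return an existing outgoing-argument slot with the same offset
        /// and size, or create a new one.
        fn get_outgoing_arg(&mut self, size: u32, offset: i32) -> StackSlot {
            let next = &mut self.next;
            *self.slots.entry((offset, size)).or_insert_with(|| {
                let slot = StackSlot(*next);
                *next += 1;
                slot
            })
        }
    }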
* Reduce code duplication in TypeConstraint subclasses
* Add ConstrainWiderOrEqual to ti and to ireduce, {s,u}extend, and f{promote,demote}
* Fix bug in emitting constraint edges in TypeEnv.dot()
* Modify runtime constraint checks to reject a match when they encounter overflow
* Rename Constrain types to something shorter
* Move lane_bits/lane_counts into subclasses of ValueType
* Add a wider_or_eq function in Rust and Python
* API and data structures proposal for the SSA construction module
* Polished API and implemented trivial functions
* API more explicit, Variable now struct parameter
* Sample test written to see how the API could be used
* Implemented local value numbering for SSABuilder
* Implemented SSA within a single Ebb
* Unfinished unoptimized implementation for recursive use and seal
* Working global value numbering (a toy sketch of the lookup rule follows this list)
The SSABuilder now creates ebb args and modifies jump instructions accordingly
* Updated docs and improved branch argument modification.
Removed instructions::branch_arguments and instructions::branch_argument_mut
* SSA building: bugfix, asserts and new test case
Missing a key optimization to remove cycles of Phi
* SSA Building: small changes after code review
Created helper function for seal_block (which now contains sanity checks)
* Optimization: removed useless phis (ebb arguments)
Uses the pessimistic assumption that a use of a not-yet-defined variable in an unsealed block creates an ebb argument, which is removed when sealing if it is detected as useless
Using aliases to avoid rewriting variables
* Changed the semantics of remove_ebb_arg and turned it into a proper API method
* Adapted ssa branch to changes in the DFG API
* Abandoned SparseMaps in favor of EntityMaps, added a named structure for header block data.
* Created a skeleton for a Cretonne IL builder frontend
* Frontend IL builder: first draft of implementation with example of instruction methods
* Working basic implementation of the frontend
Missing handling of function arguments and return values
* Interaction with function signature, sample test, more checks
* Test with function verifier, seal and fill sanity check
* Implemented python script to generate ILBuilder methods
* Added support for jump tables and stack slots
* Major API overhaul
* No longer generates Rust through Python; implements InstBuilder instead
* No longer parametrized by the user's blocks; uses regular `Ebb` instead
* Reuse of allocated memory via distinction between ILBuilder and FunctionBuilder
* Integrate changes from StackSlot
* Improved error message
* Added support for jump arguments supplied by the user
* Added a needed ebb_args proxy method
* Adapted to EntityRef being split into a new module
* Better error messages and fixed tests
* Added method to change jump destination
* We should be able to add unreachable code
* Added inst_result proxy to frontend
* Import support
* Added optimization for SSA construction:
If there are multiple predecessors but they agree on the value, don't create an EBB argument
* Moved unsafe and non-write-only funcs apart, improved docs
* Added proxy function for append_ebb_arg
* Support for unreachable code and better layout of the Ebbs
* Fixed a bug yielding an infinite loop in SSA construction
* SSA predecessors lookup code refactoring
* Fixed bug in unreachable definition
* New sanity check and debug display function
* Fixed bug in verifier and added is_pristine method for frontend
* Extended set of characters printable in function names
To be able to print names of functions in the test suite
* Fixes and improvements of SSA construction after code review
* Bugfixes for frontend code simplification
* On-the-fly critical edge splitting in case of br_table with jump arguments
* No more dangling undefined values, now attached as EBB args
* Bugfix: only split corresponding edges on demand, not all br_table edges
* Added signature retrieval method
* Bugfix for critical edge splitting not sealing the ebbs it created
* Proper handling of SSA side effects by the frontend
* Code refactoring: moving frontend and SSA to new crate
* Frontend: small changes and bugfixes after code review
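The following toy (not the real SSABuilder) sketches the lookup rule
behind the local and global value numbering mentioned above: a use
first consults the local definition in the current block, then follows
the unique predecessor of a sealed block. Blocks with several
predecessors would need EBB arguments (phis), which the toy omits.

    use std::collections::HashMap;

    type Block = usize;
    type Var = &'static str;
    type Value = u32;

    #[derive(Default)]
    struct ToySsa {
        /// Local definitions: (block, variable) -> value.
        defs: HashMap<(Block, Var), Value>,
        /// Blocks that are sealed with exactly one known predecessor.
        single_pred: HashMap<Block, Block>,
    }

    impl ToySsa {
        fn def_var(&mut self, block: Block, var: Var, val: Value) {
            self.defs.insert((block, var), val);
        }

        /// Local value numbering first, then a global lookup through the
        /// unique predecessor chain.
        fn use_var(&self, block: Block, var: Var) -> Option<Value> {
            if let Some(&v) = self.defs.get(&(block, var)) {
                return Some(v);
            }
            self.single_pred
                .get(&block)
                .and_then(|&pred| self.use_var(pred, var))
        }
    }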
Generate code to:
- Unwrap the instruction and generate an error if the instruction format
doesn't match the recipe.
- Look up the value locations of register and stack arguments.
The recipe_* functions in the ISA binemit modules now take these
unwrapped items as arguments.
Also add an optional `emit` argument to the EncRecipe constructor which
makes it possible to provide inline Rust code snippets for code
emission. This requires a lot less boilerplate than recipe_* functions.
Function arguments that don't fit in registers are passed on the stack.
Create "incoming_arg" stack slots representing the stack arguments, and
assign them to the value arguments during spilling.
The offset is relative to the stack pointer in the calling function, so
it excludes the return address pushed by the call instruction itself on
Intel ISAs.
Change the ArgumentLoc::Stack offset to an i32, so it matches the stack
slot offsets.
When coloring registers for a branch instruction, also make sure that
the values passed as EBB arguments are in the registers expected by the
EBB.
The first time a branch to an EBB is processed, assign the EBB arguments
to the registers where the branch arguments already reside so no
regmoves are needed.
* Convert TypeSet fields to sets
* Add a BitSet<T> type to Rust (a minimal sketch follows this list)
* Encode ValueTypeSets using BitSet (still need mypy cleanup)
* nits
* cleanup nits
* forgot mypy type annotations
* rustfmt fixes
* Round 1 comments: filter b2, b4; doc comments in Python; move bitset into its own top-level module; use Into<u32>
* fixes
* Revert comment to appease rustfmt
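A minimal sketch of such a bit set, parameterized over an unsigned
storage type; the trait bounds and method names here are assumptions:

    use std::ops::{BitAnd, BitOrAssign, Shl};

    #[derive(Copy, Clone, PartialEq, Eq, Debug)]
    pub struct BitSet<T>(pub T);

    impl<T> BitSet<T>
    where
        T: Copy
            + From<u8>
            + PartialEq
            + BitAnd<Output = T>
            + BitOrAssign
            + Shl<u8, Output = T>,
    {
        pub fn contains(&self, bit: u8) -> bool {
            (self.0 & (T::from(1u8) << bit)) != T::from(0u8)
        }

        pub fn insert(&mut self, bit: u8) {
            self.0 |= T::from(1u8) << bit;
        }
    }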
The EntityRef trait is used by more than just the EntityMap now, so it
should live in its own module.
Also move the entity_impl! macro into the new module so it can be used
for defining new entity references anywhere.
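For illustration, an entity reference is just a small copyable index
newtype; the trait shape below is an assumption of roughly what
entity_impl! fills in for each new type:

    pub trait EntityRef: Copy {
        fn new(index: usize) -> Self;
        fn index(self) -> usize;
    }

    #[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
    pub struct Inst(u32);

    impl EntityRef for Inst {
        fn new(index: usize) -> Self {
            Inst(index as u32)
        }
        fn index(self) -> usize {
            self.0 as usize
        }
    }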
* Replace a single-character string literal with a character literal.
* Use is_some() instead of comparing with Some(_).
* Add code-quotes around type names in comments.
* Use !...is_empty() instead of len() != 0.
* Tidy up redundant returns.
* Remove redundant .clone() calls.
* Remove unnecessary explicit lifetime parameters.
* Tidy up unnecessary '&'s.
* Add parens to make operator precedence explicit.
* Use debug_assert_eq instead of debug_assert with ==.
* Replace a &Vec argument with a &[...].
* Replace `a = a op b` with `a op= b`.
* Avoid unnecessary closures.
* Avoid .iter() and .iter_mut() for iterating over containers.
* Remove unneeded qualification.
Use a new StackSlots struct to keep track of a function's stack slots
instead of just an entity map. This lets us build more internal data
structures for tracking the stack slots if necessary.
Start by adding a make_spill_slot() function that will be used by the
register allocator.
Add a StackSlotKind enumeration to help keep track of the different
kinds of stack slots supported:
- Incoming and outgoing function arguments on the stack.
- Spill slots and locals.
Change the text format syntax for declaring a stack slot to use a kind
keyword rather than just 'stack_slot'.
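A hedged sketch of the kinds listed above as a Rust enum (the variant
names are assumptions based on the text):

    #[derive(Copy, Clone, PartialEq, Eq, Debug)]
    pub enum StackSlotKind {
        /// A local variable explicitly created by the function.
        Local,
        /// A slot created by the register allocator for spilled values.
        SpillSlot,
        /// An incoming function argument passed on the stack by the caller.
        IncomingArg,
        /// An outgoing argument for a call made by this function.
        OutgoingArg,
    }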
An instruction may have fixed operand constraints that make it
impossible for a single register value to satisfy two of them at a time.
Detect when the same value is used for multiple fixed register operands
and insert copies during the spilling pass.
When comparing instructions in the same EBB, behave as if the RPO visits
instructions in program order (a toy comparator sketch follows the list
below).
- Add a Layout::pp_ebb() method for convenience. It gets the EBB
containing any program point.
- Add a conversion from ValueDef to ExpandedProgramPoint so it can be
used with the rpo_cmp method.
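A toy comparator illustrating the rule (not the real rpo_cmp): order by
the containing EBB's reverse post-order number first, then by
instruction order within the EBB:

    use std::cmp::Ordering;

    #[derive(Copy, Clone)]
    struct Point {
        /// Reverse post-order number of the containing EBB.
        block_rpo: u32,
        /// Program-order sequence number within the EBB.
        inst_seq: u32,
    }

    fn rpo_cmp(a: Point, b: Point) -> Ordering {
        a.block_rpo
            .cmp(&b.block_rpo)
            .then(a.inst_seq.cmp(&b.inst_seq))
    }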
* Function names should start with %
* Create FunctionName from string
* Implement displaying of FunctionName as %nnnn with fallback to #xxxx
* Run rustfmt and fix FunctionName::with_string in parser
* Implement FunctionName::new as a generic function
* Binary function names should start with #
* Implement NameRepr for function name
* Fix examples in docs to reflect that function names start with %
* Rebase and fix filecheck tests
The reload pass inserts spill and fill instructions as needed so
instructions that operate on registers will never see a value with stack
affinity.
This is a very basic implementation, and we can't write good test cases
until we have a spilling pass.
* Implemented in two passes
* First pass discovers the loop headers (they dominate one of their predecessors); see the sketch after this list
* Second pass traverses the blocks of each loop
* Discovers the loop tree structure
* Offers a new LoopAnalysis data structure queried from outside the module
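A sketch of the first pass's test, with `dominates` and the
predecessor list standing in for the real dominator tree and CFG
queries:

    fn is_loop_header<F>(block: u32, preds: &[u32], dominates: F) -> bool
    where
        F: Fn(u32, u32) -> bool,
    {
        // A back edge exists when the block dominates one of its
        // predecessors.
        preds.iter().any(|&p| dominates(block, p))
    }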
* Fix GVN skipping the instruction after a deleted instruction.
* Teach GVN to resolve aliases as it proceeds.
* Clean up an obsolete reference to extended_values.
* Skeleton simple_gvn pass.
* Basic testing infrastructure for simple-gvn.
* Add can_load and can_store flags to instructions.
* Move the replace_values function into the DataFlowGraph.
* Make InstructionData derive from Hash, PartialEq, and Eq.
* Make EntityList's hash and eq functions panic.
* Change Ieee32 and Ieee64 to store u32 and u64, respectively.
Provide a drop_dead_args() function which deletes dead EBB arguments
instead. We still need to assign a register to dead EBB arguments, so
they can't just be ignored.
These special-purpose arguments and return values are only relevant for
the function being compiled, so add a `current` flag to
legalize_signature().
- Add the necessary argument values to the entry block to represent
the special-purpose arguments.
- Propagate the link and sret arguments to return instructions if the
legalized signature asks for it.
Enumerate a set of special purposes for function arguments that general
purpose code needs to know about. Some of these argument purposes will
only appear in the signature of the current function, representing
things the prologue and epilogues need to know about like the link
register and callee-saved registers.
Get rid of the 'inreg' argument flag. Arguments can be pre-assigned to a
specific register instead.
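A hedged sketch of such an enumeration, listing only the purposes
named above (the variant names are assumptions):

    #[derive(Copy, Clone, PartialEq, Eq, Debug)]
    pub enum ArgumentPurpose {
        /// An ordinary user-visible argument or return value.
        Normal,
        /// A pointer to a struct return value ("sret").
        StructReturn,
        /// The link register / return address.
        Link,
        /// A callee-saved register the prologue and epilogue must preserve.
        CalleeSaved,
    }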
- The detach_secondary_results() method is a leftover from the two-plane value
representation. Use detach_results() instead to remove all instruction
results.
- Make the append_* DFG methods more direct. Don't depend on calling the
corresponding attach_* methods. Just create a new value directly,
using the values.next_key() trick.
This makes it possible to reuse one or more result values in the
instruction that is being inserted.
Also add a with_result(v) method for the common case of reusing a single
result value. This could be specialized in the future.
These methods are used to reattach detached values:
- change_to_alias
- attach_result
- attach_ebb_arg
Add an assertion to all of them to ensure that the provided value is not
already attached somewhere else. Use a new value_is_attached() method
for the test.
Also include a verifier check for uses of detached values.
All values are now references into the value table, so drop the
distinction between direct and table values. Direct values don't exist
any more.
Also remove the parser support for the 'vxNN' syntax. Only 'vNN' values
can be parsed now.