With the change to the parser to preserve indices, it now inserts
placeholders to pad out index spaces as needed. Placeholder functions
use reserved signature indices, so skip them when writing the function
out to avoid emitting them as "sig4294967295".
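As a hedged illustration (the constant and table layout are invented,
not Cretonne's actual writer code), the skip amounts to filtering on
the reserved index:

    // Sketch only: RESERVED_SIG and the (index, signature) pairs are
    // hypothetical stand-ins for the real data structures.
    const RESERVED_SIG: u32 = u32::MAX; // would print as "sig4294967295"

    fn write_signatures(sigs: &[(u32, String)], out: &mut String) {
        for &(index, ref sig) in sigs {
            // Placeholders only pad out the index space; don't print them.
            if index == RESERVED_SIG {
                continue;
            }
            out.push_str(&format!("sig{} = {}\n", index, sig));
        }
    }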
Cretonne clients don't need to know how the register allocator works.
Export the RegDiversions type from the binemit module instead. It is
used by the "test binemit" driver.
StackSlotKind::OutgoingArg stack slots have an offset that is relative
to our own stack pointer, while all other stack slot kinds have offsets
that are relative to the caller's stack pointer.
Make sure we generate the right sp-relative offsets for outgoing
arguments too.
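A minimal sketch of the two offset bases (the types and the frame_size
handling are assumptions for illustration, not Cretonne's layout code):

    /// Hypothetical stand-in for the real StackSlotKind.
    enum SlotBase {
        OutgoingArg, // offset recorded relative to our own SP
        CallerSp,    // all other kinds: relative to the caller's SP
    }

    /// Translate a recorded slot offset into an offset from our own SP,
    /// given the frame size (distance from our SP up to the caller's SP).
    fn sp_relative_offset(base: SlotBase, offset: i32, frame_size: i32) -> i32 {
        match base {
            SlotBase::OutgoingArg => offset,           // already SP-relative
            SlotBase::CallerSp => offset + frame_size, // rebase onto our SP
        }
    }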
This makes it easier to debug testcases:
- the entity numbers in a .cton file match the entity numbers used
within Cretonne.
- serializing and deserializing doesn't cause indices to change.
One disadvantage is that if a .cton file uses sparse entity numbers,
deserializing to the in-memory form doesn't compact it. However, the
text format is not intended to be performance-critical, so this isn't
expected to be a big burden.
When the input is a NaN, we need to generate a different trap code, so
use the new trapff instruction to generate such a trap after the first
floating point comparison.
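A conceptual sketch of the legalized behavior (the trap code names are
assumptions for illustration, not the generated code):

    // Shows why the NaN case needs its own trap code, distinct from the
    // range-check trap.
    fn checked_f64_to_i32(x: f64) -> Result<i32, &'static str> {
        if x.is_nan() {
            // The trapff path: the first comparison is unordered for NaN.
            return Err("bad_toint");
        }
        let t = x.trunc();
        if t < i32::MIN as f64 || t > i32::MAX as f64 {
            return Err("int_ovf"); // out-of-range inputs trap separately
        }
        Ok(t as i32)
    }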
This is the floating-point equivalent of trapif: trap when a given
condition holds in the floating-point flags.
Define Intel encodings comparable to the trapif encodings.
This enables code generation that never causes a SIGFPE signal to be
raised from a division instruction. Instead, division and remainder
calculations are protected by explicit traps.
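The guarded sequence looks conceptually like this (a sketch, not the
generated code; the trap names are placeholders):

    // Both cases that make x86 div/idiv fault are checked up front, so
    // the hardware instruction itself can never raise SIGFPE.
    fn guarded_sdiv(x: i32, y: i32) -> Result<i32, &'static str> {
        if y == 0 {
            return Err("int_divz"); // explicit divide-by-zero trap
        }
        if x == i32::MIN && y == -1 {
            return Err("int_ovf"); // the one overflowing signed division
        }
        Ok(x / y)
    }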
* gen_settings: don't try to display a Preset descriptor in Flags
Trying to display a preset doesn't make sense, and before this commit it
did not display anything meaningful - the printout just said e.g.
"haswell =\n".
The offset byte of a preset descriptor isn't a valid offset into the
flag bytes; it is actually an offset into the PRESETS table. It will
cause a panic when the offset is out of bounds for the flag bytes,
which happens in the Intel ISA as of this commit.
* intel settings: test that the display impl doesn't panic
This instruction loads a stack limit from a global variable and compares
it to the stack pointer, trapping if the stack has grown beyond the
limit.
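Conceptually the check is (a sketch with invented names, assuming a
downward-growing stack):

    fn check_stack_limit(sp: usize, limit: usize) -> Result<(), &'static str> {
        // `limit` is loaded from the global variable; the stack grows
        // down, so an SP below the limit means the stack has overflowed.
        if sp < limit {
            return Err("stack_overflow"); // trap rather than overrun
        }
        Ok(())
    }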
Also add an expand_flags transform group containing legalization patterns
for ISAs with CPU flags.
Fixes #234.
The instruction set has variants with 8-bit and 32-bit signed immediate
operands.
Add a TODO to use a TEST instruction for the special case ifcmp_imm x, 0.
Changes:
* Adds a new generic instruction, SELECTIF, that does value selection (a la
conditional move) similarly to the existing SELECT, except that it is
controlled by a condition code and a flags-register input.
* Adds a new Intel x86_64 variant, 'baseline', that supports SSE2 and
nothing else.
* Adds new Intel x86_64 instructions BSR and BSF.
* Implements generic CLZ, CTZ and POPCOUNT on x86_64 'baseline' targets
using the new BSR, BSF and SELECTIF instructions (see the sketch below).
* Implements SELECTIF on x86_64 targets using conditional-moves.
* new test filetests/isa/intel/baseline_clz_ctz_popcount.cton
(for legalization)
* new test filetests/isa/intel/baseline_clz_ctz_popcount_encoding.cton
(for encoding)
* Allow lib/cretonne/meta/gen_legalizer.py to generate non-snake-caseified
Rust without rustc complaining.
Fixes #238.
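For reference, the CLZ lowering idea (a sketch, not the emitted code):
on x86, BSR leaves its destination undefined and sets ZF for a zero
input, so SELECTIF supplies the zero-input answer via a conditional
move:

    fn bsr32(x: u32) -> Option<u32> {
        // Index of the highest set bit; x86 BSR reports zero input via ZF.
        if x == 0 { None } else { Some(31 - x.leading_zeros()) }
    }

    fn clz32(x: u32) -> u32 {
        match bsr32(x) {
            Some(i) => 31 - i, // highest-bit index -> leading-zero count
            None => 32,        // selected when BSR saw a zero input
        }
    }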
This Function method can be used after the final code layout has been
computed. It returns all the instructions in an EBB along with their
encoded size and offset from the beginning of the function.
This is useful for extracting additional metadata about trapping
instructions and other things that may be needed by a VM.
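The shape of the result might look like this (a hypothetical sketch,
not the actual method signature):

    struct InstInfo {
        offset: u32, // byte offset from the start of the function
        size: u8,    // encoded size of this instruction
    }

    /// Walk encoded sizes in layout order, accumulating byte offsets.
    fn inst_offsets(encoded_sizes: &[u8]) -> Vec<InstInfo> {
        let mut offset = 0u32;
        encoded_sizes
            .iter()
            .map(|&size| {
                let info = InstInfo { offset, size };
                offset += u32::from(size);
                info
            })
            .collect()
    }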
The dominator tree pre-order is defined at the EBB granularity, but we
are looking for dominating nodes at the instruction level. This means
that we sometimes need to look higher up the DomForest stack for a
dominating node, using DominatorTree::dominates() instead of
DominatorTreePreorder::dominates().
Each dominance check involves the domtree.last_dominator() function
scanning up the dominator tree, starting from the new node that was
pushed. We can eliminate this duplicate work by exposing the
last_dominator() function to push_node().
As we are searching through nodes on the stack, maintain a last_dom
program point representing the previous return value from
last_dominator(). This way, we're only scanning the dominator tree once.
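A generic sketch of the caching idea (stand-in types; the real
DomForest code differs):

    /// Scan stack nodes against `new_node`, resuming the upward walk
    /// from the cached `last_dom` instead of restarting from `new_node`
    /// for every check.
    fn nearest_dominating(
        stack_nodes: &[u32],
        new_node: u32,
        idom: impl Fn(u32) -> Option<u32>, // immediate dominator, None at root
        dominates: impl Fn(u32, u32) -> bool,
    ) -> Option<u32> {
        let mut last_dom = Some(new_node); // cached last_dominator() result
        for &node in stack_nodes.iter().rev() {
            // last_dom only ever moves up the tree, so the total work
            // across the loop is one walk rather than one walk per check.
            while let Some(d) = last_dom {
                if dominates(d, node) {
                    break;
                }
                last_dom = idom(d);
            }
        }
        last_dom
    }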
Use a better algorithm for resolving interferences in virtual registers.
This improves code quality by generating far fewer copies on some
complicated functions.
After the initial union-find phase, the check_vreg() function uses a
Budimlic forest to check for interference between the values in the
virtual registers, as before. All the interference-free vregs are done.
Others are passed to synthesize_vreg() which dissolves the vreg and then
attempts to rebuild one or more vregs from the contained values.
The pairwise interference checks use *virtual copies* to make sure that
any future conflicts can be resolved by inserting a copy instruction.
This technique was not present in the old coalescer, which caused some
correctness issues.
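For the initial phase, a minimal union-find sketch (stand-in value
indices; the real data structures differ):

    struct UnionFind {
        parent: Vec<usize>,
    }

    impl UnionFind {
        fn new(n: usize) -> Self {
            UnionFind { parent: (0..n).collect() }
        }
        fn find(&mut self, mut x: usize) -> usize {
            while self.parent[x] != x {
                self.parent[x] = self.parent[self.parent[x]]; // path halving
                x = self.parent[x];
            }
            x
        }
        /// Group two values (e.g. a branch argument with the EBB parameter
        /// it feeds) into the same candidate virtual register.
        fn union(&mut self, a: usize, b: usize) {
            let (ra, rb) = (self.find(a), self.find(b));
            if ra != rb {
                self.parent[ra] = rb;
            }
        }
    }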
This coalescing algorithm makes much better code, but it is generally a
bit slower than before. Some of the slowdown is made up by the following
passes being faster because they have to process less code.
Example 1: the Python interpreter, which contains a very large function
with a lot of variables. (Times are in seconds; the columns are total
and self time per pass.)
Before:
15.664 0.011 Register allocation
1.535 1.535 RA liveness analysis
2.872 1.911 RA coalescing CSSA
4.436 4.436 RA spilling
2.610 2.598 RA reloading
4.200 4.199 RA coloring
After:
9.795 0.013 Register allocation
1.372 1.372 RA liveness analysis
6.231 6.227 RA coalescing CSSA
0.712 0.712 RA spilling
0.598 0.598 RA reloading
0.869 0.869 RA coloring
Coalescing is more than twice as slow, but because of the vastly better
code quality, overall register allocation time is improved by 37%.
Example 2: the clang compiler.
Before:
57.148 0.035 Register allocation
9.630 9.630 RA liveness analysis
7.210 7.169 RA coalescing CSSA
9.972 9.972 RA spilling
11.602 11.572 RA reloading
18.698 18.672 RA coloring
After:
64.792 0.042 Register allocation
8.630 8.630 RA liveness analysis
22.937 22.928 RA coalescing CSSA
8.684 8.684 RA spilling
9.559 9.551 RA reloading
14.939 14.936 RA coloring
Here coalescing is 3x slower, but overall regalloc time only regresses
by 13%.
Most examples are less extreme than these two. They just get better code
at about the same compile time.
Use a dominator tree pre-order to speed up the dominance checks and only
check for interference with the nearest dominating predecessor in a
virtual register.
This makes the CSSA verification about 2x as fast.
The ir::layout module assigns sequence numbers to all EBBs and
instructions so relative positions can be computed in constant time.
This works a lot like BASIC line numbers where we initially use numbers
10, 20, 30, ... so we can insert new instructions in the middle of the
sequence without renumbering everything.
In some cases where the coalescer is misbehaving and inserting a lot of
copy instructions, we end up having to renumber a larger and larger
number of instructions to make space in the sequence. This causes the
following reload pass to be very slow, spending most of its time
renumbering instructions.
Fix this by putting an upper limit on the number of instructions we're
willing to renumber locally. When the limit is reached, switch to a full
function renumbering with a major stride of 10. This gives us new
elasticity in the sequence numbers.
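A minimal sketch of the scheme (the cap and helper names are made up;
the real layout code differs):

    const MAJOR_STRIDE: u32 = 10;
    const LOCAL_LIMIT: usize = 64; // hypothetical cap, not the actual value

    /// Renumber everything with a fresh stride, restoring elasticity.
    fn renumber_all(seqs: &mut [u32]) {
        for (i, s) in seqs.iter_mut().enumerate() {
            *s = (i as u32 + 1) * MAJOR_STRIDE;
        }
    }

    /// Insert a new entry after `pos`, keeping `seqs` strictly increasing.
    fn insert_after(seqs: &mut Vec<u32>, pos: usize) {
        let prev = seqs[pos];
        let next = seqs.get(pos + 1).copied().unwrap_or(prev + 2 * MAJOR_STRIDE);
        if next - prev >= 2 {
            // Room between the neighbors: take the midpoint, no renumbering.
            seqs.insert(pos + 1, prev + (next - prev) / 2);
            return;
        }
        // No room: bump the following numbers, but cap the local work and
        // fall back to a full renumbering once the limit is reached.
        seqs.insert(pos + 1, prev + 1);
        for i in pos + 1..seqs.len() - 1 {
            if seqs[i + 1] > seqs[i] {
                return; // strictly increasing again
            }
            if i - pos > LOCAL_LIMIT {
                renumber_all(seqs);
                return;
            }
            seqs[i + 1] = seqs[i] + 1;
        }
    }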
- Time to compile the Python interpreter in #229 drops from 4826 s -> 15.8 s.
- The godot benchmark in #226 drops from 1257 s -> 75 s.
- The AngryBots1 benchmark does not have the coalescer misbehavior.
Its compilation time changes 22.9 s -> 23.1 s.
It's worth noting that the sequence numbering is still technically
quadratic with this fix. The system is not designed to handle a large
number of instructions inserted in a single location. It expects a more
even distribution of new instructions.
We still need to fix the coalescer. It should not insert so many copies
in degenerate cases.
The fuzzer bugs #219 and #227 are both cases where the register
allocator coloring pass "runs out of registers". What's really happening
is that the constraint solver failed to find a solution, even when one
existed.
Suppose we have three solver variables:
v0(GPR, out, global)
v1(GPR, in)
v2(GPR, in, out)
And suppose registers %r0 and %r1 are available on both input and output
sides of the instruction, but only %r1 is available for global outputs.
A valid solution would be:
v0 -> %r1
v1 -> %r1
v2 -> %r0
However, the solver would pick registers for the three values in
numerical order because v1 and v2 have the same domain size (=2). This
would assign v1 -> %r0 and then fail to find a free register for v2.
Fix this by prioritizing in+out variables over single-sided variables
even when their domain sizes are equal. This means that v2 gets assigned a
register before v1, and it gets a chance to pick a register that is
still available on both in and out sides.
Also try to avoid depending on value numbers in the solver. These bugs
were hard to reproduce because a test case invariably would have
different value numbers, causing the solver to order its variables
differently and succeed. Throw in the previous solution and the original
register assignments as tie breakers, which are stable and do not depend
on value numbers.
This is still not a substitute for a proper solver search algorithm that
we will probably have to write eventually.
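The ordering fix amounts to something like this (field names and the
tie-breaker details are assumptions, not the solver's actual code):

    struct Var {
        domain_size: u32,          // registers still available to this var
        is_input: bool,
        is_output: bool,
        prev_solution: Option<u8>, // stable tie-breakers that don't depend
        orig_reg: Option<u8>,      //   on value numbers
    }

    /// Sort variables into assignment order: most constrained first, then
    /// in+out variables ahead of single-sided ones, then stable keys.
    fn assignment_order(vars: &mut [Var]) {
        vars.sort_by_key(|v| {
            (
                v.domain_size,
                !(v.is_input && v.is_output), // false (in+out) sorts first
                v.prev_solution.is_none(),
                v.orig_reg.is_none(),
            )
        });
    }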
Fixes #219. Fixes #227.
The Intel instruction "v1 = ushr v2, v2" will implicitly fix the output
register for v1 to %rcx because the output is tied to the first input
operand and the second input operand is fixed to %rcx.
Make sure we handle this transitive constraint when checking for
interference with the globally live registers.
Fixes #218.
When the coloring pass sees an instruction with a fixed input register
constraint that is already satisfied, make sure to tell the solver
about it anyway.
There are situations where the solver wants to convert a value to a
solver variable, and we can't allow that if the same value is also used
for a fixed register operand.
Fixes #221.
The spiller wasn't tracking register pressure correctly for dead EBB
parameters in visit_ebb_header(). Make sure we release the registers of
any dead EBB parameters.
Fixes #223.
* Switch RegClass to a bitmap implementation (see the sketch after this list).
* Add special RegClass to remove r13 from 'ld' recipe.
* Use MASK_LEN constant instead of magic number.
* Enforce that RegClass slicing is only valid on contiguous classes.
* Use Optional[int] for RegClass optional bitmask parameter.
* Add comment explaining use of Intel ISA's GPR_NORIP register class.
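As a rough illustration of the bitmap representation (simplified,
invented names; not the actual RegClass type):

    /// Each bit marks membership of one register, so class operations
    /// are single integer ops.
    #[derive(Clone, Copy)]
    struct RegClassMask {
        mask: u32,
    }

    impl RegClassMask {
        fn contains(self, reg: u8) -> bool {
            debug_assert!(reg < 32);
            self.mask & (1 << reg) != 0
        }
        /// E.g. building the special class that removes %r13 from the
        /// registers allowed by the 'ld' recipe.
        fn without(self, reg: u8) -> RegClassMask {
            RegClassMask { mask: self.mask & !(1 << reg) }
        }
        fn intersect(self, other: RegClassMask) -> RegClassMask {
            RegClassMask { mask: self.mask & other.mask }
        }
    }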