The meta language patterns sometimes need to refer to specific values of
enumerated immediate operands. The dot syntax provides a namespaced,
typed way of doing that: icmp(intcc.ult, a, x).
Add an ast.Enumerator class for representing this kind of AST leaf node.
Add value definitions for the intcc and floatcc immediate operand kinds.
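
As a rough illustration only (in Rust rather than the Python meta
language, with made-up field names), the leaf just pairs an operand kind
with one of its named values, which is what keeps intcc.ult both
namespaced and typed:

    // Illustrative stand-in; the real ast.Enumerator is a Python class.
    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    struct Enumerator {
        kind: &'static str,  // immediate operand kind, e.g. "intcc"
        value: &'static str, // one of its enumerated values, e.g. "ult"
    }

    fn main() {
        // The leaf written as intcc.ult in a pattern like icmp(intcc.ult, a, x).
        let ult = Enumerator { kind: "intcc", value: "ult" };
        assert_eq!(format!("{}.{}", ult.kind, ult.value), "intcc.ult");
    }
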
Run the verify_context() function after invoking the legalize() and
regalloc() context functions. This will help catch bad code produced by
these passes.
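
A minimal sketch of the intended ordering, with stand-in types and empty
pass bodies rather than the real context API:

    // Only the ordering mirrors the change; everything else is a stand-in.
    struct Context;

    #[derive(Debug)]
    struct VerifierError(String);

    impl Context {
        fn legalize(&mut self) { /* rewrite instructions the ISA can't encode */ }
        fn regalloc(&mut self) { /* assign registers */ }
        fn verify_context(&self) -> Result<(), VerifierError> { Ok(()) }
    }

    fn compile(ctx: &mut Context) -> Result<(), VerifierError> {
        ctx.legalize();
        ctx.verify_context()?; // catch bad code produced by the legalizer
        ctx.regalloc();
        ctx.verify_context()?; // catch bad code produced by register allocation
        Ok(())
    }

    fn main() {
        compile(&mut Context).expect("verifier rejected the function");
    }
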
These two instructions make sense for vector types by simply performing
the same operation on each lane, like most other vector operations.
Problem found by @angusholder's verifier.
The carry and borrow values are boolean, so we have to convert them to
an integer type with bint(c) before we can add them to the result.
Also tweak the default legalizer action for unsupported types: Only
attempt a narrowing pattern for lane types > 32 bits.
This was found by @angusholder's new type checks in the verifier.
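
Taking a carry-in add as the example, the expanded semantics for a single
32-bit lane can be modelled directly (this is a model of the behaviour
only, not the legalizer code):

    // The boolean carry is converted to an integer first -- that is what
    // bint(c) does -- and then added like any other operand.
    fn iadd_cin_model(x: u32, y: u32, c: bool) -> u32 {
        let carry: u32 = if c { 1 } else { 0 }; // bint(c)
        x.wrapping_add(y).wrapping_add(carry)   // iadd(iadd(x, y), bint(c))
    }

    fn main() {
        assert_eq!(iadd_cin_model(10, 20, true), 31);
        assert_eq!(iadd_cin_model(u32::MAX, 1, false), 0); // wraps around
    }
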
* Verify that a recomputed dominator tree is identical to the existing one.
* The verifier now typechecks instruction results and arguments.
* Added `inst_{fixed,variable}_args` accessor functions.
* Improved error messages in verifier.
* Type check return statements against the function signature.
If an instruction doesn't have an associated encoding, use the standard
TargetIsa hook to encode it.
The test still fails if an instruction can't be encoded. There is no
legalization step.
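
A sketch of the fallback, with invented stand-ins for the real Encoding
and TargetIsa types: prefer the encoding already assigned to the
instruction, otherwise ask the ISA, and report an error if neither works.

    #[derive(Clone, Copy, Debug)]
    struct Encoding(u16); // stand-in for the real encoding representation

    trait TargetIsa {
        // The hook: try to pick an encoding for the instruction.
        fn encode(&self, inst: u32) -> Option<Encoding>;
    }

    fn encoding_for_test(assigned: Option<Encoding>,
                         inst: u32,
                         isa: &dyn TargetIsa) -> Result<Encoding, String> {
        match assigned.or_else(|| isa.encode(inst)) {
            Some(enc) => Ok(enc),
            // No legalization step: an unencodable instruction fails the test.
            None => Err(format!("no encoding for instruction {}", inst)),
        }
    }

    struct NullIsa;
    impl TargetIsa for NullIsa {
        fn encode(&self, _inst: u32) -> Option<Encoding> { None }
    }

    fn main() {
        assert!(encoding_for_test(None, 7, &NullIsa).is_err());
    }
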
Use the meta language encoding recipes to generate an emit_inst()
function for each ISA. The generated code calls into recipe_*() functions
that must be implemented by hand.
Implement recipe_*() functions for the RISC-V recipes.
Add the TargetIsa::emit_inst() entry point which emits an instruction to
a CodeSink trait object.
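
A simplified sketch of the shape this takes, with an invented CodeSink
trait and recipe signature; only the standard RISC-V R-type field layout
(funct7 | rs2 | rs1 | funct3 | rd | opcode) is taken as given:

    // Stand-in for the real CodeSink trait object.
    trait CodeSink {
        fn put4(&mut self, word: u32); // emit one 32-bit code unit
    }

    // Hand-written recipe function for an R-type instruction.
    fn recipe_r(opcode: u32, funct3: u32, funct7: u32,
                rd: u32, rs1: u32, rs2: u32, sink: &mut dyn CodeSink) {
        let word = (funct7 << 25) | (rs2 << 20) | (rs1 << 15)
            | (funct3 << 12) | (rd << 7) | opcode;
        sink.put4(word);
    }

    // Shape of the generated emit_inst(): dispatch on the encoding recipe
    // and call the matching hand-written recipe_*() function.
    fn emit_inst(recipe: usize, sink: &mut dyn CodeSink) {
        match recipe {
            0 => recipe_r(0x33, 0, 0, 1, 2, 3, sink), // add x1, x2, x3
            _ => panic!("no encoding recipe"),
        }
    }

    struct VecSink(Vec<u32>);
    impl CodeSink for VecSink {
        fn put4(&mut self, word: u32) { self.0.push(word); }
    }

    fn main() {
        let mut sink = VecSink(Vec::new());
        emit_inst(0, &mut sink);
        assert_eq!(sink.0, vec![0x0031_00b3]);
    }
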
This means that whenever we need to split a value, it is either already
defined by a concatenation instruction in a previously processed EBB, or
it's an EBB argument.
The EBB argument splitting may generate concat-split dependencies when
it repairs branch arguments in EBBs that have not yet been fully
legalized. Add a branch argument simplification step that can resolve
these dependency chains.
This means that all split and concatenation instructions will be dead
after legalization for types that have no legal instructions using them.
When the legalizer splits a value into halves, it would previously stop
if the value was an EBB argument. With this change, we also split EBB
arguments and iteratively split arguments on branches to the EBB.
The iterative splitting stops when we hit the entry block arguments or
an instruction that isn't one of the concatenation instructions.
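
A rough model of this stopping rule, over an invented value
representation rather than the real IR; it only counts where explicit
split instructions would have to be inserted.

    enum ValueDef {
        // Defined by a concatenation instruction: reuse its parts, no split.
        Concat(Box<ValueDef>, Box<ValueDef>),
        // EBB argument: split the value passed on each branch to the EBB.
        EbbArg(Vec<ValueDef>),
        // Entry block argument or any other instruction: stop and split here.
        Opaque,
    }

    fn splits_needed(def: &ValueDef) -> usize {
        match *def {
            ValueDef::Concat(..) => 0,
            ValueDef::EbbArg(ref branch_args) => {
                branch_args.iter().map(splits_needed).sum()
            }
            ValueDef::Opaque => 1,
        }
    }

    fn main() {
        // One predecessor passes a concatenation, the other an entry argument:
        // only the entry argument needs an explicit split instruction.
        let arg = ValueDef::EbbArg(vec![
            ValueDef::Concat(Box::new(ValueDef::Opaque), Box::new(ValueDef::Opaque)),
            ValueDef::Opaque,
        ]);
        assert_eq!(splits_needed(&arg), 1);
    }
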
Legalizing some instructions may require modifications to the control
flow graph, and some operations need to use the CFG analysis.
The CFG reference is threaded through all the legalization functions to
reach the generated expansion functions as well as the legalizer::split
module where it will be used first.
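
The plumbing looks roughly like this, with invented names and empty
bodies standing in for the real functions:

    struct Function;
    struct ControlFlowGraph;
    struct Inst;

    // The CFG is borrowed mutably at the top and handed down to every
    // expansion, since some expansions edit branches and must keep it current.
    fn legalize_function(func: &mut Function, cfg: &mut ControlFlowGraph) {
        let inst = Inst; // stand-in for each instruction that needs expanding
        expand_inst(inst, func, cfg);
    }

    fn expand_inst(_inst: Inst, _func: &mut Function, _cfg: &mut ControlFlowGraph) {
        // Generated expansion body; may rewrite branches and update the CFG.
    }

    fn main() {
        legalize_function(&mut Function, &mut ControlFlowGraph);
    }
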
The legalizer often splits values into parts with the vsplit and
isplit_lohi instructions. Avoid doing that for values that are already
defined by the corresponding concatenation instructions.
This reduces the number of instructions created during legalization, and
it simplifies later optimizations. A number of dead concatenation
instructions are left behind. They can be trivially cleaned up by a dead
code elimination pass.
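
A sketch of the check with invented names; only the reuse-instead-of-split
decision mirrors the change described above.

    #[derive(Clone, Copy, PartialEq)]
    enum Opcode { IconcatLohi, Vconcat, Other }

    // Stand-in for the data of the instruction defining the value being split.
    struct InstData {
        opcode: Opcode,
        args: [u32; 2], // the two halves that were concatenated
    }

    // If the value is already produced by the matching concatenation, hand
    // back its operands; otherwise the caller inserts a real isplit_lohi or
    // vsplit instruction.
    fn reuse_concat(def: Option<&InstData>, want: Opcode) -> Option<(u32, u32)> {
        match def {
            Some(inst) if inst.opcode == want => Some((inst.args[0], inst.args[1])),
            _ => None,
        }
    }

    fn main() {
        let concat = InstData { opcode: Opcode::IconcatLohi, args: [10, 11] };
        assert_eq!(reuse_concat(Some(&concat), Opcode::IconcatLohi), Some((10, 11)));
        assert_eq!(reuse_concat(None, Opcode::Vconcat), None);
    }
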