Also make sure we generate type checks for the controlling type variable
in legalization patterns. This is not needed for encodings since the
encoding tables are already keyed on the controlling type variable.
These sign bit manipulations need to use a -0.0 floating point constant
which we didn't have a way of materializing previously.
Add an ieee32.bits(0x...) syntax to the Python AST nodes that creates an
f32 immediate value with the exact requested bitwise representation.
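As a rough illustration (not the exact code from this change), a sign-bit
pattern using the new syntax might look like the sketch below; the module
paths, the `expand` group, and the instruction names are assumptions based
on the meta-language layout.

```python
from cdsl.ast import Var
from cdsl.xform import Rtl
from base.immediates import ieee32
from base.instructions import fabs, band_not, f32const
from base.legalize import expand

x = Var('x')
a = Var('a')
b = Var('b')

expand.legalize(
        a << fabs.f32(x),
        Rtl(
            # -0.0 has only the sign bit set: f32 bit pattern 0x8000_0000.
            b << f32const(ieee32.bits(0x80000000)),
            # Clearing the sign bit of x computes its absolute value.
            a << band_not(x, b)
        ))
```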
This contains encoding details for a stack reference: The base register
and offset to use in the specific instruction encoding.
Generate StackRef objects named in_stk0, etc., for the binemit recipe
code. All binemit recipes need to compute base pointer offsets for stack
references, so have the automatically generated code do it.
Use the simplest expansion which materializes the bits of the floating
point constant as an integer and then bit-casts to the floating point
type. In the future, we may want to use constant pools instead. Either
way, we need custom legalization.
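A hedged sketch of that expansion, with hypothetical variable names and
glossing over the conversion of the float immediate into an equivalent
integer immediate:

```python
# Reusing the imports and Var setup from the sketch above; N, a, x are
# pattern variables.
N = Var('N')
a = Var('a')
x = Var('x')

# Hypothetical pattern: put the f64 bit pattern in an integer register,
# then reinterpret those bits as a float.
expand.legalize(
        a << f64const(N),
        Rtl(
            x << iconst(N),
            a << bitcast(x)
        ))
```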
Also add a legalize_monomorphic() function to the Python TargetISA class
which permits the configuration of a default legalization action for
monomorphic instructions, just like legalize_type() does for polymorphic
instructions.
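For illustration only (the receiver object and group name are
assumptions), configuring that default might look like:

```python
# Hypothetical: use the `expand` XFormGroup for instructions without type
# variables (e.g. trap or jump) on this target.
RV32.legalize_monomorphic(expand)
```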
The custom_legalize() method on XFormGroup can be used to call a
custom function to legalize specific opcodes.
This will be used shortly to expand global_addr which has an expansion
that depends on the details of the global variable being referenced.
See #144 for discussion.
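A minimal sketch of the hook; the Rust function name here is
hypothetical:

```python
# Expand global_addr through a hand-written function rather than a
# pattern, since its expansion depends on the GlobalVarData.
expand.custom_legalize(global_addr, 'expand_global_addr')
```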
- Add a new GlobalVar entity type both in Python and Rust.
- Define a UnaryGlobalVar instruction format containing a GlobalVar
reference.
- Add a globalvar.rs module defining the GlobalVarData with support for
'vmctx' and 'deref' global variable kinds.
Langref:
Add a section about global variables and the global_addr
instruction.
Parser:
Add support for the UnaryGlobalVar instruction format as well as
global variable declarations in the preamble.
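On the meta-language side, the pieces listed above might be sketched
roughly as follows; the constructor names and signatures are assumptions,
not the exact code from this change:

```python
# Hypothetical: an entity reference operand kind, an instruction format
# that carries it, and the global_addr instruction using that format.
global_var = EntityRefKind(
        'global_var', 'A reference to a global variable.')
UnaryGlobalVar = InstructionFormat(global_var)

GV = Operand('GV', global_var)
addr = Operand('addr', iAddr, 'The computed address')
global_addr = Instruction(
        'global_addr', r"""Compute the address of a global variable.""",
        ins=GV, outs=addr)
```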
* Add Atom and Literal base classes to the CDSL AST. Change substitution() and copy() on Def/Apply/Rtl to support substituting Var -> Union[Var, Literal]. Check in the Apply() constructor that the kinds of passed-in Literals respect the instruction signature.
* Change verify_semantics to check all possible instantiations of enumerated immediates (needed to describe icmp). Add all bitvector comparison primitives and bvite; change set_semantics to optionally accept XForms; add semantics for icmp; fix typing errors in semantics/{smtlib, elaborate, __init__}.py after the change of VarMap -> VarAtomMap.
* Forgot macros.py
* Fix a nit that was present but obscured when testing with mypy enabled.
* Typo
We already do this for the encoding tables, but the instruction
predicates computed by Apply.inst_predicate() did not include them.
Make sure we don't duplicate the type check in the Encoding constructor
when passed an Apply AST node.
Replace the isa::Legalize enumeration with a function pointer. This
allows an ISA to define its own specific legalization actions instead of
relying on the default two.
Generate a LEGALIZE_ACTIONS table for each ISA which contains
legalization function pointers indexed by the legalization codes that
are already in the encoding tables. Include this table in
isa/*/enc_tables.rs.
Give the `Encodings` iterator a reference to the action table and change
its `legalize()` method to return a function pointer instead of an
ISA-specific code.
The Result<> returned from TargetIsa::encode() no longer implements
Debug, so eliminate uses of unwrap and expect on that type.
Instructions with multiple type variables can now use `any` to indicate
encodings that don't care about the value of a secondary type variable:
ishl.i32.any instead of ishl.i32.i32
This is only allowed for secondary type variables (which are converted
to instruction predicates). The controlling type variable must still be
fully specified because it is used to key the encoding tables.
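For example, an Intel shift encoding could be declared roughly like this
(the CPU mode, recipe, and opcode details below are placeholders):

```python
# Key the encoding on the controlling type i32 and accept any type for
# the secondary (shift amount) type variable.
I32.enc(ishl.i32.any, *r.rc(0xd3, rrr=4))
```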
Predicate numbers are available in the maps
isa.settings.predicate_number and isa.instp_number instead.
Like the name field, predicate numbers don't interact well with
unique_pred().
The name of a predicate was only ever used for named settings that are
computed as a boolean expression of other settings.
- Record the names of these settings in named_predicates instead.
- Remove the name field from all predicates.
Named predicates do not interact well with the interning of predicates
through isa.unique_pred().
The encoding tables are keyed by the controlling type variable only. We
need to distinguish different encodings for instructions with multiple
type variables.
Add a TypePredicate instruction predicate which can check the type of an
instruction value operand. Combine type checks into the instruction
predicate for instructions with more than one type variable.
Add Intel encodings for fcvt_from_sint.f32.i64 which can now be
distinguished from fcvt_from_sint.f32.i32.
It turns out that most encoding predicates are expressed as recipe
predicates. This means that the encoding tables can be more compact
since we can check the recipe predicate separately from individual
instruction predicates, and the recipe number is already present in the
table.
- Don't combine recipe and encoding-specific predicates when creating an
Encoding. Keep them separate.
- Generate a table of recipe predicates with function pointers. Many of
these are null.
- Check any recipe predicate before accepting a recipe+bits pair.
This has the effect of making almost all instruction predicates
CODE_ALWAYS.
When an instruction doesn't have a valid encoding for the target ISA, it
needs to be legalized. Different legalization strategies can be
expressed as separate XFormGroup objects.
Make the choice of XFormGroup configurable per CPU mode, rather than
depending on a hard-coded default.
Add a CPUMode.legalize_type() method which assigns an XFormGroup to
controlling type variables and lets you set a default.
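A hedged configuration sketch; the mode and group names, and the keyword
form, are assumptions:

```python
# Hypothetical CPU-mode setup: expand the types the target supports
# natively and fall back to narrowing for everything else.
RV32.legalize_type(default=narrow, i32=expand, f32=expand, f64=expand)
```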
Add a `legalize` field to Level1Entry so the first-level hash table
lookup gives us the configured default legalization action for the
instruction's controlling type variable.
Fixes #11.
Presets are groups of settings and values applied at once. This is used
as a shorthand in test files, so for example "isa intel nehalem" enables
all of the CPUID bits that the Nehalem micro-architecture provides.
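A hedged sketch of defining such a preset (the class and flag names are
assumptions):

```python
# Hypothetical preset grouping several CPUID-derived settings; a test
# file would enable them all with "isa intel nehalem".
nehalem = Preset(has_sse2, has_sse3, has_ssse3, has_sse41)
```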
* Reduce code duplication in TypeConstraint subclasses; add ConstrainWiderOrEqual to ti and to ireduce, {s,u}extend, and f{promote,demote}; fix a bug in emitting constraint edges in TypeEnv.dot(); modify runtime constraint checks to reject a match when they encounter overflow.
* Rename Constrain types to something shorter; move lane_bits/lane_counts into subclasses of ValueType; add a wider_or_eq function in Rust and Python.
Generate code to:
- Unwrap the instruction and generate an error if the instruction format
doesn't match the recipe.
- Look up the value locations of register and stack arguments.
The recipe_* functions in the ISA binemit modules now take these
unwrapped items as arguments.
Also add an optional `emit` argument to the EncRecipe constructor which
makes it possible to provide inline Rust code snippets for code
emission. This requires a lot less boilerplate than recipe_* functions.
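A hedged example of such an inline snippet (the recipe fields and the
put_r helper are assumptions):

```python
# Hypothetical R-type recipe whose emission is a one-line Rust snippet
# instead of a hand-written recipe_* function.
R = EncRecipe(
        'R', Binary, size=4, ins=(GPR, GPR), outs=GPR,
        emit='put_r(bits, in_reg0, in_reg1, out_reg0, sink);')
```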
As per the comment in TypeEnv.normalize_tv about cancellation, whenever we create a TypeVar we must assert that there is no under/overflow. To make sure this always happens, move the safety checks from the other helper methods into TypeVar.derived().
* Add more rigorous type inference and encapsulate the type inference code in its own file (ti.py).
Add constraints accumulation during type inference, to represent constraints that cannot be expressed
using bijective derivation functions between typevars.
Add testing for new type inference code.
* Additional annotations to appease mypy
* Convert TypeSet fields to sets; Add BitSet<T> type to rust; Encode ValueTypeSets using BitSet; (still need mypy cleanup)
* nits
* cleanup nits
* forgot mypy type annotations
* rustfmt fixes
* Round 1 comments: filter b2, b4; doc comments in Python; move bitset into its own top-level module; use Into<u32>.
* fixes
* Revert comment to appease rustfmt
Add a Stack() class for specifying operand constraints for values on the
stack.
Add encoding recipes for RISC-V spill and fill instructions. Don't
implement the encoding recipe functions yet since we don't have the
stack slot layout yet.
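Roughly, the new constraint lets spill and fill recipes be declared like
this (the recipe names and formats below are assumptions):

```python
# Hypothetical: spill takes a register and produces a stack value; fill
# is the inverse. Stack(GPR) constrains a value to a stack slot in the
# GPR bank's spill area.
SP = EncRecipe('SP', Unary, size=4, ins=GPR, outs=Stack(GPR))
FI = EncRecipe('FI', Unary, size=4, ins=Stack(GPR), outs=GPR)
```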
* Skeleton simple_gvn pass.
* Basic testing infrastructure for simple-gvn.
* Add can_load and can_store flags to instructions.
* Move the replace_values function into the DataFlowGraph.
* Make InstructionData derive from Hash, PartialEq, and Eq.
* Make EntityList's hash and eq functions panic.
* Change Ieee32 and Ieee64 to store u32 and u64, respectively.
Avoid spreading u32 as a bitmask of register classes throughout the
code.
Enforce the limit of 32 register classes total. This could easily be
raised if needed.
The MAX_TOPRCS constant is the highest possible number of top-level
register classes in an ISA. The RegClassData.toprc field is always
smaller than this limit.
A top-level register class is one that is not a sub-class of any other
register class. It is possible to have multiple top-level register
classes in the same register bank. For example, ARM's FPR bank has both
D and Q top-level register classes.
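As a hedged illustration of that ARM example (the bank and class
constructor parameters are assumptions):

```python
# One register bank with two top-level classes; neither D nor Q is a
# sub-class of the other, so both are top-level.
FloatRegs = RegBank('FloatRegs', ISA, 'ARM floating point registers',
                    units=64, prefix='s')
D = RegClass(FloatRegs, count=32, width=2)
Q = RegClass(FloatRegs, count=16, width=4)
```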
Number register classes such that all top-level register classes appear
as a contiguous sequence starting from 0. This will be used by the
register allocator when counting used registers per top-level register
class.