Previously, the assertion checked for `lane > 0` when it should have been `lane >= 0`; since lane is unsigned, this half of the assertion can be entirely removed.
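A tiny stand-alone illustration with hypothetical names (not the actual Cranelift assertion):

```rust
// Hypothetical stand-in for the lane bounds check. For an unsigned `lane`,
// `lane >= 0` is always true (and `lane > 0` wrongly rejects lane 0), so only
// the upper bound needs to be asserted.
fn check_lane(lane: u8, lane_count: u8) {
    // was: assert!(lane > 0 && lane < lane_count);  // rejects lane 0
    // the `>= 0` half is vacuous for an unsigned type, so it is dropped:
    assert!(lane < lane_count);
}

fn main() {
    check_lane(0, 4); // lane 0 is valid
}
```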
* Use `is_wasm_parameter` in translating wasm calls
With the changes added in #1329, it's now possible for multiple
parameters to be non-wasm parameters, so the previous `param_types`
method is no longer suitable for acquiring all wasm-related parameters;
instead, `FuncEnvironment` must be consulted. This removes the use of
`param_types()` from the wasm translation and instead adds an inline
helper that filters the parameters based on `is_wasm_parameter` (see
the sketch after this commit list).
* Apply feedback
* Run rustfmt
* Don't require `mut`
* Run rustfmt
* Correctly count the number of wasm parameters.
Following up on #1329, this further replaces `num_normal_params` with a
function that calls `is_wasm_parameter` to correctly count the number of
wasm parameters a function has (also covered by the sketch below).
* Move is_wasm_parameter's implementation into the trait.
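A minimal, self-contained sketch of the filtering and counting described above; `Param`, `ParamKind`, and both helpers are illustrative stand-ins, not the actual cranelift-wasm API:

```rust
// Illustrative stand-ins: a parameter either originates from the wasm
// signature or is an extra host-level parameter (e.g. a vmctx pointer).
#[derive(Clone, Copy, PartialEq)]
enum ParamKind { Wasm, Host }

struct Param { kind: ParamKind, ty: &'static str }

// Keep only the wasm-level parameter types, consulting a predicate instead of
// assuming a fixed layout the way `param_types()` did.
fn wasm_param_types(params: &[Param], is_wasm_parameter: impl Fn(&Param) -> bool) -> Vec<&'static str> {
    params.iter().filter(|&p| is_wasm_parameter(p)).map(|p| p.ty).collect()
}

// The same predicate drives the count that replaces `num_normal_params`.
fn num_wasm_parameters(params: &[Param], is_wasm_parameter: impl Fn(&Param) -> bool) -> usize {
    params.iter().filter(|&p| is_wasm_parameter(p)).count()
}

fn main() {
    let params = [
        Param { kind: ParamKind::Host, ty: "i64" }, // non-wasm: vmctx pointer
        Param { kind: ParamKind::Wasm, ty: "i32" },
        Param { kind: ParamKind::Wasm, ty: "f64" },
    ];
    let is_wasm = |p: &Param| p.kind == ParamKind::Wasm;
    assert_eq!(wasm_param_types(&params, &is_wasm), vec!["i32", "f64"]);
    assert_eq!(num_wasm_parameters(&params, &is_wasm), 2);
}
```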
This is a breaking API change: the following settings have been renamed:
- jump_tables_enabled -> enable_jump_tables
- colocated_libcalls -> use_colocated_libcalls
- probestack_enabled -> enable_probestack
- allones_funcaddrs -> emit_all_ones_funcaddrs
The `TargetIsa` trait already requires that the implementor be `Sync` so
it can be shared across threads, and this commit adds an additional
`Send` bound to ensure that the type can also be sent by value across
threads.
This is part of an effort to make various data structures in `wasmtime`
sendable/shareable across threads.
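A minimal sketch of the kind of bound being added, using an illustrative `IsaLike` trait rather than the real `TargetIsa`:

```rust
use std::sync::Arc;
use std::thread;

// With `Send + Sync` as supertraits, any implementor (and any `dyn` trait
// object) can both be shared between threads and moved across them.
trait IsaLike: Send + Sync {
    fn name(&self) -> &'static str;
}

struct FakeIsa;
impl IsaLike for FakeIsa {
    fn name(&self) -> &'static str { "fake-x86_64" }
}

fn main() {
    let isa: Arc<dyn IsaLike> = Arc::new(FakeIsa);
    // Sending the Arc by value into another thread requires `Send`;
    // sharing the underlying value requires `Sync`.
    let handle = thread::spawn(move || isa.name());
    assert_eq!(handle.join().unwrap(), "fake-x86_64");
}
```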
The Intel manual uses `CMPNLT` and `CMPNLE` to denote not-less-than and not-less-than-or-equal. These were previously translated to `FloatCC::GreaterThan` and `FloatCC::GreaterThanOrEqual`, but should correctly be translated to `FloatCC::UnorderedOrGreaterThanOrEqual` and `FloatCC::UnorderedOrGreaterThan`, since an unordered (NaN) comparison also satisfies "not less than". This change adds the necessary legalizations to make use of these new encodings.
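For reference, the distinction only matters in the unordered case; a small stand-alone check of the semantics:

```rust
// When either operand is NaN the comparison is unordered: `<` and `<=` are
// false, so "not less than" / "not less than or equal" are true, while the
// ordered `>=` / `>` conditions are not. Hence the UnorderedOr* codes.
fn main() {
    let nan = f32::NAN;
    assert!(!(nan < 1.0));  // CMPNLT ("not less than") holds for NaN inputs...
    assert!(!(nan >= 1.0)); // ...but an ordered greater-than-or-equal does not.
}
```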
* Bitcast vectors immediately before a return
* Bitcast vectors immediately before a block end
* Use helper function for bitcasting arguments
* Add FuncTranslationState::peekn_mut, which allows mutating peeked values
* Bitcast values in place, avoiding an allocation (see the sketch after this list)
This also retrieves the correct EBB header types for bitcasting on Operator::End.
* Bitcast values of a function with no explicit Wasm return instruction
* Add Signature::return_types method
This eliminates some duplicate code and avoids extra `use`s of `Vec`.
* Add Signature::param_types method; only collect normal parameters in both this and Signature::return_types
* Move normal_args to Signature::num_normal_params method
This matches the organization of the other Signature::num_*_params methods.
* Bitcast values of Operator::Call and Operator::CallIndirect
* Add DataFlowGraph::ebb_param_types
* Bitcast values of Operator::Br and Operator::BrIf
* Bitcast values of Operator::BrTable
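A minimal sketch of the `peekn_mut` idea referenced above; `ValueStack` is an illustrative stand-in, not the actual `FuncTranslationState`:

```rust
// Expose the top `n` values of an operand stack as a mutable slice so they
// can be rewritten in place (e.g. replaced with the results of freshly
// inserted bitcasts) instead of popping, collecting into a new Vec, and
// pushing back.
struct ValueStack<T> {
    stack: Vec<T>,
}

impl<T> ValueStack<T> {
    fn peekn_mut(&mut self, n: usize) -> &mut [T] {
        let len = self.stack.len();
        &mut self.stack[len - n..]
    }
}

fn main() {
    let mut state = ValueStack { stack: vec![1, 2, 3, 4] };
    // Rewrite the two topmost values in place, standing in for swapping a
    // vector value for its bitcast form just before a branch or return.
    for v in state.peekn_mut(2) {
        *v *= 10;
    }
    assert_eq!(state.stack, vec![1, 2, 30, 40]);
}
```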
This patch adds a third mode for templates: REX inference is requestable
at template instantiation time. This reduces the number of recipes
by removing rex()/nonrex() redundancy for many instructions.
error[E0425]: cannot find value `ones` in this scope
   --> cranelift-codegen/meta/src/isa/x86/legalize.rs:564:33
    |
564 |                 def!(c = vconst(ones)),
    |                                 ^^^^ not found in this scope
Previously `fsub` was used, but this fails when negating -0.0 and +0.0 in the SIMD spec tests; at the cost of a few more instructions, this change uses shifts to create a constant and `bxor` to flip the most significant (sign) bit of each lane.
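A small plain-Rust illustration (not CLIF) of why subtraction cannot negate zeros correctly while flipping the sign bit can:

```rust
fn main() {
    let x = 0.0_f32;
    // Negating by subtraction: 0.0 - 0.0 rounds to +0.0, so the sign is lost.
    let by_sub = 0.0_f32 - x;
    assert_eq!(by_sub.to_bits(), 0x0000_0000); // still +0.0
    // Negating by flipping the most significant (sign) bit, as `bxor` with a
    // per-lane constant does, produces -0.0 as required.
    let by_xor = f32::from_bits(x.to_bits() ^ 0x8000_0000);
    assert_eq!(by_xor.to_bits(), 0x8000_0000); // -0.0
}
```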
Previously, the use of `enc_x86_64` emitted two 64-bit mode encodings for `scalar_to_vector.i64`, neither of which contained the REX.W bit telling `MOVD/MOVQ` to move 64 bits of data instead of 32 bits. Now, `scalar_to_vector.i64` will always use a sole 64-bit mode REX.W encoding and `scalar_to_vector` with other widths will have three encodings: a 32-bit mode move, a 64-bit mode move with no REX, and a 64-bit mode move with REX (but not REX.W).
This removes `report!`, `fatal!`, and `nonfatal!` from the verifier code and replaces them with methods on `VerifierErrors`. In order to maintain similar ease-of-use, `VerifierError` is expanded with several `From` implementations that convert a tuple to a verifier error.
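A minimal sketch of the ergonomics being kept; the field names and tuple shapes below are illustrative, not the actual cranelift-codegen verifier types:

```rust
#[derive(Debug)]
struct VerifierError {
    location: String,
    message: String,
}

// A `From` impl lets call sites pass a terse tuple, keeping roughly the same
// ease of use as the old `report!`/`fatal!`/`nonfatal!` macros.
impl From<(&str, &str)> for VerifierError {
    fn from((location, message): (&str, &str)) -> Self {
        VerifierError { location: location.to_string(), message: message.to_string() }
    }
}

#[derive(Default)]
struct VerifierErrors(Vec<VerifierError>);

impl VerifierErrors {
    // Record a non-fatal error and let verification continue.
    fn report(&mut self, err: impl Into<VerifierError>) {
        self.0.push(err.into());
    }
}

fn main() {
    let mut errors = VerifierErrors::default();
    errors.report(("inst3", "arg 1 has type i8, expected i32"));
    assert_eq!(errors.0.len(), 1);
}
```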
* Try and assign directly to return registers; backtrack to use struct-return param
Rather than trying to count the number of return registers that would be used
by a given set of return values, optimistically assign the return values to
registers. If we later find that we can't fit them all in registers, then
backtrack and introduce the use of a struct-return pointer parameter (see the
sketch after this list).
* Rename `rets2` and wrap it in an option so we avoid the clone for non-multi-value
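A minimal sketch of the optimistic-assignment-then-backtrack shape described above; the register names and types are placeholders, not the real legalizer code:

```rust
// Either every return value fits in a register, or we back out and use a
// struct-return pointer parameter instead.
#[derive(Debug, PartialEq)]
enum ReturnAbi {
    Registers(Vec<&'static str>),
    StructReturn,
}

fn assign_returns(num_rets: usize, regs: &[&'static str]) -> ReturnAbi {
    let mut assigned = Vec::new();
    for i in 0..num_rets {
        match regs.get(i) {
            Some(reg) => assigned.push(*reg),
            // Ran out of return registers: discard the partial assignment and
            // fall back to returning through memory.
            None => return ReturnAbi::StructReturn,
        }
    }
    ReturnAbi::Registers(assigned)
}

fn main() {
    let regs = ["rax", "rdx"];
    assert_eq!(assign_returns(2, &regs), ReturnAbi::Registers(vec!["rax", "rdx"]));
    assert_eq!(assign_returns(3, &regs), ReturnAbi::StructReturn);
}
```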