When encoding constants as immediates into an RSE Imm12 instruction, we need to take special care to check that the value we are trying to encode does not overflow its type when viewed as a signed value (e.g. `iconst.i8 200`).
We cannot both encode an immediate and sign-extend it, so in that case we need to lower the constant into a separate register and feed the sign extend into the instruction.
For more details see the [cg_clif bug report](https://github.com/bjorn3/rustc_codegen_cranelift/issues/1184#issuecomment-873214796).
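As a hedged plain-Rust sketch of the check (the function name and shape are illustrative, not Cranelift's actual helper):

```rust
/// A constant may be encoded directly only if its raw bits already equal
/// its value after sign-extension from the narrow type. For `iconst.i8 200`
/// the raw bits are 0xC8, which reads back as -56 when viewed as a signed
/// i8, so encoding 200 as an Imm12 would change the observed value.
fn const_fits_rse_imm12(raw: u64, ty_bits: u32) -> bool {
    let shift = 64 - ty_bits;
    let sign_extended = ((raw as i64) << shift) >> shift;
    // Imm12 itself holds an unsigned 12-bit value (optionally shifted).
    sign_extended >= 0 && (sign_extended as u64) == raw && raw < (1u64 << 12)
}
```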
* cranelift: Initial fuzzer implementation
* cranelift: Generate multiple test cases in fuzzer
* cranelift: Separate function generator in fuzzer
* cranelift: Insert random instructions in fuzzer
* cranelift: Rename gen_testcase
* cranelift: Implement div for unsigned values in interpreter
* cranelift: Run all test cases in fuzzer
* cranelift: Comment options in function_runner
* cranelift: Improve fuzzgen README.md
* cranelift: Fuzzgen remove unused variable
* cranelift: Fuzzer code style fixes (thanks, @bjorn3!)
* cranelift: Fix nits in CLIF fuzzer (thanks, @cfallin!)
* cranelift: Implement Arbitrary for TestCase (see the sketch after this list)
* cranelift: Remove gen_testcase
* cranelift: Move fuzzers to wasmtime fuzz directory
* cranelift: CLIF-Fuzzer ignore tests that produce traps
* cranelift: CLIF-Fuzzer create new fuzz target to validate generated testcases
* cranelift: Store clif-fuzzer config in a separate struct
* cranelift: Generate variables upfront per function
* cranelift: Prevent publishing of fuzzgen crate
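To illustrate the `Arbitrary for TestCase` item above, here is a rough sketch of the plumbing (the fields are stand-ins; the real fuzzgen `TestCase` holds a generated CLIF function plus the inputs to run it with):

```rust
use arbitrary::{Arbitrary, Result, Unstructured};

// Illustrative stand-in: derive both the shape of a test case and the
// inputs to run it with straight from the raw fuzz bytes.
#[derive(Debug)]
struct TestCase {
    num_params: usize,
    inputs: Vec<Vec<i64>>, // one Vec per invocation of the generated function
}

impl<'a> Arbitrary<'a> for TestCase {
    fn arbitrary(u: &mut Unstructured<'a>) -> Result<Self> {
        let num_params = u.int_in_range(1..=4)?;
        let num_runs: usize = u.int_in_range(1..=8)?;
        let mut inputs = Vec::with_capacity(num_runs);
        for _ in 0..num_runs {
            let mut args = Vec::with_capacity(num_params);
            for _ in 0..num_params {
                args.push(u.arbitrary::<i64>()?);
            }
            inputs.push(args);
        }
        Ok(TestCase { num_params, inputs })
    }
}
```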
Implement the `TlsValue` opcode in the aarch64 backend for ELF_GD.
This is a little unusual, as other compilers default to TLS Descriptors on aarch64.
However, we currently only recognize `elf_gd`, so let's start with that as the TLS implementation.
There has been occasional confusion with the representation that we use
for bool-typed values in registers, at least when these are wider than
one bit. Does a `b8` store `true` as 1, or as all-ones (`0xff`)?
We've settled on the latter because of some use-cases where the wide
bool becomes a mask -- see #2058 for more on this.
This is fine, and transparent, to most operations within CLIF, because
the bool-typed value still has only two semantically-visible states,
namely `true` and `false`.
However, we have to be careful with bool-to-int conversions. `bint` on
aarch64 correctly masked the all-ones value down to 0 or 1, as required
by the instruction specification, but on x64 it did not. This PR fixes
that bug and makes x64 consistent with aarch64.
While staring at this code I realized that `bextend` was also not
consistent with the all-ones invariant: it should do a sign-extend, not
a zero-extend as it previously did. This is also rectified and tested.
(Aarch64 also already had this case implemented correctly.)
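To make the conversions concrete, here is a hedged plain-Rust model of the two semantics on a `b8` (a model of the instruction specs, not the backend code):

```rust
fn bint_i32(b8_bits: u8) -> i32 {
    // `bint` must produce exactly 0 or 1, so the all-ones value is masked.
    (b8_bits & 1) as i32
}

fn bextend_b32(b8_bits: u8) -> u32 {
    // `bextend` must preserve the all-ones representation, so it
    // sign-extends rather than zero-extends.
    b8_bits as i8 as i32 as u32
}

fn main() {
    assert_eq!(bint_i32(0xff), 1); // true -> 1
    assert_eq!(bextend_b32(0xff), 0xffff_ffff); // true stays all-ones
    assert_eq!(bint_i32(0x00), 0); // false -> 0
    assert_eq!(bextend_b32(0x00), 0); // false stays all-zeros
}
```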
Fixes #3003.
The previous address calculation code had a bug where we tried to
add offsets into a temporary register before defining it, causing
the regalloc to complain.
* Add support for processor features (including auto-detection).
* Move base architecture set requirement back to z14.
* Add z15 feature sets and re-enable z15-specific code generation
when required features are available.
This adds full back-end support for the Fence, AtomicLoad
and AtomicStore operations, and partial support for the
AtomicCas and AtomicRmw operations.
The missing pieces include sub-word operations, operations
on little-endian memory requiring byte-swapping, and some
of the subtypes of AtomicRmw -- everything that cannot be
implemented without a compare-and-swap loop. This will be
done in a follow-up patch.
This patch already suffices to make the test suite green
again after a recent change that now requires atomic
operations when accessing the heap.
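The subtypes that remain are exactly those that need a compare-and-swap loop; as a hedged Rust model of what such a loop computes (illustrative only, not the planned s390x sequence):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sketch of an AtomicRmw `nand` expressed as a compare-and-swap retry loop;
// there is no single-instruction form for ops like this.
fn atomic_rmw_nand(mem: &AtomicU64, operand: u64) -> u64 {
    let mut old = mem.load(Ordering::Relaxed);
    loop {
        let new = !(old & operand);
        match mem.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
            Ok(previous) => return previous, // AtomicRmw yields the old value
            Err(current) => old = current,   // lost the race; retry with the fresh value
        }
    }
}
```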
We have three different approaches depending on the type of comparison requested (see the sketch below):
* For eq/ne, we compare the high bits and the low bits and check that both are equal.
* For overflow checks, we perform an i128 add and check the resulting overflow flag.
* For the remaining comparisons (gt/lt/sgt/etc.), we compare both the low bits and the high bits; if the high bits are equal, we return the result of the unsigned comparison on the low bits.
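A plain-Rust sketch of these rules, modeling an i128 as `(lo, hi)` u64 halves (illustrative, not the backend code):

```rust
fn i128_eq(a: (u64, u64), b: (u64, u64)) -> bool {
    // eq/ne: both halves must match.
    a.0 == b.0 && a.1 == b.1
}

fn i128_ult(a: (u64, u64), b: (u64, u64)) -> bool {
    // Ordered comparisons: the high halves decide unless they are equal,
    // in which case the *unsigned* comparison of the low halves decides.
    if a.1 != b.1 { a.1 < b.1 } else { a.0 < b.0 }
}

fn i128_slt(a: (u64, u64), b: (u64, u64)) -> bool {
    // Signed variants compare the high halves as signed; the low halves
    // are still compared unsigned.
    if a.1 != b.1 { (a.1 as i64) < (b.1 as i64) } else { a.0 < b.0 }
}
```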
As with other i128 ops, we are still missing ImmLogic support.
Currently we just use a two-instruction version of the same i64 ops (see below).
ImmLogic doesn't really support multiple register inputs, so it's left as a TODO for future optimization.
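The two-instruction version simply applies the i64 op to each half, as in this sketch:

```rust
fn band_i128(a: (u64, u64), b: (u64, u64)) -> (u64, u64) {
    // One 64-bit AND for the low halves, one for the high halves; with
    // ImmLogic support, constant operands could fold into immediates.
    (a.0 & b.0, a.1 & b.1)
}
```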
Enabling runtests for the s390x backend exposed a pre-existing endian bug in the handling of bool test-case return values.
These are written as integers of the same width by the trampoline, but are always read out as the Rust `bool` type. This happens to work on little-endian systems, but fails for any boolean type larger than 1 byte on big-endian systems.
See: https://github.com/bytecodealliance/wasmtime/pull/2964#issuecomment-855879866
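A hedged illustration of the hazard: `true` stored as the 4-byte integer 1 and then read back through its first byte, which is effectively what reading a wider slot as a 1-byte Rust `bool` does:

```rust
fn first_byte(v: u32, big_endian: bool) -> u8 {
    let bytes = if big_endian { v.to_be_bytes() } else { v.to_le_bytes() };
    bytes[0]
}

fn main() {
    assert_eq!(first_byte(1, false), 1); // little-endian: reads back as true
    assert_eq!(first_byte(1, true), 0);  // big-endian: reads back as false!
}
```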
With this change we now reuse tests across multiple arches.
Duplicate tests were merged into the same file where possible.
Some legacy x86 tests were left in separate files due to incompatibilities with the rest of the test suite.
When AVX512VL and AVX512F are available, use a single instruction
(`VCVTUDQ2PS`) instead of a 9-instruction sequence. This
optimization is a port from the legacy x86 backend.
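The lane-wise semantics being lowered, in plain Rust terms (one `VCVTUDQ2PS` does this directly; without AVX512 a longer sequence is needed because SSE's `CVTDQ2PS` treats lanes as signed):

```rust
// Unsigned 32-bit integer lanes converted to f32 lanes.
fn fcvt_from_uint_i32x4(v: [u32; 4]) -> [f32; 4] {
    v.map(|lane| lane as f32)
}
```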
Previously, the x64 backend's ABI code would generate a sign-extending
load when loading a less-than-64-bit integer from a spillslot. This is
incorrect: e.g., for i32s > 0x80000000, this would result in all high
bits set.
This interacts poorly with another optimization. Normally, the invariant
is that the high bits of a register holding a value of a certain type,
beyond that type's bits, are undefined. However, as an optimization, we
recognize and use the fact that on x86-64, 32-bit instructions zero the
upper 32 bits. This allows us to elide a 32-to-64-bit zero-extend op
(turning it into just a move, which can then sometimes disappear
entirely due to register coalescing).
If a spill and reload happen between the production of a 32-bit value
from an instruction known to zero the upper bits and its use, then we
will rely on zero upper bits that might actually be set by a
sign-extend. This will result in incorrect execution.
As a fix, we stick to a simple invariant: we always spill and reload a
full 64 bits when handling integer registers on x64. This ensures that
no bits are mangled.
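A hedged walkthrough of the failure mode in plain Rust, with register contents modeled as `u64`:

```rust
fn main() {
    // A 32-bit instruction produced this value; on x86-64 the upper 32
    // bits of the destination register are zeroed.
    let produced: u32 = 0x8000_0000;
    let in_reg: u64 = produced as u64; // 0x0000_0000_8000_0000

    // Old behavior: reload via a sign-extending 32-bit load.
    let reloaded: u64 = produced as i32 as i64 as u64; // 0xffff_ffff_8000_0000

    // A zero-extend elided on the assumption of zeroed upper bits now
    // observes set upper bits instead:
    assert_ne!(in_reg, reloaded);

    // Fixed behavior: spill and reload the full 64 bits unchanged.
    let reloaded_full: u64 = in_reg;
    assert_eq!(in_reg, reloaded_full);
}
```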
This change implements `vselect` using SSE4.1's `BLENDVPS`, `BLENDVPD`,
and `PBLENDVB`. `vselect` is a lane-selecting instruction that is used
by
[simple_preopt.rs](fa1faf5d22/cranelift/codegen/src/simple_preopt.rs (L947-L999))
to lower `bitselect` to a single x86 instruction when the condition mask
is known to be boolean (all 1s or 0s, e.g., from a conversion). This is
better than `bitselect` in general, which lowers to 4-5 instructions.
The old backend had the `vselect` lowering; this simply introduces it to
the new backend.
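Per lane, the two selects differ as in this sketch; the `BLENDV` family can implement `vselect` directly because a boolean mask lane is known to be all 1s or all 0s:

```rust
fn bitselect_lane(mask: u32, a: u32, b: u32) -> u32 {
    // General case: pick individual *bits* from a or b.
    (a & mask) | (b & !mask)
}

fn vselect_lane(mask_lane: u32, a: u32, b: u32) -> u32 {
    // Lane-select case: the mask lane is all 1s or all 0s, so the whole
    // lane comes from a or from b.
    if mask_lane != 0 { a } else { b }
}
```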
* Upgrade to the latest versions of gimli, addr2line, object
And adapt to API changes. New gimli supports wasm dwarf, resulting in
some simplifications in the debug crate.
* Upgrade gimli usage in Linux-specific profiling too
* Add "continue" statement after interpreting a wasm local dwarf opcode
When dealing with params that need to be split, we follow the aarch64 ABI:
the value is split in two, and we make sure that the argument starts in an
even-numbered xN register.
The Apple ABI does not require this, so on those platforms we start
params anywhere.
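A hedged sketch of the numbering rule (illustrative, not the actual ABI code):

```rust
// Pick the first xN register for a split (two-register) parameter.
fn first_xreg_for_split_param(next_free_xreg: u8, is_apple: bool) -> u8 {
    if is_apple {
        next_free_xreg // Apple ABI: start anywhere
    } else {
        (next_free_xreg + 1) & !1 // AAPCS64: round up to an even xN
    }
}
```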