This change implements `vselect` using SSE4.1's `BLENDVPS`, `BLENDVPD`,
and `PBLENDVB`. `vselect` is a lane-selecting instruction that is used
by
[simple_preopt.rs](fa1faf5d22/cranelift/codegen/src/simple_preopt.rs (L947-L999))
to lower `bitselect` to a single x86 instruction when the condition mask
is known to be boolean (all 1s or 0s, e.g., from a conversion). This is
better than `bitselect` in general, which lowers to 4-5 instructions.
The old backend had the `vselect` lowering; this simply introduces it to
the new backend.
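For intuition, a scalar model of the lane-select semantics (illustrative
only, not the actual lowering code): the BLENDV-family instructions pick
each lane based on the mask lane's most significant bit, which matches a
full bitwise `bitselect` exactly when each mask lane is all 1s or all 0s.

```rust
/// Scalar model of `vselect` on four 32-bit lanes, mirroring BLENDVPS:
/// each output lane comes from `if_true` when the mask lane's most
/// significant bit is set, and from `if_false` otherwise.
fn vselect_i32x4(mask: [u32; 4], if_true: [u32; 4], if_false: [u32; 4]) -> [u32; 4] {
    let mut out = [0u32; 4];
    for i in 0..4 {
        out[i] = if mask[i] & 0x8000_0000 != 0 {
            if_true[i]
        } else {
            if_false[i]
        };
    }
    out
}
```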
Since `uadd_sat`, `sadd_sat`, `usub_sat`, and `ssub_sat` are now only
available for vector types, this change removes the lowering code for
the scalar versions of these instructions in the arm32 and aarch64
backends.
This adds the machinery to encode the VPMULLQ instruction which is
available in AVX512VL and AVX512DQ. When these feature sets are
available, we use this instruction instead of a lengthy 12-instruction
sequence.
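For a sense of why the fallback is so long, here is a scalar model of
one lane's 64-bit multiply assembled from 32-bit halves (illustrative
only; the real SIMD fallback performs this decomposition across both
lanes with multiply/shift/add instructions, which VPMULLQ replaces):

```rust
/// One i64x2 lane of a 64-bit multiply built from 32-bit partial
/// products: lo*lo plus both cross products shifted into the high half
/// (the hi*hi term falls entirely outside a 64-bit result).
fn mul64_from_halves(a: u64, b: u64) -> u64 {
    let (a_lo, a_hi) = (a & 0xFFFF_FFFF, a >> 32);
    let (b_lo, b_hi) = (b & 0xFFFF_FFFF, b >> 32);
    a_lo.wrapping_mul(b_lo)
        .wrapping_add(a_lo.wrapping_mul(b_hi) << 32)
        .wrapping_add(a_hi.wrapping_mul(b_lo) << 32)
}
```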
Since the lowering of `imul` complicated the other ALU operations it was
matched with and since future commits will alter the multiplication
lowering further, this change moves the `imul` lowering to its own match
block.
This change adds a criterion-enabled benchmark, `x64-evex-encoding`, to
compare the performance of the builder pattern used to encode EVEX
instructions in the new x64 backend against the function pattern
used to encode EVEX instructions in the legacy x86 backend. At face
value, the results imply that the builder pattern is faster, but no
efforts were made to analyze and optimize these approaches further.
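A minimal sketch of the shape of such a criterion benchmark; the two
`encode_with_*` functions here are hypothetical stand-ins for the
builder-pattern and function-pattern encoding paths being compared:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical stand-ins for the two EVEX encoding paths.
fn encode_with_builder(buf: &mut Vec<u8>) {
    buf.push(0x62); // EVEX prefix byte; payload bytes would follow
}
fn encode_with_function(buf: &mut Vec<u8>) {
    buf.push(0x62);
}

fn bench_evex(c: &mut Criterion) {
    c.bench_function("evex/builder", |b| {
        b.iter(|| {
            let mut buf = Vec::with_capacity(16);
            encode_with_builder(black_box(&mut buf));
        })
    });
    c.bench_function("evex/function", |b| {
        b.iter(|| {
            let mut buf = Vec::with_capacity(16);
            encode_with_function(black_box(&mut buf));
        })
    });
}

criterion_group!(benches, bench_evex);
criterion_main!(benches);
```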
In order to benchmark the encoding code with criterion, the functions
and structures must be public. Moving this code to its own module
(instead of keeping it as a submodule of `inst`) allows `inst` to remain
private. This avoids having to expose and document (or ignore
documenting) the numerous instruction variants in `inst` while allowing
access to the encoding code. This commit changes no functionality.
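Schematically, the resulting visibility split (module names are
illustrative of the layout described above):

```rust
pub mod encoding {
    // EVEX encoder (and the empty VEX stub) live here; public so the
    // criterion benchmark can drive them directly.
}
mod inst {
    // `Inst` and its many variants remain crate-private, so they need
    // no public documentation.
}
```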
* Upgrade to the latest versions of gimli, addr2line, object
And adapt to API changes. New gimli supports wasm dwarf, resulting in
some simplifications in the debug crate.
* Upgrade gimli usage in Linux-specific profiling too
* Add "continue" statement after interpreting a wasm local dwarf opcode
When dealing with params that need to be split, we follow the
aarch64 ABI and split the value in two, making sure to start that
argument in an even-numbered xN register.
The Apple ABI does not require this, so on those platforms we can start
params in any register.
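A minimal sketch of the register-assignment rule described above (names
hypothetical, not the actual ABI implementation):

```rust
/// Pick the starting xN register index for an argument split across two
/// registers. The standard AArch64 AAPCS rounds the candidate up to an
/// even index; Apple's variant packs arguments with no such alignment.
fn first_reg_for_split_arg(next_free: usize, is_apple: bool) -> usize {
    if is_apple {
        next_free
    } else {
        (next_free + 1) & !1 // round up to x0, x2, x4, ...
    }
}
```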
This adds support for the IBM z/Architecture (s390x-ibm-linux).
The status of the s390x backend in its current form is:
- Wasmtime is fully functional and passes all tests on s390x.
- All back-end features are supported, with the exception of SIMD.
- There is still a lot of potential for performance improvements.
- Currently the only supported processor type is z15.
This removes an existing dependency on the byteorder crate in favor of
using std equivalents directly.
While not an issue for wasmtime per se, cranelift is now part of the
critical path of building and testing Rust, and minimizing dependencies,
even small ones, can help reduce the time and bandwidth required.
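For illustration, the kind of substitution involved (a representative
pattern, not the exact call sites):

```rust
// Before, with the byteorder crate:
//     use byteorder::{LittleEndian, ReadBytesExt};
//     let value = cursor.read_u32::<LittleEndian>()?;
//
// After, with std only:
fn read_u32_le(bytes: &[u8], offset: usize) -> Option<u32> {
    let end = offset.checked_add(4)?;
    let slice = bytes.get(offset..end)?;
    Some(u32::from_le_bytes(slice.try_into().ok()?))
}
```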
Now that we're using "possibly exported" as an impactful decision for
codegen (which trampolines to generate and which ABI a function has)
it's important that we calculate this property of a wasm function
correctly! Previously Wasmtime forgot to process "declared" element
segments alongside active/passive element segments, but this updates
Wasmtime to ensure that these entries are processed and all the
functions contained within are flagged as "possibly exported".
Closes #2850
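A hedged sketch of the shape of the fix (type and field names are
illustrative, not the actual wasmtime-environ types):

```rust
/// Illustrative element segment: active, passive, and declared segments
/// all carry function indices that can escape (e.g. via `ref.func`).
struct ElementSegment {
    functions: Vec<u32>,
}

/// Mark every function referenced by *any* element segment, including
/// "declared" segments, as possibly exported. Previously only active
/// and passive segments were walked, so declared entries were missed.
fn flag_possibly_exported(segments: &[ElementSegment], escaped: &mut [bool]) {
    for seg in segments {
        for &func_index in &seg.functions {
            escaped[func_index as usize] = true;
        }
    }
}
```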
* Optimize `table.init` instruction and instantiation
This commit optimizes table initialization as part of instance
instantiation and also applies the same optimization to the `table.init`
instruction. One part of this commit is to remove some preexisting
duplication between instance instantiation and the `table.init`
instruction itself, after this the actual implementation of `table.init`
is optimized to effectively have fewer bounds checks in fewer places and
have a much tighter loop for instantiation.
A big fallout from this change is that memory/table initializer offsets
are now stored as `u32` instead of `usize` to remove a few casts in a
few places. This ended up requiring moving some overflow checks that
happened during parsing to later in the code itself, because otherwise
the wrong spec test errors are emitted during testing. I've tried to
trace where these can possibly overflow, and I think I managed to catch
everything.
In a local synthetic test with an empty module containing a single
80,000-element initializer, this improves total instantiation time by
4x (562us => 141us).
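A rough sketch of the resulting bounds-check-once-then-copy shape
(illustrative; the real code works against the runtime's `Table` and
element types):

```rust
#[derive(Clone)]
struct FuncRef; // stand-in for a real table element

#[derive(Debug)]
enum Trap {
    TableOutOfBounds,
}

/// Validate both ranges once up front, then copy in a single tight loop
/// with no per-element bounds checks.
fn table_init(
    table: &mut [Option<FuncRef>],
    elements: &[Option<FuncRef>],
    dst: u32,
    src: u32,
    len: u32,
) -> Result<(), Trap> {
    let (dst, src, len) = (dst as usize, src as usize, len as usize);
    let src_end = src.checked_add(len).ok_or(Trap::TableOutOfBounds)?;
    let dst_end = dst.checked_add(len).ok_or(Trap::TableOutOfBounds)?;
    let src_slice = elements.get(src..src_end).ok_or(Trap::TableOutOfBounds)?;
    let dst_slice = table.get_mut(dst..dst_end).ok_or(Trap::TableOutOfBounds)?;
    dst_slice.clone_from_slice(src_slice);
    Ok(())
}
```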
* Review comments
Following the new ABI introduced for efficient support of multiple return values, the old backend's test for whether to generate unwind information was incomplete, resulting in no unwind information being generated and traps not being correctly caught by the runtime.
PR 2840 changed the store_spillslot routine to always store
integer registers in full word size to a spill slot. However,
the load_spillslot routine was not updated, which may cause
the contents to be reloaded with a different type. On big-endian
systems this will fetch the wrong data.
Fixed by using the same type override in load_spillslot.
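A minimal sketch of the invariant (hypothetical helper; the real code
works on cranelift types): spill and reload must agree on the in-slot
width, or a big-endian reload of a narrower type reads the wrong bytes.

```rust
/// Width override shared by both store_spillslot and load_spillslot.
/// Narrow integer spills are widened to a full 64-bit word; on
/// big-endian targets the narrow payload then sits in the slot's *high*
/// bytes, so a narrow reload from the slot base address would fetch the
/// wrong half of the word.
fn spill_slot_bits(ty_bits: u16, is_int: bool) -> u16 {
    if is_int && ty_bits < 64 {
        64
    } else {
        ty_bits
    }
}
```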
* x64: add EVEX encoding mechanism
Also includes an empty stub module for the VEX encoding.
* x64: lower abs.i64x2 to VPABSQ when available
* x64: refactor EVEX encodings to use `EvexInstruction`
This change replaces the `encode_evex` function with a builder-style struct, `EvexInstruction`. This approach clarifies the code, adds documentation, and results in slight speedups when benchmarked.
* x64: rename encoding CodeSink to ByteSink
The patch extends the unwinder to support targets that do not need
to use a dedicated frame pointer register. Specifically, the
changes include:
- Change the "fp" routine in the RegisterMapper to return an
*optional* frame pointer regsiter via Option<Register>.
- On targets that choose to not define a FP register via the above
routine, the UnwindInst::DefineNewFrame operation no longer switches
the CFA to be defined in terms of the FP. (The operation still can
be used to define the location of the clobber area.)
- In addition, on targets that choose not to define a FP register, the
UnwindInst::PushFrameRegs operation is not supported.
- There is a new operation UnwindInst::StackAlloc that needs to be
called on targets without FP whenever the stack pointer is updated.
This causes the CFA offset to be adjusted accordingly. (On
targets with FP this operation is a no-op.)
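Sketched as a trait (simplified; the real RegisterMapper has more
methods and error handling):

```rust
/// Maps machine registers to DWARF register numbers for unwind emission.
trait RegisterMapper {
    /// DWARF register number for a machine register.
    fn map(&self, reg: u16) -> u16;
    /// The frame-pointer register, if this target dedicates one. Targets
    /// returning `None` keep the CFA defined relative to SP and must
    /// emit `UnwindInst::StackAlloc` whenever SP moves.
    fn fp(&self) -> Option<u16>;
}
```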
The unwind rework (commit 2d5db92a) removed support for the
feature to allow a target to allocate the space for outgoing
function arguments right in the prologue (originally added
via commit 80c2d70d). This patch adds it back.
After the unwind rework (commit 2d5db92a) the space used to save
clobbered registers now lies between the nominal SP and the FP.
Therefore, the size of that space should now be included in the
frame size as reported by frame_size(), since this value is used
to compute the nominal_sp_to_fp offset.
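Schematically (a sketch of the layout arithmetic, not the exact code):

```rust
// Stack layout after the prologue (stack grows downward):
//
//   FP          ->  saved frame pointer / return address
//                   clobbered callee-saved registers  \
//                   spill slots and stack slots        } frame_size()
//   nominal SP  ->                                    /
//
// The clobber area now lies between nominal SP and FP, so it must be
// counted when deriving the nominal_sp_to_fp offset.
fn frame_size(stack_slots_size: u32, clobber_size: u32) -> u32 {
    stack_slots_size + clobber_size
}
```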
This refactoring replaces uses of `Inst::mov_r_m` with `Inst::store` to ensure there is only one code location to troubleshoot when generating store instructions for a specific type.
Previously, `Inst::store` only understood a subset of the scalar types, which resulted in failures seen in #2826. This change allows `Inst::store` to generate instructions for all scalar widths (`8 | 16 | 32 | 64`) since all of these are supported in the emission code of `Inst::MovRM`.
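A hedged sketch of the resulting width dispatch (simplified; the real
constructor works on cranelift types and an addressing-mode operand):

```rust
enum OperandSize {
    Size8,
    Size16,
    Size32,
    Size64,
}

/// Simplified model of `Inst::store` for scalars: every width in
/// `8 | 16 | 32 | 64` now maps onto the single MovRM emission path with
/// the matching operand size, rather than only a subset being accepted.
fn store_operand_size(bits: u16) -> OperandSize {
    match bits {
        8 => OperandSize::Size8,
        16 => OperandSize::Size16,
        32 => OperandSize::Size32,
        64 => OperandSize::Size64,
        _ => panic!("unexpected scalar store width: {}", bits),
    }
}
```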