* Added `mem_flags` parameter to `State::checked_{load,store}` as the means
for determining the endianness, typically derived from an instruction.
* Added `native_endianness` property to `InterpreterState` as a fallback
  for determining endianness, such as when no memory flags are available
  or set.
* Added `to_be` and `to_le` methods to `DataValue` (see the sketch after
  this list).
* Added `AtomicCas` and `AtomicRmw` to list of instructions with retrievable
memory flags for `InstructionData::memflags`.
* Enabled `atomic-{cas,rmw}-subword-{big,little}.clif` for interpreter run
tests.
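As a rough illustration of the endianness selection and byte-order
conversion described above, here's a minimal sketch; `Endianness`,
`select_endianness`, and `to_bytes` are hypothetical names for
illustration, not the actual cranelift-interpreter API:
```
// Hypothetical sketch only; illustrates the fallback behavior, not the
// real interpreter types.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Endianness {
    Big,
    Little,
}

/// Prefer the endianness carried by the instruction's memory flags;
/// fall back to the interpreter's native endianness when none is set.
fn select_endianness(from_flags: Option<Endianness>, native: Endianness) -> Endianness {
    from_flags.unwrap_or(native)
}

/// Mirrors what `to_be`/`to_le`-style conversions accomplish, shown here
/// for a concrete integer width.
fn to_bytes(value: u32, endianness: Endianness) -> [u8; 4] {
    match endianness {
        Endianness::Big => value.to_be_bytes(),
        Endianness::Little => value.to_le_bytes(),
    }
}
```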
This commit adds lowerings to the AArch64 backend for the `fmls`
instruction, which is intended to be leveraged in the relaxed-simd
proposal for WebAssembly. This should hopefully allow a teeny bit more
efficient codegen for this operator than using the `fmla` instruction
plus a negation instruction.
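For intuition, the per-lane arithmetic can be sketched in scalar Rust
(illustrative only; the backend emits the vector instructions directly):
```
/// `fmla` accumulates: acc + (n * m), fused.
fn fmla(acc: f32, n: f32, m: f32) -> f32 {
    n.mul_add(m, acc)
}

/// `fmls` subtracts the product: acc - (n * m), fused, which is why a
/// lowering can absorb an `fneg` that would otherwise be a separate
/// negation instruction.
fn fmls(acc: f32, n: f32, m: f32) -> f32 {
    (-n).mul_add(m, acc)
}
```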
This catches a case that wasn't handled previously by #5880 to allow a
constant load to be folded into an instruction rather than forcing it to
be loaded into a temporary register.
* Revert "egraphs: disable GVN of effectful idempotent ops (temporarily). (#5808)"
This reverts commit c7e2571866.
* egraphs: fix handling of effectful-but-idempotent ops and GVN.
This PR addresses #5796: currently, ops that are effectful, i.e., remain
in the side-effecting skeleton (which we keep in the `Layout` while the
egraph exists), but are idempotent and thus mergeable by a GVN pass, are
not handled properly.
GVN is still possible on effectful but idempotent ops precisely because
our GVN does not create partial redundancies: it removes an instruction
only when it is dominated by an identical instruction. An instruction
will not be "hoisted" to a point where it could execute in the optimized
code but not in the original.
However, there are really two parts to the egraph implementation that
produce this effect: the deduplication on insertion into the egraph, and
the elaboration with a scoped hashmap. The deduplication lets us give a
single name (value ID) to all copies of an identical instruction, and
then elaboration will re-create duplicates if GVN should not hoist or
merge some of them.
Because deduplication need not worry about dominance or scopes, we use a
simple (non-scoped) hashmap to dedup/intern ops as "egraph nodes".
When we added support for GVN'ing effectful but idempotent ops (#5594),
we kept the use of this simple dedup'ing hashmap, but these ops do not
get elaborated; instead they stay in the side-effecting skeleton. Thus,
we inadvertently created potential for weird code-motion effects.
The proposal in #5796 would solve this in a clean way by treating these
ops as pure again, and keeping them out of the skeleton, instead putting
"force" pseudo-ops in the skeleton. However, this is a little more
complex than I would like, and I've realized that @jameysharp's earlier
suggestion is much simpler: we can keep an actual scoped hashmap
separately just for the effectful-but-idempotent ops, and use it to GVN
while we build the egraph. In effect, we're fusing a separate GVN pass
with the egraph pass (but letting it interact corecursively with
egraph rewrites). This is in principle similar to how we keep a separate
map for loads and fuse that pass with the egraph rewrite pass as well.
Note that we can use a `ScopedHashMap` here without the "context" (as
needed by `CtxHashMap`) because, as noted by @jameysharp, in practice
the ops we want to GVN have all their args inline. Equality on the
`InstructionData` itself is conservative: two insts whose struct
contents compare shallowly equal are definitely identical, but identical
insts in a deep-equality sense may not compare shallowly equal, due to
list indirection. This is fine for GVN, because it is still sound to
skip any given GVN opportunity (and keep the original instructions).
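To make the scoping concrete, here's a toy version of the idea (a sketch
only; Cranelift's actual `ScopedHashMap` is its own type and is keyed on
`InstructionData`). Entries inserted in an inner scope vanish when that
scope is popped, so when scopes are pushed/popped along a dominator-tree
traversal, any hit comes from a dominating point:
```
use std::collections::HashMap;
use std::hash::Hash;

struct ScopedHashMap<K: Eq + Hash, V> {
    scopes: Vec<HashMap<K, V>>,
}

impl<K: Eq + Hash, V> ScopedHashMap<K, V> {
    fn new() -> Self {
        Self { scopes: vec![HashMap::new()] }
    }

    /// Enter a dominator-tree child; insertions made from here on are
    /// visible only until the matching `pop_scope`.
    fn push_scope(&mut self) {
        self.scopes.push(HashMap::new());
    }

    fn pop_scope(&mut self) {
        self.scopes.pop();
    }

    /// Record "this instruction produces this value" in the current scope.
    fn insert(&mut self, key: K, value: V) {
        self.scopes.last_mut().unwrap().insert(key, value);
    }

    /// Search innermost-to-outermost: a hit was inserted at a dominating
    /// point, so reusing its value never hoists code.
    fn get(&self, key: &K) -> Option<&V> {
        self.scopes.iter().rev().find_map(|scope| scope.get(key))
    }
}
```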
Fixes #5796.
* Add comments from review.
* x64: Add `shuffle` cases for `punpck{h,l}bw`
I noticed this difference between LLVM and Cranelift for something I was
looking at recently, and while it's probably not all that common I
figured I'd add it here since it should be somewhat useful nevertheless.
* Review feedback
* Use u128 extractor instead
* doc: add a page listing supported proposals
This adds a table showing Wasmtime's support for various WASI proposals,
much like the one available for WebAssembly proposals. This change is
related to [#2423], which provides guidelines for implementing WASI
proposals but was never merged.
[#2423]: https://github.com/bytecodealliance/wasmtime/pull/2423
* review: remove phase-gating sentence
This instruction is only defined with i8x16 inputs and outputs, so
there's no need for a type variable; shadow the otherwise-generic `a`
result with a concrete i8x16 type.
This commit adds support for the bare lowering of the `iadd_pairwise`
instruction with `i16x8` and `i32x4` types on the x64 backend. These
lowerings are achieved with the `phaddw` and `phaddd` instructions,
respectively. Additionally, AVX encodings of these instructions are
added as well.
The motivation for these new lowerings comes from the relaxed-simd
proposal which will use them in the deterministic lowering of some
instructions on the x64 backend.
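For reference, my reading of the per-lane behavior of `iadd_pairwise` on
`i16x8` (which is what `phaddw` computes), sketched in scalar Rust:
```
/// Each output lane is the wrapping sum of a horizontally adjacent pair,
/// with pairs from `x` filling the low lanes and pairs from `y` the high.
fn iadd_pairwise_i16x8(x: [i16; 8], y: [i16; 8]) -> [i16; 8] {
    let mut out = [0i16; 8];
    for i in 0..4 {
        out[i] = x[2 * i].wrapping_add(x[2 * i + 1]);
        out[i + 4] = y[2 * i].wrapping_add(y[2 * i + 1]);
    }
    out
}
```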
This change adds basic coredump generation after a WebAssembly trap is
raised. The coredump includes rudimentary stack / process debugging
information.
A new CLI argument is added to enable coredump generation:
```
wasmtime --coredump-on-trap=/path/to/coredump/file module.wasm
```
See ./docs/examples-coredump.md for a working example.
Refs https://github.com/bytecodealliance/wasmtime/issues/5732
* Change the name of wit-bindgen's host implementation traits.
Instead of naming the host implementation trait something like
`wasi_filesystem::WasiFilesystem`, name it `wasi_filesystem::Host`, and
avoid using the identifier `Host` in other places.
This fixes a collision when generating bindings for the current
wasi-clock API, in which an interface `wall-clock` contains a type
`wall-clock`, creating a naming collision on the name `WallClock`.
* Update tests to use the new trait name.
* Fix one more.
* Add the new test interface to the simple-wasi world.
A number of places in the x64 backend make use of 128-bit constants for
various wasm SIMD-related instructions although most of them currently
use the `x64_xmm_load_const` helper to load the constant into a
register. Almost all xmm instructions, however, enable using a memory
operand, which means that these loads can be folded into instructions to
help reduce register pressure. Automatic conversions were added from a
`VCodeConstant` into an `XmmMem` value, and then explicit loads were all
removed in favor of forwarding the `XmmMem` value directly to the
underlying instruction. Note that some instances of `x64_xmm_load_const`
remain since they're used in contexts where load sinking won't work
(e.g. they're the first operand, not the second for non-commutative
instructions).
This was added for the wasm SIMD proposal, but I've been poking around
at this recently and I believe the instruction can instead be
represented by its component parts with the same semantics. This commit
removes the instruction and instead represents it with the existing
`iadd_pairwise` instruction (among others) and updates the backends with
new pattern matches to keep the same codegen as before.
Interestingly, this entirely removed the codegen rule on the AArch64
backend with no replacement, as the existing rules already produced the
same codegen.
* Generalize unsigned `(x << k) >> k` optimization
Split the existing rule into three parts:
- A dual of the rule for `(x >> k) << k` that is only valid for unsigned
shifts.
- Known-bits analysis for `(band (uextend x) k)`.
- A new rule for converting `sextend` to `uextend` if the sign-extended
bits are masked out anyway.
The first two together cover the existing rule; a quick check of the
unsigned identity is sketched after this list.
* Generalize signed `(x << k) >> k` optimization
* Review comments
* Generalize sign-extending shifts further
The shifts can be eliminated even if the shift amount isn't exactly
equal to the difference in bit-widths between the narrow and wide types.
* Add filetests
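As promised above, a quick check of the unsigned identity behind the
first rule: this compares `(x << k) >> k` against the equivalent mask
over sampled values (plain Rust, purely illustrative):
```
fn shl_ushr(x: u32, k: u32) -> u32 {
    (x << k) >> k
}

/// Shifting left then logically right by `k` keeps the low (32 - k) bits.
fn masked(x: u32, k: u32) -> u32 {
    x & (u32::MAX >> k)
}

fn main() {
    for &x in &[0u32, 1, 0x8000_0001, u32::MAX] {
        for k in 0..32 {
            assert_eq!(shl_ushr(x, k), masked(x, k));
        }
    }
}
```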
Early on in WASI, we weren't sure whether we should allow preopens to be
closed, so conservatively, we disallowed them. Among other things, this
protected assumptions in wasi-libc that it can hold onto preopen file
descriptors and rely on them always being open.
However, I now think it makes sense to relax this restriction. wasi-libc
itself doesn't expose the preopen file descriptors, so users shouldn't
ever be closing them naively, unless they have wild closes. And
toolchains other than wasi-libc may want to close preopens as a way to
drop privileges once the main file handles are opened.
* Test all backends when a runtest is modified
* Check that this triggers all backend tests
* Revert "Check that this triggers all backend tests"
This reverts commit 1d12536d04f5a3b01fa5420f407960d7ab81da8f.
For instructions with no results (such as branches and stores) or
instructions with multiple results (such as add with carry), we have
assertions checking that an optimization rule doesn't try to match on
or construct such instructions.
When we generate terms for matching or constructing instructions, the
terms for these instructions are guaranteed to panic if they're ever
used. So let's just not generate them.
In the future we may wish to generate terms with different types for
these instructions, to make them usable in ISLE rules for optimization
that fall outside our current egraph constraints.
* Add a Result type alias
* Refer to the type in top-level docs
* Use this inside the documentation for the bindgen! macro
* Fix tests
* Address small PR feedback
* Simply re-export anyhow types
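A plausible shape for the alias (assumed names, not verified against the
actual wasmtime source):
```
// Hypothetical sketch: generated bindings can then refer to `Result<T>`
// and `Error` without naming anyhow directly.
pub use anyhow::Error;
pub type Result<T, E = Error> = std::result::Result<T, E>;
```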
This uses the `cmov`, which was previously necessary for Spectre
mitigation, to clamp the table index instead of zeroing it. By then
placing the default target as the last entry in the table, we can use
just one branch instruction in all cases.
Since there isn't a bounds-check branch any more, this sequence no
longer needs Spectre mitigation. And since we don't need to be careful
about preserving flags, half the instructions can be removed from this
pseudoinstruction and emitted as regular instructions instead.
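A scalar sketch of the resulting dispatch (illustrative Rust, not the
emitted code):
```
/// With the default target stored as the table's last entry, any
/// out-of-range index clamps onto it, so no bounds-check branch (and
/// hence no Spectre mitigation for one) is needed.
fn br_table_target(table: &[usize], idx: usize) -> usize {
    let clamped = idx.min(table.len() - 1); // cmp + cmov on x64
    table[clamped]
}
```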
This is a net savings of three bytes in the encoding of x64's br_table
pseudoinstruction. The generated code can sometimes be longer overall
because the blocks are emitted in a slightly different order.
My benchmark results show a very small effect on runtime performance
with this change.
The spidermonkey benchmark in Sightglass runs "1.01x faster" than main
by instructions retired, but with no significant difference in CPU
cycles. I think that means it rarely hit the default case in any
br_table instructions it executed.
The pulldown-cmark benchmark in Sightglass runs "1.01x faster" than main
by CPU cycles, but main runs "1.00x faster" by instructions retired. I
think that means this benchmark hit the default case a significant
amount of the time, so it executes a few more instructions per br_table,
but maybe the branches were predicted better.
* Remove globals from parking spot tests
Use `std::thread::scope` to keep everything local to just the tests.
* Fix a panic due to a race in `unpark` and `park`
This commit fixes a panic in the `ParkingSpot` implementation where an
`unpark` signal may not get acknowledged when a waiter times out,
causing the waiter to remove itself from the internal map but panic
thinking that it missed an unpark signal.
The fix in this commit is to consume unpark signals when a timeout
happens. This can lead to another possible race I've detailed in the
comments which I believe is allowed by the specification of park/unpark
in wasm.
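A condensed sketch of the shape of the fix, using a toy pending-signal
counter (the real `ParkingSpot` tracks waiters in a map and is more
involved):
```
use std::sync::{Condvar, Mutex};
use std::time::Duration;

/// Toy spot: `pending` counts unpark signals not yet consumed by a waiter.
struct Spot {
    pending: Mutex<usize>,
    cond: Condvar,
}

impl Spot {
    /// Returns true if we consumed an unpark signal.
    fn park_timeout(&self, dur: Duration) -> bool {
        let guard = self.pending.lock().unwrap();
        let (mut pending, result) = self
            .cond
            .wait_timeout_while(guard, dur, |p| *p == 0)
            .unwrap();
        // The fix described above: even when the wait timed out, a racing
        // `unpark` may already have published a signal. Consume it here so
        // no later waiter observes a stale signal and panics.
        let signaled = *pending > 0;
        if signaled {
            *pending -= 1;
        }
        signaled || !result.timed_out()
    }

    fn unpark(&self) {
        *self.pending.lock().unwrap() += 1;
        self.cond.notify_one();
    }
}
```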
* Update crates/runtime/src/parking_spot.rs
Co-authored-by: Andrew Brown <andrew.brown@intel.com>
* x64: Fill out more AVX instructions
This commit fills out more AVX instructions for SSE counterparts
currently used. Many of these instructions do not benefit from the
3-operand form that AVX uses, but instead benefit from being able to use
`XmmMem` instead of `XmmMemAligned`, which can avoid some extra
temporary registers in some cases.
* Review comments
* Rework the blockorder module to reuse the dom tree's cfg postorder
* Update domtree tests
* Treat br_table with an empty jump table as multiple block exits
* Bless tests
* Change branch_idx to succ_idx and fix the comment
When expanding a min/max operation to a pair of icmp + select,
do not attempt to expand the input value operands twice, as
this might fail with memory operands.
Fixes https://github.com/bytecodealliance/wasmtime/issues/5859.
Use wrapping_neg in i{64,32,16}_from_negated_value to avoid Rust
aborts due to integer overflow. The resulting INT_MIN is already
handled correctly in subsequent operations.
Fixes https://github.com/bytecodealliance/wasmtime/issues/5863.
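The overflow case in question is the minimum value:
```
fn main() {
    let x = i64::MIN;
    // `-x` would overflow (and abort in a build with overflow checks),
    // since i64::MIN has no positive counterpart; wrapping_neg yields
    // i64::MIN again, which later operations handle correctly.
    assert_eq!(x.wrapping_neg(), i64::MIN);
}
```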
As @yamt points out [here], the `wait`/`notify` pairing used in this
manual WAT test was not effective. The `wait` always immediately
returned, meaning that the main thread essentially spins until a counter
is atomically incremented. This is fine for test correctness, but was
not the original intent, which was lost in a refactoring. This change
uses the `$i` local to keep track of the counter value we expect to see
for the `wait`, so that the `wait`/`notify` pair actually waits as
expected.
[here]: https://github.com/bytecodealliance/wasmtime/pull/5484#discussion_r1101200012
As per the linked issue, atomic_rmw was implemented without specific regard for thread safety.
Additionally, the relevant filetest (atomic-rmw-little.clif) was enabled and altered to fix an
incorrect call to test function `%atomic_rmw_and_i64` after setting up test function
`%atomic_rmw_and_i32`.
The relaxed-simd proposal for WebAssembly adds a fused-multiply-add
operation for `v128` types so I was poking around at Cranelift's
existing support for its `fma` instruction. I was also poking around at
the x86_64 ISA's offerings for the FMA operation and ended up with this
PR that improves the lowering of the `fma` instruction on the x64
backend in a number of ways:
* A libcall-based fallback is now provided for `f32x4` and `f64x2` types
in preparation for eventual support of the relaxed-simd proposal.
These encodings are horribly slow, but it's expected that if FMA
semantics must be guaranteed then it's the best that can be done
without the `fma` feature. Otherwise it'll be up to producers (e.g.
Wasmtime embedders) whether wasm-level FMA operations should be FMA or
multiply-then-add.
* In addition to the existing `vfmadd213*` instructions, opcodes were
  added for `vfmadd132*`. The `132` variant is selected based on which
  argument can have a sinkable load (see the note after this list).
* Any argument in the `fma` CLIF instruction can now have a
`sinkable_load` and it'll generate a single FMA instruction.
* All `vfnmadd*` opcodes were added as well. These are pattern-matched
  where one of the arguments to the CLIF instruction is an `fneg`. I
  opted not to add a new CLIF instruction here since pattern matching
  seemed easy enough, but I'm also not intimately familiar with the
  semantics here, so if a new instruction is the preferred approach I
  can do that too.
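As the note promised above on the `213`/`132` forms: the memory operand
of these instructions is always the last source, which is what drives
the form selection. A hypothetical sketch (digits follow Intel's
operand-order naming; `pick_form` is illustrative, not backend code):
```
// vfmadd213 dst, src2, src3/mem: dst = src2 * dst  + src3  (load = addend)
// vfmadd132 dst, src2, src3/mem: dst = dst  * src3 + src2  (load = multiplicand)
enum FmaForm {
    Form213,
    Form132,
}

/// Pick the form that lets the sinkable load be the memory operand.
fn pick_form(addend_is_load: bool) -> FmaForm {
    if addend_is_load {
        FmaForm::Form213
    } else {
        FmaForm::Form132
    }
}
```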
* x64: Enable load-coalescing for SSE/AVX instructions
This commit unlocks the ability to fold loads into operands of SSE and
AVX instructions. This is beneficial for function size when it happens,
in addition to reducing register pressure.
Previously this was not done because most SSE instructions require
memory to be aligned. AVX instructions, however, do not have alignment
requirements.
The solution implemented here, recommended by Chris, is to add a new
`XmmMemAligned` newtype wrapper around `XmmMem`. All SSE
instructions are now annotated as requiring an `XmmMemAligned` operand
except for a few new instruction styles used specifically for
instructions that don't require alignment (e.g. `movdqu`, `*sd`, and
`*ss` instructions). All existing instruction helpers continue to take
`XmmMem`, however. This way if an AVX lowering is chosen it can be used
as-is. If an SSE lowering is chosen, however, then an automatic
conversion from `XmmMem` to `XmmMemAligned` kicks in. This automatic
conversion only fails for unaligned addresses in which case a load
instruction is emitted and the operand becomes a temporary register
instead. A number of prior `Xmm` arguments have now been converted to
`XmmMem` as well.
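A hypothetical sketch of that fallible conversion (stand-in types, not
the actual backend code):
```
#[derive(Clone, Copy)]
struct Xmm(u8); // stand-in for a virtual register

enum XmmMem {
    Reg(Xmm),
    Mem { addr: u64, align: u32 },
}

enum XmmMemAligned {
    Reg(Xmm),
    Mem { addr: u64 },
}

/// SSE operands must be 16-byte aligned, so an under-aligned memory
/// operand is loaded into a temporary register instead.
fn to_aligned(op: XmmMem, load_into_temp: impl FnOnce(u64) -> Xmm) -> XmmMemAligned {
    match op {
        XmmMem::Reg(r) => XmmMemAligned::Reg(r),
        XmmMem::Mem { addr, align } if align >= 16 => XmmMemAligned::Mem { addr },
        // Emit an unaligned load (e.g. movdqu) and use the register.
        XmmMem::Mem { addr, .. } => XmmMemAligned::Reg(load_into_temp(addr)),
    }
}
```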
One change from this commit is that loading an unaligned operand for an
SSE instruction previously would use the "correct type" of load, e.g.
`movups` for f32x4 or `movupd` for f64x2, but now the loading happens in
a context without type information so the `movdqu` instruction is
generated. According to [this stack overflow question][question] it
looks like modern processors won't penalize this "wrong" choice of type
when the operand is then used for f32 or f64 oriented instructions.
Finally this commit improves some reuse of logic in the `put_in_*_mem*`
helper to share code with `sinkable_load` and avoid duplication. With
this in place some various ISLE rules have been updated as well.
In the tests it can be seen that AVX instructions are now automatically
load-coalesced and use memory operands in a few cases.
[question]: https://stackoverflow.com/questions/40854819/is-there-any-situation-where-using-movdqu-and-movupd-is-better-than-movups
* Fix tests
* Fix move-and-extend to be unaligned
Like the unaligned instructions above, these don't have alignment
requirements. Additionally, add some ISA tests to ensure that their
output is tested.
* Review comments