x64: port atomic_rmw to ISLE (#4389)
* x64: port `atomic_rmw` to ISLE

  This change ports `atomic_rmw` to ISLE for the x64 backend. It does not change the lowering in any way, though it seems possible that the fixed regs need not be as fixed and that there are opportunities for single-instruction lowerings. It also renames `inst_common::AtomicRmwOp` to `MachAtomicRmwOp` to disambiguate it from the IR enum of the same name.

* x64: remove remaining hardcoded register constraints for `atomic_rmw`

* x64: use `SyntheticAmode` in `AtomicRmwSeq`

* review: add missing reg collector for amode

* review: collect memory registers in the 'late' phase
@@ -2851,3 +2851,19 @@
(rule (lower (has_type (and (fits_in_64 ty) (ty_int _))
                       (atomic_cas flags address expected replacement)))
      (x64_cmpxchg ty expected replacement (to_amode flags address (zero_offset))))

;; Rules for `atomic_rmw` ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; This is a simple, general-case atomic update, based on a loop involving
;; `cmpxchg`. Note that we could do much better than this in the case where the
;; old value at the location (that is to say, the SSA `Value` computed by this
;; CLIF instruction) is not required. In that case, we could instead implement
;; this using a single `lock`-prefixed x64 read-modify-write instruction. Also,
;; even in the case where the old value is required, for the `add` and `sub`
;; cases, we can use the single instruction `lock xadd`. However, those
;; improvements have been left for another day. TODO: filed as
;; https://github.com/bytecodealliance/wasmtime/issues/2153.

(rule (lower (has_type (and (fits_in_64 ty) (ty_int _))
                       (atomic_rmw flags op address input)))
      (x64_atomic_rmw_seq ty op (to_amode flags address (zero_offset)) input))
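To make the general-case strategy concrete, here is a minimal Rust sketch of the semantics that a `cmpxchg`-loop lowering like `x64_atomic_rmw_seq` implements: repeatedly read the old value, compute the new one, and attempt a compare-exchange until it succeeds, returning the old value. This is an illustration only, not the backend's actual emitted code; the helper name `atomic_rmw_seq` and the choice of `AtomicU64`/orderings are assumptions made for the example.

// Illustrative sketch of an atomic read-modify-write built from a
// compare-exchange loop, the general-case approach described in the
// comment above. Not the Cranelift backend's code.
use std::sync::atomic::{AtomicU64, Ordering};

/// Atomically apply `op` to the value at `loc` and return the previous value,
/// using a compare-exchange ("cmpxchg") loop.
fn atomic_rmw_seq(loc: &AtomicU64, mut op: impl FnMut(u64) -> u64) -> u64 {
    let mut old = loc.load(Ordering::Relaxed);
    loop {
        let new = op(old);
        match loc.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
            Ok(prev) => return prev,  // swap succeeded; `prev` is the old value
            Err(prev) => old = prev,  // lost a race; retry with the fresh value
        }
    }
}

fn main() {
    let x = AtomicU64::new(5);

    // General case: bitwise AND, which needs the loop when the old value
    // is also required by the caller.
    let old = atomic_rmw_seq(&x, |v| v & 0b11);
    assert_eq!(old, 5);
    assert_eq!(x.load(Ordering::SeqCst), 1);

    // For `add`/`sub`, `fetch_add` can compile down to a single `lock xadd`
    // on x86-64, which is the single-instruction improvement the comment
    // above leaves for later (issue #2153).
    let old = x.fetch_add(10, Ordering::SeqCst);
    assert_eq!(old, 1);
}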