Commit Graph

201 Commits

Andrew Brown
ca1b76421a [machinst x64]: remove duplicate code to insert a lane 2020-10-02 08:29:31 -07:00
Andrew Brown
c42a097a0c [machinst x64]: use is64 instead of w_bit 2020-10-02 08:29:31 -07:00
Andrew Brown
16a2538ecd [machinst x64]: rename Inst::XmmUninitializedValue and document
This approach is not the best but avoids an extra instruction; perhaps at some point, as mentioned in https://github.com/bytecodealliance/wasmtime/pull/2248, we will add the extra instruction or refactor things in such a way that this `Inst` variant is unnecessary.
2020-10-02 08:29:31 -07:00
Andrew Brown
50b9399006 [machinst x64]: lower remaining lane operations--any_true, all_true, splat 2020-10-02 08:29:31 -07:00
Andrew Brown
74226d6781 [machinst x64]: add integer comparisons 2020-10-02 08:29:31 -07:00
Andrew Brown
4484a00ea5 [machinst x64]: calculate extension modes in one place 2020-09-29 14:48:59 -07:00
Andrew Brown
715be68101 [machinst x64]: assert lane is correct size for extractlane
This change applies a good suggestion @bjorn3 made in #2230 that I forgot to implement there.
2020-09-29 09:34:22 -07:00
Andrew Brown
f50d905152 [machinst x64]: refactor using added RegMem::from(Writable<Reg>) 2020-09-29 08:45:12 -07:00
Andrew Brown
e3eb098c99 [machinst x64]: add swizzle implementation 2020-09-29 08:45:12 -07:00
Andrew Brown
050f078f86 [machinst x64]: add saturating addition implementation 2020-09-29 08:45:12 -07:00
Andrew Brown
a64abf9b76 [machinst x64]: add shuffle implementation 2020-09-29 08:45:12 -07:00
Andrew Brown
f4836f9ca9 [machinst x64]: add extractlane implementation 2020-09-29 08:45:12 -07:00
Andrew Brown
29fa894790 [machinst x64]: add insertlane implementation 2020-09-29 08:45:12 -07:00
Andrew Brown
ac2bf9d246 [machinst x64]: add packed min/max implementations 2020-09-23 15:40:46 -07:00
Andrew Brown
7546d98844 [machinst x64]: add avg_round implementation 2020-09-23 15:40:46 -07:00
Andrew Brown
b202464fa0 [machinst x64]: add iabs implementation 2020-09-23 15:40:46 -07:00
Johnnie Birch
07d0d32b69 Adds i64x2.mul for the new backend targeting x64 2020-09-11 13:17:42 -07:00
Benjamin Bouvier
3849dc18b1 machinst x64: revamp integer immediate emission;
In particular:

- try to optimize the integer emission into a 32-bit emission when the
high bits are all zero, and stop relying on the caller of `imm_r` to
ensure this (the size selection is sketched after this entry).
- rename `Inst::imm_r`/`Inst::Imm_R` to `Inst::imm`/`Inst::Imm`.
- generate a sign-extending mov of a 32-bit immediate to 64 bits whenever
possible.
- fix a few places where the previous commit introduced the generation
of zero constants with `xor` when calling `put_input_to_reg`, thus
clobbering the flags before they were read.
2020-09-11 18:13:30 +02:00
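
A minimal standalone sketch of the size selection described in the first bullet above (illustrative Rust; the enum and function names are hypothetical, not the backend's actual types): a value whose high 32 bits are zero fits a 32-bit `mov` (which zero-extends), a value that survives a sign-extending round trip through 32 bits fits the sign-extending form, and anything else needs the full 64-bit immediate.
```
enum ImmKind {
    Mov32,   // `mov r32, imm32`: zero-extends into the 64-bit register
    MovSx32, // `mov r64, imm32`: sign-extends the 32-bit immediate
    Mov64,   // `movabs r64, imm64`: full 8-byte immediate
}

fn classify_imm(value: u64) -> ImmKind {
    if value >> 32 == 0 {
        ImmKind::Mov32
    } else if value as i64 == (value as u32 as i32) as i64 {
        ImmKind::MovSx32
    } else {
        ImmKind::Mov64
    }
}

fn main() {
    assert!(matches!(classify_imm(5), ImmKind::Mov32));
    assert!(matches!(classify_imm(u64::MAX), ImmKind::MovSx32)); // -1 sign-extends
    assert!(matches!(classify_imm(0x1234_5678_9abc_def0), ImmKind::Mov64));
}
```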
Benjamin Bouvier
d9052d0a9c machinst x64: generate copies of constants during lowering; 2020-09-11 17:41:44 +02:00
Benjamin Bouvier
cace32746f machinst x64: pattern-match addresses that are base+cst index; 2020-09-11 17:41:44 +02:00
Benjamin Bouvier
b4a2dd37a4 machinst x64: rename input_to_reg to put_input_to_reg;
Eventually, we should be able to unify this function's implementation
with the aarch64 one; but the latter does much more, and this would
require abstractions brought up in another pending PR, #2142.
2020-09-09 18:03:59 +02:00
Benjamin Bouvier
cb96d16ac7 machinst x64: inline helper used only once; 2020-09-09 18:03:59 +02:00
Benjamin Bouvier
7a833f442a machinst: common up some instruction data helpers; 2020-09-09 18:03:59 +02:00
Benjamin Bouvier
a835c247c0 machinst: make get_output_reg target independent; 2020-09-09 18:03:59 +02:00
Benjamin Bouvier
6a3c4fb54e machinst x64: rename output_to_reg to get_output_reg; 2020-09-09 18:03:59 +02:00
Benjamin Bouvier
9620ce6bdf machinst x64: mask shift count too; 2020-09-09 18:03:59 +02:00
Chris Fallin
e8f772c1ac x64 new backend: port ABI implementation to shared infrastructure with AArch64.
Previously, in #2128, we factored out a common "vanilla 64-bit ABI"
implementation from the AArch64 ABI code, with the idea that this should
be largely compatible with x64. This PR alters the new x64 backend to
make use of the shared infrastructure, removing the duplication that
existed previously. The generated code is nearly (not exactly) the same;
the only difference relates to how the clobber-save region is padded in
the prologue.

This also changes some register allocations in the aarch64 code because
call support in the shared ABI infra now passes a temp vreg in, rather
than requiring use of a fixed, non-allocable temp; tests have been
updated, and the runtime behavior is unchanged.
2020-09-08 17:59:01 -07:00
bjorn3
9428480230 Merge SignExtendAlAh and SignExtendRaxRdx 2020-09-08 15:00:24 +02:00
bjorn3
3dcda164dc Fix nits 2020-09-08 15:00:24 +02:00
bjorn3
067255ef45 x64: Implement rotl and rotr for small integers 2020-09-08 15:00:24 +02:00
bjorn3
4251a950ba x64: Implement ishl, ushr and sshr for small integers 2020-09-08 15:00:24 +02:00
bjorn3
cc35f1e9bb x64: Misc small integer fixes 2020-09-08 15:00:24 +02:00
bjorn3
ce033f2a0c x64: Fix udiv and sdiv for 8bit integers 2020-09-08 15:00:24 +02:00
bjorn3
74642b166f x64: Implement ineg and bnot 2020-09-08 15:00:24 +02:00
Johnnie Birch
a64af55cda Adds x64 packed negation for the new backend 2020-09-07 11:56:05 -07:00
Julian Seward
8ac4bd1d0d CL/newBE/x64: Lowering of scalar shifts: fix shift-by-imm generation
The logic for generation of shifts-by-immediate was not quite right.  The result was that even
shifts by an amount known at compile time were being done by moving the shift immediate into %cl
and then doing a variable shift by %cl.  The effect is worse than it sounds, because all of
those shift constants are small and often used in multiple places, so they were GVN'd up and
often ended up at the entry block of the function.  Hence these were connected to the use points
by long live ranges which got spilled.  So all in all, most of the win here comes from avoiding
spilling.

The problem was caused by this line, in the `Opcode::Ishl | Opcode::Ushr ..` case:
```
   let (count, rhs) = if let Some(cst) = ctx.get_constant(inputs[1].insn) {
```
`inputs[]` appears to refer to this CLIF instruction's inputs, and bizarrely the `insn` fields of
the `inputs[]` entries all refer to the instruction (the shift) itself.  Hence
`ctx.get_constant(inputs[1].insn)` asks "does this shift instruction produce a constant?", to
which the answer is always "no", so the shift-by-unknown-amount code is always generated.  The
fix here is to change that expression to
```
   let (count, rhs) = if let Some(cst) = ctx.get_input(insn, 1).constant {
```
`get_input`'s result conveniently includes a `constant` field of type `Option<u64>`, so we just
use that instead.
2020-08-27 11:48:35 +02:00
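
A standalone illustration of the corrected decision (plain Rust with hypothetical names, not the actual lowering code): a compile-time-constant amount becomes a shift-by-immediate, masked to the operand width just as the hardware masks `%cl`, and only an unknown amount falls back to the variable form.
```
enum ShiftKind {
    Imm(u8), // `shl $n, %reg`: no %cl, no extra live range to spill
    Cl,      // `shl %cl, %reg`: amount must be moved into %cl first
}

fn classify_shift(amount: Option<u64>, ty_bits: u8) -> ShiftKind {
    match amount {
        // x64 masks the count to the operand width, so the immediate
        // can be reduced the same way (e.g. low 6 bits for 64-bit ops).
        Some(cst) => ShiftKind::Imm((cst as u8) & (ty_bits - 1)),
        None => ShiftKind::Cl,
    }
}

fn main() {
    assert!(matches!(classify_shift(Some(3), 64), ShiftKind::Imm(3)));
    assert!(matches!(classify_shift(Some(67), 64), ShiftKind::Imm(3))); // masked
    assert!(matches!(classify_shift(None, 64), ShiftKind::Cl));
}
```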
Benjamin Bouvier
7c85654285 Address review comments. 2020-08-24 17:00:30 +02:00
Benjamin Bouvier
efff43e769 machinst x64: fold address modes on loads/stores; 2020-08-24 17:00:30 +02:00
Benjamin Bouvier
b830ee79de machinst x64: commute operands of integer operations if one input is an immediate; 2020-08-24 17:00:30 +02:00
Benjamin Bouvier
cca10b87cb machinst x64: optimize select/brz/brnz when the input is a comparison; 2020-08-24 17:00:30 +02:00
Julian Seward
620e4b4e82 This patch fills in the missing pieces needed to support wasm atomics on newBE/x64.
It does this by providing an implementation of the CLIF instructions `AtomicRmw`, `AtomicCas`,
`AtomicLoad`, `AtomicStore` and `Fence`.

The translation is straightforward.  `AtomicCas` is translated into x64 `cmpxchg`, `AtomicLoad`
becomes a normal load because x64-TSO provides adequate sequencing, `AtomicStore` becomes a
normal store followed by `mfence`, and `Fence` becomes `mfence`.  `AtomicRmw` is the only
complex case: it becomes a normal load, followed by a loop which computes an updated value,
tries to `cmpxchg` it back to memory, and repeats if necessary.

This is a minimum-effort initial implementation.  `AtomicRmw` could be implemented more
efficiently using LOCK-prefixed integer read-modify-write instructions in the case where the old
value in memory is not required.  Subsequent work could add that, if required.

The x64 emitter has been updated to emit the new instructions, obviously.  The `LegacyPrefix`
mechanism has been revised to handle multiple prefix bytes, not just one, since it is now
sometimes necessary to emit both 0x66 (Operand Size Override) and 0xF0 (Lock).

In the aarch64 implementation of atomics, there has been some minor renaming for the sake of
clarity, and for consistency with this x64 implementation.
2020-08-24 11:50:06 +02:00
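
To make the `AtomicRmw` shape concrete, here is a minimal standalone sketch of the same load/compute/cmpxchg-retry loop in ordinary Rust (an illustration of the pattern the lowering generates, not Cranelift code; `atomic_rmw_add` is a hypothetical name):
```
use std::sync::atomic::{AtomicU64, Ordering};

// Read the old value, compute the update, and retry the compare-exchange
// until no other thread has changed the cell in between -- the same loop
// shape the AtomicRmw lowering described above builds around `cmpxchg`.
fn atomic_rmw_add(cell: &AtomicU64, operand: u64) -> u64 {
    let mut old = cell.load(Ordering::SeqCst);
    loop {
        let new = old.wrapping_add(operand); // the "computes an updated value" step
        match cell.compare_exchange(old, new, Ordering::SeqCst, Ordering::SeqCst) {
            Ok(prev) => return prev,     // cmpxchg succeeded; return the old value
            Err(actual) => old = actual, // lost a race; retry with the fresh value
        }
    }
}

fn main() {
    let cell = AtomicU64::new(40);
    assert_eq!(atomic_rmw_add(&cell, 2), 40); // AtomicRmw yields the old value
    assert_eq!(cell.load(Ordering::SeqCst), 42);
}
```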
Johnnie Birch
a31336996c Add support for some packed multiplication for new x64 backend
Adds support for i32x4 and i16x8, and lowering for pmuludq in
preparation for i64x2.
2020-08-19 10:24:14 -07:00
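
Since `pmuludq` multiplies only the low 32-bit half of each 64-bit lane, the i64x2 multiply being prepared for is assembled from partial products. A standalone scalar sketch of the per-lane arithmetic (plain Rust, hypothetical function name; the hi*hi term shifts entirely out of the low 64 bits, so three 32x32 products suffice):
```
// a*b mod 2^64 = lo(a)*lo(b) + ((lo(a)*hi(b) + hi(a)*lo(b)) << 32);
// the hi(a)*hi(b) term contributes only to bits >= 64 and is dropped.
fn mul64_from_32bit_products(a: u64, b: u64) -> u64 {
    let (a_lo, a_hi) = (a & 0xFFFF_FFFF, a >> 32);
    let (b_lo, b_hi) = (b & 0xFFFF_FFFF, b >> 32);
    let cross = a_lo.wrapping_mul(b_hi).wrapping_add(a_hi.wrapping_mul(b_lo));
    a_lo.wrapping_mul(b_lo).wrapping_add(cross << 32)
}

fn main() {
    let (a, b) = (0x1234_5678_9abc_def0u64, 0x0fed_cba9_8765_4321u64);
    assert_eq!(mul64_from_32bit_products(a, b), a.wrapping_mul(b));
}
```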
Johnnie Birch
38ef98700f Adds packed integer subtraction 2020-08-12 09:41:20 -07:00
Johnnie Birch
e60a6f2ad2 Fixup packed integer add lowering
Remove a stray print statement.
Fix a bug in a match statement that caused unreachable code.
2020-08-06 22:25:18 -07:00
Johnnie Birch
dd6ba5f9d7 Lower packed integer add instructions (v128)
Adds lowering support for packed integer add instructions and a helper
function for determining whether an instruction's type indicates it is
packed (sketched after this entry).
2020-08-06 22:25:18 -07:00
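
A minimal sketch of what such a packed-type helper might look like (illustrative Rust; `LaneInfo` and its field are hypothetical stand-ins for Cranelift's IR type queries):
```
// Hypothetical stand-in for the IR type; real code would query the
// lane count on Cranelift's `Type` directly.
struct LaneInfo {
    lanes: u16,
}

// A type is "packed" (SIMD) when it carries more than one lane.
fn is_packed(ty: &LaneInfo) -> bool {
    ty.lanes > 1
}

fn main() {
    assert!(is_packed(&LaneInfo { lanes: 4 }));  // e.g. I32X4
    assert!(!is_packed(&LaneInfo { lanes: 1 })); // e.g. scalar I32
}
```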
Johnnie Birch
2eadc6e2a8 Add packed integer add opcodes (v128) to instruction set enum 2020-08-06 22:25:18 -07:00
Andrew Brown
4cb36afd7b machinst x64: refactor to use types::[type] everywhere
This change is a pure refactoring--no change to functionality. It removes `use crate::ir::types::*` imports and instead uses prefixed identifiers such as `types::I32` throughout the x64 code. Though it increases code verbosity, this change makes it clearer where the type identifiers come from (they are generated by `cranelift-codegen-meta`, so without a prefix it is difficult to find their origin), avoids IDE confusion (e.g. CLion flags the un-prefixed identifiers as errors), and avoids importing unwanted identifiers into the namespace.
2020-08-05 10:45:45 -07:00
Andrew Brown
8cfff26957 machinst x64: implement floating point comparisons
Note that this fixes an encoding issue in which the packed-single and packed-double prefixes were flipped (the prefix convention is sketched after this entry).
2020-08-04 13:24:38 -07:00
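
For context, a sketch of the convention at issue (illustrative Rust; the helper is hypothetical): packed-single SSE instructions take no legacy prefix while their packed-double counterparts take 0x66, e.g. CMPPS encodes as 0F C2 and CMPPD as 66 0F C2, so flipping the prefixes selects the wrong instruction.
```
// Hypothetical helper: pick the SSE legacy prefix for a packed float op.
// Packed single (e.g. CMPPS, 0F C2) takes no prefix; packed double
// (e.g. CMPPD, 66 0F C2) takes the 0x66 operand-size prefix.
fn packed_float_prefix(is_double: bool) -> Option<u8> {
    if is_double { Some(0x66) } else { None }
}

fn main() {
    assert_eq!(packed_float_prefix(false), None);      // CMPPS
    assert_eq!(packed_float_prefix(true), Some(0x66)); // CMPPD
}
```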
Andrew Brown
c21fe0eb73 machinst x64: use assert_eq! when possible 2020-08-04 09:18:45 -07:00
Andrew Brown
999e04a2c4 machinst x64: refactor imports to use rustfmt convention
This change is a pure refactoring--no change to functionality. It removes newlines between the `use ...` statements in the x64 backend so that rustfmt can format them according to its convention. I noticed some files had followed a manual convention but subsequent additions did not seem to fit; this change fixes that and lightly coalesces some of the occurrences of `use a::b; use a::c;` into `use a::{b, c}`.
2020-08-04 09:17:54 -07:00