Commit Graph

215 Commits

Johnnie Birch
f27c0f3434 Adds support for signed packed integer conversion to float
f32x4.convert_i32x4_s
2020-10-16 14:16:53 -07:00
Andrew Brown
a26e9e9a20 [machinst x64]: lower load_splat using memory addressing 2020-10-14 09:43:33 -07:00
Andrew Brown
d990dd4c9a [machinst x64]: add source locations to more instruction formats
In order to register traps for `load_splat`, several instruction formats need knowledge of `SourceLoc`s; however, since the x64 backend does not correctly and completely register traps for `RegMem::Mem` variants, I opened https://github.com/bytecodealliance/wasmtime/issues/2290 to discuss and resolve this issue. In the meantime, the current behavior (i.e. remaining largely unaware of `SourceLoc`s) is retained.
2020-10-14 09:43:33 -07:00
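A minimal sketch of the mechanism described above, assuming illustrative names throughout (this is not the actual wasmtime API): trap registration during emission needs the source location of the originating CLIF instruction, so a potentially trapping instruction format has to carry one.

```
/// Hypothetical stand-in for Cranelift's source-location type.
#[derive(Clone, Copy, Debug)]
struct SourceLoc(u32);

/// A load-like instruction format carrying an optional source location so
/// that emission can register a trap record for the memory access.
enum Inst {
    XmmLoad { addr_reg: u8, srcloc: Option<SourceLoc> },
}

/// Emission records (code_offset, srcloc) pairs for potentially trapping
/// instructions; the runtime later maps a fault at that offset back to wasm.
fn emit(inst: &Inst, code_offset: usize, traps: &mut Vec<(usize, SourceLoc)>) {
    match inst {
        Inst::XmmLoad { srcloc, .. } => {
            if let Some(loc) = srcloc {
                traps.push((code_offset, *loc));
            }
            // ... machine-code bytes would be emitted here ...
        }
    }
}
```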
Andrew Brown
1799b0947f [machinst x64]: implement packed bitselect 2020-10-09 10:04:50 -07:00
Andrew Brown
95f0e96e62 [machinst x64]: implement packed not
This begins to use `Inst` helper functions as discussed in #2252.
2020-10-09 10:04:50 -07:00
Andrew Brown
3c55523d40 [machinst x64]: implement packed and, and_not, xor, or 2020-10-09 10:04:50 -07:00
Benjamin Bouvier
e8c2a1763a machinst x64: avoid emitting movzx when the input is a 32-bit ALU operation; 2020-10-09 18:49:27 +02:00
Benjamin Bouvier
3980a43cda machinst x64: use the (base,offset) addressing mode even in the presence of a uextend; 2020-10-09 18:49:27 +02:00
Andrew Brown
c8cce5d2d7 [machinst x64]: enable packed saturated arithmetic 2020-10-08 08:46:20 -07:00
Benjamin Bouvier
a470f1e0cd machinst x64: remove dead code and an allow(dead_code) annotation;
The BranchTarget is always used as a label, so just use a plain
MachLabel in this case.
2020-10-08 10:05:57 +02:00
Andrew Brown
ce44719e1f refactor: change LowerCtx::get_immediate to return a DataValue
This change abstracts away (from the perspective of the new backend) how immediate values are stored in InstructionData. It gathers large immediates from the necessary places (e.g. the constant pool) and delegates to `InstructionData::imm_value` for the rest. This refactor only touches the original users of `LowerCtx::get_immediate`, but a future change could do the same for any place where the new backend accesses InstructionData directly to retrieve immediates.
2020-10-07 12:17:17 -07:00
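A rough sketch of the shape of this refactor, with simplified types (the real trait lives in Cranelift's lowering infrastructure; the names below are illustrative):

```
/// A backend-independent immediate value; large immediates (e.g. v128
/// constants gathered from the constant pool) fit alongside small ones.
#[derive(Debug, PartialEq)]
enum DataValue {
    I32(i32),
    I64(i64),
    V128([u8; 16]),
}

/// Simplified stand-in for the lowering context: a backend asks for an
/// instruction's immediate without knowing how InstructionData stores it.
trait LowerCtxSketch {
    fn get_immediate(&self, inst: u32) -> Option<DataValue>;
}
```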
Chris Fallin
71768bb6cf Fix AArch64 ABI to respect half-caller-save, half-callee-save vec regs.
This PR updates the AArch64 ABI implementation so that it properly
respects that v8-v15 inclusive have callee-save lower halves and
caller-save upper halves, by conservatively approximating (to full
registers) in the appropriate directions when generating prologue
clobber-saves and when informing the regalloc of clobbered regs across
callsites.

In order to prevent saving all of these vector registers in the prologue
of every non-leaf function due to the above approximation, this also
makes use of a new regalloc.rs feature to exclude call instructions'
writes from the clobber set returned by register allocation. This is
safe whenever the caller and callee have the same ABI (because anything
the callee could clobber, the caller is allowed to clobber as well
without saving it in the prologue).

Fixes #2254.
2020-10-06 14:44:02 -07:00
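A hedged sketch of the two-direction approximation described above (register numbering per the AArch64 vector registers; the function names are illustrative, not the actual ABI code):

```
/// v8..v15 have callee-save lower halves and caller-save upper halves.
fn has_split_save_convention(vreg: u8) -> bool {
    (8..=15).contains(&vreg)
}

/// At a callsite, approximate toward caller-save: the upper halves of
/// v8..v15 may be clobbered, and we only track whole registers, so every
/// vector register must be treated as potentially clobbered by the callee.
fn assume_clobbered_across_call(_vreg: u8) -> bool {
    true
}

/// In the prologue, approximate toward callee-save: if the function writes
/// any part of v8..v15, save the whole register, preserving the callee-save
/// lower half (and, conservatively, the upper half too).
fn save_in_prologue_if_written(vreg: u8) -> bool {
    has_split_save_convention(vreg)
}
```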
Benjamin Bouvier
4a10a78e33 machinst x64: remove non_snake_case; 2020-10-05 17:44:31 +02:00
Johnnie Birch
7b4d173b90 Adds packed floating-point min/max for x64 to the new backend
Enables the simd_f32x4 and simd_f64x2 spec tests
2020-10-02 16:20:10 -07:00
Andrew Brown
ca1b76421a [machinst x64]: remove duplicate code to insert a lane 2020-10-02 08:29:31 -07:00
Andrew Brown
c42a097a0c [machinst x64]: use is64 instead of w_bit 2020-10-02 08:29:31 -07:00
Andrew Brown
16a2538ecd [machinst x64]: rename Inst::XmmUninitializedValue and document
This approach is not ideal but avoids an extra instruction; perhaps at some point, as mentioned in https://github.com/bytecodealliance/wasmtime/pull/2248, we will add the extra instruction or refactor things so that this `Inst` variant is unnecessary.
2020-10-02 08:29:31 -07:00
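A minimal sketch of the idea behind this pseudo-instruction, with simplified types (not the actual definition): it emits no machine code and exists only to mark its destination as defined, so a following partial write sees a defined input without spending an instruction to zero the register.

```
#[derive(Clone, Copy)]
struct Reg(u8);
#[derive(Clone, Copy)]
struct Writable(Reg);

enum Inst {
    /// Pseudo-instruction: defines `dst` for the register allocator
    /// without emitting any bytes.
    XmmUninitializedValue { dst: Writable },
}

fn emit(inst: &Inst, _sink: &mut Vec<u8>) {
    match inst {
        // Intentionally emits nothing.
        Inst::XmmUninitializedValue { .. } => {}
    }
}
```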
Andrew Brown
50b9399006 [machinst x64]: lower remaining lane operations: any_true, all_true, splat 2020-10-02 08:29:31 -07:00
Andrew Brown
74226d6781 [machinst x64]: add integer comparisons 2020-10-02 08:29:31 -07:00
Andrew Brown
4484a00ea5 [machinst x64]: calculate extension modes in one place 2020-09-29 14:48:59 -07:00
Andrew Brown
715be68101 [machinst x64]: assert lane is correct size for extractlane
This change applies a good suggestion @bjorn3 made in #2230 that I forgot to implement there.
2020-09-29 09:34:22 -07:00
Andrew Brown
f50d905152 [machinst x64]: refactor using added RegMem::from(Writable<Reg>) 2020-09-29 08:45:12 -07:00
Andrew Brown
e3eb098c99 [machinst x64]: add swizzle implementation 2020-09-29 08:45:12 -07:00
Andrew Brown
050f078f86 [machinst x64]: add saturating addition implementation 2020-09-29 08:45:12 -07:00
Andrew Brown
a64abf9b76 [machinst x64]: add shuffle implementation 2020-09-29 08:45:12 -07:00
Andrew Brown
f4836f9ca9 [machinst x64]: add extractlane implementation 2020-09-29 08:45:12 -07:00
Andrew Brown
29fa894790 [machinst x64]: add insertlane implementation 2020-09-29 08:45:12 -07:00
Andrew Brown
ac2bf9d246 [machinst x64]: add packed min/max implementations 2020-09-23 15:40:46 -07:00
Andrew Brown
7546d98844 [machinst x64]: add avg_round implementation 2020-09-23 15:40:46 -07:00
Andrew Brown
b202464fa0 [machinst x64]: add iabs implementation 2020-09-23 15:40:46 -07:00
Johnnie Birch
07d0d32b69 Adds i64x2.mul for the new backend targeting x64 2020-09-11 13:17:42 -07:00
Benjamin Bouvier
3849dc18b1 machinst x64: revamp integer immediate emission;
In particular:

- try to optimize the integer emission into a 32-bit emission when the
high bits are all zero, and stop relying on the caller of `imm_r` to
ensure this (sketched below).
- rename `Inst::imm_r`/`Inst::Imm_R` to `Inst::imm`/`Inst::Imm`.
- generate a sign-extending mov of a 32-bit immediate to 64 bits whenever
possible.
- fix a few places where the previous commit introduced the generation of
zero constants with xor when calling `put_input_to_reg`, thus clobbering
the flags before they were read.
2020-09-11 18:13:30 +02:00
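A hedged sketch of the encoding choice in the first and third bullets above (illustrative types; not the actual `Inst::imm` implementation). The fourth bullet exists because materializing zero with `xor` clobbers the flags, so that trick must not be used while flags are live.

```
enum ImmEncoding {
    /// 32-bit `mov`, which implicitly zero-extends to 64 bits.
    Mov32ZeroExtend(u32),
    /// The sign-extending `mov r64, imm32` form.
    Mov32SignExtend(i32),
    /// Full 64-bit `movabs`.
    MovAbs64(u64),
}

fn choose_imm_encoding(value: u64) -> ImmEncoding {
    if value <= u32::MAX as u64 {
        // High 32 bits are all zero: the short form suffices.
        ImmEncoding::Mov32ZeroExtend(value as u32)
    } else if (value as i64) >= i64::from(i32::MIN) && (value as i64) <= i64::from(i32::MAX) {
        // Negative values that fit in i32 can use the sign-extending form.
        ImmEncoding::Mov32SignExtend(value as i32)
    } else {
        ImmEncoding::MovAbs64(value)
    }
}
```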
Benjamin Bouvier
d9052d0a9c machinst x64: generate copies of constants during lowering; 2020-09-11 17:41:44 +02:00
Benjamin Bouvier
cace32746f machinst x64: pattern-match addresses that are base+cst index; 2020-09-11 17:41:44 +02:00
Benjamin Bouvier
b4a2dd37a4 machinst x64: rename input_to_reg to put_input_to_reg;
Eventually, we should be able to unify this function's implementation
with the aarch64 one; but the latter does much more, and this would
require the abstractions brought up in the pending PR #2142.
2020-09-09 18:03:59 +02:00
Benjamin Bouvier
cb96d16ac7 machinst x64: inline helper used only once; 2020-09-09 18:03:59 +02:00
Benjamin Bouvier
7a833f442a machinst: common up some instruction data helpers; 2020-09-09 18:03:59 +02:00
Benjamin Bouvier
a835c247c0 machinst: make get_output_reg target independent; 2020-09-09 18:03:59 +02:00
Benjamin Bouvier
6a3c4fb54e machinst x64: rename output_to_reg to get_output_reg; 2020-09-09 18:03:59 +02:00
Benjamin Bouvier
9620ce6bdf machinst x64: mask shift count too; 2020-09-09 18:03:59 +02:00
Chris Fallin
e8f772c1ac x64 new backend: port ABI implementation to shared infrastructure with AArch64.
Previously, in #2128, we factored out a common "vanilla 64-bit ABI"
implementation from the AArch64 ABI code, with the idea that this should
be largely compatible with x64. This PR alters the new x64 backend to
make use of the shared infrastructure, removing the duplication that
existed previously. The generated code is nearly (not exactly) the same;
the only difference relates to how the clobber-save region is padded in
the prologue.

This also changes some register allocations in the aarch64 code because
call support in the shared ABI infra now passes a temp vreg in, rather
than requiring use of a fixed, non-allocable temp; tests have been
updated, and the runtime behavior is unchanged.
2020-09-08 17:59:01 -07:00
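A sketch of the API-shape difference mentioned above (signatures are illustrative, not the actual trait): the call sequence now receives an allocatable virtual temp instead of relying on a fixed, non-allocatable scratch register.

```
#[derive(Clone, Copy)]
struct Reg(u8);

// Before: the call sequence was hard-wired to a fixed scratch register,
// which therefore had to be kept out of the register allocator's pool.
fn gen_call_fixed_scratch(_callee: &str, _fixed_scratch: Reg) {
    // ... build the call using the reserved register ...
}

// After: the caller passes in a temp vreg; the register allocator is free
// to assign it like any other, so no register is permanently reserved.
fn gen_call_with_tmp(_callee: &str, _tmp_vreg: Reg) {
    // ... build the call using the allocated temp ...
}
```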
bjorn3
9428480230 Merge SignExtendAlAh and SignExtendRaxRdx 2020-09-08 15:00:24 +02:00
bjorn3
3dcda164dc Fix nits 2020-09-08 15:00:24 +02:00
bjorn3
067255ef45 x64: Implement rotl and rotr for small integers 2020-09-08 15:00:24 +02:00
bjorn3
4251a950ba x64: Implement ishl, ushr and sshr for small integers 2020-09-08 15:00:24 +02:00
bjorn3
cc35f1e9bb x64: Misc small integer fixes 2020-09-08 15:00:24 +02:00
bjorn3
ce033f2a0c x64: Fix udiv and sdiv for 8bit integers 2020-09-08 15:00:24 +02:00
bjorn3
74642b166f x64: Implement ineg and bnot 2020-09-08 15:00:24 +02:00
Johnnie Birch
a64af55cda Adds x64 packed negation for the new backend 2020-09-07 11:56:05 -07:00
Julian Seward
8ac4bd1d0d CL/newBE/x64: Lowering of scalar shifts: fix shift-by-imm generation
The logic for generation of shifts-by-immediate was not quite right.  The result was that even
shifts by an amount known at compile time were being done by moving the shift immediate into %cl
and then doing a variable shift by %cl.  The effect is worse than it sounds, because all of
those shift constants are small and often used in multiple places, so they were GVN'd up and
often ended up at the entry block of the function.  Hence these were connected to the use points
by long live ranges which got spilled.  So all in all, most of the win here comes from avoiding
spilling.

The problem was caused by this line, in the `Opcode::Ishl | Opcode::Ushr ..` case:
```
   let (count, rhs) = if let Some(cst) = ctx.get_constant(inputs[1].insn) {
```
`inputs[]` appears to refer to this CLIF instruction's inputs, and, bizarrely, the
`inputs[].insn` values all refer to the instruction (the shift) itself.  Hence
`ctx.get_constant(inputs[1].insn)` asks "does this shift instruction produce a
constant?", to which the answer is always "no", so the shift-by-unknown-amount
code is always generated.  The fix here is to change that expression to
```
   let (count, rhs) = if let Some(cst) = ctx.get_input(insn, 1).constant {
```
`get_input`'s result conveniently includes a `constant` field of type `Option<u64>`, so we just
use that instead.
2020-08-27 11:48:35 +02:00