Commit Graph

2926 Commits

Chris Fallin
34d9931ed8 Fix Wasm translator bug: end of top-level frame is branched-to only for fallthrough returns.
The bug made the value of `state.reachable()` inaccurate when observed at
the tail of a function (in the post-function hook) after an ordinary
return instruction.
2020-11-25 10:55:38 -08:00
Nick Fitzgerald
93c199363f Merge pull request #2449 from bytecodealliance/cfallin/add-pre-host-hooks
Add FuncEnvironment hooks to generate prologue and epilogue code.
2020-11-24 17:48:29 -08:00
Chris Fallin
4300c2c075 Add FuncEnvironment hooks to generate prologue and epilogue code.
In some cases, it is useful to do some work at entry to or exit from a
Cranelift function translated from WebAssembly. This PR adds two
optional methods to the `FuncEnvironment` trait to do just this,
analogous to the pre/post-hooks on operators that already exist.

This PR also includes a drive-by fix for a compilation error on the
latest nightly, where `.is_empty()` on a `Range` ambiguously refers to
either the inherent `Range` impl or the `ExactSizeIterator` impl and
fails to resolve.
2020-11-24 16:36:15 -08:00
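
A minimal standalone sketch of the hook pattern this commit adds: the trait and method names (`before_translate_function`, `after_translate_function`) follow the commit's intent but are not necessarily the exact `cranelift-wasm` signatures, and `Builder` stands in for the real function builder.

```
// Standalone model of optional prologue/epilogue hooks; all types here
// are illustrative, not the real cranelift-wasm API.
struct Builder {
    insts: Vec<String>,
}

impl Builder {
    fn emit(&mut self, inst: &str) {
        self.insts.push(inst.to_string());
    }
}

trait FuncEnv {
    // Default no-op hooks, so existing implementors are unaffected.
    fn before_translate_function(&mut self, _b: &mut Builder) {}
    fn after_translate_function(&mut self, _b: &mut Builder) {}
}

struct InstrumentedEnv;

impl FuncEnv for InstrumentedEnv {
    fn before_translate_function(&mut self, b: &mut Builder) {
        b.emit("call __enter_hook"); // hypothetical prologue instrumentation
    }
    fn after_translate_function(&mut self, b: &mut Builder) {
        b.emit("call __exit_hook"); // hypothetical epilogue instrumentation
    }
}

fn translate_function(env: &mut dyn FuncEnv, b: &mut Builder) {
    env.before_translate_function(b);
    b.emit("...function body...");
    env.after_translate_function(b);
}
```
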
Alex Crichton
62be6841e4 Propagate optional import names to the wasmtime/C API
With the module linking proposal the field name on imports is now
optional, and only the module is required to be specified. This commit
propagates this API change to the boundary of wasmtime's API, ensuring
consumers are aware of what's optional with module linking and what
isn't. Note that all existing users are expected either to update
accordingly or to unwrap the result, since module linking is presumably
disabled for them.
2020-11-23 15:26:26 -08:00
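
A hedged sketch of consuming this API change; it assumes wasmtime's `Module::imports()` iterator and that `ImportType::name()` now returns `Option<&str>`:

```
use wasmtime::Module;

// With module linking disabled, `name()` is expected to always be `Some`,
// so existing users can unwrap; with it enabled, `None` means a
// module-level import.
fn list_imports(module: &Module) {
    for import in module.imports() {
        match import.name() {
            Some(field) => println!("{}::{}", import.module(), field),
            None => println!("{} (module-level import)", import.module()),
        }
    }
}
```
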
Johnnie Birch
ade9f12c72 Add support for X86_64 SIMD narrow instructions for vcode backend
Adds lowering support for:
i8x16.narrow_i16x8_s
i8x16.narrow_i16x8_u
i16x8.narrow_i32x4_s
i16x8.narrow_i32x4_u
2020-11-23 09:58:39 -08:00
Nick Fitzgerald
1dd20b4371 Merge pull request #2400 from MattX/improve-finalize-msg
Specify unsealed / unfilled blocks
2020-11-23 08:45:08 -08:00
Johnnie Birch
2cc501427e Add remaining X86_64 support for pack w/ signed/unsigned saturation
Adds lowering for packssdw, packusdw, packuswb
2020-11-22 23:14:29 -08:00
Johnnie Birch
258013cff1 Add support for SwidenHigh and UwidenHigh for the X86_64 vcode backend
Support is based on SSE4.1
2020-11-22 22:14:19 -08:00
Johnnie Birch
124096735b Add support for palignr for X86_64 vcode backend 2020-11-22 22:14:02 -08:00
Johnnie Birch
f9937575d6 Add support for SwidenLow and UwidenLow for the X86_64 vcode backend
Adds support using lowerings compatible with SSE4.1
2020-11-22 21:38:53 -08:00
Johnnie Birch
615a575da1 Add support for x86_64 packed move lowering for the vcode backend 2020-11-22 20:23:00 -08:00
Johnnie Birch
b6d783a120 Adds support for i32x4.trunc_sat_f32x4_u 2020-11-22 12:00:54 -08:00
Matt
27f3307f24 Replace if + panic! with assert!
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
2020-11-21 00:03:41 -05:00
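
The shape of the change, schematically (condition and message are illustrative):

```
fn finalize(block_is_sealed: bool) {
    // Before: hand-rolled check.
    // if !block_is_sealed {
    //     panic!("finalize called on an unsealed block");
    // }

    // After: same behavior, one idiomatic line.
    assert!(block_is_sealed, "finalize called on an unsealed block");
}
```
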
Yury Delendik
e34b410381 Update wasmparser for exception handling (#2431) 2020-11-19 14:08:10 -06:00
Alex Crichton
4d64c68b05 Run rustfmt 1.48
Run rustfmt over wasmtime with the new stable release, which looks like
it wants to reformat a few lines.
2020-11-19 11:12:30 -08:00
Chris Fallin
073c727a74 x64 and aarch64: carry MemFlags on loads/stores; don't emit trap info unless an op can trap.
This end result was previously achieved by carrying a `SourceLoc` on
every load/store, which was somewhat cumbersome, and which encoded
metadata about a memory reference (whether it can trap) only indirectly,
by its presence or absence. We have a type for this -- `MemFlags` -- that
tells us everything we might want to know about a load or store, and we
should plumb it through to code emission instead.

This PR attaches a `MemFlags` to an `Amode` on x64, and puts it on load
and store `Inst` variants on aarch64. These two choices seem to factor
things out in the nicest way: there are relatively few load/store insts
on aarch64 but many addressing modes, while the opposite is true on x64.
2020-11-17 11:43:06 -08:00
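
A simplified sketch of the emission-side effect of this change: trap metadata is recorded only when the `MemFlags` say the access can trap. The flag and sink types below are stand-ins, not Cranelift's exact APIs.

```
#[derive(Clone, Copy)]
struct MemFlags {
    notrap: bool, // access is known not to trap (e.g. proven in-bounds)
}

struct Sink {
    trap_sites: Vec<u32>, // code offsets that need trap records
}

fn emit_load(sink: &mut Sink, code_offset: u32, flags: MemFlags) {
    // ... emit the load encoding itself ...
    // Previously trap info was keyed on a SourceLoc being present on every
    // load/store; now the MemFlags say directly whether a record is needed.
    if !flags.notrap {
        sink.trap_sites.push(code_offset);
    }
}
```
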
Chris Fallin
b97f07b405 x64 backend: merge loads into ALU ops when appropriate.
This PR makes use of the support in #2366 for sinking effectful
instructions and merging them with consumers. In particular, on x86, we
want to make use of the ability of many instructions to load one operand
directly from memory. That is, instead of this:

```
    movq 0(%rdi), %rax
    addq %rax, %rbx
```

we want to generate this:

```
    addq 0(%rdi), %rbx
```

As described in more detail in #2366, sinking and merging the load is
only possible under certain conditions. In particular, we need to ensure
that the use is the *only* use (otherwise the load happens more than
once), and we need to ensure that it does not move across other
effectful ops (see #2366 for how we ensure this).

This change is actually fairly simple, given that all the framework is
in place: we simply pattern-match a load on one operand of an ALU
instruction that takes an RMI (reg, mem, or immediate) operand, and
generate the mem form when we match.

Also makes a drive-by improvement in the x64 backend to use
statically-monomorphized `LowerCtx` types rather than a `&mut dyn
LowerCtx`.

On `bz2.wasm`, this results in ~1% instruction-count reduction. More is
likely possible by following up with other instructions that can merge
memory loads as well.
2020-11-17 11:06:46 -08:00
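
In schematic form, the pattern-match amounts to choosing the memory form of an RMI operand when the input is the sole use of a mergeable load. The names below are illustrative, not the backend's real helpers.

```
enum RegMemImm {
    Reg(u32),
    Mem { base: u32, offset: i32 },
}

// If the ALU input is produced by a load with no other uses (and the load
// may legally sink to this point), emit the memory form; otherwise put
// the input in a register as before.
fn alu_rhs(sole_use_load: Option<(u32, i32)>, reg: u32) -> RegMemImm {
    match sole_use_load {
        Some((base, offset)) => RegMemImm::Mem { base, offset },
        None => RegMemImm::Reg(reg),
    }
}
```
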
Chris Fallin
712ff22492 AArch64 SIMD: pattern-match load+splat into LD1R instruction. 2020-11-16 15:59:28 -08:00
Chris Fallin
39b5736727 Remove LoadSplat opcode, in preparation for pattern-matching Load+Splat.
This was added as an incremental step to improve AArch64 code quality in
PR #2278. At the time, we did not have a way to pattern-match the load +
splat opcode sequence that the relevant Wasm opcodes lowered to.
However, now with PR #2366, we can merge effectful instructions such as
loads into other ops, and so we can do this pattern matching directly.
The pattern-matching update will come in a subsequent commit.
2020-11-16 15:31:56 -08:00
Chris Fallin
3c8cb7b908 MachInst lowering logic: allow effectful instructions to merge.
This PR updates the "coloring" scheme that accounts for side-effects in
the MachInst lowering logic. As a result, the new backends will now be
able to merge effectful operations (such as memory loads) *into* other
operations; previously, only the other way (pure ops merged into
effectful ops) was possible. This will allow, for example, a load+ALU-op
combination, as is common on x86. It should even allow a load + ALU-op +
store sequence to merge into one lowered instruction.

The scheme arose from many fruitful discussions with @julian-seward1
(thanks!); significant credit is due to him for the insights here.

The first insight is that, given the right basic conditions (namely,
that the root instruction is the only use of an effectful instruction's
result), all we need is that the "color" of the effectful instruction is
*one less* than the color of the current instruction. It's easier to
think about colors on the program points between instructions: if the
color coming *out* of the first (effectful def) instruction and *in* to
the second (effectful or effect-free use) instruction are the same, then
they can merge. The color denotes a version of global state; if two
points have the same color, no other effectful op happened between them.

The second insight is that we can keep state as we scan, tracking the
"current color", and *update* this when we sink (merge) an op. Hence
when we sink a load into another op, we effectively *re-color* every
instruction it moved over; this may allow further sinks.

Consider the example (and assume that we consider loads effectful in
order to conservatively ensure a strong memory model; otherwise, replace
with other effectful value-producing insts):

```
  v0 = load x
  v1 = load y
  v2 = add v0, 1
  v3 = add v1, 1
```

Scanning from bottom to top, we first see the add producing `v3` and we
can sink the load producing `v1` into it, producing a load + ALU-op
machine instruction. This is legal because `v1` moves over only `v2`,
which is a pure instruction. Consider, though, `v2`: under a simple
scheme that has no other context, `v0` could not sink to `v2` because it
would move over `v1`, another load. But because we already sunk `v1`
down to `v3`, we are free to sink `v0` to `v2`; the update of the
"current color" during the scan allows this.

This PR also cleans up the `LowerCtx` interface a bit at the same time:
whereas previously it always gave some subset of (constant, mergeable
inst, register) directly from `LowerCtx::get_input()`, it now returns
zero or more of (constant, mergeable inst) from
`LowerCtx::maybe_get_input_as_source_or_const()`, and returns the
register only from `LowerCtx::put_input_in_reg()`. This removes the need
to explicitly denote uses of the register, so it's a little safer.

Note that this PR does not actually make use of the new ability to merge
loads into other ops; that will come in future PRs, especially to
optimize the `x64` backend by using direct-memory operands.
2020-11-16 14:53:45 -08:00
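
The coloring rule can be modeled in a few lines. This toy version assigns colors top-down (each effectful instruction starts a new color) and checks the merge condition on the example above; all names are illustrative.

```
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Color(u32);

// The program point after each instruction gets a new color iff the
// instruction is effectful.
fn colors_out(effectful: &[bool]) -> Vec<Color> {
    let mut cur = 0;
    effectful
        .iter()
        .map(|&e| {
            if e {
                cur += 1;
            }
            Color(cur)
        })
        .collect()
}

fn main() {
    // v0 = load x ; v1 = load y ; v2 = add v0, 1 ; v3 = add v1, 1
    let c = colors_out(&[true, true, false, false]);
    assert_eq!(c, [Color(1), Color(2), Color(2), Color(2)]);
    // Merge condition: color *out* of the def == color *in* to the use.
    // v1's color-out is 2 and v3's color-in (= v2's color-out) is 2, so the
    // load of y may sink into v3's add. v0's color-out is 1 but v2's
    // color-in is 2, so it is blocked; once v1 sinks, the scan re-colors
    // the range v1 moved over and v0 can sink into v2 too.
}
```
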
Chris Fallin
7b9d870030 Merge pull request #2410 from cfallin/x64-gc
Fix and enable GC on new x64 backend.
2020-11-13 09:40:05 -08:00
Chris Fallin
88fce766b0 Merge pull request #2411 from cfallin/x86-backend-cfg
Don't run old x86 backend-specific tests with new x64 backend.
2020-11-13 09:29:16 -08:00
Joey Gouly
70cbc4ca7c arm64: Refactor Inst::Extend handling
This refactors the handling of Inst::Extend and simplifies the lowering
of Bextend and Bmask, which allows the use of SBFX instructions for
extensions from 1-bit booleans. Other extensions use aliases of BFM, and
the code was changed to reflect that rather than hard-coding bit
patterns. Also, ImmLogic is now implemented, so another hard-coded
instruction can be removed.

As part of looking at boolean handling, `normalize_boolean_result` was
changed to `materialize_boolean_result`, such that it can use either
CSET or CSETM. Using CSETM saves an instruction (previously CSET + SUB)
for booleans wider than 1 bit.

Copyright (c) 2020, Arm Limited.
2020-11-13 16:17:25 +00:00
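
The CSETM saving, schematically: a "true" boolean wider than one bit is represented as all-ones, which CSETM produces directly. A sketch under that assumption, not the backend's actual code:

```
// Pick the single instruction that materializes a boolean result:
// 1-bit booleans want 0/1 (CSET); wider booleans want 0/all-ones (CSETM),
// which previously took CSET followed by a SUB.
fn materialize_boolean_result(bool_bits: u8) -> &'static str {
    if bool_bits == 1 {
        "cset"
    } else {
        "csetm"
    }
}
```
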
bjorn3
d777ec675c Transparently change non-PLT libcall relocations to PLT relocations 2020-11-13 09:28:51 +01:00
Chris Fallin
0d703c12ed Don't run old x86 backend-specific tests with new x64 backend.
Some of the test failures tracked by #2079 are in unwind tests that are
specific to the old x86 backend: namely, these tests invoke the unwind
implementation that is paired with the old backend, rather than generic
over all backends. It thus doesn't make sense to try to run these tests
with the new backend. (The new backend's unwind code should have
analogous tests written/ported over eventually.)

It seems that we were actually building *both* x86 backends when the
`x64` feature was enabled, except that the old x86 backend would never be
instantiated by the usual ISA-lookup logic, because an `x86_64` target
triple unconditionally resolves to the new one.

This PR resolves both issues by tweaking the feature-config directives to
exclude the `x86` backend when `x64` is enabled.
2020-11-12 20:44:53 -08:00
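
The shape of the fix, assuming feature-gated module inclusion (the attribute below illustrates the approach, not the exact directive changed):

```
// Old x86 backend: compiled only when the new x64 backend is not enabled.
#[cfg(all(feature = "x86", not(feature = "x64")))]
mod isa_x86 {
    // ... old backend ...
}

// New x64 backend.
#[cfg(feature = "x64")]
mod isa_x64 {
    // ... new backend ...
}
```
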
Chris Fallin
01b60e81b0 Fix and enable GC on new x64 backend.
One critical bit of plumbing was missing: the `StackMapSink` passed to
`compile_and_emit` was not actually receiving stackmaps. This seemingly
very basic issue was not caught because the other major user of reftype
support, SpiderMonkey, extracts stackmaps with a lower-level API. The
SM integration was built this way to avoid an awkward API quirk when
passing stackmaps through a `CodeSink` that proxies them to a
`StackMapSink`: the `CodeSink` wants `Value`s for each reference slot,
while the actual `StackMapSink` does not require these. This PR tweaks
the plumbing in a slightly different way to make `wasmtime` GC tests,
and presumably other consumers of stack-map info from the top-level
Cranelift interface, happy.
2020-11-12 16:55:18 -08:00
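
A sketch of the consumer side: a stack-map sink that simply collects (offset, map) pairs. The trait shape mirrors the idea but is simplified from Cranelift's actual `binemit` interface.

```
struct StackMap {
    live_ref_words: Vec<u32>, // stack-slot words holding live references
}

trait StackMapSink {
    fn add_stack_map(&mut self, code_offset: u32, map: StackMap);
}

// Collects every safepoint's stack map for the embedder (e.g. a GC).
struct CollectingSink {
    maps: Vec<(u32, StackMap)>,
}

impl StackMapSink for CollectingSink {
    fn add_stack_map(&mut self, code_offset: u32, map: StackMap) {
        self.maps.push((code_offset, map));
    }
}
```
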
Chris Fallin
113d061129 Merge pull request #2369 from akirilov-arm/move_fix
Cranelift AArch64: Various small fixes
2020-11-12 14:59:10 -08:00
Andrew Brown
bd93e69eb4 [machinst x64]: implement packed shifts 2020-11-12 14:21:45 -08:00
Andrew Brown
8ba92853be [machinst x64]: add punpack[hl]bw instructions 2020-11-12 14:21:45 -08:00
Andrew Brown
8131b15921 [machinst x64]: allow addressing of constants 2020-11-12 14:21:45 -08:00
Chris Fallin
89dbc4590d Merge pull request #2363 from cfallin/extend-only-if-abi
Do value-extensions at ABI boundaries only when ABI requires it.
2020-11-12 12:26:20 -08:00
Chris Fallin
fd6433aaf5 Merge pull request #2395 from cfallin/lucet-x64-support
Add support for brff/brif and ifcmp_sp to new x64 backend to support Lucet.
2020-11-12 12:10:52 -08:00
bjorn3
8a35cbaf0d Enable PIC in SimpleJITBuilder::new 2020-11-12 19:49:42 +01:00
bjorn3
86d3dc9510 Add prepare_for_function_redefine 2020-11-12 19:39:44 +01:00
bjorn3
03c0e7e678 Rustfmt 2020-11-12 18:58:40 +01:00
bjorn3
cdbbcf7e13 Add plt entries to perf jit map 2020-11-12 18:58:28 +01:00
Julian Seward
cbce34af0a aarch64/inst/unwind.rs: handle zero-length prologues correctly. 2020-11-12 17:41:21 +01:00
bjorn3
bf9e5d9448 Use a PLT reference for function relocations in data objects
This ensures that all functions can be replaced without having to
perform relocations again.
2020-11-12 16:41:23 +01:00
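
Why a PLT indirection allows redefinition without re-relocating, in a toy model: call sites are "relocated" once to point at a mutable slot, and redefining the function patches only that slot.

```
struct PltEntry {
    target: fn() -> i32, // the jump slot every call site goes through
}

fn old_impl() -> i32 { 1 }
fn new_impl() -> i32 { 2 }

fn main() {
    let mut plt = PltEntry { target: old_impl };
    let call_site = |plt: &PltEntry| (plt.target)(); // relocated once, to the PLT
    assert_eq!(call_site(&plt), 1);
    plt.target = new_impl; // redefine: patch the slot; call sites are untouched
    assert_eq!(call_site(&plt), 2);
}
```
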
bjorn3
8a4749af51 Immediately perform relocations when defining a function 2020-11-12 16:33:04 +01:00
bjorn3
5458473765 Implement PLT relocations for SimpleJIT 2020-11-12 16:19:16 +01:00
bjorn3
eaa2c5b3c2 Implement GOT relocations in SimpleJIT 2020-11-12 15:06:52 +01:00
Anton Kirilov
edaada3f57 Cranelift AArch64: Various small fixes
* Use FMOV to move 64-bit FP registers and SIMD vectors.
* Add support for additional vector load types.
* Fix the printing of Inst::LoadAddr.

Copyright (c) 2020, Arm Limited.
2020-11-12 13:54:05 +00:00
bjorn3
11a3bdfc6a Catch overflows when performing relocations 2020-11-12 14:13:06 +01:00
Matthieu Felix
35da24adfd Specify unsealed / unfilled blocks 2020-11-11 23:35:48 -05:00
Chris Fallin
19640367db Merge pull request #2394 from cfallin/no-size-asserts
Remove size-of-struct asserts that break with some Rust versions.
2020-11-11 18:04:34 -08:00
Chris Fallin
5e5e520654 Remove size-of-struct asserts that break with some Rust versions.
The asserts on the sizes of the VCode constant-table data structures
introduced in PR #2328 depend on the size of standard-library data
structures such as `HashMap`, which can change. In particular, on Rust
1.46 (which is not current, but could be e.g. pinned by a project using
Cranelift), these asserts fail. We shouldn't depend on stdlib internals;
IMHO the asserts on our own struct sizes are enough to catch accidental
size blowups.
2020-11-11 17:13:28 -08:00
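
The removed asserts looked, in spirit, like the fragile line below: pinning the byte size of a type that embeds a std `HashMap` pins a stdlib internal. The struct and size here are illustrative.

```
use std::collections::HashMap;
use std::mem::size_of;

struct ConstantPool {
    by_value: HashMap<Vec<u8>, u32>, // layout owned by std, not by us
}

fn main() {
    // Fragile: HashMap's in-memory size may differ across Rust releases.
    // assert_eq!(size_of::<ConstantPool>(), 48);
    println!("ConstantPool is {} bytes", size_of::<ConstantPool>());
}
```
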
Chris Fallin
5df8840483 Add support for brff/brif and ifcmp_sp to new x64 backend to support Lucet.
`lucetc` currently *almost*, but not quite, works with the new x64
backend; the only missing piece is support for the particular
instructions emitted as part of its prologue stack-check.

We do not normally see `brff`, `brif`, or `ifcmp_sp` in CLIF generated by
`cranelift-wasm` without the old-backend legalization rules, so these
were not supported in the new x64 backend as they were not necessary for
Wasm MVP support. Using them resulted in an `unimplemented!()` panic.

This PR adds support for `brff` and `brif` analogously to how AArch64
implements them, by pattern-matching the `ifcmp` / `ffcmp` directly.
Then `ifcmp_sp` is a straightforward variant of `ifcmp`.

Along the way, this also removes the notion of "fallthrough block" from
the branch-group lowering method; instead, `fallthrough` instructions
are handled as normal branches to their explicitly-provided targets,
which (in the original CLIF) match the fallthrough block. The reason for
this is that the block reordering done as part of lowering can change
the fallthrough block. We were not using `fallthrough` instructions in
the output produced by `cranelift-wasm`, so this, too, was not
previously caught.

With these changes, the `lucetc` crate in Lucet passes all tests with
the `x64` feature-flag added to its `cranelift-codegen` dependency.
2020-11-11 13:43:39 -08:00
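
Schematically, the lowering inspects the producer of the branch's flags input and folds the compare into the branch; the names and emitted forms below are illustrative, not the backend's real matcher.

```
enum FlagsProducer {
    Ifcmp,  // integer compare, including the ifcmp_sp variant
    Ffcmp,  // floating-point compare
    Other,
}

// Lower brif/brff by emitting the compare directly and branching on its
// condition, instead of materializing a flags value first.
fn lower_flag_branch(producer: FlagsProducer) -> &'static str {
    match producer {
        FlagsProducer::Ifcmp => "cmp + jcc",
        FlagsProducer::Ffcmp => "ucomiss/ucomisd + jcc",
        FlagsProducer::Other => panic!("unsupported flags producer"),
    }
}
```
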
Chris Fallin
997b654235 Merge pull request #2393 from jgouly/constant-addend
arm64: Fold some constants into load instructions
2020-11-11 11:23:21 -08:00
Pat Hickey
aa259ff92a Merge pull request #2390 from bjorn3/more_simplejit_refactors
More SimpleJIT refactorings
2020-11-11 11:16:04 -08:00
Joey Gouly
a5011e8212 arm64: Fold some constants into load instructions
This changes the following:
  mov x0, #4
  ldr x0, [x1, x0]

Into:
  ldr x0, [x1, #4]

I noticed this pattern (but with #0) in a benchmark.

Copyright (c) 2020, Arm Limited.
2020-11-11 18:47:43 +00:00
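
The fold, sketched as an addressing-mode choice (the range check is simplified; real `ldr` immediate offsets are scaled and bounded per access size):

```
enum AMode {
    RegReg { base: u8, index: u8 },   // ldr xD, [xB, xI]
    RegImm { base: u8, offset: i64 }, // ldr xD, [xB, #imm]
}

// If the index operand is a known small constant, fold it into the
// immediate-offset form instead of materializing it with a `mov`.
fn pick_amode(base: u8, index: u8, known_const: Option<i64>) -> AMode {
    match known_const {
        Some(imm) if (0..=4095).contains(&imm) => AMode::RegImm { base, offset: imm },
        _ => AMode::RegReg { base, index },
    }
}
```
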