Commit Graph

7677 Commits

Alex Crichton
62be6841e4 Propagate optional import names to the wasmtime/C API
With the module linking proposal the field name on imports is now
optional, and only the module is required to be specified. This commit
propagates this API change to the boundary of wasmtime's API, ensuring
consumers are aware of what's optional with module linking and what
isn't. Note that it's expected that all existing users will either
update accordingly or unwrap the result since module linking is
presumably disabled.
2020-11-23 15:26:26 -08:00
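
For embedders, the change roughly looks like the sketch below, assuming `ImportType::name()` now returns `Option<&str>`; `print_imports` is a hypothetical helper, not part of the wasmtime API:

```
use wasmtime::{Engine, Module};

// Hypothetical helper illustrating the API change: with module linking
// disabled every import has a field name, so unwrapping is safe; with
// module linking enabled the `None` case must be handled.
fn print_imports(engine: &Engine, wasm: &[u8]) -> anyhow::Result<()> {
    let module = Module::new(engine, wasm)?;
    for import in module.imports() {
        match import.name() {
            Some(name) => println!("import {}::{}", import.module(), name),
            // Only possible when the module-linking proposal is enabled:
            // the import names a whole module rather than a field.
            None => println!("import {} (module import)", import.module()),
        }
    }
    Ok(())
}
```
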
Johnnie Birch
ff8a4e4f9b Enable simd_conversions spec test 2020-11-23 13:00:13 -08:00
Johnnie Birch
ade9f12c72 Add support for X86_64 SIMD narrow instructions for vcode backend
Adds lowering support for:
i8x16.narrow_i16x8_s
i8x16.narrow_i16x8_u
i16x8.narrow_i32x4_s
i16x8.narrow_i32x4_u
2020-11-23 09:58:39 -08:00
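
For reference, a scalar model of the signed-narrowing semantics these lowerings implement (illustrative Rust only, not the lowering code itself):

```
// Scalar model of i16x8.narrow_i32x4_s: each i32 lane is clamped to the
// i16 range with signed saturation; `a` supplies the low lanes of the
// result and `b` the high lanes. The _u variants clamp to the unsigned
// range instead.
fn narrow_i32x4_s(a: [i32; 4], b: [i32; 4]) -> [i16; 8] {
    let sat = |x: i32| x.clamp(i16::MIN as i32, i16::MAX as i32) as i16;
    let mut out = [0i16; 8];
    for (i, &lane) in a.iter().chain(b.iter()).enumerate() {
        out[i] = sat(lane);
    }
    out
}
```
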
Nick Fitzgerald
1dd20b4371 Merge pull request #2400 from MattX/improve-finalize-msg
Specify unsealed / unfilled blocks
2020-11-23 08:45:08 -08:00
Johnnie Birch
2cc501427e Add remaining X86_64 support for pack w/ signed/unsigned saturation
Adds lowering for packssdw, packusdw, packuswb
2020-11-22 23:14:29 -08:00
Johnnie Birch
258013cff1 Add support for SwidenHigh and UwidenHigh on X86_64 for the vcode backend
Support is based on SSE4.1
2020-11-22 22:14:19 -08:00
Johnnie Birch
124096735b Add support for palignr for X86_64 vcode backend 2020-11-22 22:14:02 -08:00
Johnnie Birch
f9937575d6 Add support for SwidenLow and UwidenLow for the X86_64 vcode backend
Adds support using lowerings compatible with SSE4.1
2020-11-22 21:38:53 -08:00
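
A scalar model of what these widening ops compute (illustrative only; the commits above add the X86_64 lowerings):

```
// Scalar model of swiden_low (i32x4.widen_low_i16x8_s): the low four i16
// lanes are sign-extended to i32. The "high" variants take lanes 4..8,
// and the unsigned variants zero-extend instead.
fn swiden_low_i16x8(a: [i16; 8]) -> [i32; 4] {
    [a[0] as i32, a[1] as i32, a[2] as i32, a[3] as i32]
}

fn swiden_high_i16x8(a: [i16; 8]) -> [i32; 4] {
    [a[4] as i32, a[5] as i32, a[6] as i32, a[7] as i32]
}
```
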
Johnnie Birch
615a575da1 Add support for x86_64 packed move lowering for the vcode backend 2020-11-22 20:23:00 -08:00
Johnnie Birch
b6d783a120 Adds support for i32x4.trunc_sat_f32x4_u 2020-11-22 12:00:54 -08:00
Matt
27f3307f24 Replace if + panic! with assert!
Co-authored-by: Nick Fitzgerald <fitzgen@gmail.com>
2020-11-21 00:03:41 -05:00
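
The refactor follows this generic pattern (an illustration, not the actual code touched by the PR):

```
// Before: a manual branch-and-panic.
fn check_empty_old(buffer: &[u8]) {
    if !buffer.is_empty() {
        panic!("expected an empty buffer");
    }
}

// After: a single assertion with the same behavior and message.
fn check_empty_new(buffer: &[u8]) {
    assert!(buffer.is_empty(), "expected an empty buffer");
}
```
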
Pat Hickey
86ae0b7855 Merge pull request #2428 from bytecodealliance/pch/wiggle_immut_borrows
wiggle: support overlapping immutable borrows
2020-11-20 15:58:04 -08:00
Pat Hickey
966b347faa fix wiggle-borrow publishing order 2020-11-20 13:55:44 -08:00
Pat Hickey
b40a983b29 wiggle-borrow gets published now 2020-11-20 11:45:59 -08:00
Pat Hickey
2a937f06db fixes 2020-11-20 11:42:42 -08:00
Ulrich Weigand
9b9e46abd8 Update to backtrace version 0.3.55 (#2436)
This fixes a bug in the get_sp function on s390x, which caused
crashes during garbage collection.
2020-11-20 13:33:13 -06:00
Pat Hickey
f7a0d86c64 install-openvino: typo 2020-11-20 11:22:52 -08:00
Pat Hickey
6681e6786e debug assert could catch double-free 2020-11-20 11:20:40 -08:00
Ivan Zvonimir Horvat
309354d9e1 1272: bash script; simple gcd example (#2421) 2020-11-20 11:43:54 -06:00
Pat Hickey
f5f180a8fe refactor is_borrowed/unborrow into shared/mut variants 2020-11-19 15:29:12 -08:00
Pat Hickey
224e8b0e88 wasi-nn: fix mutable guestslice borrow 2020-11-19 15:28:53 -08:00
Pat Hickey
fb68c80420 install-openvino: make it easier to invoke on your local machine
Put the default version in the shell script, not the yml.
Write any files to the directory where the action lives,
and .gitignore them.
2020-11-19 15:23:07 -08:00
Pat Hickey
f9de1d3e5c rename immutable borrows to shared borrows 2020-11-19 14:42:31 -08:00
Alex Crichton
acbe85fa95 Remove empty .rustfmt.toml (#2429)
AFAIK this isn't really necessary nowadays given the prevalence of
rustfmt. For whatever reason the Rust plugin for vim uses this file in
lieu of all other options, meaning it doesn't pass `--edition 2018` by
default, which has been causing issues as I've been working on `async`
stuff. In any case I don't think we need the file, so this commit
deletes it.
2020-11-19 15:56:54 -06:00
Yury Delendik
e34b410381 Update wasmparser for exception handling (#2431) 2020-11-19 14:08:10 -06:00
Chris Fallin
72811d35ae Merge pull request #2433 from alexcrichton/reformat
Run rustfmt 1.48
2020-11-19 12:04:28 -08:00
Alex Crichton
4d64c68b05 Run rustfmt 1.48
Run rustfmt over wasmtime with the new stable release, which looks like
it wants to reformat a few lines.
2020-11-19 11:12:30 -08:00
Pat Hickey
3509883f2d wiggle: add test of overlapping immutable borrows 2020-11-18 15:02:02 -08:00
Pat Hickey
26192d6760 wasi-common: opt in to mutable borrowing 2020-11-18 14:43:47 -08:00
Pat Hickey
fc608e392b wiggle: make Mut variants of GuestStr, GuestPtr 2020-11-18 12:32:21 -08:00
Pat Hickey
78db3ff13b wiggle: borrow checker lives in own crate, and supports both mut/immut 2020-11-18 12:19:47 -08:00
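
Conceptually, the borrow checker built up in these commits behaves like the following minimal sketch (hypothetical types, not wiggle's actual implementation): overlapping shared borrows of guest memory are allowed, while mutable borrows remain exclusive.

```
use std::ops::Range;

// Runtime borrow checker over guest-memory regions: shared (immutable)
// borrows may overlap each other, mutable borrows conflict with everything.
#[derive(Default)]
struct BorrowChecker {
    shared: Vec<Range<u32>>,
    mutable: Vec<Range<u32>>,
}

fn overlaps(a: &Range<u32>, b: &Range<u32>) -> bool {
    a.start < b.end && b.start < a.end
}

impl BorrowChecker {
    // Shared borrows only conflict with outstanding mutable borrows.
    fn shared_borrow(&mut self, r: Range<u32>) -> Result<(), ()> {
        if self.mutable.iter().any(|m| overlaps(m, &r)) {
            return Err(());
        }
        self.shared.push(r);
        Ok(())
    }

    // Mutable borrows conflict with any outstanding borrow.
    fn mut_borrow(&mut self, r: Range<u32>) -> Result<(), ()> {
        if self.shared.iter().chain(self.mutable.iter()).any(|b| overlaps(b, &r)) {
            return Err(());
        }
        self.mutable.push(r);
        Ok(())
    }
}
```
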
Chris Fallin
bf971efa42 Merge pull request #2426 from cfallin/machinst-trap-info
Carry `MemFlags` on loads/stores in MachInst backends, and emit trap info only where needed.
2020-11-18 08:23:42 -08:00
Andrew Brown
3d606a01e5 wasi-nn: remove unused functions (#2427) 2020-11-18 09:21:51 -06:00
Chris Fallin
073c727a74 x64 and aarch64: carry MemFlags on loads/stores; don't emit trap info unless an op can trap.
This end result was previously achieved by carrying a `SourceLoc` on
every load/store, which was somewhat cumbersome and only indirectly
encoded metadata about a memory reference (whether it can trap) by its
presence or absence. We have a type for this -- `MemFlags` -- that tells
us everything we might want to know about a load or store, and we should
plumb it through to code emission instead.

This PR attaches a `MemFlags` to an `Amode` on x64, and puts it on load
and store `Inst` variants on aarch64. These two choices seem to factor
things out in the nicest way: there are relatively few load/store insts
on aarch64 but many addressing modes, while the opposite is true on x64.
2020-11-17 11:43:06 -08:00
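
In rough outline, the idea looks like the sketch below; the types are hypothetical and much simpler than the real `MemFlags` and `Amode` definitions in cranelift-codegen:

```
// Simplified sketch: the addressing mode carries the flags describing the
// memory access, and emission only records trap metadata when the access
// can actually trap.
#[derive(Clone, Copy, Default)]
struct MemFlags {
    notrap: bool, // access is known not to trap (e.g. proven in-bounds)
}

#[derive(Clone, Copy)]
struct Reg(u8);

enum Amode {
    // [base + simm32], tagged with the flags of the access that uses it.
    ImmReg { simm32: i32, base: Reg, flags: MemFlags },
}

fn emit_load(_dst: Reg, addr: &Amode, trap_sites: &mut Vec<usize>, offset: usize) {
    match addr {
        Amode::ImmReg { flags, .. } => {
            if !flags.notrap {
                // Only loads that may trap get an entry in the trap table.
                trap_sites.push(offset);
            }
            // ... encode the instruction bytes here ...
        }
    }
}
```
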
Chris Fallin
e7df081696 Merge pull request #2389 from cfallin/x64-load-op
x64 backend: merge loads into ALU ops when appropriate.
2020-11-17 11:42:19 -08:00
Chris Fallin
b97f07b405 x64 backend: merge loads into ALU ops when appropriate.
This PR makes use of the support in #2366 for sinking effectful
instructions and merging them with consumers. In particular, on x86, we
want to make use of the ability of many instructions to load one operand
directly from memory. That is, instead of this:

```
    movq 0(%rdi), %rax
    addq %rax, %rbx
```

we want to generate this:

```
    addq 0(%rdi), %rbx
```

As described in more detail in #2366, sinking and merging the load is
only possible under certain conditions. In particular, we need to ensure
that the use is the *only* use (otherwise the load happens more than
once), and we need to ensure that it does not move across other
effectful ops (see #2366 for how we ensure this).

This change is actually fairly simple, given that all the framework is
in place: we simply pattern-match a load on one operand of an ALU
instruction that takes an RMI (reg, mem, or immediate) operand, and
generate the mem form when we match.

Also makes a drive-by improvement in the x64 backend to use
statically-monomorphized `LowerCtx` types rather than a `&mut dyn
LowerCtx`.

On `bz2.wasm`, this results in ~1% instruction-count reduction. More is
likely possible by following up with other instructions that can merge
memory loads as well.
2020-11-17 11:06:46 -08:00
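
The pattern match itself is conceptually small; a rough sketch with hypothetical stand-in types follows (the real logic lives in the x64 lowering code):

```
#[derive(Clone, Copy)]
struct Reg(u8);
#[derive(Clone, Copy)]
struct Amode { base: Reg, offset: i32 }

// The reg/mem/immediate operand form accepted by many x64 ALU instructions.
enum RegMemImm { Reg(Reg), Mem(Amode), Imm(u32) }

// Stand-in for the relevant bits of `LowerCtx`.
trait AluLowerCtx {
    // Returns the address of a load feeding this input if it is safe to
    // sink, i.e. the ALU op is its only use and no effectful instruction
    // is crossed (per the rules established in #2366).
    fn sinkable_load(&mut self, input: usize) -> Option<Amode>;
    fn put_input_in_reg(&mut self, input: usize) -> Reg;
}

fn lower_alu_rhs<C: AluLowerCtx>(ctx: &mut C, input: usize) -> RegMemImm {
    match ctx.sinkable_load(input) {
        // Merge the load: e.g. `addq 0(%rdi), %rbx` instead of a separate movq.
        Some(addr) => RegMemImm::Mem(addr),
        // Otherwise force the value into a register as before.
        None => RegMemImm::Reg(ctx.put_input_in_reg(input)),
    }
}
```
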
Nick Fitzgerald
281a41c08b Merge pull request #2406 from fitzgen/remove-typo
wasmtime: Remove typo in doc comment
2020-11-17 10:39:12 -08:00
Nick Fitzgerald
02156eaef3 wasmtime: Remove typo in doc comment 2020-11-17 09:39:38 -08:00
Chris Fallin
9e511ec0c0 Merge pull request #2376 from cfallin/loadsplat
AArch64 SIMD: replace `LoadSplat` with pattern-matching on load+splat
2020-11-17 08:03:21 -08:00
Nick Fitzgerald
d7e4f92030 Merge pull request #2425 from alexcrichton/fix-wrong-store-2
Fix assertion with cross-store values in `Func::new`
2020-11-16 16:36:05 -08:00
Nick Fitzgerald
3dde6559c0 Merge pull request #2408 from alexcrichton/fix-use-after-free-trampoline
Fix a use-after-free of trampoline code
2020-11-16 16:35:02 -08:00
Chris Fallin
712ff22492 AArch64 SIMD: pattern-match load+splat into LD1R instruction. 2020-11-16 15:59:28 -08:00
Chris Fallin
39b5736727 Remove LoadSplat opcode, in preparation for pattern-matching Load+Splat.
This was added as an incremental step to improve AArch64 code quality in
PR #2278. At the time, we did not have a way to pattern-match the load +
splat opcode sequence that the relevant Wasm opcodes lowered to.
However, now with PR #2366, we can merge effectful instructions such as
loads into other ops, and so we can do this pattern matching directly.
The pattern-matching update will come in a subsequent commit.
2020-11-16 15:31:56 -08:00
Chris Fallin
2150a533b6 Merge pull request #2366 from cfallin/load-isel
MachInst lowering logic: allow effectful instructions to merge.
2020-11-16 15:31:38 -08:00
Chris Fallin
3c8cb7b908 MachInst lowering logic: allow effectful instructions to merge.
This PR updates the "coloring" scheme that accounts for side-effects in
the MachInst lowering logic. As a result, the new backends will now be
able to merge effectful operations (such as memory loads) *into* other
operations; previously, only the other way (pure ops merged into
effectful ops) was possible. This will allow, for example, a load+ALU-op
combination, as is common on x86. It should even allow a load + ALU-op +
store sequence to merge into one lowered instruction.

The scheme arose from many fruitful discussions with @julian-seward1
(thanks!); significant credit is due to him for the insights here.

The first insight is that given the right basic conditions, i.e.  that
the root instruction is the only use of an effectful instruction's
result, all we need is that the "color" of the effectful instruction is
*one less* than the color of the current instruction. It's easier to
think about colors on the program points between instructions: if the
color coming *out* of the first (effectful def) instruction and *in* to
the second (effectful or effect-free use) instruction are the same, then
they can merge. Basically the color denotes a version of global state;
if the same, then no other effectful ops happened in the meantime.

The second insight is that we can keep state as we scan, tracking the
"current color", and *update* this when we sink (merge) an op. Hence
when we sink a load into another op, we effectively *re-color* every
instruction it moved over; this may allow further sinks.

Consider the example (and assume that we consider loads effectful in
order to conservatively ensure a strong memory model; otherwise, replace
with other effectful value-producing insts):

```
  v0 = load x
  v1 = load y
  v2 = add v0, 1
  v3 = add v1, 1
```

Scanning from bottom to top, we first see the add producing `v3` and we
can sink the load producing `v1` into it, producing a load + ALU-op
machine instruction. This is legal because `v1` moves over only `v2`,
which is a pure instruction. Consider, though, `v2`: under a simple
scheme that has no other context, `v0` could not sink to `v2` because it
would move over `v1`, another load. But because we already sunk `v1`
down to `v3`, we are free to sink `v0` to `v2`; the update of the
"current color" during the scan allows this.

This PR also cleans up the `LowerCtx` interface a bit at the same time:
whereas previously it always gave some subset of (constant, mergeable
inst, register) directly from `LowerCtx::get_input()`, it now returns
zero or more of (constant, mergeable inst) from
`LowerCtx::maybe_get_input_as_source_or_const()`, and returns the
register only from `LowerCtx::put_input_in_reg()`. This removes the need
to explicitly denote uses of the register, so it's a little safer.

Note that this PR does not actually make use of the new ability to merge
loads into other ops; that will come in future PRs, especially to
optimize the `x64` backend by using direct-memory operands.
2020-11-16 14:53:45 -08:00
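
The color check and update can be summarized with a small sketch (hypothetical types; the real bookkeeping lives in the MachInst lowering driver):

```
// Colors number the program points: walking top-down, every effectful
// instruction bumps the color by one. An effectful def whose only use is
// the instruction currently being lowered may be sunk iff the color coming
// out of the def equals the current color at that use.
struct EffectfulDef {
    color_out: u32, // color of the program point just below the def
}

struct BottomUpScan {
    current_color: u32, // color at the instruction currently being lowered
}

impl BottomUpScan {
    // Merging is legal only when no other effectful op sits between the
    // def and its sole use.
    fn can_sink(&self, def: &EffectfulDef) -> bool {
        def.color_out == self.current_color
    }

    // Sinking moves the def's effect down to the current use, so the points
    // it moved over are effectively re-colored; updating the current color
    // here is what later lets v0 sink into v2 in the example above.
    fn sink(&mut self, def: &EffectfulDef) {
        debug_assert!(self.can_sink(def));
        self.current_color -= 1;
    }

    // Crossing an effectful instruction that stays in place also moves the
    // scan above one state change.
    fn cross_effectful(&mut self) {
        self.current_color -= 1;
    }
}
```
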
Alex Crichton
ffca0fc908 Fix assertion with cross-store values in Func::new
If a host-defined `Func::new` closure returns values from the wrong
store, this currently trips a debug assertion and causes other issues
elsewhere in release mode. This commit adds the same dynamic checks
found in `Func::wrap` in the `Func::new` case today.
2020-11-16 12:34:02 -08:00
Alex Crichton
8675fa5aa7 Fix a memory leak on returning incompatible values (#2424)
This fixes an issue where, if a store-incompatible value is returned from
a host-defined function, that value is leaked. Practically this
means that it's possible to accidentally leak `Func` values, but a
simple insertion of a `drop` does the trick!
2020-11-16 14:26:48 -06:00
Andrew Brown
a61f068c64 Add an initial wasi-nn implementation for Wasmtime (#2208)
* Add an initial wasi-nn implementation for Wasmtime

This change adds a crate, `wasmtime-wasi-nn`, that uses `wiggle` to expose the current state of the wasi-nn API and `openvino` to implement the exposed functions. It includes an end-to-end test demonstrating how to do classification using wasi-nn:
 - `crates/wasi-nn/tests/classification-example` contains Rust code that is compiled to the `wasm32-wasi` target and run with a Wasmtime embedding that exposes the wasi-nn calls
 - the example uses Rust bindings for wasi-nn contained in `crates/wasi-nn/tests/wasi-nn-rust-bindings`; this crate contains code generated by `witx-bindgen` and eventually should be its own standalone crate

* Test wasi-nn as a CI step

This change adds:
 - a GitHub action for installing OpenVINO
 - a script, `ci/run-wasi-nn-example.sh`, to run the classification example
2020-11-16 12:54:00 -06:00
Ivan Zvonimir Horvat
61a0bcbdc6 examples: threads.rs; fixed eun typo -> run (#2422) 2020-11-16 11:48:49 -06:00
Chris Fallin
fd36be3682 Merge pull request #2420 from bytecodealliance/fitzgen-patch-1
Update docs to reflect that reference types work on aarch64 now
2020-11-16 09:37:31 -08:00