Commit Graph

2662 Commits

Author SHA1 Message Date
Alex Crichton
8dd091219a Update wasm-tools dependencies
Brings in fixes for some assorted wast issues.
2020-11-09 08:50:03 -08:00
bjorn3
5df5bbbdca Fix usage of default_libcall_names (#2378)
* Fix usage of default_libcall_names

* Add basic cranelift-object test

It is based on a test with the same name in cranelift-simplejit
2020-11-09 10:33:56 -06:00
Andrew Brown
c9e8889d47 Update clippy annotation to use latest version (#2375) 2020-11-09 09:24:59 -06:00
Alex Crichton
73cda83548 Propagate module-linking types to wasmtime (#2115)
This commit adds lots of plumbing to get the type section from the
module linking proposal plumbed all the way through to the `wasmtime`
crate and the `wasmtime-c-api` crate. This isn't all that useful right
now because Wasmtime doesn't support imported/exported
modules/instances, but this is all necessary groundwork to getting that
exported at some point. I've added some light tests but I suspect the
bulk of the testing will come in a future commit.

One major change in this commit is that `SignatureIndex` no longer
follows the type index space in a wasm module. Instead a new
`TypeIndex` type is used to track that. Function signatures, still
indexed by `SignatureIndex`, are then packed together tightly.
2020-11-06 14:48:09 -06:00
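The split between `TypeIndex` and `SignatureIndex` described above can be pictured with a small standalone sketch (illustrative types only, not the actual wasmtime internals): type-section entries keep their wasm ordering, while function signatures are deduplicated and packed tightly.

```rust
// Sketch only: `TypeIndex` follows the wasm type section order, while
// `SignatureIndex` points into a densely packed, deduplicated signature list.
use std::collections::HashMap;

#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct TypeIndex(u32);

#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct SignatureIndex(u32);

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct FuncSignature {
    params: Vec<&'static str>,
    returns: Vec<&'static str>,
}

#[derive(Default)]
struct TypeTables {
    // One entry per wasm type-section entry, in order.
    wasm_types: Vec<SignatureIndex>,
    // Packed, deduplicated signatures.
    signatures: Vec<FuncSignature>,
    dedup: HashMap<FuncSignature, SignatureIndex>,
}

impl TypeTables {
    fn push_wasm_type(&mut self, sig: FuncSignature) -> TypeIndex {
        let sig_idx = match self.dedup.get(&sig) {
            Some(&idx) => idx,
            None => {
                let idx = SignatureIndex(self.signatures.len() as u32);
                self.signatures.push(sig.clone());
                self.dedup.insert(sig, idx);
                idx
            }
        };
        let ty_idx = TypeIndex(self.wasm_types.len() as u32);
        self.wasm_types.push(sig_idx);
        ty_idx
    }
}

fn main() {
    let mut tables = TypeTables::default();
    let sig = FuncSignature { params: vec!["i32"], returns: vec!["i32"] };
    let t0 = tables.push_wasm_type(sig.clone());
    let t1 = tables.push_wasm_type(sig); // same signature, new type index
    assert_ne!(t0, t1);
    assert_eq!(tables.signatures.len(), 1); // but only one packed signature
}
```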
Alex Crichton
77827a48a9 Start compiling module-linking modules (#2093)
This commit is intended to be the first of many in implementing the
module linking proposal. At this time this builds on #2059 so it
shouldn't land yet. The goal of this commit is to compile bare-bones
modules which use module linking, e.g. those with nested modules.

My hope with module linking is that almost everything in wasmtime only
needs mild refactorings to handle it. The goal is that all per-module
structures are still per-module and at the top level there's just a
`Vec` containing a bunch of modules. That's implemented currently where
`wasmtime::Module` contains `Arc<[CompiledModule]>` and an index of
which one it's pointing to. This should enable
serialization/deserialization of any module in a nested modules
scenario, no matter how you got it.

Tons of features of the module linking proposal are missing from this
commit. For example instantiation flat out doesn't work, nor does
import/export of modules or instances. That'll be coming as future
commits, but the purpose here is to start laying groundwork in Wasmtime
for handling lots of modules in lots of places.
2020-11-06 13:32:30 -06:00
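A minimal sketch of the layout described in this commit, using stand-in types rather than the real `wasmtime`/`CompiledModule` definitions: every compiled module from a module-linking bundle lives in one shared `Arc<[...]>`, and a handle is that shared slice plus an index.

```rust
// Sketch only: a handle to any module (outer or nested) is the shared slice
// of compiled modules plus an index into it.
use std::sync::Arc;

struct CompiledModule {
    name: String,
    // ... compiled code, type info, etc.
}

#[derive(Clone)]
struct Module {
    all: Arc<[CompiledModule]>,
    index: usize,
}

impl Module {
    fn compiled(&self) -> &CompiledModule {
        &self.all[self.index]
    }

    // Any nested module is just another index into the same shared slice,
    // which is what keeps serializing/deserializing an inner module simple.
    fn nested(&self, index: usize) -> Module {
        Module { all: self.all.clone(), index }
    }
}

fn main() {
    let all: Arc<[CompiledModule]> = Arc::from(vec![
        CompiledModule { name: "outer".into() },
        CompiledModule { name: "inner".into() },
    ]);
    let outer = Module { all, index: 0 };
    let inner = outer.nested(1);
    assert_eq!(inner.compiled().name, "inner");
}
```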
Alex Crichton
d2daf5064e Get lightbeam compiling on stable Rust (#2370)
This will hopefully remove a small thorn in our side with periodic
nightly breakage due to nightly features changing. This commit moves
lightbeam to stable Rust, swapping out `staticvec` for `arrayvec` and
otherwise updating some dependencies (namely `dynasm`) to compile with
stable.

This then also updates CI appropriately to not use a pinned nightly and
instead use a floating `nightly` channel so we can head off any breakage
coming up ASAP.
2020-11-06 13:23:08 -06:00
Alex Crichton
8af2dbfbac Allow offloading compilation in cranelift-object (#2371)
This commit is a slight refactoring of the `Module` trait and backend in
`cranelift-object`. The goal is to enable parallelization of compilation
when using `cranelift-object`. Currently this is difficult because
`ObjectModule::define_function` requires `&mut self`. This instead
soups up the `define_function_bytes` interface to handle relocations so
compilation can happen externally before defining it in a `Module`. This
also means that `define_function` is now a convenience wrapper around
`define_function_bytes`.
2020-11-06 09:56:44 -06:00
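A rough sketch of the shape of this refactoring, with simplified, hypothetical types (the real `cranelift-module`/`cranelift-object` signatures differ): `define_function_bytes` accepts finished machine code plus its relocations, so compilation can run outside the `&mut self` borrow, and `define_function` becomes a thin convenience wrapper.

```rust
// Sketch only: hypothetical, simplified versions of the trait and types.
struct FuncId(u32);

#[allow(dead_code)]
struct Reloc {
    offset: u32,
    target: String,
}

struct CompiledCode {
    bytes: Vec<u8>,
    relocs: Vec<Reloc>,
}

struct FuncSource(String);

// Stand-in for running Cranelift compilation; this needs no module access,
// so it can happen on any thread.
fn compile(src: &FuncSource) -> CompiledCode {
    CompiledCode { bytes: src.0.as_bytes().to_vec(), relocs: Vec::new() }
}

trait Module {
    // The primitive: takes already-compiled bytes and their relocations.
    fn define_function_bytes(&mut self, id: FuncId, bytes: &[u8], relocs: &[Reloc]);

    // The convenience wrapper: compile, then delegate to the primitive.
    fn define_function(&mut self, id: FuncId, src: &FuncSource) {
        let code = compile(src);
        self.define_function_bytes(id, &code.bytes, &code.relocs);
    }
}

struct ObjectModuleSketch {
    defined: Vec<(u32, usize)>, // (function id, code size)
}

impl Module for ObjectModuleSketch {
    fn define_function_bytes(&mut self, id: FuncId, bytes: &[u8], _relocs: &[Reloc]) {
        self.defined.push((id.0, bytes.len()));
    }
}

fn main() {
    // The expensive `compile` calls could run on worker threads here, before
    // the single-threaded `define_function_bytes` calls below.
    let compiled = vec![(FuncId(0), compile(&FuncSource("f0".into())))];
    let mut module = ObjectModuleSketch { defined: Vec::new() };
    for (id, code) in &compiled {
        module.define_function_bytes(FuncId(id.0), &code.bytes, &code.relocs);
    }
    module.define_function(FuncId(1), &FuncSource("f1".into()));
    assert_eq!(module.defined.len(), 2);
}
```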
Yury Delendik
b2b7bc10e2 machinst aarch64: New backend unwind (#2313)
* Unwind information for aarch64 backend.
2020-11-06 08:02:45 -06:00
Yury Delendik
f60c0f3ec3 cranelift: refactor unwind logic to accommodate multiple backends (#2357)
* Make cranelift_codegen::isa::unwind::input public
* Move UnwindCode's common offset field out of the structure
* Make MachCompileResult::unwind_info more generic
* Record initial stack pointer offset
2020-11-05 16:57:40 -06:00
Andrew Brown
df59ffb1b6 Align island's worst case size 2020-11-05 14:25:02 -08:00
Andrew Brown
83f182b390 Implement initial emission of constants
This approach incurs some compile-time memory bloat because emitted constants are de-duplicated to keep runtime memory size down. As a first step, though, it provides an end-to-end mechanism for emitting constants in the MachBuffer islands.
2020-11-05 14:25:02 -08:00
Alex Crichton
e4c3fc5cf2 Update immediate and transitive dependencies
I don't think this has happened in a while, but I've run a `cargo update`
as well as trimming some of the duplicate/older dependencies in
`Cargo.lock` by updating some of our immediate dependencies as well.
2020-11-05 08:34:09 -08:00
Alex Crichton
ab1958434a Bump to 0.21.0 (#2359) 2020-11-05 09:39:53 -06:00
Julian Seward
dd9bfcefaa CL/aarch64: implement the wasm SIMD v128.load{32,64}_zero instructions.
This patch implements, for aarch64, the following wasm SIMD extensions.

  v128.load32_zero and v128.load64_zero instructions
  https://github.com/WebAssembly/simd/pull/237

The changes are straightforward:

* no new CLIF instructions.  They are translated into an existing CLIF scalar
  load followed by a CLIF `scalar_to_vector`.

* the comment/specification for CLIF `scalar_to_vector` has been changed to
  match the actual intended semantics, per consultation with Andrew Brown.

* translation from `scalar_to_vector` to aarch64 `fmov` instruction.  This
  has been generalised slightly so as to allow both 32- and 64-bit transfers.

* special-case zero in `lower_constant_f128` in order to avoid a
  potentially slow call to `Inst::load_fp_constant128`.

* Once "Allow loads to merge into other operations during instruction
  selection in MachInst backends"
  (https://github.com/bytecodealliance/wasmtime/issues/2340) lands,
  we can use that functionality to pattern match the two-CLIF pair and
  emit a single AArch64 instruction.

* A simple filetest has been added.

There is no comprehensive testcase in this commit, because the test suite lives
in a separate repo. The implementation has been tested, nevertheless.
2020-11-04 20:00:04 +01:00
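For reference, a plain-Rust sketch of what `v128.load32_zero` produces, matching the load-then-`scalar_to_vector` sequence described above (illustrative only): the loaded 32 bits land in lane 0 and the remaining lanes are zeroed.

```rust
// Sketch of the result semantics only; not backend code.
use std::convert::TryInto;

fn load32_zero(memory: &[u8], addr: usize) -> Option<[u32; 4]> {
    // Read 32 bits from linear memory...
    let bytes: [u8; 4] = memory.get(addr..addr + 4)?.try_into().ok()?;
    // ...and place them in lane 0 with the other lanes zeroed.
    Some([u32::from_le_bytes(bytes), 0, 0, 0])
}

fn main() {
    let memory = [0x01, 0x00, 0x00, 0x00];
    assert_eq!(load32_zero(&memory, 0), Some([1, 0, 0, 0]));
}
```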
Chris Fallin
75e02276be Merge pull request #2360 from jgouly/reg_map
aarch64: Fix aarch64_map_regs for FpuRRI
2020-11-04 10:37:50 -08:00
Joey Gouly
0223cb2f8c aarch64: Fix aarch64_map_regs for FpuRRI
This has been wrong since I added it in 02c3f238f.

Copyright (c) 2020, Arm Limited.
2020-11-04 16:53:25 +00:00
Ulrich Weigand
80c2d70d2d machinst ABI: Support for accumulating outgoing args
When performing a function call, the platform ABI may require space
on the stack to hold outgoing arguments and/or return values.

Currently, this is supported via decrementing the stack pointer
before the call and incrementing it afterwards, using the
emit_stack_pre_adjust and emit_stack_post_adjust methods of
ABICaller.  However, on some platforms it would be preferable
to just allocate enough space for any call done in the function
in the caller's prologue instead.

This patch adds support to allow back-ends to choose that method.
Instead of calling emit_stack_pre/post_adjust around a call, they
simply call a new accumulate_outgoing_args_size method of
ABICaller instead.  This will pass on the required size to the
ABICallee structure of the calling function, which will accumulate
the maximum size required for all function calls.

That accumulated size is then passed to the gen_clobber_save
and gen_clobber_restore functions so they can include the size
in the stack allocation / deallocation that already happens in
the prologue / epilogue code.
2020-11-03 18:49:34 +01:00
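A self-contained sketch of the accumulation scheme described above (the real `ABICaller`/`ABICallee` interfaces are more involved): each call site reports its outgoing-argument size, the callee-side state keeps the maximum, and the prologue/epilogue allocate and free that maximum once.

```rust
// Sketch only: hypothetical, simplified ABI state.
#[derive(Default)]
struct AbiCalleeState {
    // Largest outgoing-args area needed by any call in this function.
    outgoing_args_size: u32,
}

impl AbiCalleeState {
    // Called once per call site instead of emitting pre/post SP adjustments.
    fn accumulate_outgoing_args_size(&mut self, size: u32) {
        self.outgoing_args_size = self.outgoing_args_size.max(size);
    }

    // Passed to clobber-save/restore so the prologue/epilogue fold this into
    // their one-time stack allocation/deallocation.
    fn frame_allocation(&self, fixed_frame_size: u32) -> u32 {
        fixed_frame_size + self.outgoing_args_size
    }
}

fn main() {
    let mut callee = AbiCalleeState::default();
    // Two call sites with different outgoing-args requirements:
    callee.accumulate_outgoing_args_size(32);
    callee.accumulate_outgoing_args_size(16);
    // The prologue allocates space for the worst case, exactly once.
    assert_eq!(callee.frame_allocation(64), 64 + 32);
}
```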
Chris Fallin
5ab7b4aa7f Merge pull request #2345 from uweigand/abi-stackalign
machinst ABI: Allow back-end to define stack alignment
2020-11-03 09:02:41 -08:00
Chris Fallin
0c240991ae Merge pull request #2346 from uweigand/abi-noframepointer
machinst ABI: Pass fixed frame size to gen_clobber_restore
2020-11-03 09:00:59 -08:00
Julian Seward
5a5fb11979 CL/aarch64: implement the wasm SIMD i32x4.dot_i16x8_s instruction
This patch implements, for aarch64, the following wasm SIMD extensions

  i32x4.dot_i16x8_s instruction
  https://github.com/WebAssembly/simd/pull/127

It also updates dependencies as follows, in order that the new instruction can
be parsed, decoded, etc:

  wat          to  1.0.27
  wast         to  26.0.1
  wasmparser   to  0.65.0
  wasmprinter  to  0.2.12

The changes are straightforward:

* new CLIF instruction `widening_pairwise_dot_product_s`

* translation from wasm into `widening_pairwise_dot_product_s`

* new AArch64 instructions `smull`, `smull2` (part of the `VecRRR` group)

* translation from `widening_pairwise_dot_product_s` to `smull ; smull2 ; addv`

There is no testcase in this commit, because the test suite lives in a separate
repo. The implementation has been tested, nevertheless.
2020-11-03 14:25:04 +01:00
Ulrich Weigand
c9bc4edd08 machinst ABI: Pass fixed frame size to gen_clobber_restore
The ABI common code currently passes the fixed frame size to
the gen_clobber_save back-end routine, which is required to
emit code to allocate the required stack space in the prologue.

Similarly, the back-end needs to emit code to de-allocate the
stack in the epilogue.  However, at this point the back-end
does not have access to that fixed frame size value any more.
With targets that use a frame pointer, this does not matter,
since de-allocation can be done simply by assigning the frame
pointer back to the stack pointer.  However, on targets that
do not use a frame pointer, the frame size is required.

To give back-ends that option, this patch changes ABI common
code to pass the fixed frame size to gen_clobber_restore as
well (the same value as is passed to gen_clobber_save).
2020-11-03 11:15:03 +01:00
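An illustrative sketch (pseudo machine code as strings, not real back-end code) of why the epilogue needs the fixed frame size on targets without a frame pointer:

```rust
// Sketch only: the two deallocation strategies the commit contrasts.
fn gen_epilogue_dealloc(uses_frame_pointer: bool, fixed_frame_size: u32) -> String {
    if uses_frame_pointer {
        // With a frame pointer the size is implicit: restore SP from FP.
        "mov sp, fp".to_string()
    } else {
        // Without one, the epilogue must re-add the same fixed frame size the
        // prologue subtracted, so gen_clobber_restore needs that value too.
        format!("add sp, sp, #{}", fixed_frame_size)
    }
}

fn main() {
    println!("{}", gen_epilogue_dealloc(true, 48));
    println!("{}", gen_epilogue_dealloc(false, 48));
}
```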
Ulrich Weigand
d02ae3940c machinst ABI: Allow back-end to define stack alignment
The common gen_prologue code currently assumes that the stack
pointer has to be aligned to twice the word size.  While this
is true for many ABIs, it does not hold universally.

This patch adds a new callback stack_align that back-ends can
provide to define the specific stack alignment required by the
ABI on that platform.
2020-11-03 09:43:55 +01:00
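A sketch of the callback's shape using a hypothetical trait (the real machinst ABI traits differ): the default keeps the old twice-the-word-size assumption, and a back-end can override it for its platform ABI.

```rust
// Sketch only: hypothetical back-end trait with a stack_align callback.
trait AbiBackend {
    fn word_size(&self) -> u32;

    // Default: stack pointer aligned to twice the word size (e.g. 16 bytes on
    // a 64-bit target). Back-ends override this when their ABI differs.
    fn stack_align(&self) -> u32 {
        2 * self.word_size()
    }
}

// Round a frame size up to the required alignment (power of two assumed).
fn align_up(size: u32, align: u32) -> u32 {
    (size + align - 1) & !(align - 1)
}

struct DefaultBackend;
impl AbiBackend for DefaultBackend {
    fn word_size(&self) -> u32 { 8 }
}

struct EightByteAlignedBackend;
impl AbiBackend for EightByteAlignedBackend {
    fn word_size(&self) -> u32 { 8 }
    fn stack_align(&self) -> u32 { 8 } // this ABI only requires word alignment
}

fn main() {
    assert_eq!(align_up(20, DefaultBackend.stack_align()), 32);
    assert_eq!(align_up(20, EightByteAlignedBackend.stack_align()), 24);
}
```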
Andrew Brown
6d50099816 Rewrite interpreter generically (#2323)
* Rewrite interpreter generically

This change re-implements the Cranelift interpreter to use generic values, which makes it possible to do abstract interpretation of Cranelift instructions. In doing so, the interpretation state is extracted from the `Interpreter` structure and accessed via a `State` trait; this not only makes the interpreter's state easier to observe but also allows interpreting with a dummy state (e.g. `ImmutableRegisterState`). This addition made it possible to implement more of the Cranelift instructions (~70%, ignoring the x86-specific ones).

* Replace macros with closures
2020-11-02 12:28:07 -08:00
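A minimal sketch of what interpreting "generically" over a `State` trait looks like, with a made-up trait and instruction set (the real `cranelift-interpreter` types are richer): the same `step` function runs against a real mutable state or a dummy one that tracks nothing.

```rust
// Sketch only: hypothetical State trait and toy instruction set.
use std::collections::HashMap;

#[derive(Clone, Copy, Debug, PartialEq)]
enum Value { I64(i64) }

trait State {
    fn get(&self, name: &str) -> Option<Value>;
    fn set(&mut self, name: &str, value: Value);
}

enum Inst<'a> {
    Iconst { dst: &'a str, imm: i64 },
    Iadd { dst: &'a str, a: &'a str, b: &'a str },
}

// One interpreter step, written against the trait rather than a concrete state.
fn step(state: &mut dyn State, inst: &Inst) {
    match inst {
        Inst::Iconst { dst, imm } => state.set(dst, Value::I64(*imm)),
        Inst::Iadd { dst, a, b } => {
            if let (Some(Value::I64(x)), Some(Value::I64(y))) = (state.get(a), state.get(b)) {
                state.set(dst, Value::I64(x.wrapping_add(y)));
            }
            // A dummy/abstract state may simply not track values; then we skip.
        }
    }
}

#[derive(Default)]
struct MapState(HashMap<String, Value>);
impl State for MapState {
    fn get(&self, name: &str) -> Option<Value> { self.0.get(name).copied() }
    fn set(&mut self, name: &str, value: Value) { self.0.insert(name.to_string(), value); }
}

fn main() {
    let mut s = MapState::default();
    step(&mut s, &Inst::Iconst { dst: "v0", imm: 2 });
    step(&mut s, &Inst::Iconst { dst: "v1", imm: 3 });
    step(&mut s, &Inst::Iadd { dst: "v2", a: "v0", b: "v1" });
    assert_eq!(s.get("v2"), Some(Value::I64(5)));
}
```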
Nick Fitzgerald
bfbe6ea348 peepmatic: Update to z3 version 0.7.1
This fixes a memory leak related to custom datatypes:
https://github.com/prove-rs/z3.rs/pull/104
2020-11-02 10:55:13 -08:00
Chris Fallin
d1be8dcfc0 Merge pull request #2310 from akirilov-arm/vector_constants
Cranelift AArch64: Improve code generation for vector constants
2020-11-01 21:56:40 -08:00
bjorn3
23aafa1054 Fix icmp_imm.i128
The immediate splitting code contained a bug causing both low and high
to be equal for i128. This is the root cause for
bjorn3/rustc_codegen_cranelift#1097 and likely the only bug preventing
cg_clif from bootstrapping rustc.
2020-10-31 21:11:50 +01:00
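The fix is easiest to see with a plain-Rust sketch of the immediate split (not the legalizer code itself): widening a 64-bit immediate to i128 must produce distinct low and high halves, whereas the bug made both halves equal.

```rust
// Sketch of the intended split of an immediate for an i128 comparison.
fn split_i128_imm(imm: i64) -> (u64, u64) {
    let wide = imm as i128; // sign-extend to 128 bits
    let low = wide as u64;
    let high = (wide >> 64) as u64;
    (low, high)
}

fn main() {
    // Positive immediate: the high half must be zero, not a copy of `low`.
    assert_eq!(split_i128_imm(42), (42, 0));
    // Negative immediate: the high half is the sign extension.
    assert_eq!(split_i128_imm(-1), (u64::MAX, u64::MAX));
}
```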
Johnnie Birch
c32740ffcd Updates comments on Int to Float conversion
Int-to-float conversion for unsigned ints has merged, but some review
comments left on a related PR about the same change are addressed in
this PR.
2020-10-30 16:49:30 -07:00
Anton Kirilov
207779fe1d Cranelift AArch64: Improve code generation for vector constants
In particular, introduce initial support for the MOVI and MVNI
instructions, with 8-bit elements. Also, treat vector constants
as 32- or 64-bit floating-point numbers, if their value allows
it, by relying on the architectural zero extension. Finally,
stop generating literal loads for 32-bit constants.

Copyright (c) 2020, Arm Limited.
2020-10-30 13:16:12 +00:00
Pat Hickey
7b43bf76ed Merge pull request #2287 from bjorn3/simplejit_improvements
Some SimpleJIT improvements
2020-10-29 12:09:37 -07:00
Nick Fitzgerald
5a09e47e38 peepmatic: update z3 dependency to version 0.7.0 2020-10-29 10:58:40 -07:00
Qinxuan Chen
3cd9d52d32 Update the hashbrown to use the same version (#2338)
Signed-off-by: koushiro <koushiro.cqx@gmail.com>
2020-10-29 09:59:56 -05:00
Andrew Brown
6c6d958f38 [machinst x64]: implement packed pmin/pmax 2020-10-28 16:03:53 -07:00
Andrew Brown
6725b6b129 [machinst x64]: implement bitmask 2020-10-28 15:16:36 -07:00
Andrew Brown
5b9a21e099 Add missing SourceLoc to newly-emitted instructions
The changes in https://github.com/bytecodealliance/wasmtime/pull/2278 added `SourceLoc`s to several x64 `Inst` variants; between when that PR was last run in CI and when it was merged, new instructions were added that require this new parameter. This change adds the parameter in order to fix CI.
2020-10-28 14:33:09 -07:00
Johnnie Birch
fa66daea25 Add filetests for fcvt_from_sint.f32x4
Add portions of filetests simd-conversion-legalize.clif and simd-conversion-run.clif
that test fcvt_from_sint.f32x4
2020-10-28 13:02:50 -07:00
Johnnie Birch
8bbe6a25a9 Add support for packed float to signed int conversion
Implements i32x4.trunc_sat_f32x4_s
2020-10-28 13:02:50 -07:00
Johnnie Birch
97392eae3d Adds support for converting packed unsigned integer to packed float 2020-10-28 13:02:50 -07:00
Chris Fallin
c35904a8bf Merge pull request #2278 from akirilov-arm/load_splat
Introduce the Cranelift IR instruction `LoadSplat`
2020-10-28 12:54:03 -07:00
Leonardo Yvens
bde9555793 Add Trap::trap_code (#2309)
* add Trap::trap_code

* Add non-exhaustive wasmtime::TrapCode

* wasmtime: Better document TrapCode

* move and refactor test
2020-10-27 16:30:45 -05:00
Julian Seward
c15d9bd61b CL/aarch64: implement the wasm SIMD pseudo-max/min and FP-rounding instructions
This patch implements, for aarch64, the following wasm SIMD extensions

  Floating-point rounding instructions
  https://github.com/WebAssembly/simd/pull/232

  Pseudo-Minimum and Pseudo-Maximum instructions
  https://github.com/WebAssembly/simd/pull/122

The changes are straightforward:

* `build.rs`: the relevant tests have been enabled

* `cranelift/codegen/meta/src/shared/instructions.rs`: new CLIF instructions
  `fmin_pseudo` and `fmax_pseudo`.  The wasm rounding instructions do not need
  any new CLIF instructions.

* `cranelift/wasm/src/code_translator.rs`: translation into CLIF; this is
  pretty much the same as any other unary or binary vector instruction (for
  the rounding and the pmin/max respectively)

* `cranelift/codegen/src/isa/aarch64/lower_inst.rs`:
  - `fmin_pseudo` and `fmax_pseudo` are converted into a two instruction
    sequence, `fcmpgt` followed by `bsl`
  - the CLIF rounding instructions are converted to a suitable vector
    `frint{n,z,p,m}` instruction.

* `cranelift/codegen/src/isa/aarch64/inst/mod.rs`: minor extension of `pub
  enum VecMisc2` to handle the rounding operations.  And corresponding `emit`
  cases.
2020-10-26 10:37:07 +01:00
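For reference, a scalar sketch of the pseudo-min/max semantics behind `fmin_pseudo` and `fmax_pseudo` (applied per lane in the vector forms): unlike IEEE min/max they are a single compare-and-select, so NaN and signed-zero behavior follows directly from the comparison.

```rust
// Sketch of the compare-and-select definition used by the SIMD proposal.
fn pseudo_min(x: f32, y: f32) -> f32 {
    if y < x { y } else { x }
}

fn pseudo_max(x: f32, y: f32) -> f32 {
    if x < y { y } else { x }
}

fn main() {
    assert_eq!(pseudo_min(1.0, 2.0), 1.0);
    assert_eq!(pseudo_max(1.0, 2.0), 2.0);
    // Any comparison with NaN is false, so the first operand is returned.
    assert!(pseudo_min(f32::NAN, 1.0).is_nan());
    assert_eq!(pseudo_min(1.0, f32::NAN), 1.0);
}
```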
Andrew Brown
6ebbab61b9 Update cfg-if dependency 2020-10-23 16:50:51 -07:00
Yury Delendik
de4af90af6 machinst x64: New backend unwind (#2266)
Adds unwind info for the experimental x64 backend. The preliminary code enables backtraces with the SystemV calling convention.
2020-10-23 15:19:41 -05:00
Julian Seward
2702942050 CL/aarch64 back end: implement the wasm SIMD bitmask instructions
The `bitmask.{8x16,16x8,32x4}` instructions do not map neatly to any single
AArch64 SIMD instruction, and instead need a sequence of around ten
instructions.  Because of this, this patch is somewhat longer and more complex
than it would be for (e.g.) x64.

Main changes are:

* the relevant testsuite test (`simd_boolean.wast`) has been enabled on aarch64.

* at the CLIF level, add a new instruction `vhigh_bits`, into which these wasm
  instructions are to be translated.

* in the wasm->CLIF translation (code_translator.rs), translate into
  `vhigh_bits`.  This is straightforward.

* in the CLIF->AArch64 translation (lower_inst.rs), translate `vhigh_bits`
  into equivalent sequences of AArch64 instructions.  There is a different
  sequence for each of the `{8x16, 16x8, 32x4}` variants.

All other changes are AArch64-specific, and add instruction definitions needed
by the previous step:

* Add two new families of AArch64 instructions: `VecShiftImm` (vector shift by
  immediate) and `VecExtract` (effectively a double-length vector shift)

* To the existing AArch64 family `VecRRR`, add a `zip1` variant.  To the
  `VecLanesOp` family add an `addv` variant.

* Add supporting code for the above changes to AArch64 instructions:
  - getting the register uses (`aarch64_get_regs`)
  - mapping the registers (`aarch64_map_regs`)
  - printing instructions
  - emitting instructions (`impl MachInstEmit for Inst`).  The handling of
    `VecShiftImm` is a bit complex.
  - emission tests for new instructions and variants.
2020-10-23 05:26:25 +02:00
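A plain-Rust sketch of what `vhigh_bits` / wasm `bitmask` computes (result semantics only; the aarch64 lowering reaches it via shifts, `zip1` and `addv`): the top bit of each lane, packed into the low bits of a scalar.

```rust
// Sketch of the result semantics for the 8x16 variant.
fn bitmask_i8x16(lanes: [i8; 16]) -> u16 {
    let mut mask = 0u16;
    for (i, lane) in lanes.iter().enumerate() {
        if *lane < 0 {
            mask |= 1 << i; // lane i's sign bit becomes bit i of the result
        }
    }
    mask
}

fn main() {
    let mut v = [0i8; 16];
    v[0] = -1;
    v[3] = -128;
    assert_eq!(bitmask_i8x16(v), 0b1001);
}
```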
Yury Delendik
b10e027fef Refactor UnwindInfo codes and frame_register (#2307)
* Refactor UnwindInfo codes and frame_register

* use isa word_size

* fix filetests

* Add comment about UnwindCode::PushRegister
2020-10-22 14:52:42 -05:00
Julian Seward
ab65d8f10c wasm->CLIF translation: consistently bitcast V128 values that are block formal parameters.
In the current translation of wasm (128-bit) SIMD into CLIF, we work around differences in the
type system models of wasm vs CLIF by inserting `bitcast` (a no-op cast) CLIF instructions before
more or less every use of a SIMD value.  Unfortunately this was not being done consistently and
even small examples with a single if-then-else diamond that produces a SIMD value could cause a
verification failure downstream.  In this case, the jump out of the "else" block needed a
bitcast, but didn't have one.

This patch wraps creation of CLIF jumps and conditional branches up into three functions,
`canonicalise_then_jump` and `canonicalise_then_br{z,nz}`, and uses them consistently.  They
first cast the relevant block formal parameters, then generate the relevant kind of branch/jump.
Hence, provided they are also used consistently in future to generate branches/jumps in this
file, we are protected against such failures.

The patch also adds a large(ish) comment at the top explaining this in more detail.
2020-10-21 17:43:49 +02:00
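An abstract sketch of the pattern, using stub types rather than cranelift-frontend's real `FunctionBuilder`: branch creation is funneled through a helper that bitcasts any V128 block arguments first, so no call site can forget the cast.

```rust
// Sketch only: stand-in builder that records what it would emit.
#[derive(Clone, Copy, Debug)]
struct Value(u32);
#[derive(Clone, Copy, Debug)]
struct Block(u32);

struct Builder {
    next_value: u32,
    log: Vec<String>,
}

impl Builder {
    // Insert a (no-op) bitcast only for V128 values, mirroring the wasm/CLIF
    // type-system mismatch described in the commit.
    fn bitcast_if_v128(&mut self, v: Value, is_v128: bool) -> Value {
        if !is_v128 {
            return v;
        }
        let out = Value(self.next_value);
        self.next_value += 1;
        self.log.push(format!("v{} = bitcast v{}", out.0, v.0));
        out
    }

    // The only way this sketch emits a jump: arguments are canonicalised first.
    fn canonicalise_then_jump(&mut self, dest: Block, args: &[(Value, bool)]) {
        let casted: Vec<Value> = args
            .iter()
            .map(|&(v, is_v128)| self.bitcast_if_v128(v, is_v128))
            .collect();
        self.log.push(format!("jump block{} {:?}", dest.0, casted));
    }
}

fn main() {
    let mut b = Builder { next_value: 10, log: Vec::new() };
    // One scalar argument and one V128 argument; only the latter gets a bitcast.
    b.canonicalise_then_jump(Block(2), &[(Value(1), false), (Value(2), true)]);
    for line in &b.log {
        println!("{line}");
    }
}
```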
Johnnie Birch
f27c0f3434 Adds support for signed packed integer conversion to float
f32x4.convert_i32x4_s
2020-10-16 14:16:53 -07:00
Yury Delendik
3c68845813 Cranelift: refactoring of unwind info (#2289)
* factor common code

* move fde/unwind emit to more abstract level

* code_len -> function_size

* speedup block scanning

* better function_size calculation

* Rename UnwindCode enums
2020-10-15 08:34:50 -05:00
Andrew Brown
a26e9e9a20 [machinst x64]: lower load_splat using memory addressing 2020-10-14 09:43:33 -07:00
Andrew Brown
d990dd4c9a [machinst x64]: add source locations to more instruction formats
In order to register traps for `load_splat`, several instruction formats need knowledge of `SourceLoc`s; however, since the x64 backend does not correctly and completely register traps for `RegMem::Mem` variants, I opened https://github.com/bytecodealliance/wasmtime/issues/2290 to discuss and resolve this issue. In the meantime, the current behavior (i.e. remaining largely unaware of `SourceLoc`s) is retained.
2020-10-14 09:43:33 -07:00
Anton Kirilov
e0b911a4df Introduce the Cranelift IR instruction LoadSplat
It corresponds to WebAssembly's `load*_splat` operations, which
were previously represented as a combination of `Load` and `Splat`
instructions. However, there are architectures such as Armv8-A
that have a single machine instruction equivalent to the Wasm
operations. In order to generate it, it is necessary to merge the
`Load` and the `Splat` in the backend, which is not possible
because the load may have side effects. The new IR instruction
works around this limitation.

The AArch64 backend leverages the new instruction to improve code
generation.

Copyright (c) 2020, Arm Limited.
2020-10-14 13:07:13 +01:00
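A plain-Rust sketch of the semantics being fused (illustrative only): `load*_splat` reads one element and replicates it across every lane, which is what the single `LoadSplat` IR instruction lets the AArch64 backend emit directly instead of a separate side-effecting load plus `splat`.

```rust
// Sketch of the result semantics for the 32-bit variant; not backend code.
use std::convert::TryInto;

fn load32_splat(memory: &[u8], addr: usize) -> Option<[u32; 4]> {
    // Load one 32-bit element from linear memory...
    let bytes: [u8; 4] = memory.get(addr..addr + 4)?.try_into().ok()?;
    let element = u32::from_le_bytes(bytes);
    // ...and broadcast it to all four lanes.
    Some([element; 4])
}

fn main() {
    let memory = [0xAA, 0xBB, 0xCC, 0xDD, 0x00];
    assert_eq!(load32_splat(&memory, 0), Some([0xDDCCBBAA; 4]));
    // An out-of-bounds access would trap in wasm; the sketch just returns None.
    assert_eq!(load32_splat(&memory, 4), None);
}
```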