This is sometimes useful when performing analyses on the generated
machine code: for example, some kinds of code verifiers will want to do
a control-flow analysis, and it is much easier to do this if one does
not have to recover the CFG from the machine code (doing so requires
heavyweight analysis when indirect branches are involved). If one trusts
the control-flow lowering and only needs to verify other properties of
the code, having the CFG provided directly avoids that work entirely.
Looks like GitHub Actions takes 10m+ to upload the documentation and
nearly 10 minutes to download it. I suspect this has to do with the
creation of thousands of files, and using `tar` here is likely much
faster. Let's test it out!
* wasmtime-wasi: re-exporting this WasiCtxBuilder was shadowing the right one
wasi-common's WasiCtxBuilder is really only useful for wasi_cap_std_sync
and wasi_tokio to implement their own Builder on top of.
This re-export of wasi-common's is (1) not useful and (2) shadows the
re-export of the right one in sync::*.
* wasi-common: eliminate WasiCtxBuilder; put the builder methods on WasiCtx instead
* delete wasi-common::WasiCtxBuilder altogether and just put those
methods directly on &mut WasiCtx (see the sketch after this list).
As a bonus, the sync and tokio WasiCtxBuilder::build functions
are no longer fallible!
* bench fixes
* more test fixes
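A minimal sketch of the WasiCtx change described above, with
hypothetical field and method names (the real wasi-common builder has
many more options):

```rust
// Sketch only: the former builder methods now live directly on
// WasiCtx and take `&mut self`, so no separate builder type and no
// fallible `build` step are needed at this layer.
pub struct WasiCtx {
    args: Vec<String>,
    env: Vec<(String, String)>,
}

impl WasiCtx {
    pub fn new() -> Self {
        WasiCtx { args: Vec::new(), env: Vec::new() }
    }

    // Each method mutates the context in place and returns `&mut Self`
    // so calls can still be chained like a builder.
    pub fn arg(&mut self, arg: &str) -> &mut Self {
        self.args.push(arg.to_string());
        self
    }

    pub fn env(&mut self, key: &str, value: &str) -> &mut Self {
        self.env.push((key.to_string(), value.to_string()));
        self
    }
}

fn main() {
    let mut ctx = WasiCtx::new();
    ctx.arg("--help").env("LANG", "C");
    assert_eq!(ctx.args.len(), 1);
}
```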
Fixes a few issues that have been cropping up:
* Update `rustup` on Windows to the latest version to skip over the
  1.24.1 release installed on GitHub Actions, which can fail to install.
* Remove the no-longer-needed `define-llvm-env` action
* Install generic llvm/lldb packages instead of version-specific ones
  whose versions may change over time.
When AVX512VL and AVX512F are available, use a single instruction
(`VCVTUDQ2PS`) instead of a lengthy 9-instruction sequence. This
optimization is a port from the legacy x86 backend.
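For illustration, scalar Rust with the same per-lane semantics that
`VCVTUDQ2PS` provides in one instruction (not the actual lowering); the
multi-instruction fallback exists because SSE's `CVTDQ2PS` only
converts signed lanes:

```rust
// Each unsigned 32-bit lane converts to the nearest f32.
fn u32x4_to_f32x4(v: [u32; 4]) -> [f32; 4] {
    [v[0] as f32, v[1] as f32, v[2] as f32, v[3] as f32]
}

fn main() {
    let out = u32x4_to_f32x4([0, 1, 0x8000_0000, u32::MAX]);
    // Lanes with the high bit set must stay unsigned; this is exactly
    // what a signed conversion would get wrong without a fixup.
    assert_eq!(out[2], 2147483648.0);
    assert_eq!(out[3], 4294967296.0); // u32::MAX rounds up to 2^32
}
```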
Previously, the x64 backend's ABI code would generate a sign-extending
load when loading a less-than-64-bit integer from a spillslot. This is
incorrect: e.g., for an i32 with its high bit set (0x80000000 and
above, viewed as unsigned), this would result in all upper bits of the
64-bit register being set.
This interacts poorly with another optimization. Normally, the invariant
is that the high bits of a register holding a value of a certain type,
beyond that type's bits, are undefined. However, as an optimization, we
recognize and use the fact that on x86-64, 32-bit instructions zero the
upper 32 bits. This allows us to elide a 32-to-64-bit zero-extend op
(turning it into just a move, which can then sometimes disappear
entirely due to register coalescing).
If a spill and reload happen between the production of a 32-bit value
from an instruction known to zero the upper bits and its use, then we
will rely on zero upper bits that might actually be set by a
sign-extend. This will result in incorrect execution.
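A plain-Rust illustration of the mismatch (not Cranelift code):

```rust
fn main() {
    let v: u32 = 0x8000_0001; // 32-bit value with its high bit set

    // A 32-bit x86-64 instruction writes zeros to the upper 32 bits,
    // so the backend may elide an explicit 32-to-64-bit zero-extend:
    let zero_extended = v as u64;
    assert_eq!(zero_extended, 0x0000_0000_8000_0001);

    // A sign-extending reload instead replicates bit 31 into the
    // upper half, so the elided zero-extend now observes garbage:
    let sign_extended = v as i32 as i64 as u64;
    assert_eq!(sign_extended, 0xFFFF_FFFF_8000_0001);
}
```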
As a fix, we stick to a simple invariant: we always spill and reload a
full 64 bits when handling integer registers on x64. This ensures that
no bits are mangled.
This change implements `vselect` using SSE4.1's `BLENDVPS`, `BLENDVPD`,
and `PBLENDVB`. `vselect` is a lane-selecting instruction that is used
by
[simple_preopt.rs](fa1faf5d22/cranelift/codegen/src/simple_preopt.rs (L947-L999))
to lower `bitselect` to a single x86 instruction when the condition mask
is known to be boolean (all 1s or 0s, e.g., from a comparison). This is
better than `bitselect` in general, which lowers to 4-5 instructions.
The old backend had the `vselect` lowering; this simply introduces it to
the new backend.
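As a scalar sketch of the per-lane semantics involved (illustrative
only): the SSE4.1 blends consult only each lane's most significant bit,
which is why the mask must be a boolean vector (all 1s or all 0s per
lane) for the result to match `bitselect`.

```rust
// `vselect` picks each lane from `a` or `b` based on the condition
// lane; like the hardware blends, only the lane's top bit matters.
fn vselect_i32x4(cond: [u32; 4], a: [u32; 4], b: [u32; 4]) -> [u32; 4] {
    let mut out = [0u32; 4];
    for i in 0..4 {
        out[i] = if cond[i] >> 31 != 0 { a[i] } else { b[i] };
    }
    out
}

fn main() {
    let cond = [u32::MAX, 0, u32::MAX, 0]; // boolean mask
    assert_eq!(
        vselect_i32x4(cond, [1, 2, 3, 4], [5, 6, 7, 8]),
        [1, 6, 3, 8]
    );
}
```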
Previously the inclusion of the `criterion` crate had brought in a
transitive dependency on `cast`, which used old versions of several
libraries. Now that https://github.com/japaric/cast.rs/pull/26 is merged
and a new version published, we can update `cast` and remove the
cargo-deny rules for the duplicated, older versions.
Since `uadd_sat`, `sadd_sat`, `usub_sat`, and `ssub_sat` are now only
available for vector types, this removes the lowering code for the
scalar versions of these instructions in the arm32 and aarch64 backends.
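For reference, the per-lane semantics these instructions compute, shown
with Rust's scalar saturating helpers:

```rust
// Saturating arithmetic clamps to the type's range instead of
// wrapping; in Cranelift these now lower only for vector types.
fn main() {
    assert_eq!(u8::MAX.saturating_add(1), u8::MAX); // uadd_sat
    assert_eq!(i8::MAX.saturating_add(1), i8::MAX); // sadd_sat
    assert_eq!(0u8.saturating_sub(1), 0);           // usub_sat
    assert_eq!(i8::MIN.saturating_sub(1), i8::MIN); // ssub_sat
}
```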
This adds the machinery to encode the VPMULLQ instruction which is
available in AVX512VL and AVX512DQ. When these feature sets are
available, we use this instruction instead of a lengthy 12-instruction
sequence.
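A scalar sketch of what `VPMULLQ` computes per lane (illustrative, not
the lowering itself):

```rust
// A lane-wise 64x64-bit multiply keeping the low 64 bits of each
// product; without AVX512DQ/AVX512VL the backend synthesizes this
// from 32-bit multiplies, shifts, and adds.
fn i64x2_mul(a: [i64; 2], b: [i64; 2]) -> [i64; 2] {
    [a[0].wrapping_mul(b[0]), a[1].wrapping_mul(b[1])]
}

fn main() {
    assert_eq!(i64x2_mul([3, -4], [5, 6]), [15, -24]);
}
```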
Since the lowering of `imul` complicated the other ALU operations it was
matched with and since future commits will alter the multiplication
lowering further, this change moves the `imul` lowering to its own match
block.
This adds benchmarks around module instantiation using criterion.
Both the default (i.e. on-demand) and pooling allocators are tested
sequentially and in parallel using a thread pool.
Instantiation is tested with an empty module, a module with a
single-page linear memory, a module with a larger linear memory and a
data initializer, and a "hello world" Rust WASI program.
Until https://github.com/japaric/cast.rs/pull/26 is resolved, the `cast`
crate will pull in older versions of the `rustc_version`, `semver`, and
`semver-parser` crates. `cast` is a build dependency of `criterion`
which is used for benchmarking and is itself a dev dependency, not a
normal dependency.
This change adds a criterion-enabled benchmark, x64-evex-encoding, to
compare the performance of the builder pattern used to encode EVEX
instructions in the new x64 backend against the function pattern
used to encode EVEX instructions in the legacy x86 backend. At face
value, the results imply that the builder pattern is faster, but no
efforts were made to analyze and optimize these approaches further.
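To make the two shapes concrete, a hypothetical contrast (field and
function names invented for illustration, not the actual Cranelift
encoder API):

```rust
// Builder pattern: accumulate fields on a struct, then emit.
#[derive(Default)]
struct EvexInstruction {
    opcode: u8,
    reg: u8,
    rm: u8,
}

impl EvexInstruction {
    fn opcode(mut self, op: u8) -> Self { self.opcode = op; self }
    fn reg(mut self, reg: u8) -> Self { self.reg = reg; self }
    fn rm(mut self, rm: u8) -> Self { self.rm = rm; self }
    fn encode(&self, sink: &mut Vec<u8>) {
        sink.extend_from_slice(&[self.opcode, self.reg, self.rm]);
    }
}

// Function pattern: one call with every field as a parameter.
fn encode_evex(sink: &mut Vec<u8>, opcode: u8, reg: u8, rm: u8) {
    sink.extend_from_slice(&[opcode, reg, rm]);
}

fn main() {
    let (mut a, mut b) = (Vec::new(), Vec::new());
    EvexInstruction::default().opcode(0x28).reg(1).rm(2).encode(&mut a);
    encode_evex(&mut b, 0x28, 1, 2);
    assert_eq!(a, b);
}
```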
In order to benchmark the encoding code with criterion, the functions
and structures must be public. Moving this code to its own module
(instead of keeping it as a submodule of `inst`) allows `inst` to remain
private. This avoids having to expose and document (or ignore
documenting) the numerous instruction variants in `inst` while allowing
access to the encoding code. This commit changes no functionality.
* Upgrade to the latest versions of gimli, addr2line, object
And adapt to API changes. The new gimli supports Wasm DWARF, resulting
in some simplifications in the debug crate.
* upgrade gimli usage in linux-specific profiling too
* Add "continue" statement after interpreting a wasm local dwarf opcode
When dealing with params that need to be split, we follow the AArch64
ABI and split the value in two, making sure that the argument starts
in an even-numbered xN register.
The Apple ABI does not require this, so on those platforms we start
params anywhere.
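A sketch of the register-assignment rule as described (assumed helper,
not Cranelift's actual ABI code):

```rust
// The standard AArch64 AAPCS starts a split argument in an
// even-numbered xN register; Apple's variant drops that round-up.
fn first_reg_for_split_arg(next_free: usize, apple_abi: bool) -> usize {
    if apple_abi {
        next_free // Apple: start in any register
    } else {
        (next_free + 1) & !1 // AAPCS: round up to an even register
    }
}

fn main() {
    assert_eq!(first_reg_for_split_arg(3, false), 4); // skip x3, use x4/x5
    assert_eq!(first_reg_for_split_arg(3, true), 3);  // Apple: x3/x4 is fine
}
```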