Clarify that they're executed not only around imports but also around
function calls, and spell out the semantics around traps a bit more clearly.
As discussed in #3035, most backends have explicit
`unimplemented!(...)` match-arms for opcode lowering cases that are not
yet implemented; this allows the backend maintainer to easily see what
is not yet implemented, and avoiding a catch-all wildcard arm is less
error-prone as opcodes are added in the future.
However, the x64 backend was the exception: as @akirilov-arm pointed
out, it had a wildcard match arm. This fixes the issue by explicitly
listing all opcodes the x64 backend does not yet implement.
As per our tests, these opcodes are not used or needed by Wasm lowering, but it is good to know that they exist so that we can eventually either support or remove them.
This was a good exercise for me as I wasn't aware of a few of these in
particular: e.g., aarch64 supports `bmask` while x64 does not, and there
isn't a good reason why x64 shouldn't, especially if others hope to use
Cranelift as a SIMD-capable general codegen in the future.
The `unimplemented!()` cases are separate from `panic!()` ones: my
convention here was to split out those that are logically just *missing*
from those that should be *impossible*, mostly due to expected removal
by legalization before we reach the lowering step.
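A minimal sketch of that convention (the enum and opcode names below are stand-ins chosen for illustration, not the actual Cranelift definitions):
```rust
// Minimal sketch of the convention; the enum and variants stand in for
// Cranelift's `Opcode` and are chosen purely for illustration.
enum Opcode {
    Bmask,
    IfcmpSp,
}

fn lower(op: Opcode) {
    match op {
        // Lowering that is genuinely not written yet: missing work.
        Opcode::Bmask => unimplemented!("bmask is not yet implemented on x64"),
        // Expected to be removed by legalization before lowering, so reaching
        // this arm would indicate a compiler bug rather than missing work.
        Opcode::IfcmpSp => panic!("ifcmp_sp should have been legalized before lowering"),
    }
}
```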
Wasmtime was updated in #3013 to reject creation of memories exactly 4GB in size, but the fuzzers still assumed that any request to create a host object for a particular wasm type would succeed. A request to create a 4GB memory now fails; this is an expected failure, so the fix here is to catch the error and allow it.
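The shape of the fix, as a hedged sketch (the surrounding fuzzer plumbing and names are assumed, not quoted from the actual change):
```rust
use wasmtime::{AsContextMut, Memory, MemoryType};

// Sketch only: the fuzzer previously assumed memory creation could not fail.
// A 4GB-minimum memory is now rejected, so treat that error as an expected
// outcome and bail out of the test case instead of panicking.
fn try_create_memory(store: impl AsContextMut, ty: MemoryType) -> Option<Memory> {
    match Memory::new(store, ty) {
        Ok(memory) => Some(memory),
        // Expected failure, e.g. a memory whose minimum size is exactly 4GB.
        Err(_) => None,
    }
}
```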
* wasi-common: update wasi submodule
This updates the WASI submodule, pulling in changes to the witx crate,
now that there is a 0.9.1 version including some bug fixes. See
WebAssembly/WASI#434 for more information.
* wiggle: update witx dependencies
* publish: verify and vendor witx-cli
* adjust root workspace members
This commit removes some items from the root manifest's workspace
members array, and adds `witx-cli` to the root `workspace.exclude`
array.
The motivation for this stems from a cargo bug described in
rust-lang/cargo#6745: `workspace.exclude` does not work if it is nested
under a `workspace.members` path.
See WebAssembly/WASI#438 for the underlying change to the WASI submodule
which reorganized the `witx-cli` crate, and WebAssembly/WASI#398 for the
original PR introducing `witx-cli`.
See [this
comment](https://github.com/bytecodealliance/wasmtime/pull/3025#issuecomment-867741175)
for more details about the compilation errors, and failed alternative
approaches that necessitated this change.
N.B. This is not a functional change: these crates are still implicitly workspace members as transitive dependencies, but this allows us to side-step the aforementioned cargo bug.
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
These backends will be removed in the future (see
bytecodealliance/rfcs#12 and the pending #3009 in this repo).
In the meantime, to more clearly communicate that they are using
"legacy" APIs and will eventually be removed, this PR places them in an
`isa/legacy/` subdirectory. No functional changes otherwise.
Implement the `TlsValue` opcode in the aarch64 backend for ELF_GD.
This is a little unusual, since other compilers default to TLS descriptors on aarch64. However, we currently only recognize `elf_gd`, so let's start with that as the TLS implementation.
This increases the timeout from 50ms to 200ms, which makes the tests reliably pass on my machine using the CI scripts against the s390x-linux-user qemu target.
This commit slims down the list of builtin intrinsics. It removes the
duplicated intrinsics for imported and locally defined items, instead
always using one intrinsic for both. This was previously applied inconsistently: some intrinsics got two copies (one for imported items, one for local) while others got only one. This does add an extra
branch in intrinsics since they need to determine whether something is
local or not, but that's generally much lower cost than the intrinsics
themselves.
This also removes the `memory32_size` intrinsic, instead inlining the
codegen directly into the clif IR. This matches what the `table.size`
instruction does and removes the need for a few functions on a
`wasmtime_runtime::Instance`.
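A rough sketch of what that inlined codegen looks like (the builder setup, the vmctx value, and the length-field offset are assumptions here, not the actual wasmtime-cranelift code):
```rust
use cranelift_codegen::ir::{self, immediates::Offset32, Type, Value};
use cranelift_frontend::FunctionBuilder;

// Rough sketch only: emit the `memory.size` computation directly into the
// CLIF being built, instead of calling a `memory32_size` builtin at runtime.
// `current_length_offset` (where the length lives relative to `vmctx`) is an
// assumed parameter for illustration.
fn emit_memory_size(
    builder: &mut FunctionBuilder,
    pointer_type: Type,
    vmctx: Value,
    current_length_offset: i32,
) -> Value {
    // Load the linear memory's current byte length out of the vmctx...
    let byte_len = builder.ins().load(
        pointer_type,
        ir::MemFlags::trusted(),
        vmctx,
        Offset32::new(current_length_offset),
    );
    // ...and convert bytes to 64KiB wasm pages with a shift.
    builder.ins().ushr_imm(byte_len, 16)
}
```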
This commit removes some one-use methods to inline them at their use
site, and otherwise adds bounds checks to other functions like
`imported_function` where previously the `FuncIndex` may have been
accidentally out of bounds, which would cause memory unsafety. There's
no actual bug this was fixing, just trying to improve the safety of the
code internally a little.
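The general shape of the hardening, as a sketch (types are simplified stand-ins, not the actual `Instance` internals):
```rust
// Sketch only: types are simplified stand-ins for the wasmtime_runtime
// internals. The point is to use a checked lookup so an out-of-range index
// panics immediately instead of reading out of bounds.
struct FunctionImport {/* resolved import data */}

fn imported_function(imports: &[FunctionImport], index: usize) -> &FunctionImport {
    imports
        .get(index)
        .expect("FuncIndex out of bounds in imported_function")
}
```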
The `current_length` member is defined as `usize` in Rust code, but generated wasm code refers to it as if it were `u32`. While this happens to mostly work on little-endian machines (as long as the length is < 4GB), it will always fail on big-endian machines.
Fixed by making `current_length` a `u32` in Rust as well, and ensuring the actual memory size is always less than 4GB.
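A small illustration of why the old layout only happened to work on little-endian hosts (pure illustration, not wasmtime code):
```rust
// Pure illustration: read the first four bytes of a `usize`-sized length, the
// way the generated 32-bit accesses effectively did.
fn read_first_four_bytes(length: usize) -> u32 {
    let bytes = length.to_ne_bytes();
    u32::from_ne_bytes([bytes[0], bytes[1], bytes[2], bytes[3]])
}

fn main() {
    let length: usize = 70_000;
    // On a little-endian 64-bit host the first four bytes are the low half of
    // the value, so this "works" as long as the length fits in 32 bits. On a
    // big-endian host they are the high half, so the result here would be 0.
    println!("{}", read_first_four_bytes(length));
}
```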
This adds enough support for the IaddIfcout opcode to make the
code emitted by dynamic_addr work on s390x.
Note: On s390x, the condition code mask that has to be used to
implement unsigned_add_overflow_condition does not match any of
the masks for the "normal" condition codes, so this design is
not really a good match for s390x ...
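For reference, the condition being computed is plain unsigned carry-out of the addition, which in Rust terms is simply (illustrative only):
```rust
// Illustrative only: the condition `dynamic_addr` consumes from `iadd_ifcout`
// is whether the unsigned addition carried out of the result width.
fn unsigned_add_overflows(a: u64, b: u64) -> bool {
    let (_sum, carried) = a.overflowing_add(b);
    carried
}
```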
There has been occasional confusion with the representation that we use
for bool-typed values in registers, at least when these are wider than
one bit. Does a `b8` store `true` as 1, or as all-ones (`0xff`)?
We've settled on the latter because of some use-cases where the wide
bool becomes a mask -- see #2058 for more on this.
This is fine, and transparent, to most operations within CLIF, because
the bool-typed value still has only two semantically-visible states,
namely `true` and `false`.
However, we have to be careful with bool-to-int conversions. `bint` on
aarch64 correctly masked the all-ones value down to 0 or 1, as required
by the instruction specification, but on x64 it did not. This PR fixes
that bug and makes x64 consistent with aarch64.
While staring at this code I realized that `bextend` was also not
consistent with the all-ones invariant: it should do a sign-extend, not
a zero-extend as it previously did. This is also rectified and tested.
(Aarch64 also already had this case implemented correctly.)
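Illustrating the required semantics in plain Rust (this is not the backend code itself):
```rust
// Plain-Rust illustration of the required semantics, not backend code.
fn main() {
    // A `b8` holding `true` is represented in a register as all-ones.
    let b8_true: u8 = 0xff;

    // `bint` must produce exactly 0 or 1, so the all-ones value gets masked.
    let as_int: u32 = (b8_true & 1) as u32;
    assert_eq!(as_int, 1);

    // `bextend` must preserve the all-ones invariant in the wider bool type,
    // which means sign-extension rather than zero-extension.
    let as_b16: u16 = (b8_true as i8) as i16 as u16;
    assert_eq!(as_b16, 0xffff);
}
```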
Fixes #3003.
This code assumed that the Dirent structure has the same memory layout on the host (Rust code) as in wasm code. This is not true if the host is big-endian, as wasm is always little-endian.
Fixed by always byte-swapping Dirent fields to little-endian before passing them on to wasm code.
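A sketch of the shape of that fix (the field names and layout below are illustrative, not the exact wasi-common definitions):
```rust
// Sketch only: field names and layout are illustrative. Convert every
// multi-byte field to little-endian before copying the bytes into wasm
// memory, since wasm memory is always little-endian.
struct Dirent {
    d_next: u64,
    d_ino: u64,
    d_namlen: u32,
    d_type: u8,
}

fn dirent_to_wasm_bytes(d: &Dirent) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(&d.d_next.to_le_bytes());
    out.extend_from_slice(&d.d_ino.to_le_bytes());
    out.extend_from_slice(&d.d_namlen.to_le_bytes());
    out.push(d.d_type);
    out
}
```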
`truncate_last_branch` removes an instruction that has already been added to the buffer, and must update various bits of bookkeeping. However, the way it updated the `srclocs` field was incorrect: if a srclocs entry spans both the removed branch *and some previous instruction*, the whole entry was removed, leaving those previous instructions uncovered by any srclocs record. This can cause subsequent problems, e.g. if one of those instructions traps.
Fixed by truncating the srclocs record in this case, instead of fully removing it.
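The fix, sketched with stand-in types (the real bookkeeping lives in `MachBuffer`; the names here are assumptions):
```rust
// Stand-in types for illustration; not the actual MachBuffer bookkeeping.
struct SrcLocRecord {
    start: usize, // code offset where this record starts covering
    end: usize,   // code offset where coverage ends (exclusive)
}

// When truncating the buffer back to `new_end`, a record that also covers
// instructions *before* the removed branch must be truncated rather than
// dropped, so those earlier instructions stay covered by a srcloc.
fn truncate_srclocs(records: &mut Vec<SrcLocRecord>, new_end: usize) {
    while let Some(last) = records.last_mut() {
        if last.start >= new_end {
            // Record only covers removed code: drop it entirely.
            records.pop();
        } else {
            // Record spans the truncation point: shrink it instead.
            last.end = last.end.min(new_end);
            break;
        }
    }
}
```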
This commit fixes the store's enter/exit hooks for calls into wasm, which accidentally weren't run for an instance's `start` function. The fix here was mostly to sink the enter/exit hooks much lower in the code, into `invoke_wasm_and_catch_traps`, which is the common entry point for all wasm calls.
This did involve propagating the `StoreContext<T>` generic rather than using `StoreOpaque`, unfortunately, but it is overall not too much code and we generally wanted most of it inlined anyway.
This commit adds a `#[link]` annotation to the block defining symbols coming from a native static library that we build and link. This is required by rustc for the symbols to be exported correctly when wasmtime is linked into a Rust dynamic library instead of always as an rlib.
While I was at it I went ahead and renamed the symbols now that they're
no longer in C++ and they're doing setjmp/longjmp and not much else.
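For reference, the attribute looks roughly like this (the library and symbol names below are placeholders, not the exact ones in the tree):
```rust
// Placeholder names for illustration only. The point is the `#[link]`
// attribute on the extern block, which rustc needs in order to export these
// symbols correctly when wasmtime ends up inside a Rust dynamic library.
#[link(name = "wasmtime-helpers", kind = "static")]
extern "C" {
    fn wasmtime_setjmp_example(jmp_buf: *mut u8) -> i32;
    fn wasmtime_longjmp_example(jmp_buf: *mut u8) -> !;
}
```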
Closes #3006
Lowering icmp was duplicated across callers that only cared about
flags, and callers that only cared about the bool result.
Merge both callers into `lower_icmp`, which does the correct thing depending on a new `IcmpOutput` parameter.
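A sketch of what that parameter expresses (variant names are assumptions for illustration):
```rust
// Sketch only; variant names are illustrative. `lower_icmp` takes a value of
// this kind so one lowering path can serve both sets of callers.
enum IcmpOutput {
    // The caller wants a materialized bool result in a register.
    Register,
    // The caller only wants the flags set, e.g. to feed a conditional branch.
    Flags,
}
```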
* Add guard pages to the front of linear memories
This commit implements a safety feature for Wasmtime to place guard
pages before the allocation of all linear memories. Guard pages placed
after linear memories are typically present for performance (at least) because they can help elide bounds checks. Guard pages before a linear memory, however, are never strictly needed for performance or features.
The intention of a preceding guard page is to help insulate against bugs
in Cranelift or other code generators, such as CVE-2021-32629.
This commit adds a `Config::guard_before_linear_memory` configuration
option, defaulting to `true`, which indicates whether guard pages should
be present both before linear memories as well as afterwards. Guard
regions continue to be controlled by
`{static,dynamic}_memory_guard_size` methods.
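For example, configuring the new knob alongside the existing guard-size settings looks roughly like this (a sketch; the guard size value is arbitrary):
```rust
use wasmtime::{Config, Engine};

// Sketch: `guard_before_linear_memory` defaults to `true`, so this only makes
// the default explicit; the trailing guard region is still controlled by the
// existing guard-size settings.
fn build_engine() -> anyhow::Result<Engine> {
    let mut config = Config::new();
    config.guard_before_linear_memory(true);
    config.static_memory_guard_size(2 << 30); // arbitrary 2GiB trailing guard
    Engine::new(&config)
}
```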
The implementation here affects both on-demand allocated memories as
well as the pooling allocator for memories. For on-demand memories this
adjusts the size of the allocation as well as adjusts the calculations
for the base pointer of the wasm memory. For the pooling allocator this
will place a singular extra guard region at the very start of the
allocation for memories. Since linear memories in the pooling allocator are laid out contiguously, every memory after the first already had a preceding guard region, namely the trailing guard region of the previous memory. Only the first memory needed this extra guard.
I've attempted to write some tests to help test all this, but this is
all somewhat tricky to test because the settings are pretty far away
from the actual behavior. I think, though, that the tests added here
should help cover various use cases and help us have confidence in
tweaking the various `Config` settings beyond their defaults.
Note that this also contains a semantic change where
`InstanceLimits::memory_reservation_size` has been removed. Instead this
field is now inferred from the `static_memory_maximum_size` and guard
size settings. This should hopefully remove some duplication in these
settings, canonicalizing on the guard-size/static-size settings as the
way to control memory sizes and virtual reservations.
* Update config docs
* Fix a typo
* Fix benchmark
* Fix wasmtime-runtime tests
* Fix some more tests
* Try to fix uffd failing test
* Review items
* Tweak 32-bit defaults
Makes the pooling allocator a bit more reasonable by default on 32-bit
with these settings.