* Use relative `call` instructions between wasm functions
This commit is a relatively major change to the way that Wasmtime
generates code for Wasm modules and how functions call each other.
Prior to this commit, all calls between wasm functions, even those
defined in the same module, were made indirectly through a
register. To implement this the backend would emit an absolute 8-byte
relocation near each call site, load that address into a register,
and then call it. While this technique is simple to implement and easy
to get right, it has two primary downsides:
* Function calls are always indirect, which means they are harder to
predict, resulting in worse performance.
* Generating a relocation-per-function call requires expensive
relocation resolution at module-load time, which can be a large
contributing factor to how long it takes to load a precompiled module.
To fix these issues, at some cost to the previously simple
implementation technique, this commit switches wasm calls within a
module to use the `colocated` flag (in Cranelift-speak), which means
that a relative call instruction is used with a relocation that's
resolved relative to the PC of the call instruction itself.
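For illustration, here's a minimal sketch of what enabling that flag looks like against the Cranelift API of roughly this era (exact paths and the `ExternalName` namespace/index are assumptions):

```rust
use cranelift_codegen::ir::{types, AbiParam, ExtFuncData, ExternalName, FuncRef, Function, Signature};
use cranelift_codegen::isa::CallConv;

// Declare a callee that lives in the same object as the caller. With
// `colocated: true` Cranelift may emit a PC-relative `call` whose relocation
// is resolved relative to the call site rather than an absolute 8-byte slot.
fn declare_local_callee(func: &mut Function) -> FuncRef {
    let mut sig = Signature::new(CallConv::SystemV);
    sig.params.push(AbiParam::new(types::I32));
    let sigref = func.import_signature(sig);
    func.import_function(ExtFuncData {
        name: ExternalName::user(0, 0), // hypothetical namespace/index
        signature: sigref,
        colocated: true,
    })
}
```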
With the `colocated` flag set to `true`, this commit is also able to
move much of the relocation resolution from `wasmtime_jit::link`
into `wasmtime_cranelift::obj` at object-construction time. This
frontloads all relocation work, which means that there are actually no
relocations related to function calls in the final image, solving both
of our points above.
The main gotcha in implementing this technique is that there are
hardware limitations on relative function calls which mean we can't
simply use them blindly. AArch64, for example, can only go +/- 64 MB
from the `bl` instruction to the target, which means that if the
function we're calling is farther away than that, we would fail to
resolve the relocation. On x86_64 the limit is +/- 2 GB, which is much
larger but still theoretically feasible to hit. Consequently the main
increase in implementation complexity comes from handling this issue.
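As a rough sketch of the constraint (illustrative helper, not Cranelift's actual code), the question each call-site relocation has to answer is whether the signed byte displacement to the target fits the instruction's reach; if not, a veneer has to be emitted:

```rust
// Illustrative only: `max_reach` would be the reach of the call instruction on
// the target architecture (on the order of megabytes for AArch64 `bl`,
// gigabytes for x86_64's rel32 call).
fn in_relative_call_range(call_pc: u64, target: u64, max_reach: i64) -> bool {
    let delta = target.wrapping_sub(call_pc) as i64; // signed byte displacement
    (-max_reach..max_reach).contains(&delta)
}
```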
This issue is actually already present in Cranelift itself, and is
internally one of the invariants handled by the `MachBuffer` type. When
generating a function, relative jumps between basic blocks have similar
restrictions. This commit adds new methods for the `MachBackend` trait
and updates the implementation of `MachBuffer` to account for all these
new branches. Specifically the changes to `MachBuffer` are:
* For AArch64 the `LabelUse::Branch26` value now supports veneers, and
AArch64 calls use this to resolve relocations.
* The `emit_island` function has been rewritten internally to handle
some cases which previously didn't come up before, such as:
* When emitting an island the deadline is now recalculated, whereas
previously it was always set infinitely far in the future. This was ok
before since only `Branch19` supported veneers and once it was
promoted no further veneers were supported, so without multiple layers of
promotion the lack of a new deadline was fine.
* When emitting an island all pending fixups had veneers forced if
their branch target wasn't known yet. This was generally ok for
19-bit fixups since the only kind getting a veneer was a 19-bit
fixup, but with mixed kinds it's a bit odd to force veneers for a
26-bit fixup just because a nearby 19-bit fixup needed a veneer.
Instead fixups are now re-enqueued unless they're known to be
out-of-bounds. This may run the risk of generating more islands for
19-bit branches but it should also reduce the number of islands for
between-function calls.
* Otherwise the internal logic was tweaked to ideally be a bit
simpler, but that's a pretty subjective criterion in compilers...
I've added some simple testing of this for now. A synthetic compiler
option was created to insert zero padding between functions, and test
cases exercise various forms of calls that at least need veneers. A
test is also included for x86_64, but it is unfortunately pretty slow
because it requires generating 2 GB of output. I'm hoping for now it's
not too bad, but we can disable the test if it's prohibitive and
otherwise just leave comments in the relevant portions reminding people
to run the ignored test if these parts of the code have changed.
The end result of this commit is that for a large module I'm
working with, the number of relocations dropped to zero, meaning that
nothing actually needs to be done to the text section when it's loaded
into memory (yay!). I haven't run final benchmarks yet, but this is the
last remaining source of significant slowdown when loading modules,
after a number of other PRs land (some active, some that I only have
locally for now).
* Fix arm32
* Review comments
Almost all the tests in the interpreter are already in the runtests
folder so that we can reuse them for the backends. The distinction
between interpreter tests and runtests is no longer very clear, since
they should both support the same clif code, and produce the same results.
Only two interpreter-only test files remain:
* `add.clif` tests the add and jump instructions, both of which are
already covered in other test files, so we remove that file.
* `fibonacci.clif` does a recursive call, which is currently not supported
in the filetest environment, so we keep this test interpreter-only for now.
Perform a search over block predecessors trying to find loops of
unreachable predecessors. We do this by iterating on predecessors and
marking them as visited, stopping if we find a previously visited block
or if we find a block with multiple predecessors.
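A minimal sketch of that walk (hypothetical names; the real code works over Cranelift's CFG and block entities):

```rust
use std::collections::HashSet;

// Follow single-predecessor chains from `start`, marking blocks as visited.
// Revisiting a block means we walked a cycle with no path from the entry
// block, i.e. a loop of unreachable predecessors; a block with zero or
// multiple predecessors ends the search.
fn in_unreachable_loop(start: u32, single_pred: impl Fn(u32) -> Option<u32>) -> bool {
    let mut visited = HashSet::new();
    let mut block = start;
    loop {
        if !visited.insert(block) {
            return true; // previously visited: found a cycle
        }
        match single_pred(block) {
            Some(pred) => block = pred, // exactly one predecessor: keep walking
            None => return false,       // entry block or multiple predecessors
        }
    }
}
```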
This issue was found by the CLIF fuzzer in #3094.
This makes Cranelift use the Rust `alloc` API for its allocations,
rather than directly calling into `libc`, which makes it respect
the `#[global_allocator]` configuration.
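For example, an embedder's allocator choice like the one below now covers Cranelift's internal allocations as well:

```rust
use std::alloc::System;

// Route the program's heap allocations (including those made by dependencies
// such as Cranelift) through the system allocator.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // Cranelift data structures built from here on allocate via `GLOBAL`.
}
```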
Also, use `region::page::ceil` instead of having our own copies of
that logic.
* Implement `IaddCin`, `IaddCout`, and `IaddCarry` for Cranelift interpreter
Implemented the following Opcodes for the Cranelift interpreter:
- `IaddCin` to add two scalar integers with an input carry flag.
- `IaddCout` to add two scalar integers and report overflow with the carry flag.
- `IaddCarry` to add two scalar integers with an input carry flag, reporting overflow with the output carry flag.
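A scalar sketch of the carry semantics these opcodes implement (illustrative helper, not the interpreter's actual code), shown for a 64-bit value:

```rust
// Returns (sum, carry_out): the carry out is set if either the x + y addition
// or the subsequent + carry_in addition wrapped around.
fn iadd_carry(x: u64, y: u64, carry_in: bool) -> (u64, bool) {
    let (sum, c1) = x.overflowing_add(y);
    let (sum, c2) = sum.overflowing_add(carry_in as u64);
    (sum, c1 || c2)
}
```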
Copyright (c) 2021, Arm Limited
* Simplify carry check + add i64 `IaddCarry` tests
Copyright (c) 2021, Arm Limited
* Move tests to `runtests`
Copyright (c) 2021, Arm Limited
* Cranelift AArch64: Simplify leaf functions that do not use the stack
Leaf functions that do not use the stack (e.g. do not clobber any
callee-saved registers) do not need a frame record.
Copyright (c) 2021, Arm Limited.
* Replace some cfg(debug) with cfg(debug_assertions)
Neither Cargo nor rustc ever sets `cfg(debug)` automatically, so it's expected
that these usages were intended to be `cfg(debug_assertions)`.
* Fix MachBuffer debug-assertion invariant checks.
We should only check invariants when we expect them to be true --
specifically, before the branch-simplification algorithm runs. At other
times, they may be temporarily violated: e.g., after
`add_{cond,uncond}_branch()` but before emitting the branch bytes. This
is the expected sequence, and the rest of the code is consistent with
that.
Some of the checks also were not quite right (w.r.t. the written
invariants); specifically, we should not check validity of a label's
offset when the label has been aliased to another label.
It seems that this is an unfortunate consequence of leftover
debug-assertions that weren't actually being run, so weren't kept
up-to-date. Should no longer happen now that we actually check these!
Co-authored-by: Chris Fallin <chris@cfallin.org>
* Implement `Extractlane`, `UaddSat`, and `UsubSat` for Cranelift interpreter
Implemented the `Extractlane`, `UaddSat`, and `UsubSat` opcodes for the interpreter,
and added helper functions for working with SIMD vectors (`extractlanes`, `vectorizelanes`,
and `binary_arith`).
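As a scalar illustration of the saturating semantics (the interpreter applies this per lane via the helpers above; the functions below are hypothetical):

```rust
// Unsigned saturating add/sub for a single 8-bit lane: results clamp at the
// lane's maximum or at zero instead of wrapping.
fn uadd_sat_lane(x: u8, y: u8) -> u8 {
    x.saturating_add(y)
}

fn usub_sat_lane(x: u8, y: u8) -> u8 {
    x.saturating_sub(y)
}
```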
Copyright (c) 2021, Arm Limited
* Re-use tests + constrict Vector assert
- Re-use interpreter tests as runtests where supported.
- Constrict Vector assertion.
- Code style adjustments following feedback.
Copyright (c) 2021, Arm Limited
* Runtest `i32x4` vectors on AArch64; add `i64x2` tests
Copyright (c) 2021, Arm Limited
* Add `simd-` prefix to test filenames
Copyright (c) 2021, Arm Limited
* Return aliased `SmallVec` from `extractlanes`
Using a `SmallVec<[i128; 4]>` allows larger-width 128-bit vectors
(`i32x4`, `i64x2`, ...) to not cause heap allocations.
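Roughly, the return type looks like the alias below (the name is illustrative):

```rust
use smallvec::SmallVec;

// Four inline i128 slots: vectors with four or fewer lanes (i32x4, i64x2, ...)
// stay inline, while narrower-lane vectors with more lanes spill to the heap
// as before.
type LaneValues = SmallVec<[i128; 4]>;
```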
Copyright (c) 2021, Arm Limited
* Accept slice to `vectorizelanes` rather than `Vec`
Copyright (c) 2021, Arm Limited
* cranelift: Add stack support to the interpreter
We also change the approach for heap loads and stores.
Previously we would use the offset as the address to the heap. However,
this approach does not allow using the load/store instructions to
read/write from both the heap and the stack.
This commit changes the addressing mechanism of the interpreter. We now
return the real addresses from the addressing instructions
(stack_addr/heap_addr), and instead check if the address passed into
the load/store instructions points to an area in the heap or the stack.
* cranelift: Add virtual addresses to cranelift interpreter
Adds a Virtual Addressing scheme that was discussed as a better
alternative to returning the real addresses.
The virtual addresses are split into 4 regions (stack, heap, tables and
global values), and the address itself is composed of an `entry` field
and an `offset` field. In general the `entry` field corresponds to the
instance of the resource (e.g. table5 is entry 5) and the `offset` field
is a byte offset inside that entry.
There is one exception to this, which is the stack: since there is only
one stack, the whole address is an offset field.
The number of bits in the entry vs offset fields varies with the
`region` and the address size (32 bits vs 64 bits). This is done
because with 32-bit addresses we would have to compromise on heap size,
or have a small number of global values / tables. With 64-bit addresses
we do not have to compromise on this, but we still need to support
32-bit addresses.
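A rough sketch of the encoding (the field widths below are made up; as noted, the real split varies per region and per address size, and the single-stack special case isn't modeled):

```rust
enum Region { Stack, Heap, Table, GlobalValue }

// Pack a region tag, an entry index (which resource instance), and a byte
// offset within that entry into a single 64-bit virtual address.
fn encode(region: Region, entry: u64, offset: u64) -> u64 {
    let tag = match region {
        Region::Stack => 0u64,
        Region::Heap => 1,
        Region::Table => 2,
        Region::GlobalValue => 3,
    };
    // Assumed split: 2 tag bits, 30 entry bits, 32 offset bits.
    (tag << 62) | ((entry & ((1 << 30) - 1)) << 32) | (offset & 0xffff_ffff)
}
```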
* cranelift: Remove interpreter trap codes
* cranelift: Calculate frame_offset when entering or exiting a frame
* cranelift: Add safe read/write interface to DataValue
* cranelift: DataValue write full 128bit slot for booleans
* cranelift: Use DataValue accessors for trampoline.
* cranelift: Add heap support to filetest infrastructure
* cranelift: Explicit heap pointer placement in filetest annotations
* cranelift: Add documentation about the Heap directive
* cranelift: Clarify that heap filetests pointers must be laid out sequentially
* cranelift: Use wrapping add when computing bound pointer
* cranelift: Better error messages when invalid signatures are found for heap file tests.
Implemented the following Opcodes for the Cranelift interpreter:
- `IsubBin` to subtract two scalar integers with an input borrow flag.
- `IsubBout` to subtract two scalar integers with an output borrow flag.
- `IsubBorrow` to subtract two scalar integers with an input and output borrow flag.
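A scalar sketch of the borrow semantics (illustrative helper, not the interpreter's actual code), shown for a 64-bit value:

```rust
// Returns (difference, borrow_out): the borrow out is set if either the x - y
// subtraction or the subsequent - borrow_in subtraction wrapped around.
fn isub_borrow(x: u64, y: u64, borrow_in: bool) -> (u64, bool) {
    let (diff, b1) = x.overflowing_sub(y);
    let (diff, b2) = diff.overflowing_sub(borrow_in as u64);
    (diff, b1 || b2)
}
```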
Copyright (c) 2021, Arm Limited
* Move `CompiledFunction` into wasmtime-cranelift
This commit moves the `wasmtime_environ::CompiledFunction` type into the
`wasmtime-cranelift` crate. This type has lots of Cranelift-specific
pieces of compilation and doesn't need to be generated by all Wasmtime
compilers. This replaces the usage in the `Compiler` trait with a
`Box<Any>` type that each compiler can select. Each compiler must still
produce a `FunctionInfo`, however, which is shared information we'll
deserialize for each module.
The `wasmtime-debug` crate is also folded into the `wasmtime-cranelift`
crate as a result of this commit. One possibility was to move the
`CompiledFunction` type into its own crate and have `wasmtime-debug`
depend on that, but since `wasmtime-debug` is Cranelift-specific at this
time it didn't seem too necessary to keep it separate.
If `wasmtime-debug` supports other backends in the future we can
recreate a new crate, perhaps with it refactored to not depend on
Cranelift.
* Move wasmtime_environ::reference_type
This now belongs in wasmtime-cranelift and nowhere else
* Remove `Type` reexport in wasmtime-environ
One less dependency on `cranelift-codegen`!
* Remove `types` reexport from `wasmtime-environ`
Less cranelift!
* Remove `SourceLoc` from wasmtime-environ
Change the `srcloc`, `start_srcloc`, and `end_srcloc` fields to a custom
`FilePos` type instead of `ir::SourceLoc`. These are only used in a few
places so there's not much to lose from an extra abstraction for these
leaf use cases outside of cranelift.
* Remove wasmtime-environ's dep on cranelift's `StackMap`
This commit "clones" the `StackMap` data structure in to
`wasmtime-environ` to have an independent representation that that
chosen by Cranelift. This allows Wasmtime to decouple this runtime
dependency of stack map information and let the two evolve
independently, if necessary.
An alternative would be to refactor cranelift's implementation into a
separate crate and have wasmtime depend on that but it seemed a bit like
overkill to do so and easier to clone just a few lines for this.
* Define code offsets in wasmtime-environ with `u32`
Don't use Cranelift's `binemit::CodeOffset` alias to define this field
type since the `wasmtime-environ` crate will be losing the
`cranelift-codegen` dependency soon.
* Commit to using `cranelift-entity` in Wasmtime
This commit removes the reexport of `cranelift-entity` from the
`wasmtime-environ` crate and instead directly depends on the
`cranelift-entity` crate in all referencing crates. The original reason
for the reexport was to make cranelift version bumps easier since it
meant fewer versions to change, but nowadays we have a script to do that.
Otherwise this encourages crates to use whatever they want from
`cranelift-entity` since we'll always depend on the whole crate.
It's expected that the `cranelift-entity` crate will continue to be a
lean crate in dependencies and suitable for use at both runtime and
compile time. Consequently there's no need to avoid its usage in
Wasmtime at runtime, since "remove Cranelift at compile time" is
primarily about the `cranelift-codegen` crate.
* Remove most uses of `cranelift-codegen` in `wasmtime-environ`
There's only one final use remaining, which is the reexport of
`TrapCode`, which will get handled later.
* Limit the glob-reexport of `cranelift_wasm`
This commit removes the glob reexport of `cranelift-wasm` from the
`wasmtime-environ` crate. This is intended to explicitly define what
we're reexporting and is a transitionary step to curtail the amount of
dependencies taken on `cranelift-wasm` throughout the codebase. For
example some functions used by debuginfo mapping are better imported
directly from the crate since they're Cranelift-specific. Note that
this is intended to be a temporary state of affairs; soon this reexport
will be gone entirely.
Additionally this commit reduces imports from `cranelift_wasm` and also
primarily imports from `crate::wasm` within `wasmtime-environ` to get a
better sense of what's imported from where and what will need to be
shared.
* Extract types from cranelift-wasm to cranelift-wasm-types
This commit creates a new crate called `cranelift-wasm-types` and
extracts type definitions from the `cranelift-wasm` crate into this new
crate. The purpose of this crate is to be a shared definition of wasm
types that can be shared both by compilers (like Cranelift) as well as
wasm runtimes (e.g. Wasmtime). This new `cranelift-wasm-types` crate
doesn't depend on `cranelift-codegen` and is the final step in severing
the unconditional dependency from Wasmtime to `cranelift-codegen`.
The final refactoring in this commit is to then reexport this crate from
`wasmtime-environ`, delete the `cranelift-codegen` dependency, and then
update all `use` paths to point to these new types.
The main change of substance here is that the `TrapCode` enum is
mirrored from Cranelift into this `cranelift-wasm-types` crate. While
this unfortunately results in three definitions (one more which is
non-exhaustive in Wasmtime itself) it's hopefully not too onerous and
ideally something we can patch up in the future.
* Get lightbeam compiling
* Remove unnecessary dependency
* Fix compile with uffd
* Update publish script
* Fix more uffd tests
* Rename cranelift-wasm-types to wasmtime-types
This reflects the purpose a bit better: these are types specifically
intended for Wasmtime and its support.
* Fix publish script
Previously cranelift's wasm code generator would emit a raw `store`
instruction for all wasm types, regardless of what the cranelift operand
type was. Cranelift's `store` instruction, however, isn't valid for
boolean vector types. This commit fixes this issue by inserting a
bitcast specifically for the store instruction if a boolean vector type
is being stored, continuing to avoid the bitcast for all other vector types.
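A hedged sketch of the shape of the fix against the builder API of that era (the helper name and details are illustrative, and the exact cast instruction may differ):

```rust
use cranelift_codegen::ir::{types, InstBuilder, MemFlags, Type, Value};
use cranelift_frontend::FunctionBuilder;

// Boolean vector values are reinterpreted as a plain integer vector before the
// `store`; every other type is stored directly, avoiding the extra cast.
fn store_wasm_value(builder: &mut FunctionBuilder, ty: Type, val: Value, addr: Value) {
    let val = if ty.is_vector() && ty.lane_type().is_bool() {
        builder.ins().raw_bitcast(types::I8X16, val)
    } else {
        val
    };
    builder.ins().store(MemFlags::new(), val, addr, 0);
}
```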
Closes #3099
The main purpose for doing this is that this is a large piece of
functionality used by Wasmtime which is entirely independent of
Cranelift. Eventually Wasmtime wants to be able to compile without
Cranelift, but it can't also depend on `cranelift-wasm` in that
situation for module translation which means that something needs to
happen. One option is to refactor what's in `cranelift-wasm` into a
separate crate (since all these pieces don't actually depend on
`cranelift-codegen`), but I personally chose to not do this because:
* The `ModuleEnvironment` trait, AFAIK, has only one primary user:
Wasmtime. The SpiderMonkey integration, for example, does not use it.
* This is an extra layer of abstraction between Wasmtime and the
compilation phase which was a bit of a pain to maintain. It couldn't
be Wasmtime-specific as it was part of Cranelift but at the same time
it had lots of Wasmtime-centric functionality (such as module
linking).
* Updating the "dummy" implementation has become pretty onerous over
time as frequent additions are made and the "dummy" implementation was
never actually used anywhere. This ended up feeling like effectively
busy-work to update this.
For these reasons I've opted to move the meat of `cranelift-wasm`
used by `wasmtime-environ` directly into `wasmtime-environ`. This means
that the only real meat that Wasmtime uses from `cranelift-wasm` is the
function-translation bits in the `wasmtime-cranelift` crate.
The changes in `wasmtime-environ` are largely to inline module parsing
together so it's a bit easier to follow instead of trying to connect
the dots between lots of various function calls.
The tests for the SIMD floating-point maximum and minimum operations
require particular care because the handling of the NaN values is
non-deterministic and may vary between platforms. There is no way to
match several NaN values in a test, so the solution is to extract the
non-deterministic test cases into a separate file that is subsequently
replicated for every backend under test, with adjustments made to the
expected results.
Copyright (c) 2021, Arm Limited.
This commit started off by deleting the `cranelift_codegen::settings`
reexport in the `wasmtime-environ` crate and then basically played
whack-a-mole until everything compiled again. The main result of this is
that the `wasmtime-*` family of crates have generally less of a
dependency on the `TargetIsa` trait and type from Cranelift. While the
dependency isn't entirely severed yet this is at least a significant
start.
This commit is intended to be largely refactorings, no functional
changes are intended here. The refactorings are:
* A `CompilerBuilder` trait has been added to `wasmtime_environ` which
serves as an abstraction used to create compilers and configure them
in a uniform fashion. The `wasmtime::Config` type now uses this
instead of cranelift-specific settings. The `wasmtime-jit` crate
exports the ability to create a compiler builder from a
`CompilationStrategy`, which only works for Cranelift right now. In a
cranelift-less build of Wasmtime this is expected to return a trait
object that fails all requests to compile.
* The `Compiler` trait in the `wasmtime_environ` crate has been souped
up with a number of methods that Wasmtime and other crates needed.
* The `wasmtime-debug` crate is now moved entirely behind the
`wasmtime-cranelift` crate.
* The `wasmtime-cranelift` crate is now only depended on by the
`wasmtime-jit` crate.
* Wasm types in `cranelift-wasm` no longer contain their IR type,
instead they only contain the `WasmType`. This is required to get
everything to align correctly but will also be required in a future
refactoring where the types used by `cranelift-wasm` will be extracted
to a separate crate.
* I moved around a fair bit of code in `wasmtime-cranelift`.
* Some gdb-specific jit-specific code has moved from `wasmtime-debug` to
`wasmtime-jit`.
* Move all trampoline compilation to `wasmtime-cranelift`
This commit moves compilation of all the trampolines used in wasmtime
behind the `Compiler` trait object to live in `wasmtime-cranelift`. The
long-term goal of this is to enable depending on cranelift *only* from
the `wasmtime-cranelift` crate, so by moving these dependencies we
should make that a little more flexible.
* Fix windows build
* Implement the memory64 proposal in Wasmtime
This commit implements the WebAssembly [memory64 proposal][proposal] in
both Wasmtime and Cranelift. In terms of work done Cranelift ended up
needing very little work here since most of it was already prepared for
64-bit memories at one point or another. Most of the work in Wasmtime is
largely refactoring, changing a bunch of `u32` values to something else.
A number of internal and public interfaces are changing as a result of
this commit, for example:
* Accessors on `wasmtime::Memory` that work with pages now all return
`u64` unconditionally rather than `u32`. This makes it possible to
accommodate 64-bit memories with this API, but we may also want to
consider `usize` here at some point since the host can't grow past
`usize`-limited pages anyway.
* The `wasmtime::Limits` structure is removed in favor of
minimum/maximum methods on table/memory types.
* Many libcall intrinsics called by jit code now unconditionally take
`u64` arguments instead of `u32`. Return values are `usize`, however,
since the return value, if successful, is always bounded by host
memory while arguments can come from any guest.
* The `heap_addr` clif instruction now takes a 64-bit offset argument
instead of a 32-bit one. It turns out that the legalization of
`heap_addr` already worked with 64-bit offsets, so this change was
fairly trivial to make.
* The runtime implementation of mmap-based linear memories has changed
to largely work in `usize` quantities in its API and in bytes instead
of pages. This simplifies various aspects and reflects that
mmap-memories are always bound by `usize` since that's what the host
is using to address things, and additionally most calculations care
about bytes rather than pages except for the very edge where we're
going to/from wasm.
Overall I've tried to minimize the amount of `as` casts as possible,
using checked `try_from` and checked arithmetic with either error
handling or explicit `unwrap()` calls to tell us about bugs in the
future. Most locations have relatively obvious things to do with various
implications on various hosts, and I think they should all be roughly of
the right shape but time will tell. I mostly relied on the compiler
complaining that various types didn't line up to figure out where casts
were needed, and I manually audited some of the more obvious locations.
I suspect we have a number of hidden locations that will panic on 32-bit
hosts if 64-bit modules try to run there, but otherwise I think we
should be generally ok (famous last words). In any case I wouldn't want
to enable this by default naturally until we've fuzzed it for some time.
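As an illustration of that pattern (hypothetical helper, not Wasmtime's actual code), a guest-provided page count has to be converted and multiplied without silently wrapping, especially on 32-bit hosts:

```rust
use std::convert::TryFrom;

// Returns None instead of wrapping if the page count doesn't fit the host's
// usize or the byte-size multiplication overflows.
fn pages_to_bytes(pages: u64, bytes_per_page: usize) -> Option<usize> {
    usize::try_from(pages).ok()?.checked_mul(bytes_per_page)
}
```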
In terms of the actual underlying implementation, no one should expect
memory64 to be all that fast. Right now it's implemented with
"dynamic" heaps which have a few consequences:
* All memory accesses are bounds-checked. I'm not sure how aggressively
Cranelift tries to optimize out bounds checks, but I suspect not a ton
since we haven't stressed this much historically.
* Heaps are always precisely sized. This means that every call to
`memory.grow` will incur a `memcpy` of memory from the old heap to the
new. We probably want to at least look into `mremap` on Linux and
otherwise try to implement schemes where dynamic heaps have some
reserved pages to grow into to help amortize the cost of
`memory.grow`.
The memory64 spec test suite is scheduled to now run on CI, but as with
all the other spec test suites it's really not all that comprehensive.
I've tried adding more tests for basic things as I've had to implement
guards for them, but I wouldn't really consider the testing adequate
from just this PR itself. I did try to take care in one test to actually
allocate a 4gb+ heap and then avoid running that in the pooling
allocator or in emulation because otherwise that may fail or take
excessively long.
[proposal]: https://github.com/WebAssembly/memory64/blob/master/proposals/memory64/Overview.md
* Fix some tests
* More test fixes
* Fix wasmtime tests
* Fix doctests
* Revert to 32-bit immediate offsets in `heap_addr`
This commit updates the generation of addresses in wasm code to always
use 32-bit offsets for `heap_addr`, and if the calculated offset doesn't
fit in 32 bits we emit a manual add with an overflow check.
* Disable memory64 for spectest fuzzing
* Fix wrong offset being added to heap addr
* More comments!
* Clarify bytes/pages
I've run up against the `Into`-vs-`From` impls a few times and figured
I'd go ahead and put up a refactoring. This switches `Into` impls into
`From` impls which allows using both traits instead of just the `Into`
version. Additionally this removes a few small `as` casts in favor of
infallible `from`/`into` or `try_from` with error handling.
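For example (the type here is illustrative, not one of the actual types touched by this change), implementing `From` provides the matching `Into` for free via the blanket impl, whereas a hand-written `Into` provides only itself:

```rust
// Hypothetical newtype standing in for the conversions in this refactor.
struct Pages(u64);

impl From<u64> for Pages {
    fn from(n: u64) -> Self {
        Pages(n)
    }
}

fn main() {
    let a = Pages::from(4);     // usable via From...
    let b: Pages = 4u64.into(); // ...and via the blanket Into impl
    assert_eq!(a.0, b.0);
}
```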
This fixes some fuzz bugs that came about when enabling simd: NaN
canonicalization is performed on the fuzzers, but cranelift would panic
on these ops for vectors. This adds some custom codegen with `bitselect`
to ensure any NaN lanes are canonical-NaN lanes in the canonicalized
operations.
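The per-lane effect the `bitselect`-based lowering achieves is roughly the scalar behavior below (illustrative only; the generated code operates on whole vectors):

```rust
// Replace any NaN with a single canonical quiet-NaN bit pattern; non-NaN
// values pass through unchanged.
fn canonicalize_nan_f32(x: f32) -> f32 {
    if x.is_nan() { f32::from_bits(0x7fc0_0000) } else { x }
}
```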
The AArch64 support was a bit broken and was using Armv7 style
barriers, which aren't required with Armv8 acquire-release
load/stores.
The fallback CAS loops and RMW sequences for AArch64 have also been
updated to use acquire-release exclusive instructions, which again
removes the need for barriers. The CAS loop has also been further
optimised by using the extending form of the cmp instruction.
Copyright (c) 2021, Arm Limited.
* Consolidate address calculations for atomics
This commit consolidates all calculations of guest addresses into one
`prepare_addr` function. This notably removes the atomics-specific paths
as well as the `prepare_load` function (now renamed to `prepare_addr`
and folded into `get_heap_addr`).
The goal of this commit is to simplify how addresses are managed in the
code generator so that atomics use all the shared infrastructure of other
loads/stores as well. This additionally fixes #3132 via the use of
`heap_addr` in clif for all operations.
I also added a number of tests for loads/stores with varying alignments.
Originally I was going to allow loads/stores to not be aligned since
that's what the current formal specification says, but the overview of
the threads proposal disagrees with the formal specification, so I
figured I'd leave it as-is but adding tests probably doesn't hurt.
Closes #3132
* Fix old backend
* Guarantee misalignment checks happen before out-of-bounds
* Bump the wasm-tools crates
Pulls in some updates here and there, mostly for updating crates to the
latest version to prepare for later memory64 work.
* Update lightbeam
* Change VMMemoryDefinition::current_length to `usize`
This commit changes the definition of
`VMMemoryDefinition::current_length` to `usize` from its previous
definition of `u32`. This is a pretty impactful change because it also
changes the cranelift semantics of "dynamic" heaps where the bound
global value specifier must now match the pointer type for the platform
rather than the index type for the heap.
The motivation for this change is that the `current_length` field (or
bound for the heap) is intended to reflect the current size of the heap.
This is bound by `usize` on the host platform rather than `u32` or
`u64`. The previous choice of `u32` couldn't represent a 4GB memory
because we couldn't put a number representing 4GB into the
`current_length` field. By using `usize`, which reflects the host's
memory allocation, this should better reflect the size of the heap and
allows Wasmtime to support a full 4GB heap for a wasm program (instead
of 4GB minus one page).
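A simplified sketch of the change (the real struct lives in the runtime crate and carries more invariants and accessors):

```rust
// Shape of the definition after this change; `current_length` was previously
// a u32 and so could not express a full 4 GiB heap.
#[repr(C)]
struct VMMemoryDefinition {
    base: *mut u8,
    current_length: usize,
}
```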
This commit also updates the legalization of the `heap_addr` clif
instruction to appropriately cast the address to the platform's pointer
type, handling bounds checks along the way. The practical impact for
today's targets is that a `uextend` is happening sooner than it happened
before, but otherwise there is no intended impact of this change. In the
future when 64-bit memories are supported there will likely need to be
fancier logic which handles offsets a bit differently (especially in the
case of a 64-bit memory on a 32-bit host).
The clif `filetest` changes should show the differences in codegen, and
the Wasmtime changes are largely removing casts here and there.
Closes #3022
* Add tests for memory.size at maximum memory size
* Add a dfg helper method
PR #3131 fixed the failing builds by allowing this field to be dead.
After looking at it further, the field is not being used and can be
removed completely.
One of the fields of `TargetIsa` isn't used in the
cranelift-codegen-meta crate, but instead of refactoring to try to
remove it, this just adds `#[allow(dead_code)]` for now, on the assumption
that when the old backends go away this will probably go away as well.