* Fix sret for AArch64
AArch64 requires the struct return address argument to be stored in the x8
register. This register is never used for regular arguments.
* Add extra sret tests for x86_64
Implement `tls_value` for s390x in the ELF general-dynamic mode.
Notable differences to the x86_64 implementation are:
- We use a __tls_get_offset libcall instead of __tls_get_addr.
- The current thread pointer (stored in a pair of access registers)
needs to be added to the result of __tls_get_offset.
- __tls_get_offset has a variant ABI that requires the address of
the GOT (global offset table) to be passed in %r12.
This means we need a new libcall entry for __tls_get_offset.
In addition, we also need a way to access _GLOBAL_OFFSET_TABLE_.
The latter is a "magic" symbol with a well-known name defined
by the ABI and recognized by the linker. This patch introduces
a new ExternalName::KnownSymbol variant to support such names
(originally due to @afonso360).
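As a rough sketch, the relevant pieces might look like the following
(modeled on Cranelift's `ExternalName`; treat the exact shape as
illustrative rather than the real definitions):
```rust
/// Symbols with well-known, ABI-defined names that the linker recognizes.
pub enum KnownSymbol {
    /// The magic `_GLOBAL_OFFSET_TABLE_` symbol.
    ElfGlobalOffsetTable,
}

/// An external name referenced from Cranelift IR (simplified).
pub enum ExternalName {
    /// An ordinary user-defined function or data object.
    User { namespace: u32, index: u32 },
    /// A runtime library call, such as `__tls_get_offset`.
    LibCall(LibCall),
    /// A "magic" ABI-defined symbol, resolved by name by the linker.
    KnownSymbol(KnownSymbol),
}

/// A subset of libcalls, extended with the new s390x TLS entry.
pub enum LibCall {
    ElfTlsGetOffset,
    // ...existing libcalls...
}
```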
We also need to emit a relocation on a symbol placed in a
constant pool, as well as an extra relocation on the call
to __tls_get_offset required for TLS linker optimization.
Needed by the cg_clif frontend.
* [fuzz] Fix signature of `i64.extend32_s` single-instruction test
This single-instruction test incorrectly attempted to convert an `i32`
to an `i64`; the correct signature is `i64 -> i64`. See the [WebAssembly
specification](https://webassembly.github.io/spec/core/bikeshed/#a7-index-of-instructions).
* [fuzz] Fix typo in single-instruction function generator
Previously, the `unary!` macro created functions that used two operands
instead of the expected one.
* Update 0.40.0 release notes
Not a ton happened in terms of user-facing improvements here so I
outlined some internal changes as well. The cumulative effect of
improving compile times is Sightglass showing 30-40% improvements for
major benchmarks. Additionally I wrote down a note indicating that this
is likely the last `0.*` release and the next release of Wasmtime on
September 20 is planned to be 1.0.
* Remove perf-related relnotes
* Call out s390x simd at the top-level
* cranelift: Upgrade libm to 0.2.4
This resolves an issue with incorrect `fmaf` results on the x86_64-pc-windows-gnu target for some inputs.
See: #4517
* supply-chain: Vet `libm` 0.2.4
This commit fixes #4600 in a somewhat roundabout fashion. Currently the
`main` branch of Wasmtime exhibits unusual behavior:
* If `./ci/run-tests.sh` is run then the `cache_accounts_for_opt_level`
test does not fail.
* If `cargo test -p wasmtime --lib` is run, however, then the test
fails.
This test is indeed being run as part of `./ci/run-tests.sh` and it's
also passing in CI. The exact failure is that part of the debuginfo
support we have takes an existing ELF image, copies it, and then appends
some information to inform profilers/gdb about the image. This code is
all quite old at this point and not 100% optimal, but that's at least
where we're at.
The problem is that the appended `ProgramHeader64` is not aligned
correctly during `cargo test -p wasmtime --lib`, which triggers the panic
that causes the test to fail. The reason, however, that this test
passes with `./ci/run-tests.sh` is that the alignment of
`ProgramHeader64` is 1 instead of 8. The reason for that is that the
`object` crate has an `unaligned` feature which forcibly unaligns all
primitives to 1 byte instead of their natural alignment. During `cargo
test -p wasmtime --lib` this feature is not enabled but during
`./ci/run-tests.sh` this feature is enabled. The feature is currently
enabled through inclusion of the `backtrace` crate which only happens
for some tests in some crates.
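To illustrate the failure mode with plain `#[repr]` stand-ins (not the
`object` crate's actual generated types), the `packed` attribute mimics
what the `unaligned` feature does:
```rust
use std::mem::align_of;

// Natural layout: 8-byte alignment, so reading a misaligned header
// triggers the alignment panic described above.
#[repr(C)]
struct HeaderAligned {
    p_type: u32,
    p_flags: u32,
    p_offset: u64,
}

// What the `unaligned` feature effectively produces: alignment of 1,
// which tolerates any address and hides the bug.
#[repr(C, packed)]
struct HeaderUnaligned {
    p_type: u32,
    p_flags: u32,
    p_offset: u64,
}

fn main() {
    assert_eq!(align_of::<HeaderAligned>(), 8);
    assert_eq!(align_of::<HeaderUnaligned>(), 1);
}
```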
The alignment issue explains why the test fails in a single-crate test
run but passes in the whole-workspace test run. The next issue I
investigated was whether this test had ever passed. It turns out that on
v0.39.0 this test
passed, and the regression to main was introduced during #4571. That
PR, however, has nothing to do with any of this! The reason it showed up
as causing a "regression" is that it changed cranelift settings, which
changed the size of serialized metadata at the end of a Wasmtime cache
object.
Wasmtime compiled artifacts are ELF images with Wasmtime-specific
metadata appended after them. This appended metadata was making its way
all the way through to the gdbjit image itself, which meant that while
the end of the ELF file itself was properly aligned, the space after the
Wasmtime metadata was not. This metadata changes in size over
time as Cranelift settings change which explains why #4571 was the
"source" of the regression.
The fix in this commit is to discard the extra Wasmtime metadata when
creating an `MmapVec` representing the underlying ELF image. This is
already supported with `MmapVec::drain` so it was relatively easy to
insert that. This means that the gdbjit image starts with just the ELF
file itself which is always aligned at the end, which gets the test
passing with/without the `unaligned` feature in the `object` crate.
In #4671, the meta-differential fuzz target was finding errors when
running certain Wasm modules (specifically `shr_s` in that case).
@conrad-watt diagnosed the issue as a missing reversal in the operands
passed to the spec interpreter. This change fixes #4671 and adds an
additional unit test to keep it fixed.
* cranelift: Use JIT in runtests
Using `cranelift-jit` in run tests allows us to perform relocations and
libcalls. This is important since some instruction lowerings fall back
to libcalls when an extension is missing, or when a lowering is too
complicated to implement manually.
This is also a first step to being able to test `call`'s between functions
in the runtest suite. It should also make it easier to eventually test
TLS relocations, symbol resolution, and ABIs.
Another benefit of this is that we also get to test the JIT more, since
it now runs the runtests, and gets some fuzzing via `fuzzgen` (which
uses the `SingleFunctionCompiler`).
This change causes regressions in terms of runtime for the filetests.
I haven't done any serious benchmarking but what I've been seeing is
that it now takes around 3 seconds to run the testsuite while it
previously took around 2 seconds.
* Add FMA tests for X86
Fixes this compiler warning:
```
warning: unused return value of `Box::<T>::from_raw` that must be used
--> crates/bench-api/src/lib.rs:351:9
|
351 | Box::from_raw(state as *mut BenchState);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
```
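A minimal sketch of the fix, assuming the intent at this call site is
simply to run the destructor of the boxed state:
```rust
// Within the surrounding unsafe context: reconstitute the Box and drop
// it explicitly; `drop` consumes the `must_use` return value of
// `Box::from_raw`, silencing the warning.
drop(Box::from_raw(state as *mut BenchState));
```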
This commit is an effort to reduce the amount of complexity around
managing the size/alignment calculations of types in the canonical ABI.
Previously the logic for the size/alignment of a type was spread out
across a number of locations. While each individual calculation is not
really the most complicated thing in the world, having the duplication in
so many places was constantly worrying me.
I've opted in this commit to centralize all of this within the runtime
at least, and now there's only one "duplicate" of this information in
the fuzzing infrastructure which is to some degree less important to
deduplicate. This commit introduces a new `CanonicalAbiInfo` type to
house all abi size/align information for both memory32 and memory64.
This new type is then used pervasively throughout fused adapter
compilation, dynamic `Val` management, and typed functions. This type
was also able to reduce the complexity of the macro-generated code
meaning that even `wasmtime-component-macro` is performing less math
than it was before.
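A rough sketch of the shape such a type could take; field and method
names here are illustrative, not the exact Wasmtime definitions:
```rust
/// Canonical-ABI layout of a component-model type, for both 32-bit and
/// 64-bit linear memories, computed once and stored centrally.
pub struct CanonicalAbiInfo {
    pub size32: u32,
    pub align32: u32,
    pub size64: u32,
    pub align64: u32,
}

impl CanonicalAbiInfo {
    /// A record's layout: each field at its aligned offset, with the
    /// total size rounded up to the record's overall alignment.
    pub fn record<'a>(fields: impl Iterator<Item = &'a CanonicalAbiInfo>) -> Self {
        let (mut s32, mut a32, mut s64, mut a64) = (0, 1, 0, 1);
        for f in fields {
            s32 = align_to(s32, f.align32) + f.size32;
            a32 = a32.max(f.align32);
            s64 = align_to(s64, f.align64) + f.size64;
            a64 = a64.max(f.align64);
        }
        CanonicalAbiInfo {
            size32: align_to(s32, a32),
            align32: a32,
            size64: align_to(s64, a64),
            align64: a64,
        }
    }
}

fn align_to(n: u32, align: u32) -> u32 {
    (n + align - 1) / align * align
}
```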
One other major feature of this commit is that this ABI information is
now saved within a `ComponentTypes` structure. This avoids recursive
querying of size/align information frequently and instead effectively
caching it. This was a worry I had for the fused adapter compiler which
frequently sought out size/align information and would recursively
descend each type tree each time. The `fact-valid-module` fuzzer is now
nearly 10x faster in terms of iterations/s which I suspect is due to
this caching.
Marking the entry block as cold is a nonsensical constraint: the entry
block must come first in the
compiled code's layout, so it cannot also be sunk to the end of the
function.
This PR modifies the CLIF verifier to disallow this situation entirely.
It also adds an assert during final block-order computation to catch the
problem (and avoid a silent miscompile) even if the verifier is
disabled.
Fixes #4656.
In #4640 a feature was added to adapter modules whereby whenever
translation goes through memory it is routed through a helper function
as opposed to being inlined directly. The generation of the
helper function happened recursively at compile time, however, and sure
enough oss-fuzz has found an input which blows the host stack at compile
time.
This commit removes the compile-time recursion from the adapter compiler
when translating these helper functions by deferring the translation to
a worklist which is processed after the original function is translated.
This makes the stack-based recursion instead heap-based, removing the
stack overflow.
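A minimal sketch of the recursion-to-worklist shape (hypothetical names,
not the actual adapter-compiler API):
```rust
/// Stand-in for a memory-translation helper awaiting code generation.
struct Helper;

struct Compiler {
    /// Helpers discovered during translation are queued here instead of
    /// being generated recursively on the host stack.
    worklist: Vec<Helper>,
}

impl Compiler {
    fn translate_function(&mut self) {
        // ...while translating, a nested type that needs a helper just
        // enqueues it rather than recursing:
        self.worklist.push(Helper);
    }

    fn compile(&mut self) {
        self.translate_function();
        // Drain the queue after the original function is done; helpers
        // discovered while draining are picked up too, so the depth of
        // the type tree costs heap space, not host stack.
        while let Some(_helper) = self.worklist.pop() {
            // ...translate the helper body, possibly pushing more...
        }
    }
}
```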
The `MachBuffer` applies a set of peephole-optimization rules to do
branch threading, leverage fallthrough paths, eliminate empty blocks,
and flip conditional branches where needed to make branches more
efficient starting from naive always-branch-at-end-of-BB code.
This works by applying the rules at every label-bind, which is
equivalent to applying them at the end of every basic block, where
branches are usually inserted.
However, this misses one case: the end of the buffer! Currently we
don't optimize any redundant or foldable branches at the very end of
the machine code.
This usually doesn't matter when the function ends in an epilogue with
`ret` as the last instruction. However, when cold blocks exist, it can
actually matter.
Thanks to @mchesser for pointing out this issue in #4636.
The spec was expected to change to not bounds-check 0-byte lists/strings
but has since been updated to match `memory.copy` which does indeed
check the pointer for 0-byte copies.
This commit implements a scheme I've been meaning to work on in the
adapter compiler where, instead of always generating a fresh local for
every operation, locals may now be reused. Generated locals are
explicitly freed when their lexical scope has ended, allowing reuse in
the translation of later types in the adapter.
This also implements a new scheme for initializing locals where
previously a local could simply be generated, but now the local must be
fused with its initializer where a `local.{tee,set}` instruction is
always generated. This should help prevent a bug I ran into with strings
where one local was never initialized, which meant that when it was used
during a loop it could have had a stale value from a previous iteration.
Modeling this in Rust at compile time isn't possible, unfortunately, so
I opted for the next best thing: runtime panics. If a local is
accidentally not released back to the pool of free locals then it will
panic. The fuzzer that simply generates and validates adapter modules
should be good at exercising this; it already weeded out a few forgotten
frees.
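A minimal sketch of the scheme (hypothetical names): locals must be
explicitly released back to the pool, and a forgotten release panics
during adapter compilation rather than miscompiling:
```rust
struct LocalPool {
    free: Vec<u32>, // indices of locals available for reuse
    next: u32,      // next fresh local index to mint
}

struct Local {
    idx: u32,
    released: bool,
}

impl LocalPool {
    fn new() -> Self {
        LocalPool { free: Vec::new(), next: 0 }
    }

    /// Reuse a previously freed local if one exists, else mint a new one.
    fn alloc(&mut self) -> Local {
        let idx = self.free.pop().unwrap_or_else(|| {
            let i = self.next;
            self.next += 1;
            i
        });
        Local { idx, released: false }
    }

    /// Explicitly end a local's lexical scope, allowing reuse.
    fn release(&mut self, mut local: Local) {
        local.released = true;
        self.free.push(local.idx);
    }
}

impl Drop for Local {
    fn drop(&mut self) {
        // The runtime panic described above: a leaked local is a bug in
        // the adapter compiler, so fail loudly at compile time.
        assert!(self.released, "local {} was never released", self.idx);
    }
}
```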
To determine whether we need to insert a "veneer island" of
branch-range extension veneers, we need to know ahead of emitting a
basic block the worst-case size of that block. This is because veneers
only go between blocks (we could plop one in the middle of a block but
that would require another jump around it and would probably pessimize
some code significantly), and we can't back up once we emit a block.
To compute this worst-case size, we take the number of instructions
and multiply by the largest possible size of one pseudoinst (e.g., on
aarch64, this is 44 bytes; it explicitly excludes the `EmitIsland`
pseudo-op which is used before large jumptable inline offset tables
are emitted). This is conservative, but it always works, and veneers
are somewhat rare in practice (requiring a function body >1MiB on
aarch64, for example).
Unfortunately this logic didn't account for the spill/reload/move
instructions inserted by the register allocator, and in one example in
issue #4629, a block had only one instruction but 482
edge-moves (!). This came at just the wrong time as we were
approaching the 1MiB limit on aarch64.
This PR fixes that issue, and fixes the logic to actually look at the
correct next block (next in `final_order` rather than numerically
next), as a bonus correctness fix.
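In rough terms the fixed computation looks like this (illustrative
names; the 44-byte aarch64 bound is the one stated above):
```rust
/// Largest possible encoding of a single pseudoinst on aarch64, per the
/// note above (excluding the `EmitIsland` pseudo-op).
const MAX_INST_SIZE: usize = 44;

/// Worst-case emitted size of a block. The fix is to count not just the
/// block's VCode instructions but also the spill/reload/edge-move
/// instructions the register allocator inserts, which were previously
/// missed (482 edge-moves in the #4629 example).
fn worst_case_block_size(vcode_insts: usize, regalloc_edits: usize) -> usize {
    (vcode_insts + regalloc_edits) * MAX_INST_SIZE
}
```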
Fixes #4629.
This method configures whether native unwind information (e.g. `.eh_frame` on
Linux) is generated or not.
This helps integrate with third-party stack capturing tools, such as the system
unwinder or the `backtrace` crate. It does not affect whether Wasmtime
can capture stack traces of the Wasm code it is running.
Unwind info is always enabled on Windows, since the Windows ABI requires it.
This configuration option defaults to true.
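A minimal usage sketch, assuming the method in question is
`Config::native_unwind_info`:
```rust
use wasmtime::{Config, Engine};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    // Skip generating `.eh_frame` (etc.) when no system unwinder or
    // `backtrace`-style tool needs it; Wasm backtraces still work.
    config.native_unwind_info(false);
    let _engine = Engine::new(&config)?;
    Ok(())
}
```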
Additionally, we deprecate `Config::wasm_backtrace` since we can always cheaply
capture stack traces ever since
https://github.com/bytecodealliance/wasmtime/pull/4431.
Fixes https://github.com/bytecodealliance/wasmtime/issues/4554
Ported the existing implementations of the following opcodes on AArch64
to ISLE:
- `AvgRound`
- Also introduced support for `i64x2` vectors, as per the docs.
- `SqmulRoundSat`
Copyright (c) 2022 Arm Limited
Moves the following opcodes for AArch64 into a separate `VecALUModOp` enum,
which is emitted via the `VecRRRMod` instruction. This separates vector ALU
instructions which modify a register from instructions which write to a new register:
- `Bsl`
- `Fmla`
Addresses [a discussion](https://github.com/bytecodealliance/wasmtime/pull/4608#discussion_r937975581) in #4608.
Copyright (c) 2022 Arm Limited
* Improve the `component_api` fuzzer on a few dimensions
* Update the generated component to use an adapter module. This involves
two core wasm instances communicating with each other to test that
data flows through everything correctly. The intention here is to fuzz
the fused adapter compiler. String encoding options have been plumbed
here to exercise differences in string encodings.
* Use `Cow<'static, ...>` and `static` declarations for each static test
case to try to cut down on rustc codegen time.
* Add `Copy` to derivation of fuzzed enums to make `derive(Clone)`
smaller.
* Use `Store<Box<dyn Any>>` to try to cut down on codegen by
monomorphizing fewer `Store<T>` implementations.
* Add debug logging to print out what's flowing in and what's flowing
out for debugging failures.
* Improve `Debug` representation of dynamic value types to more closely
match their Rust counterparts.
* Fix a variant issue with adapter trampolines
Previously the offset of the payload was calculated as the discriminant
aligned up to the alignment of a singular case, but instead this needs
to be aligned up to the alignment of all cases to ensure all cases start
at the same location.
* Fix a copy/paste error when copying masked integers
A 32-bit load was actually doing a 16-bit load by accident since it was
copied from the 16-bit load-and-mask case.
* Fix f32/i64 conversions in adapter modules
The adapter previously erroneously converted the f32 to f64 and then to
i64, where instead it should go from f32 to i32 to i64.
* Fix zero-sized flags in adapter modules
This commit corrects the size calculation for zero-sized flags in
adapter modules.
cc #4592
* Fix a variant size calculation bug in adapters
This fixes the same issue found with variants during normal host-side
fuzzing earlier, where the size of a variant needs to be the sum of the
discriminant and the maximum case size, aligned up.
* Implement memory growth in libc bump realloc
Some fuzz-generated test cases are copying lists large enough to exceed
one page of memory so bake in a `memory.grow` to the bump allocator as
well.
* Avoid adapters of exponential size
This commit is an attempt to avoid adapters being exponentially sized
with respect to the type hierarchy of the input. Previously all
adaptation was done inline within each adapter which meant that if
something was structured as `tuple<T, T, T, T, ...>` the translation of
`T` would be inlined N times. For very deeply nested types this can
quickly create an exponentially sized adapter with types of the form:
```
(type $t0 (list u8))
(type $t1 (tuple $t0 $t0))
(type $t2 (tuple $t1 $t1))
(type $t3 (tuple $t2 $t2))
;; ...
```
where the translation of `t4` has 8 different copies of translating
`t0`.
This commit changes the translation of types through memory to almost
always go through a helper function. The hope here is that it doesn't
lose too much performance because types already reside in memory.
This can still lead to exponentially sized adapter modules to a lesser
degree: if the translation all happens on the "stack", e.g. via
`variant`s and their flat representation, then many copies of one
translation could still be made. For now this commit at least gets the
problem under control for fuzzing, so that fuzzing no longer trivially
finds type hierarchies that take over a minute to codegen the adapter
module.
One of the main tricky parts of this implementation is that when a
function is generated the index that it will be placed at in the final
module is not known at that time. To solve this the encoded form of the
`Call` instruction is saved in a relocation-style format where the
`Call` isn't encoded but instead saved into a different area for
encoding later. When the entire adapter module is encoded to wasm these
pseudo-`Call` instructions are encoded as real instructions at that
time.
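A rough sketch of that relocation-style scheme, with hypothetical names
and a simplified byte-level encoding:
```rust
/// Identifier for a helper function whose final index isn't known yet.
struct HelperId(u32);

/// A function body whose `call` instructions are left unencoded until
/// every function's final index has been assigned.
struct Body {
    /// Already-encoded instruction bytes, split at each pending call.
    chunks: Vec<Vec<u8>>,
    /// calls[i] is emitted between chunks[i] and chunks[i + 1].
    calls: Vec<HelperId>,
}

impl Body {
    /// Final encoding pass, run once the whole adapter module is being
    /// written out and every function index is known.
    fn encode(&self, index_of: impl Fn(&HelperId) -> u32) -> Vec<u8> {
        let mut out = Vec::new();
        for (i, chunk) in self.chunks.iter().enumerate() {
            out.extend_from_slice(chunk);
            if let Some(id) = self.calls.get(i) {
                out.push(0x10); // the core wasm `call` opcode
                leb128(&mut out, index_of(id)); // now-known callee index
            }
        }
        out
    }
}

/// Unsigned LEB128, as wasm uses for immediates.
fn leb128(out: &mut Vec<u8>, mut v: u32) {
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80);
    }
}
```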
* Fix some memory64 issues with string encodings
Just before #4623 I had introduced a few mistakes related to 64-bit
memories and mixing 32/64-bit memories.
* Actually insert into the `translate_mem_funcs` map
This... was the whole point of having the map!
* Assert memory growth succeeds in bump allocator
* Implement strings in adapter modules
This commit is a hefty addition to Wasmtime's support for the component
model. This implements the final remaining type (in the current type
hierarchy) unimplemented in adapter module trampolines: strings. Strings
are the most complicated type to implement in adapter trampolines
because they are highly structured chunks of data in memory (according
to specific encodings). Additionally each lift/lower operation can
choose its own encoding for strings, meaning that Wasmtime, the host,
may have to convert between any pair of string encodings.
The `CanonicalABI.md` in the component-model repo in general specifies
all the fiddly bits of string encoding so there's not a ton of wiggle
room for Wasmtime to get creative. This PR largely "just" implements
that. The high-level architecture of this implementation is:
* Fused adapters are first identified to determine src/dst string
encodings. This statically fixes what transcoding operation is being
performed.
* The generated adapter will be responsible for managing calls to
`realloc` and performing bounds checks. The adapter itself does not
perform memory copies or validation of string contents, however.
Instead each transcoding operation is modeled as an imported function
into the adapter module. This means that the adapter module
dynamically, during compile time, determines what string transcoders
are needed. Note that an imported transcoder is parameterized not only
over the transcoding operation but also over which memory is the source
and which is the destination (see the sketch after this list).
* The imported core wasm functions are modeled as a new
`CoreDef::Transcoder` structure. These transcoders end up being small
Cranelift-compiled trampolines. The Cranelift-compiled trampoline will
load the actual base pointer of memory and add it to the relative
pointers passed as function arguments. This trampoline then calls a
transcoder "libcall" which enters Rust-defined functions for actual
transcoding operations.
* Each possible transcoding operation is implemented in Rust with a
unique name and a unique signature depending on the needs of the
transcoder. I've tried to document inline what each transcoder does.
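As a rough sketch of that parameterization (hypothetical names; the real
enums carry more cases and detail):
```rust
/// Which conversion an imported transcoder performs, fixed statically
/// once the lift/lower encodings of an adapter are known.
enum Transcode {
    Copy,          // same encoding on both sides
    Utf8ToUtf16,
    Utf16ToUtf8,
    Latin1ToUtf16,
    Utf8ToLatin1,
    // ...one case per src/dst encoding pair...
}

/// Index of a linear memory within the component (illustrative).
struct MemoryIndex(u32);

/// The key for a transcoder import: the operation plus which memory is
/// the source and which the destination (and whether each is 64-bit).
struct Transcoder {
    op: Transcode,
    from: MemoryIndex,
    from64: bool,
    to: MemoryIndex,
    to64: bool,
}
```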
This means that the `Module::translate_string` in adapter modules is by
far the largest translation method. The main reason for this is due to
the management around calling the imported transcoder functions in the
face of validating string pointer/lengths and performing the dance of
`realloc`-vs-transcode at the right time. I've tried to ensure that each
individual case in transcoding is documented well enough to understand
what's going on as well.
Additionally in this PR is a full implementation in the host for the
`latin1+utf16` encoding which means that both lifting and lowering host
strings now works with this encoding.
Currently the implementation of each transcoder function is likely far
from optimal. Where possible I've leaned on the standard library itself
and for latin1-related things I'm leaning on the `encoding_rs` crate. I
initially tried to implement everything with `encoding_rs` but was
unable to uniformly do so easily. For now I settled on trying to get a
known-correct (even in the face of endianness) implementation for all of
these transcoders. If and when performance becomes an issue it should be
possible to implement more optimized versions of each of these
transcoding operations.
Testing this commit has been somewhat difficult and my general plan,
like with the `(list T)` type, is to rely heavily on fuzzing to cover
the various cases here. In this PR though I've added a simple test that
pushes some statically known strings through all the pairs of encodings
between source and destination. I've attempted to pick "interesting"
strings that one way or another stress the various paths in each
transcoding operation to ideally get full branch coverage there.
Additionally a suite of "negative" tests have also been added to ensure
that validity of encoding is actually checked.
* Fix a temporarily commented out case
* Fix wasmtime-runtime tests
* Update deny.toml configuration
* Add `BSD-3-Clause` for the `encoding_rs` crate
* Remove some unused licenses
* Add an exemption for `encoding_rs` for now
* Split up the `translate_string` method
Move out all the closures and package up captured state into smaller
lists of arguments.
* Test out-of-bounds for zero-length strings
We assert after emitting each instruction that its size was less than
the "worst-case size", which is used to determine when we need to
proactively emit an island so pending branch fixups don't go out of
bounds. However, the `EmitIsland` pseudo-inst itself can cause an
arbitrarily large island to be emitted; this should not have to fit
within the worst-case size (because island size is explicitly accounted
for by the threshold computation). This PR fixes the assert accordingly.
Fixes #4626.
* Cranelift: Don't print "skipped TEST can't run aarch64" on x64, etc
It's way too noisy. Move it to the logs.
* Cranelift: Enable Cranelift trace logs in `clif-util` by default
* cranelift-filetest: use `log::warn!` for warnings
Instead of `println!`
* rustfmt
* Convert `fma`, `valltrue` & `vanytrue` to ISLE (AArch64)
Ported the existing implementations of the following opcodes to ISLE on
AArch64:
- `fma`
- Introduced missing support for `fma` on vector values, as per the
docs.
- `valltrue`
- `vanytrue`
Also fixed `fcmp` on scalar values in the interpreter, and enabled
interpreter tests in `simd-fma.clif`.
This introduces the `FMLA` machine instruction.
Copyright (c) 2022 Arm Limited
* Add comments for `Fmla` and `Bsl`
Copyright (c) 2022 Arm Limited
And the `ABICallerImpl::ir_sig` field that was used to implement that
method. This removes 56 bytes from the size of `ABICallerImpl` and gives
us compilation speedups of about 7% on all benchmarks.
```
compilation :: nanoseconds :: benchmarks/pulldown-cmark/benchmark.wasm
Δ = 8205119.48 ± 4069474.25 (confidence = 99%)
main.so is 0.91x to 0.97x faster than feature.so!
feature.so is 1.03x to 1.10x faster than main.so!
[117729152 132258110.36 167484097] main.so
[107486500 124052990.88 138008797] feature.so
compilation :: nanoseconds :: benchmarks/bz2/benchmark.wasm
Δ = 4645258.32 ± 1981104.59 (confidence = 99%)
main.so is 0.92x to 0.97x faster than feature.so!
feature.so is 1.03x to 1.08x faster than main.so!
[76562171 85504479.28 93116863] main.so
[75180650 80859220.96 90591978] feature.so
compilation :: nanoseconds :: benchmarks/spidermonkey/benchmark.wasm
Δ = 150575617.54 ± 65021102.57 (confidence = 99%)
main.so is 0.92x to 0.97x faster than feature.so!
feature.so is 1.03x to 1.08x faster than main.so!
[2573089039 2843117485.10 3175982602] main.so
[2559784932 2692541867.56 3143529008] feature.so
```
When an adapter module depends on a particular core wasm instance this
means that it actually depends on not only that instance but all prior
core wasm instances as well. This is because core wasm instances must be
instantiated in the specified order within a component and that cannot
change depending on the dataflow between adapters. This commit fixes a
possible panic from linearizing the component dfg where an adapter
module tried to depend on an instance that hadn't been instantiated yet
because the ordering dependency between core wasm instances hadn't been
modeled.
* components: ignore export aliases to types in translation.
Currently, translation ignores type exports from components by skipping
over them before adding them to the exports map.
If a component instantiates an inner component and aliases a type export of
that instance, it will cause wasmtime to panic with a failure to find the
export in the exports map.
The fix is to add a representation for exported types to the map that is simply
ignored when encountered. This also makes it easier to track places where we
would have to support type exports in translation in the future.
* Keep type information for type exports.
This commit keeps the type information for type exports so that types can be
properly aliased from an instance export, thereby adjusting the type index
space accordingly.
* Add a simple test case for type exports for the component model.
* Add a dataflow-based representation of components
This commit updates the inlining phase of compiling a component to
creating a dataflow-based representation of a component instead of
creating a final `Component` with a linear list of initializers. This
dataflow graph is then linearized in a final step to create the actual
final `Component`.
The motivation for this commit stems primarily from my work implementing
strings in fused adapters. In doing this my plan is to defer most
low-level transcoding to the host itself rather than implementing that
in the core wasm adapter modules. This means that small
cranelift-generated trampolines will be used for adapter modules to call
which then call "transcoding libcalls". The cranelift-generated
trampolines will get raw pointers into linear memory and pass those to
the libcall, something core wasm doesn't have access to when passing
arguments to an import.
Implementing this with the previous representation of a `Component` was
becoming too tricky to bear. The initialization of a transcoder needed
to happen at just the right time: before the adapter module which needed
it was instantiated but after the linear memories referenced had been
extracted into the `VMComponentContext`. The difficulty here is further
compounded by the current adapter module injection pass already being
quite complicated. Adapter modules are already renumbering the index
space of runtime instances and shuffling items around in the
`GlobalInitializer` list. Perhaps the worst part of this was that
memories could already be referenced by host function imports or exports
to the host, and if adapters referenced the same memory it shouldn't be
referenced twice in the component. This meant that `ExtractMemory`
initializers ideally needed to be shuffled around in the initializer
list to happen as early as possible instead of wherever they happened to
show up during translation.
Overall I did my best to implement the transcoders but everything always
came up short. I have decided to throw my hands up in the air and try a
completely different approach to this, namely the dataflow-based
representation in this commit. This makes it much easier to edit the
component after initial translation for injection of adapters, injection
of transcoders, adding dependencies on possibly-already-existing items,
etc. The adapter module partitioning pass in this commit was greatly
simplified to something which I believe is functionally equivalent but
is probably an order of magnitude easier to understand.
The biggest downside of this representation I believe is having a
duplicate representation of a component. The `component::info` was
largely duplicated into the `component::dfg` module in this commit.
Personally though I think this is a more appropriate tradeoff than
before because it's very easy to reason about "convert representation A
to B" code whereas it was very difficult to reason about shuffling
around `GlobalInitializer` items in optimal fashions. This may also have
a cost at compile-time in terms of shuffling data around, but my hope is
that we have lots of other low-hanging fruit to optimize if it ever
comes to that which allows keeping this easier-to-understand
representation.
Finally, to reiterate, the final representation of components is not
changed by this PR. To the runtime internals everything is still the
same.
* Fix compile of factc
This adds full i128 support to the s390x target, including new filetests
and enabling the existing i128 runtest on s390x.
The ABI requires that i128 is passed and returned via implicit pointer,
but the front end still generates direct i128 types in calls. This means
we have to implement ABI support to implicitly convert i128 types to
pointers when passing arguments.
To do so, we add a new variant ABIArg::ImplicitArg. This acts like
StructArg, except that the value type is the actual target type,
not a pointer type. The required conversions have to be inserted
in the prologue and at function call sites.
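A rough sketch of the distinction, using the variant name from this
commit and illustrative fields:
```rust
/// Stand-in for Cranelift's `ir::Type`.
struct Type;

/// How a single ABI-level argument is passed (simplified sketch).
enum ABIArg {
    /// Passed directly in one or more register or stack slots.
    Slots,
    /// Pointer to a caller-allocated copy; the IR-level value type is
    /// the pointer itself (explicit struct-argument semantics).
    StructArg { offset: i64 },
    /// New variant: the ABI passes a pointer, but the IR-level value
    /// type is the actual target type (e.g. i128). The prologue and
    /// call sites insert the conversions between pointer and value.
    ImplicitArg { offset: i64, ty: Type },
}
```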
Note that when dereferencing the implicit pointer in the prologue,
we may require a temp register: the pointer may be passed on the
stack so it needs to be loaded first, but the value register may
be in the wrong class for pointer values. In this case, we use
the "stack limit" register, which should be available at this
point in the prologue.
For return values, we use a mechanism similar to the one used for
supporting multiple return values in the Wasmtime ABI. The only
difference is that the hidden pointer to the return buffer must
be the *first*, not last, argument in this case.
(This implements the second half of issue #4565.)
This addresses #4307.
For the static API we generate 100 arbitrary test cases at build time, each of
which includes 0-5 parameter types, a result type, and a WAT fragment containing
an imported function and an exported function. The exported function calls the
imported function, which is implemented by the host. At runtime, the fuzz test
selects a test case at random and feeds it zero or more sets of arbitrary
parameters and results, checking that values which flow host-to-guest and
guest-to-host make the transition unchanged.
The fuzz test for the dynamic API follows a similar pattern, the only difference
being that test cases are generated at runtime.
Signed-off-by: Joel Dice <joel.dice@fermyon.com>
* Add `try_use_var` method to `cranelift-frontend`.
- Unlike `use_var`, this method does not panic if the variable has not been defined
before use
* Add `try_declare_var` and `try_def_var`.
- Also implement `Error` for the error enums.
* Use `write!` macro.
* Add `write!` use I missed.