Previously, the logic was wrong on two counts:
- It used the bits of the entire vector (e.g. i32x4 -> 128) instead of just the lane bits (e.g. i32x4 -> 32).
- It used the type of the first operand before it was bitcast to its correct type. Remember that, by default, vectors are handed around as i8x16 and we must bitcast them to their correct type for Cranelift's verifier; see https://github.com/bytecodealliance/wasmtime/issues/1147 for discussion on this. This fix simply uses the type of the instruction itself, which is equivalent and hopefully less fragile in the face of future changes.
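As a rough illustration (using a stand-in type rather than Cranelift's
actual `Type` API), the value the lowering needs is the width of a single
lane, not of the whole vector:

```rust
// A minimal, self-contained sketch of the lane-bits vs. whole-vector-bits
// distinction described above; `VectorType` is a stand-in, not a real
// Cranelift type.
#[derive(Clone, Copy)]
struct VectorType {
    bits_per_lane: u32,
    lanes: u32,
}

impl VectorType {
    /// Total width of the vector, e.g. 128 for i32x4 (the value previously,
    /// and incorrectly, used).
    fn bits(self) -> u32 {
        self.bits_per_lane * self.lanes
    }
    /// Width of a single lane, e.g. 32 for i32x4 (the value the lowering
    /// actually needs).
    fn lane_bits(self) -> u32 {
        self.bits_per_lane
    }
}

fn main() {
    let i32x4 = VectorType { bits_per_lane: 32, lanes: 4 };
    assert_eq!(i32x4.bits(), 128);
    assert_eq!(i32x4.lane_bits(), 32);
}
```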
* Update `Table::grow`'s return to be the previous size
This brings it in line with `Memory::grow` and the `table.grow`
instruction, which return the previous size of the table, not its current
size.
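A minimal sketch (stand-in types, not the actual wasmtime API) of the
convention this adopts:

```rust
// `Table::grow` returns the size *before* growing, matching `Memory::grow`
// and the wasm `table.grow` instruction. `Table` here is a stand-in type.
struct Table {
    elements: Vec<Option<u32>>, // stand-in for funcref entries
}

impl Table {
    /// Grows the table by `delta` entries and returns the previous size.
    fn grow(&mut self, delta: u32) -> u32 {
        let previous = self.elements.len() as u32;
        self.elements
            .extend(std::iter::repeat(None).take(delta as usize));
        previous
    }
}

fn main() {
    let mut table = Table { elements: vec![None; 2] };
    assert_eq!(table.grow(3), 2); // previous size, not the new size (5)
    assert_eq!(table.elements.len(), 5);
}
```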
* Comment successful return
Previously, every call was lowered on AArch64 to a `call` instruction, which
takes a signed 26-bit PC-relative offset. Including the 2-bit left shift, this
gives a range of +/- 128 MB. Longer-distance offsets would cause an impossible
relocation record to be emitted (or rather, a record that a more sophisticated
linker would fix up by inserting a shim/veneer).
This commit adds a notion of "relocation distance" in the MachInst backends,
and provides this information for every call target and symbol reference. The
intent is that backends on architectures like AArch64, where there are different
offset sizes / addressing strategies to choose from, can either emit a regular
call or a load-64-bit-constant / call-indirect sequence, as necessary. This
avoids the need to implement complex linking behavior.
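A rough sketch of the idea, with assumed names (the real backend derives
the distance from the `colocated` bit described below):

```rust
// Sketch only: how a backend might use a relocation-distance hint when
// lowering a call on AArch64. Names are approximations, not the exact
// Cranelift types.
#[derive(Clone, Copy)]
enum RelocDistance {
    /// Known to be within +/- 128 MB, the reach of a PC-relative `bl`
    /// (signed 26-bit offset, shifted left by 2).
    Near,
    /// May be anywhere in the 64-bit address space.
    Far,
}

enum CallLowering {
    /// A single direct `bl target`.
    DirectCall,
    /// Load the 64-bit target address into a register, then `blr`.
    LoadConstThenCallIndirect,
}

fn lower_call(distance: RelocDistance) -> CallLowering {
    match distance {
        RelocDistance::Near => CallLowering::DirectCall,
        RelocDistance::Far => CallLowering::LoadConstThenCallIndirect,
    }
}

fn main() {
    // A non-colocated symbol (e.g. a libcall with `use_colocated_libcalls`
    // disabled) would be reported as `Far` and lowered indirectly.
    let _ = lower_call(RelocDistance::Far);
}
```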
The MachInst driver code provides this information based on the "colocated" bit
in the CLIF symbol references, which appears to have been designed for this
purpose, or at least a similar one. Combined with the `use_colocated_libcalls`
setting, this allows client code to ensure that library calls can link to
library code at any location in the address space.
Separately, the `simplejit` example did not handle `Arm64Call`; rather than doing
so, it appears all that is necessary to get its tests to pass is to set the
`use_colocated_libcalls` flag to false, to make use of the above change. This
fixes the `libcall_function` unit-test in this crate.
* Improve output display of RunCommand
The previous use of Debug for displaying `print` and `run` results was less than clear.
* Avoid checking the types of vectors during trampoline construction
Because DataValue only understands `V128` vectors, we avoid type-checking vector values when constructing the trampoline arguments.
* Improve the documentation of the filetest `run` command
Adds an up-to-date example of how to use the `run` and `print` directives and includes an actual use of the new directives in a SIMD arithmetic filetest.
Previously we initialized trap handling (signals, etc.) once per instance,
but that's a bit too granular since we only need to do this as one-time
per-program initialization. This moves the initialization to
`Store` instead which means that we'll call this at least once per
thread, which some platforms may need (none currently do, they all only
need per-program initialization, but Fuchsia will need per-thread
initialization).
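A minimal sketch, assuming a hypothetical `platform_init_traps()` hook, of
how per-`Store` calls collapse into one-time process initialization:

```rust
use std::sync::Once;

static INIT: Once = Once::new();

/// Hypothetical hook that installs signal handlers and other platform
/// trap-handling state (details elided).
fn platform_init_traps() {}

struct Store;

impl Store {
    fn new() -> Store {
        // Runs at least once per thread that creates a `Store`; the `Once`
        // collapses that to once per program. A platform that needs
        // per-thread setup could add a thread-local guard here instead.
        INIT.call_once(platform_init_traps);
        Store
    }
}

fn main() {
    let _a = Store::new();
    let _b = Store::new(); // the second call performs no initialization
}
```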
Looks like everything is in general passing now, so it's probably time to
close #1521; all other remaining failing tests have been classified under
new, more focused issues.
Closes #1521
Previously, the SourceLoc information transferred in `VCode` only
included PC-spans for non-default SourceLocs. I realized that the
invariant we're supposed to keep here is that every PC is covered; if there
is no source information, we just use `SourceLoc::default()`.
This was spurred by @bjorn3's comment in #1575 (thanks!).
This resolves the work started in https://github.com/bytecodealliance/cranelift/pull/1231 and https://github.com/bytecodealliance/wasmtime/pull/1436. Cranelift filetests currently have the ability to run CLIF functions with a signature like `() -> b*` and check that the result is true under the `test run` directive. This PR adds the ability to call functions with arbitrary arguments and non-boolean returns and either print the result or check against a list of expected results:
- `run` commands look like `; run: %add(2, 2) == 4` or `; run: %add(2, 2) != 5` and verify that the executed CLIF function returns the expected value
- `print` commands look like `; print: %add(2, 2)` and print the result of the function to stdout
To make this work, this PR compiles a single Cranelift `Function` into a `CompiledFunction` using a `SingleFunctionCompiler`. Because we will not know the signature of the function until runtime, we use a `Trampoline` to place the values in the appropriate location for the calling convention; this should look a lot like what @alexcrichton is doing with `VMTrampoline` in wasmtime (see 3b7cb6ee64/crates/api/src/func.rs (L510-L526), 3b7cb6ee64/crates/jit/src/compiler.rs (L260)). To avoid re-compiling `Trampoline`s for the same function signatures, `Trampoline`s are cached in the `SingleFunctionCompiler`.
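A simplified sketch of the trampoline cache (the types below are stand-ins
for the real Cranelift ones):

```rust
// Trampolines are cached per signature so that functions sharing a
// signature reuse the same compiled trampoline.
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash)]
struct Signature(String); // stand-in for a real `ir::Signature`

struct Trampoline; // stand-in for compiled trampoline code

struct SingleFunctionCompiler {
    trampolines: HashMap<Signature, Trampoline>,
}

impl SingleFunctionCompiler {
    fn new() -> Self {
        Self { trampolines: HashMap::new() }
    }

    /// Returns a trampoline for `sig`, compiling one only on first use.
    fn trampoline_for(&mut self, sig: &Signature) -> &Trampoline {
        self.trampolines
            .entry(sig.clone())
            .or_insert_with(|| compile_trampoline(sig))
    }
}

fn compile_trampoline(_sig: &Signature) -> Trampoline {
    // The real code emits a function that unpacks an array of values into
    // the registers/stack slots required by the calling convention.
    Trampoline
}

fn main() {
    let mut compiler = SingleFunctionCompiler::new();
    let sig = Signature("(i32, i32) -> i32".to_string());
    let _ = compiler.trampoline_for(&sig); // compiles a trampoline
    let _ = compiler.trampoline_for(&sig); // reuses the cached one
    assert_eq!(compiler.trampolines.len(), 1);
}
```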
There was a bug in how value labels were resolved, which caused some DWARF expressions not to be transformed, e.g. those referring to values held in registers.
* Implements FIXME in expression.rs
* Move TargetIsa from CompiledExpression structure
* Fix expression format for GDB
* Add tests for parsing
* Proper logic in ValueLabelRangesBuilder::process_label
* Tests for ValueLabelRangesBuilder
* Refactor build_with_locals to return Iterator instead of Vec<_>
* Misc comments and magic numbers
This splits off lower.rs into two files: lower.rs keeps all the utility
functions, while lower_inst.rs contains the (gigantic!) function
lowering a single Cranelift instruction into vcode.
This is done to satisfy a check on the maximum file size when vendoring
Rust source code into the mozilla-central repository.
* Expose memory-related options in `Config`
This commit was initially motivated by looking more into #1501, but it
ended up ballooning a bit after finding a few issues. The high-level
items in this commit are:
* New configuration options via `wasmtime::Config` are exposed to
configure the tunable limits of how memories are allocated and such.
* The `MemoryCreator` trait has been updated to accurately reflect the
required allocation characteristics that JIT code expects.
* A bug has been fixed in the cranelift wasm code generation where, if no
guard page was present, bounds checks weren't performed accurately.
The new `Config` methods allow tuning the memory allocation
characteristics of wasmtime. Currently 64-bit platforms will reserve 6GB
chunks of memory for each linear memory, but by tweaking various config
options you can change how this is allocated, perhaps at the cost of
slower JIT code since it needs more bounds checks. The methods are
intended to be pretty thoroughly documented as to the effect they have
on the JIT code and what values you may wish to select. These new
methods have been added to the spectest fuzzer to ensure that various
configuration values for these methods don't affect correctness.
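A hedged example of how these knobs might be used; the method names below
are assumptions and the values are purely illustrative:

```rust
use wasmtime::Config;

fn main() {
    let mut config = Config::new();
    // Shrink the up-front reservation for each linear memory and rely on
    // explicit bounds checks rather than a large guard region; this trades
    // JIT code speed for a much smaller virtual-memory footprint.
    config.static_memory_maximum_size(1 << 20); // 1 MiB
    config.static_memory_guard_size(64 << 10);  // 64 KiB guard
    config.dynamic_memory_guard_size(0);        // no guard for dynamic memories
}
```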
The `MemoryCreator` trait previously only allocated memories with a
`MemoryType`, but this didn't actually reflect the guarantees that JIT
code expected. JIT code is generated with an assumption about the
minimum size of the guard region, as well as whether memory is static or
dynamic (whether the base pointer can be relocated). These properties
must be upheld by custom allocation engines for JIT code to perform
correctly, so extra parameters have been added to
`MemoryCreator::new_memory` to reflect this.
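A rough sketch of the shape the trait takes after this change; the
parameter names are assumptions, and the types are stand-ins rather than
the exact wasmtime definitions:

```rust
struct MemoryType; // stand-in for `wasmtime::MemoryType`

trait LinearMemory {
    fn as_ptr(&self) -> *mut u8;
}

trait MemoryCreator {
    /// `reserved_size` is `Some(n)` for static memories, whose base pointer
    /// must never move and which must reserve `n` bytes up front;
    /// `guard_size` is the size of the inaccessible guard region the JIT
    /// code assumes follows the linear memory.
    fn new_memory(
        &self,
        ty: MemoryType,
        reserved_size: Option<u64>,
        guard_size: u64,
    ) -> Result<Box<dyn LinearMemory>, String>;
}
```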
Finally, the fuzzing with `Config` turned up an issue where, if no guard
pages are present, the wasm code wouldn't correctly bounds-check memory
accesses. The issue here was that with a guard page we only need to
bounds-check the first byte of access, but without a guard page we need
to bounds-check the last byte of access. This meant that the code
generation needed to account for the size of the memory operation
(load/store) and use this as the offset-to-check in the no-guard-page
scenario. I've attempted to make the various comments in cranelift a bit
more exhaustive too to hopefully make it a bit clearer for future
readers!
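A simplified model of that check (not the actual generated code):

```rust
/// With a guard page, checking the first byte of the access is enough;
/// without one, the last byte accessed must also be in bounds.
fn access_ok(
    addr: u64,        // effective address of the access
    access_size: u64, // size of the load/store in bytes
    memory_size: u64, // current size of the linear memory in bytes
    has_guard_page: bool,
) -> bool {
    if has_guard_page {
        // Any access that starts in bounds but runs past the end lands in
        // the guard region and faults, so this check suffices.
        addr < memory_size
    } else {
        // No guard page: reject the access up front if its last byte is
        // out of bounds.
        addr.checked_add(access_size)
            .map_or(false, |end| end <= memory_size)
    }
}

fn main() {
    // A 4-byte load starting at the last byte of a 64 KiB memory:
    assert!(access_ok(65_535, 4, 65_536, true)); // caught later by the guard
    assert!(!access_ok(65_535, 4, 65_536, false)); // must be rejected up front
}
```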
Closes #1501
* Review comments
* Update a comment
* Implement trap info in Lightbeam
* Start using wasm-reader instead of wasmparser for parsing operators
* Update to use wasm-reader, some reductions in allocation, support source location tracking for traps, start to support multi-value
The only thing that still needs to be supported for multi-value is stack returns, but we need to make it compatible with Cranelift.
* Error when running out of registers (although we'd hope it should be impossible) instead of panicking
* WIP: Update Lightbeam to work with latest Wasmtime
* WIP: Update Lightbeam to use current wasmtime
* WIP: Migrate to new system for builtin functions
* WIP: Update Lightbeam to work with latest Wasmtime
* Remove multi_mut
* Format
* Fix some bugs around arguments, add debuginfo offset tracking
* Complete integration with new Wasmtime
* Remove commented code
* Fix formatting
* Fix warnings, remove unused dependencies
* Fix `iter` if there are too many elements, fix compilation for latest wasmtime
* Fix float arguments on stack
* Remove wasm-reader and trap info work
* Allocate stack space _before_ passing arguments, fail if we can't zero an xmm reg
* Fix stack argument offset calculation
* Fix stack arguments in Lightbeam
* Re-add WASI because it somehow got removed during rebase
* Workaround for apparent `type_alias_impl_trait`-related bug in rustdoc
* Fix breakages caused by rebase, remove module offset info as it is unrelated to wasmtime integration PR and was broken by rebase
* Add TODO comment explaining `lightbeam::ModuleContext` trait
This commit fixes an issue in Wasmtime where Wasmtime would accidentally
"handle" non-wasm segfaults while executing host imports of wasm
modules. If a host import segfaulted then Wasmtime would recognize that
wasm code is on the stack, so it'd longjmp out of the wasm code. This
papered over real bugs in host code, though, and erroneously classified
segfaults as wasm traps.
The fix here was to add a check to our wasm signal handler for whether the
faulting address falls in JIT code itself (a minimal sketch of that check
follows the list below). Actually threading through all the right
information for that check to happen is a bit tricky, though, so this
involved some refactoring:
* A closure parameter to `catch_traps` was added. This closure is
responsible for classifying addresses as whether or not they fall in
JIT code. Anything returning `false` means that the trap won't get
handled and we'll forward to the next signal handler.
* To avoid passing tons of context all over the place, the start
function is now no longer automatically invoked by `InstanceHandle`.
This avoids the need for passing all sorts of trap-handling contextual
information like the maximum stack size and "is this a jit address"
closure. Instead creators of `InstanceHandle` (like wasmtime) are now
responsible for invoking the start function.
* To avoid excessive use of `transmute` with lifetimes (since the
trap-handler state now has a lifetime), the per-instance custom signal
handler has been replaced with a per-store custom signal handler. I'm
not entirely certain of the purpose of the custom signal handler, though,
so I'd look for feedback on this part.
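Here is the promised sketch of that classification check, using
hypothetical names (the real closure is the one handed to `catch_traps`):

```rust
/// Stand-in for the span of JIT-compiled code in memory.
struct JitCode {
    start: usize,
    len: usize,
}

impl JitCode {
    fn contains(&self, pc: usize) -> bool {
        pc >= self.start && pc < self.start + self.len
    }
}

/// Decide whether a fault belongs to wasm. Returning `false` means the
/// fault is forwarded to the next signal handler instead of being treated
/// as a wasm trap.
fn should_handle_fault<F: Fn(usize) -> bool>(is_jit_pc: &F, faulting_pc: usize) -> bool {
    is_jit_pc(faulting_pc)
}

fn main() {
    let jit = JitCode { start: 0x1000, len: 0x1000 };
    let is_jit_pc = |pc| jit.contains(pc);
    assert!(should_handle_fault(&is_jit_pc, 0x1800)); // wasm trap
    assert!(!should_handle_fault(&is_jit_pc, 0xdead_beef)); // host segfault
}
```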
A new test has been added which ensures that if a host function
segfaults we don't accidentally try to handle it, and instead we
correctly report the segfault.
* Revamp memory management of `InstanceHandle`
This commit fixes a known bug in Wasmtime where an instance could still
be used after it was freed. Unfortunately the fix here is a bit of a
hammer, but it's the best that we can do for now. The changes made in
this commit are:
* A `Store` now stores all `InstanceHandle` objects it ever creates.
This keeps all instances alive unconditionally (along with all host
functions and such) until the `Store` is itself dropped; see the sketch
after this list. Note that a `Store` is reference counted, so essentially
everything has to be dropped before anything else is deallocated; there is
no longer any partial deallocation of instances.
* The `InstanceHandle` type's own reference counting has been removed.
This is largely redundant with what's already happening in `Store`, so
there's no need to manage two reference counts.
* Each `InstanceHandle` no longer tracks its dependencies in terms of
instance handles. This set was actually inaccurate due to dynamic
updates to tables and such, so we needed to revamp it anyway.
* Initialization of an `InstanceHandle` is now deferred until after
`InstanceHandle::new`. This allows storing the `InstanceHandle` before
side-effectful initialization, such as copying element segments or
running the start function, to ensure that regardless of the result of
instantiation the underlying `InstanceHandle` is still available to
persist in storage.
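The sketch mentioned above, using simplified stand-in types rather than
the real runtime ones:

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct InstanceHandle; // stand-in for the real runtime handle

#[derive(Default)]
struct StoreInner {
    instances: RefCell<Vec<InstanceHandle>>,
}

#[derive(Clone, Default)]
struct Store {
    inner: Rc<StoreInner>,
}

impl Store {
    /// Register a freshly created (but not yet initialized) handle so it
    /// stays alive regardless of whether side-effectful initialization
    /// (element-segment copies, the start function) later fails.
    fn register(&self, handle: InstanceHandle) {
        self.inner.instances.borrow_mut().push(handle);
    }
}

fn main() {
    let store = Store::default();
    store.register(InstanceHandle);
    // Every registered instance is dropped together when the last clone of
    // `store` goes away; there is no per-instance reference counting.
}
```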
Overall this should fix a known possible way to safely segfault Wasmtime
today (yay!), and it should also fix some flakiness I've seen on CI.
Turns out one of the spec tests
(bulk-memory-operations/partial-init-table-segment.wast) exercises this
functionality, and we were hitting a sporadic use-after-free, but only on
Windows.
* Shuffle some APIs around
* Comment weak cycle
This PR updates Cranelift to use the new version of regalloc.rs
(bytecodealliance/regalloc.rs#55) that provides dense vreg->rreg maps to
the `map_reg()` function for each instruction, rather than the earlier
hashmap-based approach.
In one test (regex-rs.wasm), this PR results in a 15% reduction in
memory allocations (1245MB -> 1060MB) as measured by DHAT on `clif-util
wasm` runs.
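An illustrative comparison of the two lookup strategies (these are not
regalloc.rs's actual types):

```rust
use std::collections::HashMap;

type VReg = usize; // virtual register number
type RReg = u8;    // real (hardware) register number

/// Old approach: a hashmap lookup per mapped register.
fn map_reg_hashmap(map: &HashMap<VReg, RReg>, vreg: VReg) -> Option<RReg> {
    map.get(&vreg).copied()
}

/// New approach: a dense vector indexed directly by vreg number, so
/// `map_reg()` is a plain indexed load with no hashing.
fn map_reg_dense(map: &[Option<RReg>], vreg: VReg) -> Option<RReg> {
    map.get(vreg).copied().flatten()
}

fn main() {
    let dense = vec![Some(0), None, Some(3)];
    assert_eq!(map_reg_dense(&dense, 2), Some(3));

    let mut sparse = HashMap::new();
    sparse.insert(2, 3);
    assert_eq!(map_reg_hashmap(&sparse, 2), Some(3));
}
```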
This adds a new `wasmtime_config_cache_config_load` C API function to
allow enabling and configuring the cache via the API. This was
originally requested over at bytecodealliance/wasmtime-py#3
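For reference, a hedged sketch of the Rust-side counterpart; the method
name is an assumption about the `wasmtime::Config` API and the path is
illustrative:

```rust
use wasmtime::Config;

fn main() {
    let mut config = Config::new();
    // Load cache settings from a TOML configuration file, enabling the
    // compilation cache for engines built from this `Config`.
    if let Err(e) = config.cache_config_load("wasmtime-cache.toml") {
        eprintln!("failed to load cache configuration: {}", e);
    }
}
```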