* Reword env var hint for DWARF debug info
Try not to declare that more information will indeed be displayed;
instead, suggest that the output may improve if the env var is set,
since the DWARF debug info wasn't parsed.
cc bytecodealliance/wasmtime-go#90
* Fix test assertion
* Fix stack checks of recursive async function calls
Previously the stack pointer limit wasn't adjusted, even in the face of
stack switching. This commit updates the logic around the stack limit
calculation to configure it on all async function calls, even if they're
recursive. Synchronous function calls, however, continue to only
configure the stack limit at the start, not for recursive calls.
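As a rough illustration of that rule (not Wasmtime's actual internals; the type, method, and headroom constant below are made up):

```rust
// Illustrative only; not Wasmtime's actual data structures or names.
struct StackState {
    async_support: bool,
    stack_limit: usize,
}

impl StackState {
    // `stack_top` stands in for the current stack pointer on entry to wasm.
    fn on_enter_wasm(&mut self, already_entered: bool, stack_top: usize) {
        const HEADROOM: usize = 512 * 1024; // arbitrary guard distance
        // Async calls may be running on a freshly switched fiber stack, so
        // the limit is recomputed on every entry, recursive or not. Sync
        // calls only configure the limit on the outermost entry.
        if self.async_support || !already_entered {
            self.stack_limit = stack_top.saturating_sub(HEADROOM);
        }
    }
}
```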
* Update crates/wasmtime/src/func.rs
Co-authored-by: Peter Huene <peter@huene.dev>
* Restore POSIX signal handling on macOS behind a feature flag
As described in issue #3052, the switch to Mach exception handling
removed `unix::StoreExt` from the crate's public API on macOS.
That is a breaking change and makes it difficult for some
applications to upgrade to the current stable Wasmtime.
As a workaround this PR introduces a feature flag called
`posix-signals-on-macos` that restores the old behaviour on macOS.
The flag is disabled by default.
* Fix test guard
* Fix formatting in the test
We previously had some off-by-one errors in our error messages and this led to
very confusing messages like "expected 0 types, found 0" that were quite
annoying to debug as an API consumer.
* Change the injection count of fuel in a store from u32 to u64
This commit changes the type of the injection count passed to
`out_of_fuel_async_yield` from `u32` to `u64`. This should allow
effectively infinite fuel to be injected, even if only a small amount
of fuel is injected per iteration.
Closes #2927
Closes #3046
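A hedged sketch of using the widened count (the method name is from this commit; the `Config` knobs, parameter order, and values are assumptions):

```rust
use wasmtime::{Config, Engine, Store};

fn fuel_store() -> anyhow::Result<Store<()>> {
    let mut config = Config::new();
    config.async_support(true); // yielding on out-of-fuel requires async
    config.consume_fuel(true);
    let engine = Engine::new(&config)?;
    let mut store = Store::new(&engine, ());
    // With a `u64` injection count, even a tiny per-iteration amount of fuel
    // is effectively inexhaustible: u64::MAX injections of 10_000 fuel each.
    store.out_of_fuel_async_yield(u64::MAX, 10_000);
    Ok(store)
}
```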
* Fix tokio example
Clarify that they're executed not only around imports but also around
function calls. Additionally, spell out the semantics around traps a bit
more clearly.
The current_length member is defined as "usize" in Rust code,
but generated wasm code refers to it as if it were "u32".
While this happens to mostly work on little-endian machines
(as long as the length is < 4GB), it will always fail on
big-endian machines.
Fixed by making current_length "u32" in Rust as well, and
ensuring the actual memory size is always less than 4GB.
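A standalone illustration of why the mismatch only bites on big-endian hosts (plain Rust, not Wasmtime's code):

```rust
use std::convert::TryInto;

// Reading only the low 4 bytes of the length, the way generated code that
// treats the field as `u32` would.
fn read_as_u32(len: usize) -> u32 {
    let bytes = len.to_ne_bytes(); // native-endian layout of the usize
    u32::from_ne_bytes(bytes[..4].try_into().unwrap())
}

fn main() {
    let len: usize = 65_536;
    // Little-endian: prints 65536, because the low half comes first in memory.
    // Big-endian: prints 0, because the first 4 bytes are the high half.
    println!("{}", read_as_u32(len));
}
```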
This commit fixes running the store's enter/exit hooks into wasm which
accidentally weren't run for an instance's `start` function. The fix
here was mostly to just sink the enter/exit hook much lower in the code
to `invoke_wasm_and_catch_traps`, which is the common entry point for
all wasm calls.
This did involve propagating the `StoreContext<T>` generic rather than
using `StoreOpaque`, unfortunately, but it is overall not too much
code and we generally wanted most of it inlined anyway.
* Add guard pages to the front of linear memories
This commit implements a safety feature for Wasmtime to place guard
pages before the allocation of all linear memories. Guard pages placed
after linear memories are typically present for performance (at least)
because they can help elide bounds checks. Guard pages before a linear
memory, however, are never strictly needed for performance or features.
The intention of a preceding guard page is to help insulate against bugs
in Cranelift or other code generators, such as CVE-2021-32629.
This commit adds a `Config::guard_before_linear_memory` configuration
option, defaulting to `true`, which indicates whether guard pages should
be present before linear memories as well as after them. Guard
regions continue to be controlled by
`{static,dynamic}_memory_guard_size` methods.
The implementation here affects both on-demand allocated memories as
well as the pooling allocator for memories. For on-demand memories this
adjusts the size of the allocation as well as adjusts the calculations
for the base pointer of the wasm memory. For the pooling allocator this
will place a singular extra guard region at the very start of the
allocation for memories. Since linear memories in the pooling allocator
are contiguous, every memory already had a preceding guard region: the
trailing guard region of the previous memory. Only
the first memory needed this extra guard.
I've attempted to write some tests to help test all this, but this is
all somewhat tricky to test because the settings are pretty far away
from the actual behavior. I think, though, that the tests added here
should help cover various use cases and help us have confidence in
tweaking the various `Config` settings beyond their defaults.
Note that this also contains a semantic change where
`InstanceLimits::memory_reservation_size` has been removed. Instead this
field is now inferred from the `static_memory_maximum_size` and guard
size settings. This should hopefully remove some duplication in these
settings, canonicalizing on the guard-size/static-size settings as the
way to control memory sizes and virtual reservations.
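A sketch of how the new knob composes with the existing guard-size settings (method names as mentioned above; the sizes are arbitrary):

```rust
use wasmtime::{Config, Engine};

fn engine_with_guards() -> anyhow::Result<Engine> {
    let mut config = Config::new();
    // Keep the (default) guard region in front of each linear memory...
    config.guard_before_linear_memory(true);
    // ...and control the guard sizes with the existing knobs.
    config.static_memory_guard_size(2u64 << 30); // 2 GiB, illustrative
    config.dynamic_memory_guard_size(64 << 10); // 64 KiB, illustrative
    Engine::new(&config)
}
```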
* Update config docs
* Fix a typo
* Fix benchmark
* Fix wasmtime-runtime tests
* Fix some more tests
* Try to fix uffd failing test
* Review items
* Tweak 32-bit defaults
Makes the pooling allocator a bit more reasonable by default on 32-bit
with these settings.
* Reimplement how instance exports are stored/loaded
This commit internally refactors how instance exports are handled and
fixes two issues. One issue is that when we instantiate an instance we
no longer forcibly load all items from the instance immediately,
deferring insertion of each item into the store data tables to happen
later as necessary. The next issue is that repeated calls to
`Caller::get_export` would continuously insert items into the store data
tables. While this worked as intended, it was undesirable because it would
continuously push onto a vector that only got deallocated once the
entire store was deallocated. Now it's routed to `Instance::get_export`
which doesn't have this behavior.
Closes #2916
Closes #2983
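For reference, a sketch of the `Caller::get_export` pattern this affects (the host function, export name, and store data type are hypothetical):

```rust
use wasmtime::{Caller, Extern, Memory};

// Called repeatedly from wasm; after this change repeated lookups no longer
// keep growing the store's data tables.
fn log_bytes(mut caller: Caller<'_, ()>, ptr: u32, len: u32) {
    let memory: Memory = match caller.get_export("memory") {
        Some(Extern::Memory(m)) => m,
        _ => return,
    };
    let data = memory.data(&caller);
    if let Some(bytes) = data.get(ptr as usize..ptr as usize + len as usize) {
        println!("guest sent {} bytes", bytes.len());
    }
}
```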
* Just define our own `Either`
* Update wasm-tools crates
This brings in recent updates, notably including more improvements to
wasm-smith which will hopefully help exercise non-trapping wasm more.
* Fix some wat
This is currently a very common operation in host bindings: when wasm
gives a host function a relative pointer you'll want to simultaneously
work with the host state and the wasm memory. These two regions are
distinct and safe to borrow mutably at the same time, but it's not
obvious in the Rust type system that this is so, so add a helper method
here to assist.
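A hedged sketch of that pattern, assuming `Memory::data_and_store_mut` is the helper in question (the host-state type and export name are made up):

```rust
use wasmtime::{Caller, Extern};

// Host state for illustration: a buffer the host appends guest data into.
type HostState = Vec<u8>;

fn copy_in(mut caller: Caller<'_, HostState>, ptr: u32, len: u32) {
    let memory = match caller.get_export("memory") {
        Some(Extern::Memory(m)) => m,
        _ => return,
    };
    // Borrow the wasm linear memory and the host state mutably at once; the
    // regions are disjoint and the helper expresses that to the type system.
    let (data, host) = memory.data_and_store_mut(&mut caller);
    let (start, end) = (ptr as usize, ptr as usize + len as usize);
    if let Some(bytes) = data.get(start..end) {
        host.extend_from_slice(bytes);
    }
}
```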
* wasmtime_runtime: move ResourceLimiter defaults into this crate
In preparation for changing wasmtime::ResourceLimiter to be a re-export
of this definition, because translating between two traits was causing
problems elsewhere.
* wasmtime: make ResourceLimiter a re-export of wasmtime_runtime::ResourceLimiter
* refactor Store internals to support ResourceLimiter as part of store's data
* add hooks for entering and exiting native code to Store
* wasmtime-wast, fuzz: changes to adapt ResourceLimiter API
* fix tests
* wrap calls into wasm with entering/exiting exit hooks as well
* the most trivial test found a bug, let's write some more
* store: mark some methods as #[inline] on Store, StoreInner, StoreInnerMost
Co-authored-By: Alex Crichton <alex@alexcrichton.com>
* improve tests for the entering/exiting native hooks
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
* make Module::deserialize's version check optional via Config
A SerializedModule contains the CARGO_PKG_VERSION string, which is
checked for equality when loading. This is a great guard-rail but
some users may want to disable this check (e.g. so they can implement
their own versioning scheme), as sketched below.
* rename config to deserialize_check_wasmtime_version
* add test
* fix doc links
* fix
* thank you rustdoc
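A sketch of opting out of the version check, using the config method name from the rename above (the loader function and use of `anyhow` are assumptions):

```rust
use wasmtime::{Config, Engine, Module};

fn load_precompiled(bytes: &[u8]) -> anyhow::Result<Module> {
    let mut config = Config::new();
    // Skip the CARGO_PKG_VERSION equality check when deserializing.
    config.deserialize_check_wasmtime_version(false);
    let engine = Engine::new(&config)?;
    // Safety: `bytes` must come from a trusted `Module::serialize` /
    // `Engine::precompile_module` output for a compatible build.
    unsafe { Module::deserialize(&engine, bytes) }
}
```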
* Add the ability to cache typechecking an instance
This commit adds the ability to cache the type-checked imports of an
instance if an instance is going to be instantiated multiple times. This
can also be useful to do a "dry run" of instantiation where no wasm code
is run but it's double-checked that a `Linker` possesses everything
necessary to instantiate the provided module.
This should ideally help cut down repeated instantiation costs slightly
by avoiding type-checking and allocating a `Vec<Extern>` on each
instantiation. It's expected though that the impact on instantiation
time is quite small and likely not super significant. The functionality,
though, of pre-checking can be useful for some embeddings.
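A hedged sketch of the intended usage, assuming this landed as the `Linker::instantiate_pre` / `InstancePre` API (exact signatures have varied across Wasmtime versions):

```rust
use wasmtime::{Engine, Linker, Module, Store};

fn instantiate_many(engine: &Engine, module: &Module) -> anyhow::Result<()> {
    let mut linker: Linker<()> = Linker::new(engine);
    linker.func_wrap("host", "hello", || println!("hello"))?;

    let mut store = Store::new(engine, ());
    // Type-check the imports once (also a useful "dry run")...
    let pre = linker.instantiate_pre(&mut store, module)?;
    // ...then instantiate repeatedly without re-checking or re-collecting
    // a Vec<Extern> of imports.
    for _ in 0..8 {
        let _instance = pre.instantiate(&mut store)?;
    }
    Ok(())
}
```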
* Fix build with async
Implement Wasmtime's new API as designed by RFC 11. This is quite a large commit which has had lots of discussion externally, so for more information it's best to read the RFC thread and the PR thread.
* wasmtime-wasi: re-exporting this WasiCtxBuilder was shadowing the right one
wasi-common's WasiCtxBuilder is really only useful for wasi_cap_std_sync and
wasi_tokio to implement their own builders on top of.
This re-export of wasi-common's is 1. not useful and 2. shadows the
re-export of the right one in sync::*.
* wasi-common: eliminate WasiCtxBuilder, make the builder methods on WasiCtx instead
* delete wasi-common::WasiCtxBuilder altogether
just put those methods directly on &mut WasiCtx.
As a bonus, the sync and tokio WasiCtxBuilder::build functions
are no longer fallible!
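A minimal sketch of the resulting shape (module path per the commits above; the builder methods shown are assumed):

```rust
use wasmtime_wasi::sync::WasiCtxBuilder;

fn main() {
    // `build()` on the sync builder is now infallible, so no `?` or `unwrap`.
    let wasi_ctx = WasiCtxBuilder::new().inherit_stdio().build();
    drop(wasi_ctx);
}
```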
* bench fixes
* more test fixes
This commit implements the `--allow-unknown-exports` option to the CLI run
command that will ignore unknown exports in a command module rather than
return an error.
Fixes #2587.
* Bring back per-thread lazy initialization
Platforms Wasmtime supports may have per-thread initialization that
needs to run before WebAssembly. For example Unix needs to setup a
sigaltstack and macOS needs to set up mach ports. In #2757 this
per-thread setup was moved out of the invocation of a wasm function,
relying on the lack of `Send` for `Store` to initialize the thread at `Store`
creation time and never worry about it later.
This conflicted with [wasmtime's desired multithreading
story](https://github.com/bytecodealliance/wasmtime/pull/2812) so a new
[`Store::notify_switched_thread` was
added](https://github.com/bytecodealliance/wasmtime/pull/2822) to
explicitly indicate a Store has moved to another thread (if it unsafely
did so).
It turns out though that it's not always easy to determine when a
`Store` moves to a new thread. For example the Go bindings for Wasmtime
are generally unaware when a goroutine switches OS threads. This led to
https://github.com/bytecodealliance/wasmtime-go/issues/74 where a SIGILL
was left uncaught, making it appear that traps aren't working properly.
This commit revisits the decision in #2757 and moves per-thread
initialization back into the path of calling into WebAssembly. This
differs from before, though: there's still only one TLS access
on the path of calling into WebAssembly, whereas previously it was a
separate access. This allows us to get the speed benefits of #2757 as
well as the flexibility benefits of not having to explicitly move a
store between threads.
With this new ability this commit deletes the recently added
`Store::notify_switched_thread` method since it's no longer necessary.
* Fix a test compiling
* Bring back `Module::deserialize`
I thought I was being clever suggesting that `Module::deserialize` be
removed in #2791 by funneling all module constructors into
`Module::new`. As our studious fuzzers have found, though, this means
that `Module::new` is currently not safe to pass arbitrary user-defined
input into. Now one might pretty reasonably expect to be able to do
that, this being a WebAssembly engine and all. This PR as a result
separates the `deserialize` part of `Module::new` back into
`Module::deserialize`.
This means that binary blobs created with `Module::serialize` and
`Engine::precompile_module` will need to be passed to
`Module::deserialize` to "rehydrate" them back into a `Module`. This
restores the property that it should be safe to pass arbitrary input to
`Module::new` since it's always expected to be a wasm module. This also
means that fuzzing will no longer attempt to fuzz `Module::deserialize`
which isn't something we want to do anyway.
* Fix an example
* Mark `Module::deserialize` as `unsafe`
* Add resource limiting to the Wasmtime API.
This commit adds a `ResourceLimiter` trait to the Wasmtime API.
When used in conjunction with `Store::new_with_limiter`, this can be used to
monitor and prevent WebAssembly code from growing linear memories and tables.
This is particularly useful when hosts need to take into account host resource
usage to determine if WebAssembly code can consume more resources.
A simple `StaticResourceLimiter` is also included with these changes that will
simply limit the size of linear memories or tables for all instances created in
the store based on static values.
* Code review feedback.
* Implemented `StoreLimits` and `StoreLimitsBuilder`.
* Moved `max_instances`, `max_memories`, `max_tables` out of `Config` and into
`StoreLimits`.
* Moved storage of the limiter in the runtime into `Memory` and `Table`.
* Made `InstanceAllocationRequest` use a reference to the limiter.
* Updated docs.
* Made `ResourceLimiterProxy` generic to remove a level of indirection.
* Fixed the limiter not being used for `wasmtime::Memory` and
`wasmtime::Table`.
* Code review feedback and bug fix.
* `Memory::new` now returns `Result<Self>` so that an error can be returned if
the initial requested memory exceeds any limits placed on the store.
* Changed an `Arc` to `Rc` as the `Arc` wasn't necessary.
* Removed `Store` from the `ResourceLimiter` callbacks. Custom resource limiter
implementations are free to capture any context they want, so no need to
unnecessarily store a weak reference to `Store` from the proxy type.
* Fixed a bug in the pooling instance allocator where an instance would be
leaked from the pool. Previously, this would only have happened if the OS was
unable to make the necessary linear memory available for the instance. With
these changes, however, the instance might not be created due to limits
placed on the store. We now properly deallocate the instance on error.
* Added more tests, including one that covers the fix mentioned above.
* Code review feedback.
* Add another memory to `test_pooling_allocator_initial_limits_exceeded` to
ensure a partially created instance is successfully deallocated.
* Update some doc comments for better documentation of `Store` and
`ResourceLimiter`.
This commit adds `Module::from_parts` as an internal constructor that shares
the implementation between `Module::from_binary` and module deserialization.
This commit uses a two-phase lookup of stack map information from modules
rather than giving back raw pointers to stack maps.
First, the runtime looks up information about a module from a pc value, which
returns an `Arc` it holds onto while completing the stack map lookup.
Second, it queries the module information for the stack map from a pc
value, getting a reference to the stack map (which is now safe because of the
`Arc` held by the runtime).
* Make `FunctionInfo` public and `CompiledModule::func_info` return it.
* Make the `StackMapLookup` trait unsafe.
* Add comments for the purpose of `EngineHostFuncs`.
* Rework ownership model of shared signatures: `SignatureCollection` in
conjunction with `SignatureRegistry` is now used so that the `Engine`,
`Store`, and `Module` don't need to worry about unregistering shared
signatures.
* Implement `Func::param_arity` and `Func::result_arity` in terms of
`Func::ty`.
* Make looking up a trampoline with the module registry more efficient by doing
a binary search on the function's starting PC value for the owning module and
then looking up the trampoline with only that module.
* Remove reference to the shared signatures from `GlobalRegisteredModule`.
This commit removes the stack map registry and instead uses the existing
information from the store's module registry to lookup stack maps.
A trait is now used to pass the lookup context to the runtime, implemented by
`Store` to do the lookup.
With this change, module registration in `Store` is now entirely limited to
inserting the module into the module registry.
This commit moves the shared signature registry out of `Store` and into
`Engine`.
This helps eliminate work that was performed whenever a `Module` was
instantiated into a `Store`.
Now a `Module` is registered with the shared signature registry upon creation,
storing the mapping from the module's signature index space to the shared index
space.
This also refactors the "frame info" registry into a general purpose "module
registry" that is used to look up trap information, signature information, and
(soon) stack map information.
* Introduce a new API that allows notifying that a Store has moved to a new thread
* Add backlink to documentation, and mention the new API in the multithreading doc.
This commit is intended to be a perf improvement for instantiation of
modules with lots of functions. Previously the `lookup_shared_signature`
callback was showing up quite high in profiles as part of instantiation.
As some background, this callback is used to translate from a module's
`SignatureIndex` to a `VMSharedSignatureIndex` which the instance
stores. This callback is called for two reasons: one is to translate all
of the module's own types into `VMSharedSignatureIndex` for the purposes
of `call_indirect` (the translation of that loads from this table to
compare indices). The second reason is that a `VMCallerCheckedAnyfunc`
is prepared for all functions and this embeds a `VMSharedSignatureIndex`
inside of it.
The slow part today is that the lookup callback was called
once-per-function and each lookup involved hashing a full
`WasmFuncType`. Granted, our hash algorithm is still Rust's default
SipHash algorithm, which is quite slow, but we also shouldn't need to
re-hash each signature if we see it multiple times anyway.
The fix applied in this commit is to change this lookup callback to an
`enum` where one variant is that there's a table to lookup from. This
table is a `PrimaryMap` which means that lookup is quite fast. The only
thing we need to do is to prepare the table ahead of time. Currently
this happens on the instantiation path because in my measurements the
creation of the table is quite fast compared to the rest of
instantiation. If this becomes an issue, though, we can look into
creating the table as part of `SigRegistry::register_module` and caching
it somewhere (I'm not entirely sure where but I'm sure we can figure it
out).
In general there's not a ton of efficiency built into the `SigRegistry`
type. I'm hoping, though, that this picks the next-lowest-hanging fruit in
terms of performance without complicating the implementation too much. I
tried a few variants and this change seemed like the best balance
between simplicity and a nice performance gain.
Locally I measured an improvement in instantiation time for a large-ish
module by reducing the time from ~3ms to ~2.6ms per instance.
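A generic illustration of the change with plain Rust stand-ins (not Wasmtime's actual types): signatures are hashed once per module to build a dense table, after which per-function lookups are indexed loads:

```rust
use std::collections::HashMap;

// Stand-ins for WasmFuncType / VMSharedSignatureIndex.
#[derive(Hash, PartialEq, Eq)]
struct FuncType(Vec<u8>);
type SharedIndex = u32;

struct SigRegistry {
    shared: HashMap<FuncType, SharedIndex>,
}

impl SigRegistry {
    // Old cost model: called once per function, hashing the full signature
    // every time.
    fn lookup_by_type(&self, ty: &FuncType) -> SharedIndex {
        self.shared[ty]
    }

    // New cost model: hash each of the module's signatures once to build a
    // dense table indexed by the module-local signature index...
    fn build_table(&self, module_sigs: &[FuncType]) -> Vec<SharedIndex> {
        module_sigs.iter().map(|ty| self.shared[ty]).collect()
    }
}

// ...after which each per-function lookup is a plain indexed load.
fn lookup_by_index(table: &[SharedIndex], sig_index: usize) -> SharedIndex {
    table[sig_index]
}
```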