* Add resource limiting to the Wasmtime API.
This commit adds a `ResourceLimiter` trait to the Wasmtime API.
When used in conjunction with `Store::new_with_limiter`, this can be used to
monitor and prevent WebAssembly code from growing linear memories and tables.
This is particularly useful when hosts need to take into account host resource
usage to determine if WebAssembly code can consume more resources.
A simple `StaticResourceLimiter` is also included with these changes; it limits
the size of linear memories or tables for all instances created in the store
based on static values.
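As a rough sketch of how a custom limiter might plug in (the trait method
names and signatures below are assumptions paraphrased from this description,
not necessarily the exact API):

```rust
use std::rc::Rc;
use wasmtime::{Engine, ResourceLimiter, Store};

// Hypothetical limiter: deny linear memory growth past 16 pages and table
// growth past 100 elements. Returning `false` from a callback rejects the
// grow operation.
struct MyLimiter;

impl ResourceLimiter for MyLimiter {
    fn memory_growing(&self, _current: usize, desired: usize, _max: Option<usize>) -> bool {
        desired <= 16
    }
    fn table_growing(&self, _current: u32, desired: u32, _max: Option<u32>) -> bool {
        desired <= 100
    }
}

fn limited_store() -> Store {
    let engine = Engine::default();
    // Every instance created in this store is subject to the limiter.
    Store::new_with_limiter(&engine, Rc::new(MyLimiter))
}
```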
* Code review feedback.
* Implemented `StoreLimits` and `StoreLimitsBuilder` (usage sketched after this list).
* Moved `max_instances`, `max_memories`, `max_tables` out of `Config` and into
`StoreLimits`.
* Moved storage of the limiter in the runtime into `Memory` and `Table`.
* Made `InstanceAllocationRequest` use a reference to the limiter.
* Updated docs.
* Made `ResourceLimiterProxy` generic to remove a level of indirection.
* Fixed the limiter not being used for `wasmtime::Memory` and
`wasmtime::Table`.
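A hedged sketch of what building static limits might look like (the builder
method names mirror the options moved out of `Config` and are assumptions):

```rust
use std::rc::Rc;
use wasmtime::{Engine, Store, StoreLimitsBuilder};

// Hypothetical builder usage: cap the number of instances, memories, and
// tables that may be created in the store.
fn limited_store(engine: &Engine) -> Store {
    let limits = StoreLimitsBuilder::new()
        .instances(10)
        .memories(2)
        .tables(2)
        .build();
    Store::new_with_limiter(engine, Rc::new(limits))
}
```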
* Code review feedback and bug fix.
* `Memory::new` now returns `Result<Self>` so that an error can be returned if
the initial memory request exceeds any limits placed on the store (example
after this list).
* Changed an `Arc` to `Rc` as the `Arc` wasn't necessary.
* Removed `Store` from the `ResourceLimiter` callbacks. Custom resource limiter
implementations are free to capture any context they want, so there's no need
to store a weak reference to `Store` from the proxy type.
* Fixed a bug in the pooling instance allocator where an instance would be
leaked from the pool. Previously, this would only have happened if the OS was
unable to make the necessary linear memory available for the instance. With
these changes, however, the instance might not be created due to limits
placed on the store. We now properly deallocate the instance on error.
* Added more tests, including one that covers the fix mentioned above.
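For example, with the now-fallible constructor (the `MemoryType`/`Limits`
construction reflects the Wasmtime API of this era; the exact error behavior
is an assumption):

```rust
use wasmtime::{Limits, Memory, MemoryType, Store};

// Creating a memory now surfaces limit violations as an error instead of
// silently succeeding.
fn try_memory(store: &Store) -> anyhow::Result<Memory> {
    let ty = MemoryType::new(Limits::new(1, None));
    // Fails if even one initial page exceeds the store's limits.
    Memory::new(store, ty)
}
```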
* Code review feedback.
* Add another memory to `test_pooling_allocator_initial_limits_exceeded` to
ensure a partially created instance is successfully deallocated.
* Update some doc comments for better documentation of `Store` and
`ResourceLimiter`.
This commit adds `Module::from_parts` as an internal constructor that shares
the implementation between `Module::from_binary` and module deserialization.
This commit uses a two-phase lookup of stack map information from modules
rather than giving back raw pointers to stack maps.
First the runtime looks up information about a module from a pc value, which
returns an `Arc` that it holds while completing the stack map lookup.
Second it then queries the module information for the stack map from a pc
value, getting a reference to the stack map (which is now safe because of the
`Arc` held by the runtime).
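The shape of the lookup might be sketched like this (the trait and method
names are hypothetical stand-ins for the runtime interfaces):

```rust
use std::sync::Arc;

// Stand-in type for illustration only.
#[derive(Clone)]
struct StackMap;

trait ModuleInfoLookup {
    // Phase 1: resolve the module covering `pc`.
    fn lookup(&self, pc: usize) -> Option<Arc<dyn ModuleInfo>>;
}

trait ModuleInfo {
    // Phase 2: resolve the stack map within that module.
    fn lookup_stack_map(&self, pc: usize) -> Option<&StackMap>;
}

fn stack_map_at(registry: &dyn ModuleInfoLookup, pc: usize) -> Option<StackMap> {
    // Holding the `Arc` keeps the module's stack maps alive, so borrowing
    // from it during the second phase is safe.
    let info = registry.lookup(pc)?;
    info.lookup_stack_map(pc).cloned()
}
```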
* Make `FunctionInfo` public and `CompiledModule::func_info` return it.
* Make the `StackMapLookup` trait unsafe.
* Add comments for the purpose of `EngineHostFuncs`.
* Rework ownership model of shared signatures: `SignatureCollection` in
conjunction with `SignatureRegistry` is now used so that the `Engine`,
`Store`, and `Module` don't need to worry about unregistering shared
signatures.
* Implement `Func::param_arity` and `Func::result_arity` in terms of
`Func::ty`.
* Make looking up a trampoline with the module registry more efficient by doing
a binary search on the function's starting PC value for the owning module and
then looking up the trampoline with only that module (sketched below).
* Remove reference to the shared signatures from `GlobalRegisteredModule`.
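A minimal sketch of the binary search for the owning module (field names are
illustrative, not the actual registry types):

```rust
// Modules sorted by starting PC; each covers [start, end).
struct RegisteredModule {
    start: usize,
    end: usize,
    // ... trampolines, signatures, etc.
}

fn owning_module(modules: &[RegisteredModule], pc: usize) -> Option<&RegisteredModule> {
    // Find the last module whose start is <= pc...
    let idx = match modules.binary_search_by_key(&pc, |m| m.start) {
        Ok(i) => i,
        Err(0) => return None,
        Err(i) => i - 1,
    };
    // ...then confirm `pc` actually falls inside it. The trampoline lookup
    // then only has to consult this one module.
    let m = &modules[idx];
    if pc < m.end {
        Some(m)
    } else {
        None
    }
}
```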
This commit removes the stack map registry and instead uses the existing
information from the store's module registry to lookup stack maps.
A trait is now used to pass the lookup context to the runtime, implemented by
`Store` to do the lookup.
With this change, module registration in `Store` is now entirely limited to
inserting the module into the module registry.
This commit moves the shared signature registry out of `Store` and into
`Engine`.
This helps eliminate work that was performed whenever a `Module` was
instantiated into a `Store`.
Now a `Module` is registered with the shared signature registry upon creation,
storing the mapping from the module's signature index space to the shared index
space.
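One way to picture the registration step (the `intern` method and the
surrounding names are assumptions, not the exact internals):

```rust
use cranelift_entity::PrimaryMap;

// Hypothetical sketch: on Module creation, intern each signature once in
// the engine-wide registry, producing a table from the module's own
// `SignatureIndex` space to the shared `VMSharedSignatureIndex` space.
fn register_signatures(
    registry: &mut SignatureRegistry,
    types: &PrimaryMap<SignatureIndex, WasmFuncType>,
) -> PrimaryMap<SignatureIndex, VMSharedSignatureIndex> {
    let mut shared = PrimaryMap::new();
    for ty in types.values() {
        shared.push(registry.intern(ty));
    }
    shared
}
```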
This also refactors the "frame info" registry into a general purpose "module
registry" that is used to look up trap information, signature information, and
(soon) stack map information.
* Introduce a new API that allows notifying the runtime that a Store has moved to a new thread
* Add a backlink to the documentation, and mention the new API in the multithreading doc.
Following the new ABI introduced for efficient support of multiple return values, the old backend's check for whether to generate unwind information was incomplete, resulting in no unwind information being generated and traps not being correctly caught by the runtime.
PR 2840 changed the store_spillslot routine to always store
integer registers at full word size to a spill slot. However,
the load_spillslot routine was not updated, which may cause
the contents to be reloaded with a different type. On big-endian
systems this fetches the wrong data.
Fixed by using the same type override in load_spillslot.
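A minimal sketch of the shared type override, assuming a helper along these
lines is used by both routines (names are illustrative):

```rust
use cranelift_codegen::ir::{types, Type};

// Narrow integer types are spilled as a full 64-bit word, so they must be
// reloaded as a full word too; otherwise a narrower load reads the wrong
// bytes on big-endian targets.
fn spill_slot_ty(ty: Type) -> Type {
    if ty.is_int() && ty.bits() < 64 {
        types::I64
    } else {
        ty
    }
}
```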
* x64: add EVEX encoding mechanism
Also includes an empty stub module for the VEX encoding.
* x64: lower abs.i64x2 to VPABSQ when available
* x64: refactor EVEX encodings to use `EvexInstruction`
This change replaces the `encode_evex` function with a builder-style struct, `EvexInstruction`. This approach clarifies the code, adds documentation, and results in slight speedups when benchmarked.
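Emitting VPABSQ with the builder might look roughly like this (method names
are paraphrased from the change and may not match exactly):

```rust
// Hypothetical emission of VPABSQ via the builder-style API.
fn emit_vpabsq(sink: &mut impl ByteSink, dst_enc: u8, src_enc: u8) {
    EvexInstruction::new()
        .length(EvexVectorLength::V128)
        .prefix(LegacyPrefixes::_66)
        .map(OpcodeMap::_0F38)
        .w(true)      // EVEX.W = 1: 64-bit lanes
        .opcode(0x1F) // VPABSQ
        .reg(dst_enc)
        .rm(src_enc)
        .encode(sink);
}
```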
* x64: rename encoding CodeSink to ByteSink
The patch extends the unwinder to support targets that do not need
to use a dedicated frame pointer register. Specifically, the
changes include (a sketch of the trait change follows this list):
- Change the "fp" routine in the RegisterMapper to return an
*optional* frame pointer register via Option<Register>.
- On targets that choose to not define a FP register via the above
routine, the UnwindInst::DefineNewFrame operation no longer switches
the CFA to be defined in terms of the FP. (The operation still can
be used to define the location of the clobber area.)
- In addition, on targets that choose not to define a FP register, the
UnwindInst::PushFrameRegs operation is not supported.
- There is a new operation UnwindInst::StackAlloc that needs to be
called on targets without FP whenever the stack pointer is updated.
This causes the CFA offset to be adjusted accordingly. (On
targets with FP this operation is a no-op.)
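The trait change might be sketched as follows (a hypothetical simplification
of the actual RegisterMapper interface):

```rust
// Stand-in for a machine register in the sketch.
type Reg = u16;

trait RegisterMapper {
    /// Map an ISA register to its DWARF register number.
    fn map(&self, reg: Reg) -> u16;

    /// The DWARF number of the dedicated frame pointer register, if any.
    /// Returning `None` keeps the CFA defined in terms of SP, in which
    /// case every SP adjustment must be reported via
    /// `UnwindInst::StackAlloc` so the CFA offset stays correct.
    fn fp(&self) -> Option<u16> {
        None
    }
}
```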
The unwind rework (commit 2d5db92a) removed support for allowing
a target to allocate the space for outgoing function arguments
right in the prologue (originally added via commit 80c2d70d).
This patch adds it back.
After the unwind rework (commit 2d5db92a) the space used to save
clobbered registers now lies between the nominal SP and the FP.
Therefore, the size of that space should now be included in the
frame size as reported by frame_size(), since this value is used
to compute the nominal_sp_to_fp offset.
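In other words, roughly (field names hypothetical):

```rust
// Stand-in for the ABI frame state.
struct FrameLayout {
    fixed_frame_storage_size: u32,
    clobber_size: u32,
}

impl FrameLayout {
    /// The clobber-save area now sits between nominal SP and FP, so it
    /// must count toward the frame size used for nominal_sp_to_fp.
    fn frame_size(&self) -> u32 {
        self.fixed_frame_storage_size + self.clobber_size
    }
}
```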
This refactoring replaces uses of `Inst::mov_r_m` with `Inst::store` to ensure there is only one code location to troubleshoot when generating store instructions for a specific type.
Previously, `Inst::store` only understood a subset of the scalar types, which resulted in failures seen in #2826. This change allows `Inst::store` to generate instructions for all scalar widths (`8 | 16 | 32 | 64`) since all of these are supported in the emission code of `Inst::MovRM`.
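The consolidated constructor might be sketched like so (a simplification over
the backend's internal `Inst`, `Type`, and operand types, not the exact code):

```rust
// Every scalar width now funnels into `Inst::MovRM`, whose emission
// already handles 1-, 2-, 4-, and 8-byte stores.
pub fn store(ty: Type, src: Reg, dst: SyntheticAmode) -> Inst {
    match ty.bits() {
        8 | 16 | 32 | 64 => Inst::MovRM {
            size: ty.bytes() as u8,
            src,
            dst,
        },
        _ => unimplemented!("vector stores are constructed elsewhere"),
    }
}
```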
SIMD & FP registers are now saved and restored in pairs, similarly
to general-purpose registers. Also, only the bottom 64 bits of the
registers are saved and restored (in the case of non-Baldrdash ABIs),
as required by the Procedure Call Standard for the
Arm 64-bit Architecture.
As for the callee-saved general-purpose registers, if a procedure
needs to save and restore an odd number of them, it no longer uses
store and load pair instructions for the last register.
Copyright (c) 2021, Arm Limited.
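The pairing logic described above can be pictured like this (the instruction
constructors and offset helper are hypothetical, using the backend's `Reg`
and `Inst` types):

```rust
// Save callee-saved vector registers two at a time; a trailing odd
// register gets a single store instead of a pair. Only the low 64 bits
// of each register are saved, per the AArch64 Procedure Call Standard.
fn save_vec_clobbers(clobbered_vec: &[Reg], insts: &mut Vec<Inst>) {
    let mut offset = 0i64;
    for regs in clobbered_vec.chunks(2) {
        match regs {
            [a, b] => {
                insts.push(Inst::fpu_store_pair64(*a, *b, sp_offset(offset)));
                offset += 16;
            }
            [a] => {
                insts.push(Inst::fpu_store64(*a, sp_offset(offset)));
                offset += 8;
            }
            _ => unreachable!(),
        }
    }
}
```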
This commit is intended to be a perf improvement for instantiation of
modules with lots of functions. Previously the `lookup_shared_signature`
callback was showing up quite high in profiles as part of instantiation.
As some background, this callback is used to translate from a module's
`SignatureIndex` to a `VMSharedSignatureIndex` which the instance
stores. This callback is called for two reasons: one is to translate all
of the module's own types into `VMSharedSignatureIndex` for the purposes
of `call_indirect` (the translated code loads from this table to
compare indices). The second reason is that a `VMCallerCheckedAnyfunc`
is prepared for each function, and this embeds a `VMSharedSignatureIndex`
inside of it.
The slow part today is that the lookup callback was called
once-per-function and each lookup involved hashing a full
`WasmFuncType`. Our hash algorithm is still Rust's default
SipHash, which is quite slow, but even so we shouldn't need to
re-hash each signature if we see it multiple times anyway.
The fix applied in this commit is to change this lookup callback to an
`enum` where one variant is a table to look up from. This
table is a `PrimaryMap` which means that lookup is quite fast. The only
thing we need to do is to prepare the table ahead of time. Currently
this happens on the instantiation path because in my measurements the
creation of the table is quite fast compared to the rest of
instantiation. If this becomes an issue, though, we can look into
creating the table as part of `SigRegistry::register_module` and caching
it somewhere (I'm not entirely sure where but I'm sure we can figure it
out).
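The two lookup strategies might be sketched like this (type and method names
are approximations, not the exact wasmtime internals):

```rust
// Fast path: a precomputed `PrimaryMap` indexed directly by the module's
// `SignatureIndex`, so no hashing at all. Slow path: hash the full
// `WasmFuncType` in the registry, as before.
enum SharedSignatureLookup<'a> {
    Table(&'a PrimaryMap<SignatureIndex, VMSharedSignatureIndex>),
    Registry(&'a SignatureRegistry),
}

impl SharedSignatureLookup<'_> {
    fn shared_signature(
        &self,
        idx: SignatureIndex,
        ty: &WasmFuncType,
    ) -> VMSharedSignatureIndex {
        match self {
            Self::Table(map) => map[idx],
            Self::Registry(registry) => registry.lookup(ty),
        }
    }
}
```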
In general there's not a ton of efficiency around the `SigRegistry`
type. I'm hoping though that this fixes the next-lowest-hanging-fruit in
terms of performance without complicating the implementation too much. I
tried a few variants and this change seemed like the best balance
between simplicity and still a nice performance gain.
Locally I measured an improvement in instantiation time for a large-ish
module by reducing the time from ~3ms to ~2.6ms per instance.
This commit updates the implementation of `VMOffsets` to frontload all
checked arithmetic on construction of the `VMOffsets` which allows
eliding all checked arithmetic when accessing the fields of `VMOffsets`.
For testing and such this adds a new constructor as well from a new
`VMOffsetsFields` structure which is a clone of the old definition.
This should help speed up some profile hot spots I've been seeing, where
all the checked arithmetic on field sizes was slowing down the
various accessors during instantiation (which uses `VMOffsets` to
initialize various fields of the `VMContext`).
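A sketch of the shape of the change (field names are illustrative and
heavily abbreviated):

```rust
// Hypothetical input structure mirroring the old field-by-field layout.
pub struct VMOffsetsFields {
    pub num_imported_functions: u32,
    // ... remaining counts ...
}

pub struct VMOffsets {
    // Offsets precomputed once, with all checked arithmetic up front.
    imported_functions_begin: u32,
    memories_begin: u32,
}

impl VMOffsets {
    pub fn new(fields: VMOffsetsFields) -> Self {
        let imported_functions_begin = 0u32;
        // All overflow checks happen here, exactly once.
        let memories_begin = imported_functions_begin
            .checked_add(fields.num_imported_functions.checked_mul(8).unwrap())
            .unwrap();
        VMOffsets { imported_functions_begin, memories_begin }
    }

    /// Accessors on the hot path are now plain field reads.
    #[inline]
    pub fn vmctx_memories_begin(&self) -> u32 {
        self.memories_begin
    }
}
```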
Because there are instructions that are present in more than one ISA feature set, we need to see if any of the ISA requirements match before emitting. This change includes the `VPABSQ` instruction as an example, which is present in both `AVX512F` and `AVX512VL`.
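That is, the availability check is an any-of test rather than an all-of test,
roughly (the `InstructionSet` type here is a stand-in):

```rust
use std::collections::HashSet;

#[derive(PartialEq, Eq, Hash)]
enum InstructionSet {
    AVX512F,
    AVX512VL,
    // ...
}

// An instruction may be emitted when *any* of its listed feature sets is
// available; VPABSQ, for example, lists both AVX512F and AVX512VL.
fn isa_ok(required_any: &[InstructionSet], available: &HashSet<InstructionSet>) -> bool {
    required_any.iter().any(|set| available.contains(set))
}
```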
Looking at some profiles, these or their related functions were all
showing up, so this commit adds `#[inline]` to allow cross-crate
inlining by default.
This fixes an issue where, even when the wasmtime cache was disabled, we
were still calculating the sha256 of modules for the hash key. The hash
was then simply discarded!
* Remove `once-cell` dependency.
* Remove function address `BTreeMap` from `CompiledModule` in favor of binary
searching finished functions directly.
* Use `with_capacity` when populating `CompiledModule` finished functions and
trampolines.