Alex Crichton 99c6d7c083 components: Improve heuristic for splitting adapters (#4827)
This commit is a (second?) attempt at improving the generation of
adapter modules to avoid excessively large functions for fuzz-generated
inputs.

The first iteration of adapters simply translated an entire type
inline, per function. This proved problematic, however, since the size
of the adapter function was on the order of the overall size of a type,
which can be exponential for a type that is otherwise defined in linear
size.

The second iteration of adapters performed a split where memory-based
types were always translated with individual functions. The theory was
that once a type was memory-based it was large enough not to warrant
inline translation in the original function, and a separate outlined
function could be shared across callers to deduplicate portions of the
original giant function. This again proved problematic, however, since
the splitting heuristic was quite naive and didn't take large
stack-based types into account.

The third iteration, in this commit, replaces the previous system with
a similar but slightly more general one. Each adapter function now has
a concept of fuel which is decremented each time a layer of a type is
translated. When fuel runs out, further translations are deferred to
outlined functions. The fuel counter should provide a reasonable upper
bound on the size of each function, and the outlined functions can be
called from multiple places, deduplicating what would otherwise be a
massive function.
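
To make the shape of this concrete, here is a small, self-contained
sketch of the fuel idea. It is not the actual wasmtime implementation:
names such as `Generator`, `translate`, and `outline` are illustrative,
the fuel budget is an assumed constant, and the real adapter generator
operates on component-model types and emits WebAssembly rather than
strings.

// A hypothetical sketch of the fuel-based splitting heuristic described
// above; all names are illustrative.

const INITIAL_FUEL: usize = 16; // assumed per-function budget

enum Type {
    U32,
    Record(Vec<Type>),
}

struct Generator {
    outlined: Vec<String>, // bodies of outlined helper functions
}

impl Generator {
    // Translate `ty`, spending one unit of fuel per layer of the type.
    // When fuel runs out, the remaining subtree is deferred to an
    // outlined helper, keeping the current function bounded in size.
    fn translate(&mut self, ty: &Type, fuel: &mut usize, out: &mut String) {
        if *fuel == 0 {
            let helper = self.outline(ty);
            out.push_str(&format!("  call {helper}\n"));
            return;
        }
        *fuel -= 1;
        match ty {
            Type::U32 => out.push_str("  copy-u32\n"),
            Type::Record(fields) => {
                for field in fields {
                    self.translate(field, fuel, out);
                }
            }
        }
    }

    // Each outlined helper starts with a fresh fuel budget, so it is
    // itself bounded and may in turn outline further helpers. (The real
    // generator would also reuse a helper for repeated types instead of
    // emitting a new one each time.)
    fn outline(&mut self, ty: &Type) -> String {
        let name = format!("$outlined{}", self.outlined.len());
        self.outlined.push(String::new()); // reserve the slot for this helper
        let idx = self.outlined.len() - 1;
        let mut fuel = INITIAL_FUEL;
        let mut body = String::new();
        self.translate(ty, &mut fuel, &mut body);
        self.outlined[idx] = format!("(func {name}\n{body})");
        name
    }
}

fn main() {
    // A deeply nested (but linearly sized) type exercises the outlining path.
    let mut ty = Type::U32;
    for _ in 0..64 {
        ty = Type::Record(vec![ty]);
    }
    let mut gen = Generator { outlined: Vec::new() };
    let mut fuel = INITIAL_FUEL;
    let mut root = String::new();
    gen.translate(&ty, &mut fuel, &mut root);
    println!("root:\n{root}");
    println!("outlined helpers generated: {}", gen.outlined.len());
}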

This final iteration is another attempt at guaranteeing that an adapter
module is linear in size with respect to the type section of the
original module. Additionally, this iteration uniformly handles stack-
and memory-based translations, which means that stack-based
translations can't go wild in their function size and memory-based
translations may benefit slightly from at least a little bit of
inlining internally.

The immediate impact of this is that the `component_api` fuzzer seems
to be running at a faster rate than before. Otherwise #4825 is
sufficient to invalidate preexisting fuzz bugs, and this PR is
hopefully the final nail in the coffin preventing further timeouts from
cropping up for small inputs.

Closes #4816
2022-08-31 12:09:45 -05:00

wasmtime

A standalone runtime for WebAssembly

A Bytecode Alliance project


Guide | Contributing | Website | Chat

Installation

The Wasmtime CLI can be installed on Linux and macOS with a small install script:

curl https://wasmtime.dev/install.sh -sSf | bash

Windows or otherwise interested users can download installers and binaries directly from the GitHub Releases page.

Example

If you've got the Rust compiler installed then you can take some Rust source code:

fn main() {
    println!("Hello, world!");
}

and compile/run it with:

$ rustup target add wasm32-wasi
$ rustc hello.rs --target wasm32-wasi
$ wasmtime hello.wasm
Hello, world!

Features

  • Fast. Wasmtime is built on the optimizing Cranelift code generator to quickly generate high-quality machine code either at runtime or ahead of time. Wasmtime is optimized for efficient instantiation, low-overhead calls between the embedder and wasm, and scalability of concurrent instances (see the embedding sketch after this list).

  • Secure. Wasmtime's development is strongly focused on correctness and security. Building on top of Rust's runtime safety guarantees, each Wasmtime feature goes through careful review and consideration via an RFC process. Once features are designed and implemented, they undergo 24/7 fuzzing donated by Google's OSS Fuzz. As features stabilize they become part of a release, and when things go wrong we have a well-defined security policy in place to quickly mitigate and patch any issues. We follow best practices for defense-in-depth and integrate protections and mitigations for issues like Spectre. Finally, we're working to push the state-of-the-art by collaborating with academic researchers to formally verify critical parts of Wasmtime and Cranelift.

  • Configurable. Wasmtime uses sensible defaults, but can also be configured to provide more fine-grained control over things like CPU and memory consumption. Whether you want to run Wasmtime in a tiny environment or on massive servers with many concurrent instances, we've got you covered.

  • WASI. Wasmtime supports a rich set of APIs for interacting with the host environment through the WASI standard.

  • Standards Compliant. Wasmtime passes the official WebAssembly test suite, implements the official C API of wasm, and implements future proposals to WebAssembly as well. Wasmtime developers are intimately engaged with the WebAssembly standards process all along the way too.
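
As a taste of the embedding API mentioned above, here is a minimal Rust
sketch using the `wasmtime` crate (with `anyhow` for error handling).
The tiny `add` module is a made-up example, and exact method signatures
may differ slightly between wasmtime versions.

// A minimal embedding sketch: compile a module, instantiate it, and make a
// typed call from the host into wasm.
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // Compile a small module from WebAssembly text (the crate's default
    // `wat` feature enables text-format input).
    let module = Module::new(
        &engine,
        r#"(module
              (func (export "add") (param i32 i32) (result i32)
                local.get 0
                local.get 1
                i32.add))"#,
    )?;

    // A Store holds all instance state; the embedder picks the host data
    // type, here just ().
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;

    // Typed function handles give statically checked, low-overhead calls
    // between the embedder and wasm.
    let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
    println!("1 + 2 = {}", add.call(&mut store, (1, 2))?);
    Ok(())
}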

Language Support

You can use Wasmtime from a variety of different languages through embeddings of the implementation:

Documentation

📚 Read the Wasmtime guide here! 📚

The wasmtime guide is the best starting point to learn about what Wasmtime can do for you or to help answer your questions about Wasmtime. If you're interested in contributing to Wasmtime, it can also help you do that!


It's Wasmtime.
