Commit Graph

44 Commits

Author SHA1 Message Date
Nick Fitzgerald
edf7f9f2bb wasmtime: Add lots of logging for externrefs and table_ops fuzz target (#4583)
I essentially add these same logs back in every time I'm debugging something
related to this fuzz target or `externref`s in general. Probably like 5 times
I've added roughly these logs. We should just make them available whenever we
need them via `RUST_LOG=wasmtime_runtime=trace`.
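
For reference, a minimal sketch (assuming the `env_logger` crate as the `log` backend; not part of this commit) of surfacing these logs from an embedding or fuzz harness:

```rust
// Hypothetical harness: initialize a logger that honors RUST_LOG, then run
// with `RUST_LOG=wasmtime_runtime=trace` set in the environment.
fn main() {
    env_logger::init();

    // ... construct a wasmtime Engine/Store and exercise `externref`s or the
    // `table_ops` fuzz input; `wasmtime_runtime`'s trace logs now print.
}
```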

This also changes a couple `if let`s to `unwrap`s that are now infallible after
2022-08-02 15:06:44 -07:00
Nick Fitzgerald
46782b18c2 wasmtime: Implement fast Wasm stack walking (#4431)
* Always preserve frame pointers in Wasmtime

This allows us to efficiently and simply capture Wasm stacks without maintaining
and synchronizing any safety-critical side tables between the compiler and the
runtime.

* wasmtime: Implement fast Wasm stack walking

Why do we want Wasm stack walking to be fast? Because we capture stacks whenever
there is a trap and traps actually happen fairly frequently with short-lived
programs and WASI's `exit`.

Previously, we would rely on generating the system unwind info (e.g.
`.eh_frame`) and using the system unwinder (via the `backtrace` crate) to walk
the full stack and filter out any non-Wasm stack frames. This can,
unfortunately, be slow for two primary reasons:

1. The system unwinder is doing `O(all-kinds-of-frames)` work rather than
`O(wasm-frames)` work.

2. System unwind info and the system unwinder need to be much more general than
a purpose-built stack walker for Wasm needs to be. They have to handle any kind of
stack frame that any compiler might emit, whereas our Wasm frames are emitted by
Cranelift and always have frame pointers. This translates into implementation
complexity and general overhead. There can also be unnecessary-for-our-use-cases
global synchronization and locks involved, further slowing down stack walking in
the presence of multiple threads trying to capture stacks in parallel.

This commit introduces a purpose-built stack walker for traversing just our Wasm
frames. To find all the sequences of Wasm-to-Wasm stack frames, and ignore
non-Wasm stack frames, we keep a linked list of `(entry stack pointer, exit
frame pointer)` pairs. This linked list is maintained via Wasm-to-host and
host-to-Wasm trampolines. Within a sequence of Wasm-to-Wasm calls, we can use
frame pointers (which Cranelift preserves) to find the next older Wasm frame on
the stack, and we keep doing this until we reach the entry stack pointer,
meaning that the next older frame will be a host frame.
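
As a rough illustration (not the actual Wasmtime code, and assuming an x86-64-style frame layout where the saved frame pointer sits at `[fp]` and the return address at `[fp + 8]`), the walk within one contiguous sequence of Wasm frames looks something like this:

```rust
/// Sketch: visit the return addresses of one contiguous run of Wasm frames,
/// starting from the exit frame pointer recorded by the trampoline and
/// stopping at the entry stack pointer recorded when the host entered Wasm.
unsafe fn walk_wasm_frames(entry_sp: usize, exit_fp: usize, mut visit: impl FnMut(usize)) {
    let mut fp = exit_fp;
    while fp != entry_sp {
        // Return address is one word above the saved frame pointer.
        let return_addr = *((fp + 8) as *const usize);
        visit(return_addr);
        // The saved frame pointer points at the next older frame.
        fp = *(fp as *const usize);
    }
}
```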

The trampolines need to avoid a couple stumbling blocks. First, they need to be
compiled ahead of time, since we may not have access to a compiler at
runtime (e.g. if the `cranelift` feature is disabled) but still want to be able
to call functions that have already been compiled and get stack traces for those
functions. Usually this means we would compile the appropriate trampolines
inside `Module::new` and the compiled module object would hold the
trampolines. However, we *also* need to support calling host functions that are
wrapped into `wasmtime::Func`s and there doesn't exist *any* ahead-of-time
compiled module object to hold the appropriate trampolines:

```rust
// Define a host function.
let func_type = wasmtime::FuncType::new(
    vec![wasmtime::ValType::I32],
    vec![wasmtime::ValType::I32],
);
let func = Func::new(&mut store, func_type, |_, params, results| {
    // ...
    Ok(())
});

// Call that host function.
let mut results = vec![wasmtime::Val::I32(0)];
func.call(&[wasmtime::Val::I32(0)], &mut results)?;
```

Therefore, we define one host-to-Wasm trampoline and one Wasm-to-host trampoline
in assembly that work for all Wasm and host function signatures. These
trampolines are careful to only use volatile registers, avoid touching any
register that is an argument in the calling convention ABI, and tail call to the
target callee function. This allows forwarding any set of arguments and any
returns to and from the callee, while also allowing us to maintain our linked
list of Wasm stack and frame pointers before transferring control to the
callee. These trampolines are not used in Wasm-to-Wasm calls, only when crossing
the host-Wasm boundary, so they do not impose overhead on regular calls. (And if
using one trampoline for all host-Wasm boundary crossing ever breaks branch
prediction enough in the CPU to become any kind of bottleneck, we can do fun
things like have multiple copies of the same trampoline and choose a random copy
for each function, sharding the functions across branch predictor entries.)

Finally, this commit also ends the use of a synthetic `Module` and allocating a
stubbed out `VMContext` for host functions. Instead, we define a
`VMHostFuncContext` with its own magic value, similar to `VMComponentContext`,
specifically for host functions.
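
Conceptually (field names below are illustrative, not the actual layout), `VMHostFuncContext` is a context that begins with its own magic value so runtime code can tell it apart from a plain `VMContext` before downcasting:

```rust
// Sketch only: a host-function context distinguished by a leading magic value.
#[repr(C)]
struct HostFuncContextSketch {
    magic: u32, // a constant distinct from the regular VMContext magic
    // ... host-function state: the closure to call, its trampoline, signature, etc.
}
```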

## Benchmarks

### Traps and Stack Traces

Large improvements to taking stack traces on traps, ranging from shaving off 64%
to 99.95% of the time it used to take.

<details>

```
multi-threaded-traps/0  time:   [2.5686 us 2.5808 us 2.5934 us]
                        thrpt:  [0.0000  elem/s 0.0000  elem/s 0.0000  elem/s]
                 change:
                        time:   [-85.419% -85.153% -84.869%] (p = 0.00 < 0.05)
                        thrpt:  [+560.90% +573.56% +585.84%]
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  4 (4.00%) high mild
  4 (4.00%) high severe
multi-threaded-traps/1  time:   [2.9021 us 2.9167 us 2.9322 us]
                        thrpt:  [341.04 Kelem/s 342.86 Kelem/s 344.58 Kelem/s]
                 change:
                        time:   [-91.455% -91.294% -91.096%] (p = 0.00 < 0.05)
                        thrpt:  [+1023.1% +1048.6% +1070.3%]
                        Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  1 (1.00%) high mild
  5 (5.00%) high severe
multi-threaded-traps/2  time:   [2.9996 us 3.0145 us 3.0295 us]
                        thrpt:  [660.18 Kelem/s 663.47 Kelem/s 666.76 Kelem/s]
                 change:
                        time:   [-94.040% -93.910% -93.762%] (p = 0.00 < 0.05)
                        thrpt:  [+1503.1% +1542.0% +1578.0%]
                        Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
  5 (5.00%) high severe
multi-threaded-traps/4  time:   [5.5768 us 5.6052 us 5.6364 us]
                        thrpt:  [709.68 Kelem/s 713.63 Kelem/s 717.25 Kelem/s]
                 change:
                        time:   [-93.193% -93.121% -93.052%] (p = 0.00 < 0.05)
                        thrpt:  [+1339.2% +1353.6% +1369.1%]
                        Performance has improved.
multi-threaded-traps/8  time:   [8.6408 us 9.1212 us 9.5438 us]
                        thrpt:  [838.24 Kelem/s 877.08 Kelem/s 925.84 Kelem/s]
                 change:
                        time:   [-94.754% -94.473% -94.202%] (p = 0.00 < 0.05)
                        thrpt:  [+1624.7% +1709.2% +1806.1%]
                        Performance has improved.
multi-threaded-traps/16 time:   [10.152 us 10.840 us 11.545 us]
                        thrpt:  [1.3858 Melem/s 1.4760 Melem/s 1.5761 Melem/s]
                 change:
                        time:   [-97.042% -96.823% -96.577%] (p = 0.00 < 0.05)
                        thrpt:  [+2821.5% +3048.1% +3281.1%]
                        Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high mild

many-modules-registered-traps/1
                        time:   [2.6278 us 2.6361 us 2.6447 us]
                        thrpt:  [378.11 Kelem/s 379.35 Kelem/s 380.55 Kelem/s]
                 change:
                        time:   [-85.311% -85.108% -84.909%] (p = 0.00 < 0.05)
                        thrpt:  [+562.65% +571.51% +580.76%]
                        Performance has improved.
Found 9 outliers among 100 measurements (9.00%)
  3 (3.00%) high mild
  6 (6.00%) high severe
many-modules-registered-traps/8
                        time:   [2.6294 us 2.6460 us 2.6623 us]
                        thrpt:  [3.0049 Melem/s 3.0235 Melem/s 3.0425 Melem/s]
                 change:
                        time:   [-85.895% -85.485% -85.022%] (p = 0.00 < 0.05)
                        thrpt:  [+567.63% +588.95% +608.95%]
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  3 (3.00%) high mild
  5 (5.00%) high severe
many-modules-registered-traps/64
                        time:   [2.6218 us 2.6329 us 2.6452 us]
                        thrpt:  [24.195 Melem/s 24.308 Melem/s 24.411 Melem/s]
                 change:
                        time:   [-93.629% -93.551% -93.470%] (p = 0.00 < 0.05)
                        thrpt:  [+1431.4% +1450.6% +1469.5%]
                        Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
  3 (3.00%) high mild
many-modules-registered-traps/512
                        time:   [2.6569 us 2.6737 us 2.6923 us]
                        thrpt:  [190.17 Melem/s 191.50 Melem/s 192.71 Melem/s]
                 change:
                        time:   [-99.277% -99.268% -99.260%] (p = 0.00 < 0.05)
                        thrpt:  [+13417% +13566% +13731%]
                        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  4 (4.00%) high mild
many-modules-registered-traps/4096
                        time:   [2.7258 us 2.7390 us 2.7535 us]
                        thrpt:  [1.4876 Gelem/s 1.4955 Gelem/s 1.5027 Gelem/s]
                 change:
                        time:   [-99.956% -99.955% -99.955%] (p = 0.00 < 0.05)
                        thrpt:  [+221417% +223380% +224881%]
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  1 (1.00%) high mild
  1 (1.00%) high severe

many-stack-frames-traps/1
                        time:   [1.4658 us 1.4719 us 1.4784 us]
                        thrpt:  [676.39 Kelem/s 679.38 Kelem/s 682.21 Kelem/s]
                 change:
                        time:   [-90.368% -89.947% -89.586%] (p = 0.00 < 0.05)
                        thrpt:  [+860.23% +894.72% +938.21%]
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  5 (5.00%) high mild
  3 (3.00%) high severe
many-stack-frames-traps/8
                        time:   [2.4772 us 2.4870 us 2.4973 us]
                        thrpt:  [3.2034 Melem/s 3.2167 Melem/s 3.2294 Melem/s]
                 change:
                        time:   [-85.550% -85.370% -85.199%] (p = 0.00 < 0.05)
                        thrpt:  [+575.65% +583.51% +592.03%]
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  4 (4.00%) high mild
  4 (4.00%) high severe
many-stack-frames-traps/64
                        time:   [10.109 us 10.171 us 10.236 us]
                        thrpt:  [6.2525 Melem/s 6.2925 Melem/s 6.3309 Melem/s]
                 change:
                        time:   [-78.144% -77.797% -77.336%] (p = 0.00 < 0.05)
                        thrpt:  [+341.22% +350.38% +357.55%]
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  5 (5.00%) high mild
  2 (2.00%) high severe
many-stack-frames-traps/512
                        time:   [126.16 us 126.54 us 126.96 us]
                        thrpt:  [4.0329 Melem/s 4.0461 Melem/s 4.0583 Melem/s]
                 change:
                        time:   [-65.364% -64.933% -64.453%] (p = 0.00 < 0.05)
                        thrpt:  [+181.32% +185.17% +188.71%]
                        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  4 (4.00%) high severe
```

</details>

### Calls

There is, however, a small regression in raw Wasm-to-host and host-to-Wasm call
performance due to the new trampolines. It seems to be on the order of about 2-10
nanoseconds per call, depending on the benchmark.

I believe this regression is ultimately acceptable because

1. this overhead will be vastly dominated by whatever work a non-nop callee
actually does,

2. we will need these trampolines, or something like them, when implementing the
Wasm exceptions proposal to do things like translate Wasm's exceptions into
Rust's `Result`s,

3. and because the performance improvements to trapping and capturing stack
traces are of a much larger magnitude than these call regressions.

<details>

```
sync/no-hook/host-to-wasm - typed - nop
                        time:   [28.683 ns 28.757 ns 28.844 ns]
                        change: [+16.472% +17.183% +17.904%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
  1 (1.00%) low mild
  4 (4.00%) high mild
  5 (5.00%) high severe
sync/no-hook/host-to-wasm - untyped - nop
                        time:   [42.515 ns 42.652 ns 42.841 ns]
                        change: [+12.371% +14.614% +17.462%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
  1 (1.00%) high mild
  10 (10.00%) high severe
sync/no-hook/host-to-wasm - unchecked - nop
                        time:   [33.936 ns 34.052 ns 34.179 ns]
                        change: [+25.478% +26.938% +28.369%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
  7 (7.00%) high mild
  2 (2.00%) high severe
sync/no-hook/host-to-wasm - typed - nop-params-and-results
                        time:   [34.290 ns 34.388 ns 34.502 ns]
                        change: [+40.802% +42.706% +44.526%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
  5 (5.00%) high mild
  8 (8.00%) high severe
sync/no-hook/host-to-wasm - untyped - nop-params-and-results
                        time:   [62.546 ns 62.721 ns 62.919 ns]
                        change: [+2.5014% +3.6319% +4.8078%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
  2 (2.00%) high mild
  10 (10.00%) high severe
sync/no-hook/host-to-wasm - unchecked - nop-params-and-results
                        time:   [42.609 ns 42.710 ns 42.831 ns]
                        change: [+20.966% +22.282% +23.475%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
  4 (4.00%) high mild
  7 (7.00%) high severe

sync/hook-sync/host-to-wasm - typed - nop
                        time:   [29.546 ns 29.675 ns 29.818 ns]
                        change: [+20.693% +21.794% +22.836%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 5 outliers among 100 measurements (5.00%)
  3 (3.00%) high mild
  2 (2.00%) high severe
sync/hook-sync/host-to-wasm - untyped - nop
                        time:   [45.448 ns 45.699 ns 45.961 ns]
                        change: [+17.204% +18.514% +19.590%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
  4 (4.00%) high mild
  10 (10.00%) high severe
sync/hook-sync/host-to-wasm - unchecked - nop
                        time:   [34.334 ns 34.437 ns 34.558 ns]
                        change: [+23.225% +24.477% +25.886%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
  5 (5.00%) high mild
  7 (7.00%) high severe
sync/hook-sync/host-to-wasm - typed - nop-params-and-results
                        time:   [36.594 ns 36.763 ns 36.974 ns]
                        change: [+41.967% +47.261% +52.086%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
  3 (3.00%) high mild
  9 (9.00%) high severe
sync/hook-sync/host-to-wasm - untyped - nop-params-and-results
                        time:   [63.541 ns 63.831 ns 64.194 ns]
                        change: [-4.4337% -0.6855% +2.7134%] (p = 0.73 > 0.05)
                        No change in performance detected.
Found 8 outliers among 100 measurements (8.00%)
  6 (6.00%) high mild
  2 (2.00%) high severe
sync/hook-sync/host-to-wasm - unchecked - nop-params-and-results
                        time:   [43.968 ns 44.169 ns 44.437 ns]
                        change: [+18.772% +21.802% +24.623%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
  3 (3.00%) high mild
  12 (12.00%) high severe

async/no-hook/host-to-wasm - typed - nop
                        time:   [4.9612 us 4.9743 us 4.9889 us]
                        change: [+9.9493% +11.911% +13.502%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
  6 (6.00%) high mild
  4 (4.00%) high severe
async/no-hook/host-to-wasm - untyped - nop
                        time:   [5.0030 us 5.0211 us 5.0439 us]
                        change: [+10.841% +11.873% +12.977%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
  3 (3.00%) high mild
  7 (7.00%) high severe
async/no-hook/host-to-wasm - typed - nop-params-and-results
                        time:   [4.9273 us 4.9468 us 4.9700 us]
                        change: [+4.7381% +6.8445% +8.8238%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
  5 (5.00%) high mild
  9 (9.00%) high severe
async/no-hook/host-to-wasm - untyped - nop-params-and-results
                        time:   [5.1151 us 5.1338 us 5.1555 us]
                        change: [+9.5335% +11.290% +13.044%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
  3 (3.00%) high mild
  13 (13.00%) high severe

async/hook-sync/host-to-wasm - typed - nop
                        time:   [4.9330 us 4.9394 us 4.9467 us]
                        change: [+10.046% +11.038% +12.035%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
  5 (5.00%) high mild
  7 (7.00%) high severe
async/hook-sync/host-to-wasm - untyped - nop
                        time:   [5.0073 us 5.0183 us 5.0310 us]
                        change: [+9.3828% +10.565% +11.752%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
  3 (3.00%) high mild
  5 (5.00%) high severe
async/hook-sync/host-to-wasm - typed - nop-params-and-results
                        time:   [4.9610 us 4.9839 us 5.0097 us]
                        change: [+9.0857% +11.513% +14.359%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
  7 (7.00%) high mild
  6 (6.00%) high severe
async/hook-sync/host-to-wasm - untyped - nop-params-and-results
                        time:   [5.0995 us 5.1272 us 5.1617 us]
                        change: [+9.3600% +11.506% +13.809%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
  6 (6.00%) high mild
  4 (4.00%) high severe

async-pool/no-hook/host-to-wasm - typed - nop
                        time:   [2.4242 us 2.4316 us 2.4396 us]
                        change: [+7.8756% +8.8803% +9.8346%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
  5 (5.00%) high mild
  3 (3.00%) high severe
async-pool/no-hook/host-to-wasm - untyped - nop
                        time:   [2.5102 us 2.5155 us 2.5210 us]
                        change: [+12.130% +13.194% +14.270%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
  4 (4.00%) high mild
  8 (8.00%) high severe
async-pool/no-hook/host-to-wasm - typed - nop-params-and-results
                        time:   [2.4203 us 2.4310 us 2.4440 us]
                        change: [+4.0380% +6.3623% +8.7534%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
  5 (5.00%) high mild
  9 (9.00%) high severe
async-pool/no-hook/host-to-wasm - untyped - nop-params-and-results
                        time:   [2.5501 us 2.5593 us 2.5700 us]
                        change: [+8.8802% +10.976% +12.937%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
  5 (5.00%) high mild
  11 (11.00%) high severe

async-pool/hook-sync/host-to-wasm - typed - nop
                        time:   [2.4135 us 2.4190 us 2.4254 us]
                        change: [+8.3640% +9.3774% +10.435%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
  6 (6.00%) high mild
  5 (5.00%) high severe
async-pool/hook-sync/host-to-wasm - untyped - nop
                        time:   [2.5172 us 2.5248 us 2.5357 us]
                        change: [+11.543% +12.750% +13.982%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
  1 (1.00%) high mild
  7 (7.00%) high severe
async-pool/hook-sync/host-to-wasm - typed - nop-params-and-results
                        time:   [2.4214 us 2.4353 us 2.4532 us]
                        change: [+1.5158% +5.0872% +8.6765%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
  2 (2.00%) high mild
  13 (13.00%) high severe
async-pool/hook-sync/host-to-wasm - untyped - nop-params-and-results
                        time:   [2.5499 us 2.5607 us 2.5748 us]
                        change: [+10.146% +12.459% +14.919%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 18 outliers among 100 measurements (18.00%)
  3 (3.00%) high mild
  15 (15.00%) high severe

sync/no-hook/wasm-to-host - nop - typed
                        time:   [6.6135 ns 6.6288 ns 6.6452 ns]
                        change: [+37.927% +38.837% +39.869%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
  2 (2.00%) high mild
  5 (5.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - typed
                        time:   [15.930 ns 15.993 ns 16.067 ns]
                        change: [+3.9583% +5.6286% +7.2430%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
  11 (11.00%) high mild
  1 (1.00%) high severe
sync/no-hook/wasm-to-host - nop - untyped
                        time:   [20.596 ns 20.640 ns 20.690 ns]
                        change: [+4.3293% +5.2047% +6.0935%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
  5 (5.00%) high mild
  5 (5.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - untyped
                        time:   [42.659 ns 42.882 ns 43.159 ns]
                        change: [-2.1466% -0.5079% +1.2554%] (p = 0.58 > 0.05)
                        No change in performance detected.
Found 15 outliers among 100 measurements (15.00%)
  1 (1.00%) high mild
  14 (14.00%) high severe
sync/no-hook/wasm-to-host - nop - unchecked
                        time:   [10.671 ns 10.691 ns 10.713 ns]
                        change: [+83.911% +87.620% +92.062%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
  2 (2.00%) high mild
  7 (7.00%) high severe
sync/no-hook/wasm-to-host - nop-params-and-results - unchecked
                        time:   [11.136 ns 11.190 ns 11.263 ns]
                        change: [-29.719% -28.446% -27.029%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
  4 (4.00%) high mild
  10 (10.00%) high severe

sync/hook-sync/wasm-to-host - nop - typed
                        time:   [6.7964 ns 6.8087 ns 6.8226 ns]
                        change: [+21.531% +24.206% +27.331%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
  4 (4.00%) high mild
  10 (10.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - typed
                        time:   [15.865 ns 15.921 ns 15.985 ns]
                        change: [+4.8466% +6.3330% +7.8317%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
  3 (3.00%) high mild
  13 (13.00%) high severe
sync/hook-sync/wasm-to-host - nop - untyped
                        time:   [21.505 ns 21.587 ns 21.677 ns]
                        change: [+8.0908% +9.1943% +10.254%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
  4 (4.00%) high mild
  4 (4.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - untyped
                        time:   [44.018 ns 44.128 ns 44.261 ns]
                        change: [-1.4671% -0.0458% +1.2443%] (p = 0.94 > 0.05)
                        No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
  5 (5.00%) high mild
  9 (9.00%) high severe
sync/hook-sync/wasm-to-host - nop - unchecked
                        time:   [11.264 ns 11.326 ns 11.387 ns]
                        change: [+80.225% +81.659% +83.068%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
  3 (3.00%) high mild
  3 (3.00%) high severe
sync/hook-sync/wasm-to-host - nop-params-and-results - unchecked
                        time:   [11.816 ns 11.865 ns 11.920 ns]
                        change: [-29.152% -28.040% -26.957%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
  8 (8.00%) high mild
  6 (6.00%) high severe

async/no-hook/wasm-to-host - nop - typed
                        time:   [6.6221 ns 6.6385 ns 6.6569 ns]
                        change: [+43.618% +44.755% +45.965%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
  6 (6.00%) high mild
  7 (7.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - typed
                        time:   [15.884 ns 15.929 ns 15.983 ns]
                        change: [+3.5987% +5.2053% +6.7846%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
  3 (3.00%) high mild
  13 (13.00%) high severe
async/no-hook/wasm-to-host - nop - untyped
                        time:   [20.615 ns 20.702 ns 20.821 ns]
                        change: [+6.9799% +8.1212% +9.2819%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
  2 (2.00%) high mild
  8 (8.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - untyped
                        time:   [41.956 ns 42.207 ns 42.521 ns]
                        change: [-4.3057% -2.7730% -1.2428%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
  3 (3.00%) high mild
  11 (11.00%) high severe
async/no-hook/wasm-to-host - nop - unchecked
                        time:   [10.440 ns 10.474 ns 10.513 ns]
                        change: [+83.959% +85.826% +87.541%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
  5 (5.00%) high mild
  6 (6.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - unchecked
                        time:   [11.476 ns 11.512 ns 11.554 ns]
                        change: [-29.857% -28.383% -26.978%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
  1 (1.00%) low mild
  6 (6.00%) high mild
  5 (5.00%) high severe
async/no-hook/wasm-to-host - nop - async-typed
                        time:   [26.427 ns 26.478 ns 26.532 ns]
                        change: [+6.5730% +7.4676% +8.3983%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
  2 (2.00%) high mild
  7 (7.00%) high severe
async/no-hook/wasm-to-host - nop-params-and-results - async-typed
                        time:   [28.557 ns 28.693 ns 28.880 ns]
                        change: [+1.9099% +3.7332% +5.9731%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
  1 (1.00%) high mild
  14 (14.00%) high severe

async/hook-sync/wasm-to-host - nop - typed
                        time:   [6.7488 ns 6.7630 ns 6.7784 ns]
                        change: [+19.935% +22.080% +23.683%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
  4 (4.00%) high mild
  5 (5.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - typed
                        time:   [15.928 ns 16.031 ns 16.149 ns]
                        change: [+5.5188% +6.9567% +8.3839%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
  9 (9.00%) high mild
  2 (2.00%) high severe
async/hook-sync/wasm-to-host - nop - untyped
                        time:   [21.930 ns 22.114 ns 22.296 ns]
                        change: [+4.6674% +7.7588% +10.375%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
  3 (3.00%) high mild
  1 (1.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - untyped
                        time:   [42.684 ns 42.858 ns 43.081 ns]
                        change: [-5.2957% -3.4693% -1.6217%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
  2 (2.00%) high mild
  12 (12.00%) high severe
async/hook-sync/wasm-to-host - nop - unchecked
                        time:   [11.026 ns 11.053 ns 11.086 ns]
                        change: [+70.751% +72.378% +73.961%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
  5 (5.00%) high mild
  5 (5.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - unchecked
                        time:   [11.840 ns 11.900 ns 11.982 ns]
                        change: [-27.977% -26.584% -24.887%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 18 outliers among 100 measurements (18.00%)
  3 (3.00%) high mild
  15 (15.00%) high severe
async/hook-sync/wasm-to-host - nop - async-typed
                        time:   [27.601 ns 27.709 ns 27.882 ns]
                        change: [+8.1781% +9.1102% +10.030%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
  2 (2.00%) low mild
  3 (3.00%) high mild
  6 (6.00%) high severe
async/hook-sync/wasm-to-host - nop-params-and-results - async-typed
                        time:   [28.955 ns 29.174 ns 29.413 ns]
                        change: [+1.1226% +3.0366% +5.1126%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
  7 (7.00%) high mild
  6 (6.00%) high severe

async-pool/no-hook/wasm-to-host - nop - typed
                        time:   [6.5626 ns 6.5733 ns 6.5851 ns]
                        change: [+40.561% +42.307% +44.514%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
  5 (5.00%) high mild
  4 (4.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - typed
                        time:   [15.820 ns 15.886 ns 15.969 ns]
                        change: [+4.1044% +5.7928% +7.7122%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 17 outliers among 100 measurements (17.00%)
  4 (4.00%) high mild
  13 (13.00%) high severe
async-pool/no-hook/wasm-to-host - nop - untyped
                        time:   [20.481 ns 20.521 ns 20.566 ns]
                        change: [+6.7962% +7.6950% +8.7612%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
  6 (6.00%) high mild
  5 (5.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - untyped
                        time:   [41.834 ns 41.998 ns 42.189 ns]
                        change: [-3.8185% -2.2687% -0.7541%] (p = 0.01 < 0.05)
                        Change within noise threshold.
Found 13 outliers among 100 measurements (13.00%)
  3 (3.00%) high mild
  10 (10.00%) high severe
async-pool/no-hook/wasm-to-host - nop - unchecked
                        time:   [10.353 ns 10.380 ns 10.414 ns]
                        change: [+82.042% +84.591% +87.205%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
  4 (4.00%) high mild
  3 (3.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - unchecked
                        time:   [11.123 ns 11.168 ns 11.228 ns]
                        change: [-30.813% -29.285% -27.874%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
  11 (11.00%) high mild
  1 (1.00%) high severe
async-pool/no-hook/wasm-to-host - nop - async-typed
                        time:   [27.442 ns 27.528 ns 27.638 ns]
                        change: [+7.5215% +9.9795% +12.266%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 18 outliers among 100 measurements (18.00%)
  3 (3.00%) high mild
  15 (15.00%) high severe
async-pool/no-hook/wasm-to-host - nop-params-and-results - async-typed
                        time:   [29.014 ns 29.148 ns 29.312 ns]
                        change: [+2.0227% +3.4722% +4.9047%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
  6 (6.00%) high mild
  1 (1.00%) high severe

async-pool/hook-sync/wasm-to-host - nop - typed
                        time:   [6.7916 ns 6.8116 ns 6.8325 ns]
                        change: [+20.937% +22.050% +23.281%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
  5 (5.00%) high mild
  6 (6.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - typed
                        time:   [15.917 ns 15.975 ns 16.051 ns]
                        change: [+4.6404% +6.4217% +8.3075%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
  5 (5.00%) high mild
  11 (11.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - untyped
                        time:   [21.558 ns 21.612 ns 21.679 ns]
                        change: [+8.1158% +9.1409% +10.217%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
  2 (2.00%) high mild
  7 (7.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - untyped
                        time:   [42.475 ns 42.614 ns 42.775 ns]
                        change: [-6.3613% -4.4709% -2.7647%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 18 outliers among 100 measurements (18.00%)
  3 (3.00%) high mild
  15 (15.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - unchecked
                        time:   [11.150 ns 11.195 ns 11.247 ns]
                        change: [+74.424% +77.056% +79.811%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 14 outliers among 100 measurements (14.00%)
  3 (3.00%) high mild
  11 (11.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - unchecked
                        time:   [11.639 ns 11.695 ns 11.760 ns]
                        change: [-30.212% -29.023% -27.954%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 15 outliers among 100 measurements (15.00%)
  7 (7.00%) high mild
  8 (8.00%) high severe
async-pool/hook-sync/wasm-to-host - nop - async-typed
                        time:   [27.480 ns 27.712 ns 27.984 ns]
                        change: [+2.9764% +6.5061% +9.8914%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
  6 (6.00%) high mild
  2 (2.00%) high severe
async-pool/hook-sync/wasm-to-host - nop-params-and-results - async-typed
                        time:   [29.218 ns 29.380 ns 29.600 ns]
                        change: [+5.2283% +7.7247% +10.822%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 16 outliers among 100 measurements (16.00%)
  2 (2.00%) high mild
  14 (14.00%) high severe
```

</details>

* Add s390x support for frame pointer-based stack walking

* wasmtime: Allow `Caller::get_export` to get all exports

* fuzzing: Add a fuzz target to check that our stack traces are correct

We generate Wasm modules that keep track of their own stack as they call and
return between functions, and then we periodically check that if the host
captures a backtrace, it matches what the Wasm module has recorded.

* Remove VM offsets for `VMHostFuncContext` since it isn't used by JIT code

* Add doc comment with stack walking implementation notes

* Document the extra state that can be passed to `wasmtime_runtime::Backtrace` methods

* Add extensive comments for stack walking function

* Factor architecture-specific bits of stack walking out into modules

* Initialize store-related fields in a vmctx to null when there is no store yet

Rather than leaving them as uninitialized data.

* Use `set_callee` instead of manually setting the vmctx field

* Use a more informative compile error message for unsupported architectures

* Document unsafety of `prepare_host_to_wasm_trampoline`

* Use `bti c` instead of `hint #34` in inline aarch64 assembly

* Remove outdated TODO comment

* Remove setting of `last_wasm_exit_fp` in `set_jit_trap`

This is no longer needed as the value is plumbed through to the backtrace code
directly now.

* Only set the stack limit once, in the face of re-entrancy into Wasm

* Add comments for s390x-specific stack walking bits

* Use the helper macro for all libcalls

If we forget to use it, and then trigger a GC from the libcall, that means we
could miss stack frames when walking the stack, fail to find live GC refs, and
then get use after free bugs. Much less risky to always use the helper macro
that takes care of all of that for us.

* Use the `asm_sym!` macro in Wasm-to-libcall trampolines

This macro handles the macOS-specific underscore prefix stuff for us.

* wasmtime: add size and align to `externref` assertion error message

* Extend the `stacks` fuzzer to have host frames in between Wasm frames

This way we get one or more contiguous sequences of Wasm frames on the stack,
instead of exactly one.

* Add documentation for aarch64-specific backtrace helpers

* Clarify that we only support little-endian aarch64 in trampoline comment

* Use `.machine z13` in s390x assembly file

Since apparently our CI machines have pretty old assemblers that don't have
`.machine z14`. This should be fine though since these trampolines don't make
use of anything that is introduced in z14.

* Fix aarch64 build

* Fix macOS build

* Document the `asm_sym!` macro

* Add windows support to the `wasmtime-asm-macros` crate

* Add windows support to host<--->Wasm trampolines

* Fix trap handler build on windows

* Run `rustfmt` on s390x trampoline source file

* Temporarily disable some assertions about a trap's backtrace in the component model tests

Follow up to re-enable this and fix the associated issue:
https://github.com/bytecodealliance/wasmtime/issues/4535

* Refactor libcall definitions with less macros

This refactors the `libcall!` macro to use the
`foreach_builtin_function!` macro to define all of the trampolines.
Additionally, the macro surrounding each libcall itself is no longer
necessary, which avoids piling up too many macros.

* Use `VMOpaqueContext::from_vm_host_func_context` in `VMHostFuncContext::new`

* Move `backtrace` module to be submodule of `traphandlers`

This avoids making some things `pub(crate)` in `traphandlers` that really
shouldn't be.

* Fix macOS aarch64 build

* Use "i64" instead of "word" in aarch64-specific file

* Save/restore entry SP and exit FP/return pointer in the face of panicking imported host functions

Also clean up assertions surrounding our saved entry/exit registers.

* Put "typed" vs "untyped" in the same position of call benchmark names

Regardless if we are doing wasm-to-host or host-to-wasm

* Fix stacks test case generator build for new `wasm-encoder`

* Fix build for s390x

* Expand libcalls in s390x asm

* Disable more parts of component tests now that backtrace assertions are a bit tighter

* Remove assertion that can maybe fail on s390x

Co-authored-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Co-authored-by: Alex Crichton <alex@alexcrichton.com>
2022-07-28 15:46:14 -07:00
Nick Fitzgerald
35b750ab9a Implement std::fmt::Pointer for ExternRef (#4504) 2022-07-21 16:16:23 -07:00
Andrew Brown
2b52f47b83 Add shared memories (#4187)
* Add shared memories

This change adds the ability to use shared memories in Wasmtime when the
[threads proposal] is enabled. Shared memories are annotated as `shared`
in the WebAssembly syntax, e.g., `(memory 1 1 shared)`, and are
protected from concurrent access during `memory.size` and `memory.grow`.

[threads proposal]: https://github.com/WebAssembly/threads/blob/master/proposals/threads/Overview.md

In order to implement this in Wasmtime, there are two main cases to
cover:
    - a program may simply create a shared memory and possibly export it;
    this means that Wasmtime itself must be able to create shared
    memories
    - a user may create a shared memory externally and pass it in as an
    import during instantiation; this is the case when the program
    contains code like `(import "env" "memory" (memory 1 1
    shared))`--this case is handled by a new Wasmtime API
    type--`SharedMemory`

Because of the first case, this change allows any of the current
memory-creation mechanisms to work as-is. Wasmtime can still create
either static or dynamic memories in either on-demand or pooling modes,
and any of these memories can be considered shared. When shared, the
`Memory` runtime container will lock appropriately during `memory.size`
and `memory.grow` operations; since all memories use this container, it
is an ideal place for implementing the locking once and once only.

The second case is covered by the new `SharedMemory` structure. It uses
the same `Mmap` allocation under the hood as non-shared memories, but
allows the user to perform the allocation externally to Wasmtime and
share the memory across threads (via an `Arc`). The pointer address to
the actual memory is carefully wired through and owned by the
`SharedMemory` structure itself. This means that there are differing
views of where to access the pointer (i.e., `VMMemoryDefinition`): for
owned memories (the default), the `VMMemoryDefinition` is stored
directly by the `VMContext`; in the `SharedMemory` case, however, this
`VMContext` must point to this separate structure.

To ensure that the `VMContext` can always point to the correct
`VMMemoryDefinition`, this change alters the `VMContext` structure.
Since a `SharedMemory` owns its own `VMMemoryDefinition`, the
`defined_memories` table in the `VMContext` becomes a sequence of
pointers--in the shared memory case, they point to the
`VMMemoryDefinition` owned by the `SharedMemory` and in the owned memory
case (i.e., not shared) they point to `VMMemoryDefinition`s stored in a
new table, `owned_memories`.

This change adds an additional indirection (through the `*mut
VMMemoryDefinition` pointer) that could add overhead. Using an imported
memory as a proxy, we measured a 1-3% overhead of this approach on the
`pulldown-cmark` benchmark. To avoid this, Cranelift-generated code will
special-case the owned memory access (i.e., load a pointer directly to
the `owned_memories` entry) for `memory.size` so that only
shared memories (and imported memories, as before) incur the indirection
cost.
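
Putting the new `SharedMemory` API together, here's a hedged end-to-end example (exact signatures may differ slightly from what's sketched): create the shared memory externally and pass it in as an import during instantiation.

```rust
use wasmtime::*;

fn instantiate_with_shared_memory() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.wasm_threads(true);
    let engine = Engine::new(&config)?;

    // Shared memories must declare a maximum, mirroring `(memory 1 1 shared)`.
    let memory = SharedMemory::new(&engine, MemoryType::shared(1, 1))?;

    let module = Module::new(
        &engine,
        r#"(module (import "env" "memory" (memory 1 1 shared)))"#,
    )?;

    let mut store = Store::new(&engine, ());
    let _instance = Instance::new(&mut store, &module, &[Extern::SharedMemory(memory)])?;
    Ok(())
}
```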

* review: remove thread feature check

* review: swap wasmtime-types dependency for existing wasmtime-environ use

* review: remove unused VMMemoryUnion

* review: reword cross-engine error message

* review: improve tests

* review: refactor to separate prevent Memory <-> SharedMemory conversion

* review: into_shared_memory -> as_shared_memory

* review: remove commented out code

* review: limit shared min/max to 32 bits

* review: skip imported memories

* review: imported memories are not owned

* review: remove TODO

* review: document unsafe send + sync

* review: add limiter assertion

* review: remove TODO

* review: improve tests

* review: fix doc test

* fix: fixes based on discussion with Alex

This changes several key parts:
 - adds memory indexes to imports and exports
 - makes `VMMemoryDefinition::current_length` an atomic usize

* review: add `Extern::SharedMemory`

* review: remove TODO

* review: atomically load from VMMemoryDefinition in JIT-generated code

* review: add test probing the last available memory slot across threads

* fix: move assertion to new location due to rebase

* fix: doc link

* fix: add TODOs to c-api

* fix: broken doc link

* fix: modify pooling allocator messages in tests

* review: make owned_memory_index panic instead of returning an option

* review: clarify calculation of num_owned_memories

* review: move 'use' to top of file

* review: change '*const [u8]' to '*mut [u8]'

* review: remove TODO

* review: avoid hard-coding memory index

* review: remove 'preallocation' parameter from 'Memory::_new'

* fix: component model memory length

* review: check that shared memory plans are static

* review: ignore growth limits for shared memory

* review: improve atomic store comment

* review: add FIXME for memory growth failure

* review: add comment about absence of bounds-checked 'memory.size'

* review: make 'current_length()' doc comment more precise

* review: more comments related to memory.size non-determinism

* review: make 'vmmemory' unreachable for shared memory

* review: move code around

* review: thread plan through to 'wrap()'

* review: disallow shared memory allocation with the pooling allocator
2022-06-08 12:13:40 -05:00
Alex Crichton
9f5f978baa Fix double-counting imports in VMOffsets calculations (#4209)
* Fix double-counting imports in `VMOffsets` calculations

This fixes an oversight in the initial creation of `VMOffsets` for a
module to avoid double-counting imported globals, tables, and memories
for calculating the size of the `VMContext`. Prior to this PR imported
items are accidentally also counted as defined items for sizing
calculations, meaning that when a memory is imported but not defined, for
example, the `VMContext` will have space for an inline
`VMMemoryDefinition` when it doesn't need to.

Auditing everywhere this relates to, it appears that the only issue from
this mistake is that `VMContext` is a bit larger than it would otherwise
need to be. Extra slots are uninitialized memory but nothing in Wasmtime
ever actually accesses the memory either, so it should be harmless to
have extra space here. Nevertheless it seems better to shrink the size
as much as possible to avoid wasting space where we can.
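
In other words (illustrative names, not the actual `VMOffsets` fields), the sizing should use defined counts, i.e. totals minus imports:

```rust
// Sketch: only defined (non-imported) memories get inline space in VMContext.
struct CountsSketch {
    num_imported_memories: u32,
    num_memories: u32, // total = imported + defined
}

impl CountsSketch {
    fn num_defined_memories(&self) -> u32 {
        self.num_memories - self.num_imported_memories
    }
}
```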

* Fix tests
2022-06-02 13:39:38 -05:00
Pat Hickey
bffce37050 make backtrace collection a Config field rather than a cargo feature (#4183)
* sorta working in runtime

* wasmtime-runtime: get rid of wasm-backtrace feature

* wasmtime: factor to make backtraces recording optional. not configurable yet

* get rid of wasm-backtrace features

* trap tests: now a Trap optionally contains backtrace

* eliminate wasm-backtrace feature

* code review fixes

* ci: no more wasm-backtrace feature

* c_api: backtraces always enabled

* config: unwind required by backtraces and ref types

* plumbed

* test that disabling backtraces works

* code review comments

* fuzzing generator: wasm_backtrace is a runtime config now

* doc fix
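
To illustrate the end result of this commit, a hedged example of the new `Config` knob (the method name `wasm_backtrace` follows the wasmtime `Config` API):

```rust
use wasmtime::{Config, Engine};

fn engine_without_backtraces() -> anyhow::Result<Engine> {
    let mut config = Config::new();
    // Don't capture Wasm backtraces when a trap occurs.
    config.wasm_backtrace(false);
    Engine::new(&config)
}
```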
2022-05-25 12:25:50 -07:00
Alex Crichton
90791a0e32 Reduce contention on the global module rwlock (#4041)
* Reduce contention on the global module rwlock

This commit intends to close #4025 by reducing contention on the global
rwlock Wasmtime has for module information during instantiation and
dropping a store. Currently registration of a module into this global
map happens during instantiation, but this can be a hot path as
embeddings may want to, in parallel, instantiate modules.

Instead this switches to a strategy of inserting into the global module
map when a `Module` is created and then removing it from the map when
the `Module` is dropped. Registration in a `Store` now preserves the
entire `Module` within the store as opposed to trying to only save it
piecemeal. In reality the only piece that wasn't saved within a store
was the `TypeTables` which was pretty inconsequential for core wasm
modules anyway.

This means that instantiation should now clone a singular `Arc` into a
`Store` per `Module` (previously it cloned two) with zero management on
the global rwlock as that happened at `Module` creation time.
Additionally dropping a `Store` again involves zero rwlock management
and only a single `Arc` drop per-instantiated module (previously it was
two).

In the process of doing this I also went ahead and removed the
`Module::new_with_name` API. This has been difficult to support
historically with various variations on the internals of `ModuleInner`
because it involves mutating a `Module` after it's been created. My hope
is that this API is pretty rarely used and/or isn't super important, so
it's ok to remove.

Finally this change removes some internal `Arc` layerings that are no
longer necessary, attempting to use either `T` or `&T` where possible
without dealing with the overhead of an `Arc`.

Closes #4025
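
Schematically (types below are illustrative stand-ins, not Wasmtime's internals), the new strategy writes to the global map once at `Module` creation and once at drop, so instantiation itself never touches the rwlock:

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

#[derive(Default)]
struct GlobalRegistry {
    modules: RwLock<BTreeMap<usize, Arc<str>>>,
}

struct ModuleSketch {
    key: usize,
    registry: Arc<GlobalRegistry>,
}

impl ModuleSketch {
    fn new(registry: Arc<GlobalRegistry>, key: usize, code: Arc<str>) -> Self {
        // Register at creation time...
        registry.modules.write().unwrap().insert(key, code);
        ModuleSketch { key, registry }
    }
}

impl Drop for ModuleSketch {
    fn drop(&mut self) {
        // ...and deregister only when the module itself is dropped.
        self.registry.modules.write().unwrap().remove(&self.key);
    }
}
```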

* Move back to a `BTreeMap` in `ModuleRegistry`
2022-04-19 15:13:47 -05:00
Alex Crichton
3f9bff17c8 Support disabling backtraces at compile time (#3932)
* Support disabling backtraces at compile time

This commit adds support to Wasmtime to disable, at compile time, the
gathering of backtraces on traps. The `wasmtime` crate now sports a
`wasm-backtrace` feature which, when disabled, will mean that backtraces
are never collected at compile time nor are unwinding tables inserted
into compiled objects.

The motivation for this commit stems from the fact that generating a
backtrace is quite a slow operation. Currently backtrace generation is
done with libunwind and `_Unwind_Backtrace` typically found in glibc or
other system libraries. When thousands of modules are loaded into the
same process though this means that the initial backtrace can take
nearly half a second and all subsequent backtraces can take upwards of
hundreds of milliseconds. Relative to all other operations in Wasmtime
this is extremely expensive at this time. In the future we'd like to
implement a more performant backtrace scheme but such an implementation
would require coordination with Cranelift and is a big chunk of work
that may take some time, so in the meantime if embedders don't need a
backtrace they can still use this option to disable backtraces at
compile time and avoid the performance pitfalls of collecting
backtraces.

In general I tried to originally make this a runtime configuration
option but ended up opting for a compile-time option because `Trap::new`
otherwise has no arguments and always captures a backtrace. By making
this a compile-time option it was possible to configure, statically, the
behavior of `Trap::new`. Additionally I also tried to minimize the
amount of `#[cfg]` necessary by largely only having it at the producer
and consumer sites.

Also a noteworthy restriction of this implementation is that if
backtrace support is disabled at compile time then reference types
support will be unconditionally disabled at runtime. With backtrace
support disabled there's no way to trace the stack of wasm frames which
means that GC can't happen given our current implementation.
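
A rough sketch (not the exact Wasmtime code; assumes the `backtrace` crate) of the compile-time switch: with the `wasm-backtrace` feature off, trap construction simply skips capturing a backtrace.

```rust
// Sketch: the backtrace field and its capture exist only when the
// `wasm-backtrace` feature is enabled at compile time.
struct TrapSketch {
    message: String,
    #[cfg(feature = "wasm-backtrace")]
    backtrace: backtrace::Backtrace,
}

impl TrapSketch {
    fn new(message: impl Into<String>) -> Self {
        TrapSketch {
            message: message.into(),
            #[cfg(feature = "wasm-backtrace")]
            backtrace: backtrace::Backtrace::new(),
        }
    }
}
```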

* Always enable backtraces for the C API
2022-03-16 09:18:16 -05:00
Alex Crichton
2a6969d2bd Shrink the size of the anyfunc table in VMContext (#3850)
* Shrink the size of the anyfunc table in `VMContext`

This commit shrinks the size of the `VMCallerCheckedAnyfunc` table
allocated into a `VMContext` to be the size of the number of "escaped"
functions in a module rather than the number of functions in a module.
Escaped functions include exports, table elements, etc, and are
typically an order of magnitude smaller than the number of functions in
general. This should greatly shrink the `VMContext` for some modules
which while we aren't necessarily having any problems with that today
shouldn't cause any problems in the future.

The original motivation for this was that this came up during the recent
lazy-table-initialization work and while it no longer has a direct
performance benefit since tables aren't initialized at all on
instantiation it should still improve long-running instances
theoretically with smaller `VMContext` allocations as well as better
locality between anyfuncs.

* Fix some tests

* Remove redundant hash set

* Use a helper for pushing function type information

* Use a more descriptive `is_escaping` method

* Clarify a comment

* Fix condition
2022-02-28 10:11:04 -06:00
Alex Crichton
b438617e12 Further minor optimizations to instantiation (#3791)
* Shrink the size of `FuncData`

Before this commit on a 64-bit system the `FuncData` type had a size of
88 bytes and after this commit it has a size of 32 bytes. A `FuncData`
is required for all host functions in a store, including those inserted
from a `Linker` into a store used during linking. This means that
instantiation ends up creating a nontrivial number of these types and
pushing them into the store. Looking at some profiles there were some
surprisingly expensive movements of `FuncData` from the stack to a
vector for moves-by-value generated by Rust. Shrinking this type enables
more efficient code to be generated and additionally means less storage
is needed in a store's function array.

For instantiating the spidermonkey and rustpython modules this improves
instantiation by 10% since they each import a fair number of host
functions and the speedup here is relative to the number of items
imported.

* Use `ptr::copy_nonoverlapping` during initialization

Previously `ptr::copy` was used for copying imports into place, which
translates to `memmove`, but `ptr::copy_nonoverlapping` can be used here
since it's statically known these areas don't overlap. While this
doesn't end up having a performance difference it's something I kept
noticing while looking at the disassembly of `initialize_vmcontext` so I
figured I'd go ahead and implement.
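
Concretely, the change amounts to something like this (a sketch, not the actual `initialize_vmcontext` code):

```rust
/// Copy imports into the VMContext-owned array; src and dst never overlap.
unsafe fn copy_imports(src: *const usize, dst: *mut usize, len: usize) {
    // Previously: std::ptr::copy(src, dst, len); // memmove
    std::ptr::copy_nonoverlapping(src, dst, len); // memcpy
}
```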

* Indirect shared signature ids in the VMContext

This commit is a small improvement for the instantiation time of modules
by avoiding copying a list of `VMSharedSignatureIndex` entries into each
`VMContext`, instead building one inside of a module and sharing that
amongst all instances. This involves less lookups at instantiation time
and less movement of data during instantiation. The downside is that
type-checks on `call_indirect` now involve an additionally load, but I'm
assuming that these are somewhat pessimized enough as-is that the
runtime impact won't be much there.

For instantiation performance this is a 5-10% win with
rustpython/spidermonkey instantiation. This should also reduce the size of
each `VMContext` for an instantiation since signatures are no longer
stored inline but shared amongst all instances with one module.

Note that one subtle change here is that the array of
`VMSharedSignatureIndex` was previously indexed by `TypeIndex`, and now
it's indexed by `SignatureIndex`, which is a deduplicated form of
`TypeIndex`. This is done because we already had a list of those lying
around in `Module`, so it was easier to reuse that than to build a
separate array and store it somewhere.
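
A hypothetical sketch of the indirection (simplified types, not the real
`VMContext` layout): each instance points at one signature-id array owned
by its module instead of carrying a per-instance copy.

```rust
use std::sync::Arc;

type VMSharedSignatureIndex = u32;

struct Module {
    // Indexed by the module's deduplicated signature index.
    shared_signatures: Arc<[VMSharedSignatureIndex]>,
}

struct Instance {
    // One extra load on `call_indirect` type checks, but no per-instance copy.
    shared_signatures: Arc<[VMSharedSignatureIndex]>,
}

fn instantiate(module: &Module) -> Instance {
    Instance { shared_signatures: module.shared_signatures.clone() }
}

fn main() {
    let module = Module { shared_signatures: Arc::from(vec![7, 9, 11]) };
    let instance = instantiate(&module);
    assert_eq!(instance.shared_signatures[1], 9);
}
```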

* Reserve space in `Store<T>` with `InstancePre`

This commit updates the instantiation process to reserve space in a
`Store<T>` for the functions that an `InstancePre<T>`, as part of
instantiation, will insert into it. Using an `InstancePre<T>` to
instantiate allows pre-computing the number of host functions that will
be inserted into a store, and by pre-reserving space we can avoid costly
reallocations during instantiation by ensuring the function vector has
enough space to fit everything during the instantiation process.

Overall this makes instantiation of rustpython/spidermonkey about 8%
faster locally.
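
A hypothetical sketch of the pre-reservation (stand-in types, not the real
`Store`/`InstancePre` definitions): since the number of host functions is
known up front, one `reserve` call replaces repeated vector growth.

```rust
struct HostFunc; // stand-in for the store's per-function data

struct Store {
    funcs: Vec<HostFunc>,
}

struct InstancePre {
    host_funcs: usize,
}

impl InstancePre {
    fn instantiate(&self, store: &mut Store) {
        // One allocation up front instead of repeated reallocation while
        // pushing functions during instantiation.
        store.funcs.reserve(self.host_funcs);
        for _ in 0..self.host_funcs {
            store.funcs.push(HostFunc);
        }
    }
}

fn main() {
    let mut store = Store { funcs: Vec::new() };
    InstancePre { host_funcs: 128 }.instantiate(&mut store);
    assert!(store.funcs.capacity() >= 128);
}
```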

* Fix tests

* Use checked arithmetic
2022-02-11 09:55:08 -06:00
Alex Crichton
2f494240f8 Lazily allocate the bump-alloc chunk in the externref table (#3739)
This commit updates the allocation of a `VMExternRefActivationsTable`
structure to perform zero malloc memory allocations. Previously it would
allocate a page-sized `chunk` plus some space in hash sets for future
insertions. The main trick implemented here is that the fast chunk
allocation is only allocated and configured during the slow path after
the first GC.

The motivation for this PR is that given our recent work to further
refine and optimize the instantiation process this allocation started to
show up in a nontrivial fashion. Most modules today never touch this
table anyway as almost none of them use reference types, so the time
spent allocating and deallocating the table per-store was largely wasted
time.

Concretely on a microbenchmark this PR speeds up instantiation of a
module with one function by 30%, decreasing the instantiation cost from
1.8us to 1.2us. Overall a pretty minor win but when the instantiation
times we're measuring start being in the single-digit microseconds this
win ends up getting magnified!
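
A hypothetical sketch of the deferral (simplified, not the actual
`VMExternRefActivationsTable`): creating the table allocates nothing, and
the page-sized bump chunk only appears on the slow path.

```rust
// Simplified stand-in for the activations table's bump chunk.
struct ActivationsTable {
    chunk: Vec<usize>, // empty until first needed
}

impl ActivationsTable {
    fn new() -> Self {
        // Zero allocations here; most stores never touch reference types.
        ActivationsTable { chunk: Vec::new() }
    }

    fn insert_slow(&mut self, value: usize) {
        if self.chunk.capacity() == 0 {
            // ... a GC would happen here on the real slow path ...
            self.chunk.reserve(4096 / std::mem::size_of::<usize>());
        }
        self.chunk.push(value);
    }
}

fn main() {
    let mut table = ActivationsTable::new();
    table.insert_slow(0xdead_beef);
    assert!(table.chunk.capacity() > 0);
}
```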
2022-01-28 16:10:05 -06:00
Nick Fitzgerald
cbc6f6071f Fix a debug assertion in externref garbage collections
When we GC, we assert the invariant that all `externref`s we find on the stack
have a corresponding entry in the `VMExternRefActivationsTable`. However, we
also might be in code that is in the process of fixing up this invariant and
adding an entry to the table, but the table's bump chunk is full, and so we do a
GC and then add the entry into the table. This will ultimately maintain our
desired invariant, but there is a moment in time when we are doing the GC where
the invariant is relaxed, which is okay because the reference will be in the
table before we return to Wasm or do anything else. This isn't a possible UAF,
in other words. To make it so that the assertion won't trip, we explicitly
insert the reference into the table *before* we GC, so that the invariant is not
relaxed across a possibly-GCing operation (even though it would be safe in this
particular case).
2022-01-27 14:46:01 -08:00
Dan Gohman
881c19473d Use ptr::cast instead of as casts in several places. (#3507)
`ptr::cast` has the advantage of being unable to silently cast
`*const T` to `*mut T`. This turned up several places that were
performing such casts, which this PR also fixes.
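
A tiny standalone example of the difference (not code from the PR itself):

```rust
fn main() {
    let value: u32 = 0x1234_5678;
    let p: *const u32 = &value;

    // `as` would silently allow `p as *mut u8`, discarding const-ness;
    // `.cast()` only changes the pointee type and preserves mutability,
    // so such mistakes become compile errors.
    let bytes: *const u8 = p.cast::<u8>();
    // let oops: *mut u8 = p.cast::<u8>(); // error: expected `*mut u8`, found `*const u8`

    unsafe { assert_eq!(*bytes, 0x1234_5678u32.to_ne_bytes()[0]) };
}
```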
2022-01-21 13:03:17 -08:00
Alex Crichton
bfdbd10a13 Add *_unchecked variants of Func APIs for the C API (#3350)
* Add `*_unchecked` variants of `Func` APIs for the C API

This commit is what is hopefully going to be my last installment within
the saga of optimizing function calls in/out of WebAssembly modules in
the C API. This is yet another alternative approach to #3345 (sorry) but
also contains everything necessary to make the C API fast. As in #3345
the general idea is just moving checks out of the call path in the same
style of `TypedFunc`.

This new strategy takes inspiration from previous attempts and
effectively "just" exposes how we previously passed `*mut u128` through
trampolines for arguments/results. This storage format is formalized
through a new `ValRaw` union that is exposed from the `wasmtime` crate.
By doing this it made it relatively easy to expose two new APIs:

* `Func::new_unchecked`
* `Func::call_unchecked`

These are the same as their checked equivalents except that they're
`unsafe` and they work with `*mut ValRaw` rather than safe slices of
`Val`. Working with these eschews type checks and such and requires
callers/embedders to do the right thing.

These two new functions are then exposed via the C API with new
functions, enabling C to have a fast-path of calling/defining functions.
This fast path is akin to `Func::wrap` in Rust, although that API can't
be built in C due to C not having generics in the same way that Rust
has.

As for benchmarks, the workloads measured here are:

* `nop` - Call a wasm function from the host that does nothing and
  returns nothing.
* `i64` - Call a wasm function from the host, the wasm function calls a
  host function, and the host function returns an `i64` all the way out to
  the original caller.
* `many` - Call a wasm function from the host, the wasm calls a
   host function with 5 `i32` parameters, and then an `i64` result is
   returned back to the original host caller.
* `i64` host - just the overhead of the wasm calling the host, so the
  wasm calls the host function in a loop.
* `many` host - same as `i64` host, but calling the `many` host function.

All numbers in this table are in nanoseconds, and this is just one
measurement as well so there's bound to be some variation in the precise
numbers here.

| Name      | Rust | C (before) | C (after) |
|-----------|------|------------|-----------|
| nop       | 19   | 112        | 25        |
| i64       | 22   | 207        | 32        |
| many      | 27   | 189        | 34        |
| i64 host  | 2    | 38         | 5         |
| many host | 7    | 75         | 8         |

The main conclusion here is that the C API is significantly faster than
before when using the `*_unchecked` variants of APIs. The Rust
implementation is still the ceiling (or floor, I guess?) for performance.
The main reason that C is slower than Rust is that a little bit more has
to travel through memory, whereas on the Rust side of things we can
monomorphize and inline a bit more to get rid of that. Overall though
the costs are way way down from where they were originally and I don't
plan on doing a whole lot more myself at this time. There's various
things we theoretically could do I've considered but implementation-wise
I think they'll be much more weighty.

* Tweak `wasmtime_externref_t` API comments
2021-09-24 14:05:45 -05:00
Nick Fitzgerald
d2ce1ac753 Fix a use-after-free bug when passing ExternRefs to Wasm
We _must not_ trigger a GC when moving refs from host code into
Wasm (e.g. returned from a host function or passed as arguments to a Wasm
function). After insertion into the table, this reference is no longer
rooted. If multiple references are being sent from the host into Wasm and we
allowed GCs during insertion, then the following events could happen:

* Reference A is inserted into the activations table. This does not trigger a
  GC, but does fill the table to capacity.

* The caller's reference to A is removed. Now the only reference to A is from
  the activations table.

* Reference B is inserted into the activations table. Because the table is at
  capacity, a GC is triggered.

* A is reclaimed because the only reference keeping it alive was the activation
  table's reference (it isn't inside any Wasm frames on the stack yet, so stack
  scanning and stack maps don't increment its reference count).

* We transfer control to Wasm, giving it A and B. Wasm uses A. That's a use
  after free.

To prevent uses after free, we cannot GC when moving refs into the
`VMExternRefActivationsTable` because we are passing them from the host to Wasm.

On the other hand, when we are *cloning* -- as opposed to moving -- refs from
the host to Wasm, then it is fine to GC while inserting into the activations
table, because the original referent that we are cloning from is still alive and
rooting the ref.
2021-09-14 14:23:42 -07:00
Alex Crichton
87c33c2969 Remove wasmtime-environ's dependency on cranelift-codegen (#3199)
* Move `CompiledFunction` into wasmtime-cranelift

This commit moves the `wasmtime_environ::CompiledFunction` type into the
`wasmtime-cranelift` crate. This type has lots of Cranelift-specific
pieces of compilation and doesn't need to be generated by all Wasmtime
compilers. This replaces the usage in the `Compiler` trait with a
`Box<Any>` type that each compiler can select. Each compiler must still
produce a `FunctionInfo`, however, which is shared information we'll
deserialize for each module.

The `wasmtime-debug` crate is also folded into the `wasmtime-cranelift`
crate as a result of this commit. One possibility was to move the
`CompiledFunction` type into its own crate and have `wasmtime-debug`
depend on that, but since `wasmtime-debug` is Cranelift-specific at this
time it didn't seem all that necessary to keep it separate.
If `wasmtime-debug` supports other backends in the future we can
recreate a new crate, perhaps with it refactored to not depend on
Cranelift.

* Move wasmtime_environ::reference_type

This now belongs in wasmtime-cranelift and nowhere else

* Remove `Type` reexport in wasmtime-environ

One less dependency on `cranelift-codegen`!

* Remove `types` reexport from `wasmtime-environ`

Less cranelift!

* Remove `SourceLoc` from wasmtime-environ

Change the `srcloc`, `start_srcloc`, and `end_srcloc` fields to a custom
`FilePos` type instead of `ir::SourceLoc`. These are only used in a few
places so there's not much to lose from an extra abstraction for these
leaf use cases outside of cranelift.

* Remove wasmtime-environ's dep on cranelift's `StackMap`

This commit "clones" the `StackMap` data structure in to
`wasmtime-environ` to have an independent representation that that
chosen by Cranelift. This allows Wasmtime to decouple this runtime
dependency of stack map information and let the two evolve
independently, if necessary.

An alternative would be to refactor cranelift's implementation into a
separate crate and have wasmtime depend on that but it seemed a bit like
overkill to do so and easier to clone just a few lines for this.

* Define code offsets in wasmtime-environ with `u32`

Don't use Cranelift's `binemit::CodeOffset` alias to define this field
type since the `wasmtime-environ` crate will be losing the
`cranelift-codegen` dependency soon.

* Commit to using `cranelift-entity` in Wasmtime

This commit removes the reexport of `cranelift-entity` from the
`wasmtime-environ` crate and instead directly depends on the
`cranelift-entity` crate in all referencing crates. The original reason
for the reexport was to make cranelift version bumps easier since it's
less versions to change, but nowadays we have a script to do that.
Otherwise this encourages crates to use whatever they want from
`cranelift-entity` since we'll always depend on the whole crate.

It's expected that the `cranelift-entity` crate will continue to be a
lean crate in dependencies and suitable for use at both runtime and
compile time. Consequently there's no need to avoid its usage in
Wasmtime at runtime, since "remove Cranelift at compile time" is
primarily about the `cranelift-codegen` crate.

* Remove most uses of `cranelift-codegen` in `wasmtime-environ`

There's only one final use remaining, which is the reexport of
`TrapCode`, which will get handled later.

* Limit the glob-reexport of `cranelift_wasm`

This commit removes the glob reexport of `cranelift-wasm` from the
`wasmtime-environ` crate. This is intended to explicitly define what
we're reexporting and is a transitionary step to curtail the amount of
dependencies taken on `cranelift-wasm` throughout the codebase. For
example some functions used by debuginfo mapping are better imported
directly from the crate since they're Cranelift-specific. Note that
this is intended to be a temporary state of affairs; soon this reexport
will be gone entirely.

Additionally this commit reduces imports from `cranelift_wasm` and also
primarily imports from `crate::wasm` within `wasmtime-environ` to get a
better sense of what's imported from where and what will need to be
shared.

* Extract types from cranelift-wasm to cranelift-wasm-types

This commit creates a new crate called `cranelift-wasm-types` and
extracts type definitions from the `cranelift-wasm` crate into this new
crate. The purpose of this crate is to be a shared definition of wasm
types that can be shared both by compilers (like Cranelift) as well as
wasm runtimes (e.g. Wasmtime). This new `cranelift-wasm-types` crate
doesn't depend on `cranelift-codegen` and is the final step in severing
the unconditional dependency from Wasmtime to `cranelift-codegen`.

The final refactoring in this commit is to then reexport this crate from
`wasmtime-environ`, delete the `cranelift-codegen` dependency, and then
update all `use` paths to point to these new types.

The main change of substance here is that the `TrapCode` enum is
mirrored from Cranelift into this `cranelift-wasm-types` crate. While
this unfortunately results in three definitions (one more which is
non-exhaustive in Wasmtime itself) it's hopefully not too onerous and
ideally something we can patch up in the future.

* Get lightbeam compiling

* Remove unnecessary dependency

* Fix compile with uffd

* Update publish script

* Fix more uffd tests

* Rename cranelift-wasm-types to wasmtime-types

This reflects the purpose a bit more, since these are types specifically
intended for Wasmtime and its support.

* Fix publish script
2021-08-18 13:14:52 -05:00
Alex Crichton
992d85ae8b Add a type parameter to VMOffsets for pointer size (#3020)
* Add a type parameter to `VMOffsets` for pointer size

This commit adds a type parameter to `VMOffsets` representing the
pointer size to improve computations in `wasmtime-runtime` which always
use a constant value of the host's pointer size. The type parameter is
`u8` for `wasmtime-cranelift`'s use case where cross-compilation may be
involved.
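
A hypothetical sketch of the idea (the trait and names here are
illustrative, not the exact definitions in `wasmtime-environ`): offset
math is parameterized over a pointer size, so the runtime can use the
host's constant size while a cross-compiler plugs in the target's size
as a `u8`.

```rust
trait PtrSize {
    fn size(&self) -> u8;
}

/// Host pointer size, known at compile time.
struct HostPtr;
impl PtrSize for HostPtr {
    fn size(&self) -> u8 {
        std::mem::size_of::<usize>() as u8
    }
}

/// Target pointer size supplied by a (possibly cross-) compiler.
impl PtrSize for u8 {
    fn size(&self) -> u8 {
        *self
    }
}

struct VMOffsets<P> {
    ptr: P,
    num_imported_functions: u32,
}

impl<P: PtrSize> VMOffsets<P> {
    fn imported_functions_size(&self) -> u32 {
        u32::from(self.ptr.size()) * 2 * self.num_imported_functions
    }
}

fn main() {
    let host = VMOffsets { ptr: HostPtr, num_imported_functions: 3 };
    let cross = VMOffsets { ptr: 4u8, num_imported_functions: 3 };
    println!(
        "host: {}, 32-bit target: {}",
        host.imported_functions_size(),
        cross.imported_functions_size()
    );
}
```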

* fix lightbeam
2021-07-13 09:52:27 -05:00
Alex Crichton
7a1b7cdf92 Implement RFC 11: Redesigning Wasmtime's APIs (#2897)
Implement Wasmtime's new API as designed by RFC 11. This is quite a large commit which has had lots of discussion externally, so for more information it's best to read the RFC thread and the PR thread.
2021-06-03 09:10:53 -05:00
Peter Huene
b775b68cfb Make module information lookup from runtime safe.
This commit uses a two-phase lookup of stack map information from modules
rather than giving back raw pointers to stack maps.

First the runtime looks up information about a module from a pc value, which
returns an `Arc` that it keeps a reference to while completing the stack map lookup.

Second it then queries the module information for the stack map from a pc
value, getting a reference to the stack map (which is now safe because of the
`Arc` held by the runtime).
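
A hypothetical sketch of the two-phase shape (simplified types, not
Wasmtime's actual registry): phase one returns an owned `Arc` for the
module, phase two borrows the stack map out of it, so the borrow can
never outlive the module's data.

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

struct StackMap;

struct ModuleInfo {
    stack_maps: BTreeMap<usize, StackMap>, // keyed by pc
}

struct ModuleRegistry {
    // Keyed by the end pc of each module's code range.
    modules: BTreeMap<usize, Arc<ModuleInfo>>,
}

impl ModuleRegistry {
    // Phase 1: pc -> module, returning an owned `Arc` the caller keeps alive.
    fn module_for_pc(&self, pc: usize) -> Option<Arc<ModuleInfo>> {
        self.modules.range(pc..).next().map(|(_, m)| m.clone())
    }
}

fn gc_scan_frame(registry: &ModuleRegistry, pc: usize) {
    if let Some(module) = registry.module_for_pc(pc) {
        // Phase 2: borrow the stack map; safe because the `Arc` is held for
        // as long as the borrow is used.
        if let Some(_stack_map) = module.stack_maps.get(&pc) {
            // ... scan the frame's roots using `_stack_map` ...
        }
    }
}

fn main() {
    let registry = ModuleRegistry { modules: BTreeMap::new() };
    gc_scan_frame(&registry, 0x1234);
}
```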
2021-04-16 12:30:10 -07:00
Peter Huene
6ac1321162 Minor corrections with latest changes. 2021-04-16 11:08:22 -07:00
Peter Huene
510fc71728 Code review feedback.
* Make `FunctionInfo` public and `CompiledModule::func_info` return it.
* Make the `StackMapLookup` trait unsafe.
* Add comments for the purpose of `EngineHostFuncs`.
* Rework ownership model of shared signatures: `SignatureCollection` in
  conjunction with `SignatureRegistry` is now used so that the `Engine`,
  `Store`, and `Module` don't need to worry about unregistering shared
  signatures.
* Implement `Func::param_arity` and `Func::result_arity` in terms of
  `Func::ty`.
* Make looking up a trampoline with the module registry more efficient by doing
  a binary search on the function's starting PC value for the owning module and
  then looking up the trampoline with only that module.
* Remove reference to the shared signatures from `GlobalRegisteredModule`.
2021-04-16 11:08:21 -07:00
Peter Huene
ea72c621f0 Remove the stack map registry.
This commit removes the stack map registry and instead uses the existing
information from the store's module registry to lookup stack maps.

A trait is now used to pass the lookup context to the runtime, implemented by
`Store` to do the lookup.

With this change, module registration in `Store` is now entirely limited to
inserting the module into the module registry.
2021-04-16 11:08:21 -07:00
Peter Huene
a2466b3c23 Move the signature registry into Engine.
This commit moves the shared signature registry out of `Store` and into
`Engine`.

This helps eliminate work that was performed whenever a `Module` was
instantiated into a `Store`.

Now a `Module` is registered with the shared signature registry upon creation,
storing the mapping from the module's signature index space to the shared index
space.

This also refactors the "frame info" registry into a general purpose "module
registry" that is used to look up trap information, signature information, and
(soon) stack map information.
2021-04-16 11:06:44 -07:00
Nick Fitzgerald
2a32567871 Merge pull request #2821 from alexcrichton/faster-vmoffsets
Precompute fields in `VMOffsets`
2021-04-08 14:17:11 -07:00
Alex Crichton
18dd82ba7d Improve signature lookup happening during instantiation (#2818)
This commit is intended to be a perf improvement for instantiation of
modules with lots of functions. Previously the `lookup_shared_signature`
callback was showing up quite high in profiles as part of instantiation.

As some background, this callback is used to translate from a module's
`SignatureIndex` to a `VMSharedSignatureIndex` which the instance
stores. This callback is called for two reasons, one is to translate all
of the module's own types into `VMSharedSignatureIndex` for the purposes
of `call_indirect` (the translation of that loads from this table to
compare indices). The second reason is that a `VMCallerCheckedAnyfunc`
is prepared for all functions and this embeds a `VMSharedSignatureIndex`
inside of it.

The slow part today is that the lookup callback was called
once-per-function and each lookup involved hashing a full
`WasmFuncType`. Granted, our hash algorithm is still Rust's default
SipHash algorithm, which is quite slow, but we also shouldn't need to
re-hash each signature if we see it multiple times anyway.

The fix applied in this commit is to change this lookup callback to an
`enum` where one variant is that there's a table to lookup from. This
table is a `PrimaryMap` which means that lookup is quite fast. The only
thing we need to do is to prepare the table ahead of time. Currently
this happens on the instantiation path because in my measurements the
creation of the table is quite fast compared to the rest of
instantiation. If this becomes an issue, though, we can look into
creating the table as part of `SigRegistry::register_module` and caching
it somewhere (I'm not entirely sure where but I'm sure we can figure it
out).

There's generally not a ton of efficiency around the `SigRegistry`
type. I'm hoping though that this fixes the next-lowest-hanging-fruit in
terms of performance without complicating the implementation too much. I
tried a few variants and this change seemed like the best balance
between simplicity and still a nice performance gain.

Locally I measured an improvement in instantiation time for a large-ish
module by reducing the time from ~3ms to ~2.6ms per instance.
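
A hypothetical sketch of the shape of the change (stand-in types; a plain
slice stands in for the `PrimaryMap`): the common case is an indexed read
from a prebuilt table rather than hashing a full function type per lookup.

```rust
use std::collections::HashMap;

type SignatureIndex = usize;
type VMSharedSignatureIndex = u32;

enum SharedSignatures<'a> {
    /// Prebuilt table: O(1) indexed lookup, no hashing.
    Table(&'a [VMSharedSignatureIndex]),
    /// Fallback: hash the full type on every lookup (the old slow path).
    Lookup(&'a HashMap<String, VMSharedSignatureIndex>),
}

impl SharedSignatures<'_> {
    fn get(&self, index: SignatureIndex, ty: &str) -> VMSharedSignatureIndex {
        match self {
            SharedSignatures::Table(table) => table[index],
            SharedSignatures::Lookup(map) => map[ty],
        }
    }
}

fn main() {
    let table = [17, 42, 99];
    let sigs = SharedSignatures::Table(&table);
    assert_eq!(sigs.get(1, "(i32) -> i32"), 42);
}
```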
2021-04-08 15:04:18 -05:00
Alex Crichton
c91e14d83f Precompute fields in VMOffsets
This commit updates the implementation of `VMOffsets` to frontload all
checked arithmetic on construction of the `VMOffsets` which allows
eliding all checked arithmetic when accessing the fields of `VMOffsets`.
For testing and such this adds a new constructor as well from a new
`VMOffsetsFields` structure which is a clone of the old definition.

This should help speed up some profile hot spots I've been seeing where
with all the checked arithmetic on field sizes this was slowing down the
various accessors during instantiation (which uses `VMOffsets` to
initialize various fields of the `VMContext`).
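
A hypothetical sketch of the frontloading (made-up fields, far fewer than
the real `VMOffsets`): all checked arithmetic happens once in the
constructor, and the hot accessors become plain field reads.

```rust
struct VMOffsetsFields {
    num_imported_functions: u32,
    num_defined_globals: u32,
    ptr_size: u32,
}

struct VMOffsets {
    imported_functions_begin: u32,
    defined_globals_begin: u32,
    size_of_vmctx: u32,
}

impl VMOffsets {
    fn new(f: VMOffsetsFields) -> VMOffsets {
        // Overflow is checked here, exactly once, at construction time.
        let imported_functions_begin = 0u32;
        let defined_globals_begin = imported_functions_begin
            .checked_add(f.num_imported_functions.checked_mul(2 * f.ptr_size).unwrap())
            .unwrap();
        let size_of_vmctx = defined_globals_begin
            .checked_add(f.num_defined_globals.checked_mul(16).unwrap())
            .unwrap();
        VMOffsets { imported_functions_begin, defined_globals_begin, size_of_vmctx }
    }

    /// Hot accessor: no checked arithmetic, just a load.
    fn vmctx_defined_globals_begin(&self) -> u32 {
        self.defined_globals_begin
    }
}

fn main() {
    let offsets = VMOffsets::new(VMOffsetsFields {
        num_imported_functions: 4,
        num_defined_globals: 2,
        ptr_size: 8,
    });
    assert_eq!(offsets.vmctx_defined_globals_begin(), 64);
    assert_eq!(offsets.size_of_vmctx, 96);
}
```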
2021-04-08 12:46:17 -07:00
Alex Crichton
d4b54ee0a8 More optimizations for calling into WebAssembly (#2759)
* Combine stack-based cleanups for faster wasm calls

This commit is an extension of #2757 where the goal is to optimize entry
into WebAssembly. Currently wasmtime has two stack-based cleanups when
entering wasm, one for the externref activation table and another for
stack limits getting reset. This commit fuses these two cleanups
together into one and moves some code around, which enables fewer
captures and closures and speeds up calls into wasm a bit more.
Overall this drops the execution time from 88ns to 80ns locally for me.

This also updates the atomic orderings when updating the stack limit
from `SeqCst` to `Relaxed`. While `SeqCst` is a reasonable starting
point the usage here should be safe to use `Relaxed` since we're not
using the atomics to actually protect any memory, it's simply receiving
signals from other threads.
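
A minimal sketch of that ordering change (a stand-in global, not the
actual stack-limit field): the value is only a signal, not a guard for
other memory, so `Relaxed` suffices.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static STACK_LIMIT: AtomicUsize = AtomicUsize::new(usize::MAX);

fn enter_wasm(new_limit: usize) -> usize {
    // Previously `Ordering::SeqCst`; `Relaxed` avoids unnecessary fencing
    // because no other memory is synchronized through this value.
    STACK_LIMIT.swap(new_limit, Ordering::Relaxed)
}

fn exit_wasm(previous: usize) {
    STACK_LIMIT.store(previous, Ordering::Relaxed);
}

fn main() {
    let prev = enter_wasm(0x7fff_0000);
    // ... run wasm ...
    exit_wasm(prev);
}
```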

* Determine whether a pc is wasm via a global map

The macOS implementation of traps recently changed to using mach ports
for handlers instead of signal handlers. This means that a previously
relied upon invariant, each thread fixes its own trap, was broken. The
macOS implementation, however, worked around this by maintaining a global
map from thread id to thread-local information to solve the problem.

This global map is quite slow though. It involves taking a lock and
updating a hash map on all calls into WebAssembly. In my local testing
this accounts for >70% of the overhead of calling into WebAssembly on
macOS. Naturally it'd be great to remove this!

This commit fixes this issue and removes the global lock/map that is
updated on all calls into WebAssembly. The fix is to maintain a global
map of wasm modules and their trap addresses in the `wasmtime` crate.
Doing so is relatively simple since we're already tracking this
information at the `Store` level.

Once we've got a global map then the macOS implementation can use this
from a foreign thread and everything works out.

Locally this brings the overhead, on macOS specifically, of calling into
wasm from 80ns to ~20ns.

* Fix compiles

* Review comments
2021-03-24 11:41:33 -05:00
Alex Crichton
c95971ab59 Optimize calling a WebAssembly function (#2757)
This commit implements a few optimizations, mainly inlining, that should
improve the performance of calling a WebAssembly function. This code
path can be quite hot depending on the embedding case and we hadn't
really put much effort into optimizing the nitty gritty.

The predominant optimization here is adding `#[inline]` to trivial
functions so performance is improved without having to compile with LTO.
Another optimization is to call `lazy_per_thread_init` when traps are
initialized per-thread (when a `Store` is created) rather than each time
a function is called. The next optimization is to change the unwind
reason in the `CallThreadState` to `MaybeUninit` to avoid extra checks
in the default case about whether we need to drop its variants (since in
the happy path we never need to drop it). The final optimization is to
optimize out a few checks when `async` support is disabled for a small
speed boost.

In a small benchmark where wasmtime calls a simple wasm function my
macOS computer dropped from 110ns to 86ns overhead, a 20% decrease. The
macOS overhead is still largely dominated by the global lock acquisition
and hash table management for traps right now, but I suspect the Linux
overhead is much better (should be on the order of ~30 or so ns).

We still have a long way to go to compete with SpiderMonkey which, in
testing, seems to have ~6ns overhead in calling the same wasm function on
my computer.
2021-03-23 15:22:37 -05:00
Peter Huene
1a0493946d Make the storage of wasmtime_runtime::Table consistent.
This change makes the storage of `Table` more internally consistent.

Elements are stored as raw pointers for both static and dynamic table storage.

Explicitly storing elements as pointers removes assumptions being made by the
pooling allocator in terms of the size and default representation of the
elements.

However, care must be made to properly clone externrefs for table operations.
2021-03-05 18:36:14 -08:00
Alex Crichton
0e41861662 Implement limiting WebAssembly execution with fuel (#2611)
* Consume fuel during function execution

This commit adds codegen infrastructure necessary to instrument wasm
code to consume fuel as it executes. Currently nothing is really done
with the fuel, but that'll come in later commits.

The focus of this commit is to implement the codegen infrastructure
necessary to consume fuel and account for fuel consumed correctly.

* Periodically check remaining fuel in wasm JIT code

This commit enables wasm code to periodically check to see if fuel has
run out. When fuel runs out an intrinsic is called which can do what it
needs to do in the event of fuel running out. For now a trap is thrown
to have at least some semantics in synchronous stores, but another
planned use for this feature is for asynchronous stores to periodically
yield back to the host based on fuel running out.

Checks for remaining fuel happen in the same locations as interrupt
checks, which is to say the start of the function as well as loop
headers.
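
From the embedder's side this looks roughly like the sketch below. It
assumes the `wasmtime` and `anyhow` crates and the fuel API of roughly
this era (`Store::add_fuel` in particular has been renamed in later
releases), so treat it as a sketch rather than a version-exact example.

```rust
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // Enable fuel metering; generated code then decrements a fuel counter
    // and checks it at function entries and loop headers.
    let mut config = Config::new();
    config.consume_fuel(true);
    let engine = Engine::new(&config)?;

    let module = Module::new(
        &engine,
        r#"(module (func (export "run") (loop (br 0))))"#,
    )?;

    let mut store = Store::new(&engine, ());
    store.add_fuel(10_000)?; // name has varied across wasmtime versions

    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;

    // The infinite loop traps once fuel is exhausted instead of hanging.
    assert!(run.call(&mut store, ()).is_err());
    Ok(())
}
```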

* Improve codegen by caching `*const VMInterrupts`

The location of the shared interrupt value and fuel value is through a
double-indirection on the vmctx (load through the vmctx and then load
through that pointer). The second pointer in this chain, however, never
changes, so we can alter codegen to account for this and remove some
extraneous load instructions and hopefully even reduce some register
pressure.

* Add tests that fuel can abort infinite loops

* More fuzzing with fuel

Use fuel to time out modules in addition to time, using fuzz input to
figure out which.

* Update docs on trapping instructions

* Fix doc links

* Fix a fuzz test

* Change setting fuel to adding fuel

* Fix a doc link

* Squelch some rustdoc warnings
2021-01-29 08:57:17 -06:00
Alex Crichton
068340d30f Fix a case of using the wrong stack map during gcs (#2396)
This commit fixes an issue where when looking up the stack map for a pc
within a function we might end up reading the *previous* function's
stack maps. This then later caused asserts to trip because we started
interpreting random data as a `VMExternRef` when it wasn't. The fix was
to add `None` markers for "this range has no stack map" in the function
ranges map.

Closes #2386
2020-11-12 13:24:00 -06:00
Nick Fitzgerald
05bf9ea3f3 Rename "Stackmap" to "StackMap"
And "stackmap" to "stack_map".

This commit is purely mechanical.
2020-08-07 10:08:44 -07:00
Nick Fitzgerald
98e899f6b3 fuzz: Add a fuzz target for table.{get,set} operations
This new fuzz target exercises sequences of `table.get`s, `table.set`s, and
GCs.

It already found a couple bugs:

* Some leaks due to ref count cycles between stores and host-defined functions
  closing over those stores.

* If there are no live references for a PC, Cranelift can avoid emitting an
  associated stack map. This was running afoul of a debug assertion.
2020-06-30 12:00:57 -07:00
Nick Fitzgerald
8c5f59c0cf wasmtime: Implement table.get and table.set
These instructions have fast, inline JIT paths for the common cases, and only
call out to host VM functions for the slow paths. This required some changes to
`cranelift-wasm`'s `FuncEnvironment`: instead of taking a `FuncCursor` to insert
an instruction sequence within the current basic block,
`FuncEnvironment::translate_table_{get,set}` now take a `&mut FunctionBuilder`
so that they can create whole new basic blocks. This is necessary for
implementing GC read/write barriers that involve branching (e.g. checking for
null, or whether a store buffer is at capacity).

Furthermore, it required that the `load`, `load_complex`, and `store`
instructions handle loading and storing through an `r{32,64}` rather than just
`i{32,64}` addresses. This involved making `r{32,64}` types acceptable
instantiations of the `iAddr` type variable, plus a few new instruction
encodings.

Part of #929
2020-06-30 12:00:57 -07:00
Alex Crichton
0acd2072c2 Fix doc warnings and link failures (#1948)
Also add configuration to CI to fail doc generation if any links are
broken. Unfortunately we can't blanket deny all warnings in rustdoc
since some are unconditional warnings, but for now this is hopefully
good enough.

Closes #1947
2020-06-30 13:01:49 -05:00
Nick Fitzgerald
58bb5dd953 wasmtime: Add support for func.ref and table.grow with funcrefs
`funcref`s are implemented as `NonNull<VMCallerCheckedAnyfunc>`.

This should be more efficient than using a `VMExternRef` that points at a
`VMCallerCheckedAnyfunc` because it gets rid of an indirection, dynamic
allocation, and some reference counting.

Note that the null function reference is *NOT* a null pointer; it is a
`VMCallerCheckedAnyfunc` that has a null `func_ptr` member.
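
A simplified sketch of that representation (field names and types are
approximations of the real `VMCallerCheckedAnyfunc`, and the sentinel
`type_index` is a placeholder): the "null funcref" is a real anyfunc
whose `func_ptr` is null, while the `NonNull` funcref pointer itself is
never null.

```rust
use std::ptr;

#[repr(C)]
struct VMCallerCheckedAnyfunc {
    func_ptr: *const u8, // null here means "null funcref"
    type_index: u32,
    vmctx: *mut u8,
}

impl VMCallerCheckedAnyfunc {
    fn null() -> Self {
        VMCallerCheckedAnyfunc {
            func_ptr: ptr::null(),
            type_index: u32::MAX, // placeholder sentinel
            vmctx: ptr::null_mut(),
        }
    }

    fn is_null_ref(&self) -> bool {
        self.func_ptr.is_null()
    }
}

fn main() {
    let null_funcref = VMCallerCheckedAnyfunc::null();
    assert!(null_funcref.is_null_ref());
}
```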

Part of #929
2020-06-24 10:08:13 -07:00
Nick Fitzgerald
7e167cae10 externref: Address review feedback 2020-06-15 15:39:26 -07:00
Nick Fitzgerald
618c278e41 externref: implement a canary for GC stack walking
This allows us to detect when stack walking has failed to walk the whole stack,
and we are potentially missing on-stack roots, and therefore it would be unsafe
to do a GC because we could free objects too early, leading to use-after-free.
When we detect this scenario, we skip the GC.
2020-06-15 09:39:37 -07:00
Nick Fitzgerald
f30ce1fe97 externref: implement stack map-based garbage collection
For host VM code, we use plain reference counting, where cloning increments
the reference count, and dropping decrements it. We can avoid many of the
on-stack increment/decrement operations that typically plague the
performance of reference counting via Rust's ownership and borrowing system.
Moving a `VMExternRef` avoids mutating its reference count, and borrowing it
either avoids the reference count increment or delays it until if/when the
`VMExternRef` is cloned.

When passing a `VMExternRef` into compiled Wasm code, we don't want to do
reference count mutations for every compiled `local.{get,set}`, nor for
every function call. Therefore, we use a variation of **deferred reference
counting**, where we only mutate reference counts when storing
`VMExternRef`s somewhere that outlives the activation: into a global or
table. Simultaneously, we over-approximate the set of `VMExternRef`s that
are inside Wasm function activations. Periodically, we walk the stack at GC
safe points, and use stack map information to precisely identify the set of
`VMExternRef`s inside Wasm activations. Then we take the difference between
this precise set and our over-approximation, and decrement the reference
count for each of the `VMExternRef`s that are in our over-approximation but
not in the precise set. Finally, the over-approximation is replaced with the
precise set.

The `VMExternRefActivationsTable` implements the over-approximized set of
`VMExternRef`s referenced by Wasm activations. Calling a Wasm function and
passing it a `VMExternRef` moves the `VMExternRef` into the table, and the
compiled Wasm function logically "borrows" the `VMExternRef` from the
table. Similarly, `global.get` and `table.get` operations clone the gotten
`VMExternRef` into the `VMExternRefActivationsTable` and then "borrow" the
reference out of the table.

When a `VMExternRef` is returned to host code from a Wasm function, the host
increments the reference count (because the reference is logically
"borrowed" from the `VMExternRefActivationsTable` and the reference count
from the table will be dropped at the next GC).

For more general information on deferred reference counting, see *An
Examination of Deferred Reference Counting and Cycle Detection* by Quinane:
https://openresearch-repository.anu.edu.au/bitstream/1885/42030/2/hon-thesis.pdf

cc #929

Fixes #1804
2020-06-15 09:39:37 -07:00
Nick Fitzgerald
b78eafcfd3 externref: Do not impl common traits, have free functions instead
If you aren't expecting `VMExternRef`'s pointer-equality semantics, then these
trait implementations can be foot guns. Instead of implementing the trait, make
free functions in the `VMExternRef` namespace. This way, callers have to be a
little more explicit.
2020-06-01 15:09:51 -07:00
Nick Fitzgerald
98da9ee8a9 Use std::alloc::handle_alloc_error instead of home-rolled version 2020-06-01 15:09:51 -07:00
Nick Fitzgerald
25548d7fbe externref: Share more Layout-computing code 2020-06-01 15:09:51 -07:00
Nick Fitzgerald
a8ee0554a9 wasmtime: Initial, partial support for externref
This is enough to get an `externref -> externref` identity function
passing.

However, `externref`s that are dropped by compiled Wasm code are (safely)
leaked. Follow up work will leverage cranelift's stack maps to resolve this
issue.
2020-06-01 15:09:51 -07:00
Nick Fitzgerald
1898d52966 runtime: Introduce VMExternRef and VMExternData
`VMExternRef` is a reference-counted box for any kind of data that is
external and opaque to running Wasm. Sometimes it might hold a Wasmtime
thing, other times it might hold something from a Wasmtime embedder and is
opaque even to us. It is morally equivalent to `Rc<dyn Any>` in Rust, but
additionally always fits in a pointer-sized word. `VMExternRef` is
non-nullable, but `Option<VMExternRef>` uses a null pointer to represent `None`.

The one part of `VMExternRef` that can't ever be opaque to us is the
reference count. Even when we don't know what's inside a `VMExternRef`, we
need to be able to manipulate its reference count as we add and remove
references to it. And we need to do this from compiled Wasm code, so it must
be `repr(C)`!

`VMExternRef` itself is just a pointer to a `VMExternData`, which holds the
opaque, boxed value, its reference count, and its vtable pointer.

The `VMExternData` struct is *preceded* by the dynamically-sized value boxed
up and referenced by one or more `VMExternRef`s:

```ignore
     ,-------------------------------------------------------.
     |                                                       |
     V                                                       |
    +----------------------------+-----------+-----------+   |
    | dynamically-sized value... | ref_count | value_ptr |---'
    +----------------------------+-----------+-----------+
                                 | VMExternData          |
                                 +-----------------------+
                                  ^
+-------------+                   |
| VMExternRef |-------------------+
+-------------+                   |
                                  |
+-------------+                   |
| VMExternRef |-------------------+
+-------------+                   |
                                  |
  ...                            ===
                                  |
+-------------+                   |
| VMExternRef |-------------------'
+-------------+
```

The `value_ptr` member always points backwards to the start of the
dynamically-sized value (which is also the start of the heap allocation for
this value-and-`VMExternData` pair). Because it is a `dyn` pointer, it is
fat, and also points to the value's `Any` vtable.

The boxed value and the `VMExternData` footer are held in a single heap
allocation. The layout described above is used to make satisfying the
value's alignment easy: we just need to ensure that the heap allocation used
to hold everything satisfies its alignment. It also ensures that we don't
need a ton of excess padding between the `VMExternData` and the value for
values with large alignment.
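
A simplified sketch of those shapes in Rust (not the exact definitions in
`wasmtime-runtime`): the ref is a single pointer-sized word, and the
footer holds the count plus a fat pointer back to the boxed value.

```rust
use std::any::Any;
use std::cell::Cell;
use std::ptr::NonNull;

#[repr(C)]
struct VMExternData {
    ref_count: Cell<usize>,
    // Fat `dyn` pointer back to the start of the heap allocation; it also
    // carries the value's vtable.
    value_ptr: NonNull<dyn Any>,
}

#[repr(transparent)]
struct VMExternRef(NonNull<VMExternData>);

fn main() {
    // The ref itself is always a single pointer-sized word...
    assert_eq!(std::mem::size_of::<VMExternRef>(), std::mem::size_of::<usize>());
    // ...and thanks to `NonNull`, `Option<VMExternRef>` is too (null = None).
    assert_eq!(
        std::mem::size_of::<Option<VMExternRef>>(),
        std::mem::size_of::<usize>()
    );
}
```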
2020-06-01 14:53:10 -07:00