We've gotten some OOM fuzz test cases reported, but they aren't very
interesting. After some investigation, the OOMs were confirmed to be
happening because the test simply allocates thousands of instances with
massive tables, quickly exceeding the 2GB memory threshold for fuzzing.
This isn't really interesting because it's the expected behavior when
instantiating these sorts of modules.
This commit updates the fuzz test case generator to make a "prediction",
for each module, of how much memory instantiating it will take. That
prediction is then used to avoid instantiating new modules when we
predict that doing so would exceed our memory limit. The limits here are
intentionally very squishy and imprecise: the goal is still to generate
lots of interesting test cases, just not ones that trivially exhaust
memory.
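To make that concrete, here is a minimal sketch of what such a prediction could look like. The struct fields, constants, and per-element cost model are illustrative assumptions, not the actual implementation:

```rust
/// Hypothetical per-module configuration; field names are illustrative.
struct ModuleConfig {
    memory_pages: u64,   // linear-memory pages declared by the module
    table_elements: u64, // elements across the module's tables
    instances: u64,      // how many instances the test case may create
}

const WASM_PAGE_SIZE: u64 = 65_536;
const BYTES_PER_TABLE_ELEMENT: u64 = 8; // rough guess at one table slot
const FUZZ_MEMORY_BUDGET: u64 = 2 << 30; // the ~2GB fuzzing threshold

impl ModuleConfig {
    /// Squishy upper-bound guess at the memory needed to instantiate
    /// all of this test case's instances.
    fn predicted_memory_usage(&self) -> u64 {
        let per_instance = self.memory_pages * WASM_PAGE_SIZE
            + self.table_elements * BYTES_PER_TABLE_ELEMENT;
        per_instance.saturating_mul(self.instances)
    }

    /// Skip instantiation entirely when the prediction blows the budget.
    fn within_budget(&self) -> bool {
        self.predicted_memory_usage() < FUZZ_MEMORY_BUDGET
    }
}
```

The prediction deliberately overestimates; false positives just mean a test case instantiates fewer modules, which is fine for fuzzing purposes.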
We only generate *valid* sequences of API calls. To do this, we keep track of
what objects we've already created in earlier API calls via the `Scope` struct.
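As a rough illustration, the bookkeeping might look like the sketch below; the fields and methods here are hypothetical, not the fuzzer's exact definition:

```rust
/// Records which objects earlier API calls created, so later calls can
/// only reference IDs that actually exist.
#[derive(Default)]
struct Scope {
    modules: Vec<usize>,
    instances: Vec<usize>,
    next_id: usize,
}

impl Scope {
    /// Record a newly compiled module and return its ID.
    fn new_module(&mut self) -> usize {
        let id = self.next_id;
        self.next_id += 1;
        self.modules.push(id);
        id
    }

    /// Record a new instance of some existing module.
    fn new_instance(&mut self) -> usize {
        let id = self.next_id;
        self.next_id += 1;
        self.instances.push(id);
        id
    }

    /// An "instantiate" call is only generated once at least one module
    /// exists, keeping every generated sequence of API calls valid.
    fn can_instantiate(&self) -> bool {
        !self.modules.is_empty()
    }
}
```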
To generate even-more-pathological sequences of API calls, we use [swarm
testing]:
> In swarm testing, the usual practice of potentially including all features
> in every test case is abandoned. Rather, a large “swarm” of randomly
> generated configurations, each of which omits some features, is used, with
> configurations receiving equal resources.
[swarm testing]: https://www.cs.utah.edu/~regehr/papers/swarm12.pdf
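In practice this might look something like the following sketch built on the `arbitrary` crate: each test case first draws a random subset of enabled features, and only API calls allowed by that subset are generated. The feature names and call kinds are invented for illustration:

```rust
use arbitrary::{Arbitrary, Result, Unstructured};

/// Hypothetical per-test-case "swarm" configuration: a random subset of
/// features, drawn once at the start of each fuzz input.
#[derive(Arbitrary, Debug)]
struct SwarmConfig {
    allow_instantiation: bool,
    allow_calls: bool,
    allow_drops: bool,
}

#[derive(Debug, Clone, Copy)]
enum ApiCall {
    Instantiate,
    CallFunc,
    DropInstance,
}

fn arbitrary_calls(u: &mut Unstructured<'_>) -> Result<Vec<ApiCall>> {
    let config = SwarmConfig::arbitrary(u)?;

    // Collect the call kinds this particular swarm member may use.
    let mut kinds = Vec::new();
    if config.allow_instantiation {
        kinds.push(ApiCall::Instantiate);
    }
    if config.allow_calls {
        kinds.push(ApiCall::CallFunc);
    }
    if config.allow_drops {
        kinds.push(ApiCall::DropInstance);
    }

    // Consume the rest of the input choosing only among enabled kinds.
    let mut calls = Vec::new();
    while !kinds.is_empty() && !u.is_empty() {
        calls.push(*u.choose(&kinds)?);
    }
    Ok(calls)
}
```

Because some test cases omit, say, dropping instances entirely, sequences that would be vanishingly rare under uniform generation (e.g. thousands of instantiations with no drops) show up regularly.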
There are more public APIs and instance-introspection APIs than this
fuzzer exercises right now. To really get the most out of those
currently-unexercised APIs, we will need a better generator of valid
Wasm than `wasm-opt -ttf`, since the Wasm modules it generates don't
import and export a huge variety of things.