* ISLE compiler: fix priority-trie interval bug. (#4093)
This PR fixes a bug in the ISLE compiler related to rule priorities.
An important note first: the bug did not affect the correctness of the
Cranelift backends, either in theory (because the rules should be
correct when applied in any order, even contrary to the stated priorities)
or in practice (because the generated code actually does not change at
all with the DSL compiler fix, only with a separate minimized bug
example).
The issue was a simple swap of `min` for `max` (see first
commit). This is the minimal fix, I think, to get a correct
priority-trie with the minimized bug example in this commit.
However, while debugging this, I started to convince myself that the
complexity of merging multiple priority ranges using the sort of
hybrid interval tree / string-matching trie data structure was
unneeded. The original design was built with the assumption we might
have a bunch of different priority levels, and would need the
efficiency of merging where possible. But in practice we haven't used
priorities this way: the vast majority of lowering rules exist at the
default (priority 0), and just a few overrides are explicitly at prio
1, 2 or (rarely) 3.
So, it turns out to be a lot simpler to label trie edges with (prio,
symbol) rather than (prio-range, symbol), and delete the whole mess of
interval-splitting logic on insertion. It's easier (IMHO) to convince
oneself that the resulting insertion algorithm is correct.
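To make that concrete, here's a minimal sketch of the simpler edge
representation. All names here are illustrative stand-ins, not the actual
`cranelift-isle` internals:

```rust
// Illustrative sketch only: with edges keyed by a single (prio, symbol)
// pair, insertion is find-or-create -- no interval splitting or merging.
struct TrieNode {
    edges: Vec<TrieEdge>,
}

struct TrieEdge {
    prio: i64,
    symbol: u32, // stand-in for the real match-symbol type
    node: TrieNode,
}

impl TrieNode {
    fn insert(&mut self, prio: i64, symbol: u32) -> &mut TrieNode {
        // Reuse an existing edge with exactly this (prio, symbol) label...
        if let Some(i) = self
            .edges
            .iter()
            .position(|e| e.prio == prio && e.symbol == symbol)
        {
            return &mut self.edges[i].node;
        }
        // ...or create one; there's no interval bookkeeping to get wrong.
        self.edges.push(TrieEdge {
            prio,
            symbol,
            node: TrieNode { edges: Vec::new() },
        });
        &mut self.edges.last_mut().unwrap().node
    }
}
```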
I was worried that this might impact the size of the generated Rust
code or its runtime, but in fact, to my initial surprise (though it makes
sense given the above "rarely used" factor), the generated code with
this compiler fix is *exactly the same*. I rebuilt with `--features
rebuild-isle,all-arch` but... there were no diffs to commit! This is
to me the simplest evidence that we didn't really need that
complexity.
* Fix earlier commit from #4093: properly sort trie.
This commit fixes an in-hindsight-obvious bug in #4093: the trie's edges
must be sorted recursively, not just at the top level.
With this fix, the generated code differs only in one cosmetic way (a
let-binding moves) but otherwise is the same.
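Using the illustrative types from the earlier sketch, the fix has roughly
this shape (again, not the real code):

```rust
impl TrieNode {
    fn sort(&mut self) {
        // Higher priorities first; ties broken by symbol for determinism.
        self.edges
            .sort_by(|a, b| b.prio.cmp(&a.prio).then(a.symbol.cmp(&b.symbol)));
        // The in-hindsight-obvious part: recurse into every child, too.
        for edge in &mut self.edges {
            edge.node.sort();
        }
    }
}
```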
This includes @fitzgen's fix to the CI (from the revert in #4102) that
deletes manifests to actually check that the checked-in source is
consistent with the checked-in compiler. The force-rebuild step is now
in a shell script for convenience: anyone hacking on the ISLE compiler
itself can use this script to more easily rebuild everything.
* Add note to build.rs to remind to update force-rebuild-isle.sh
* Revert "ISLE compiler: fix priority-trie interval bug. (#4093)"
This reverts commit 2122337112.
* ci: Delete ISLE manifest files to force an ISLE rebuild
This makes it so that we actually rebuild ISLE rather than just check that the
manifests are up-to-date.
The automated release process failed last night because the `./publish
bump` process failed to automatically increment the version number. This
is fixed in #4097 but it also seems prudent to run this continually on
CI as well to ensure that this doesn't regress again.
* Update more workflows to only this repository
This adds `if: github.repository == 'bytecodealliance/wasmtime'` to a
few more workflows related to the release process which should only run
in this repository and no other (e.g. forks).
* Also only run verify-publish in the upstream repo
No need for local development to be burdened with ensuring everything is
actually publish-able, that's just a concern for the main repository.
* Gate workflows which need secrets on this repository
This commit removes support for the `userfaultfd` or "uffd" syscall on
Linux. This support was originally added for users migrating from Lucet
to Wasmtime, but the recent developments of kernel-supported
copy-on-write support for memory initialization wound up being more
appropriate for these use cases than userfaultfd. The main reasons for
moving to copy-on-write initialization are:
* The `userfaultfd` feature was never necessarily intended for this
style of use case with wasm and was susceptible to subtle and rare
bugs that were extremely difficult to track down. We were never 100%
certain that there were kernel bugs related to userfaultfd but the
suspicion never went away.
* Handling faults with userfaultfd was always slow and single-threaded.
Only one thread could handle faults and traveling to user-space to
handle faults is inherently slower than handling them all in the
kernel. The single-threaded aspect in particular presented a
significant scaling bottleneck for embeddings that want to run many
wasm instances in parallel.
* One of the major benefits of userfaultfd was lazy initialization of
wasm linear memory which is also achieved with the copy-on-write
initialization support we have right now.
* One of the suspected benefits of userfaultfd was less frobbing of the
kernel vma structures when wasm modules are instantiated. Currently
the copy-on-write support has a mitigation where we attempt to reuse
the memory images where possible to avoid changing vma structures.
When comparing this to userfaultfd's performance it was found that
kernel modifications of vmas aren't a worrisome bottleneck so
copy-on-write is suitable for this as well.
Overall there are no remaining benefits that userfaultfd gives that
copy-on-write doesn't, and copy-on-write solves a major downside of
userfaultfd: the scaling issue of a single faulting thread.
Additionally copy-on-write support seems much more robust in terms of
kernel implementation since it's only using standard memory-management
syscalls which are heavily exercised. Finally copy-on-write support
provides a new bonus where read-only memory in WebAssembly can be mapped
directly to the same kernel cache page, even amongst many wasm instances
of the same module, which was never possible with userfaultfd.
In light of all this it's expected that all users of userfaultfd should
migrate to the copy-on-write initialization of Wasmtime (which is
enabled by default).
* Add m1 to release process
This creates a pre-compiled binary for M1 Macs and adds a link to
review Embark Studios' CI for verification.
* remove test for macos arm
Tests will not succeed for macOS ARM until GitHub provides an M1-hosted runner.
Co-authored-by: Bailey Hayes <bhayes@singlestore.com>
* Support disabling backtraces at compile time
This commit adds support to Wasmtime to disable, at compile time, the
gathering of backtraces on traps. The `wasmtime` crate now sports a
`wasm-backtrace` feature which, when disabled, will mean that backtraces
are never collected at compile time nor are unwinding tables inserted
into compiled objects.
The motivation for this commit stems from the fact that generating a
backtrace is quite a slow operation. Currently backtrace generation is
done with libunwind and `_Unwind_Backtrace` typically found in glibc or
other system libraries. When thousands of modules are loaded into the
same process though this means that the initial backtrace can take
nearly half a second and all subsequent backtraces can take upwards of
hundreds of milliseconds. Relative to all other operations in Wasmtime
this is extremely expensive at this time. In the future we'd like to
implement a more performant backtrace scheme but such an implementation
would require coordination with Cranelift and is a big chunk of work
that may take some time, so in the meantime if embedders don't need a
backtrace they can still use this option to disable backtraces at
compile time and avoid the performance pitfalls of collecting
backtraces.
In general I originally tried to make this a runtime configuration
option but ended up opting for a compile-time option because `Trap::new`
otherwise has no arguments and always captures a backtrace. By making
this a compile-time option it was possible to configure, statically, the
behavior of `Trap::new`. Additionally I also tried to minimize the
amount of `#[cfg]` necessary by largely only having it at the producer
and consumer sites.
Also a noteworthy restriction of this implementation is that if
backtrace support is disabled at compile time then reference types
support will be unconditionally disabled at runtime. With backtrace
support disabled there's no way to trace the stack of wasm frames which
means that GC can't happen given our current implementation.
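Roughly, the gating has this shape. This is a sketch rather than the
literal `wasmtime` source; embedders opt out by building the `wasmtime`
crate with `default-features = false` and omitting `wasm-backtrace`:

```rust
// Sketch: the producer site is configured statically, so Trap creation
// needs no extra arguments while its behavior is decided at compile time.
pub struct Trap {
    message: String,
    #[cfg(feature = "wasm-backtrace")]
    backtrace: backtrace::Backtrace,
}

impl Trap {
    pub fn new(message: impl Into<String>) -> Trap {
        Trap {
            message: message.into(),
            // Captured eagerly only when the feature is enabled; with it
            // disabled, no unwind tables are emitted into objects either.
            #[cfg(feature = "wasm-backtrace")]
            backtrace: backtrace::Backtrace::new_unresolved(),
        }
    }
}
```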
* Always enable backtraces for the C API
After adding the `call`-oriented benchmark recently I just noticed that
running benchmarks on CI is taking 30+ minutes which is not intended.
Instead of running a full benchmark run on CI (which I believe we're not
looking at anyway), only run the benchmarks for a single
iteration to ensure they still work but otherwise don't collect
statistics about them.
Additionally cap the number of parallel instantiations to 16 to avoid
running tons of tests for machines with lots of cpus.
It seems our `compile` fuzz target for ISLE has not been regularly
tested, as it was never updated for the `isle` -> `cranelift_isle` crate
renaming. This PR fixes it to compile again.
This also includes a simple fix in the typechecking: when verifying that
a term decl is valid, we might insert a term ID into the name->ID map
before fully checking that all of the types exist, and then skipping
(for error recovery purposes) the actual push onto the term-signature
vector if one of the types does have an error. This phantom TID can
later cause a panic. The fix is to avoid adding to the map until we have
fully verified the term decl.
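The ordering fix looks roughly like this. The names below are
hypothetical, not the actual `cranelift-isle` typechecker:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy)]
struct TypeId(usize);
#[derive(Clone, Copy)]
struct TermId(usize);

#[derive(Default)]
struct Env {
    types: HashMap<String, TypeId>,
    terms: HashMap<String, TermId>,
    term_signatures: Vec<Vec<TypeId>>,
}

impl Env {
    /// Declare a term, resolving all of its argument types *before* the
    /// term becomes visible in `terms`, so that bailing out for error
    /// recovery can't leave a phantom TermId behind.
    fn declare_term(&mut self, name: &str, arg_tys: &[&str]) -> Option<TermId> {
        let mut sig = Vec::with_capacity(arg_tys.len());
        for ty in arg_tys {
            // Unknown type: report and recover -- crucially, without
            // having touched `self.terms` yet.
            sig.push(self.types.get(*ty).copied()?);
        }
        let id = TermId(self.term_signatures.len());
        self.terms.insert(name.to_string(), id);
        self.term_signatures.push(sig);
        Some(id)
    }
}
```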
Looks like the 0.34.1 release is missing artifacts and some jobs
building artifacts ended up being cancelled because of API rate limits
being hit on the builders. Artifacts are uploaded to the job, however,
which means we can always go back and grab them to upload them, unless
the whole job was cancelled. For 0.34.1 it looks like the Linux builder
hit an error but its error then subsequently cancelled the Windows
builders, so we don't actually have artifacts for Windows for the 0.34.1
release. This change will hopefully prevent this failure mode in the
future: if one builder hits an error while uploading artifacts, the
others will continue, and we can manually upload whatever is missing if
necessary.
cc #3812
In #3721, we have been discussing what to do about the ARM32 backend in
Cranelift. Currently, this backend supports only 32-bit types, which is
insufficient for full Wasm-MVP; it's missing other critical bits, like
floating-point support; and it has only ever been exercised, AFAIK, via
the filetests for the individual CLIF instructions that are implemented.
We were very very thankful for the original contribution of this
backend, even in its partial state, and we had hoped at the time that we
could eventually mature it in-tree until it supported e.g. Wasm and
other use-cases. But that hasn't yet happened -- to the blame of no-one,
to be clear, we just haven't had a contributor with sufficient time.
Unfortunately, the existence of the backend and lack of active
maintainer now potentially pose a bit of a burden as we hope to make
continuing changes to the backend framework. For example, the ISLE
migration, and the use of regalloc2 that it will allow, would need all
of the existing lowering patterns in the hand-written ARM32 backend to
be rewritten as ISLE rules.
Given that we don't currently have the resources to do this, we think
it's probably best if we, sadly, for now remove this partial backend.
This is not in any way a statement of what we might accept in the
future, though. If, in the future, an ARM32 backend updated to our
latest codebase with an active maintainer were to appear, we'd be happy
to merge it (and likewise for any other architecture!). But for now,
this is probably the best path. Thanks again to the original contributor
@jmkrauz and we hope that this work can eventually be brought back and
reused if someone has the time to do so!
We currently skip some tests when running our qemu-based tests for
aarch64 and s390x. Qemu has broken madvise(MADV_DONTNEED) semantics --
specifically, it just ignores madvise() [1].
We could continue to whack-a-mole the tests whenever we create new
functionality that relies on madvise() semantics, but ideally we'd just
have emulation that properly emulates!
The earlier discussions on the qemu mailing list [2] had a proposed
patch for this, but (i) this patch doesn't seem to apply cleanly anymore
(it's 3.5 years old) and (ii) it's pretty complex due to the need to
handle qemu's ability to emulate differing page sizes on host and guest.
It turns out that we only really need this for CI when host and guest
have the same page size (4KiB), so we *could* just pass the madvise()s
through. I wouldn't expect such a patch to ever land upstream in qemu,
but it satisfies our needs I think. So this PR modifies our CI setup to
patch qemu before building it locally with a little one-off patch.
[1]
https://github.com/bytecodealliance/wasmtime/pull/2518#issuecomment-747280133
[2]
https://lists.gnu.org/archive/html/qemu-devel/2018-08/msg05416.html
As first suggested by Jan on the Zulip here [1], a cheap and effective
way to obtain copy-on-write semantics of a "backing image" for a Wasm
memory is to mmap a file with `MAP_PRIVATE`. The `memfd` mechanism
provided by the Linux kernel allows us to create anonymous,
in-memory-only files that we can use for this mapping, so we can
construct the image contents on-the-fly then effectively create a CoW
overlay. Furthermore, and importantly, `madvise(MADV_DONTNEED, ...)`
will discard the CoW overlay, returning the mapping to its original
state.
By itself this is almost enough for a very fast
instantiation-termination loop of the same image over and over,
without changing the address space mapping at all (which is
expensive). The only missing bit is how to implement
heap *growth*. But here memfds can help us again: if we create another
anonymous file and map it where the extended parts of the heap would
go, we can take advantage of the fact that a `mmap()` mapping can
be *larger than the file itself*, with accesses beyond the end
generating a `SIGBUS`, and the fact that we can cheaply resize the
file with `ftruncate`, even after a mapping exists. So we can map the
"heap extension" file once with the maximum memory-slot size and grow
the memfd itself as `memory.grow` operations occur.
The above CoW technique and heap-growth technique together allow us a
fastpath of `madvise()` and `ftruncate()` only when we re-instantiate
the same module over and over, as long as we can reuse the same
slot. This fastpath avoids all whole-process address-space locks in
the Linux kernel, which should mean it is highly scalable. It also
avoids the cost of copying data on read, as the `uffd` heap backend
does when servicing pagefaults; the kernel's own optimized CoW
logic (same as used by all file mmaps) is used instead.
[1] https://bytecodealliance.zulipchat.com/#narrow/stream/206238-general/topic/Copy.20on.20write.20based.20instance.20reuse/near/266657772
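For flavor, here's a condensed sketch of the two mechanisms on Linux
using raw `libc` calls. The real implementation in `wasmtime-runtime` is
considerably more careful about errors, alignment, and partial writes:

```rust
use std::{ffi::CString, io, ptr};

/// Build an anonymous in-memory file holding the initial heap image and
/// map it MAP_PRIVATE, i.e. as a copy-on-write overlay. The mapping is
/// deliberately larger than the file: accesses past EOF raise SIGBUS,
/// and growing the heap is just an ftruncate() on the memfd.
unsafe fn map_image(image: &[u8], reserve_len: usize) -> io::Result<*mut u8> {
    let name = CString::new("wasm-memory-image").unwrap();
    let fd = libc::memfd_create(name.as_ptr(), 0);
    if fd < 0 {
        return Err(io::Error::last_os_error());
    }
    if libc::ftruncate(fd, image.len() as libc::off_t) != 0
        || libc::write(fd, image.as_ptr().cast(), image.len()) < 0
    {
        return Err(io::Error::last_os_error());
    }
    let p = libc::mmap(
        ptr::null_mut(),
        reserve_len,
        libc::PROT_READ | libc::PROT_WRITE,
        libc::MAP_PRIVATE,
        fd,
        0,
    );
    if p == libc::MAP_FAILED {
        return Err(io::Error::last_os_error());
    }
    Ok(p.cast())
}

/// On instance teardown: one syscall throws away the dirty CoW pages,
/// restoring the pristine image without any munmap/mmap churn.
unsafe fn reset_to_image(base: *mut u8, len: usize) -> io::Result<()> {
    if libc::madvise(base.cast(), len, libc::MADV_DONTNEED) != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}
```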
This was accidentally omitted from our CI configuration, which meant
that release branches didn't get PR CI. They still won't get on-merge CI
but that shouldn't be an issue because the PR CI is the full CI.
* Update to cap-std 0.22.0.
The main change relevant to Wasmtime here is that this includes the
rustix fix for compilation errors on Rust nightly with the `asm!` macro.
* Add itoa to deny.toml.
* Update the doc and fuzz builds to the latest Rust nightly.
* Update to libc 0.2.112 to pick up the `POLLRDHUP` fix.
* Update to cargo-fuzz 0.11, for compatibility with Rust nightly.
This appears to be the fix for rust-fuzz/cargo-fuzz#277.
Peepmatic was an early attempt at a DSL for peephole optimizations, with the
idea that maybe sometime in the future we could use it for instruction
selection as well. It didn't really pan out, however:
* Peepmatic wasn't quite flexible enough, and adding new operators or snippets
of code implemented externally in Rust was a bit of a pain.
* The performance was never competitive with the hand-written peephole
optimizers. It was *very* size efficient, but that came at the cost of
run-time efficiency. Everything was table-based and interpreted, rather than
generating any Rust code.
Ultimately, because of these reasons, we never turned Peepmatic on by default.
These days, we just landed the ISLE domain-specific language, and it is better
suited than Peepmatic for all the things that Peepmatic was originally designed
to do. It is more flexible and easier to integrate with external Rust code. It
has better run-time efficiency, meeting or even beating hand-written code. I think a
small part of the reason why ISLE excels in these things is because its design
was informed by Peepmatic's failures. I still plan on continuing Peepmatic's
mission to make Cranelift's peephole optimizer passes generated from DSL rewrite
rules, but using ISLE instead of Peepmatic.
Thank you Peepmatic, rest in peace!
- The Windows line-ending canonicalization was incomplete: we need to
canonicalize the manifest text itself too!
- The "meta deterministic check" runs the cranelift-codegen build script
N times outside of the source tree, examining what it produces to
ensure the output is always the same (is deterministic). This works fine
when everything comes from the internal DSL, but when reading ISLE,
this breaks because we no longer have the ISLE source paths.
The initial ISLE integration did not hit this because without the
`rebuild-isle` feature, it simply did nothing in the build script;
now, with the manifest check, we hit the issue.
The fix for now is just to turn off all ISLE-specific behavior in the
build script by setting a special-purpose Cargo feature in the
specific CI job. Eventually IMHO we should remove or find a better way
to do this check.
On the build side, this commit introduces two things:
1. The automatic generation of various ISLE definitions for working with
CLIF. Specifically, it generates extern type definitions for clif opcodes and
the clif instruction data `enum`, as well as extractors for matching each clif
instruction. This happens inside the `cranelift-codegen-meta` crate.
2. The compilation of ISLE DSL sources to Rust code, that can be included in the
main `cranelift-codegen` compilation.
Next, this commit introduces the integration glue code required to get
ISLE-generated Rust code hooked up in clif-to-x64 lowering. When lowering a clif
instruction, we first try to use the ISLE code path. If it succeeds, then we are
done lowering this instruction. If it fails, then we proceed along the existing
hand-written code path for lowering.
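The glue has roughly this shape; the types below are stubs standing in
for cranelift-codegen's real lowering context:

```rust
struct LowerCtx; // stand-in for the real lowering context
struct Inst;     // stand-in for a CLIF instruction handle
struct CodegenError;

mod isle {
    // The generated matcher returns None when no ISLE rule applies.
    pub fn lower(_ctx: &mut super::LowerCtx, _insn: &super::Inst) -> Option<()> {
        None
    }
}

fn lower_insn(ctx: &mut LowerCtx, insn: &Inst) -> Result<(), CodegenError> {
    // Try the ISLE-generated path first...
    if isle::lower(ctx, insn).is_some() {
        return Ok(());
    }
    // ...and fall back to the existing hand-written lowering otherwise.
    lower_insn_by_hand(ctx, insn)
}

fn lower_insn_by_hand(_ctx: &mut LowerCtx, _insn: &Inst) -> Result<(), CodegenError> {
    Ok(())
}
```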
Finally, this commit ports many lowering rules over from hand-written,
open-coded Rust to ISLE.
In the process of supporting ISLE, this commit also makes the x64 `Inst` capable
of expressing SSA by supporting 3-operand forms for all of the existing
instructions that only have a 2-operand form encoding:
dst = src1 op src2
Rather than only the typical x86-64 2-operand form:
dst = dst op src
This allows `MachInst` to be in SSA form, since `dst` and `src1` are
disentangled.
("3-operand" and "2-operand" are a little bit of a misnomer since not all
operations are binary operations, but we do the same thing for, e.g., unary
operations by disentangling the sole operand from the result.)
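As a sketch of the idea (illustrative types, not the actual x64 `Inst`):
the virtual-register form carries a distinct destination, and regalloc
later ties `dst` to `src1` so emission degenerates to the familiar
2-operand encoding, possibly preceded by a `mov`:

```rust
struct Vreg(u32);

enum AluOp {
    Add,
    Sub,
    And,
    Or,
}

// SSA-friendly 3-operand form: `dst` is a fresh vreg, disentangled from
// both sources, i.e. `dst = src1 op src2`.
struct AluRmiR {
    op: AluOp,
    src1: Vreg,
    src2: Vreg,
    dst: Vreg,
}

// At emission, once dst and src1 are (ideally) the same physical
// register, this becomes:
//
//     mov dst, src1   ; omitted when dst == src1
//     op  dst, src2
```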
There are two motivations for this change:
1. To allow ISLE lowering code to have value-equivalence semantics. We want ISLE
lowering to translate a CLIF expression that evaluates to some value into a
`MachInst` expression that evaluates to the same value. We want both the
lowering itself and the resulting `MachInst` to be pure and referentially
transparent. This is both a nice paradigm for compiler writers that are
authoring and maintaining lowering rules and is a prerequisite to any sort of
formal verification of our lowering rules in the future.
2. Better align `MachInst` with `regalloc2`'s API, which requires that the input
be in SSA form.
This commit adds the `pooling-allocator` feature to both the `wasmtime` and
`wasmtime-runtime` crates.
The feature controls whether or not the pooling allocator implementation is
built into the runtime and exposed as a supported instance allocation strategy
in the wasmtime API.
The feature is on by default for the `wasmtime` crate.
Closes #3513.
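With the feature enabled, opting in looks roughly like this. This is a
sketch against the wasmtime API of this era; type and field names may
differ across versions:

```rust
use wasmtime::{Config, Engine, InstanceAllocationStrategy};

fn engine_with_pooling() -> anyhow::Result<Engine> {
    let mut config = Config::new();
    // Pre-allocate pools of instances/memories up front rather than
    // allocating on demand.
    config.allocation_strategy(InstanceAllocationStrategy::Pooling {
        strategy: Default::default(),
        module_limits: Default::default(),
        instance_limits: Default::default(),
    });
    Engine::new(&config)
}
```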
Before this commit we actually have two builders checking for security
advisories on CI, one is `cargo audit` and one is `cargo deny`. The
`cargo deny` builder is slightly different in that it checks a few other
things about our dependency tree such as licenses, duplicates, etc. This
commit removes the advisory check from `cargo deny` on CI and then moves
the `cargo audit` check to a separate workflow.
The `cargo audit` check will now run nightly and will open an issue on
the Wasmtime repository when an advisory is found. This should help make
it such that our CI is never broken by the publication of an advisory
but we're still promptly notified whenever an advisory is made. I've
updated the release process notes to indicate that the open issues
should be double-checked to ensure that there are no open advisories
that we need to take care of.
This commit removes the Lightbeam backend from Wasmtime as per [RFC 14].
This backend hasn't received maintenance in quite some time, and as [RFC
14] indicates this doesn't meet the threshold for keeping the code
in-tree, so this commit removes it.
A fast "baseline" compiler may still be added in the future. The
addition of such a backend should be in line with [RFC 14], though, with
the principles we now have for stable releases of Wasmtime. I'll close
out Lightbeam-related issues once this is merged.
[RFC 14]: https://github.com/bytecodealliance/rfcs/pull/14
* Use rsix to make system calls in Wasmtime.
`rsix` is a system call wrapper crate that we use in `wasi-common`,
which can provide the following advantages in the rest of Wasmtime:
- It eliminates some `unsafe` blocks in Wasmtime's code. There's
still an `unsafe` block in the library, but this way, the `unsafe`
is factored out and clearly scoped.
- And, it makes error handling more consistent, factoring out code for
checking return values and `io::Error::last_os_error()`, and code that
does `errno::set_errno(0)`.
This doesn't cover *all* system calls; `rsix` doesn't implement
signal-handling APIs, and this doesn't cover calls made through `std` or
crates like `userfaultfd`, `rand`, and `region`.
* Finished the Markdown Parser Example for Wasmtime
* Made requested changes
* Tiny change to explanation of `--dir` CLI arg
* Add `bash` annotations to shell script code blocks
* Trying to fix the markdown example bug
* Figured out rustdoc, and what needed to be done
* Made requested changes
Co-authored-by: Till Schneidereit <till@tillschneidereit.net>
* Remove unnecessary into_iter/map
Forgotten from a previous refactoring, this variable was already of the
right type!
* Move `wasmtime_jit::Compiler` into `wasmtime`
This `Compiler` struct is mostly a historical artifact at this point and
wasn't necessarily pulling much weight any more. This organization also
doesn't lend itself super well to compiling out `cranelift` when the
`Compiler` here is used for both parallel iteration configuration
settings as well as compilation.
The movement into `wasmtime` is relatively small, with
`Module::build_artifacts` being the main function added here which is a
merging of the previous functions removed from the `wasmtime-jit` crate.
* Add a `cranelift` compile-time feature to `wasmtime`
This commit concludes the saga of refactoring Wasmtime and making
Cranelift an optional dependency by adding a new Cargo feature to the
`wasmtime` crate called `cranelift`, which is enabled by default.
This feature is implemented by having a new cfg for `wasmtime` itself,
`cfg(compiler)`, which is used wherever compilation is necessary. This
bubbles up to disable APIs such as `Module::new`, `Func::new`,
`Engine::precompile_module`, and a number of `Config` methods affecting
compiler configuration. Checks are added to CI that when built in this
mode Wasmtime continues to successfully build. It's hoped that although
this is effectively "sprinkle `#[cfg]` until things compile" this won't
be too too bad to maintain over time since it's also a use case we're
interested in supporting.
With `cranelift` disabled the only way to create a `Module` is with the
`Module::deserialize` method, which requires some form of precompiled
artifact.
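In that configuration, loading looks like this (a sketch; note that
whether `Module::deserialize` is an `unsafe` function has varied across
Wasmtime versions):

```rust
use wasmtime::{Engine, Module};

fn load_precompiled(engine: &Engine, bytes: &[u8]) -> anyhow::Result<Module> {
    // The only way to construct a Module when the `cranelift` feature is
    // disabled: the bytes must come from a trusted, earlier serialization.
    // (An `unsafe` block is harmless in versions where the function is
    // safe -- it only triggers an unused_unsafe lint.)
    unsafe { Module::deserialize(engine, bytes) }
}
```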
Two consequences of this change are:
* `Module::serialize` is also disabled in this mode. The reason for this
is that serialized modules contain ISA/shared flags encoded in them
which were used to produce the compiled code. There's no storage for
this if compilation is disabled. This could probably be re-enabled in
the future if necessary, but it may not end up being all that necessary.
* Deserialized modules are not checked to ensure that their ISA/shared
flags are compatible with the host CPU. This is actually already the
case, though, with normal modules. We'll likely want to fix this in
the future using a shared implementation for both these locations.
Documentation should be updated to indicate that `cranelift` can be
disabled, although it's not really the most prominent documentation
because this is expected to be a somewhat niche use case (albeit
important, just not too common).
* Always enable cranelift for the C API
* Fix doc example builds
* Fix check tests on GitHub Actions
* Reimplement how unwind information is stored
This commit is a major refactoring of how unwind information is stored
after compilation of a function has finished. Previously we would store
the raw `UnwindInfo` as a result of compilation and this would get
serialized/deserialized alongside the rest of the ELF object that
compilation creates. Whenever functions were registered with
`CodeMemory` this would also result in registering unwinding information
dynamically at runtime, which in the case of Unix, for example, would
dynamically create FDE/CIE entries on-the-fly.
Eventually I'd like to support compiling Wasmtime without Cranelift, but
this means that `UnwindInfo` wouldn't be easily available to decode into
and create unwinding information from. To solve this I've changed the
ELF object created to have the unwinding information encoded into it
ahead-of-time so loading code into memory no longer needs to create
unwinding tables. This change has two different implementations for
Windows/Unix:
* On Windows the implementation was much easier. The unwinding
information on Windows is already stored after the function itself in
the text section. This was actually slightly duplicated in object
building and in code memory allocation. Now the object building
continues to do the same, recording unwinding information after
functions, and code memory no longer manually tracks this.
Additionally Wasmtime will emit a special custom section in the object
file with unwinding information which is the list of
`RUNTIME_FUNCTION` structures that `RtlAddFunctionTable` expects. This
means that the object file has all the information precompiled into it
and registration at runtime is simply passing a few pointers around to
the runtime.
* Unix was a little bit more difficult than Windows. Today a `.eh_frame`
section is created on-the-fly with offsets in FDEs specified as the
absolute address that functions are loaded at. This absolute
address hindered the ability to precompile the FDE into the object
file itself. I've switched how addresses are encoded, though, to using
`DW_EH_PE_pcrel` which means that FDE addresses are now specified
relative to the FDE itself. This means that we can maintain a fixed
offset between the `.eh_frame` loaded in memory and the beginning of
code memory. When doing so this enables precompiling the `.eh_frame`
section into the object file and at runtime when loading an object no
further construction of unwinding information is needed.
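The arithmetic behind `DW_EH_PE_pcrel` is simple; as a sketch:

```rust
// Encode: store the target's distance from the FDE field itself...
fn encode_pcrel(field_addr: u64, target_addr: u64) -> i64 {
    target_addr.wrapping_sub(field_addr) as i64
}

// ...so the loader recovers the target from wherever the sections
// actually landed, as long as their relative offset is fixed.
fn decode_pcrel(field_addr: u64, offset: i64) -> u64 {
    field_addr.wrapping_add(offset as u64)
}
```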
The overall result of this commit is that unwinding information is no
longer stored in its cranelift-data-structure form on disk. This means
that this unwinding information format is only present during
compilation, which will make it that much easier to compile out
cranelift in the future.
This commit also significantly refactors `CodeMemory` since the way
unwinding information is handled is now much different from before.
Previously `CodeMemory` was suitable for incrementally adding more and
more functions to it, but nowadays a `CodeMemory` either lives per
module (in which case all functions are known up front) or it's created
once-per-`Func::new` with two trampolines. In both cases we know all
functions up front so the functionality of incrementally adding more and
more segments is no longer needed. This commit removes the ability to
add a function-at-a-time in `CodeMemory` and instead it can now only
load objects in their entirety. A small helper function is added to
build a small object file for trampolines in `Func::new` to handle
allocation there.
Finally, this commit also folds the `wasmtime-obj` crate directly into
the `wasmtime-cranelift` crate and its builder structure to be more
amenable to this strategy of managing unwinding tables.
It is not intentional to have any real functional change as a result of
this commit. This might accelerate loading a module from cache slightly
since less work is needed to manage the unwinding information, but
that's just a side benefit from the main goal of this commit which is to
remove the dependence on cranelift unwinding information being available
at runtime.
* Remove isa reexport from wasmtime-environ
* Trim down reexports of `cranelift-codegen`
Remove everything non-essential so that only the bits which will need to
be refactored out of cranelift remain.
* Fix debug tests
* Review comments
* Port wasi-common to io-lifetimes.
This ports wasi-common from unsafe-io to io-lifetimes.
Ambient authority is now indicated via calls to `ambient_authority()`
from the ambient-authority crate, rather than using `unsafe` blocks.
The `GetSetFdFlags::set_fd_flags` function is now split into two phases,
to simplify lifetimes in implementations which need to close and re-open
the underlying file.
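For example, opening a directory from ambient authority now looks like
this with the cap-std API:

```rust
use cap_std::ambient_authority;
use cap_std::fs::Dir;

fn open_preopen(path: &str) -> std::io::Result<Dir> {
    // The capability escape hatch is an explicit, greppable value
    // rather than an `unsafe` block.
    Dir::open_ambient_dir(path, ambient_authority())
}
```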
* Use posish for errno values instead of libc.
This eliminates one of the few remaining direct libc dependencies.
* Port to posish::io::poll.
Use posish::io::poll instead of calling libc directly. This factors out
more code from Wasmtime, and eliminates the need to manipulate raw file
descriptors directly.
And, this eliminates the last remaining direct dependency on libc in
wasi-common.
* Port wasi-c-api to io-lifetimes.
* Update to posish 0.16.0.
* Embedded NULs in filenames now get `EINVAL` instead of `EILSEQ`.
* Accept either `EILSEQ` or `EINVAL` for embedded NULs.
* Bump the nightly toolchain to 2021-07-12.
This fixes build errors on the semver crate, which as of this writing
builds with latest nightly and stable but not 2021-04-11, the old pinned
version.
* Have cap-std-sync re-export ambient_authority so that users get the same version.
Implement Wasmtime's new API as designed by RFC 11. This is quite a large commit that has seen lots of discussion externally, so for more information it's best to read the RFC thread and the PR thread.
This commit attempts to slim down our CI (more from #2933) by removing
testing both in debug and release mode. I can't actually recall a
concrete issue that this has turned up on CI itself, and otherwise we're
spending quite a lot of time building all of the dev-dependencies in
release mode when testing.
Additionally it removes testing for nightly/beta channels of Rust. One
of the main benefits of this, staying on top of breakage, is already
moot because we pin to a nightly anyway. We have a few nightly
references elsewhere in CI (fuzzing/docs) so we can largely rely on that
(and upstream testing with rust-lang/rust). We in general shouldn't need
to do nightly/beta testing on all builds.
The release builders were actually the only location that MinGW and
AArch64 was tested however. This means that the old nightly/beta
builders are now replaced with AArch64 and MinGW builders. Overall, the
changes made to CI here are:
* Upgrade to QEMU 6.0.0. I thought this would make aarch64 emulation
faster, but it didn't. Seems good to stay up to date though.
* Replace nightly/beta testing in debug mode with MinGW and AArch64 testing.
* Use `-g0` for C compilation on MinGW because otherwise `gcc` as used
on CI generates an ICE (!!)
* Exclude `wasi-crypto` from testing. We already exclude
`wasmtime-wasi-crypto` and it was an accident we were testing the
`wasi-crypto` crate (which isn't even part of this workspace).
* Remove testing DWARF on the old backend step, which nowadays didn't
actually do that.
* Remove testing on release builders, making them purely tasked with
release builds, nothing else.
* Rename `QEMU_VERSION` to `QEMU_BUILD_VERSION` so qemu doesn't just
immediately exit after printing its version.
Timing wise the release builds are ~20-30 minutes faster, depending on
the platform. This is not really because of testing time but rather we
have a huge dependency tree when `dev-dependencies` are considered
(criterion, tokio, proptest, ...).
MinGW tests are pretty fast since we don't run examples (we're not too
interested in doing examples there, just windows/mac/linux coverage).
AArch64 tests are run with optimizations enabled because unoptimized
tests take ~45 minutes to finish while optimized tests take ~20 minutes.
The build is naturally much faster in debug mode but apparently under
QEMU emulation the debug mode binaries are *extremely* slow compared to
the release binaries, which means that extra time we spend compiling
release tests is more than made up by faster test emulation time.
Closes #2938
This commit removes the publish step in GitHub Actions, instead
folding all functionality into the release build steps. This avoids
having a separately scheduled job after all the release build jobs which
ends up getting delayed for quite a long time given the current
scheduling algorithm.
This involves refactoring the tarball assembly scripts and the
GitHub asset upload script too. Tarball assembly now manages everything
internally and does platform-specific bits where necessary. The upload
script is restructured to be run in parallel (in theory) and hopefully
catches various errors and tries to not stomp over everyone else's work.
The main trickiness here is handling `dev`, which is less critical for
correctness than the tags themselves.
As a small tweak build-wise the QEMU build for cross-compiled builders
is now cached unlike before where it was unconditionally built, shaving
a minute or two off build time.
Also move the gh-pages pushing step from the `publish` phase to just
this singular doc builder.
The motivation for this is to eventually remove the `publish` step since
it interacts badly with GitHub's scheduling of actions. This is
hopefully the first step towards that by removing the doc publish part
of the phase.
* First remove `fail-fast: false` annotations to fail faster. If desired
this could always be added in a one-off fashion to PRs.
* Next use the new `concurrency` feature to try to cancel previous
builds, ideally meaning that if a branch is pushed to multiple times
it only runs CI once.
Looks like GitHub Actions takes 10m+ to upload the documentation and
nearly 10 minutes to download it. I suspect this has to do with the
creation of thousands of files, and using `tar` here is likely much
faster. Let's test it out!