* Speed up index fetches on CI
Use the `sparse` protocol from Rust 1.68.0 which should shave a minute
or two off most steps on CI.
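For reference, one way to opt in on CI is via an environment variable that
Cargo 1.68+ understands; the snippet below is an illustrative sketch rather
than the exact change made here.

```yaml
# Hypothetical workflow excerpt: with this set, Cargo fetches the crates.io
# index over the sparse HTTP protocol instead of cloning the full git index.
env:
  CARGO_REGISTRIES_CRATES_IO_PROTOCOL: sparse
```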
* Update nightly toolchains in CI
prtest:full
* Fix date
Aside from a few new features (notably automatic registry suggestions), this
release removes the need to import descriptions for criteria that are not
directly used, and adds an explicit version to the cargo-vet instance.
As noted in #5954, this will report `cargo vet` status checks on PRs that
modify the `supply-chain` directory, in addition to the `Cargo.lock`
modifications that already trigger these checks.
I saw some PRs fail this step earlier today due to rate limits, but that
didn't fail the entire PR's CI because the step wasn't listed in the
final set of dependencies, so add it there.
* Don't run LLDB tests on PRs
These take an extra minute or so, so only run them on the full test
suite of a merge instead of on all PRs as well.
* Add a test for the x64 isa files
This guarantees that if Cranelift's x64 backend is modified then the
tests will be run on a PR, even if other backends were also modified.
These mostly only validate changes to `Cargo.lock`, so skip these checks
by default on PRs, which generally never need to trigger them. If
`Cargo.lock` changes, however, then run them on PRs.
GitHub recently made its merge queue feature available for use in public
repositories owned by organizations meaning that the Wasmtime repository
is a candidate for using this. GitHub's Merge Queue feature is a system
that's similar to Rust's bors integration where PRs are tested before
merging and only passing PRs are merged. This implements the "not rocket
science" rule where the `main` branch of Wasmtime, for example, is
always tested and passes CI. This is in contrast to our current
implementation of CI where PRs are merged when they pass their own CI,
but the code that was tested is not guaranteed to be the state of `main`
when the PR is merged, meaning that we're at risk now of a failing
`main` branch despite all merged PRs being green. While this has happened
with Wasmtime, it is not a common occurrence.
The main motivation, instead, to use GitHub's Merge Queue feature is
that it will enable Wasmtime to greatly reduce the amount of CI running
on PRs themselves. Currently the full test suite runs on every push to
every PR, meaning that our workers on GitHub Actions are frequently
clogged throughout weekdays and PRs can take quite some time to come
back with a successful run. Through the use of a Merge Queue, however,
we're able to configure only a small handful of checks to run on PRs
while deferring the main body of checks to happening on the
merge-via-the-queue itself. The hope is that this will free up capacity
on CI and improve overall CI times for Wasmtime and Cranelift developers.
The implementation of all of this required quite a lot of plumbing and
retooling of our CI. I've been testing this in an [external
repository][testrepo] and I think everything is working now. A list of
changes made in this PR are:
* The `build.yml` workflow is merged back into the `main.yml` workflow
as the original reason to split it out is no longer applicable (it'll
run on all merges). This was also done to fit it into the dependency
graph of jobs within one workflow.
* Publication of the `gh-pages` branch, the `dev` tag artifacts, and
release artifacts have been moved to a separate
`publish-artifacts.yml` workflow. This workflow runs on all pushes to
`main` and all tags. This workflow no longer actually performs any
builds, however, and relies on a merge queue or similar being used for
branches/tags where artifacts are downloaded from the workflow run to
be uploaded. For pushes to `main` this works because a merge queue is
run meaning that by the time the push happens all artifacts are ready.
For release branches this is handled by...
* The `push-tag.yml` workflow is subsumed by the `main.yml` workflow. CI
for a tag being pushed will upload artifacts to a release in GitHub,
meaning that all builds must finish first for the commit. The
`main.yml` workflow at the end now scans commits for the preexisting
magical marker and pushes a tag if necessary.
* CI is currently a flat list of "run all these jobs" and this is now
rearchitected to a "fan out" approach where some jobs run to determine
the next jobs to run which then get "joined" into a finish step. The
purpose for this is somewhat nuanced and this has implications for CI
runtime as well. The Merge Queue feature requires branches to be
protected with "these checks must pass", and the same checks gate both
entering the merge queue and passing it. The saving grace, however, is
that a "skipped" check counts as passing, meaning checks can be skipped
on PRs but run to completion on the merge queue. A problem with this,
though, is the build matrix used for tests: ideally PRs would only run
one element of the build matrix, but there's currently no easy way on
GitHub Actions for the skipped entries to show up as skipped (or none
that I know of). This means that the "join" step serves as the single
gate for both PR and merge-queue CI, with merge-queue CI simply having
more inputs to it (see the sketch after this list). The major
consequence of this decision is that
GitHub's actions scheduling doesn't work out well here. Jobs are
scheduled in a FIFO order meaning that the job for "ok complete the CI
run" is queued up after everything else has completed, possibly
after lots of other CI requests in the middle for other PRs. The hope
here is that by using a merge queue we can keep CI relatively under
control and this won't affect merge times too much.
* All jobs in the `main.yml` workflow will now automatically cancel the
entire run if they fail. Previously this fail-fast behavior was only
part of the matrix runs (and just for that matrix), but this is
required to make the merge queue expedient. The gate of the merge
queue is the final "join" step which is only executed once all
dependencies have finished. This means, for example, that if rustfmt
fails quickly then the tests, which take longer, might run for quite a
while before the join step reports failure, meaning that the PR sits
in the queue longer than needed being tested when we know it's already
going to fail. By having all jobs cancel the run, failures immediately
bail out and mark the whole run as cancelled.
* A new "determine" CI job was added to determine what CI actually needs
to run. This is a "choke point" which is scheduled at the start of CI
that quickly figures out what else needs to be run. This notably
indicates, via a `run-full` flag, whether large swaths of CI such as
the build matrix are executed. Additionally this dynamically calculates a
matrix of tests to run based on a new `./ci/build-test-matrix.js`
script. Various inputs are considered for this such as:
1. All pushes, meaning merge queue branches or release-branch merges,
will run full CI.
2. PRs to release branches will run full CI.
3. PRs to `main`, the most common, determine what to run based on
what's modified and what's in the commit message.
Some examples for (3) above: if modifications are made to
`cranelift/codegen/src/isa/*` then the corresponding builder is
executed on CI. If the `crates/c-api` directory is modified then the
CMake-based tests are run on PRs but are otherwise skipped.
Annotations in commit messages such as `prtest:*` can be used to
explicitly request testing.
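As a rough illustration of the shape described above (job names, outputs,
and the script's interface here are assumptions, not the actual
`main.yml`):

```yaml
on:
  pull_request:
  merge_group:        # merge-queue builds
  push:
    branches: [main]

jobs:
  determine:
    runs-on: ubuntu-latest
    outputs:
      run-full: ${{ steps.calc.outputs.run-full }}
      test-matrix: ${{ steps.calc.outputs.test-matrix }}
    steps:
      - uses: actions/checkout@v3
      # The real ./ci/build-test-matrix.js interface may differ; this only
      # shows outputs being computed dynamically from the event and the diff.
      - id: calc
        run: node ./ci/build-test-matrix.js >> "$GITHUB_OUTPUT"

  test:
    needs: determine
    strategy:
      matrix: ${{ fromJSON(needs.determine.outputs.test-matrix) }}
    runs-on: ${{ matrix.os }}
    steps:
      - run: cargo test

  build:
    # Skipped on most PRs; a "skipped" job counts as passing for the gate.
    needs: determine
    if: needs.determine.outputs.run-full == 'true'
    runs-on: ubuntu-latest
    steps:
      - run: cargo build --release

  ci-status:
    # The single "join" gate required by branch protection for both PRs and
    # the merge queue; it fails if any dependency failed or was cancelled,
    # and passes otherwise (including when jobs were skipped).
    needs: [determine, test, build]
    if: always()
    runs-on: ubuntu-latest
    steps:
      - if: contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')
        run: exit 1
```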
Before this PR, merges to `main` would perform two full runs of CI: one
on the PR itself and one on the merge to `main`. Note that the run for
the merge to `main` was quite frequently cancelled due to a later merge.
Additionally, before this PR there was always the risk of a bad merge
where what was merged ended up creating a `main` that failed CI due to
a non-code-related merge conflict.
After this PR, merges to `main` will perform one full run of CI, the one
as part of the merge queue. PRs themselves will, most of the time,
perform just one test job. The `main` branch is additionally always
guaranteed to pass tests via the merge queue feature.
For release branches, before this PR, merges would perform two full
builds - one for the PR and one for the merge. A third build was then
required for the release tag itself. This is now cut down to two full
builds, one for the PR and one for the merge. The reason for this is
that the merge queue feature currently can't be used for our
wildcard-based `release-*` branch protections. It is now possible,
however, to turn on required CI checks for PRs to `release-*` branches,
so we can at least have a "hit the button and forget" strategy for
merging those PRs.
Note that this change to CI is not without its risks. The Merge Queue
feature is still in beta and is quite new for GitHub. One bug that
Trevor and I uncovered is that if a PR is being tested in the merge
queue and a contributor pushes to their PR then the PR isn't removed
from the merge queue but is instead merged when CI is successful, losing
the changes that the contributor pushed (what's merged is what was
tested). We suspect that GitHub will fix this, however.
Additionally though there's the risk that this may increase merge time
for PRs to Wasmtime in practice. The Merge Queue feature has the ability
to "batch" PRs together for a merge but this is only done if concurrent
builds are allowed. This means that if 5 PRs are batched together then 5
separate merges would be created for the stack of 5 PRs. If the CI for
all 5 merged together passes then everything is merged, otherwise a PR
is kicked out. We can't easily do this, however, since a major purpose
of the merge queue for us would be to cut down on usage of CI builders,
so the max concurrency would be set to 1, meaning that only one PR at a
time will be merged. This means PRs may sit in the queue for a while:
previously many `main`-based builds were cancelled due to subsequent
merges of other PRs, but now they must all run to 100% completion.
[testrepo]: https://github.com/bytecodealliance/wasmtime-merge-queue-testing
This fixes the build issue identified in #5664 at the toolchain level
rather than working around it in our own build. The next step in fixing
this will be to remove the nightly override in the future when the
toolchain becomes stable.
This allows the `wasmtime` binary provided in our release artifacts to
cross-compile: `wasmtime compile` can build a `.cwasm` for any platform
that Wasmtime supports, not just the host platform. This may be useful
in some deployment scenarios.
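For example, an invocation along the lines of `wasmtime compile --target
aarch64-unknown-linux-gnu module.wasm` (shown for illustration) can now
produce a `.cwasm` on an x86_64 host that's usable on an AArch64 Linux
machine.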
We don't turn on `all-arch` by default because it increases build time
and binary size of Wasmtime itself, and other embedders of the
`wasmtime` crate won't necessarily want this; hence, we set it only as
part of the CI build configuration.
Fixes #5655.
We accidentally broke the build for Android when introducing the jit-icache-coherence
crate. To avoid this happening again, add a check job just to ensure that it can build.
See #5323 and #5331 for context.
We accidentally broke the build for FreeBSD when introducing the jit-icache-coherence
crate. To avoid this happening again, add a check job just to ensure that it can build.
See #5323 and #5331 for context.
* Use the `sym` operator for inline assembly
Avoids extra `#[no_mangle]` functions and undue symbols being exposed
from Wasmtime; the `sym` operand is a newly stabilized feature in Rust
1.66.0 (a sketch follows this list). I've also added a `rust-version`
entry to the `wasmtime` crate to try to head off future reports of odd
error messages or usage of unstable features when the rustc version is
too old.
* Fix a s390x warning
* Add `rust-version` annotation to Wasmtime crate
As the other main entrypoint for embedders.
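For illustration only (this is not Wasmtime's actual assembly), the shape
of a `sym` operand, which lets `asm!` name a Rust function's symbol
directly instead of exporting it with `#[no_mangle]`:

```rust
use std::arch::asm;

extern "C" fn helper(arg: i32) -> i32 {
    arg * 2
}

#[cfg(target_arch = "x86_64")]
fn call_helper(arg: i32) -> i32 {
    let result: i32;
    unsafe {
        asm!(
            "call {}",
            // `sym` resolves to helper's (mangled) symbol at link time, so no
            // `#[no_mangle]` export is needed and no extra symbol leaks out.
            sym helper,
            in("rdi") arg,
            out("rax") result,
            clobber_abi("C"),
        );
    }
    result
}
```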
CI is currently broken because `ubuntu-latest` moved to 22.04, which is
missing at least one package (`libclang1-9` used in our CI jobs) and may
be causing other issues as well.
This PR pins us back to 20.04; separately we should look into upgrading
when issues are resolved.
This hopefully will speed things up slightly since ASan can add a chunk
to the compile time. Unsure if this will actually help, though, so let's
find out!
* wiggle: no longer need to guard wasmtime integration behind a feature
this existed so we could use wiggle in lucet, but lucet is long EOL
* replace wiggle::Trap with wiggle::wasmtime_crate::Trap
* wiggle tests: unwrap traps because we can't assert_eq on them
* wasi-common: emit a wasmtime::Trap instead of a wiggle::Trap
formally add a dependency on wasmtime here to make it obvious, though
we do now have a transitive one via wiggle no matter what (and therefore
can get rid of the default-features=false on the wiggle dep)
* wasi-nn: use wasmtime::Trap instead of wiggle::Trap
there's no way the implementation of this func is actually
a good idea; it will panic the host process on any error,
but I'll ask @mtr to fix that
* wiggle test-helpers examples: fixes
* wasi-common can't cross compile to wasm32-unknown-emscripten anymore
this was originally for the WASI polyfill for web targets. Those days
are way behind us now.
* wasmtime won't compile for armv7-unknown-linux-gnueabihf either
* Upgrade our github actions to "node16"
Each github actions run has a lot of warnings about using node12 so this
upgrades our repository to using node16. I'm hoping no other changes are
needed and I suspect other actions we're using are on node12 and will
need further updates, but this should help pin down what's remaining.
* Update `actions/checkout` workflow to `v3`
* Update to `actions/cache@v3`
* Update to `actions/upload-artifact@v3`
* Drop usage of `actions-rs/toolchain`
* Update to `actions/setup-python@v4`
* Update mdbook version
This commit fixes the `push-tag.yml` workflow to work with the new
`Cargo.toml` manifest since workspace inheritance was added. This
additionally fixes some warnings coming up on CI about our usage of
deprecated features on github actions.
In #5023 we are seeing a failing CI job (see [1]); after four attempted
restarts, it 404's each time when trying to download OpenVINO from the
Intel apt mirrors.
This PR temporarily removes the wasi-nn CI job from our CI configuration
so that we have green CI and can merge other work.
[1] https://github.com/bytecodealliance/wasmtime/actions/runs/3200861896/jobs/5228903240
* Make wasmtime build for windows-aarch64
* Add check for win arm64 build.
* Fix checks for winarm64 key in workflows.
* Add target in windows arm64 build.
* Add tracking issue for Windows ARM64 trap handling
* Leverage Cargo's workspace inheritance feature
This commit is an attempt to reduce the complexity of the Cargo
manifests in this repository with Cargo's workspace-inheritance feature
becoming stable in Rust 1.64.0. This feature allows specifying fields in
the root workspace `Cargo.toml` which are then reused throughout the
workspace. For example this PR shares definitions such as:
* All of the Wasmtime-family of crates now use `version.workspace =
true` to have a single location which defines the version number.
* All crates use `edition.workspace = true` to have one default edition
for the entire workspace.
* Common dependencies are listed in `[workspace.dependencies]` to avoid
typing the same version number in a lot of different places (e.g.
`wasmparser = "0.89.0"` is now in just one spot); see the manifest
sketch at the end of this entry.
Currently the workspace-inheritance feature doesn't allow having two
different versions to inherit, so all of the Cranelift-family of crates
still manually specify their version. The inter-crate dependencies,
however, are shared amongst the root workspace.
This feature can be seen as a method of "preprocessing" of sorts for
Cargo manifests. This will help us develop Wasmtime but shouldn't have
any actual impact on the published artifacts -- every crate's dependency
list is still the same.
* Fix wasi-crypto tests
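A minimal sketch of the layout described above; crate names, the version
number, and the `members` glob are illustrative, while the inherited
fields mirror what this PR does:

```toml
# --- root Cargo.toml ---
[workspace]
members = ["crates/*"]        # illustrative

[workspace.package]
version = "0.0.0"             # single source of truth for the version (value illustrative)
edition = "2021"              # one default edition for the workspace

[workspace.dependencies]
wasmparser = "0.89.0"         # shared version, written once

# --- a member crate's Cargo.toml ---
[package]
name = "example-member"       # illustrative
version.workspace = true
edition.workspace = true

[dependencies]
wasmparser = { workspace = true }
```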
The release process failed last night due to me filling out the dates in
the release notes early (rather than leaving "Unreleased"), which meant
there were no changes for each commit. Switch to passing `--allow-empty`
when making a commit to prevent this.
* Adds a github action to support x64 performance testing using sightglass
This github action allows performance testing using sightglass. The
action is triggered either via a workflow dispatch or with the comment
'/bench_x64', in a pull request. Once triggered the action will send
a request to a private repository that supports using a self-hosted runner
to do comparisons of "refs/feature/commit" vs "refs/heads/main" for
wasmtime. If the action is triggered via a comment in a pull request
(with '/bench_x64') then the commit referenced by the pull request is used
for the comparison against refs/heads/main. If triggered via a workflow
dispatch the interface will request the commit to compare against
refs/heads/main. The results of the performance tests, run via sightglass,
will be a table showing the percentage change in clock ticks for the
various stages required for executing the benchmark, namely
instantiation, compilation, and execution. This patch is intended to be
just a starting point with much to tweak and improve. One of the TODOs
will be adding support for aarch64; currently this patch supports only
x64. Note also that the logic for actually doing the comparison and
parsing the results lives in the action associated with the private
repo, so this patch itself (just the trigger) is fairly straightforward.
* Refactor patch to consolidate all steps to here.
* Remove unused code
* Removes unused pull_request_review_comment trigger
* Match trigger word when contained anywhere in the pull request review message
* Remove redundant repo and ref variables for wasmtime_commit
* Minor comment update
* Remove command to install jq
* Remove printing of git config variables being used
* Fix token for posting results
* Update message explaining pct_change for benchmark results
* Revert TOKEN for publish change
* Update message explaining results
* [fuzz] Remove some differential fuzz targets
The changes in #4515 do everything the `differential_spec` and
`differential_wasmi` fuzz targets already do. These fuzz targets are now
redundant and this PR removes them. It also updates the fuzz
documentation slightly.
* Add source tarballs to our releases
This commit adds a small script to create a source tarball as part of
the release process. This goes further than requested by #3808 by
vendoring all Rust dependencies as well to be more in line with
"download the source once then build somewhere without a network".
Vendoring the Rust dependencies makes the tarball pretty beefy (67M
compressed, 500M uncompressed). Unfortunately most of this size comes
from vendored crates such as v8, pqcrypto-kyber, winapi, capstone-sys,
plotters, and web-sys. Only `winapi` in this list is actually needed for
`wasmtime`-the-binary, and only on Windows, but for now this is
the state of things related to `cargo vendor`. If this becomes an issue
we could specifically remove the bulky contents of crates in the
`vendor` directory such as `v8` since it's only used for fuzzing.
Closes #3808
* Review feedback
* Review comments
* Unconditionally enable component-model tests
* Remove an outdated test that wasn't previously being compiled
* Fix a component model doc test
* Try to decrease memory usage in qemu
* Add cmake compatibility to c-api
* Add CMake documentation to wasmtime.h
* Add CMake instructions in examples
* Modify CI for CMake support
* Use correct rust in CI
* Trigger build
* Refactor run-examples
* Reintroduce example_to_run in run-examples
* Replace run-examples crate with cmake
* Fix markdown formatting in examples readme
* Fix cmake test quotes
* Build rust wasm before cmake tests
* Pass CTEST_OUTPUT_ON_FAILURE
* Another cmake test
* Handle os differences in cmake test
* Fix bugs in memory and multimemory examples
With branch protections enabled that would otherwise mean that the PR
cannot be landed, since CI is now required to run. These date-update PRs
typically come at odd off-hours for Wasmtime anyway, so it should be fine
to run CI.
This PR removes all minutes and agendas in `meetings/`. These were
previously hosted in this repository, but we found that it makes things
somewhat more complex with respect to CI configuration and merge
permissions to have both small, CI-less changes to the text in
`meetings/` as well as changes to everything else in one repository.
The minutes and agendas have been split out into the repository at
https://github.com/bytecodealliance/meetings/, with all history
preserved. Future agenda additions and minutes contributions should go
there as PRs.
Finally, this PR adds a small note to our "Contributing" doc to note the
existence of the meetings and invite folks to ask to join if interested.
* ci: replace OpenVINO installer action
To test wasi-nn, we currently use an OpenVINO backend. The Wasmtime CI
must install OpenVINO using a custom GitHub action. This CI action has
not been updated in some time and in the meantime OpenVINO (and the
OpenVINO crates) have released several new versions.
https://github.com/abrown/install-openvino-action is an external action
that we plan to keep up to date with the latest releases. This change
replaces the current CI action with that one.
* wasi-nn: upgrade openvino dependency to v0.4.1
This eliminates a `lazy_static` dependency and changes a few parameters
to pass by reference. Importantly, it enables support for the latest
versions of OpenVINO (v2022.*) in wasi-nn.
* ci: update wasi-nn script to source correct env script
* ci: really use the correct path for the env script
Also, clarify which directory OpenVINO is installed in (the symlink may
not be present).
Now the fiber implementation on AArch64 authenticates function
return addresses and includes the relevant BTI instructions, except
on macOS.
Also, change the locations of the saved FP and LR registers on the
fiber stack to make them compliant with the Procedure Call Standard
for the Arm 64-bit Architecture.
Copyright (c) 2022, Arm Limited.
Right now the CI test job runs `cargo test --features component-model`
and runs all tests with this feature enabled, which takes a bit of time,
especially on our emulation-based targets. This seems to have become the
critical path, at least in some CI jobs I've been watching. This PR
restricts these runs to only component-model-specific tests when the
feature is enabled.
* sorta working in runtime
* wasmtime-runtime: get rid of wasm-backtrace feature
* wasmtime: factor to make backtrace recording optional (not configurable yet; see the sketch after this list)
* get rid of wasm-backtrace features
* trap tests: now a Trap optionally contains backtrace
* eliminate wasm-backtrace feature
* code review fixes
* ci: no more wasm-backtrace feature
* c_api: backtraces always enabled
* config: unwind required by backtraces and ref types
* plumbed
* test that disabling backtraces works
* code review comments
* fuzzing generator: wasm_backtrace is a runtime config now
* doc fix
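A small sketch of the resulting runtime toggle, using the
`Config::wasm_backtrace` knob in the `wasmtime` embedding API
(illustrative; not code from this PR):

```rust
use wasmtime::{Config, Engine};

fn engine_without_backtraces() -> anyhow::Result<Engine> {
    let mut config = Config::new();
    // Backtrace capture is now a runtime setting rather than a Cargo feature.
    config.wasm_backtrace(false);
    Engine::new(&config)
}
```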
* Initial skeleton of some component model processing
This commit is the first of what will likely be many to implement the
component model proposal in Wasmtime. This will be structured as a
series of incremental commits, most of which haven't been written yet.
My hope is that making this incremental will, over time, make it easier
to review and easier to test each step in isolation.
Here much of the skeleton of how components are going to work in
Wasmtime is sketched out. This is not a complete implementation of the
component model, so it's not all that useful yet, but some things you can
do are (a rough embedding sketch follows this list):
* Process the type section into a representation amenable for working
with in Wasmtime.
* Process the module section and register core wasm modules.
* Process the instance section for core wasm modules.
* Process core wasm module imports.
* Process core wasm instance aliasing.
* Ability to compile a component with core wasm embedded.
* Ability to instantiate a component with no imports.
* Ability to get functions from this component.
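To make those capabilities concrete, a rough embedding sketch using the
component API as it exists in wasmtime today (which may differ from the
code at the time of this commit); the file name and export name are made
up:

```rust
use wasmtime::component::{Component, Linker};
use wasmtime::{Config, Engine, Store};

fn run() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.wasm_component_model(true);
    let engine = Engine::new(&config)?;

    // Compile a component with core wasm embedded inside it ...
    let component = Component::from_file(&engine, "example.component.wasm")?;

    // ... instantiate it with no imports ...
    let mut store = Store::new(&engine, ());
    let linker = Linker::<()>::new(&engine);
    let instance = linker.instantiate(&mut store, &component)?;

    // ... and look up one of its exported functions.
    let _func = instance
        .get_func(&mut store, "hello")
        .expect("component should export `hello`");
    Ok(())
}
```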
This is already starting to diverge from the previous module linking
representation where a `Component` will try to avoid unnecessary
metadata about the component and instead internally only have the bare
minimum necessary to instantiate the module. My hope is we can avoid
constructing most of the index spaces during instantiation only for it
to all be thrown away. Additionally I'm predicting that we'll need to
see through processing where possible to know how to generate adapters
and where they are fused.
At this time you can't actually call a component's functions, and that's
the next PR that I would like to make.
* Add tests for the component model support
This commit uses the recently updated wasm-tools crates to add tests for
the component model added in the previous commit. This involved updating
the `wasmtime-wast` crate for component-model changes. Currently the
component support there is quite primitive, but enough to at least
instantiate components and verify the internals of Wasmtime are all
working correctly. Additionally some simple tests for the embedding API
have also been added.
* Refactor binary-compatible-builds for releases
I was poking around this yesterday and noticed a few things that could
be improved for our release builds:
* The centos container for the x86_64 builds contained a bunch of extra
tooling we no longer need such as python3 and a C++ compiler. Along
with custom toolchain things this could all get removed since the C we
include now is quite simple.
* The aarch64 and s390x cross-compiled builds had relatively high glibc
version requirements compared to the x86_64 build. This was because we
didn't use a container to build the cross-compiled binaries. I added
containers here along the lines of the x86_64 build, using an older
glibc to build the release binaries and lower our version requirement.
This lowers the aarch64 version requirement from glibc 2.28 to 2.17.
Additionally the s390x requirement dropped from 2.28 to 2.16.
* To make the containers a bit easier to read/write I added
`Dockerfile`s for them in a new `ci/docker` directory instead of
hardcoding install commands in JS.
This isn't intended to be a really big change or anything for anyone,
but it's intended to keep our Linux-based builds consistent at least as
best we can.
* Remove temporary change
This PR fixes #4066: it modifies the Cranelift `build.rs` workflow to
invoke the ISLE DSL compiler on every compilation, rather than only
when the user specifies a special "rebuild ISLE" feature.
The main benefit of this change is that it vastly simplifies the mental
model required of developers, and removes a bunch of failure modes
we have tried to work around in other ways. There is now just one
"source of truth", the ISLE source itself, in the repository, and so there
is no need to understand a special "rebuild" step and how to handle
merge errors. There is no special process needed to develop the compiler
when modifying the DSL. And there is no "noise" in the git history produced
by constantly-regenerated files.
The two main downsides we discussed in #4066 are:
- Compile time could increase, by adding more to the "meta" step before the main build;
- It becomes less obvious where the source definitions are (everything becomes
more "magic"), which makes exploration and debugging harder.
This PR addresses each of these concerns:
1. To maintain reasonable compile time, it includes work to cut down the
dependencies of the `cranelift-isle` crate to *nothing* (only the Rust stdlib),
in the default build. It does this by putting the error-reporting bits
(`miette` crate) under an optional feature, and the logging (`log` crate) under
a feature-controlled macro, and manually writing an `Error` impl rather than
using `thiserror`. This completely avoids proc macros and the `syn` build slowness.
The user can still get nice errors out of `miette`: this is enabled by specifying
the Cargo feature `isle-errors` (via `--features isle-errors`). A sketch of the
feature-controlled logging approach appears at the end of this entry.
2. To allow the user to optionally inspect the generated source, which nominally
lives in a hard-to-find path inside `target/` now, this PR adds a feature `isle-in-source-tree`
that, as implied by the name, moves the target for ISLE generated source into
the source tree, at `cranelift/codegen/isle_generated_source/`. It seems reasonable
to do this when an explicit feature (opt-in) is specified because this is how ISLE regeneration
currently works as well. To prevent surprises, if the feature is *not* specified, the
build fails if this directory exists.
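As an illustration of the feature-controlled logging mentioned in point 1,
a minimal sketch of the technique (the macro and feature names here are
hypothetical, not necessarily what `cranelift-isle` uses):

```rust
// With the feature disabled (the default), the macro expands to nothing, so
// the `log` crate never enters the default dependency graph.
#[cfg(feature = "logging")]
macro_rules! trace_log {
    ($($args:tt)*) => { ::log::trace!($($args)*) };
}

#[cfg(not(feature = "logging"))]
macro_rules! trace_log {
    ($($args:tt)*) => {};
}
```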