This PR updates the "coloring" scheme that accounts for side-effects in the MachInst lowering logic. As a result, the new backends will now be able to merge effectful operations (such as memory loads) *into* other operations; previously, only the other way (pure ops merged into effectful ops) was possible. This will allow, for example, a load + ALU-op combination, as is common on x86. It should even allow a load + ALU-op + store sequence to merge into one lowered instruction.

The scheme arose from many fruitful discussions with @julian-seward1 (thanks!); significant credit is due to him for the insights here.

The first insight is that, given the right basic conditions (i.e., that the root instruction is the only use of an effectful instruction's result), all we need is that the "color" of the effectful instruction is *one less* than the color of the current instruction. It's easier to think about colors on the program points between instructions: if the color coming *out* of the first (effectful def) instruction and *in* to the second (effectful or effect-free use) instruction are the same, then they can merge. Basically, the color denotes a version of global state; if the colors are the same, then no other effectful ops happened in the meantime.

The second insight is that we can keep state as we scan, tracking the "current color", and *update* this when we sink (merge) an op. Hence when we sink a load into another op, we effectively *re-color* every instruction it moved over; this may allow further sinks. Consider the following example (and assume that we consider loads effectful in order to conservatively ensure a strong memory model; otherwise, replace them with other effectful value-producing instructions):

```
v0 = load x
v1 = load y
v2 = add v0, 1
v3 = add v1, 1
```

Scanning from bottom to top, we first see the add producing `v3`, and we can sink the load producing `v1` into it, producing a load + ALU-op machine instruction. This is legal because `v1` moves over only `v2`, which is a pure instruction. Consider, though, `v2`: under a simple scheme with no other context, `v0` could not sink to `v2` because it would move over `v1`, another load. But because we already sunk `v1` down to `v3`, we are free to sink `v0` to `v2`; the update of the "current color" during the scan allows this.

This PR also cleans up the `LowerCtx` interface a bit at the same time: whereas previously it always gave some subset of (constant, mergeable inst, register) directly from `LowerCtx::get_input()`, it now returns zero or more of (constant, mergeable inst) from `LowerCtx::maybe_get_input_as_source_or_const()`, and returns the register only from `LowerCtx::put_input_in_reg()`. This removes the need to explicitly denote uses of the register, so it's a little safer.

Note that this PR does not actually make use of the new ability to merge loads into other ops; that will come in future PRs, especially to optimize the `x64` backend by using direct-memory operands.
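To make the re-coloring idea concrete, here is a minimal, self-contained model of the scheme as described above. It is an illustrative sketch only, not the actual lowering code; the `Color` and `Scan` types and their methods are hypothetical names invented for this example.

```rust
/// A "color" names a version of global state at a program point; in this
/// model, colors are assigned top-down, with each effectful instruction
/// bumping the color by one.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Color(u32);

/// State carried by the bottom-to-top lowering scan (hypothetical model).
struct Scan {
    /// Color at the entry of the instruction currently being lowered.
    cur_color: Color,
}

impl Scan {
    /// An effectful def whose only use is the current instruction may be
    /// sunk (merged) iff the color coming *out* of the def equals the color
    /// coming *in* to the use: no other effectful op executed in between.
    fn can_sink(&self, def_color_out: Color) -> bool {
        def_color_out == self.cur_color
    }

    /// Sinking removes the def from its original position, so every point
    /// the scan will still visit between that position and the use is
    /// re-colored down by one; this is the update that may enable further
    /// sinks.
    fn note_sink(&mut self) {
        self.cur_color.0 -= 1;
    }

    /// Moving the scan upward past an effectful instruction that stays in
    /// place likewise steps back one state version.
    #[allow(dead_code)]
    fn cross_effectful_inst(&mut self) {
        self.cur_color.0 -= 1;
    }
}

fn main() {
    // Colors for the example above (loads are effectful):
    //   out(v0) = 1, out(v1) = 2, in(v2) = 2, in(v3) = 2.
    let mut scan = Scan { cur_color: Color(2) }; // scan is at v3
    assert!(scan.can_sink(Color(2)));            // v1 (out-color 2) sinks into v3
    scan.note_sink();                            // re-color: cur_color becomes 1
    assert!(scan.can_sink(Color(1)));            // at v2, v0 (out-color 1) can now sink
}
```

Under this model, the merge of `v0` into `v2` would fail without the `note_sink` update (the colors would be 2 vs. 1), which is exactly the limitation the second insight removes.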
Cranelift Code Generator
A Bytecode Alliance project
Cranelift is a low-level retargetable code generator. It translates a target-independent intermediate representation into executable machine code.
For more information, see the documentation.
For an example of how to use the JIT, see the SimpleJIT Demo, which implements a toy language.
For an example of how to use Cranelift to run WebAssembly code, see Wasmtime, which implements a standalone, embeddable VM using Cranelift.
Status
Cranelift currently supports enough functionality to run a wide variety of programs, including all the functionality needed to execute WebAssembly MVP functions, although it needs to be used within an external WebAssembly embedding to be part of a complete WebAssembly implementation.
The x86-64 backend is currently the most complete and stable; other architectures are in various stages of development. Cranelift currently supports both the System V AMD64 ABI calling convention used on many platforms and the Windows x64 calling convention. The performance of code produced by Cranelift is not yet impressive, though we have plans to fix that.
The core codegen crates have minimal dependencies, support no_std mode (see below), do not require any host floating-point support, and do not use callstack recursion.
Cranelift does not yet perform mitigations for Spectre or related security issues, though it may do so in the future. It does not currently make any security-relevant instruction timing guarantees. It has seen a fair amount of testing and fuzzing, although more work is needed before it would be ready for a production use case.
Cranelift's APIs are not yet stable.
Cranelift currently requires Rust 1.37 or later to build.
Contributing
If you're interested in contributing to Cranelift: thank you! We have a contributing guide which will help you get involved in the Cranelift project.
Planned uses
Cranelift is designed to be a code generator for WebAssembly, but it is general enough to be useful elsewhere too. The initial planned uses that affected its design are:
- WebAssembly compiler for the SpiderMonkey engine in Firefox.
- Backend for the IonMonkey JavaScript JIT compiler in Firefox.
- Debug build backend for the Rust compiler.
- Wasmtime non-Web wasm engine.
Building Cranelift
Cranelift uses a conventional Cargo build process.
Cranelift consists of a collection of crates and uses a Cargo Workspace, so for some cargo commands, such as cargo test, the --all flag is needed to tell Cargo to visit all of the crates.
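For example, to run the tests of every crate in the workspace:

```
cargo test --all
```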
The test-all.sh script at the top level runs all the cargo tests and also performs code format, lint, and documentation checks.
Building with no_std
The following crates support `no_std`, although they do depend on liballoc:
- cranelift-entity
- cranelift-bforest
- cranelift-codegen
- cranelift-frontend
- cranelift-native
- cranelift-wasm
- cranelift-module
- cranelift-preopt
- cranelift
To use no_std mode, disable the std feature and enable the core feature. This currently requires nightly Rust.
For example, to build `cranelift-codegen`:
```
cd cranelift-codegen
cargo build --no-default-features --features core
```
Or, when using cranelift-codegen as a dependency (in Cargo.toml):
```
[dependencies.cranelift-codegen]
...
default-features = false
features = ["core"]
```
no_std support is currently "best effort". We won't try to break it, and we'll accept patches fixing problems; however, we don't expect all developers to build and test no_std when submitting patches. Accordingly, the ./test-all.sh script does not test no_std.
There is a separate ./test-no_std.sh script that tests the no_std support in packages which support it.
It's important to note that Cranelift still needs liballoc to compile. Thus, whatever environment is used must implement an allocator.
Also, to allow the use of HashMaps with no_std, an external crate called hashmap_core is pulled in (via the core feature). This is mostly the same as std::collections::HashMap, except that it doesn't have DoS protection. Just something to think about.
Log configuration
Cranelift uses the log crate to log messages at various levels. It doesn't specify any maximum logging level, so embedders can choose what it should be; however, this can have an impact on Cranelift's code size. You can use log features to reduce the maximum logging level. For instance, if you want to limit the level of logging to warn messages and above in release mode:
```
[dependencies.log]
...
features = ["release_max_level_warn"]
```
Editor Support
Editor support for working with Cranelift IR (clif) files: