Convert top-level *.rst files to markdown.

These files don't use any features Sphinx has that Markdown lacks, and
Markdown is more widely used and easier to edit.
Dan Gohman
2018-07-17 14:38:19 -07:00
parent 4d5451ad11
commit 8d0f34310f
6 changed files with 246 additions and 251 deletions


@@ -1,143 +0,0 @@
========================
Cranelift Code Generator
========================
Cranelift is a low-level retargetable code generator. It translates a `target-independent
intermediate representation <https://cranelift.readthedocs.io/en/latest/langref.html>`_ into executable
machine code.
.. image:: https://readthedocs.org/projects/cranelift/badge/?version=latest
:target: https://cranelift.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://travis-ci.org/CraneStation/cranelift.svg?branch=master
:target: https://travis-ci.org/CraneStation/cranelift
:alt: Build Status
.. image:: https://badges.gitter.im/CraneStation/CraneStation.svg
:target: https://gitter.im/CraneStation/Lobby/~chat
:alt: Gitter chat
For more information, see `the documentation
<https://cranelift.readthedocs.io/en/latest/?badge=latest>`_.
Status
------
Cranelift currently supports enough functionality to run a wide variety of
programs, including all the functionality needed to execute WebAssembly MVP
functions, although it needs to be used within an external WebAssembly
embedding to be part of a complete WebAssembly implementation.
The x86-64 backend is currently the most complete and stable; other
architectures are in various stages of development. Cranelift currently supports
the System V AMD64 ABI calling convention used on many platforms, but does not
yet support the Windows x64 calling convention. The performance of code
produced by Cranelift is not yet impressive, though we have plans to fix that.
The core codegen crates have minimal dependencies, support
`no_std <#building-with-no-std>`_ mode, and do not require any host
floating-point support.
Cranelift does not yet perform mitigations for Spectre or related security
issues, though it may do so in the future. It does not currently make any
security-relevant instruction timing guarantees. It has seen a fair amount
of testing and fuzzing, although more work is needed before it would be
ready for a production use case.
Cranelift's APIs are not yet stable.
Cranelift currently supports Rust 1.22.1 and later. We intend to always support
the latest *stable* Rust. We also currently support the version of Rust in the
latest Ubuntu LTS, although whether we will always do so is not yet determined.
Cranelift requires Python 2.7 or Python 3 to build.
Planned uses
------------
Cranelift is designed to be a code generator for WebAssembly, but it is general
enough to be useful elsewhere too. The initial planned uses that affected its
design are:
1. `WebAssembly compiler for the SpiderMonkey engine in Firefox
<spidermonkey.rst#phase-1-webassembly>`_.
2. `Backend for the IonMonkey JavaScript JIT compiler in Firefox
<spidermonkey.rst#phase-2-ionmonkey>`_.
3. `Debug build backend for the Rust compiler <rustc.rst>`_.
Building Cranelift
------------------
Cranelift uses a `conventional Cargo build process
<https://doc.rust-lang.org/cargo/guide/working-on-an-existing-project.html>`_.
Cranelift consists of a collection of crates, and uses a `Cargo Workspace
<https://doc.rust-lang.org/book/second-edition/ch14-03-cargo-workspaces.html>`_,
so for some cargo commands, such as
``cargo test``, the ``--all`` flag is needed to tell Cargo to visit all
of the crates.
``test-all.sh`` at the top level is a script which runs all the cargo
tests and also performs code format, lint, and documentation checks.
Building with `no_std`
----------------------
The following crates support `no_std`:
- `cranelift-entity`
- `cranelift-codegen`
- `cranelift-frontend`
- `cranelift-native`
- `cranelift-wasm`
- `cranelift-module`
- `cranelift-simplejit`
- `cranelift`
To use `no_std` mode, disable the `std` feature and enable the `core` feature.
This currently requires nightly Rust.
For example, to build `cranelift-codegen`:
.. code-block:: sh
cd lib/codegen
cargo build --no-default-features --features core
Or, when using `cranelift-codegen` as a dependency (in Cargo.toml):
.. code-block::
[dependencies.cranelift-codegen]
...
default-features = false
features = ["core"]
`no_std` support is currently "best effort". We won't try to break it, and
we'll accept patches fixing problems; however, we don't expect all developers to
build and test `no_std` when submitting patches. Accordingly, the
`./test-all.sh` script does not test `no_std`.
There is a separate `./test-no_std.sh` script that tests the `no_std`
support in packages which support it.
It's important to note that Cranelift still needs liballoc to compile.
Thus, whatever environment is used must provide an allocator.
Also, to allow the use of HashMaps with `no_std`, an external crate called
`hashmap_core` is pulled in (via the `core` feature). This is mostly the same
as `std::collections::HashMap`, except that it doesn't protect against
hash-collision denial-of-service (DoS) attacks, which is worth keeping in mind.
Building the documentation
--------------------------
To build the Cranelift documentation, you need the `Sphinx documentation
generator <https://www.sphinx-doc.org/>`_::
$ pip install sphinx sphinx-autobuild sphinx_rtd_theme
$ cd cranelift/docs
$ make html
$ open _build/html/index.html
We don't support Sphinx versions before 1.4 since the format of index tuples
has changed.

cranelift/README.md Normal file

@@ -0,0 +1,137 @@
Cranelift Code Generator
========================
Cranelift is a low-level retargetable code generator. It translates a
[target-independent intermediate
representation](https://cranelift.readthedocs.io/en/latest/langref.html)
into executable machine code.
[![Documentation Status](https://readthedocs.org/projects/cranelift/badge/?version=latest)](https://cranelift.readthedocs.io/en/latest/?badge=latest)
[![Build Status](https://travis-ci.org/CraneStation/cranelift.svg?branch=master)](https://travis-ci.org/CraneStation/cranelift)
[![Gitter chat](https://badges.gitter.im/CraneStation/CraneStation.svg)](https://gitter.im/CraneStation/Lobby/~chat)
For more information, see [the
documentation](https://cranelift.readthedocs.io/en/latest/?badge=latest).
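To give a feel for what driving Cranelift from Rust looks like, here is a minimal sketch that uses the `cranelift-codegen` and `cranelift-frontend` crates to build the IR for a function adding two `i32` values. It is illustrative only: the exact builder API (for example `create_ebb` versus the later `create_block`, and the signature of `display`) has varied between Cranelift versions, so treat the method names as assumptions rather than a stable recipe.
```rust
extern crate cranelift_codegen;
extern crate cranelift_frontend;

use cranelift_codegen::ir::{types, AbiParam, ExternalName, Function, InstBuilder, Signature};
use cranelift_codegen::isa::CallConv;
use cranelift_frontend::{FunctionBuilder, FunctionBuilderContext};

fn main() {
    // Declare the signature: fn(i32, i32) -> i32, System V calling convention.
    let mut sig = Signature::new(CallConv::SystemV);
    sig.params.push(AbiParam::new(types::I32));
    sig.params.push(AbiParam::new(types::I32));
    sig.returns.push(AbiParam::new(types::I32));

    let mut func = Function::with_name_signature(ExternalName::user(0, 0), sig);
    let mut builder_ctx = FunctionBuilderContext::new();
    let mut builder = FunctionBuilder::new(&mut func, &mut builder_ctx);

    // Create the entry block, attach the function parameters to it, and fill it in.
    let entry = builder.create_ebb();
    builder.append_ebb_params_for_function_params(entry);
    builder.switch_to_block(entry);
    let (a, b) = {
        let params = builder.ebb_params(entry);
        (params[0], params[1])
    };
    let sum = builder.ins().iadd(a, b);
    builder.ins().return_(&[sum]);
    builder.seal_all_blocks();
    builder.finalize();

    // Print the textual IR; a backend would now lower `func` to machine code.
    // (`display` took an optional target ISA argument in this era's API.)
    println!("{}", func.display(None));
}
```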
Status
------
Cranelift currently supports enough functionality to run a wide variety
of programs, including all the functionality needed to execute
WebAssembly MVP functions, although it needs to be used within an
external WebAssembly embedding to be part of a complete WebAssembly
implementation.
The x86-64 backend is currently the most complete and stable; other
architectures are in various stages of development. Cranelift currently
supports the System V AMD64 ABI calling convention used on many
platforms, but does not yet support the Windows x64 calling convention.
The performance of code produced by Cranelift is not yet impressive,
though we have plans to fix that.
The core codegen crates have minimal dependencies, support
[no_std](#building-with-no-std) mode, and do not require any host
floating-point support.
Cranelift does not yet perform mitigations for Spectre or related
security issues, though it may do so in the future. It does not
currently make any security-relevant instruction timing guarantees. It
has seen a fair amount of testing and fuzzing, although more work is
needed before it would be ready for a production use case.
Cranelift's APIs are not yet stable.
Cranelift currently supports Rust 1.22.1 and later. We intend to always
support the latest *stable* Rust. We also currently support the version
of Rust in the latest Ubuntu LTS, although whether we will always do so
is not yet determined. Cranelift requires Python 2.7 or Python 3 to
build.
Planned uses
------------
Cranelift is designed to be a code generator for WebAssembly, but it is
general enough to be useful elsewhere too. The initial planned uses that
affected its design are:
1. [WebAssembly compiler for the SpiderMonkey engine in
Firefox](spidermonkey.md#phase-1-webassembly).
2. [Backend for the IonMonkey JavaScript JIT compiler in
Firefox](spidermonkey.md#phase-2-ionmonkey).
3. [Debug build backend for the Rust compiler](rustc.md).
Building Cranelift
------------------
Cranelift uses a [conventional Cargo build
process](https://doc.rust-lang.org/cargo/guide/working-on-an-existing-project.html).
Cranelift consists of a collection of crates, and uses a [Cargo
Workspace](https://doc.rust-lang.org/book/second-edition/ch14-03-cargo-workspaces.html),
so for some cargo commands, such as `cargo test`, the `--all` flag is needed
to tell Cargo to visit all of the crates.
`test-all.sh` at the top level is a script which runs all the cargo
tests and also performs code format, lint, and documentation checks.
Building with `no_std`
----------------------
The following crates support `no_std`:
- `cranelift-entity`
- `cranelift-codegen`
- `cranelift-frontend`
- `cranelift-native`
- `cranelift-wasm`
- `cranelift-module`
- `cranelift-simplejit`
- `cranelift`
To use `no_std` mode, disable the `std` feature and enable the `core`
feature. This currently requires nightly Rust.
For example, to build `cranelift-codegen`:
```sh
cd lib/codegen
cargo build --no-default-features --features core
```
Or, when using `cranelift-codegen` as a dependency (in Cargo.toml):
```toml
[dependencies.cranelift-codegen]
...
default-features = false
features = ["core"]
```
`no_std` support is currently "best effort". We won't try to break it,
and we'll accept patches fixing problems; however, we don't expect all
developers to build and test `no_std` when submitting patches.
Accordingly, the `./test-all.sh` script does not test `no_std`.
There is a separate `./test-no_std.sh` script that tests the `no_std`
support in packages which support it.
It's important to note that Cranelift still needs liballoc to compile.
Thus, whatever environment is used must provide an allocator.
Also, to allow the use of HashMaps with `no_std`, an external crate
called `hashmap_core` is pulled in (via the `core` feature). This is mostly
the same as `std::collections::HashMap`, except that it doesn't protect
against hash-collision denial-of-service (DoS) attacks, which is worth keeping in mind.
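As a rough sketch of what this gating looks like from a crate's side, the hypothetical `lib.rs` below switches between `std` and `hashmap_core` based on the `std` feature. It assumes `hashmap_core` re-exports a `HashMap` at its crate root with the same interface as `std::collections::HashMap` (which is its stated purpose), and that the nightly-only `alloc` feature gate of that era is still required.
```rust
// Hypothetical lib.rs sketch of the std / no_std feature gating described above.
#![cfg_attr(not(feature = "std"), no_std)]
// On the 2018-era nightly toolchains this targets, using liballoc from a
// no_std crate also needed the unstable `alloc` feature gate.
#![cfg_attr(not(feature = "std"), feature(alloc))]

#[cfg(not(feature = "std"))]
extern crate alloc;
#[cfg(not(feature = "std"))]
extern crate hashmap_core;

// Pick a hash map implementation: std's (with hash-flooding DoS protection)
// or hashmap_core's drop-in replacement (without it).
#[cfg(feature = "std")]
use std::collections::HashMap;
#[cfg(not(feature = "std"))]
use hashmap_core::HashMap;

/// Works the same way in both configurations.
pub fn index_by_name(names: &[&'static str]) -> HashMap<&'static str, usize> {
    names.iter().enumerate().map(|(i, &name)| (name, i)).collect()
}
```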
Building the documentation
--------------------------
To build the Cranelift documentation, you need the [Sphinx documentation
generator](https://www.sphinx-doc.org/):
```sh
$ pip install sphinx sphinx-autobuild sphinx_rtd_theme
$ cd cranelift/docs
$ make html
$ open _build/html/index.html
```
We don't support Sphinx versions before 1.4 since the format of index
tuples has changed.

cranelift/rustc.md Normal file

@@ -0,0 +1,69 @@
Cranelift in Rustc
==================
One goal for Cranelift is to be usable as a backend suitable for
compiling Rust in debug mode. This mode doesn't require a lot of
mid-level optimization, but it does want very fast compile times, and
this matches up fairly well with what we expect Cranelift's initial
strengths and weaknesses will be. Cranelift is being designed to take
aggressive advantage of multiple cores, and to be very efficient with
its use of memory.
Another goal is a "pretty good" backend. The idea here is to do the work
to get MIR-level inlining enabled, do some basic optimizations in
Cranelift to capture the low-hanging fruit, and then use that along with
good low-level optimizations to produce code which has a chance of being
decently fast, with quite fast compile times. It obviously wouldn't
compete with LLVM-based release builds in terms of optimization, but for
some users, completely unoptimized code is too slow to test with, so a
"pretty good" mode might be good enough.
There's plenty of work to do to achieve these goals, and if we achieve
them, we'll have enabled a Rust compiler written entirely in Rust, and
enabled faster Rust compile times for important use cases.
With all that said, there is a potential goal beyond that, which is to
build a full optimizing release-capable backend. We can't predict how
far Cranelift will go yet, but we do have some crazy ideas about what
such a thing might look like, including:
- Take advantage of Rust language properties in the optimizer. With
LLVM, Rust is able to use annotations to describe some of its
aliasing guarantees; however, the annotations are awkward and
limited. An optimizer that can represent the core aliasing
relationships that Rust provides directly has the potential to be
very powerful without the need for complex alias analysis logic
(see the sketch after this list). Unsafe blocks are an interesting
challenge, though in many simple cases, like Vec, it may be possible
to recover what the optimizer needs to know.
- Design for superoptimization. Traditionally, compiler development
teams have spent many years of manual effort to identify patterns of
code that can be matched and replaced. Superoptimizers have been
contributing some to this effort, but in the future, we may be able
to reverse roles. Superoptimizers will do the bulk of the work, and
humans will contribute specialized optimizations that
superoptimizers miss. This has the potential to take a new optimizer
from scratch to diminishing-returns territory with much less manual
effort.
- Build an optimizer IR without the constraints of fast-debug-build
compilation. Cranelift's base IR is focused on code generation, so a
full-strength optimizer would either use an IR layer on top of it
(possibly using Cranelift's flexible EntityMap system), or possibly
an independent IR that could be translated to/from the base IR.
Either way, this overall architecture would keep the optimizer out
of the way of the non-optimizing build path, which keeps that path
fast and simple, and gives the optimizer more flexibility. If we
then want to base the IR on a powerful data structure like the Value
State Dependence Graph (VSDG), we can do so with fewer compromises.
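To make the first bullet concrete, here is a small self-contained example (plain Rust, not Cranelift code) of the kind of guarantee such an optimizer could exploit directly: two live `&mut` references can never alias, so the stores below are known to be independent without any alias analysis.
```rust
// Rust's borrow rules guarantee that `a` and `b` refer to disjoint memory:
// two live &mut references can never alias. An optimizer that models this
// directly can keep `*a` in a register across the store to `*b` and fold
// the return value to a constant. With LLVM today, Rust expresses this more
// coarsely through `noalias` annotations.
pub fn disjoint_stores(a: &mut i64, b: &mut i64) -> i64 {
    *a = 1;
    *b = 2;
    *a + *b // always 3, by the aliasing guarantee
}

fn main() {
    let (mut x, mut y) = (0, 0);
    println!("{}", disjoint_stores(&mut x, &mut y)); // prints 3
}
```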
These ideas build on each other. For example, one of the challenges
for dependence-graph-oriented IRs like the VSDG is getting good enough
memory dependence information. But if we can get high-quality aliasing
information directly from the Rust front-end, we should be in great
shape. As another example, it's often harder for superoptimizers to
reason about control flow than expression graphs. But, graph-oriented
IRs like the VSDG represent control flow as control dependencies. It's
difficult to say how powerful this combination will be until we try it,
but if nothing else, it should be very convenient to express
pattern-matching over a single graph that includes both data and control
dependencies.

cranelift/spidermonkey.md Normal file

@@ -0,0 +1,40 @@
Cranelift in SpiderMonkey
=========================
[SpiderMonkey](https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey)
is the JavaScript and WebAssembly engine in Firefox. Cranelift is
designed to be used in SpiderMonkey with the goal of enabling better
code generation for ARM's 32-bit and 64-bit architectures, and building
a framework for improved low-level code optimizations in the future.
Phase 1: WebAssembly
--------------------
SpiderMonkey currently has two WebAssembly compilers: the tier 1
baseline compiler (not shown below) and the tier 2 compiler, which uses
the IonMonkey JavaScript compiler's optimizations and register allocation.
![Cranelift in SpiderMonkey phase 1](media/spidermonkey1.png)
In phase 1, Cranelift aims to replace the IonMonkey-based tier 2
compiler for WebAssembly only. It will still be orchestrated by the
BaldrMonkey engine and compile WebAssembly modules on multiple threads.
Cranelift translates binary wasm functions directly into its own
intermediate representation, and it generates binary machine code
without depending on SpiderMonkey's macro assembler.
Phase 2: IonMonkey
------------------
The IonMonkey JIT compiler is designed to compile JavaScript code. It
uses two separate intermediate representations to do that:
- MIR is used for optimizations that are specific to JavaScript JIT
compilation. It has good support for JS types and the special tricks
needed to make JS fast.
- LIR is used for register allocation.
![Cranelift in SpiderMonkey phase 2](media/spidermonkey2.png)
Cranelift has its own register allocator, so the LIR representation can
be skipped when using Cranelift as a backend for IonMonkey.


@@ -1,64 +0,0 @@
==================
Cranelift in Rustc
==================
One goal for Cranelift is to be usable as a backend suitable for compiling Rust
in debug mode. This mode doesn't require a lot of mid-level optimization, but it
does want very fast compile times, and this matches up fairly well with what we
expect Cranelift's initial strengths and weaknesses will be. Cranelift is being
designed to take aggressive advantage of multiple cores, and to be very efficient
with its use of memory.
Another goal is a "pretty good" backend. The idea here is to do the work to get
MIR-level inlining enabled, do some basic optimizations in Cranelift to capture the
low-hanging fruit, and then use that along with good low-level optimizations to
produce code which has a chance of being decently fast, with quite fast compile
times. It obviously wouldn't compete with LLVM-based release builds in terms of
optimization, but for some users, completely unoptimized code is too slow to test
with, so a "pretty good" mode might be good enough.
There's plenty of work to do to achieve these goals, and if we achieve them, we'll have
enabled a Rust compiler written entirely in Rust, and enabled faster Rust compile
times for important use cases.
With all that said, there is a potential goal beyond that, which is to build a
full optimizing release-capable backend. We can't predict how far Cranelift will go
yet, but we do have some crazy ideas about what such a thing might look like,
including:
- Take advantage of Rust language properties in the optimizer. With LLVM, Rust is
able to use annotations to describe some of its aliasing guarantees; however, the
annotations are awkward and limited. An optimizer that can represent the core
aliasing relationships that Rust provides directly has the potential to be very
powerful without the need for complex alias analysis logic. Unsafe blocks are an
interesting challenge, though in many simple cases, like Vec, it may be possible
to recover what the optimizer needs to know.
- Design for superoptimization. Traditionally, compiler development teams have
spent many years of manual effort to identify patterns of code that can be
matched and replaced. Superoptimizers have been contributing some to this
effort, but in the future, we may be able to reverse roles.
Superoptimizers will do the bulk of the work, and humans will contribute
specialized optimizations that superoptimizers miss. This has the potential to
take a new optimizer from scratch to diminishing-returns territory with much
less manual effort.
- Build an optimizer IR without the constraints of fast-debug-build compilation.
Cranelift's base IR is focused on code generation, so a full-strength optimizer would either
use an IR layer on top of it (possibly using Cranelift's flexible EntityMap system),
or possibly an independent IR that could be translated to/from the base IR. Either
way, this overall architecture would keep the optimizer out of the way of the
non-optimizing build path, which keeps that path fast and simple, and gives the
optimizer more flexibility. If we then want to base the IR on a powerful data
structure like the Value State Dependence Graph (VSDG), we can do so with fewer
compromises.
These ideas build on each other. For example, one of the challenges for
dependence-graph-oriented IRs like the VSDG is getting good enough memory dependence
information. But if we can get high-quality aliasing information directly from the
Rust front-end, we should be in great shape. As another example, it's often harder
for superoptimizers to reason about control flow than expression graphs. But,
graph-oriented IRs like the VSDG represent control flow as control dependencies.
It's difficult to say how powerful this combination will be until we try it, but
if nothing else, it should be very convenient to express pattern-matching over a
single graph that includes both data and control dependencies.


@@ -1,44 +0,0 @@
=========================
Cranelift in SpiderMonkey
=========================
`SpiderMonkey <https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey>`_ is the
JavaScript and WebAssembly engine in Firefox. Cranelift is designed to be used in SpiderMonkey with
the goal of enabling better code generation for ARM's 32-bit and 64-bit architectures, and building
a framework for improved low-level code optimizations in the future.
Phase 1: WebAssembly
--------------------
SpiderMonkey currently has two WebAssembly compilers: the tier 1 baseline compiler (not shown
below) and the tier 2 compiler, which uses the IonMonkey JavaScript compiler's optimizations and
register allocation.
.. image:: media/spidermonkey1.png
:align: center
:width: 80%
:alt: Cranelift in SpiderMonkey phase 1
In phase 1, Cranelift aims to replace the IonMonkey-based tier 2 compiler for WebAssembly only. It
will still be orchestrated by the BaldrMonkey engine and compile WebAssembly modules on multiple
threads. Cranelift translates binary wasm functions directly into its own intermediate
representation, and it generates binary machine code without depending on SpiderMonkey's macro
assembler.
Phase 2: IonMonkey
------------------
The IonMonkey JIT compiler is designed to compile JavaScript code. It uses two separate
intermediate representations to do that:
- MIR is used for optimizations that are specific to JavaScript JIT compilation. It has good
support for JS types and the special tricks needed to make JS fast.
- LIR is used for register allocation.
.. image:: media/spidermonkey2.png
:align: center
:width: 80%
:alt: Cranelift in SpiderMonkey phase 2
Cranelift has its own register allocator, so the LIR representation can be skipped when using
Cranelift as a backend for IonMonkey.