* cranelift-wasm: translate Wasm loads into lower-level CLIF operations
Rather than using `heap_{load,store,addr}`.
* cranelift: Remove the `heap_{addr,load,store}` instructions
These are now legalized in the `cranelift-wasm` frontend.
* cranelift: Remove the `ir::Heap` entity from CLIF
* Port basic memory operation tests to .wat filetests
* Remove test for verifying CLIF heaps
* Remove `heap_addr` from replace_branching_instructions_and_cfg_predecessors.clif test
* Remove `heap_addr` from readonly.clif test
* Remove `heap_addr` from `table_addr.clif` test
* Remove `heap_addr` from the simd-fvpromote_low.clif test
* Remove `heap_addr` from simd-fvdemote.clif test
* Remove `heap_addr` from the load-op-store.clif test
* Remove the CLIF heap runtest
* Remove `heap_addr` from the global_value.clif test
* Remove `heap_addr` from fpromote.clif runtests
* Remove `heap_addr` from fdemote.clif runtests
* Remove `heap_addr` from memory.clif parser test
* Remove `heap_addr` from reject_load_readonly.clif test
* Remove `heap_addr` from reject_load_notrap.clif test
* Remove `heap_addr` from load_readonly_notrap.clif test
* Remove `static-heap-without-guard-pages.clif` test
Will be subsumed when we port `make-heap-load-store-tests.sh` to generating
`.wat` tests.
* Remove `static-heap-with-guard-pages.clif` test
Will be subsumed when we port `make-heap-load-store-tests.sh` over to `.wat`
tests.
* Remove more heap tests
These will be subsumed by porting `make-heap-load-store-tests.sh` over to `.wat`
tests.
* Remove `heap_addr` from `simple-alias.clif` test
* Remove `heap_addr` from partial-redundancy.clif test
* Remove `heap_addr` from multiple-blocks.clif test
* Remove `heap_addr` from fence.clif test
* Remove `heap_addr` from extends.clif test
* Remove runtests that rely on heaps
Heaps are no longer a concept in CLIF or the interpreter.
* Add generated load/store `.wat` tests
* Enable memory-related wasm features in `.wat` tests
* Remove CLIF heap from fcmp-mem-bug.clif test
* Add a mode for compiling `.wat` all the way to assembly in filetests
* Also generate WAT to assembly tests in `make-load-store-tests.sh`
* cargo fmt
* Reinstate `f{de,pro}mote.clif` tests without the heap bits
* Remove undefined doc link
* Remove outdated SVG and dot file from docs
* Add docs about `None` returns for base address computation helpers
* Factor out `env.heap_access_spectre_mitigation()` to a local
* Expand docs for `FuncEnvironment::heaps` trait method
* Restore f{de,pro}mote+load clif runtests with stack memory
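The net effect of the first few commits is that the frontend now emits explicit bounds-check code instead of a dedicated `heap_addr` instruction. As a hedged illustration of the arithmetic such a legalized load boils down to (plain Rust, not the actual cranelift-wasm lowering; the function name is invented for this sketch):

```rust
/// Sketch of the dynamic-heap bounds check: an access of `access_size`
/// bytes at `index` is in bounds iff `[index, index + access_size)`
/// lies within `[0, bound)`. Illustration only, not Cranelift code.
fn dynamic_heap_access_ok(index: u64, access_size: u64, bound: u64) -> bool {
    // `checked_add` models the overflow check that must be emitted when
    // `index + access_size` could wrap around the index type.
    match index.checked_add(access_size) {
        Some(end) => end <= bound,
        None => false, // wrapped around: definitely out of bounds
    }
}
```

When the access provably falls below the heap's guaranteed minimum size, or the offset-guard pages cover every reachable address, the frontend can skip this check entirely.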
//! Heaps to implement WebAssembly linear memories.

use cranelift_codegen::ir::{GlobalValue, Type};
use cranelift_entity::entity_impl;

/// An opaque reference to a [`HeapData`][crate::HeapData].
///
/// While the order is stable, it is arbitrary.
#[derive(Copy, Clone, PartialEq, Eq, Hash, PartialOrd, Ord)]
#[cfg_attr(feature = "enable-serde", derive(serde::Serialize, serde::Deserialize))]
pub struct Heap(u32);
entity_impl!(Heap, "heap");

/// A heap implementing a WebAssembly linear memory.
///
/// Code compiled from WebAssembly runs in a sandbox where it can't access all
/// process memory. Instead, it is given a small set of memory areas to work in,
/// and all accesses are bounds checked. `cranelift-wasm` models this through
/// the concept of *heaps*.
///
/// Heap addresses can be smaller than the native pointer size, for example
/// unsigned `i32` offsets on a 64-bit architecture.
///
/// A heap appears as three consecutive ranges of address space:
///
/// 1. The *mapped pages* are the accessible memory range in the heap. A heap
///    may have a minimum guaranteed size which means that some mapped pages are
///    always present.
///
/// 2. The *unmapped pages* is a possibly empty range of address space that may
///    be mapped in the future when the heap is grown. They are addressable
///    but not accessible.
///
/// 3. The *offset-guard pages* is a range of address space that is guaranteed
///    to always cause a trap when accessed. It is used to optimize bounds
///    checking for heap accesses with a shared base pointer. They are
///    addressable but not accessible.
///
/// The *heap bound* is the total size of the mapped and unmapped pages. This is
/// the bound that `heap_addr` checks against. Memory accesses inside the heap
/// bounds can trap if they hit an unmapped page (which is not accessible).
///
/// Two styles of heaps are supported, *static* and *dynamic*. They behave
/// differently when resized.
///
/// #### Static heaps
///
/// A *static heap* starts out with all the address space it will ever need, so it
/// never moves to a different address. At the base address is a number of mapped
/// pages corresponding to the heap's current size. Then follows a number of
/// unmapped pages where the heap can grow up to its maximum size. After the
/// unmapped pages follow the offset-guard pages which are also guaranteed to
/// generate a trap when accessed.
///
/// #### Dynamic heaps
///
/// A *dynamic heap* can be relocated to a different base address when it is
/// resized, and its bound can move dynamically. The offset-guard pages move
/// when the heap is resized. The bound of a dynamic heap is stored in a global
/// value.
#[derive(Clone, PartialEq, Hash)]
#[cfg_attr(feature = "enable-serde", derive(serde::Serialize, serde::Deserialize))]
pub struct HeapData {
    /// The address of the start of the heap's storage.
    pub base: GlobalValue,

    /// Guaranteed minimum heap size in bytes. Heap accesses before `min_size`
    /// don't need bounds checking.
    pub min_size: u64,

    /// Size in bytes of the offset-guard pages following the heap.
    pub offset_guard_size: u64,

    /// Heap style, with additional style-specific info.
    pub style: HeapStyle,

    /// The index type for the heap.
    pub index_type: Type,
}

/// Style of heap including style-specific information.
#[derive(Clone, PartialEq, Hash)]
#[cfg_attr(feature = "enable-serde", derive(serde::Serialize, serde::Deserialize))]
pub enum HeapStyle {
    /// A dynamic heap can be relocated to a different base address when it is
    /// grown.
    Dynamic {
        /// Global value providing the current bound of the heap in bytes.
        bound_gv: GlobalValue,
    },

    /// A static heap has a fixed base address and a number of not-yet-allocated
    /// pages before the offset-guard pages.
    Static {
        /// Heap bound in bytes. The offset-guard pages are allocated after the
        /// bound.
        bound: u64,
    },
}
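To make the offset-guard discussion in the doc comment concrete: for a static heap, if every effective address a 32-bit index can produce lands inside the bound plus the guard region, an out-of-bounds access is guaranteed to hit the offset-guard pages and trap, so no explicit bounds check needs to be emitted at all. The helper below is an illustrative sketch under that assumption (`static_check_elidable` is an invented name, not a Cranelift API, and it ignores details like index types other than `i32`):

```rust
/// Can the explicit bounds check be elided for a static heap with a
/// 32-bit index type? The largest address an access can touch is
/// `u32::MAX + offset + access_size`; if that still falls within
/// `bound + offset_guard_size`, every out-of-bounds access lands in
/// the guard pages and traps. Simplified sketch, illustration only.
fn static_check_elidable(
    bound: u64,
    offset_guard_size: u64,
    offset: u64,      // constant offset baked into the Wasm load/store
    access_size: u64, // bytes accessed
) -> bool {
    let max_addr = u32::MAX as u64 + offset + access_size;
    max_addr <= bound + offset_guard_size
}
```

This is the reason a 4 GiB static reservation plus a generous guard region lets small-offset accesses compile to a bare load, while a small static bound still requires the explicit check sketched earlier.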