Remove heaps from core Cranelift, push them into cranelift-wasm (#5386)

* cranelift-wasm: translate Wasm loads into lower-level CLIF operations

Rather than using `heap_{load,store,addr}`.
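
As a rough sketch (illustrative names, not the exact sequence for every configuration): for a static memory with large guard pages, a Wasm `i32.load offset=8` now comes out of the frontend as explicit CLIF along the lines of

    v2 = uextend.i64 v1            ;; zero-extend the 32-bit Wasm address
    v3 = global_value.i64 gv1      ;; heap base, loaded out of the vmctx
    v4 = iadd v3, v2               ;; native address = base + index
    v5 = load.i32 v4+8             ;; the actual access

with an explicit bounds check emitted as well for dynamic memories or small guard regions.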

* cranelift: Remove the `heap_{addr,load,store}` instructions

These are now legalized in the `cranelift-wasm` frontend.
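
For contrast, the removed form declared a heap entity and bundled the checked address computation into a single instruction, as the old filetests did:

    heap0 = static gv1, bound 0x1_0000_0000, offset_guard 0x8000_0000, index_type i32

    block0(v0: i64, v1: i32):
        v2 = heap_addr.i64 heap0, v1, 12, 0
        v3 = load.i32 v2+8

The frontend now emits the equivalent lower-level sequence itself, so the mid-end and backends no longer need to know about heaps at all.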

* cranelift: Remove the `ir::Heap` entity from CLIF

* Port basic memory operation tests to .wat filetests

* Remove test for verifying CLIF heaps

* Remove `heap_addr` from replace_branching_instructions_and_cfg_predecessors.clif test

* Remove `heap_addr` from readonly.clif test

* Remove `heap_addr` from `table_addr.clif` test

* Remove `heap_addr` from the simd-fvpromote_low.clif test

* Remove `heap_addr` from simd-fvdemote.clif test

* Remove `heap_addr` from the load-op-store.clif test

* Remove the CLIF heap runtest

* Remove `heap_addr` from the global_value.clif test

* Remove `heap_addr` from fpromote.clif runtests

* Remove `heap_addr` from fdemote.clif runtests

* Remove `heap_addr` from memory.clif parser test

* Remove `heap_addr` from reject_load_readonly.clif test

* Remove `heap_addr` from reject_load_notrap.clif test

* Remove `heap_addr` from load_readonly_notrap.clif test

* Remove `static-heap-without-guard-pages.clif` test

Will be subsumed when we port `make-heap-load-store-tests.sh` to generating
`.wat` tests.

* Remove `static-heap-with-guard-pages.clif` test

Will be subsumed when we port `make-heap-load-store-tests.sh` over to `.wat`
tests.

* Remove more heap tests

These will be subsumed by porting `make-heap-load-store-tests.sh` over to `.wat`
tests.

* Remove `heap_addr` from `simple-alias.clif` test

* Remove `heap_addr` from partial-redundancy.clif test

* Remove `heap_addr` from multiple-blocks.clif test

* Remove `heap_addr` from fence.clif test

* Remove `heap_addr` from extends.clif test

* Remove runtests that rely on heaps

Heaps no longer exist in CLIF or the interpreter.

* Add generated load/store `.wat` tests

* Enable memory-related wasm features in `.wat` tests

* Remove CLIF heap from fcmp-mem-bug.clif test

* Add a mode for compiling `.wat` all the way to assembly in filetests

* Also generate WAT-to-assembly tests in `make-load-store-tests.sh`

* cargo fmt

* Reinstate `f{de,pro}mote.clif` tests without the heap bits

* Remove undefined doc link

* Remove outdated SVG and dot file from docs

* Add docs about `None` returns for base address computation helpers

* Factor out `env.heap_access_spectre_mitigation()` to a local
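
When the environment requests Spectre mitigation, the emitted bounds check avoids a conditional branch and instead clamps the address with a conditional select. Roughly (a sketch only; `gv2` here is a stand-in for a heap-bound global, not taken from the actual tests):

    v2 = uextend.i64 v1                   ;; Wasm address
    v3 = global_value.i64 gv2             ;; heap bound
    v4 = icmp ugt v2, v3                  ;; out of bounds?
    v5 = global_value.i64 gv1             ;; heap base
    v6 = iadd v5, v2                      ;; candidate native address
    v7 = iconst.i64 0
    v8 = select_spectre_guard v4, v7, v6  ;; out-of-bounds accesses get a null address
    v9 = load.i32 v8                      ;; faults instead of speculating into the heap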

* Expand docs for `FuncEnvironment::heaps` trait method

* Restore f{de,pro}mote+load clif runtests with stack memory
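
These restored runtests keep the load-feeding-`f{pro,de}mote` shape but read from a stack slot rather than a Wasm heap; a minimal sketch of the idea (not the literal test contents):

    function %fpromote_load(f32) -> f64 {
        ss0 = explicit_slot 8
    block0(v0: f32):
        v1 = stack_addr.i64 ss0
        store v0, v1
        v2 = load.f32 v1
        v3 = fpromote.f64 v2
        return v3
    }

The real tests also carry the usual `test run` headers and `; run:` invocations.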

Nick Fitzgerald, 2022-12-14 16:26:45 -08:00 (committed by GitHub)
parent e03d65cca7
commit c0b587ac5f

198 changed files with 2494 additions and 4232 deletions


@@ -8,10 +8,9 @@ target aarch64
 function %f0(i64 vmctx, i32) -> i32, i32, i32, i64, i64, i64 {
 gv0 = vmctx
 gv1 = load.i64 notrap readonly aligned gv0+8
-heap0 = static gv1, bound 0x1_0000_0000, offset_guard 0x8000_0000, index_type i32
 block0(v0: i64, v1: i32):
-v2 = heap_addr.i64 heap0, v1, 12, 0
+v2 = global_value.i64 gv1
 ;; Initial load. This will not be reused by anything below, even
 ;; though it does access the same address.


@@ -8,10 +8,9 @@ target aarch64
 function %f0(i64 vmctx, i32) -> i32, i32, i32, i32, i32, i32, i32, i32, i32, i32 {
 gv0 = vmctx
 gv1 = load.i64 notrap readonly aligned gv0+8
-heap0 = static gv1, bound 0x1_0000_0000, offset_guard 0x8000_0000, index_type i32
 block0(v0: i64, v1: i32):
-v2 = heap_addr.i64 heap0, v1, 12, 0
+v2 = global_value.i64 gv1
 v3 = load.i32 v2+8
 v4 = load.i32 vmctx v0+16


@@ -7,11 +7,9 @@ target aarch64
 function %f0(i64 vmctx, i32) -> i32 {
 gv0 = vmctx
 gv1 = load.i64 notrap readonly aligned gv0+8
-heap0 = static gv1, bound 0x1_0000_0000, offset_guard 0x8000_0000, index_type i32
 block0(v0: i64, v1: i32):
-v2 = heap_addr.i64 heap0, v1, 12, 0
+v2 = global_value.i64 gv1
 v3 = load.i32 v2+8
 brz v2, block1
 jump block2


@@ -8,7 +8,6 @@ target aarch64
 function %f0(i64 vmctx, i32) -> i32, i32 {
 gv0 = vmctx
 gv1 = load.i64 notrap readonly aligned gv0+8
-heap0 = static gv1, bound 0x1_0000_0000, offset_guard 0x8000_0000, index_type i32
 fn0 = %g(i64 vmctx)
 block0(v0: i64, v1: i32):
@@ -16,17 +15,17 @@ block0(v0: i64, v1: i32):
 jump block2
 block1:
-v2 = heap_addr.i64 heap0, v1, 68, 0
+v2 = global_value.i64 gv1
 v3 = load.i32 v2+64
 jump block3(v3)
 block2:
-v4 = heap_addr.i64 heap0, v1, 132, 0
+v4 = global_value.i64 gv1
 v5 = load.i32 v4+128
 jump block3(v5)
 block3(v6: i32):
-v7 = heap_addr.i64 heap0, v1, 68, 0
+v7 = global_value.i64 gv1
 v8 = load.i32 v7+64
 ;; load should survive:
 ; check: v8 = load.i32 v7+64


@@ -9,14 +9,13 @@ target aarch64
 function %f0(i64 vmctx, i32) -> i32, i32, i32, i32 {
 gv0 = vmctx
 gv1 = load.i64 notrap readonly aligned gv0+8
-heap0 = static gv1, bound 0x1_0000_0000, offset_guard 0x8000_0000, index_type i32
 fn0 = %g(i64 vmctx)
 block0(v0: i64, v1: i32):
-v2 = heap_addr.i64 heap0, v1, 12, 0
+v2 = global_value.i64 gv1
 v3 = load.i32 v2+8
 ;; This should reuse the load above.
-v4 = heap_addr.i64 heap0, v1, 12, 0
+v4 = global_value.i64 gv1
 v5 = load.i32 v4+8
 ; check: v5 -> v3
@@ -38,15 +37,14 @@ block0(v0: i64, v1: i32):
 function %f1(i64 vmctx, i32) -> i32 {
 gv0 = vmctx
 gv1 = load.i64 notrap readonly aligned gv0+8
-heap0 = static gv1, bound 0x1_0000_0000, offset_guard 0x8000_0000, index_type i32
 fn0 = %g(i64 vmctx)
 block0(v0: i64, v1: i32):
-v2 = heap_addr.i64 heap0, v1, 12, 0
+v2 = global_value.i64 gv1
 store.i32 v1, v2+8
 ;; This load should pick up the store above.
-v3 = heap_addr.i64 heap0, v1, 12, 0
+v3 = global_value.i64 gv1
 v4 = load.i32 v3+8
 ; check: v4 -> v1