Consolidate address calculations for atomics (#3143)
* Consolidate address calculations for atomics

  This commit consolidates all calculations of guest addresses into one `prepare_addr` function. Notably, this removes the atomics-specific paths as well as the `prepare_load` function (now renamed to `prepare_addr` and folded into `get_heap_addr`). The goal of this commit is to simplify how addresses are managed in the code generator so that atomics use the same shared infrastructure as other loads/stores. This additionally fixes #3132 via the use of `heap_addr` in clif for all operations.

  I also added a number of tests for loads/stores with varying alignments. Originally I was going to allow loads/stores to be unaligned, since that's what the current formal specification says, but the overview of the threads proposal disagrees with the formal specification, so I figured I'd leave it as-is; adding tests probably doesn't hurt either way.

  Closes #3132

* Fix old backend

* Guarantee misalignment checks happen before out-of-bounds checks
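The trap ordering in the last bullet can be modeled in isolation: the effective address of an atomic access is checked for alignment first, and only then against the linear-memory bound. The following is a minimal, standalone sketch of that ordering; the names `GuestTrap` and `prepare_addr_model` are hypothetical and this is not Wasmtime's or Cranelift's actual code.

// A minimal, standalone model of the trap ordering guaranteed by this commit:
// an atomic access traps on misalignment before it is checked against the
// linear-memory bound. `GuestTrap` and `prepare_addr_model` are hypothetical names.

#[derive(Debug, PartialEq)]
enum GuestTrap {
    UnalignedAtomic,
    OutOfBounds,
}

/// Computes the effective address for an access of `size` bytes at
/// `base + offset`, checking alignment before bounds.
fn prepare_addr_model(base: u32, offset: u32, size: u32, memory_len: u64) -> Result<u64, GuestTrap> {
    let ea = u64::from(base) + u64::from(offset);
    // Misalignment is reported first, even when the address is also out of bounds.
    if ea % u64::from(size) != 0 {
        return Err(GuestTrap::UnalignedAtomic);
    }
    if ea + u64::from(size) > memory_len {
        return Err(GuestTrap::OutOfBounds);
    }
    Ok(ea)
}

fn main() {
    const PAGE: u64 = 65536; // one wasm page of linear memory

    // Misaligned *and* out of bounds: the misalignment trap wins.
    assert_eq!(prepare_addr_model(65535, 0, 4, PAGE), Err(GuestTrap::UnalignedAtomic));
    // Aligned but past the end of memory: out-of-bounds trap.
    assert_eq!(prepare_addr_model(65536, 0, 4, PAGE), Err(GuestTrap::OutOfBounds));
    // Aligned and in bounds: the effective address is returned.
    assert_eq!(prepare_addr_model(16, 8, 4, PAGE), Ok(24));
}

In the real code generator the same ordering presumably falls out of emitting the misalignment check before the `heap_addr` bounds check mentioned above.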
@@ -25,6 +25,11 @@ fn run_wast(wast: &str, strategy: Strategy, pooling: bool) -> anyhow::Result<()>
     // by reference types.
     let reftypes = simd || wast.iter().any(|s| s == "reference-types");
 
+    // Threads aren't implemented in the old backend, so skip those tests.
+    if threads && cfg!(feature = "old-x86-backend") {
+        return Ok(());
+    }
+
     let mut cfg = Config::new();
     cfg.wasm_simd(simd)
         .wasm_bulk_memory(bulk_mem)