Reimplement how unwind information is stored (#3180)
* Reimplement how unwind information is stored

This commit is a major refactoring of how unwind information is stored after compilation of a function has finished. Previously we would store the raw `UnwindInfo` as a result of compilation and this would get serialized/deserialized alongside the rest of the ELF object that compilation creates. Whenever functions were registered with `CodeMemory` this would also result in registering unwinding information dynamically at runtime, which on Unix, for example, meant creating FDE/CIE entries on-the-fly.

Eventually I'd like to support compiling Wasmtime without Cranelift, but that means `UnwindInfo` wouldn't be easily available to decode and create unwinding information from. To solve this I've changed the ELF object created at compile time to have the unwinding information encoded into it ahead-of-time, so loading code into memory no longer needs to create unwinding tables. This change has two different implementations for Windows and Unix:

* On Windows the implementation was much easier. The unwinding information on Windows is already stored after the function itself in the text section. This was previously slightly duplicated between object building and code memory allocation. Now object building continues to record unwinding information after functions, and code memory no longer tracks this manually. Additionally Wasmtime will emit a special custom section in the object file with unwinding information, which is the list of `RUNTIME_FUNCTION` structures that `RtlAddFunctionTable` expects. This means that the object file has all the information precompiled into it and registration at runtime is simply a matter of passing a few pointers around (sketched below).

* Unix was a little more difficult than Windows. Today a `.eh_frame` section is created on-the-fly with offsets in FDEs specified as the absolute address that functions are loaded at. This absolute address hindered the ability to precompile the FDEs into the object file itself. I've switched how addresses are encoded, though, to `DW_EH_PE_pcrel`, which means that FDE addresses are now specified relative to the FDE itself. As a result we can maintain a fixed offset between the `.eh_frame` loaded in memory and the beginning of code memory. With that fixed offset in place, the `.eh_frame` section can be precompiled into the object file, and no further construction of unwinding information is needed when an object is loaded at runtime (sketched below).

The overall result of this commit is that unwinding information is no longer stored in its cranelift-data-structure form on disk. This means that this unwinding information format is only present during compilation, which will make it that much easier to compile out Cranelift in the future.

This commit also significantly refactors `CodeMemory`, since the way unwinding information is handled is now much different from before. Previously `CodeMemory` was suitable for incrementally adding more and more functions to it, but nowadays a `CodeMemory` either lives per module (in which case all functions are known up front) or it's created once per `Func::new` with two trampolines. In both cases we know all functions up front, so the functionality of incrementally adding more and more segments is no longer needed. This commit removes the ability to add a function at a time to `CodeMemory`; instead it can now only load objects in their entirety.
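As a rough illustration of the Unix half described above: once the precompiled `.eh_frame` bytes are copied into the same mmap immediately after the text section (which is what keeps the `DW_EH_PE_pcrel` offsets valid), load-time registration reduces to handing those bytes to the system unwinder. The sketch below is not the actual implementation; the `register_eh_frame` helper name is hypothetical, while `__register_frame` is the libgcc/libunwind symbol that the diff uses.

```rust
extern "C" {
    // Provided by the system unwinder (libgcc or libunwind), as in the diff below.
    fn __register_frame(fde: *const u8);
}

/// Hypothetical helper: hand a precompiled `.eh_frame` payload to the unwinder.
/// `eh_frame` points just past the text section in the same mmap, so the
/// `DW_EH_PE_pcrel` offsets inside it still resolve to the loaded code.
unsafe fn register_eh_frame(eh_frame: *const u8, len: usize) {
    if cfg!(any(
        all(target_os = "linux", target_env = "gnu"),
        target_os = "freebsd"
    )) {
        // libgcc registers the whole table at once and walks it until the
        // trailing zero-length terminator entry.
        __register_frame(eh_frame);
    } else {
        // libunwind wants one call per FDE: walk the section entry by entry,
        // skipping the leading CIE; `len - 4` excludes the 32-bit terminator.
        let mut current = eh_frame;
        let end = eh_frame.add(len - 4);
        while current < end {
            let entry_len = (current as *const u32).read_unaligned() as usize;
            if current != eh_frame {
                __register_frame(current);
            }
            // The 4-byte length field itself is not included in `entry_len`.
            current = current.add(entry_len + 4);
        }
    }
}
```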
A small helper function is added to build a small object file for the trampolines in `Func::new` to handle allocation there. Finally, this commit also folds the `wasmtime-obj` crate directly into the `wasmtime-cranelift` crate and its builder structure, to be more amenable to this strategy of managing unwinding tables.

No real functional change is intended as a result of this commit. It might accelerate loading a module from cache slightly, since less work is needed to manage the unwinding information, but that's just a side benefit; the main goal is to remove the dependence on Cranelift unwinding information being available at runtime.

* Remove isa reexport from wasmtime-environ

* Trim down reexports of `cranelift-codegen`

  Remove everything non-essential so that only the bits which will need to be refactored out of cranelift remain.

* Fix debug tests

* Review comments
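For the Windows half described above, the custom section (named `_wasmtime_winx64_unwind` in the diff below) is just a contiguous array of `RUNTIME_FUNCTION` entries whose offsets are relative to the loaded image base, so runtime registration is a single `RtlAddFunctionTable` call. A minimal sketch, assuming the `winapi` crate as used in the diff; the `register_function_table` helper name is hypothetical:

```rust
use std::mem;
use winapi::um::winnt;

/// Hypothetical helper: register the precompiled `RUNTIME_FUNCTION` array that
/// was copied out of the `_wasmtime_winx64_unwind` section. Offsets inside the
/// entries are relative to `base_address`, the start of the loaded text.
unsafe fn register_function_table(base_address: *mut u8, table: *mut u8, table_len: usize) -> bool {
    let entry_size = mem::size_of::<winnt::RUNTIME_FUNCTION>();
    assert!(table as usize % 4 == 0, "table must be 4-byte aligned");
    assert!(table_len % entry_size == 0, "table must be a whole number of entries");
    // A zero return value means the unwinder rejected the table.
    winnt::RtlAddFunctionTable(
        table as *mut winnt::RUNTIME_FUNCTION,
        (table_len / entry_size) as u32,
        base_address as u64,
    ) != 0
}
```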
@@ -16,7 +16,6 @@ wasmtime-runtime = { path = "../runtime", version = "0.29.0" }
wasmtime-cranelift = { path = "../cranelift", version = "0.29.0" }
wasmtime-lightbeam = { path = "../lightbeam/wasmtime", version = "0.29.0", optional = true }
wasmtime-profiling = { path = "../profiling", version = "0.29.0" }
wasmtime-obj = { path = "../obj", version = "0.29.0" }
rayon = { version = "1.0", optional = true }
region = "2.2.0"
thiserror = "1.0.4"
@@ -26,8 +25,8 @@ more-asserts = "0.2.1"
anyhow = "1.0"
cfg-if = "1.0"
log = "0.4"
gimli = { version = "0.25.0", default-features = false, features = ["write"] }
object = { version = "0.26.0", default-features = false, features = ["write"] }
gimli = { version = "0.25.0", default-features = false, features = ["std", "read"] }
object = { version = "0.26.0", default-features = false, features = ["std", "read_core", "elf"] }
serde = { version = "1.0.94", features = ["derive"] }
addr2line = { version = "0.16.0", default-features = false }
@@ -1,44 +1,38 @@
//! Memory management for executable code.

use crate::object::{
utils::{try_parse_func_name, try_parse_trampoline_name},
ObjectUnwindInfo,
};
use crate::unwind::UnwindRegistry;
use crate::Compiler;
use crate::unwind::UnwindRegistration;
use anyhow::{Context, Result};
use object::read::{File as ObjectFile, Object, ObjectSection, ObjectSymbol};
use region;
use std::collections::BTreeMap;
use std::mem::ManuallyDrop;
use std::{cmp, mem};
use wasmtime_environ::{
isa::unwind::UnwindInfo,
wasm::{FuncIndex, SignatureIndex},
CompiledFunction,
};
use wasmtime_environ::obj::{try_parse_func_name, try_parse_trampoline_name};
use wasmtime_environ::wasm::{FuncIndex, SignatureIndex};
use wasmtime_runtime::{Mmap, VMFunctionBody};

struct CodeMemoryEntry {
mmap: ManuallyDrop<Mmap>,
registry: ManuallyDrop<UnwindRegistry>,
len: usize,
unwind_registration: ManuallyDrop<Option<UnwindRegistration>>,
text_len: usize,
unwind_info_len: usize,
}

impl CodeMemoryEntry {
fn with_capacity(cap: usize) -> Result<Self> {
let mmap = ManuallyDrop::new(Mmap::with_at_least(cap)?);
let registry = ManuallyDrop::new(UnwindRegistry::new(mmap.as_ptr() as usize));
fn new(text_len: usize, unwind_info_len: usize) -> Result<Self> {
let mmap = ManuallyDrop::new(Mmap::with_at_least(text_len + unwind_info_len)?);
Ok(Self {
mmap,
registry,
len: 0,
unwind_registration: ManuallyDrop::new(None),
text_len,
unwind_info_len,
})
}

// Note that this intentionally excludes any unwinding information, if
// present, since consumers largely are only interested in code memory
// itself.
fn range(&self) -> (usize, usize) {
let start = self.mmap.as_ptr() as usize;
let end = start + self.len;
let end = start + self.text_len;
(start, end)
}
}
@@ -47,23 +41,20 @@ impl Drop for CodeMemoryEntry {
fn drop(&mut self) {
unsafe {
// The registry needs to be dropped before the mmap
ManuallyDrop::drop(&mut self.registry);
ManuallyDrop::drop(&mut self.unwind_registration);
ManuallyDrop::drop(&mut self.mmap);
}
}
}

pub(crate) struct CodeMemoryObjectAllocation<'a> {
buf: &'a mut [u8],
pub struct CodeMemoryObjectAllocation<'a, 'b> {
pub code_range: &'a mut [u8],
funcs: BTreeMap<FuncIndex, (usize, usize)>,
trampolines: BTreeMap<SignatureIndex, (usize, usize)>,
pub obj: ObjectFile<'b>,
}

impl<'a> CodeMemoryObjectAllocation<'a> {
pub fn code_range(self) -> &'a mut [u8] {
self.buf
}

impl<'a> CodeMemoryObjectAllocation<'a, '_> {
pub fn funcs_len(&self) -> usize {
self.funcs.len()
}
@@ -73,7 +64,7 @@ impl<'a> CodeMemoryObjectAllocation<'a> {
}

pub fn funcs(&'a self) -> impl Iterator<Item = (FuncIndex, &'a mut [VMFunctionBody])> + 'a {
let buf = self.buf as *const _ as *mut [u8];
let buf = self.code_range as *const _ as *mut [u8];
self.funcs.iter().map(move |(i, (start, len))| {
(*i, unsafe {
CodeMemory::view_as_mut_vmfunc_slice(&mut (*buf)[*start..*start + *len])
@@ -84,7 +75,7 @@ impl<'a> CodeMemoryObjectAllocation<'a> {
pub fn trampolines(
&'a self,
) -> impl Iterator<Item = (SignatureIndex, &'a mut [VMFunctionBody])> + 'a {
let buf = self.buf as *const _ as *mut [u8];
let buf = self.code_range as *const _ as *mut [u8];
self.trampolines.iter().map(move |(i, (start, len))| {
(*i, unsafe {
CodeMemory::view_as_mut_vmfunc_slice(&mut (*buf)[*start..*start + *len])
@@ -95,7 +86,6 @@ impl<'a> CodeMemoryObjectAllocation<'a> {

/// Memory manager for executable code.
pub struct CodeMemory {
current: Option<CodeMemoryEntry>,
entries: Vec<CodeMemoryEntry>,
published: usize,
}
@@ -109,140 +99,49 @@ impl CodeMemory {
/// Create a new `CodeMemory` instance.
pub fn new() -> Self {
Self {
current: None,
entries: Vec::new(),
published: 0,
}
}

/// Allocate a continuous memory block for a single compiled function.
/// TODO: Reorganize the code that calls this to emit code directly into the
/// mmap region rather than into a Vec that we need to copy in.
pub fn allocate_for_function<'a>(
&mut self,
func: &'a CompiledFunction,
) -> Result<&mut [VMFunctionBody]> {
let size = Self::function_allocation_size(func);

let (buf, registry, start) = self.allocate(size)?;

let (_, _, vmfunc) = Self::copy_function(func, start as u32, buf, registry);

Ok(vmfunc)
}

/// Make all allocated memory executable.
pub fn publish(&mut self, compiler: &Compiler) {
self.push_current(0)
.expect("failed to push current memory map");
pub fn publish(&mut self) {
for entry in &mut self.entries[self.published..] {
assert!(!entry.mmap.is_empty());

for CodeMemoryEntry {
mmap: m,
registry: r,
..
} in &mut self.entries[self.published..]
{
// Remove write access to the pages due to the relocation fixups.
r.publish(compiler)
.expect("failed to publish function unwind registry");

if !m.is_empty() {
unsafe {
region::protect(m.as_mut_ptr(), m.len(), region::Protection::READ_EXECUTE)
}
unsafe {
// Switch the executable portion from read/write to
// read/execute, notably not using read/write/execute to prevent
// modifications.
region::protect(
entry.mmap.as_mut_ptr(),
entry.text_len,
region::Protection::READ_EXECUTE,
)
.expect("unable to make memory readonly and executable");

if entry.unwind_info_len == 0 {
continue;
}

// With all our memory setup use the platform-specific
// `UnwindRegistration` implementation to inform the general
// runtime that there's unwinding information available for all
// our just-published JIT functions.
*entry.unwind_registration = Some(
UnwindRegistration::new(
entry.mmap.as_mut_ptr(),
entry.mmap.as_mut_ptr().add(entry.text_len),
entry.unwind_info_len,
)
.expect("failed to create unwind info registration"),
);
}
}

self.published = self.entries.len();
}

/// Allocate `size` bytes of memory which can be made executable later by
/// calling `publish()`. Note that we allocate the memory as writeable so
/// that it can be written to and patched, though we make it readonly before
/// actually executing from it.
///
/// A few values are returned:
///
/// * A mutable slice which references the allocated memory
/// * A function table instance where unwind information is registered
/// * The offset within the current mmap that the slice starts at
///
/// TODO: Add an alignment flag.
fn allocate(&mut self, size: usize) -> Result<(&mut [u8], &mut UnwindRegistry, usize)> {
assert!(size > 0);

if match &self.current {
Some(e) => e.mmap.len() - e.len < size,
None => true,
} {
self.push_current(cmp::max(0x10000, size))?;
}

let e = self.current.as_mut().unwrap();
let old_position = e.len;
e.len += size;

Ok((
&mut e.mmap.as_mut_slice()[old_position..e.len],
&mut e.registry,
old_position,
))
}

/// Calculates the allocation size of the given compiled function.
fn function_allocation_size(func: &CompiledFunction) -> usize {
match &func.unwind_info {
Some(UnwindInfo::WindowsX64(info)) => {
// Windows unwind information is required to be emitted into code memory
// This is because it must be a positive relative offset from the start of the memory
// Account for necessary unwind information alignment padding (32-bit alignment)
((func.body.len() + 3) & !3) + info.emit_size()
}
_ => func.body.len(),
}
}

/// Copies the data of the compiled function to the given buffer.
///
/// This will also add the function to the current unwind registry.
fn copy_function<'a>(
func: &CompiledFunction,
func_start: u32,
buf: &'a mut [u8],
registry: &mut UnwindRegistry,
) -> (u32, &'a mut [u8], &'a mut [VMFunctionBody]) {
let func_len = func.body.len();
let mut func_end = func_start + (func_len as u32);

let (body, mut remainder) = buf.split_at_mut(func_len);
body.copy_from_slice(&func.body);
let vmfunc = Self::view_as_mut_vmfunc_slice(body);

if let Some(UnwindInfo::WindowsX64(info)) = &func.unwind_info {
// Windows unwind information is written following the function body
// Keep unwind information 32-bit aligned (round up to the nearest 4 byte boundary)
let unwind_start = (func_end + 3) & !3;
let unwind_size = info.emit_size();
let padding = (unwind_start - func_end) as usize;

let (slice, r) = remainder.split_at_mut(padding + unwind_size);

info.emit(&mut slice[padding..]);

func_end = unwind_start + (unwind_size as u32);
remainder = r;
}

if let Some(info) = &func.unwind_info {
registry
.register(func_start, func_len as u32, info)
.expect("failed to register unwind information");
}

(func_end, remainder, vmfunc)
}

/// Convert mut a slice from u8 to VMFunctionBody.
fn view_as_mut_vmfunc_slice(slice: &mut [u8]) -> &mut [VMFunctionBody] {
let byte_ptr: *mut [u8] = slice;
@@ -250,24 +149,6 @@ impl CodeMemory {
unsafe { &mut *body_ptr }
}

/// Pushes the current entry and allocates a new one with the given size.
fn push_current(&mut self, new_size: usize) -> Result<()> {
let previous = mem::replace(
&mut self.current,
if new_size == 0 {
None
} else {
Some(CodeMemoryEntry::with_capacity(cmp::max(0x10000, new_size))?)
},
);

if let Some(e) = previous {
self.entries.push(e);
}

Ok(())
}

/// Returns all published segment ranges.
pub fn published_ranges<'a>(&'a self) -> impl Iterator<Item = (usize, usize)> + 'a {
self.entries[..self.published]
@@ -277,29 +158,52 @@ impl CodeMemory {

/// Allocates and copies the ELF image code section into CodeMemory.
/// Returns references to functions and trampolines defined there.
pub(crate) fn allocate_for_object<'a>(
pub fn allocate_for_object<'a, 'b>(
&'a mut self,
obj: &ObjectFile,
unwind_info: &[ObjectUnwindInfo],
) -> Result<CodeMemoryObjectAllocation<'a>> {
obj: &'b [u8],
) -> Result<CodeMemoryObjectAllocation<'a, 'b>> {
let obj = ObjectFile::parse(obj)
.with_context(|| "failed to parse internal ELF compilation artifact")?;
let text_section = obj.section_by_name(".text").unwrap();
let text_section_size = text_section.size() as usize;

if text_section.size() == 0 {
if text_section_size == 0 {
// No code in the image.
return Ok(CodeMemoryObjectAllocation {
buf: &mut [],
code_range: &mut [],
funcs: BTreeMap::new(),
trampolines: BTreeMap::new(),
obj,
});
}

// Allocate chunk memory that spans entire code section.
let (buf, registry, start) = self.allocate(text_section.size() as usize)?;
buf.copy_from_slice(
// Find the platform-specific unwind section, if present, which contains
// unwinding tables that will be used to load unwinding information
// dynamically at runtime.
let unwind_section = obj.section_by_name(UnwindRegistration::section_name());
let unwind_section_size = unwind_section
.as_ref()
.map(|s| s.size() as usize)
.unwrap_or(0);

// Allocate memory for the text section and unwinding information if it
// is present. Then we can copy in all of the code and unwinding memory
// over.
let entry = CodeMemoryEntry::new(text_section_size, unwind_section_size)?;
self.entries.push(entry);
let entry = self.entries.last_mut().unwrap();
entry.mmap.as_mut_slice()[..text_section_size].copy_from_slice(
text_section
.data()
.with_context(|| "cannot read section data")?,
.with_context(|| "cannot read text section data")?,
);
if let Some(section) = unwind_section {
entry.mmap.as_mut_slice()[text_section_size..][..unwind_section_size].copy_from_slice(
section
.data()
.with_context(|| "cannot read unwind section data")?,
);
}

// Track locations of all defined functions and trampolines.
let mut funcs = BTreeMap::new();
@@ -310,43 +214,21 @@ impl CodeMemory {
if let Some(index) = try_parse_func_name(name) {
let is_import = sym.section_index().is_none();
if !is_import {
funcs.insert(
index,
(start + sym.address() as usize, sym.size() as usize),
);
funcs.insert(index, (sym.address() as usize, sym.size() as usize));
}
} else if let Some(index) = try_parse_trampoline_name(name) {
trampolines
.insert(index, (start + sym.address() as usize, sym.size() as usize));
trampolines.insert(index, (sym.address() as usize, sym.size() as usize));
}
}
Err(_) => (),
}
}

// Register all unwind entries for functions and trampolines.
// TODO will `u32` type for start/len be enough for large code base.
for i in unwind_info {
match i {
ObjectUnwindInfo::Func(func_index, info) => {
let (start, len) = funcs.get(&func_index).unwrap();
registry
.register(*start as u32, *len as u32, &info)
.expect("failed to register unwind information");
}
ObjectUnwindInfo::Trampoline(trampoline_index, info) => {
let (start, len) = trampolines.get(&trampoline_index).unwrap();
registry
.register(*start as u32, *len as u32, &info)
.expect("failed to register unwind information");
}
}
}

Ok(CodeMemoryObjectAllocation {
buf: &mut buf[..text_section.size() as usize],
code_range: &mut entry.mmap.as_mut_slice()[..text_section_size],
funcs,
trampolines,
obj,
})
}
}
@@ -1,8 +1,6 @@
//! JIT compilation.

use crate::instantiate::SetupError;
use crate::object::{build_object, ObjectUnwindInfo};
use object::write::Object;
#[cfg(feature = "parallel-compilation")]
use rayon::prelude::*;
use serde::{Deserialize, Serialize};
@@ -10,11 +8,9 @@ use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};
use std::mem;
use wasmparser::WasmFeatures;
use wasmtime_environ::entity::EntityRef;
use wasmtime_environ::wasm::{DefinedMemoryIndex, MemoryIndex};
use wasmtime_environ::{
CompiledFunctions, Compiler as EnvCompiler, CompilerBuilder, ModuleMemoryOffset,
ModuleTranslation, Tunables, TypeTables, VMOffsets,
CompiledFunctions, Compiler as EnvCompiler, CompilerBuilder, ModuleTranslation, Tunables,
TypeTables,
};

/// Select which kind of compilation to use.
@@ -75,8 +71,7 @@ fn _assert_compiler_send_sync() {

#[allow(missing_docs)]
pub struct Compilation {
pub obj: Object,
pub unwind_info: Vec<ObjectUnwindInfo>,
pub obj: Vec<u8>,
pub funcs: CompiledFunctions,
}

@@ -118,40 +113,14 @@ impl Compiler {
.into_iter()
.collect::<CompiledFunctions>();

let dwarf_sections = if self.tunables.generate_native_debuginfo && !funcs.is_empty() {
let ofs = VMOffsets::new(
self.compiler
.triple()
.architecture
.pointer_width()
.unwrap()
.bytes(),
&translation.module,
);
let obj = self.compiler.emit_obj(
&translation,
types,
&funcs,
self.tunables.generate_native_debuginfo,
)?;

let memory_offset = if ofs.num_imported_memories > 0 {
ModuleMemoryOffset::Imported(ofs.vmctx_vmmemory_import(MemoryIndex::new(0)))
} else if ofs.num_defined_memories > 0 {
ModuleMemoryOffset::Defined(
ofs.vmctx_vmmemory_definition_base(DefinedMemoryIndex::new(0)),
)
} else {
ModuleMemoryOffset::None
};
self.compiler
.emit_dwarf(&translation.debuginfo, &funcs, &memory_offset)
.map_err(SetupError::DebugInfo)?
} else {
vec![]
};

let (obj, unwind_info) = build_object(self, &translation, types, &funcs, dwarf_sections)?;

Ok(Compilation {
obj,
unwind_info,
funcs,
})
Ok(Compilation { obj, funcs })
}

/// Run the given closure in parallel if the compiler is configured to do so.
@@ -7,9 +7,7 @@ use crate::code_memory::CodeMemory;
use crate::compiler::{Compilation, Compiler};
use crate::debug::create_gdbjit_image;
use crate::link::link_module;
use crate::object::ObjectUnwindInfo;
use anyhow::{Context, Result};
use object::File as ObjectFile;
use anyhow::Result;
use serde::{Deserialize, Serialize};
use std::ops::Range;
use std::sync::Arc;
@@ -57,9 +55,6 @@ pub struct CompilationArtifacts {
/// ELF image with functions code.
obj: Box<[u8]>,

/// Unwind information for function code.
unwind_info: Box<[ObjectUnwindInfo]>,

/// Descriptions of compiled functions
funcs: PrimaryMap<DefinedFuncIndex, FunctionInfo>,

@@ -110,11 +105,7 @@ impl CompilationArtifacts {
let list = compiler.run_maybe_parallel::<_, _, SetupError, _>(
translations,
|mut translation| {
let Compilation {
obj,
unwind_info,
funcs,
} = compiler.compile(&mut translation, &types)?;
let Compilation { obj, funcs } = compiler.compile(&mut translation, &types)?;

let ModuleTranslation {
mut module,
@@ -129,16 +120,9 @@ impl CompilationArtifacts {
}
}

let obj = obj.write().map_err(|_| {
SetupError::Instantiate(InstantiationError::Resource(anyhow::anyhow!(
"failed to create image memory"
)))
})?;

Ok(CompilationArtifacts {
module: Arc::new(module),
obj: obj.into_boxed_slice(),
unwind_info: unwind_info.into_boxed_slice(),
funcs: funcs
.into_iter()
.map(|(_, func)| FunctionInfo {
@@ -221,34 +205,26 @@ impl CompiledModule {
/// artifacts.
pub fn from_artifacts_list(
artifacts: Vec<CompilationArtifacts>,
compiler: &Compiler,
profiler: &dyn ProfilingAgent,
compiler: &Compiler,
) -> Result<Vec<Arc<Self>>, SetupError> {
compiler.run_maybe_parallel(artifacts, |a| {
CompiledModule::from_artifacts(a, compiler, profiler)
})
compiler.run_maybe_parallel(artifacts, |a| CompiledModule::from_artifacts(a, profiler))
}

/// Creates `CompiledModule` directly from `CompilationArtifacts`.
pub fn from_artifacts(
artifacts: CompilationArtifacts,
compiler: &Compiler,
profiler: &dyn ProfilingAgent,
) -> Result<Arc<Self>, SetupError> {
// Allocate all of the compiled functions into executable memory,
// copying over their contents.
let (code_memory, code_range, finished_functions, trampolines) = build_code_memory(
compiler,
&artifacts.obj,
&artifacts.module,
&artifacts.unwind_info,
)
.map_err(|message| {
SetupError::Instantiate(InstantiationError::Resource(anyhow::anyhow!(
"failed to build code memory for functions: {}",
message
)))
})?;
let (code_memory, code_range, finished_functions, trampolines) =
build_code_memory(&artifacts.obj, &artifacts.module).map_err(|message| {
SetupError::Instantiate(InstantiationError::Resource(anyhow::anyhow!(
"failed to build code memory for functions: {}",
message
)))
})?;

// Register GDB JIT images; initialize profiler and load the wasm module.
let dbg_jit_registration = if artifacts.native_debug_info_present {
@@ -475,21 +451,17 @@ fn create_dbg_image(
}

fn build_code_memory(
compiler: &Compiler,
obj: &[u8],
module: &Module,
unwind_info: &[ObjectUnwindInfo],
) -> Result<(
CodeMemory,
(*const u8, usize),
PrimaryMap<DefinedFuncIndex, *mut [VMFunctionBody]>,
Vec<(SignatureIndex, VMTrampoline)>,
)> {
let obj = ObjectFile::parse(obj).with_context(|| "Unable to read obj")?;

let mut code_memory = CodeMemory::new();

let allocation = code_memory.allocate_for_object(&obj, unwind_info)?;
let allocation = code_memory.allocate_for_object(obj)?;

// Populate the finished functions from the allocation
let mut finished_functions = PrimaryMap::with_capacity(allocation.funcs_len());
@@ -519,14 +491,17 @@ fn build_code_memory(
trampolines.push((i, fnptr));
}

let code_range = allocation.code_range();
link_module(
&allocation.obj,
&module,
allocation.code_range,
&finished_functions,
);

link_module(&obj, &module, code_range, &finished_functions);

let code_range = (code_range.as_ptr(), code_range.len());
let code_range = (allocation.code_range.as_ptr(), allocation.code_range.len());

// Make all code compiled thus far executable.
code_memory.publish(compiler);
code_memory.publish();

Ok((code_memory, code_range, finished_functions, trampolines))
}
@@ -25,7 +25,6 @@ mod compiler;
mod debug;
mod instantiate;
mod link;
mod object;
mod unwind;

pub use crate::code_memory::CodeMemory;

@@ -1,10 +1,10 @@
//! Linking for JIT-compiled code.

use crate::object::utils::try_parse_func_name;
use object::read::{Object, ObjectSection, Relocation, RelocationTarget};
use object::{elf, File, ObjectSymbol, RelocationEncoding, RelocationKind};
use std::ptr::{read_unaligned, write_unaligned};
use wasmtime_environ::entity::PrimaryMap;
use wasmtime_environ::obj::try_parse_func_name;
use wasmtime_environ::wasm::DefinedFuncIndex;
use wasmtime_environ::Module;
use wasmtime_runtime::libcalls;
@@ -1,71 +0,0 @@
//! Object file generation.

use crate::Compiler;
use object::write::Object;
use serde::{Deserialize, Serialize};
use std::collections::BTreeSet;
use wasmtime_environ::isa::unwind::UnwindInfo;
use wasmtime_environ::wasm::{FuncIndex, SignatureIndex};
use wasmtime_environ::{CompiledFunctions, DwarfSection, ModuleTranslation, TypeTables};
use wasmtime_obj::{ObjectBuilder, ObjectBuilderTarget};

pub use wasmtime_obj::utils;

/// Unwind information for object files functions (including trampolines).
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub enum ObjectUnwindInfo {
Func(FuncIndex, UnwindInfo),
Trampoline(SignatureIndex, UnwindInfo),
}

// Builds ELF image from the module `Compilation`.
pub(crate) fn build_object(
compiler: &Compiler,
translation: &ModuleTranslation,
types: &TypeTables,
funcs: &CompiledFunctions,
dwarf_sections: Vec<DwarfSection>,
) -> Result<(Object, Vec<ObjectUnwindInfo>), anyhow::Error> {
const CODE_SECTION_ALIGNMENT: u64 = 0x1000;

let mut unwind_info = Vec::new();

// Preserve function unwind info.
unwind_info.extend(funcs.iter().filter_map(|(index, func)| {
func.unwind_info
.as_ref()
.map(|info| ObjectUnwindInfo::Func(translation.module.func_index(index), info.clone()))
}));

// Build trampolines for every signature that can be used by this module.
let signatures = translation
.module
.functions
.iter()
.filter_map(|(i, sig)| match translation.module.defined_func_index(i) {
Some(i) if !translation.module.possibly_exported_funcs.contains(&i) => None,
_ => Some(*sig),
})
.collect::<BTreeSet<_>>();
let mut trampolines = Vec::with_capacity(signatures.len());
for i in signatures {
let func = compiler
.compiler()
.host_to_wasm_trampoline(&types.wasm_signatures[i])?;
// Preserve trampoline function unwind info.
if let Some(info) = &func.unwind_info {
unwind_info.push(ObjectUnwindInfo::Trampoline(i, info.clone()))
}
trampolines.push((i, func));
}

let target = ObjectBuilderTarget::new(compiler.compiler().triple().architecture)?;
let mut builder = ObjectBuilder::new(target, &translation.module, funcs);
builder
.set_code_alignment(CODE_SECTION_ALIGNMENT)
.set_trampolines(trampolines)
.set_dwarf_sections(dwarf_sections);
let obj = builder.build()?;

Ok((obj, unwind_info))
}
@@ -1,20 +1,10 @@
//! Module for System V ABI unwind registry.

use crate::Compiler;
use anyhow::{bail, Result};
use gimli::{
write::{Address, EhFrame, EndianVec, FrameTable, Writer},
RunTimeEndian,
};
use wasmtime_environ::isa::unwind::UnwindInfo;
use anyhow::Result;

/// Represents a registry of function unwind information for System V ABI.
pub struct UnwindRegistry {
base_address: usize,
functions: Vec<gimli::write::FrameDescriptionEntry>,
frame_table: Vec<u8>,
/// Represents a registration of function unwind information for System V ABI.
pub struct UnwindRegistration {
registrations: Vec<usize>,
published: bool,
}

extern "C" {
@@ -23,100 +13,34 @@ extern "C" {
fn __deregister_frame(fde: *const u8);
}

impl UnwindRegistry {
/// Creates a new unwind registry with the given base address.
pub fn new(base_address: usize) -> Self {
Self {
base_address,
functions: Vec::new(),
frame_table: Vec::new(),
registrations: Vec::new(),
published: false,
}
}

/// Registers a function given the start offset, length, and unwind information.
pub fn register(&mut self, func_start: u32, _func_len: u32, info: &UnwindInfo) -> Result<()> {
if self.published {
bail!("unwind registry has already been published");
}

match info {
UnwindInfo::SystemV(info) => {
self.functions.push(info.to_fde(Address::Constant(
self.base_address as u64 + func_start as u64,
)));
}
_ => bail!("unsupported unwind information"),
}

Ok(())
}

/// Publishes all registered functions.
pub fn publish(&mut self, compiler: &Compiler) -> Result<()> {
if self.published {
bail!("unwind registry has already been published");
}

if self.functions.is_empty() {
self.published = true;
return Ok(());
}

self.set_frame_table(compiler)?;

unsafe {
self.register_frames();
}

self.published = true;

Ok(())
}

fn set_frame_table(&mut self, compiler: &Compiler) -> Result<()> {
let mut table = FrameTable::default();
let cie_id = table.add_cie(match compiler.compiler().create_systemv_cie() {
Some(cie) => cie,
None => bail!("ISA does not support System V unwind information"),
});

let functions = std::mem::replace(&mut self.functions, Vec::new());

for func in functions {
table.add_fde(cie_id, func);
}

let mut eh_frame = EhFrame(EndianVec::new(RunTimeEndian::default()));
table.write_eh_frame(&mut eh_frame).unwrap();

impl UnwindRegistration {
/// Registers precompiled unwinding information with the system.
///
/// The `_base_address` field is ignored here (only used on other
/// platforms), but the `unwind_info` and `unwind_len` parameters should
/// describe an in-memory representation of a `.eh_frame` section. This is
/// typically arranged for by the `wasmtime-obj` crate.
pub unsafe fn new(
_base_address: *mut u8,
unwind_info: *mut u8,
unwind_len: usize,
) -> Result<UnwindRegistration> {
let mut registrations = Vec::new();
if cfg!(any(
all(target_os = "linux", target_env = "gnu"),
target_os = "freebsd"
)) {
// libgcc expects a terminating "empty" length, so write a 0 length at the end of the table.
eh_frame.0.write_u32(0).unwrap();
}

self.frame_table = eh_frame.0.into_vec();

Ok(())
}

unsafe fn register_frames(&mut self) {
if cfg!(any(
all(target_os = "linux", target_env = "gnu"),
target_os = "freebsd"
)) {
// On gnu (libgcc), `__register_frame` will walk the FDEs until an entry of length 0
let ptr = self.frame_table.as_ptr();
__register_frame(ptr);
self.registrations.push(ptr as usize);
// On gnu (libgcc), `__register_frame` will walk the FDEs until an
// entry of length 0
__register_frame(unwind_info);
registrations.push(unwind_info as usize);
} else {
// For libunwind, `__register_frame` takes a pointer to a single FDE
let start = self.frame_table.as_ptr();
let end = start.add(self.frame_table.len());
// For libunwind, `__register_frame` takes a pointer to a single
// FDE. Note that we subtract 4 from the length of unwind info since
// wasmtime-encode .eh_frame sections always have a trailing 32-bit
// zero for the platforms above.
let start = unwind_info;
let end = start.add(unwind_len - 4);
let mut current = start;

// Walk all of the entries in the frame table and register them
@@ -126,31 +50,36 @@ impl UnwindRegistry {
// Skip over the CIE
if current != start {
__register_frame(current);
self.registrations.push(current as usize);
registrations.push(current as usize);
}

// Move to the next table entry (+4 because the length itself is not inclusive)
// Move to the next table entry (+4 because the length itself is
// not inclusive)
current = current.add(len + 4);
}
}

Ok(UnwindRegistration { registrations })
}

pub fn section_name() -> &'static str {
"_wasmtime_eh_frame"
}
}

impl Drop for UnwindRegistry {
impl Drop for UnwindRegistration {
fn drop(&mut self) {
if self.published {
unsafe {
// libgcc stores the frame entries as a linked list in decreasing sort order
// based on the PC value of the registered entry.
//
// As we store the registrations in increasing order, it would be O(N^2) to
// deregister in that order.
//
// To ensure that we just pop off the first element in the list upon every
// deregistration, walk our list of registrations backwards.
for fde in self.registrations.iter().rev() {
__deregister_frame(*fde as *const _);
}
unsafe {
// libgcc stores the frame entries as a linked list in decreasing
// sort order based on the PC value of the registered entry.
//
// As we store the registrations in increasing order, it would be
// O(N^2) to deregister in that order.
//
// To ensure that we just pop off the first element in the list upon
// every deregistration, walk our list of registrations backwards.
for fde in self.registrations.iter().rev() {
__deregister_frame(*fde as *const _);
}
}
}
@@ -1,92 +1,46 @@
//! Module for Windows x64 ABI unwind registry.

use crate::Compiler;
use anyhow::{bail, Result};
use wasmtime_environ::isa::unwind::UnwindInfo;
use std::mem;
use winapi::um::winnt;

/// Represents a registry of function unwind information for Windows x64 ABI.
pub struct UnwindRegistry {
base_address: usize,
functions: Vec<winnt::RUNTIME_FUNCTION>,
published: bool,
pub struct UnwindRegistration {
functions: usize,
}

impl UnwindRegistry {
/// Creates a new unwind registry with the given base address.
pub fn new(base_address: usize) -> Self {
Self {
base_address,
functions: Vec::new(),
published: false,
impl UnwindRegistration {
pub unsafe fn new(
base_address: *mut u8,
unwind_info: *mut u8,
unwind_len: usize,
) -> Result<UnwindRegistration> {
assert!(unwind_info as usize % 4 == 0);
let unit_len = mem::size_of::<winnt::RUNTIME_FUNCTION>();
assert!(unwind_len % unit_len == 0);
if winnt::RtlAddFunctionTable(
unwind_info as *mut _,
(unwind_len / unit_len) as u32,
base_address as u64,
) == 0
{
bail!("failed to register function table");
}

Ok(UnwindRegistration {
functions: unwind_info as usize,
})
}

/// Registers a function given the start offset, length, and unwind information.
pub fn register(&mut self, func_start: u32, func_len: u32, info: &UnwindInfo) -> Result<()> {
if self.published {
bail!("unwind registry has already been published");
}

match info {
UnwindInfo::WindowsX64(_) => {
let mut entry = winnt::RUNTIME_FUNCTION::default();

entry.BeginAddress = func_start;
entry.EndAddress = func_start + func_len;

// The unwind information should be immediately following the function
// with padding for 4 byte alignment
unsafe {
*entry.u.UnwindInfoAddress_mut() = (entry.EndAddress + 3) & !3;
}

self.functions.push(entry);

Ok(())
}
_ => bail!("unsupported unwind information"),
}
}

/// Publishes all registered functions.
pub fn publish(&mut self, _compiler: &Compiler) -> Result<()> {
if self.published {
bail!("unwind registry has already been published");
}

self.published = true;

if !self.functions.is_empty() {
// Windows heap allocations are 32-bit aligned, but assert just in case
assert_eq!(
(self.functions.as_mut_ptr() as u64) % 4,
0,
"function table allocation was not aligned"
);

unsafe {
if winnt::RtlAddFunctionTable(
self.functions.as_mut_ptr(),
self.functions.len() as u32,
self.base_address as u64,
) == 0
{
bail!("failed to register function table");
}
}
}

Ok(())
pub fn section_name() -> &'static str {
"_wasmtime_winx64_unwind"
}
}

impl Drop for UnwindRegistry {
impl Drop for UnwindRegistration {
fn drop(&mut self) {
if self.published {
unsafe {
winnt::RtlDeleteFunctionTable(self.functions.as_mut_ptr());
}
unsafe {
winnt::RtlDeleteFunctionTable(self.functions as _);
}
}
}