27 Commits

Author SHA1 Message Date
Dan Gohman
c59bb8db39 Update several dependencies. (#6171)
This updates to rustix 0.37.13, which contains features we can use to
implement more of the wasi-sockets API in wasi-common. This also pulls in
several other updates to avoid having multiple versions of rustix.

This does introduce multiple versions of windows-sys, as the errno and tokio
crates are currently using 0.45 while rustix and other dependencies have
updated to 0.48; PRs updating these are already in flight so this will
hopefully be resolved soon.

It also includes cap-std 1.0.14, which disables the use of `openat2` and
`statx` on Android, fixing a bug where some Android devices crash the
process when those syscalls are executed.
2023-04-20 14:03:49 +00:00
Alex Crichton
4ad86752de Fix libcall relocations for precompiled modules (#5608)
* Fix libcall relocations for precompiled modules

This commit fixes some asserts and support for relocation libcalls in
precompiled modules loaded from disk. In doing so this reworks how mmaps
are managed for files from disk. Previously, all non-file-backed `Mmap`
entries were read/write while file-backed versions were readonly. This commit changes
this such that all `Mmap` objects, even if they're file-backed, start as
read/write. The file-based versions all use copy-on-write to preserve
the private-ness of the mapping.
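
As a rough illustration of the copy-on-write mapping described above (a
minimal sketch using `rustix`, not the actual `Mmap` code; the function name
is hypothetical):

```rust
use std::fs::File;
use rustix::mm::{mmap, MapFlags, ProtFlags};

/// Map `len` bytes of `file` privately (copy-on-write) and read/write, so
/// relocations can later be applied in memory without modifying the file.
unsafe fn map_copy_on_write(file: &File, len: usize) -> rustix::io::Result<*mut u8> {
    let ptr = mmap(
        std::ptr::null_mut(),               // let the kernel choose the address
        len,
        ProtFlags::READ | ProtFlags::WRITE, // start read/write, like in-memory images
        MapFlags::PRIVATE,                  // private mapping => copy-on-write
        file,
        0,                                  // offset into the file
    )?;
    Ok(ptr.cast())
}
```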

This is not intended to change anything functionally. It does leave somewhat
more memory writable after a module is loaded, but the text section, for
example, is still left as read/execute when loading is finished. Additionally
this makes modules compiled in memory more consistent with modules loaded
from disk.

* Update a comment

* Force images to become readonly during publish

This marks compiled images as entirely readonly during the
`CodeMemory::publish` step which happens just before the text section
becomes executable. This ensures that all images, no matter where they
come from, are guaranteed frozen before they start executing.
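
A hedged sketch of that publish step (illustrative only, not the real
`CodeMemory::publish`; it assumes `image` and `text` describe page-aligned
ranges):

```rust
use rustix::mm::{mprotect, MprotectFlags};

/// Freeze the whole compiled image, then flip just the text section to
/// read/execute so it can run.
unsafe fn publish(image: *mut u8, image_len: usize,
                  text: *mut u8, text_len: usize) -> rustix::io::Result<()> {
    mprotect(image.cast(), image_len, MprotectFlags::READ)?;
    mprotect(text.cast(), text_len, MprotectFlags::READ | MprotectFlags::EXEC)?;
    Ok(())
}
```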
2023-01-25 12:09:15 -06:00
Anton Kirilov
d8b290898c Initial forward-edge CFI implementation (#3693)
* Initial forward-edge CFI implementation

Give the user the option to start all basic blocks that are targets
of indirect branches with the BTI instruction introduced by the
Branch Target Identification extension to the Arm instruction set
architecture.
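
For reference, enabling the option might look roughly like the following (a
sketch; the `use_bti` flag name is assumed from this PR's description and is
specific to the AArch64 backend):

```rust
use cranelift_codegen::isa;
use cranelift_codegen::settings::Configurable;

fn aarch64_isa_builder_with_bti() -> isa::Builder {
    let triple: target_lexicon::Triple =
        "aarch64-unknown-linux-gnu".parse().expect("valid triple");
    let mut isa_builder = isa::lookup(triple).expect("aarch64 backend enabled");
    // Assumed flag name: start indirect-branch targets with a BTI instruction.
    isa_builder.set("use_bti", "true").expect("flag should exist on aarch64");
    isa_builder
}
```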

Copyright (c) 2022, Arm Limited.

* Refactor `from_artifacts` to avoid second `make_executable` (#1)

This involves "parsing" twice, but it's only parsing the header of an ELF
file, so it's not a very intensive operation and should be ok to do twice.

* Address the code review feedback

Copyright (c) 2022, Arm Limited.

Co-authored-by: Alex Crichton <alex@alexcrichton.com>
2022-09-08 09:35:58 -05:00
Alex Crichton
1321c234e5 Remove dependency on more-asserts (#4408)
* Remove dependency on `more-asserts`

In my recent adventures to do a bit of gardening on our dependencies I
noticed that there's a new major version for the `more-asserts` crate.
Instead of updating to this though I've opted to instead remove the
dependency since I don't think we heavily lean on this crate and
otherwise one-off prints are probably sufficient to avoid the need for
pulling in a whole crate for this.
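
For illustration, the kind of replacement this implies (hypothetical call
site, not the actual diff):

```rust
fn check_limits(stack_size: usize, page_size: usize) {
    // Before, with the crate:
    // more_asserts::assert_ge!(stack_size, page_size);

    // After, with std only; keep the operands in the message so a failure
    // still prints both values:
    assert!(
        stack_size >= page_size,
        "stack_size ({stack_size}) must be at least page_size ({page_size})"
    );
}
```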

* Remove exemption for `more-asserts`
2022-07-26 16:47:33 +00:00
Alex Crichton
601e8f3094 Remove dependency on the region crate (#4407)
This commit removes Wasmtime's dependency on the `region` crate. The
motivation for this came about when I was updating dependencies and saw
that `region` had a new major version at 3.0.0 as opposed to our
currently used 2.3 track. In reviewing the use cases of `region` within
Wasmtime I found two trends in particular which motivated this commit:

* Some unix-specific areas of `wasmtime_runtime` use
  `rustix::mm::mprotect` instead of `region::protect` already. This
  means that the usage of `region::protect` for changing virtual memory
  protections was already inconsistent.

* Many uses of `region::protect` were already in unix-specific regions
  which could make use of `rustix`.

Overall I opted to remove the dependency on the `region` crate to avoid
chasing its versions over time. Unix-specific changes of protections
were easily changed to `rustix::mm::mprotect`. There were two locations
where a windows/unix split is now required and I subjectively ruled
"that seems ok". Finally removing `region` also meant that the "what is
the current page size" query needed to be inlined into
`wasmtime_runtime`, which I have also subjectively ruled "that seems
fine".

Finally one final refactoring here was that the `unix.rs` and `linux.rs`
split for the pooling allocator was merged. These two files already only
differed in one function so I slapped a `cfg_if!` in there to help
reduce the duplication.
2022-07-07 21:28:25 +00:00
Alex Crichton
df1502531d Migrate from winapi to windows-sys (#4346)
* Migrate from `winapi` to `windows-sys`

I believe that Microsoft itself is supporting the development of
`windows-sys`, and it's also used by `cap-std` now, so this switches
Wasmtime's dependencies on Windows APIs from the `winapi` crate to the
`windows-sys` crate. We still have `winapi` in our dependency graph, but
that may get phased out over time.

* Make windows-sys a target-specific dependency
2022-06-28 18:02:41 +00:00
Dan Gohman
fa36e86f2c Update WASI to cap-std 0.25 and windows-sys. (#4302)
This updates to rustix 0.35.6, and updates wasi-common to use cap-std 0.25 and
windows-sys (instead of winapi).

Changes include:

 - Better error code mappings on Windows.
 - Fixes undefined references to `utimensat` on Darwin.
 - Fixes undefined references to `preadv64` and `pwritev64` on Android.
 - Updates to io-lifetimes 0.7, which matches the io_safety API in Rust.
 - y2038 bug fixes for 32-bit platforms
2022-06-23 10:47:15 -07:00
Alex Crichton
c0c368d151 Use mmap'd *.cwasm as a source for memory initialization images (#3787)
* Skip memfd creation with precompiled modules

This commit updates the memfd support internally to not actually use a
memfd if a compiled module originally came from disk via the
`wasmtime::Module::deserialize_file` API. In this situation we already
have a file descriptor open and there's no need to copy a module's heap
image to a new file descriptor.

To facilitate a new source of `mmap` the currently-memfd-specific-logic
of creating a heap image is generalized to a new form of
`MemoryInitialization` which is attempted for all modules at
module-compile-time. This means that the serialized artifact to disk
will have the memory image in its entirety waiting for us. Furthermore
the memory image is ensured to be padded and aligned carefully to the
target system's page size, notably meaning that the data section in the
final object file is page-aligned and the size of the data section is
also page aligned.
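
The alignment constraint amounts to a small rounding computation, sketched
here (hypothetical helper, not the actual implementation):

```rust
/// Round `len` up to a multiple of the (power-of-two) host page size so the
/// data section can be mapped directly from the file.
fn page_align(len: usize, page_size: usize) -> usize {
    debug_assert!(page_size.is_power_of_two());
    (len + (page_size - 1)) & !(page_size - 1)
}

#[test]
fn rounds_up() {
    assert_eq!(page_align(5000, 4096), 8192);
    assert_eq!(page_align(4096, 4096), 4096);
}
```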

This means that when a precompiled module is mapped from disk we can
reuse the underlying `File` to mmap all initial memory images. As a result
the offset-within-the-memory-mapped-file can differ for memfd-vs-not, but
that's just another piece of state to track in the memfd implementation.

In the limit this waters down the term "memfd" for this technique of
quickly initializing memory because we no longer use memfd
unconditionally (only when the backing file isn't available).
This does however open up an avenue in the future to porting this
support to other OSes because while `memfd_create` is Linux-specific
both macOS and Windows support mapping a file with copy-on-write. This
porting isn't done in this PR and is left for a future refactoring.

Closes #3758

* Enable "memfd" support on all unix systems

Cordon off the Linux-specific bits and enable the memfd support to
compile and run on platforms like macOS which have a Linux-like `mmap`.
This only works if a module is mapped from a precompiled module file on
disk, but that's better than not supporting it at all!

* Fix linux compile

* Use `Arc<File>` instead of `MmapVecFileBacking`

* Use a named struct instead of mysterious tuples

* Comment about unsafety in `Module::deserialize_file`

* Fix tests

* Fix uffd compile

* Always align data segments

No need to have conditional alignment since their sizes are all aligned
anyway

* Update comment in build.rs

* Use rustix, not `region`

* Fix some confusing logic/names around memory indexes

These functions all work with memory indexes, not specifically defined
memory indexes.
2022-02-10 15:40:40 -06:00
Dan Gohman
ea0cb971fb Update to rustix 0.26.2. (#3521)
This pulls in a fix for Android, where Android's seccomp policy on older
versions is to make `openat2` irrecoverably crash the process, so we have
to do a version check up front rather than relying on `ENOSYS` to
determine if `openat2` is supported.

And it pulls in the fix for the link errors when multiple versions of
rsix/rustix are linked in.

And it has updates for two crate renamings: rsix has been renamed to
rustix, and unsafe-io has been renamed to io-extras.
2021-11-15 10:21:13 -08:00
Dan Gohman
e5ebef1b94 Use empty() instead of NONE with rsix flags types.
`empty()` is provided by all `bitflags` types, so it's more idiomatic
than having `NONE` values.
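
A one-line illustration (assumed call site, using the crate's later `rustix`
name):

```rust
use rustix::fs::OFlags;

fn no_flags() -> OFlags {
    OFlags::empty() // previously spelled with a crate-specific `NONE` constant
}
```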
2021-09-30 08:14:13 -07:00
Dan Gohman
47490b4383 Use rsix to make system calls in Wasmtime. (#3355)
* Use rsix to make system calls in Wasmtime.

`rsix` is a system call wrapper crate that we use in `wasi-common`,
which can provide the following advantages in the rest of Wasmtime:

 - It eliminates some `unsafe` blocks in Wasmtime's code. There's
   still an `unsafe` block in the library, but this way, the `unsafe`
   is factored out and clearly scoped.

 - And, it makes error handling more consistent, factoring out code for
   checking return values and `io::Error::last_os_error()`, and code that
   does `errno::set_errno(0)`.

This doesn't cover *all* system calls; `rsix` doesn't implement
signal-handling APIs, and this doesn't cover calls made through `std` or
crates like `userfaultfd`, `rand`, and `region`.
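
An illustrative contrast of the two styles (hypothetical helpers, not code
from this PR; shown with the crate's later name `rustix`):

```rust
use std::fs::File;
use std::os::unix::io::AsRawFd;

/// The manual pattern: raw libc call plus errno handling.
fn set_nonblocking_raw(file: &File) -> std::io::Result<()> {
    let fd = file.as_raw_fd();
    let flags = unsafe { libc::fcntl(fd, libc::F_GETFL) };
    if flags < 0 {
        return Err(std::io::Error::last_os_error());
    }
    if unsafe { libc::fcntl(fd, libc::F_SETFL, flags | libc::O_NONBLOCK) } < 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}

/// The wrapped pattern: no `unsafe`, errors come back as a `Result`.
fn set_nonblocking(file: &File) -> rustix::io::Result<()> {
    use rustix::fs::{fcntl_getfl, fcntl_setfl, OFlags};
    fcntl_setfl(file, fcntl_getfl(file)? | OFlags::NONBLOCK)
}
```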
2021-09-17 15:28:56 -07:00
Dan Gohman
05d113148d Use std::alloc::alloc instead of libc::posix_memalign.
This makes Cranelift use the Rust `alloc` API for its allocations,
rather than directly calling into `libc`, which makes it respect
the `#[global_allocator]` configuration.

Also, use `region::page::ceil` instead of having our own copies of
that logic.
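
The pattern being switched to looks roughly like this (a sketch, not
Cranelift's actual allocation code):

```rust
use std::alloc::{alloc, dealloc, Layout};

fn with_aligned_buffer(size: usize, align: usize) {
    // Goes through Rust's allocator, and therefore through any configured
    // #[global_allocator], unlike a direct libc::posix_memalign call.
    let layout = Layout::from_size_align(size, align).expect("valid layout");
    let ptr = unsafe { alloc(layout) };
    assert!(!ptr.is_null(), "allocation failed");
    // ... use the buffer ...
    unsafe { dealloc(ptr, layout) };
}
```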
2021-08-31 15:49:50 -07:00
Alex Crichton
9e0c910023 Add a Module::deserialize_file method (#3266)
* Add a `Module::deserialize_file` method

This commit adds a new method to the `wasmtime::Module` type,
`deserialize_file`. This is intended to be the same as the `deserialize`
method except for the serialized module is present as an on-disk file.
This enables Wasmtime to internally use `mmap` to avoid copying bytes
around and generally makes loading a module much faster.
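
A hedged usage sketch (hypothetical path; in current Wasmtime both
deserialization APIs are `unsafe` because the input must be trusted,
previously-serialized output):

```rust
use wasmtime::{Engine, Module};

fn load_precompiled(engine: &Engine) -> anyhow::Result<Module> {
    // Maps the file rather than reading it into a buffer first.
    let module = unsafe { Module::deserialize_file(engine, "app.cwasm")? };
    Ok(module)
}
```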

A C API is added in this commit as well for various bindings to use this
accelerated path now as well. Another option perhaps for a Rust-based
API is to have an API taking a `File` itself to allow for a custom file
descriptor in one way or another, but for now that's left for a possible
future refactoring if we find a use case.

* Fix compat with main - handle readdonly mmap

* wip

* Try to fix Windows support
2021-08-31 13:05:51 -05:00
Alex Crichton
e68aa99588 Implement the memory64 proposal in Wasmtime (#3153)
* Implement the memory64 proposal in Wasmtime

This commit implements the WebAssembly [memory64 proposal][proposal] in
both Wasmtime and Cranelift. In terms of work done Cranelift ended up
needing very little work here since most of it was already prepared for
64-bit memories at one point or another. Most of the work in Wasmtime is
largely refactoring, changing a bunch of `u32` values to something else.

A number of internal and public interfaces are changing as a result of
this commit, for example:

* Accessors on `wasmtime::Memory` that work with pages now all return
  `u64` unconditionally rather than `u32`. This makes it possible to
  accommodate 64-bit memories with this API, but we may also want to
  consider `usize` here at some point since the host can't grow past
  `usize`-limited pages anyway.

* The `wasmtime::Limits` structure is removed in favor of
  minimum/maximum methods on table/memory types.

* Many libcall intrinsics called by jit code now unconditionally take
  `u64` arguments instead of `u32`. Return values are `usize`, however,
  since the return value, if successful, is always bounded by host
  memory while arguments can come from any guest.

* The `heap_addr` clif instruction now takes a 64-bit offset argument
  instead of a 32-bit one. It turns out that the legalization of
  `heap_addr` already worked with 64-bit offsets, so this change was
  fairly trivial to make.

* The runtime implementation of mmap-based linear memories has changed
  to largely work in `usize` quantities in its API and in bytes instead
  of pages. This simplifies various aspects and reflects that
  mmap-memories are always bound by `usize` since that's what the host
  is using to address things, and additionally most calculations care
  about bytes rather than pages except for the very edge where we're
  going to/from wasm.

Overall I've tried to minimize the amount of `as` casts as possible,
using checked `try_from` and checked arithmetic with either error
handling or explicit `unwrap()` calls to tell us about bugs in the
future. Most locations have relatively obvious things to do with various
implications on various hosts, and I think they should all be roughly of
the right shape but time will tell. I mostly relied on the compiler
complaining that various types weren't aligned to figure out
type-casting, and I manually audited some of the more obvious locations.
I suspect we have a number of hidden locations that will panic on 32-bit
hosts if 64-bit modules try to run there, but otherwise I think we
should be generally ok (famous last words). In any case I wouldn't want
to enable this by default naturally until we've fuzzed it for some time.
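
The try_from/checked-arithmetic style described above looks roughly like this
(illustrative helper, not code from this PR):

```rust
/// Turn a 64-bit wasm address plus constant offset into a host `usize`,
/// reporting failure instead of silently truncating on 32-bit hosts.
fn checked_host_offset(wasm_addr: u64, offset: u64) -> Option<usize> {
    let addr = usize::try_from(wasm_addr).ok()?; // fails for >4 GiB on 32-bit hosts
    let offset = usize::try_from(offset).ok()?;
    addr.checked_add(offset)
}
```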

In terms of the actual underlying implementation, no one should expect
memory64 to be all that fast. Right now it's implemented with
"dynamic" heaps which have a few consequences:

* All memory accesses are bounds-checked. I'm not sure how aggressively
  Cranelift tries to optimize out bounds checks, but I suspect not a ton
  since we haven't stressed this much historically.

* Heaps are always precisely sized. This means that every call to
  `memory.grow` will incur a `memcpy` of memory from the old heap to the
  new. We probably want to at least look into `mremap` on Linux and
  otherwise try to implement schemes where dynamic heaps have some
  reserved pages to grow into to help amortize the cost of
  `memory.grow`.

The memory64 spec test suite is scheduled to now run on CI, but as with
all the other spec test suites it's really not all that comprehensive.
I've tried adding more tests for basic things as I've had to implement
guards for them, but I wouldn't really consider the testing adequate
from just this PR itself. I did try to take care in one test to actually
allocate a 4gb+ heap and then avoid running that in the pooling
allocator or in emulation because otherwise that may fail or take
excessively long.

[proposal]: https://github.com/WebAssembly/memory64/blob/master/proposals/memory64/Overview.md

* Fix some tests

* More test fixes

* Fix wasmtime tests

* Fix doctests

* Revert to 32-bit immediate offsets in `heap_addr`

This commit updates the generation of addresses in wasm code to always
use 32-bit offsets for `heap_addr`, and if the calculated offset is
bigger than 32 bits we emit a manual add with an overflow check.

* Disable memory64 for spectest fuzzing

* Fix wrong offset being added to heap addr

* More comments!

* Clarify bytes/pages
2021-08-12 09:40:20 -05:00
Alex Crichton
7ce46043dc Add guard pages to the front of linear memories (#2977)
* Add guard pages to the front of linear memories

This commit implements a safety feature for Wasmtime to place guard
pages before the allocation of all linear memories. Guard pages placed
after linear memories are typically present for performance (at least)
because they can help elide bounds checks. Guard pages before a linear
memory, however, are never strictly needed for performance or features.
The intention of a preceding guard page is to help insulate against bugs
in Cranelift or other code generators, such as CVE-2021-32629.

This commit adds a `Config::guard_before_linear_memory` configuration
option, defaulting to `true`, which indicates whether guard pages should
be present both before and after linear memories. Guard
regions continue to be controlled by
`{static,dynamic}_memory_guard_size` methods.
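
Configuring this looks roughly as follows (a sketch using the option names
above; the specific sizes are illustrative):

```rust
use wasmtime::{Config, Engine};

fn engine_with_guards() -> anyhow::Result<Engine> {
    let mut config = Config::new();
    config.guard_before_linear_memory(true);    // the new option (default: true)
    config.static_memory_guard_size(2 << 30);   // 2 GiB trailing guard
    config.dynamic_memory_guard_size(64 << 10); // 64 KiB trailing guard
    Ok(Engine::new(&config)?)
}
```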

The implementation here affects both on-demand allocated memories as
well as the pooling allocator for memories. For on-demand memories this
adjusts the size of the allocation as well as adjusts the calculations
for the base pointer of the wasm memory. For the pooling allocator this
will place a singular extra guard region at the very start of the
allocation for memories. Since linear memories in the pooling allocator
are contiguous, every memory already had a preceding guard region in
memory; it was just the previous memory's trailing guard region. Only the
first memory needed this extra guard.

I've attempted to write some tests to help test all this, but this is
all somewhat tricky to test because the settings are pretty far away
from the actual behavior. I think, though, that the tests added here
should help cover various use cases and help us have confidence in
tweaking the various `Config` settings beyond their defaults.

Note that this also contains a semantic change where
`InstanceLimits::memory_reservation_size` has been removed. Instead this
field is now inferred from the `static_memory_maximum_size` and guard
size settings. This should hopefully remove some duplication in these
settings, canonicalizing on the guard-size/static-size settings as the
way to control memory sizes and virtual reservations.

* Update config docs

* Fix a typo

* Fix benchmark

* Fix wasmtime-runtime tests

* Fix some more tests

* Try to fix uffd failing test

* Review items

* Tweak 32-bit defaults

Makes the pooling allocator a bit more reasonable by default on 32-bit
with these settings.
2021-06-18 09:57:08 -05:00
Peter Huene
ff840b3d3b More PR feedback changes.
* More use of `anyhow`.
* Change `make_accessible` into `protect_linear_memory` to better demonstrate
  what it is used for; this will make the uffd implementation make a little
  more sense.
* Remove `create_memory_map` in favor of just creating the `Mmap` instances in
  the pooling allocator. This also removes the need for `MAP_NORESERVE` in the
  uffd implementation.
* Moar comments.
* Remove `BasePointerIterator` in favor of `impl Iterator`.
* The uffd implementation now only monitors linear memory pages and will only
  receive faults on pages that could potentially be accessible and never on a
  statically known guard page.
* Stop allocating memory or table pools if the maximum limit of the memory or
  table is 0.
2021-03-04 20:14:40 -08:00
Peter Huene
5b2f8789b2 Allow zero-sized allocations on Windows for Mmap. 2021-03-04 18:18:52 -08:00
Peter Huene
e71ccbf9bc Implement the pooling instance allocator.
This commit implements the pooling instance allocator.

The allocation strategy can be set with `Config::with_allocation_strategy`.

The pooling strategy uses the pooling instance allocator to preallocate a
contiguous region of memory for instantiating modules that adhere to various
limits.

The intention of the pooling instance allocator is to reserve as much of the
host address space needed for instantiating modules ahead of time and to reuse
committed memory pages wherever possible.
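
A hedged configuration sketch, using the API names from this era (the enum
variant's exact fields and limit types are assumptions, not taken from this
diff):

```rust
use wasmtime::{Config, InstanceAllocationStrategy, InstanceLimits, ModuleLimits,
               PoolingAllocationStrategy};

fn pooling_config() -> Config {
    let mut config = Config::new();
    let _ = config.with_allocation_strategy(InstanceAllocationStrategy::Pooling {
        strategy: PoolingAllocationStrategy::NextAvailable,
        module_limits: ModuleLimits::default(),
        instance_limits: InstanceLimits::default(),
    });
    config
}
```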
2021-03-04 18:18:51 -08:00
Chris Fallin
b8e31d7c8e Fix build warnings (errors on CI) due to mmap flag rename and deprecation. 2020-06-03 09:48:22 -07:00
Alex Crichton
317f598969 Update CodeMemory to be Send + Sync (#780)
* Update `CodeMemory` to be `Send + Sync`

This commit updates the `CodeMemory` type in wasmtime to be both `Send`
and `Sync` by updating the implementation of `Mmap` to not store raw
pointers. This avoids the need for an `unsafe impl` and leaves the
unsafety as it is currently.
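
Illustratively (not the actual `Mmap` definition), the change is of this
flavor: a raw-pointer field opts a type out of the auto `Send`/`Sync` impls,
while storing the address as an integer keeps the same information and leaves
the unsafety in the accessors:

```rust
// struct Mmap { ptr: *mut u8, len: usize }   // !Send + !Sync because of *mut u8

struct Mmap {
    ptr: usize, // address stored as an integer => auto Send + Sync
    len: usize,
}

impl Mmap {
    fn as_ptr(&self) -> *const u8 {
        self.ptr as *const u8
    }

    fn len(&self) -> usize {
        self.len
    }
}
```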

* Run rustfmt

* Rename `offset` to `ptr`
2020-01-09 16:22:49 -06:00
Alex Crichton
eb1991c579 Revert "Remove the need for HostRef<Module> (#778)"
This reverts commit 7b33f1c619.

Pushed a few extra commits by accident, so reverting this.
2020-01-08 12:44:59 -08:00
Alex Crichton
7b33f1c619 Remove the need for HostRef<Module> (#778)
* Remove the need for `HostRef<Module>`

This commit continues previous work and also #708 by removing the need
to use `HostRef<Module>` in the API of the `wasmtime` crate. The API
changes performed here are:

* The `Module` type is now itself internally reference counted.
* The `Module::store` function now returns the `Store` that was used to
  create a `Module`
* Documentation for `Module` and its methods have been expanded.

* Fix compilation of test programs harness

* Fix the python extension

* Update `CodeMemory` to be `Send + Sync`

This commit updates the `CodeMemory` type in wasmtime to be both `Send`
and `Sync` by updating the implementation of `Mmap` to not store raw
pointers. This avoids the need for an `unsafe impl` and leaves the
unsafety as it is currently.

* Fix a typo
2020-01-08 14:42:37 -06:00
XAMPPRocky
907e7aac01 Clippy fixes (#692) 2019-12-24 12:50:07 -08:00
Peter Huene
6750605a61 Fix AppVerifier check regarding invalid call to VirtualFree. (#697)
Calls to `VirtualFree` that pass `MEM_RELEASE` must specify a size of 0
as the OS will be freeing the original range for the given base address.

The calls to free `MMap` memory on Windows were silently failing because
of an incorrect assertion (on Windows, `VirtualFree` returns non-zero
for success).
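
The corrected call is of this shape (a sketch with the era's `winapi`
bindings, not the actual diff):

```rust
#[cfg(windows)]
unsafe fn release(ptr: *mut u8) {
    use winapi::um::memoryapi::VirtualFree;
    use winapi::um::winnt::MEM_RELEASE;

    // MEM_RELEASE requires dwSize == 0; the OS frees the original reservation.
    let ok = VirtualFree(ptr.cast(), 0, MEM_RELEASE);
    // Success is a *non-zero* return value.
    assert_ne!(ok, 0, "VirtualFree failed: {}", std::io::Error::last_os_error());
}
```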

This was caught via AppVerifier while investigating a heap overrun issue
on a different PR.
2019-12-10 23:03:36 -08:00
Alex Crichton
39e57e3e9a Migrate back to std:: stylistically (#554)
* Migrate back to `std::` stylistically

This commit moves away from idioms such as `alloc::` and `core::` as
imports of standard data structures and types. Instead it migrates all
crates to uniformly use `std::` for importing standard data structures
and types. This also removes the `std` and `core` features from all
crates to and removes any conditional checking for `feature = "std"`

All of this support was previously added in #407 in an effort to make
wasmtime/cranelift "`no_std` compatible". Unfortunately though this
change comes at a cost:

* The usage of `alloc` and `core` isn't idiomatic. In particular, trying to
  switch between types like `HashMap` from `std` and from `hashbrown` makes
  imports surprising in some cases.
* Unfortunately there was no CI check that crates were `no_std`, so none
  of them actually were. Many crates still imported from `std` or
  depended on crates that used `std`.

It's important to note, however, that **this does not mean that wasmtime
will not run in embedded environments**. The style and idioms of the code
today aren't ready in Rust to support this degree of multiplexing, and they
make it somewhat difficult to keep up with the style of `wasmtime`.
Instead it's intended that embedded runtime support will be added as
necessary. Currently only `std` is necessary to build `wasmtime`, and
platforms that natively need to execute `wasmtime` will need to use a
Rust target that supports `std`. Note though that not all of `std` needs
to be supported, but instead much of it could be configured off to
return errors, and `wasmtime` would be configured to gracefully handle
errors.

The goal of this PR is to move `wasmtime` back to idiomatic usage of
features/`std`/imports/etc and help development in the short-term.
Long-term when platform concerns arise (if any) they can be addressed by
moving back to `no_std` crates (but fixing the issues mentioned above)
or ensuring that the target in Rust has `std` available.

* Start filling out platform support doc
2019-11-18 22:04:06 -08:00
Dan Gohman
1a0ed6e388 Use the more-asserts crate in more places.
This provides assert_le, assert_lt, and so on, which can print the
values of the operands.
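
For example (hypothetical values):

```rust
use more_asserts::{assert_le, assert_lt};

fn check(offset: usize, len: usize) {
    // On failure these print both operands, unlike a bare `assert!(a < b)`.
    assert_lt!(offset, len);
    assert_le!(offset + 1, len);
}
```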
2019-11-08 15:24:53 -08:00
Dan Gohman
22641de629 Initial reorg.
This is largely the same as #305, but updated for the current tree.
2019-11-08 06:35:40 -08:00