Cache configuration documentation

CACHE_CONFIGURATION.md (new file, 279 lines)
@@ -0,0 +1,279 @@
Wasmtime Cache Configuration
============================

The cache configuration file uses the [toml] format.
You can create a configuration file at the default location with:
```
$ wasmtime --create-cache-config
```
It will print the location regardless of whether it succeeds.
Please refer to the `--help` message for using a custom location.

All settings, except `enabled`, are **optional**.
If a setting is not specified, its **default** value is used.
***Thus, if you don't know what values to use, don't specify them.***
The default values might be tuned in the future.

Wasmtime assumes all the options are in the `[cache]` section.

Example config:
```toml
[cache]
enabled = true
directory = "/nfs-share/wasmtime-cache/"
cleanup-interval = "30m"
files-total-size-soft-limit = "1Gi"
```

Please refer to the [cache system] section to learn how it works.

If you think some default value should be tuned, some new setting
should be introduced, or some behavior should be changed, you are
welcome to discuss it and contribute to [the Wasmtime repository].

[the Wasmtime repository]: https://github.com/CraneStation/wasmtime

Setting `enabled`
-----------------
- **type**: boolean
- **format**: `true | false`
- **default**: `true`

Specifies whether the cache system is used.

This field is *mandatory*.
The default value is used only when no configuration file is specified
and none exists at the default location.

[`enabled`]: #setting-enabled

Setting `directory`
-----------------
- **type**: string (path)
- **default**: the `cache_dir` looked up via the [directories] crate

Specifies where the cache directory is. Must be an absolute path.

[`directory`]: #setting-directory

Setting `worker-event-queue-size`
-----------------
- **type**: string (SI prefix)
- **format**: `"{integer}(K | M | G | T | P)?"`
- **default**: `"16"`

Size of the [cache worker] event queue.
If the queue is full, incoming cache usage events will be dropped.

[`worker-event-queue-size`]: #setting-worker-event-queue-size
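
For illustration (the chosen value is arbitrary), the setting accepts either a plain
integer or a decimal SI prefix, so `"4K"` is parsed as 4000 queued events:

```toml
[cache]
enabled = true
# Plain integer ("16") or SI prefix ("4K" = 4000) are both accepted.
worker-event-queue-size = "4K"
```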

Setting `baseline-compression-level`
------------------
- **type**: integer
- **default**: `3`, the default zstd compression level

Compression level used when a new cache file is being written by the [cache system].
Wasmtime uses [zstd] compression.

[`baseline-compression-level`]: #setting-baseline-compression-level

Setting `optimized-compression-level`
------------------
- **type**: integer
- **default**: `20`

Compression level used when the [cache worker] decides to recompress a cache file.
Wasmtime uses [zstd] compression.

[`optimized-compression-level`]: #setting-optimized-compression-level
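
A sketch combining both levels (the values are illustrative). This commit validates the
levels against zstd's supported range (0-21) and rejects a baseline level higher than
the optimized one:

```toml
[cache]
enabled = true
# Applied on every cache write, so keep it cheap.
baseline-compression-level = 3
# Used only when the cache worker recompresses a frequently used file.
optimized-compression-level = 20
```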

Setting `optimized-compression-usage-counter-threshold`
------------------
- **type**: string (SI prefix)
- **format**: `"{integer}(K | M | G | T | P)?"`
- **default**: `"256"`

One of the conditions for the [cache worker] to recompress a cache file
is that the file's usage count exceeds this threshold.

[`optimized-compression-usage-counter-threshold`]: #setting-optimized-compression-usage-counter-threshold

Setting `cleanup-interval`
------------------
- **type**: string (duration)
- **format**: `"{integer}(s | m | h | d)"`
- **default**: `"1h"`

When the [cache worker] is notified about a cache file being updated by the [cache system]
and this interval has passed since the last cleanup,
the worker will attempt a new cleanup.

Please also refer to [`allowed-clock-drift-for-files-from-future`].

[`cleanup-interval`]: #setting-cleanup-interval
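
Durations are a single integer followed by a single unit; the values below are only
examples:

```toml
[cache]
enabled = true
# Valid durations: "90s", "45m", "2h", "1d".
# Units cannot be combined, so "1h30m" is not accepted.
cleanup-interval = "2h"
optimizing-compression-task-timeout = "45m"
```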

Setting `optimizing-compression-task-timeout`
------------------
- **type**: string (duration)
- **format**: `"{integer}(s | m | h | d)"`
- **default**: `"30m"`

When the [cache worker] decides to recompress a cache file, it makes sure that
no other worker has started the task for this file within the last
[`optimizing-compression-task-timeout`] interval.
If some worker has started working on it, other workers skip this task.

Please also refer to the [`allowed-clock-drift-for-files-from-future`] section.

[`optimizing-compression-task-timeout`]: #setting-optimizing-compression-task-timeout

Setting `allowed-clock-drift-for-files-from-future`
------------------
- **type**: string (duration)
- **format**: `"{integer}(s | m | h | d)"`
- **default**: `"1d"`

### Locks
When the [cache worker] attempts to acquire a lock for some task,
it checks if some other worker has already acquired such a lock.
To be fault tolerant and eventually execute every task,
the locks expire after some interval.
However, because of clock drift and different timezones,
it can happen that some lock appears to have been created in the future.
This setting defines a tolerance limit for these locks.
If the system time has been changed (e.g. moved two years backwards),
the [cache system] should still work properly.
Thus, these locks will be treated as expired
(assuming the tolerance is not too big).

### Cache files
Similarly to the locks, the cache files or their metadata might
have a modification time in the distant future.
The cache system tries to keep these files as long as possible.
If the limits are not reached, the cache files will not be deleted.
Otherwise, they will be treated as the oldest files, so they might survive.
If the user actually uses the cache file, the modification time will be updated.

[`allowed-clock-drift-for-files-from-future`]: #setting-allowed-clock-drift-for-files-from-future
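
If the cache directory lives on a network share where clocks may disagree (as in the
NFS example above), the tolerance can be widened; the value below is illustrative only:

```toml
[cache]
enabled = true
directory = "/nfs-share/wasmtime-cache/"
# Tolerate locks and cache files stamped up to 2 days in the future.
allowed-clock-drift-for-files-from-future = "2d"
```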

Setting `file-count-soft-limit`
------------------
- **type**: string (SI prefix)
- **format**: `"{integer}(K | M | G | T | P)?"`
- **default**: `"65536"`

Soft limit for the file count in the cache directory.

This doesn't include metadata files.
To learn more, please refer to the [cache system] section.

[`file-count-soft-limit`]: #setting-file-count-soft-limit

Setting `files-total-size-soft-limit`
------------------
- **type**: string (disk space)
- **format**: `"{integer}(K | Ki | M | Mi | G | Gi | T | Ti | P | Pi)?"`
- **default**: `"512Mi"`

Soft limit for the total size* of files in the cache directory.

This doesn't include metadata files.
To learn more, please refer to the [cache system] section.

*This is the file size, not the space physically occupied on the disk.

[`files-total-size-soft-limit`]: #setting-files-total-size-soft-limit
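
Decimal and binary prefixes differ: `"1G"` is 1,000,000,000 bytes, while `"1Gi"` is
2^30 = 1,073,741,824 bytes. The chosen value below is only an illustration:

```toml
[cache]
enabled = true
# "1G"  = 10^9 bytes (decimal prefix)
# "1Gi" = 2^30 bytes (binary prefix)
files-total-size-soft-limit = "1Gi"
```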

Setting `file-count-limit-percent-if-deleting`
------------------
- **type**: string (percent)
- **format**: `"{integer}%"`
- **default**: `"70%"`

If [`file-count-soft-limit`] is exceeded and the [cache worker] performs the cleanup task,
then the worker will delete some cache files, so that after the task
the file count does not exceed
[`file-count-soft-limit`] * [`file-count-limit-percent-if-deleting`].

This doesn't include metadata files.
To learn more, please refer to the [cache system] section.

[`file-count-limit-percent-if-deleting`]: #setting-file-count-limit-percent-if-deleting
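
With the defaults, a cleanup triggered by the file count keeps at most
65536 * 70%, i.e. about 45875 recognized cache files. The values below are only an
illustration of keeping more files after a cleanup:

```toml
[cache]
enabled = true
file-count-soft-limit = "65536"
# Keep at most 90% of the soft limit after a cleanup,
# i.e. about 58982 recognized cache files.
file-count-limit-percent-if-deleting = "90%"
```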

Setting `files-total-size-limit-percent-if-deleting`
------------------
- **type**: string (percent)
- **format**: `"{integer}%"`
- **default**: `"70%"`

If [`files-total-size-soft-limit`] is exceeded and the [cache worker] performs the cleanup task,
then the worker will delete some cache files, so that after the task
the total size of the files does not exceed
[`files-total-size-soft-limit`] * [`files-total-size-limit-percent-if-deleting`].

This doesn't include metadata files.
To learn more, please refer to the [cache system] section.

[`files-total-size-limit-percent-if-deleting`]: #setting-files-total-size-limit-percent-if-deleting

[toml]: https://github.com/toml-lang/toml
[directories]: https://crates.io/crates/directories
[cache system]: #how-does-the-cache-work
[cache worker]: #how-does-the-cache-work
[zstd]: https://facebook.github.io/zstd/
[Least Recently Used (LRU)]: https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)

How does the cache work?
========================

**This is an implementation detail and might change in the future.**
The information provided here is meant to help you understand the big picture
and configure the cache.

There are two main components - the *cache system* and the *cache worker*.

Cache system
------------

The cache system handles GET and UPDATE cache requests.
- **GET request** - simply loads the cache from disk if it is there.
- **UPDATE request** - compresses received data with [zstd] and [`baseline-compression-level`], then writes the data to the disk.

When a request is handled successfully, the cache system notifies the *cache worker*
about the event using a queue.
The queue has a limited size of [`worker-event-queue-size`]. If it is full, it will drop
new events until the *cache worker* pops some event from the queue.

Cache worker
------------

The cache worker runs in a single thread with lower priority and pops events from the queue
in a loop, handling them one by one.

### On GET request
1. Read the statistics file for the cache file,
   increase the usage counter and write it back to the disk.
2. Attempt recompressing the cache file if all of the following conditions are met:
    - the usage counter exceeds [`optimized-compression-usage-counter-threshold`],
    - the file is compressed with a compression level lower than [`optimized-compression-level`],
    - no other worker has started working on this particular task within the last
      [`optimizing-compression-task-timeout`] interval.

   When recompressing, [`optimized-compression-level`] is used as the compression level
   (see the example configuration below).
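
These settings work together; a sketch of making recompression more aggressive
(all values are illustrative):

```toml
[cache]
enabled = true
# Recompress once a file has been used more than 64 times...
optimized-compression-usage-counter-threshold = "64"
# ...to a level stronger than the write-time baseline...
baseline-compression-level = 3
optimized-compression-level = 19
# ...unless another worker claimed the task within the last 10 minutes.
optimizing-compression-task-timeout = "10m"
```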

### On UPDATE request
1. Write a fresh statistics file for the cache file.
2. Clean up the cache if no worker has attempted to do this within the last [`cleanup-interval`].
   During this task (an example configuration follows this list):
    - all unrecognized files and expired task locks in the cache directory will be deleted,
    - if [`file-count-soft-limit`] or [`files-total-size-soft-limit`] is exceeded,
      then recognized files will be deleted according to
      [`file-count-limit-percent-if-deleting`] and [`files-total-size-limit-percent-if-deleting`].
      Wasmtime uses the [Least Recently Used (LRU)] cache replacement policy and requires that
      the filesystem maintains proper mtime (modification time) of the files.
      Files with future mtimes are treated specially; see
      [`allowed-clock-drift-for-files-from-future`] for details.
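
A configuration sketch for the cleanup behaviour described above (all values are
illustrative):

```toml
[cache]
enabled = true
# Attempt a cleanup at most once per 30 minutes...
cleanup-interval = "30m"
# ...when more than 10000 files or 256 MiB of cache data accumulate...
file-count-soft-limit = "10K"
files-total-size-soft-limit = "256Mi"
# ...and prune down to 70% of each soft limit, oldest (LRU) files first.
file-count-limit-percent-if-deleting = "70%"
files-total-size-limit-percent-if-deleting = "70%"
```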

### Metadata files
- every cached WebAssembly module has its own statistics file
- every lock is a file

@@ -71,7 +71,7 @@ steps:
|
||||
if [ "$AGENT_OS" = "Windows_NT" ]; then
|
||||
ext=.exe
|
||||
fi
|
||||
cp LICENSE README.md target/release/{wasmtime,wasm2obj}$ext $BUILD_BINARIESDIRECTORY/$BASENAME
|
||||
cp LICENSE README.md CACHE_CONFIGURATION.md target/release/{wasmtime,wasm2obj}$ext $BUILD_BINARIESDIRECTORY/$BASENAME
|
||||
displayName: Copy binaries
|
||||
|
||||
- bash: |
|
||||
|
||||
@@ -58,6 +58,9 @@
|
||||
<Component Id="README" Guid="*">
|
||||
<File Id="README.md" Source="README.md" KeyPath="yes" Checksum="yes"/>
|
||||
</Component>
|
||||
<Component Id="CACHE_CONFIGURATION" Guid="*">
|
||||
<File Id="CACHE_CONFIGURATION.md" Source="CACHE_CONFIGURATION.md" KeyPath="yes" Checksum="yes"/>
|
||||
</Component>
|
||||
</DirectoryRef>
|
||||
|
||||
<DirectoryRef Id="BINDIR">
|
||||
@@ -74,6 +77,7 @@
|
||||
<ComponentRef Id="wasm2obj.exe" />
|
||||
<ComponentRef Id="LICENSE" />
|
||||
<ComponentRef Id="README" />
|
||||
<ComponentRef Id="CACHE_CONFIGURATION" />
|
||||
<ComponentRef Id="InstallDir" />
|
||||
</Feature>
|
||||
<Feature Id="AddToPath"
|
||||
|
||||
@@ -50,7 +50,7 @@ use std::str;
|
||||
use std::str::FromStr;
|
||||
use target_lexicon::Triple;
|
||||
use wasmtime_debug::{emit_debugsections, read_debuginfo};
|
||||
use wasmtime_environ::cache_init;
|
||||
use wasmtime_environ::{cache_create_new_config, cache_init};
|
||||
use wasmtime_environ::{
|
||||
Compiler, Cranelift, ModuleEnvironment, ModuleVmctxInfo, Tunables, VMOffsets,
|
||||
};
|
||||
@@ -63,7 +63,8 @@ The translation is dependent on the environment chosen.
|
||||
The default is a dummy environment that produces placeholder values.
|
||||
|
||||
Usage:
|
||||
wasm2obj [--target TARGET] [-Odg] [--cache | --cache-config=<cache_config_file>] [--create-cache-config] [--enable-simd] <file> -o <output>
|
||||
wasm2obj [--target TARGET] [-Odg] [--cache | --cache-config=<cache_config_file>] [--enable-simd] <file> -o <output>
|
||||
wasm2obj --create-cache-config [--cache-config=<cache_config_file>]
|
||||
wasm2obj --help | --version
|
||||
|
||||
Options:
|
||||
@@ -73,10 +74,12 @@ Options:
|
||||
-g generate debug information
|
||||
-c, --cache enable caching system, use default configuration
|
||||
--cache-config=<cache_config_file>
|
||||
enable caching system, use specified cache configuration
|
||||
enable caching system, use specified cache configuration;
|
||||
can be used with --create-cache-config to specify custom file
|
||||
--create-cache-config
|
||||
used with --cache or --cache-config, creates default configuration and writes it to the disk,
|
||||
will fail if specified file already exists (or default file if used with --cache)
|
||||
creates default configuration and writes it to the disk,
|
||||
use with --cache-config to specify custom config file
|
||||
instead of default one
|
||||
--enable-simd enable proposed SIMD instructions
|
||||
-O, --optimize runs optimization passes on the translated functions
|
||||
--version print the Cranelift version
|
||||
@@ -91,7 +94,7 @@ struct Args {
|
||||
flag_g: bool,
|
||||
flag_debug: bool,
|
||||
flag_cache: bool, // TODO change to disable cache after implementing cache eviction
|
||||
flag_cache_config_file: Option<String>,
|
||||
flag_cache_config: Option<String>,
|
||||
flag_create_cache_config: bool,
|
||||
flag_enable_simd: bool,
|
||||
flag_optimize: bool,
|
||||
@@ -123,10 +126,25 @@ fn main() {
|
||||
Some(prefix)
|
||||
};
|
||||
|
||||
if args.flag_create_cache_config {
|
||||
match cache_create_new_config(args.flag_cache_config) {
|
||||
Ok(path) => {
|
||||
println!(
|
||||
"Successfully created new configuation file at {}",
|
||||
path.display()
|
||||
);
|
||||
return;
|
||||
}
|
||||
Err(err) => {
|
||||
eprintln!("Error: {}", err);
|
||||
process::exit(1);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let errors = cache_init(
|
||||
args.flag_cache || args.flag_cache_config_file.is_some(),
|
||||
args.flag_cache_config_file.as_ref(),
|
||||
args.flag_create_cache_config,
|
||||
args.flag_cache || args.flag_cache_config.is_some(),
|
||||
args.flag_cache_config.as_ref(),
|
||||
log_config,
|
||||
);
|
||||
|
||||
|
||||
@@ -45,7 +45,7 @@ use std::process::exit;
|
||||
use wabt;
|
||||
use wasi_common::preopen_dir;
|
||||
use wasmtime_api::{Config, Engine, HostRef, Instance, Module, Store};
|
||||
use wasmtime_environ::cache_init;
|
||||
use wasmtime_environ::{cache_create_new_config, cache_init};
|
||||
use wasmtime_interface_types::ModuleData;
|
||||
use wasmtime_jit::Features;
|
||||
use wasmtime_wasi::instantiate_wasi;
|
||||
@@ -62,8 +62,9 @@ including calling the start function if one is present. Additional functions
|
||||
given with --invoke are then called.
|
||||
|
||||
Usage:
|
||||
wasmtime [-odg] [--enable-simd] [--wasi-c] [--cache | --cache-config=<cache_config_file>] [--create-cache-config] [--preload=<wasm>...] [--env=<env>...] [--dir=<dir>...] [--mapdir=<mapping>...] <file> [<arg>...]
|
||||
wasmtime [-odg] [--enable-simd] [--wasi-c] [--cache | --cache-config=<cache_config_file>] [--create-cache-config] [--env=<env>...] [--dir=<dir>...] [--mapdir=<mapping>...] --invoke=<fn> <file> [<arg>...]
|
||||
wasmtime [-odg] [--enable-simd] [--wasi-c] [--cache | --cache-config=<cache_config_file>] [--preload=<wasm>...] [--env=<env>...] [--dir=<dir>...] [--mapdir=<mapping>...] <file> [<arg>...]
|
||||
wasmtime [-odg] [--enable-simd] [--wasi-c] [--cache | --cache-config=<cache_config_file>] [--env=<env>...] [--dir=<dir>...] [--mapdir=<mapping>...] --invoke=<fn> <file> [<arg>...]
|
||||
wasmtime --create-cache-config [--cache-config=<cache_config_file>]
|
||||
wasmtime --help | --version
|
||||
|
||||
Options:
|
||||
@@ -71,10 +72,12 @@ Options:
|
||||
-o, --optimize runs optimization passes on the translated functions
|
||||
-c, --cache enable caching system, use default configuration
|
||||
--cache-config=<cache_config_file>
|
||||
enable caching system, use specified cache configuration
|
||||
enable caching system, use specified cache configuration;
|
||||
can be used with --create-cache-config to specify custom file
|
||||
--create-cache-config
|
||||
used with --cache or --cache-config, creates default configuration and writes it to the disk,
|
||||
will fail if specified file already exists (or default file if used with --cache)
|
||||
creates default configuration and writes it to the disk,
|
||||
use with --cache-config to specify custom config file
|
||||
instead of default one
|
||||
-g generate debug information
|
||||
-d, --debug enable debug output on stderr/stdout
|
||||
--enable-simd enable proposed SIMD instructions
|
||||
@@ -94,7 +97,7 @@ struct Args {
|
||||
arg_arg: Vec<String>,
|
||||
flag_optimize: bool,
|
||||
flag_cache: bool, // TODO change to disable cache after implementing cache eviction
|
||||
flag_cache_config_file: Option<String>,
|
||||
flag_cache_config: Option<String>,
|
||||
flag_create_cache_config: bool,
|
||||
flag_debug: bool,
|
||||
flag_g: bool,
|
||||
@@ -222,10 +225,25 @@ fn rmain() -> Result<(), Error> {
|
||||
Some(prefix)
|
||||
};
|
||||
|
||||
if args.flag_create_cache_config {
|
||||
match cache_create_new_config(args.flag_cache_config) {
|
||||
Ok(path) => {
|
||||
println!(
|
||||
"Successfully created new configuation file at {}",
|
||||
path.display()
|
||||
);
|
||||
return Ok(());
|
||||
}
|
||||
Err(err) => {
|
||||
eprintln!("Error: {}", err);
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let errors = cache_init(
|
||||
args.flag_cache || args.flag_cache_config_file.is_some(),
|
||||
args.flag_cache_config_file.as_ref(),
|
||||
args.flag_create_cache_config,
|
||||
args.flag_cache || args.flag_cache_config.is_some(),
|
||||
args.flag_cache_config.as_ref(),
|
||||
log_config,
|
||||
);
|
||||
|
||||
|
||||
@@ -33,7 +33,7 @@ use pretty_env_logger;
|
||||
use serde::Deserialize;
|
||||
use std::path::Path;
|
||||
use std::process;
|
||||
use wasmtime_environ::cache_init;
|
||||
use wasmtime_environ::{cache_create_new_config, cache_init};
|
||||
use wasmtime_jit::{Compiler, Features};
|
||||
use wasmtime_wast::WastContext;
|
||||
|
||||
@@ -41,7 +41,8 @@ const USAGE: &str = "
|
||||
Wast test runner.
|
||||
|
||||
Usage:
|
||||
wast [-do] [--enable-simd] [--cache | --cache-config=<cache_config_file>] [--create-cache-config] <file>...
|
||||
wast [-do] [--enable-simd] [--cache | --cache-config=<cache_config_file>] <file>...
|
||||
wast --create-cache-config [--cache-config=<cache_config_file>]
|
||||
wast --help | --version
|
||||
|
||||
Options:
|
||||
@@ -50,10 +51,12 @@ Options:
|
||||
-o, --optimize runs optimization passes on the translated functions
|
||||
-c, --cache enable caching system, use default configuration
|
||||
--cache-config=<cache_config_file>
|
||||
enable caching system, use specified cache configuration
|
||||
enable caching system, use specified cache configuration;
|
||||
can be used with --create-cache-config to specify custom file
|
||||
--create-cache-config
|
||||
used with --cache or --cache-config, creates default configuration and writes it to the disk,
|
||||
will fail if specified file already exists (or default file if used with --cache)
|
||||
creates default configuration and writes it to the disk,
|
||||
use with --cache-config to specify custom config file
|
||||
instead of default one
|
||||
-d, --debug enable debug output on stderr/stdout
|
||||
--enable-simd enable proposed SIMD instructions
|
||||
";
|
||||
@@ -65,7 +68,7 @@ struct Args {
|
||||
flag_function: Option<String>,
|
||||
flag_optimize: bool,
|
||||
flag_cache: bool, // TODO change to disable cache after implementing cache eviction
|
||||
flag_cache_config_file: Option<String>,
|
||||
flag_cache_config: Option<String>,
|
||||
flag_create_cache_config: bool,
|
||||
flag_enable_simd: bool,
|
||||
}
|
||||
@@ -89,10 +92,25 @@ fn main() {
|
||||
Some(prefix)
|
||||
};
|
||||
|
||||
if args.flag_create_cache_config {
|
||||
match cache_create_new_config(args.flag_cache_config) {
|
||||
Ok(path) => {
|
||||
println!(
|
||||
"Successfully created new configuation file at {}",
|
||||
path.display()
|
||||
);
|
||||
return;
|
||||
}
|
||||
Err(err) => {
|
||||
eprintln!("Error: {}", err);
|
||||
process::exit(1);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let errors = cache_init(
|
||||
args.flag_cache || args.flag_cache_config_file.is_some(),
|
||||
args.flag_cache_config_file.as_ref(),
|
||||
args.flag_create_cache_config,
|
||||
args.flag_cache || args.flag_cache_config.is_some(),
|
||||
args.flag_cache_config.as_ref(),
|
||||
log_config,
|
||||
);
|
||||
|
||||
|
||||
@@ -18,8 +18,8 @@ use std::string::{String, ToString};
|
||||
mod config;
|
||||
mod worker;
|
||||
|
||||
pub use config::init;
|
||||
use config::{cache_config, CacheConfig};
|
||||
pub use config::{create_new_config, init};
|
||||
use worker::worker;
|
||||
|
||||
lazy_static! {
|
||||
|
||||
wasmtime-environ/src/cache/config.rs (vendored, 430 changed lines)
@@ -4,7 +4,10 @@ use super::worker;
|
||||
use directories::ProjectDirs;
|
||||
use lazy_static::lazy_static;
|
||||
use log::{debug, error, trace, warn};
|
||||
use serde::{de::Deserializer, ser::Serializer, Deserialize, Serialize};
|
||||
use serde::{
|
||||
de::{self, Deserializer},
|
||||
Deserialize,
|
||||
};
|
||||
use spin::Once;
|
||||
use std::fmt::Debug;
|
||||
use std::fs;
|
||||
@@ -17,76 +20,80 @@ use std::vec::Vec;
|
||||
|
||||
// wrapped, so we have named section in config,
|
||||
// also, for possible future compatibility
|
||||
#[derive(Serialize, Deserialize, Debug)]
|
||||
#[derive(Deserialize, Debug)]
|
||||
#[serde(deny_unknown_fields)]
|
||||
struct Config {
|
||||
cache: CacheConfig,
|
||||
}
|
||||
|
||||
// todo: markdown documentation of these options (name, format, default, explanation)
|
||||
// todo: don't flush default values (create config from simple template + url to docs)
|
||||
// todo: more user-friendly cache config creation
|
||||
#[derive(Serialize, Deserialize, Debug, Clone)]
|
||||
#[derive(Deserialize, Debug, Clone)]
|
||||
#[serde(deny_unknown_fields)]
|
||||
pub struct CacheConfig {
|
||||
#[serde(skip)]
|
||||
errors: Vec<String>,
|
||||
|
||||
enabled: bool,
|
||||
directory: Option<PathBuf>,
|
||||
#[serde(rename = "worker-event-queue-size")]
|
||||
worker_event_queue_size: Option<usize>,
|
||||
#[serde(
|
||||
default,
|
||||
rename = "worker-event-queue-size",
|
||||
deserialize_with = "deserialize_si_prefix"
|
||||
)]
|
||||
worker_event_queue_size: Option<u64>,
|
||||
#[serde(rename = "baseline-compression-level")]
|
||||
baseline_compression_level: Option<i32>,
|
||||
#[serde(rename = "optimized-compression-level")]
|
||||
optimized_compression_level: Option<i32>,
|
||||
#[serde(rename = "optimized-compression-usage-counter-threshold")]
|
||||
#[serde(
|
||||
default,
|
||||
rename = "optimized-compression-usage-counter-threshold",
|
||||
deserialize_with = "deserialize_si_prefix"
|
||||
)]
|
||||
optimized_compression_usage_counter_threshold: Option<u64>,
|
||||
#[serde(
|
||||
default,
|
||||
rename = "cleanup-interval-in-seconds",
|
||||
serialize_with = "serialize_duration",
|
||||
rename = "cleanup-interval",
|
||||
deserialize_with = "deserialize_duration"
|
||||
)] // todo unit?
|
||||
)]
|
||||
cleanup_interval: Option<Duration>,
|
||||
#[serde(
|
||||
default,
|
||||
rename = "optimizing-compression-task-timeout-in-seconds",
|
||||
serialize_with = "serialize_duration",
|
||||
rename = "optimizing-compression-task-timeout",
|
||||
deserialize_with = "deserialize_duration"
|
||||
)] // todo unit?
|
||||
)]
|
||||
optimizing_compression_task_timeout: Option<Duration>,
|
||||
#[serde(
|
||||
default,
|
||||
rename = "allowed-clock-drift-for-locks-from-future",
|
||||
serialize_with = "serialize_duration",
|
||||
rename = "allowed-clock-drift-for-files-from-future",
|
||||
deserialize_with = "deserialize_duration"
|
||||
)] // todo unit?
|
||||
allowed_clock_drift_for_locks_from_future: Option<Duration>,
|
||||
#[serde(rename = "files-count-soft-limit")]
|
||||
files_count_soft_limit: Option<u64>,
|
||||
#[serde(rename = "files-total-size-soft-limit")]
|
||||
files_total_size_soft_limit: Option<u64>, // todo unit?
|
||||
#[serde(rename = "files-count-limit-percent-if-deleting")]
|
||||
files_count_limit_percent_if_deleting: Option<u8>, // todo format: integer + %
|
||||
#[serde(rename = "files-total-size-limit-percent-if-deleting")]
|
||||
)]
|
||||
allowed_clock_drift_for_files_from_future: Option<Duration>,
|
||||
#[serde(
|
||||
default,
|
||||
rename = "file-count-soft-limit",
|
||||
deserialize_with = "deserialize_si_prefix"
|
||||
)]
|
||||
file_count_soft_limit: Option<u64>,
|
||||
#[serde(
|
||||
default,
|
||||
rename = "files-total-size-soft-limit",
|
||||
deserialize_with = "deserialize_disk_space"
|
||||
)]
|
||||
files_total_size_soft_limit: Option<u64>,
|
||||
#[serde(
|
||||
default,
|
||||
rename = "file-count-limit-percent-if-deleting",
|
||||
deserialize_with = "deserialize_percent"
|
||||
)]
|
||||
file_count_limit_percent_if_deleting: Option<u8>,
|
||||
#[serde(
|
||||
default,
|
||||
rename = "files-total-size-limit-percent-if-deleting",
|
||||
deserialize_with = "deserialize_percent"
|
||||
)]
|
||||
files_total_size_limit_percent_if_deleting: Option<u8>,
|
||||
}
|
||||
|
||||
// toml-rs fails to serialize Duration ("values must be emitted before tables")
|
||||
// so we're providing custom functions for it
|
||||
fn serialize_duration<S>(duration: &Option<Duration>, serializer: S) -> Result<S::Ok, S::Error>
|
||||
where
|
||||
S: Serializer,
|
||||
{
|
||||
duration.map(|d| d.as_secs()).serialize(serializer)
|
||||
}
|
||||
|
||||
fn deserialize_duration<'de, D>(deserializer: D) -> Result<Option<Duration>, D::Error>
|
||||
where
|
||||
D: Deserializer<'de>,
|
||||
{
|
||||
Ok(Option::<u64>::deserialize(deserializer)?.map(Duration::from_secs))
|
||||
}
|
||||
|
||||
// Private static, so only internal function can access it.
|
||||
static CONFIG: Once<CacheConfig> = Once::new();
|
||||
static INIT_CALLED: AtomicBool = AtomicBool::new(false);
|
||||
@@ -105,7 +112,6 @@ pub fn cache_config() -> &'static CacheConfig {
|
||||
pub fn init<P: AsRef<Path> + Debug>(
|
||||
enabled: bool,
|
||||
config_file: Option<P>,
|
||||
create_new_config: bool,
|
||||
init_file_per_thread_logger: Option<&'static str>,
|
||||
) -> &'static Vec<String> {
|
||||
INIT_CALLED
|
||||
@@ -116,9 +122,9 @@ pub fn init<P: AsRef<Path> + Debug>(
|
||||
"Cache system init must be called before using the system."
|
||||
);
|
||||
let conf_file_str = format!("{:?}", config_file);
|
||||
let conf = CONFIG.call_once(|| CacheConfig::from_file(enabled, config_file, create_new_config));
|
||||
let conf = CONFIG.call_once(|| CacheConfig::from_file(enabled, config_file));
|
||||
if conf.errors.is_empty() {
|
||||
if conf.enabled {
|
||||
if conf.enabled() {
|
||||
worker::init(init_file_per_thread_logger);
|
||||
}
|
||||
debug!("Cache init(\"{}\"): {:#?}", conf_file_str, conf)
|
||||
@@ -131,28 +137,184 @@ pub fn init<P: AsRef<Path> + Debug>(
|
||||
&conf.errors
|
||||
}
|
||||
|
||||
/// Creates a new configuration file at specified path, or default path if None is passed.
|
||||
/// Fails if file already exists.
|
||||
pub fn create_new_config<P: AsRef<Path> + Debug>(
|
||||
config_file: Option<P>,
|
||||
) -> Result<PathBuf, String> {
|
||||
trace!("Creating new config file, path: {:?}", config_file);
|
||||
|
||||
let config_file = config_file.as_ref().map_or_else(
|
||||
|| DEFAULT_CONFIG_PATH.as_ref().map(|p| p.as_ref()),
|
||||
|p| Ok(p.as_ref()),
|
||||
)?;
|
||||
|
||||
if config_file.exists() {
|
||||
Err(format!(
|
||||
"Specified config file already exists! Path: {}",
|
||||
config_file.display()
|
||||
))?;
|
||||
}
|
||||
|
||||
let parent_dir = config_file
|
||||
.parent()
|
||||
.ok_or_else(|| format!("Invalid cache config path: {}", config_file.display()))?;
|
||||
|
||||
fs::create_dir_all(parent_dir).map_err(|err| {
|
||||
format!(
|
||||
"Failed to create config directory, config path: {}, error: {}",
|
||||
config_file.display(),
|
||||
err
|
||||
)
|
||||
})?;
|
||||
|
||||
let content = "\
|
||||
# Comment out certain settings to use default values.
|
||||
# For more settings, please refer to the documentation:
|
||||
# https://github.com/CraneStation/wasmtime/blob/master/CACHE_CONFIGURATION.md
|
||||
|
||||
[cache]
|
||||
enabled = true
|
||||
";
|
||||
|
||||
fs::write(&config_file, &content).map_err(|err| {
|
||||
format!(
|
||||
"Failed to flush config to the disk, path: {}, msg: {}",
|
||||
config_file.display(),
|
||||
err
|
||||
)
|
||||
})?;
|
||||
|
||||
Ok(config_file.to_path_buf())
|
||||
}
|
||||
|
||||
// permitted levels from: https://docs.rs/zstd/0.4.28+zstd.1.4.3/zstd/stream/write/struct.Encoder.html
|
||||
const ZSTD_COMPRESSION_LEVELS: std::ops::RangeInclusive<i32> = 0..=21;
|
||||
lazy_static! {
|
||||
static ref PROJECT_DIRS: Option<ProjectDirs> =
|
||||
ProjectDirs::from("", "CraneStation", "wasmtime");
|
||||
static ref DEFAULT_CONFIG_PATH: Result<PathBuf, String> = PROJECT_DIRS
|
||||
.as_ref()
|
||||
.map(|proj_dirs| proj_dirs.config_dir().join("wasmtime-cache-config.toml"))
|
||||
.ok_or("Config file not specified and failed to get the default".to_string());
|
||||
}
|
||||
// TODO: values to be tuned
|
||||
|
||||
// Default settings, you're welcome to tune them!
|
||||
// TODO: what do we want to warn users about?
|
||||
const DEFAULT_WORKER_EVENT_QUEUE_SIZE: usize = 0x10;
|
||||
const WORKER_EVENT_QUEUE_SIZE_WARNING_TRESHOLD: usize = 3;
|
||||
|
||||
// At the moment of writing, the modules couldn't depend on one another,
|
||||
// so we have at most one module per wasmtime instance
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_WORKER_EVENT_QUEUE_SIZE: u64 = 0x10;
|
||||
const WORKER_EVENT_QUEUE_SIZE_WARNING_TRESHOLD: u64 = 3;
|
||||
// should be quick and provide good enough compression
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_BASELINE_COMPRESSION_LEVEL: i32 = zstd::DEFAULT_COMPRESSION_LEVEL;
|
||||
// should provide significantly better compression than baseline
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_OPTIMIZED_COMPRESSION_LEVEL: i32 = 20;
|
||||
// shouldn't be too low, to avoid recompressing too many files
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_OPTIMIZED_COMPRESSION_USAGE_COUNTER_THRESHOLD: u64 = 0x100;
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_CLEANUP_INTERVAL: Duration = Duration::from_secs(60 * 60);
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_OPTIMIZING_COMPRESSION_TASK_TIMEOUT: Duration = Duration::from_secs(30 * 60);
|
||||
const DEFAULT_ALLOWED_CLOCK_DRIFT_FOR_LOCKS_FROM_FUTURE: Duration =
|
||||
// the default assumes problems with timezone configuration on network share + some clock drift
|
||||
// please notice 24 timezones = max 23h difference between some of them
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_ALLOWED_CLOCK_DRIFT_FOR_FILES_FROM_FUTURE: Duration =
|
||||
Duration::from_secs(60 * 60 * 24);
|
||||
const DEFAULT_FILES_COUNT_SOFT_LIMIT: u64 = 0x10_000;
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_FILE_COUNT_SOFT_LIMIT: u64 = 0x10_000;
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_FILES_TOTAL_SIZE_SOFT_LIMIT: u64 = 1024 * 1024 * 512;
|
||||
const DEFAULT_FILES_COUNT_LIMIT_PERCENT_IF_DELETING: u8 = 70;
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_FILE_COUNT_LIMIT_PERCENT_IF_DELETING: u8 = 70;
|
||||
// if changed, update CACHE_CONFIGURATION.md
|
||||
const DEFAULT_FILES_TOTAL_SIZE_LIMIT_PERCENT_IF_DELETING: u8 = 70;
|
||||
|
||||
// Deserializers of our custom formats
|
||||
// can be replaced with const generics later
|
||||
macro_rules! generate_deserializer {
|
||||
($name:ident($numname:ident: $numty:ty, $unitname:ident: &str) -> $retty:ty {$body:expr}) => {
|
||||
fn $name<'de, D>(deserializer: D) -> Result<$retty, D::Error>
|
||||
where
|
||||
D: Deserializer<'de>,
|
||||
{
|
||||
let text = Option::<String>::deserialize(deserializer)?;
|
||||
let text = match text {
|
||||
None => return Ok(None),
|
||||
Some(text) => text,
|
||||
};
|
||||
let text = text.trim();
|
||||
let split_point = text.find(|c: char| !c.is_numeric());
|
||||
let (num, unit) = split_point.map_or_else(|| (text, ""), |p| text.split_at(p));
|
||||
let deserialized = (|| {
|
||||
let $numname = num.parse::<$numty>().ok()?;
|
||||
let $unitname = unit.trim();
|
||||
$body
|
||||
})();
|
||||
if deserialized.is_some() {
|
||||
Ok(deserialized)
|
||||
} else {
|
||||
Err(de::Error::custom(
|
||||
"Invalid value, please refer to the documentation",
|
||||
))
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
generate_deserializer!(deserialize_duration(num: u64, unit: &str) -> Option<Duration> {
|
||||
match unit {
|
||||
"s" => Some(Duration::from_secs(num)),
|
||||
"m" => Some(Duration::from_secs(num * 60)),
|
||||
"h" => Some(Duration::from_secs(num * 60 * 60)),
|
||||
"d" => Some(Duration::from_secs(num * 60 * 60 * 24)),
|
||||
_ => None,
|
||||
}
|
||||
});
|
||||
|
||||
generate_deserializer!(deserialize_si_prefix(num: u64, unit: &str) -> Option<u64> {
|
||||
match unit {
|
||||
"" => Some(num),
|
||||
"K" => num.checked_mul(1_000),
|
||||
"M" => num.checked_mul(1_000_000),
|
||||
"G" => num.checked_mul(1_000_000_000),
|
||||
"T" => num.checked_mul(1_000_000_000_000),
|
||||
"P" => num.checked_mul(1_000_000_000_000_000),
|
||||
_ => None,
|
||||
}
|
||||
});
|
||||
|
||||
generate_deserializer!(deserialize_disk_space(num: u64, unit: &str) -> Option<u64> {
|
||||
match unit {
|
||||
"" => Some(num),
|
||||
"K" => num.checked_mul(1_000),
|
||||
"Ki" => num.checked_mul(1u64 << 10),
|
||||
"M" => num.checked_mul(1_000_000),
|
||||
"Mi" => num.checked_mul(1u64 << 20),
|
||||
"G" => num.checked_mul(1_000_000_000),
|
||||
"Gi" => num.checked_mul(1u64 << 30),
|
||||
"T" => num.checked_mul(1_000_000_000_000),
|
||||
"Ti" => num.checked_mul(1u64 << 40),
|
||||
"P" => num.checked_mul(1_000_000_000_000_000),
|
||||
"Pi" => num.checked_mul(1u64 << 50),
|
||||
_ => None,
|
||||
}
|
||||
});
|
||||
|
||||
generate_deserializer!(deserialize_percent(num: u8, unit: &str) -> Option<u8> {
|
||||
match unit {
|
||||
"%" => Some(num),
|
||||
_ => None,
|
||||
}
|
||||
});
|
||||
|
||||
static CACHE_IMPROPER_CONFIG_ERROR_MSG: &'static str =
|
||||
"Cache system should be enabled and all settings must be validated or defaulted";
|
||||
|
||||
macro_rules! generate_setting_getter {
|
||||
($setting:ident: $setting_type:ty) => {
|
||||
/// Returns `$setting`.
|
||||
@@ -161,22 +323,22 @@ macro_rules! generate_setting_getter {
|
||||
pub fn $setting(&self) -> $setting_type {
|
||||
self
|
||||
.$setting
|
||||
.expect("All cache system settings must be validated or defaulted")
|
||||
.expect(CACHE_IMPROPER_CONFIG_ERROR_MSG)
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
impl CacheConfig {
|
||||
generate_setting_getter!(worker_event_queue_size: usize);
|
||||
generate_setting_getter!(worker_event_queue_size: u64);
|
||||
generate_setting_getter!(baseline_compression_level: i32);
|
||||
generate_setting_getter!(optimized_compression_level: i32);
|
||||
generate_setting_getter!(optimized_compression_usage_counter_threshold: u64);
|
||||
generate_setting_getter!(cleanup_interval: Duration);
|
||||
generate_setting_getter!(optimizing_compression_task_timeout: Duration);
|
||||
generate_setting_getter!(allowed_clock_drift_for_locks_from_future: Duration);
|
||||
generate_setting_getter!(files_count_soft_limit: u64);
|
||||
generate_setting_getter!(allowed_clock_drift_for_files_from_future: Duration);
|
||||
generate_setting_getter!(file_count_soft_limit: u64);
|
||||
generate_setting_getter!(files_total_size_soft_limit: u64);
|
||||
generate_setting_getter!(files_count_limit_percent_if_deleting: u8);
|
||||
generate_setting_getter!(file_count_limit_percent_if_deleting: u8);
|
||||
generate_setting_getter!(files_total_size_limit_percent_if_deleting: u8);
|
||||
|
||||
/// Returns true if and only if the cache is enabled.
|
||||
@@ -190,7 +352,7 @@ impl CacheConfig {
|
||||
pub fn directory(&self) -> &PathBuf {
|
||||
self.directory
|
||||
.as_ref()
|
||||
.expect("All cache system settings must be validated or defaulted")
|
||||
.expect(CACHE_IMPROPER_CONFIG_ERROR_MSG)
|
||||
}
|
||||
|
||||
pub fn new_cache_disabled() -> Self {
|
||||
@@ -204,10 +366,10 @@ impl CacheConfig {
|
||||
optimized_compression_usage_counter_threshold: None,
|
||||
cleanup_interval: None,
|
||||
optimizing_compression_task_timeout: None,
|
||||
allowed_clock_drift_for_locks_from_future: None,
|
||||
files_count_soft_limit: None,
|
||||
allowed_clock_drift_for_files_from_future: None,
|
||||
file_count_soft_limit: None,
|
||||
files_total_size_soft_limit: None,
|
||||
files_count_limit_percent_if_deleting: None,
|
||||
file_count_limit_percent_if_deleting: None,
|
||||
files_total_size_limit_percent_if_deleting: None,
|
||||
}
|
||||
}
|
||||
@@ -224,17 +386,12 @@ impl CacheConfig {
|
||||
conf
|
||||
}
|
||||
|
||||
pub fn from_file<P: AsRef<Path>>(
|
||||
enabled: bool,
|
||||
config_file: Option<P>,
|
||||
create_new_config: bool,
|
||||
) -> Self {
|
||||
pub fn from_file<P: AsRef<Path>>(enabled: bool, config_file: Option<P>) -> Self {
|
||||
if !enabled {
|
||||
return Self::new_cache_disabled();
|
||||
}
|
||||
|
||||
let (mut config, path_if_flush_to_disk) =
|
||||
match Self::load_and_parse_file(config_file, create_new_config) {
|
||||
let mut config = match Self::load_and_parse_file(config_file) {
|
||||
Ok(data) => data,
|
||||
Err(err) => return Self::new_cache_with_errors(vec![err]),
|
||||
};
|
||||
@@ -247,46 +404,30 @@ impl CacheConfig {
|
||||
config.validate_optimized_compression_usage_counter_threshold_or_default();
|
||||
config.validate_cleanup_interval_or_default();
|
||||
config.validate_optimizing_compression_task_timeout_or_default();
|
||||
config.validate_allowed_clock_drift_for_locks_from_future_or_default();
|
||||
config.validate_files_count_soft_limit_or_default();
|
||||
config.validate_allowed_clock_drift_for_files_from_future_or_default();
|
||||
config.validate_file_count_soft_limit_or_default();
|
||||
config.validate_files_total_size_soft_limit_or_default();
|
||||
config.validate_files_count_limit_percent_if_deleting_or_default();
|
||||
config.validate_file_count_limit_percent_if_deleting_or_default();
|
||||
config.validate_files_total_size_limit_percent_if_deleting_or_default();
|
||||
|
||||
path_if_flush_to_disk.map(|p| config.flush_to_disk(p));
|
||||
|
||||
config.disable_if_any_error();
|
||||
config
|
||||
}
|
||||
|
||||
fn load_and_parse_file<P: AsRef<Path>>(
|
||||
config_file: Option<P>,
|
||||
create_new_config: bool,
|
||||
) -> Result<(Self, Option<PathBuf>), String> {
|
||||
fn load_and_parse_file<P: AsRef<Path>>(config_file: Option<P>) -> Result<Self, String> {
|
||||
// get config file path
|
||||
let (config_file, user_custom_file) = match config_file {
|
||||
Some(p) => (PathBuf::from(p.as_ref()), true),
|
||||
None => match &*PROJECT_DIRS {
|
||||
Some(proj_dirs) => (
|
||||
proj_dirs.config_dir().join("wasmtime-cache-config.toml"),
|
||||
false,
|
||||
),
|
||||
None => Err("Config file not specified and failed to get the default".to_string())?,
|
||||
},
|
||||
};
|
||||
let (config_file, user_custom_file) = config_file.as_ref().map_or_else(
|
||||
|| DEFAULT_CONFIG_PATH.as_ref().map(|p| (p.as_ref(), false)),
|
||||
|p| Ok((p.as_ref(), true)),
|
||||
)?;
|
||||
|
||||
// read config, or create an empty one
|
||||
// read config, or use default one
|
||||
let entity_exists = config_file.exists();
|
||||
match (create_new_config, entity_exists, user_custom_file) {
|
||||
(true, true, _) => Err(format!(
|
||||
"Tried to create a new config, but given entity already exists, path: {}",
|
||||
config_file.display()
|
||||
)),
|
||||
(true, false, _) => Ok((Self::new_cache_enabled_template(), Some(config_file))),
|
||||
(false, false, false) => Ok((Self::new_cache_enabled_template(), None)),
|
||||
(false, _, _) => match fs::read(&config_file) {
|
||||
match (entity_exists, user_custom_file) {
|
||||
(false, false) => Ok(Self::new_cache_enabled_template()),
|
||||
_ => match fs::read(&config_file) {
|
||||
Ok(bytes) => match toml::from_slice::<Config>(&bytes[..]) {
|
||||
Ok(config) => Ok((config.cache, None)),
|
||||
Ok(config) => Ok(config.cache),
|
||||
Err(err) => Err(format!(
|
||||
"Failed to parse config file, path: {}, error: {}",
|
||||
config_file.display(),
|
||||
@@ -421,16 +562,16 @@ impl CacheConfig {
|
||||
}
|
||||
}
|
||||
|
||||
fn validate_allowed_clock_drift_for_locks_from_future_or_default(&mut self) {
|
||||
if self.allowed_clock_drift_for_locks_from_future.is_none() {
|
||||
self.allowed_clock_drift_for_locks_from_future =
|
||||
Some(DEFAULT_ALLOWED_CLOCK_DRIFT_FOR_LOCKS_FROM_FUTURE);
|
||||
fn validate_allowed_clock_drift_for_files_from_future_or_default(&mut self) {
|
||||
if self.allowed_clock_drift_for_files_from_future.is_none() {
|
||||
self.allowed_clock_drift_for_files_from_future =
|
||||
Some(DEFAULT_ALLOWED_CLOCK_DRIFT_FOR_FILES_FROM_FUTURE);
|
||||
}
|
||||
}
|
||||
|
||||
fn validate_files_count_soft_limit_or_default(&mut self) {
|
||||
if self.files_count_soft_limit.is_none() {
|
||||
self.files_count_soft_limit = Some(DEFAULT_FILES_COUNT_SOFT_LIMIT);
|
||||
fn validate_file_count_soft_limit_or_default(&mut self) {
|
||||
if self.file_count_soft_limit.is_none() {
|
||||
self.file_count_soft_limit = Some(DEFAULT_FILE_COUNT_SOFT_LIMIT);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -440,13 +581,13 @@ impl CacheConfig {
|
||||
}
|
||||
}
|
||||
|
||||
fn validate_files_count_limit_percent_if_deleting_or_default(&mut self) {
|
||||
if self.files_count_limit_percent_if_deleting.is_none() {
|
||||
self.files_count_limit_percent_if_deleting =
|
||||
Some(DEFAULT_FILES_COUNT_LIMIT_PERCENT_IF_DELETING);
|
||||
fn validate_file_count_limit_percent_if_deleting_or_default(&mut self) {
|
||||
if self.file_count_limit_percent_if_deleting.is_none() {
|
||||
self.file_count_limit_percent_if_deleting =
|
||||
Some(DEFAULT_FILE_COUNT_LIMIT_PERCENT_IF_DELETING);
|
||||
}
|
||||
|
||||
let percent = self.files_count_limit_percent_if_deleting.unwrap();
|
||||
let percent = self.file_count_limit_percent_if_deleting.unwrap();
|
||||
if percent > 100 {
|
||||
self.errors.push(format!(
|
||||
"Invalid files count limit percent if deleting: {} not in range 0-100%",
|
||||
@@ -470,66 +611,6 @@ impl CacheConfig {
|
||||
}
|
||||
}
|
||||
|
||||
fn flush_to_disk(&mut self, path: PathBuf) {
|
||||
if !self.errors.is_empty() {
|
||||
return;
|
||||
}
|
||||
|
||||
trace!(
|
||||
"Flushing cache config file to the disk, path: {}",
|
||||
path.display()
|
||||
);
|
||||
|
||||
let parent_dir = match path.parent() {
|
||||
Some(p) => p,
|
||||
None => {
|
||||
self.errors
|
||||
.push(format!("Invalid cache config path: {}", path.display()));
|
||||
return;
|
||||
}
|
||||
};
|
||||
|
||||
match fs::create_dir_all(parent_dir) {
|
||||
Ok(()) => (),
|
||||
Err(err) => {
|
||||
self.errors.push(format!(
|
||||
"Failed to create config directory, config path: {}, error: {}",
|
||||
path.display(),
|
||||
err
|
||||
));
|
||||
return;
|
||||
}
|
||||
};
|
||||
|
||||
let serialized = match self.exec_as_config(|config| toml::to_string_pretty(&config)) {
|
||||
Ok(data) => data,
|
||||
Err(err) => {
|
||||
self.errors.push(format!(
|
||||
"Failed to serialize config, (unused) path: {}, msg: {}",
|
||||
path.display(),
|
||||
err
|
||||
));
|
||||
return;
|
||||
}
|
||||
};
|
||||
|
||||
let header = "# Automatically generated with defaults.\n\
|
||||
# Comment out certain fields to use default values.\n\n";
|
||||
|
||||
let content = format!("{}{}", header, serialized);
|
||||
match fs::write(&path, &content) {
|
||||
Ok(()) => (),
|
||||
Err(err) => {
|
||||
self.errors.push(format!(
|
||||
"Failed to flush config to the disk, path: {}, msg: {}",
|
||||
path.display(),
|
||||
err
|
||||
));
|
||||
return;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn disable_if_any_error(&mut self) {
|
||||
if !self.errors.is_empty() {
|
||||
let mut conf = Self::new_cache_disabled();
|
||||
@@ -537,14 +618,7 @@ impl CacheConfig {
|
||||
mem::swap(&mut self.errors, &mut conf.errors);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn exec_as_config<T>(&mut self, closure: impl FnOnce(&mut Config) -> T) -> T {
|
||||
let mut config = Config {
|
||||
cache: CacheConfig::new_cache_disabled(),
|
||||
};
|
||||
mem::swap(self, &mut config.cache);
|
||||
let ret = closure(&mut config);
|
||||
mem::swap(self, &mut config.cache);
|
||||
ret
|
||||
}
|
||||
}
|
||||
#[cfg(test)]
|
||||
mod tests;
|
||||
|
||||
wasmtime-environ/src/cache/config/tests.rs (vendored, new file, 580 lines)
@@ -0,0 +1,580 @@
|
||||
use super::CacheConfig;
|
||||
use std::fs;
|
||||
use std::path::PathBuf;
|
||||
use std::time::Duration;
|
||||
use tempfile::{self, TempDir};
|
||||
|
||||
// note: config loading during validation creates cache directory to canonicalize its path,
|
||||
// that's why these function and macro always use custom cache directory
|
||||
// note: tempdir removes directory when being dropped, so we need to return it to the caller,
|
||||
// so the paths are valid
|
||||
fn test_prolog() -> (TempDir, PathBuf, PathBuf) {
|
||||
let temp_dir = tempfile::tempdir().expect("Can't create temporary directory");
|
||||
let cache_dir = temp_dir.path().join("cache-dir");
|
||||
let config_path = temp_dir.path().join("cache-config.toml");
|
||||
(temp_dir, cache_dir, config_path)
|
||||
}
|
||||
|
||||
macro_rules! load_config {
|
||||
($config_path:ident, $content_fmt:expr, $cache_dir:ident) => {{
|
||||
let config_path = &$config_path;
|
||||
let content = format!(
|
||||
$content_fmt,
|
||||
cache_dir = toml::to_string_pretty(&format!("{}", $cache_dir.display())).unwrap()
|
||||
);
|
||||
fs::write(config_path, content).expect("Failed to write test config file");
|
||||
CacheConfig::from_file(true, Some(config_path))
|
||||
}};
|
||||
}
|
||||
|
||||
// test without macros to test being disabled
|
||||
#[test]
|
||||
fn test_disabled() {
|
||||
let dir = tempfile::tempdir().expect("Can't create temporary directory");
|
||||
let config_path = dir.path().join("cache-config.toml");
|
||||
let config_content = "[cache]\n\
|
||||
enabled = true\n";
|
||||
fs::write(&config_path, config_content).expect("Failed to write test config file");
|
||||
let conf = CacheConfig::from_file(false, Some(&config_path));
|
||||
assert!(!conf.enabled());
|
||||
assert!(conf.errors.is_empty());
|
||||
|
||||
let config_content = "[cache]\n\
|
||||
enabled = false\n";
|
||||
fs::write(&config_path, config_content).expect("Failed to write test config file");
|
||||
let conf = CacheConfig::from_file(true, Some(&config_path));
|
||||
assert!(!conf.enabled());
|
||||
assert!(conf.errors.is_empty());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_unrecognized_settings() {
|
||||
let (_td, cd, cp) = test_prolog();
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"unrecognized-setting = 42\n\
|
||||
[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}",
|
||||
cd
|
||||
);
|
||||
assert!(!conf.enabled());
|
||||
assert!(!conf.errors.is_empty());
|
||||
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
unrecognized-setting = 42",
|
||||
cd
|
||||
);
|
||||
assert!(!conf.enabled());
|
||||
assert!(!conf.errors.is_empty());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_all_settings() {
|
||||
let (_td, cd, cp) = test_prolog();
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
worker-event-queue-size = '16'\n\
|
||||
baseline-compression-level = 3\n\
|
||||
optimized-compression-level = 20\n\
|
||||
optimized-compression-usage-counter-threshold = '256'\n\
|
||||
cleanup-interval = '1h'\n\
|
||||
optimizing-compression-task-timeout = '30m'\n\
|
||||
allowed-clock-drift-for-files-from-future = '1d'\n\
|
||||
file-count-soft-limit = '65536'\n\
|
||||
files-total-size-soft-limit = '512Mi'\n\
|
||||
file-count-limit-percent-if-deleting = '70%'\n\
|
||||
files-total-size-limit-percent-if-deleting = '70%'",
|
||||
cd
|
||||
);
|
||||
check_conf(&conf, &cd);
|
||||
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
// added some white spaces
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
worker-event-queue-size = ' 16\t'\n\
|
||||
baseline-compression-level = 3\n\
|
||||
optimized-compression-level =\t 20\n\
|
||||
optimized-compression-usage-counter-threshold = '256'\n\
|
||||
cleanup-interval = ' 1h'\n\
|
||||
optimizing-compression-task-timeout = '30 m'\n\
|
||||
allowed-clock-drift-for-files-from-future = '1\td'\n\
|
||||
file-count-soft-limit = '\t \t65536\t'\n\
|
||||
files-total-size-soft-limit = '512\t\t Mi '\n\
|
||||
file-count-limit-percent-if-deleting = '70\t%'\n\
|
||||
files-total-size-limit-percent-if-deleting = ' 70 %'",
|
||||
cd
|
||||
);
|
||||
check_conf(&conf, &cd);
|
||||
|
||||
fn check_conf(conf: &CacheConfig, cd: &PathBuf) {
|
||||
eprintln!("errors: {:#?}", conf.errors);
|
||||
assert!(conf.enabled());
|
||||
assert!(conf.errors.is_empty());
|
||||
assert_eq!(
|
||||
conf.directory(),
|
||||
&fs::canonicalize(cd).expect("canonicalize failed")
|
||||
);
|
||||
assert_eq!(conf.worker_event_queue_size(), 0x10);
|
||||
assert_eq!(conf.baseline_compression_level(), 3);
|
||||
assert_eq!(conf.optimized_compression_level(), 20);
|
||||
assert_eq!(conf.optimized_compression_usage_counter_threshold(), 0x100);
|
||||
assert_eq!(conf.cleanup_interval(), Duration::from_secs(60 * 60));
|
||||
assert_eq!(
|
||||
conf.optimizing_compression_task_timeout(),
|
||||
Duration::from_secs(30 * 60)
|
||||
);
|
||||
assert_eq!(
|
||||
conf.allowed_clock_drift_for_files_from_future(),
|
||||
Duration::from_secs(60 * 60 * 24)
|
||||
);
|
||||
assert_eq!(conf.file_count_soft_limit(), 0x10_000);
|
||||
assert_eq!(conf.files_total_size_soft_limit(), 512 * (1u64 << 20));
|
||||
assert_eq!(conf.file_count_limit_percent_if_deleting(), 70);
|
||||
assert_eq!(conf.files_total_size_limit_percent_if_deleting(), 70);
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_compression_level_settings() {
|
||||
let (_td, cd, cp) = test_prolog();
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
baseline-compression-level = 1\n\
|
||||
optimized-compression-level = 21",
|
||||
cd
|
||||
);
|
||||
assert!(conf.enabled());
|
||||
assert!(conf.errors.is_empty());
|
||||
assert_eq!(conf.baseline_compression_level(), 1);
|
||||
assert_eq!(conf.optimized_compression_level(), 21);
|
||||
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
baseline-compression-level = -1\n\
|
||||
optimized-compression-level = 21",
|
||||
cd
|
||||
);
|
||||
assert!(!conf.enabled());
|
||||
assert!(!conf.errors.is_empty());
|
||||
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
baseline-compression-level = 15\n\
|
||||
optimized-compression-level = 10",
|
||||
cd
|
||||
);
|
||||
assert!(!conf.enabled());
|
||||
assert!(!conf.errors.is_empty());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_si_prefix_settings() {
|
||||
let (_td, cd, cp) = test_prolog();
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
worker-event-queue-size = '42'\n\
|
||||
optimized-compression-usage-counter-threshold = '4K'\n\
|
||||
file-count-soft-limit = '3M'",
|
||||
cd
|
||||
);
|
||||
assert!(conf.enabled());
|
||||
assert!(conf.errors.is_empty());
|
||||
assert_eq!(conf.worker_event_queue_size(), 42);
|
||||
assert_eq!(conf.optimized_compression_usage_counter_threshold(), 4_000);
|
||||
assert_eq!(conf.file_count_soft_limit(), 3_000_000);
|
||||
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
worker-event-queue-size = '2G'\n\
|
||||
optimized-compression-usage-counter-threshold = '4444T'\n\
|
||||
file-count-soft-limit = '1P'",
|
||||
cd
|
||||
);
|
||||
assert!(conf.enabled());
|
||||
assert!(conf.errors.is_empty());
|
||||
assert_eq!(conf.worker_event_queue_size(), 2_000_000_000);
|
||||
assert_eq!(
|
||||
conf.optimized_compression_usage_counter_threshold(),
|
||||
4_444_000_000_000_000
|
||||
);
|
||||
assert_eq!(conf.file_count_soft_limit(), 1_000_000_000_000_000);
|
||||
|
||||
// different errors
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
worker-event-queue-size = '2g'",
|
||||
cd
|
||||
);
|
||||
assert!(!conf.enabled());
|
||||
assert!(!conf.errors.is_empty());
|
||||
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
file-count-soft-limit = 1",
|
||||
cd
|
||||
);
|
||||
assert!(!conf.enabled());
|
||||
assert!(!conf.errors.is_empty());
|
||||
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
file-count-soft-limit = '-31337'",
|
||||
cd
|
||||
);
|
||||
assert!(!conf.enabled());
|
||||
assert!(!conf.errors.is_empty());
|
||||
|
||||
let conf = load_config!(
|
||||
cp,
|
||||
"[cache]\n\
|
||||
enabled = true\n\
|
||||
directory = {cache_dir}\n\
|
||||
file-count-soft-limit = '3.14M'",
|
||||
cd
|
||||
);
|
||||
assert!(!conf.enabled());
|
||||
assert!(!conf.errors.is_empty());
|
||||
}

#[test]
fn test_disk_space_settings() {
    let (_td, cd, cp) = test_prolog();
    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-soft-limit = '76'",
        cd
    );
    assert!(conf.enabled());
    assert!(conf.errors.is_empty());
    assert_eq!(conf.files_total_size_soft_limit(), 76);

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-soft-limit = '42 Mi'",
        cd
    );
    assert!(conf.enabled());
    assert!(conf.errors.is_empty());
    assert_eq!(conf.files_total_size_soft_limit(), 42 * (1u64 << 20));

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-soft-limit = '2 Gi'",
        cd
    );
    assert!(conf.enabled());
    assert!(conf.errors.is_empty());
    assert_eq!(conf.files_total_size_soft_limit(), 2 * (1u64 << 30));

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-soft-limit = '31337 Ti'",
        cd
    );
    assert!(conf.enabled());
    assert!(conf.errors.is_empty());
    assert_eq!(conf.files_total_size_soft_limit(), 31337 * (1u64 << 40));

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-soft-limit = '7 Pi'",
        cd
    );
    assert!(conf.enabled());
    assert!(conf.errors.is_empty());
    assert_eq!(conf.files_total_size_soft_limit(), 7 * (1u64 << 50));

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-soft-limit = '7M'",
        cd
    );
    assert!(conf.enabled());
    assert!(conf.errors.is_empty());
    assert_eq!(conf.files_total_size_soft_limit(), 7_000_000);

    // different errors
    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-soft-limit = '7 mi'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-soft-limit = 1",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-soft-limit = '-31337'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-soft-limit = '3.14Ki'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());
}
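
// Format summary, inferred from the assertions above: disk-space settings
// accept a quoted integer with an optional suffix, with or without a space
// before it. The binary suffixes Ki/Mi/Gi/Ti/Pi are powers of 1024, while
// plain decimal suffixes such as M are powers of 1000:
//
//   files-total-size-soft-limit = '42 Mi'   # 42 * 1024^2
//   files-total-size-soft-limit = '7M'      # 7_000_000
//
// Lowercase suffixes ('7 mi'), bare TOML integers, negative values and
// fractional values ('3.14Ki') are rejected.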

#[test]
fn test_duration_settings() {
    let (_td, cd, cp) = test_prolog();
    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         cleanup-interval = '100s'\n\
         optimizing-compression-task-timeout = '3m'\n\
         allowed-clock-drift-for-files-from-future = '4h'",
        cd
    );
    assert!(conf.enabled());
    assert!(conf.errors.is_empty());
    assert_eq!(conf.cleanup_interval(), Duration::from_secs(100));
    assert_eq!(
        conf.optimizing_compression_task_timeout(),
        Duration::from_secs(3 * 60)
    );
    assert_eq!(
        conf.allowed_clock_drift_for_files_from_future(),
        Duration::from_secs(4 * 60 * 60)
    );

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         cleanup-interval = '2d'\n\
         optimizing-compression-task-timeout = '333 m'",
        cd
    );
    assert!(conf.enabled());
    assert!(conf.errors.is_empty());
    assert_eq!(
        conf.cleanup_interval(),
        Duration::from_secs(2 * 24 * 60 * 60)
    );
    assert_eq!(
        conf.optimizing_compression_task_timeout(),
        Duration::from_secs(333 * 60)
    );

    // different errors
    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         optimizing-compression-task-timeout = '333'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         optimizing-compression-task-timeout = 333",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         optimizing-compression-task-timeout = '10 M'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         optimizing-compression-task-timeout = '10 min'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         optimizing-compression-task-timeout = '-10s'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         optimizing-compression-task-timeout = '1.5m'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());
}
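
// Format summary, inferred from the assertions above: durations are quoted
// integers followed by a unit, with an optional space in between. The units
// exercised here are s (seconds), m (minutes), h (hours) and d (days):
//
//   cleanup-interval = '2d'
//   optimizing-compression-task-timeout = '333 m'
//
// A missing unit ('333'), a bare TOML integer, an unrecognized unit ('10 M',
// '10 min'), negative values ('-10s') and fractional values ('1.5m') are
// rejected.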

#[test]
fn test_percent_settings() {
    let (_td, cd, cp) = test_prolog();
    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         file-count-limit-percent-if-deleting = '62%'\n\
         files-total-size-limit-percent-if-deleting = '23 %'",
        cd
    );
    assert!(conf.enabled());
    assert!(conf.errors.is_empty());
    assert_eq!(conf.file_count_limit_percent_if_deleting(), 62);
    assert_eq!(conf.files_total_size_limit_percent_if_deleting(), 23);

    // different errors
    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-limit-percent-if-deleting = '23'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-limit-percent-if-deleting = '22.5%'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-limit-percent-if-deleting = '0.5'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-limit-percent-if-deleting = '-1%'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());

    let conf = load_config!(
        cp,
        "[cache]\n\
         enabled = true\n\
         directory = {cache_dir}\n\
         files-total-size-limit-percent-if-deleting = '101%'",
        cd
    );
    assert!(!conf.enabled());
    assert!(!conf.errors.is_empty());
}
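
// Format summary, inferred from the assertions above: percentages are quoted
// integers no greater than 100 followed by '%', with an optional space before
// the percent sign:
//
//   file-count-limit-percent-if-deleting = '62%'
//   files-total-size-limit-percent-if-deleting = '23 %'
//
// A missing '%' ('23', '0.5'), fractional values ('22.5%'), negative values
// ('-1%') and values above 100 ('101%') are rejected.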

wasmtime-environ/src/cache/tests.rs
@@ -40,7 +40,7 @@ fn test_write_read_cache() {
     );
     fs::write(&config_path, config_content).expect("Failed to write test config file");

-    let errors = init(true, Some(&config_path), false, None);
+    let errors = init(true, Some(&config_path), None);
     assert!(errors.is_empty());
     let cache_config = cache_config();
     assert!(cache_config.enabled());

wasmtime-environ/src/cache/worker.rs
@@ -69,6 +69,7 @@ pub(super) fn init(init_file_per_thread_logger: Option<&'static str>) {
     WORKER.call_once(|| worker);
 }

+#[derive(Debug, Clone)]
 enum CacheEvent {
     OnCacheGet(PathBuf),
     OnCacheUpdate(PathBuf),
@@ -79,7 +80,11 @@ impl Worker {
         cache_config: &CacheConfig,
         init_file_per_thread_logger: Option<&'static str>,
     ) -> Self {
-        let (tx, rx) = sync_channel(cache_config.worker_event_queue_size());
+        let queue_size = match cache_config.worker_event_queue_size() {
+            num if num <= usize::max_value() as u64 => num as usize,
+            _ => usize::max_value(),
+        };
+        let (tx, rx) = sync_channel(queue_size);

         #[cfg(test)]
         let stats = Arc::new(WorkerStats::default());
@@ -117,15 +122,16 @@ impl Worker {
     fn send_cache_event(&self, event: CacheEvent) {
         #[cfg(test)]
         let stats: &WorkerStats = self.stats.borrow();
-        match self.sender.try_send(event) {
+        match self.sender.try_send(event.clone()) {
             Ok(()) => {
                 #[cfg(test)]
                 stats.sent.fetch_add(1, atomic::Ordering::SeqCst);
             }
             Err(err) => {
                 info!(
-                    "Failed to send asynchronously message to worker thread: {}",
-                    err
+                    "Failed to send asynchronously message to worker thread, \
+                     event: {:?}, error: {}",
+                    event, err
                 );

                 #[cfg(test)]
@@ -285,7 +291,7 @@ impl WorkerThread {
             path.as_ref(),
             self.cache_config.optimizing_compression_task_timeout(),
             self.cache_config
-                .allowed_clock_drift_for_locks_from_future(),
+                .allowed_clock_drift_for_files_from_future(),
         ) {
             p
         } else {
@@ -406,7 +412,7 @@ impl WorkerThread {
             &cleanup_file,
             self.cache_config.cleanup_interval(),
             self.cache_config
-                .allowed_clock_drift_for_locks_from_future(),
+                .allowed_clock_drift_for_files_from_future(),
         )
         .is_none()
         {
@@ -438,7 +444,7 @@ impl WorkerThread {
         let mut start_delete_idx_if_deleting_recognized_items: Option<usize> = None;

         let total_size_limit = self.cache_config.files_total_size_soft_limit();
-        let files_count_limit = self.cache_config.files_count_soft_limit();
+        let file_count_limit = self.cache_config.file_count_soft_limit();
         let tsl_if_deleting = total_size_limit
             .checked_mul(
                 self.cache_config
@@ -446,8 +452,8 @@ impl WorkerThread {
             )
             .unwrap()
             / 100;
-        let fcl_if_deleting = files_count_limit
-            .checked_mul(self.cache_config.files_count_limit_percent_if_deleting() as u64)
+        let fcl_if_deleting = file_count_limit
+            .checked_mul(self.cache_config.file_count_limit_percent_if_deleting() as u64)
             .unwrap()
             / 100;

@@ -466,7 +472,7 @@ impl WorkerThread {
                 }
             }

-            if total_size >= total_size_limit || (idx + 1) as u64 >= files_count_limit {
+            if total_size >= total_size_limit || (idx + 1) as u64 >= file_count_limit {
                 start_delete_idx = start_delete_idx_if_deleting_recognized_items;
                 break;
             }
@@ -578,7 +584,7 @@ impl WorkerThread {
                 Some(&entry),
                 &path,
                 cache_config.cleanup_interval(),
-                cache_config.allowed_clock_drift_for_locks_from_future(),
+                cache_config.allowed_clock_drift_for_files_from_future(),
             ) {
                 continue; // skip active lock
             }
@@ -597,7 +603,7 @@ impl WorkerThread {
                 Some(&entry),
                 &path,
                 cache_config.optimizing_compression_task_timeout(),
-                cache_config.allowed_clock_drift_for_locks_from_future(),
+                cache_config.allowed_clock_drift_for_files_from_future(),
             ) {
                 add_unrecognized!(file: path);
             } // else: skip active lock

@@ -54,7 +54,7 @@ pub mod lightbeam;
 pub use crate::address_map::{
     FunctionAddressMap, InstructionAddressMap, ModuleAddressMap, ModuleVmctxInfo, ValueLabelsRanges,
 };
-pub use crate::cache::init as cache_init;
+pub use crate::cache::{create_new_config as cache_create_new_config, init as cache_init};
 pub use crate::compilation::{
     Compilation, CompileError, Compiler, Relocation, RelocationTarget, Relocations,
 };

@@ -2,7 +2,7 @@ use wasmtime_environ::cache_init;

 #[test]
 fn test_cache_default_config_in_memory() {
-    let errors = cache_init::<&str>(true, None, false, None);
+    let errors = cache_init::<&str>(true, None, None);
     assert!(
         errors.is_empty(),
         "This test loads config from the default location, if there's one. Make sure it's correct!"

wasmtime-environ/tests/cache_disabled.rs
@@ -0,0 +1,7 @@
+use wasmtime_environ::cache_init;
+
+#[test]
+fn test_cache_disabled() {
+    let errors = cache_init::<&str>(false, None, None);
+    assert!(errors.is_empty(), "Failed to disable cache system");
+}
@@ -20,7 +20,7 @@ fn test_cache_fail_calling_init_twice() {
     );
     fs::write(&config_path, config_content).expect("Failed to write test config file");

-    let errors = cache_init(true, Some(&config_path), false, None);
+    let errors = cache_init(true, Some(&config_path), None);
     assert!(errors.is_empty());
-    let _errors = cache_init(true, Some(&config_path), false, None);
+    let _errors = cache_init(true, Some(&config_path), None);
 }

@@ -18,6 +18,6 @@ fn test_cache_fail_invalid_config() {
     );
     fs::write(&config_path, config_content).expect("Failed to write test config file");

-    let errors = cache_init(true, Some(&config_path), false, None);
+    let errors = cache_init(true, Some(&config_path), None);
     assert!(!errors.is_empty());
 }

@@ -5,6 +5,6 @@ use wasmtime_environ::cache_init;
 fn test_cache_fail_invalid_path_to_config() {
     let dir = tempfile::tempdir().expect("Can't create temporary directory");
     let config_path = dir.path().join("cache-config.toml"); // doesn't exist
-    let errors = cache_init(true, Some(&config_path), false, None);
+    let errors = cache_init(true, Some(&config_path), None);
     assert!(!errors.is_empty());
 }

@@ -1,12 +1,13 @@
 use tempfile;
-use wasmtime_environ::cache_init;
+use wasmtime_environ::cache_create_new_config;

 #[test]
 fn test_cache_write_default_config() {
     let dir = tempfile::tempdir().expect("Can't create temporary directory");
     let config_path = dir.path().join("cache-config.toml");

-    let errors = cache_init(true, Some(&config_path), true, None);
-    assert!(errors.is_empty());
+    let result = cache_create_new_config(Some(&config_path));
+    assert!(result.is_ok());
     assert!(config_path.exists());
+    assert_eq!(config_path, result.unwrap());
 }