Compare commits

...

93 commits
0.1.1 ... main

Author SHA1 Message Date
e1bf2cb467
chore: added Debian packaging using cargo-deb 2026-04-19 16:03:43 +02:00
f0d5522661
chore: update dependencies; bump version to 0.5.0 2026-04-14 21:30:27 +02:00
7987bef384
refactor(papermc_api): fix Clippy diagnostics 2026-04-14 21:04:57 +02:00
d38cb4ce39
feat(cli): add commands to list, view and download PaperMC builds 2026-04-14 20:57:33 +02:00
f2c4b97626
feat(papermc_api): add routes for per-version, project and build information 2026-04-13 22:31:24 +02:00
b4dce4b69a
feat(papermc_api): implement wrapper for PaperMC builds API 2026-03-31 19:14:17 +02:00
f2a0b6230f
refactor(backup): delta contributions function no longer takes reversed input 2025-05-24 23:39:54 +02:00
5f43d7b8b1
test(backup): add some more delta tests 2025-05-24 23:23:46 +02:00
15c4839a81
test(backup): add delta union test 2025-05-24 17:20:25 +02:00
22a6e68c7c
feat(backup): implement mutation methods and specialized PartialEq for State 2025-05-24 17:20:25 +02:00
3f00eee61e
feat(backup): add delta mutation methods; start union tests 2025-05-24 17:20:25 +02:00
5f7376ebb1
chore: bump versions to 0.4.2 2025-05-24 17:20:25 +02:00
638e228ba4
chore: add publish functionality to justfile 2025-05-17 20:16:46 +02:00
e8a92d7e07
chore: organize justfiles 2025-05-17 20:10:23 +02:00
6a8725489e
chore: bump dependencies 2025-04-30 17:40:28 +02:00
3ae19e2168
fix(backup): work with temporary file while writing json metadata file 2025-04-30 15:30:57 +02:00
08e77034a7
refactor: add Justfile; fix some clippy lints 2025-04-30 15:15:51 +02:00
abafd9a28c
refactor: split backup and alex into separate crate; set up workspace 2025-04-30 15:14:34 +02:00
d23227dd0b
fix(ci): use old platform (CI: all checks successful) 2023-08-13 10:19:29 +02:00
d3cb29b52e
chore: move PKGBUILD to separate repo 2023-08-13 10:19:29 +02:00
3cddea19c3
chore: revert to old platform syntax (CI: all checks successful) 2023-08-12 15:47:05 +02:00
9ce2417528
chore: bump versions to 0.4.0 (CI: push checks failed; tag pipelines successful) 2023-08-12 15:06:58 +02:00
f2e781dd5a
chore: bump dependency versions (CI: all checks successful) 2023-08-12 14:50:06 +02:00
5bdd4e21b0
chore(ci): modernize config (CI: all checks successful) 2023-08-12 14:09:22 +02:00
8f190c489b
chore: update changelog 2023-08-12 13:58:13 +02:00
5f6366078c
fix: properly parse layers env var (CI: all checks successful) 2023-08-12 13:46:46 +02:00
b3d1cec078
feat: granular locking for proper concurrent access to server process (CI: all checks successful) 2023-08-12 11:44:35 +02:00
a51ff3937d
refactor: reorder imports (CI: all checks successful) 2023-08-11 23:24:14 +02:00
db3bba5a42
feat: also allow run args to be passed from toml file 2023-08-11 23:13:17 +02:00
34d016fd3f
feat: allow passing global configuration as TOML file 2023-08-11 21:26:17 +02:00
bf83357464
feat: publish arch packages (CI: all checks successful) 2023-07-08 16:11:36 +02:00
bfb264e823
docs: add some more help strings (CI: all checks successful) 2023-07-08 15:34:24 +02:00
241bb4d68e
feat: add extract command 2023-07-08 15:31:01 +02:00
6cdc18742e
feat: don't read non-contributing archives for export 2023-07-08 14:50:18 +02:00
b924a054a6
chore: bump version to 0.3.1 (CI: all checks successful) 2023-07-08 14:12:18 +02:00
32d923e64b
refactor: this is fun (CI: all checks successful) 2023-07-08 13:53:18 +02:00
1acfc9c422
refactor: have fun with rust's functional stuff (CI: all checks successful) 2023-07-08 13:39:51 +02:00
fc8e8d37d3
refactor: remove some code duplication (CI: all checks successful) 2023-07-08 10:32:56 +02:00
5567323473
feat: initially working export command (CI: all checks successful) 2023-07-07 23:12:07 +02:00
80b814bcff
feat: further use State abstraction 2023-07-07 18:06:15 +02:00
4ec336eb86
feat: abstract State 2023-07-07 17:05:24 +02:00
6e216aa88f
feat: define delta difference & strict difference 2023-07-06 15:46:36 +02:00
75e9d7a9d2
chore: bumb version to 0.3.0 (CI: all checks successful) 2023-07-04 15:55:52 +02:00
e6fa8a0eeb
docs: add a few docstrings 2023-07-04 15:49:16 +02:00
55c5f24937
feat: specify output dirs for restore instead of using config & world (CI: all checks successful) 2023-07-04 15:36:12 +02:00
36c441b8c2
feat: respect backup list filter option 2023-07-04 14:15:00 +02:00
f71db90922
feat: store end time as metadata (CI: all checks successful) 2023-07-03 12:59:50 +02:00
2c256cf904
refactor: quick maths 2023-07-03 12:40:40 +02:00
bfd278abbe
feat: show backup sizes in list command (CI: all checks successful) 2023-07-03 12:11:41 +02:00
c5193f0f3c
feat: store backup sizes in metadata file 2023-07-03 11:54:11 +02:00
a4a03ca4c5
feat: improve list view (CI: all checks successful) 2023-06-24 13:51:37 +02:00
5159bfdddd
feat: basic list command 2023-06-24 13:35:59 +02:00
0eda768c03
chore: update readme (CI: all checks successful) 2023-06-24 12:16:53 +02:00
a4e2a1276f
feat: restore backup chains using cli commands (CI: all checks successful) 2023-06-23 22:47:38 +02:00
e373fc85f1
feat: create backups from cli for specific layer 2023-06-23 18:02:38 +02:00
1cfe13674d
refactor: structure code to allow expanding cli functionality (CI: all checks successful) 2023-06-23 15:51:17 +02:00
d5cea49c8b
feat: further generalize backup code 2023-06-23 15:00:39 +02:00
03e21fda87
feat: show message describing what layer is backing up (CI: all checks successful) 2023-06-23 13:01:19 +02:00
29636ffcdb
feat: implement backup layers using meta manager 2023-06-23 12:30:10 +02:00
a236c36a4f
feat: take backup layers as arguments 2023-06-23 10:53:17 +02:00
0a459ee30b
refactor: let backup manager calculate next backup time (CI: all checks successful) 2023-06-22 21:15:40 +02:00
4e8d0a8d25
refactor: we go rusty 2023-06-22 20:23:13 +02:00
188fb30343
fix: better serde bounds (CI: all checks successful) 2023-06-22 20:10:37 +02:00
53dc3783ca
feat: store server info in metadata file; change cli flags (CI: all checks successful) 2023-06-20 19:31:50 +02:00
ef631fab1d
refactor: separate backup logic into own module (CI: clippy failed) 2023-06-19 14:04:38 +02:00
74a0b91fd1
refactor: remove open function (CI: clippy failed) 2023-06-18 23:33:56 +02:00
b48c531d80
feat: configurable parameters for incremental backups (CI: clippy failed) 2023-06-18 22:48:11 +02:00
b51d951688
feat: re-implement remove old backups (CI: clippy failed) 2023-06-18 21:56:43 +02:00
bb7b57899b
refactor: store backups in nested vecs instead; introduce concept of chains (CI: clippy failed) 2023-06-18 21:15:05 +02:00
f7235fb342
refactor: move iterating over files to Path extension trait (CI: clippy failed) 2023-06-17 12:08:46 +02:00
5275356353
feat: added backup cli command (CI: clippy failed) 2023-06-16 17:23:36 +02:00
27d7e681c3
feat: temporarily disable "remove old backups" (CI: clippy failed) 2023-06-15 22:54:17 +02:00
8add96b39b
feat: persistently store backup state (CI: clippy failed) 2023-06-15 20:38:52 +02:00
d204c68400
fix: actually working incremental backup (CI: clippy failed) 2023-06-15 09:56:40 +02:00
a9e7b215d1
feat: move running server to subcommand 2023-06-14 22:17:53 +02:00
fcc111b4ef
feat: possible incremental backup implementation using new abstraction 2023-06-14 21:47:59 +02:00
b7a678e32f
feat: lots of backup stuff 2023-06-13 17:43:47 +02:00
703a25e8be
refactor: use utc time 2023-06-13 15:12:30 +02:00
29d6713486
feat: implement own listing of files 2023-06-13 15:12:30 +02:00
4958257f6e
refactor: move backup logic to separate module 2023-06-13 15:12:30 +02:00
90aa929b73
feat: show backup time in message 2023-06-13 15:12:26 +02:00
9ce8199d5f
fix: use correct env var for backup dir (CI: all checks successful) 2023-06-13 13:44:08 +02:00
375a68fbd6
chore: bump versions (CI: push checks unknown; release and tag pipelines successful) 2023-06-13 13:02:27 +02:00
ce3dcdd4b1
chore: please clippy (CI: push checks unknown; release successful) 2023-06-13 13:01:47 +02:00
5ae23c931a
feat: change jvm flags order 2023-06-13 13:00:42 +02:00
b08ba3853f
feat: add --dry flag 2023-06-13 12:53:50 +02:00
acb3cfd8e6
chore: update readme (CI: release status unknown; other checks successful) 2023-06-13 11:51:18 +02:00
45d736d1bb
chore: bump version (CI: push checks unknown; release and tag pipelines successful) 2023-06-13 11:40:18 +02:00
69ce8616d5
feat: custom message if backups failed (CI: release status unknown; other checks successful) 2023-06-06 20:51:58 +02:00
50cdd3115f
feat: solely handle single terminating signal for now 2023-06-06 20:49:00 +02:00
0faa6a8578
feat: add basis for signal handling 2023-06-06 20:22:14 +02:00
f5fc8b588f
feat: properly backup config directory (CI: release status unknown; other checks successful) 2023-06-06 20:14:29 +02:00
640364405f
feat: add java optimisation flags 2023-06-06 19:27:35 +02:00
50 changed files with 6145 additions and 680 deletions


@@ -1,2 +1,7 @@
[alias]
runs = "run -- paper --config data/config --backup data/backups --world data/worlds --jar data/paper.jar"
runs = "run -- --config data/config --backup data/backups --world data/worlds --layers 2min,2,4,4;3min,3,2,2"
runrs = "run --release -- --config data/config --backup data/backups --world data/worlds --layers 2min,2,4,4;3min,3,2,2"
[target.aarch64-unknown-linux-musl]
linker = "aarch64-linux-gnu-gcc"
runner = "qemu-aarch64"

.dockerignore (new file, +5 lines)

@@ -0,0 +1,5 @@
*
!Cargo.toml
!Cargo.lock
!src/

.gitignore (vendored, 3 changed lines)

@@ -19,4 +19,5 @@ target/
# testing files
*.jar
data/
data*/
*.log


@@ -5,17 +5,17 @@ matrix:
platform: "linux/${ARCH}"
branches:
exclude: [main]
when:
branch:
exclude: [main]
event: push
pipeline:
steps:
build:
image: 'rust:1.70-alpine3.18'
image: 'rust:1.71-alpine3.18'
commands:
- apk add --no-cache build-base
- cargo build --verbose
- cargo test --verbose
# Binaries, even debug ones, should be statically compiled
- '[ "$(readelf -d target/debug/alex | grep NEEDED | wc -l)" = 0 ]'
when:
event: [push]


@@ -1,13 +1,13 @@
platform: 'linux/amd64'
branches:
exclude: [main]
when:
branch:
exclude: [ main ]
event: push
pipeline:
steps:
clippy:
image: 'rust:1.70'
image: 'rust:1.71'
commands:
- rustup component add clippy
- cargo clippy -- --no-deps -Dwarnings
when:
event: [push]


@@ -1,13 +1,13 @@
platform: 'linux/amd64'
branches:
exclude: [main]
when:
branch:
exclude: [ main ]
event: push
pipeline:
steps:
lint:
image: 'rust:1.70'
image: 'rust:1.71'
commands:
- rustup component add rustfmt
- cargo fmt -- --check
when:
event: [push]


@@ -4,19 +4,19 @@ matrix:
- 'linux/arm64'
platform: ${PLATFORM}
branches: [ main ]
pipeline:
when:
event: tag
steps:
build:
image: 'rust:1.70-alpine3.18'
image: 'rust:1.71-alpine3.18'
commands:
- apk add --no-cache build-base
- cargo build --release --verbose
# Ensure the release binary is also statically compiled
- '[ "$(readelf -d target/release/alex | grep NEEDED | wc -l)" = 0 ]'
- du -h target/release/alex
when:
event: tag
publish:
image: 'curlimages/curl'
@@ -28,5 +28,3 @@ pipeline:
--user "Chewing_Bever:$GITEA_PASSWORD"
--upload-file target/release/alex
https://git.rustybever.be/api/packages/Chewing_Bever/generic/alex/"${CI_COMMIT_TAG}"/alex-"$(echo '${PLATFORM}' | sed 's:/:-:g')"
when:
event: tag


@@ -7,6 +7,104 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased](https://git.rustybever.be/Chewing_Bever/alex/src/branch/dev)
### Added
* Debian packages are now available in the [package registry](https://git.rustybever.be/Chewing_Bever/alex/packages)
## [0.5.0](https://git.rustybever.be/Chewing_Bever/alex/src/tag/0.5.0)
### Added
* CLI commands to interact with the PaperMC API
    * List and view available Minecraft versions and builds per version
    * Download jars to simplify manual server updating
## [0.4.2](https://git.rustybever.be/Chewing_Bever/alex/src/tag/0.4.2)
### Fixed
* Fix bug where JSON metadata file can be corrupted if crash occurs while
writing (data is now written to a temporary file before atomically renaming)
## [0.4.1](https://git.rustybever.be/Chewing_Bever/alex/src/tag/0.4.1)
### Changed
* Moved PKGBUILD to separate repo
* Properly update lock file
## [0.4.0](https://git.rustybever.be/Chewing_Bever/alex/src/tag/0.4.0)
### Added
* Extract command for working with the output of export
* Arch packages are now published to my bur repo
* Allow passing configuration variables from TOML file
### Changed
* Export command no longer reads backups that do not contribute to the final
state
* Running backups no longer block stdin input or shutdown
* Env vars `ALEX_CONFIG_DIR`, `ALEX_WORLD_DIR` and `ALEX_BACKUP_DIR` renamed to
`ALEX_CONFIG`, `ALEX_WORLD` and `ALEX_BACKUP` respectively
## [0.3.1](https://git.rustybever.be/Chewing_Bever/alex/src/tag/0.3.1)
### Added
* Export command to export any backup as a new full backup
## [0.3.0](https://git.rustybever.be/Chewing_Bever/alex/src/tag/0.3.0)
### Added
* Incremental backups
    * Chain length describes how many incremental backups to create from the
      same full backup
    * "backups to keep" has been replaced by "chains to keep"
* Server type & version and backup size are now stored as metadata in the
  metadata file
* Backup layers
    * Store multiple chains of backups in parallel, configuring each with
      different parameters (son-father-grandfather principle)
* CLI commands for creating, restoring & listing backups
### Changed
* Running the server now uses the `run` CLI subcommand
* `server_type` and `server_version` arguments are now optional flags
### Removed
* `max_backups` setting
## [0.2.2](https://git.rustybever.be/Chewing_Bever/alex/src/tag/0.2.2)
### Fixed
* Use correct env var for backup directory
## [0.2.1](https://git.rustybever.be/Chewing_Bever/alex/src/tag/0.2.1)
### Added
* `--dry` flag to inspect command that will be run
### Changed
* JVM flags now narrowly follow Aikar's specifications
## [0.2.0](https://git.rustybever.be/Chewing_Bever/alex/src/tag/0.2.0)
### Added
* Rudimentary signal handling for gently stopping server
    * A single stop signal will trigger the Java process to shut down, but
      Alex still expects to be run from a utility such as dumb-init
* Properly back up entire config directory
* Inject Java optimization flags
## [0.1.1](https://git.rustybever.be/Chewing_Bever/alex/src/tag/0.1.1)
### Changed

Cargo.lock (generated, 1268 changed lines; diff suppressed because it is too large)


@@ -1,20 +1,20 @@
[package]
name = "alex"
version = "0.1.0"
description = "Wrapper around Minecraft server processes, designed to complement Docker image installations."
[workspace]
resolver = "2"
members = [
'backup',
'alex',
'papermc-api'
]
[workspace.package]
version = "0.5.0"
authors = ["Jef Roosens"]
edition = "2021"
license-file = "LICENSE"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
# Used for creating tarballs for backups
tar = "0.4.38"
# Used to compress said tarballs using gzip
flate2 = "1.0.26"
# Used for backup filenames
chrono = "0.4.26"
clap = { version = "4.3.1", features = ["derive", "env"] }
[workspace.dependencies]
chrono = { version = "0.4.26", features = ["serde"] }
serde = { version = "1.0.164", features = ["derive"] }
[profile.release]
lto = "fat"

Dockerfile (new file, +67 lines)

@@ -0,0 +1,67 @@
FROM rust:1.70-alpine3.18 AS builder
ARG DI_VER=1.2.5
WORKDIR /app
COPY . ./
RUN apk add --no-cache build-base unzip curl && \
curl -Lo - "https://github.com/Yelp/dumb-init/archive/refs/tags/v${DI_VER}.tar.gz" | tar -xzf - && \
cd "dumb-init-${DI_VER}" && \
make SHELL=/bin/sh && \
mv dumb-init ..
RUN cargo build && \
[ "$(readelf -d target/debug/alex | grep NEEDED | wc -l)" = 0 ]
# We use ${:-} instead of a default value because the argument is always passed
# to the build, it'll just be blank most likely
FROM eclipse-temurin:18-jre-alpine
# Build arguments
ARG MC_VERSION=1.19.4
ARG PAPERMC_VERSION=525
RUN addgroup -Sg 1000 paper && \
adduser -SHG paper -u 1000 paper
# Create worlds and config directory
WORKDIR /app
RUN mkdir -p worlds config/cache backups
# Download server file
ADD "https://papermc.io/api/v2/projects/paper/versions/$MC_VERSION/builds/$PAPERMC_VERSION/downloads/paper-$MC_VERSION-$PAPERMC_VERSION.jar" server.jar
# Make sure the server user can access all necessary folders
RUN chown -R paper:paper /app
# Store the cache in an anonymous volume, which means it won't get stored in the other volumes
VOLUME /app/config/cache
VOLUME /app/backups
COPY --from=builder /app/dumb-init /bin/dumb-init
COPY --from=builder /app/target/debug/alex /bin/alex
RUN chmod +x /bin/alex
# Default value to keep users from eating up all ram accidentally
ENV ALEX_CONFIG=/app/config \
ALEX_WORLD=/app/worlds \
ALEX_BACKUP=/app/backups \
ALEX_SERVER=paper \
ALEX_XMS=1024 \
ALEX_XMX=2048 \
ALEX_JAR=/app/server.jar \
ALEX_SERVER_VERSION="${MC_VERSION}-${PAPERMC_VERSION}" \
ALEX_LAYERS="2min,2,4,4;3min,3,2,2"
# Document exposed ports
EXPOSE 25565
# Switch to non-root user
USER paper:paper
ENTRYPOINT ["/bin/dumb-init", "--"]
CMD ["/bin/alex", "run"]

Justfile (new file, +81 lines)

@@ -0,0 +1,81 @@
# Build the local development build
[group('build')]
build:
cargo build --frozen --workspace
alias b := build
# Build release binaries for the supported architectures
[group('build')]
build-release:
cargo build \
--release \
--frozen \
--workspace \
--target x86_64-unknown-linux-musl \
--target aarch64-unknown-linux-musl
# Run all tests in the workspace
test:
cargo test --frozen --workspace
alias t := test
# Run cargofmt and clippy
check:
cargo fmt --check --all
cargo clippy \
--frozen \
--all -- \
--no-deps \
--deny 'clippy::all'
alias c := check
alias lint := check
fetch:
cargo fetch --locked
clean:
cargo clean
doc:
cargo doc --workspace --frozen
run:
mkdir -p data
cargo run --frozen --package alex -- run \
--config data/config \
--backup data/backups \
--world data/worlds \
--jar ./paper-1.21.5-77.jar \
--java '/usr/lib/jvm/java-21-openjdk/bin/java' \
--layers '2min,2,4,4;3min,3,2,2'
# Package the static release binaries as a Debian package
[group('package')]
package-deb: build-release
cargo deb \
--package alex \
--frozen \
--no-build \
--target x86_64-unknown-linux-musl \
--target aarch64-unknown-linux-musl
# Publish the binaries and packages for a new release
[group('package')]
publish-release tag: build-release package-deb
# Check the binaries are proper static binaries
[ "$(readelf -d target/x86_64-unknown-linux-musl/release/alex | grep NEEDED | wc -l)" = 0 ]
[ "$(readelf -d target/aarch64-unknown-linux-musl/release/alex | grep NEEDED | wc -l)" = 0 ]
curl \
--parallel --fail-early \
--netrc --upload-file target/x86_64-unknown-linux-musl/release/alex \
https://git.rustybever.be/api/packages/Chewing_Bever/generic/alex/"{{ tag }}"/alex-linux-amd64 \
--next \
--netrc --upload-file target/aarch64-unknown-linux-musl/release/alex \
https://git.rustybever.be/api/packages/Chewing_Bever/generic/alex/"{{ tag }}"/alex-linux-arm64 \
--next \
--netrc --upload-file target/debian/alex_{{ tag }}-1_amd64.deb \
https://git.rustybever.be/api/packages/Chewing_Bever/debian/pool/any/main/upload \
--next \
--netrc --upload-file target/debian/alex_{{ tag }}-1_arm64.deb \
https://git.rustybever.be/api/packages/Chewing_Bever/debian/pool/any/main/upload > /dev/null

README.md (136 changed lines)

@@ -1,3 +1,135 @@
# mc-wrapper
# Alex
A wrapper around a standard Minecraft server, written in Rust.
Alex is a wrapper around a typical Minecraft server process. It acts as the
parent process, and sits in between the user's input and the server's stdin.
This allows Alex to support additional commands that execute Rust code, notably
creating periodic backups.
## Installation
Alex is distributed as statically compiled binaries for Linux amd64 and arm64.
These can be found
[here](https://git.rustybever.be/Chewing_Bever/alex/packages).
### Arch
Arch users can install prebuilt `x86_64` & `aarch64` packages from my `bur`
repository. Add the following at the bottom of your `pacman.conf`:
```toml
[bur]
Server = https://arch.r8r.be/$repo/$arch
SigLevel = Optional
```
If you prefer building the package yourself, the PKGBUILD can be found
[here](https://git.rustybever.be/bur/alex-mc).
### Dockerfiles
You can easily install alex in your Docker images by letting Docker download it
for you. Add the following to your Dockerfile (replace with your required
version & architecture):
```dockerfile
ADD "https://git.rustybever.be/api/packages/Chewing_Bever/generic/alex/0.2.2/alex-linux-amd64" /bin/alex
```
## Why
The primary use case for this is backups. A common problem I've had with
Minecraft backups is that they fail because the server is writing to one of
the region files as the backup is being created. Alex solves this by sending
`save-off` and `save-all` to the server before creating the tarball.
Afterwards, saving is enabled again with `save-on`.
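As a minimal sketch of that sequence (the function and its parameters are hypothetical, not Alex's actual API; `create_tarball` stands in for the real archiving step):

```rust
use std::io::Write;

// Hypothetical sketch: wrap the archiving step in the save-off/save-all/
// save-on guard described above, writing commands to the server's stdin.
fn backup_with_save_guard<W: Write>(
    server_stdin: &mut W,
    create_tarball: impl FnOnce() -> std::io::Result<()>,
) -> std::io::Result<()> {
    writeln!(server_stdin, "save-off")?; // stop the server writing region files
    writeln!(server_stdin, "save-all")?; // flush pending chunks to disk
    create_tarball()?;                   // archive while files are stable
    writeln!(server_stdin, "save-on")?;  // re-enable periodic saving
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Any `Write` works; a Vec<u8> stands in for the child process stdin here.
    let mut fake_stdin: Vec<u8> = Vec::new();
    backup_with_save_guard(&mut fake_stdin, || Ok(()))?;
    print!("{}", String::from_utf8(fake_stdin).unwrap());
    Ok(())
}
```

Note that `save-on` is only reached if the archiving step succeeds; a real implementation would want to re-enable saving on the error path too.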
## Features
* Create safe backups as gzip-compressed tarballs using the `backup` command
* Automatically create backups periodically
* Properly configures the process (working directory, optimisation flags)
* Configure everything as CLI arguments or environment variables
## Configuration
Most information can be retrieved easily by looking at the help command:
```
Wrapper around Minecraft server processes, designed to complement Docker image installations.
Usage: alex [OPTIONS] <COMMAND>
Commands:
run Run the server
backup Interact with the backup system without starting a server
help Print this message or the help of the given subcommand(s)
Options:
--config <CONFIG_DIR>
Directory where configs are stored, and where the server will run [env: ALEX_CONFIG_DIR=] [default: .]
--world <WORLD_DIR>
Directory where world files will be saved [env: ALEX_WORLD_DIR=] [default: ../worlds]
--backup <BACKUP_DIR>
Directory where backups will be stored [env: ALEX_BACKUP_DIR=] [default: ../backups]
--layers <LAYERS>
What backup layers to employ, provided as a list of tuples name,frequency,chains,chain_len delimited by semicolons (;) [env: ALEX_LAYERS=]
--server <SERVER>
Type of server [env: ALEX_SERVER=] [default: unknown] [possible values: unknown, paper, forge, vanilla]
--server-version <SERVER_VERSION>
Version string for the server, e.g. 1.19.4-545 [env: ALEX_SERVER_VERSION=] [default: ]
-h, --help
Print help
-V, --version
Print version
```
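The `--layers` value shown above is a list of `name,frequency,chains,chain_len` tuples delimited by semicolons. A sketch of parsing that format might look as follows (the `Layer` struct and `parse_layers` function here are illustrative, not Alex's internal API):

```rust
// Illustrative parser for the `--layers` format from the help text above,
// e.g. "2min,2,4,4;3min,3,2,2".
#[derive(Debug, PartialEq)]
struct Layer {
    name: String,
    frequency: u64, // minutes between backups
    chains: u64,    // how many full chains to keep
    chain_len: u64, // backups per chain (1 = full backups only)
}

fn parse_layers(s: &str) -> Option<Vec<Layer>> {
    s.split(';')
        .map(|tuple| {
            // Each tuple must have exactly four comma-separated fields.
            let parts: Vec<&str> = tuple.split(',').collect();
            match parts.as_slice() {
                [name, freq, chains, len] => Some(Layer {
                    name: name.to_string(),
                    frequency: freq.parse().ok()?,
                    chains: chains.parse().ok()?,
                    chain_len: len.parse().ok()?,
                }),
                _ => None,
            }
        })
        .collect() // Option<Vec<Layer>>: None if any tuple is malformed
}

fn main() {
    let layers = parse_layers("2min,2,4,4;3min,3,2,2").unwrap();
    println!("{layers:?}");
}
```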
### Choosing layer parameters
One part of the configuration that does require some clarification is the layer
system. Alex can manage an arbitrary number of backup layers, each having its
own configuration. These layers can either use incremental or full backups,
depending on how they're configured.
These layers mostly correspond to the grandfather-father-son backup rotation
scheme. For example, one could have a layer that creates incremental backups
every 30 minutes, which are stored for 24 hours. This gives you 24 hours of
granular rollback in case your server suffers a crash. A second layer might
create a full backup every 24 hours, with backups being stored for 7 days. This
gives you 7 days' worth of backups at a granularity of 24 hours. This
approach allows for greater versatility, while not having to store a large
amount of data. Thanks to incremental backups, frequent backups don't have to
take long at all.
A layer consists of 4 pieces of metadata:
* A name, which will be used in the file system and the in-game notifications
* The frequency, which describes in minutes how frequently a backup should be
created
* How many chains should be kept at all times
* How long each chain should be
These last two require some clarification. In Alex, a "chain" describes an
initial full backup and zero or more incremental backups that are created from
that initial full backup. This concept exists because an incremental backup has
no real meaning if its ancestors are not known. To restore one of these chains,
all backups in the chain need to be restored in order. Note that a chain length
of 1 disables incremental backups entirely.
How many backups to keep is defined by how many chains should be stored.
Because an incremental backup needs to have its ancestors in order to be
restored, we can't simply "keep the last n backups", as this would break these
chains. Therefore, you configure how many backups to store using these chains.
For example, if you configure a layer to store 5 chains of length 4, you will
have 20 archive files on disk, namely 5 full backups and 15 incremental
backups. Note that Alex applies these rules to *full* chains. An in-progress
chain does not count towards this total. Therefore, you can have up to `n-1`
additional archive files, with `n` being the chain length, on disk.
To look at it from another perspective, say we wish to have a granularity of 30
minutes for a timespan of 24 hours. Then we could configure the layer to only
save a single chain, with a chain length of 48. If we prefer to have a few full
backups instead of a long chain of incremental backups, we could instead use a
chain length of 12 and store 4 chains. Either way, the total comes out to 48,
which spans 24 hours if we make a backup every 30 minutes.
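The two equivalent setups above could be written as layer configurations in
the format used by alex-example.toml (the layer name here is illustrative):

```toml
# Option A: one chain of 48 backups (1 full + 47 incremental).
[[layers]]
name = "halfhour"
frequency = 30
chains = 1
chain_len = 48

# Option B: four chains of 12, trading disk space for shorter restore chains.
# [[layers]]
# name = "halfhour"
# frequency = 30
# chains = 4
# chain_len = 12
```

Either way, a backup is made every 30 minutes and 48 backups span 24 hours;
option B restores faster because at most 11 incrementals sit on top of a full
backup.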

18
alex-example.toml Normal file

@ -0,0 +1,18 @@
config = "data/config"
world = "data/worlds"
backup = "data/backups"
server = "paper"
jar = './paper-1.21.5-77.jar'
java = '/usr/lib/jvm/java-21-openjdk/bin/java'
[[layers]]
name = "2min"
frequency = 2
chains = 4
chain_len = 4
[[layers]]
name = "3min"
frequency = 3
chains = 2
chain_len = 2

871
alex/Cargo.lock generated Normal file

@ -0,0 +1,871 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
[[package]]
name = "adler2"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "512761e0bb2578dd7380c6baaa0f4ce03e84f95e960231d1dec8bf4d7d6e2627"
[[package]]
name = "alex"
version = "0.1.0"
dependencies = [
"backup",
"chrono",
"clap",
"figment",
"serde",
"signal-hook",
]
[[package]]
name = "android-tzdata"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e999941b234f3131b00bc13c22d06e8c5ff726d1b6318ac7eb276997bbb4fef0"
[[package]]
name = "android_system_properties"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "819e7219dbd41043ac279b19830f2efc897156490d7fd6ea916720117ee66311"
dependencies = [
"libc",
]
[[package]]
name = "anstream"
version = "0.6.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8acc5369981196006228e28809f761875c0327210a891e941f4c683b3a99529b"
dependencies = [
"anstyle",
"anstyle-parse",
"anstyle-query",
"anstyle-wincon",
"colorchoice",
"is_terminal_polyfill",
"utf8parse",
]
[[package]]
name = "anstyle"
version = "1.0.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55cc3b69f167a1ef2e161439aa98aed94e6028e5f9a59be9a6ffb47aef1651f9"
[[package]]
name = "anstyle-parse"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3b2d16507662817a6a20a9ea92df6652ee4f94f914589377d69f3b21bc5798a9"
dependencies = [
"utf8parse",
]
[[package]]
name = "anstyle-query"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "79947af37f4177cfead1110013d678905c37501914fba0efea834c3fe9a8d60c"
dependencies = [
"windows-sys",
]
[[package]]
name = "anstyle-wincon"
version = "3.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ca3534e77181a9cc07539ad51f2141fe32f6c3ffd4df76db8ad92346b003ae4e"
dependencies = [
"anstyle",
"once_cell",
"windows-sys",
]
[[package]]
name = "atomic"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8d818003e740b63afc82337e3160717f4f63078720a810b7b903e70a5d1d2994"
dependencies = [
"bytemuck",
]
[[package]]
name = "autocfg"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"
[[package]]
name = "backup"
version = "0.4.1"
dependencies = [
"chrono",
"flate2",
"serde",
"serde_json",
"tar",
]
[[package]]
name = "bitflags"
version = "2.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c8214115b7bf84099f1309324e63141d4c5d7cc26862f97a0a857dbefe165bd"
[[package]]
name = "bumpalo"
version = "3.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1628fb46dfa0b37568d12e5edd512553eccf6a22a78e8bde00bb4aed84d5bdbf"
[[package]]
name = "bytemuck"
version = "1.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9134a6ef01ce4b366b50689c94f82c14bc72bc5d0386829828a2e2752ef7958c"
[[package]]
name = "cc"
version = "1.2.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "04da6a0d40b948dfc4fa8f5bbf402b0fc1a64a28dbf7d12ffd683550f2c1b63a"
dependencies = [
"shlex",
]
[[package]]
name = "cfg-if"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
name = "chrono"
version = "0.4.41"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c469d952047f47f91b68d1cba3f10d63c11d73e4636f24f08daf0278abf01c4d"
dependencies = [
"android-tzdata",
"iana-time-zone",
"js-sys",
"num-traits",
"serde",
"wasm-bindgen",
"windows-link",
]
[[package]]
name = "clap"
version = "4.5.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eccb054f56cbd38340b380d4a8e69ef1f02f1af43db2f0cc817a4774d80ae071"
dependencies = [
"clap_builder",
"clap_derive",
]
[[package]]
name = "clap_builder"
version = "4.5.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "efd9466fac8543255d3b1fcad4762c5e116ffe808c8a3043d4263cd4fd4862a2"
dependencies = [
"anstream",
"anstyle",
"clap_lex",
"strsim",
]
[[package]]
name = "clap_derive"
version = "4.5.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09176aae279615badda0765c0c0b3f6ed53f4709118af73cf4655d85d1530cd7"
dependencies = [
"heck",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "clap_lex"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f46ad14479a25103f283c0f10005961cf086d8dc42205bb44c46ac563475dca6"
[[package]]
name = "colorchoice"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b63caa9aa9397e2d9480a9b13673856c78d8ac123288526c37d7839f2a86990"
[[package]]
name = "core-foundation-sys"
version = "0.8.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
[[package]]
name = "crc32fast"
version = "1.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a97769d94ddab943e4510d138150169a2758b5ef3eb191a9ee688de3e23ef7b3"
dependencies = [
"cfg-if",
]
[[package]]
name = "equivalent"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f"
[[package]]
name = "errno"
version = "0.3.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "976dd42dc7e85965fe702eb8164f21f450704bdde31faefd6471dba214cb594e"
dependencies = [
"libc",
"windows-sys",
]
[[package]]
name = "figment"
version = "0.10.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8cb01cd46b0cf372153850f4c6c272d9cbea2da513e07538405148f95bd789f3"
dependencies = [
"atomic",
"pear",
"serde",
"toml",
"uncased",
"version_check",
]
[[package]]
name = "filetime"
version = "0.2.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "35c0522e981e68cbfa8c3f978441a5f34b30b96e146b33cd3359176b50fe8586"
dependencies = [
"cfg-if",
"libc",
"libredox",
"windows-sys",
]
[[package]]
name = "flate2"
version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ced92e76e966ca2fd84c8f7aa01a4aea65b0eb6648d72f7c8f3e2764a67fece"
dependencies = [
"crc32fast",
"miniz_oxide",
]
[[package]]
name = "hashbrown"
version = "0.15.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bf151400ff0baff5465007dd2f3e717f3fe502074ca563069ce3a6629d07b289"
[[package]]
name = "heck"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
[[package]]
name = "iana-time-zone"
version = "0.1.63"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b0c919e5debc312ad217002b8048a17b7d83f80703865bbfcfebb0458b0b27d8"
dependencies = [
"android_system_properties",
"core-foundation-sys",
"iana-time-zone-haiku",
"js-sys",
"log",
"wasm-bindgen",
"windows-core",
]
[[package]]
name = "iana-time-zone-haiku"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f31827a206f56af32e590ba56d5d2d085f558508192593743f16b2306495269f"
dependencies = [
"cc",
]
[[package]]
name = "indexmap"
version = "2.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cea70ddb795996207ad57735b50c5982d8844f38ba9ee5f1aedcfb708a2aa11e"
dependencies = [
"equivalent",
"hashbrown",
]
[[package]]
name = "inlinable_string"
version = "0.1.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8fae54786f62fb2918dcfae3d568594e50eb9b5c25bf04371af6fe7516452fb"
[[package]]
name = "is_terminal_polyfill"
version = "1.70.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7943c866cc5cd64cbc25b2e01621d07fa8eb2a1a23160ee81ce38704e97b8ecf"
[[package]]
name = "itoa"
version = "1.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c"
[[package]]
name = "js-sys"
version = "0.3.77"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1cfaf33c695fc6e08064efbc1f72ec937429614f25eef83af942d0e227c3a28f"
dependencies = [
"once_cell",
"wasm-bindgen",
]
[[package]]
name = "libc"
version = "0.2.172"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d750af042f7ef4f724306de029d18836c26c1765a54a6a3f094cbd23a7267ffa"
[[package]]
name = "libredox"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0ff37bd590ca25063e35af745c343cb7a0271906fb7b37e4813e8f79f00268d"
dependencies = [
"bitflags",
"libc",
"redox_syscall",
]
[[package]]
name = "linux-raw-sys"
version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cd945864f07fe9f5371a27ad7b52a172b4b499999f1d97574c9fa68373937e12"
[[package]]
name = "log"
version = "0.4.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "13dc2df351e3202783a1fe0d44375f7295ffb4049267b0f3018346dc122a1d94"
[[package]]
name = "memchr"
version = "2.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3"
[[package]]
name = "miniz_oxide"
version = "0.8.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3be647b768db090acb35d5ec5db2b0e1f1de11133ca123b9eacf5137868f892a"
dependencies = [
"adler2",
]
[[package]]
name = "num-traits"
version = "0.2.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841"
dependencies = [
"autocfg",
]
[[package]]
name = "once_cell"
version = "1.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"
[[package]]
name = "pear"
version = "0.2.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bdeeaa00ce488657faba8ebf44ab9361f9365a97bd39ffb8a60663f57ff4b467"
dependencies = [
"inlinable_string",
"pear_codegen",
"yansi",
]
[[package]]
name = "pear_codegen"
version = "0.2.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4bab5b985dc082b345f812b7df84e1bef27e7207b39e448439ba8bd69c93f147"
dependencies = [
"proc-macro2",
"proc-macro2-diagnostics",
"quote",
"syn",
]
[[package]]
name = "proc-macro2"
version = "1.0.95"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "02b3e5e68a3a1a02aad3ec490a98007cbc13c37cbe84a3cd7b8e406d76e7f778"
dependencies = [
"unicode-ident",
]
[[package]]
name = "proc-macro2-diagnostics"
version = "0.10.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "af066a9c399a26e020ada66a034357a868728e72cd426f3adcd35f80d88d88c8"
dependencies = [
"proc-macro2",
"quote",
"syn",
"version_check",
"yansi",
]
[[package]]
name = "quote"
version = "1.0.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1885c039570dc00dcb4ff087a89e185fd56bae234ddc7f056a945bf36467248d"
dependencies = [
"proc-macro2",
]
[[package]]
name = "redox_syscall"
version = "0.5.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2f103c6d277498fbceb16e84d317e2a400f160f46904d5f5410848c829511a3"
dependencies = [
"bitflags",
]
[[package]]
name = "rustix"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d97817398dd4bb2e6da002002db259209759911da105da92bec29ccb12cf58bf"
dependencies = [
"bitflags",
"errno",
"libc",
"linux-raw-sys",
"windows-sys",
]
[[package]]
name = "rustversion"
version = "1.0.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eded382c5f5f786b989652c49544c4877d9f015cc22e145a5ea8ea66c2921cd2"
[[package]]
name = "ryu"
version = "1.0.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "serde"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.140"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "20068b6e96dc6c9bd23e01df8827e6c7e1f2fddd43c21810382803c136b99373"
dependencies = [
"itoa",
"memchr",
"ryu",
"serde",
]
[[package]]
name = "serde_spanned"
version = "0.6.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "87607cb1398ed59d48732e575a4c28a7a8ebf2454b964fe3f224f2afc07909e1"
dependencies = [
"serde",
]
[[package]]
name = "shlex"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "signal-hook"
version = "0.3.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8621587d4798caf8eb44879d42e56b9a93ea5dcd315a6487c357130095b62801"
dependencies = [
"libc",
"signal-hook-registry",
]
[[package]]
name = "signal-hook-registry"
version = "1.4.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9203b8055f63a2a00e2f593bb0510367fe707d7ff1e5c872de2f537b339e5410"
dependencies = [
"libc",
]
[[package]]
name = "strsim"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f"
[[package]]
name = "syn"
version = "2.0.101"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ce2b7fc941b3a24138a0a7cf8e858bfc6a992e7978a068a5c760deb0ed43caf"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "tar"
version = "0.4.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d863878d212c87a19c1a610eb53bb01fe12951c0501cf5a0d65f724914a667a"
dependencies = [
"filetime",
"libc",
"xattr",
]
[[package]]
name = "toml"
version = "0.8.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "05ae329d1f08c4d17a59bed7ff5b5a769d062e64a62d34a3261b219e62cd5aae"
dependencies = [
"serde",
"serde_spanned",
"toml_datetime",
"toml_edit",
]
[[package]]
name = "toml_datetime"
version = "0.6.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3da5db5a963e24bc68be8b17b6fa82814bb22ee8660f192bb182771d498f09a3"
dependencies = [
"serde",
]
[[package]]
name = "toml_edit"
version = "0.22.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "310068873db2c5b3e7659d2cc35d21855dbafa50d1ce336397c666e3cb08137e"
dependencies = [
"indexmap",
"serde",
"serde_spanned",
"toml_datetime",
"toml_write",
"winnow",
]
[[package]]
name = "toml_write"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bfb942dfe1d8e29a7ee7fcbde5bd2b9a25fb89aa70caea2eba3bee836ff41076"
[[package]]
name = "uncased"
version = "0.9.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e1b88fcfe09e89d3866a5c11019378088af2d24c3fbd4f0543f96b479ec90697"
dependencies = [
"version_check",
]
[[package]]
name = "unicode-ident"
version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a5f39404a5da50712a4c1eecf25e90dd62b613502b7e925fd4e4d19b5c96512"
[[package]]
name = "utf8parse"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
[[package]]
name = "version_check"
version = "0.9.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a"
[[package]]
name = "wasm-bindgen"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1edc8929d7499fc4e8f0be2262a241556cfc54a0bea223790e71446f2aab1ef5"
dependencies = [
"cfg-if",
"once_cell",
"rustversion",
"wasm-bindgen-macro",
]
[[package]]
name = "wasm-bindgen-backend"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f0a0651a5c2bc21487bde11ee802ccaf4c51935d0d3d42a6101f98161700bc6"
dependencies = [
"bumpalo",
"log",
"proc-macro2",
"quote",
"syn",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fe63fc6d09ed3792bd0897b314f53de8e16568c2b3f7982f468c0bf9bd0b407"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
]
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ae87ea40c9f689fc23f209965b6fb8a99ad69aeeb0231408be24920604395de"
dependencies = [
"proc-macro2",
"quote",
"syn",
"wasm-bindgen-backend",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a05d73b933a847d6cccdda8f838a22ff101ad9bf93e33684f39c1f5f0eece3d"
dependencies = [
"unicode-ident",
]
[[package]]
name = "windows-core"
version = "0.61.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4763c1de310c86d75a878046489e2e5ba02c649d185f21c67d4cf8a56d098980"
dependencies = [
"windows-implement",
"windows-interface",
"windows-link",
"windows-result",
"windows-strings",
]
[[package]]
name = "windows-implement"
version = "0.60.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a47fddd13af08290e67f4acabf4b459f647552718f683a7b415d290ac744a836"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "windows-interface"
version = "0.59.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bd9211b69f8dcdfa817bfd14bf1c97c9188afa36f4750130fcdf3f400eca9fa8"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "windows-link"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "76840935b766e1b0a05c0066835fb9ec80071d4c09a16f6bd5f7e655e3c14c38"
[[package]]
name = "windows-result"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c64fd11a4fd95df68efcfee5f44a294fe71b8bc6a91993e2791938abcc712252"
dependencies = [
"windows-link",
]
[[package]]
name = "windows-strings"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a2ba9642430ee452d5a7aa78d72907ebe8cfda358e8cb7918a2050581322f97"
dependencies = [
"windows-link",
]
[[package]]
name = "windows-sys"
version = "0.59.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b"
dependencies = [
"windows-targets",
]
[[package]]
name = "windows-targets"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973"
dependencies = [
"windows_aarch64_gnullvm",
"windows_aarch64_msvc",
"windows_i686_gnu",
"windows_i686_gnullvm",
"windows_i686_msvc",
"windows_x86_64_gnu",
"windows_x86_64_gnullvm",
"windows_x86_64_msvc",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3"
[[package]]
name = "windows_aarch64_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469"
[[package]]
name = "windows_i686_gnu"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b"
[[package]]
name = "windows_i686_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66"
[[package]]
name = "windows_i686_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66"
[[package]]
name = "windows_x86_64_gnu"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d"
[[package]]
name = "windows_x86_64_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"
[[package]]
name = "winnow"
version = "0.7.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6cb8234a863ea0e8cd7284fcdd4f145233eb00fee02bbdd9861aec44e6477bc5"
dependencies = [
"memchr",
]
[[package]]
name = "xattr"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0d65cbf2f12c15564212d48f4e3dfb87923d25d611f2aed18f4cb23f0413d89e"
dependencies = [
"libc",
"rustix",
]
[[package]]
name = "yansi"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cfe53a6657fd280eaa890a3bc59152892ffa3e30101319d168b781ed6529b049"

19
alex/Cargo.toml Normal file

@ -0,0 +1,19 @@
[package]
name = "alex"
description = "Wrapper around Minecraft server processes, designed to complement Docker image installations."
version.workspace = true
edition.workspace = true
authors.workspace = true
license-file.workspace = true
[dependencies]
backup = { path = "../backup" }
papermc-api = { path = "../papermc-api" }
chrono.workspace = true
serde.workspace = true
clap = { version = "4.5.37", features = ["derive", "env"] }
signal-hook = "0.3.15"
figment = { version = "0.10.10", features = ["env", "toml"] }
ureq = "3.3.0"

16
alex/Justfile Normal file

@ -0,0 +1,16 @@
build:
cargo build --frozen
alias b := build
test:
cargo test --frozen
alias t := test
check:
cargo fmt --check
cargo clippy \
--frozen \
-- \
--no-deps \
--deny 'clippy::all'
alias c := check

303
alex/src/cli/backup.rs Normal file

@ -0,0 +1,303 @@
use std::io;
use std::path::{Path, PathBuf};
use chrono::{TimeZone, Utc};
use clap::{Args, Subcommand};
use crate::other;
use backup::Backup;
#[derive(Subcommand)]
pub enum BackupCommands {
/// List all tracked backups
///
/// Note that this will only list backups for the layers currently configured, and will ignore
/// any other layers also present in the backup directory.
List(BackupListArgs),
/// Manually create a new backup
///
/// Note that backups created using this command will count towards the length of a chain, and
/// can therefore shorten how far back in time your backups will be stored.
Create(BackupCreateArgs),
/// Restore a backup
///
/// This command will restore the selected backup by extracting its entire chain up to and
/// including the requested backup in-order.
Restore(BackupRestoreArgs),
/// Export a backup into a full archive
///
/// Just like the restore command, this will extract each backup from the chain up to and
/// including the requested backup, but instead of writing the files to disk, they will be
/// recompressed into a new tarball, resulting in a new tarball containing a full backup.
Export(BackupExportArgs),
/// Extract an archive file, which is assumed to be a full backup.
///
/// This command mostly exists as a convenience method for working with the output of `export`.
Extract(BackupExtractArgs),
}
#[derive(Args)]
pub struct BackupArgs {
#[command(subcommand)]
pub command: BackupCommands,
}
#[derive(Args)]
pub struct BackupCreateArgs {
/// What layer to create a backup in
layer: String,
}
#[derive(Args)]
pub struct BackupListArgs {
/// What layer to list
layer: Option<String>,
}
#[derive(Args)]
pub struct BackupRestoreArgs {
/// Path to the backup inside the backup directory to restore
path: PathBuf,
/// Directory to store config in
output_config: PathBuf,
/// Directory to store worlds in
output_worlds: PathBuf,
/// Whether to overwrite the contents of the output directories
///
/// If set, the output directories will be completely cleared before trying to restore the
/// backup.
#[arg(short, long, default_value_t = false)]
force: bool,
/// Create output directories if they don't exist
#[arg(short, long, default_value_t = false)]
make: bool,
}
#[derive(Args)]
pub struct BackupExportArgs {
/// Path to the backup inside the backup directory to export
path: PathBuf,
/// Path to store the exported archive
output: PathBuf,
/// Create output directories if they don't exist
#[arg(short, long, default_value_t = false)]
make: bool,
}
#[derive(Args)]
pub struct BackupExtractArgs {
/// Path to the backup to extract
path: PathBuf,
/// Directory to store config in
output_config: PathBuf,
/// Directory to store worlds in
output_worlds: PathBuf,
/// Whether to overwrite the contents of the output directories
///
/// If set, the output directories will be completely cleared before trying to restore the
/// backup.
#[arg(short, long, default_value_t = false)]
force: bool,
/// Create output directories if they don't exist
#[arg(short, long, default_value_t = false)]
make: bool,
}
impl BackupArgs {
pub fn run(&self, cli: &super::Config) -> io::Result<()> {
match &self.command {
BackupCommands::Create(args) => args.run(cli),
BackupCommands::List(args) => args.run(cli),
BackupCommands::Restore(args) => args.run(cli),
BackupCommands::Export(args) => args.run(cli),
BackupCommands::Extract(args) => args.run(cli),
}
}
}
impl BackupCreateArgs {
pub fn run(&self, cli: &super::Config) -> io::Result<()> {
let mut meta = cli.meta()?;
if let Some(res) = meta.create_backup(&self.layer) {
res
} else {
Err(other("Unknown layer"))
}
}
}
impl BackupListArgs {
pub fn run(&self, cli: &super::Config) -> io::Result<()> {
let meta = cli.meta()?;
// A bit scuffed? Sure
for (name, manager) in meta
.managers()
.iter()
.filter(|(name, _)| self.layer.is_none() || &self.layer.as_ref().unwrap() == name)
{
println!("{}", name);
for chain in manager.chains().iter().filter(|c| !c.is_empty()) {
let mut iter = chain.iter();
println!(" {}", iter.next().unwrap());
for backup in iter {
println!(" {}", backup);
}
}
}
Ok(())
}
}
/// Tries to parse the given path as the path to a backup inside the backup directory with a
/// formatted timestamp.
fn parse_backup_path(
backup_dir: &Path,
backup_path: &Path,
) -> io::Result<(String, chrono::DateTime<Utc>)> {
if !backup_path.starts_with(backup_dir) {
return Err(other("Provided file is not inside the backup directory."));
}
let layer = if let Some(parent) = backup_path.parent() {
// Backup files should be stored nested inside a layer's folder
if parent != backup_dir {
parent.file_name().unwrap().to_string_lossy()
} else {
return Err(other("Invalid path."));
}
} else {
return Err(other("Invalid path."));
};
let timestamp = if let Some(filename) = backup_path.file_name() {
Utc.datetime_from_str(&filename.to_string_lossy(), Backup::FILENAME_FORMAT)
.map_err(|_| other("Invalid filename."))?
} else {
return Err(other("Invalid filename."));
};
Ok((layer.to_string(), timestamp))
}
impl BackupRestoreArgs {
pub fn run(&self, cli: &super::Config) -> io::Result<()> {
let backup_dir = cli.backup.canonicalize()?;
// Create directories if needed
if self.make {
std::fs::create_dir_all(&self.output_config)?;
std::fs::create_dir_all(&self.output_worlds)?;
}
let output_config = self.output_config.canonicalize()?;
let output_worlds = self.output_worlds.canonicalize()?;
// Parse input path
let backup_path = self.path.canonicalize()?;
let (layer, timestamp) = parse_backup_path(&backup_dir, &backup_path)?;
let meta = cli.meta()?;
// Clear previous contents of directories
let mut entries = output_config
.read_dir()?
.chain(output_worlds.read_dir()?)
.peekable();
if entries.peek().is_some() && !self.force {
return Err(other("Output directories are not empty. If you wish to overwrite these contents, use the force flag."));
}
for entry in entries {
let path = entry?.path();
if path.is_dir() {
std::fs::remove_dir_all(path)?;
} else {
std::fs::remove_file(path)?;
}
}
let dirs = vec![
(PathBuf::from("config"), output_config),
(PathBuf::from("worlds"), output_worlds),
];
// Restore the backup
if let Some(res) = meta.restore_backup(&layer, timestamp, &dirs) {
res
} else {
Err(other("Unknown layer"))
}
}
}
impl BackupExportArgs {
pub fn run(&self, cli: &super::Config) -> io::Result<()> {
let backup_dir = cli.backup.canonicalize()?;
if self.make {
if let Some(parent) = &self.output.parent() {
std::fs::create_dir_all(parent)?;
}
}
// Parse input path
let backup_path = self.path.canonicalize()?;
let (layer, timestamp) = parse_backup_path(&backup_dir, &backup_path)?;
let meta = cli.meta()?;
if let Some(res) = meta.export_backup(&layer, timestamp, &self.output) {
res
} else {
Err(other("Unknown layer"))
}
}
}
impl BackupExtractArgs {
pub fn run(&self, _cli: &super::Config) -> io::Result<()> {
// Create directories if needed
if self.make {
std::fs::create_dir_all(&self.output_config)?;
std::fs::create_dir_all(&self.output_worlds)?;
}
let output_config = self.output_config.canonicalize()?;
let output_worlds = self.output_worlds.canonicalize()?;
let backup_path = self.path.canonicalize()?;
// Clear previous contents of directories
let mut entries = output_config
.read_dir()?
.chain(output_worlds.read_dir()?)
.peekable();
if entries.peek().is_some() && !self.force {
return Err(other("Output directories are not empty. If you wish to overwrite these contents, use the force flag."));
}
for entry in entries {
let path = entry?.path();
if path.is_dir() {
std::fs::remove_dir_all(path)?;
} else {
std::fs::remove_file(path)?;
}
}
let dirs = vec![
(PathBuf::from("config"), output_config),
(PathBuf::from("worlds"), output_worlds),
];
Backup::extract_archive(backup_path, &dirs)
}
}

47
alex/src/cli/config.rs Normal file

@ -0,0 +1,47 @@
use std::{io, path::PathBuf};
use serde::{Deserialize, Serialize};
use crate::server::{Metadata, ServerType};
use backup::{ManagerConfig, MetaManager};
#[derive(Serialize, Deserialize, Debug)]
pub struct Config {
pub config: PathBuf,
pub world: PathBuf,
pub backup: PathBuf,
pub layers: Vec<ManagerConfig>,
pub server: ServerType,
pub server_version: String,
}
impl Default for Config {
fn default() -> Self {
Self {
config: PathBuf::from("."),
world: PathBuf::from("../worlds"),
backup: PathBuf::from("../backups"),
layers: Vec::new(),
server: ServerType::Unknown,
server_version: String::from(""),
}
}
}
impl Config {
    /// Convenience method to initialize the backup manager from the CLI arguments
pub fn meta(&self) -> io::Result<MetaManager<Metadata>> {
let metadata = Metadata {
server_type: self.server,
server_version: self.server_version.clone(),
};
let dirs = vec![
(PathBuf::from("config"), self.config.canonicalize()?),
(PathBuf::from("worlds"), self.world.canonicalize()?),
];
let mut meta = MetaManager::new(self.backup.canonicalize()?, dirs, metadata);
meta.add_all(&self.layers)?;
Ok(meta)
}
}

125
alex/src/cli/mod.rs Normal file
View file

@@ -0,0 +1,125 @@
mod backup;
mod config;
mod papermc;
mod run;
use std::{path::PathBuf, str::FromStr};
use clap::{Args, Parser, Subcommand};
use figment::{
providers::{Env, Format, Serialized, Toml},
Figment,
};
use serde::{Deserialize, Serialize};
use crate::server::ServerType;
use ::backup::ManagerConfig;
use backup::BackupArgs;
use config::Config;
use run::RunCli;
#[derive(Parser, Serialize)]
#[command(author, version, about, long_about = None)]
pub struct Cli {
#[command(subcommand)]
#[serde(skip)]
pub command: Commands,
/// Path to a TOML configuration file
#[arg(long = "config-file", global = true)]
pub config_file: Option<PathBuf>,
#[command(flatten)]
pub args: CliArgs,
}
#[derive(Args, Serialize, Deserialize, Clone)]
pub struct CliArgs {
/// Directory where configs are stored, and where the server will run
#[arg(long, value_name = "CONFIG_DIR", global = true)]
#[serde(skip_serializing_if = "::std::option::Option::is_none")]
pub config: Option<PathBuf>,
/// Directory where world files will be saved
#[arg(long, value_name = "WORLD_DIR", global = true)]
#[serde(skip_serializing_if = "::std::option::Option::is_none")]
pub world: Option<PathBuf>,
/// Directory where backups will be stored
#[arg(long, value_name = "BACKUP_DIR", global = true)]
#[serde(skip_serializing_if = "::std::option::Option::is_none")]
pub backup: Option<PathBuf>,
    /// Which backup layers to employ, provided as a semicolon-delimited (;) list of
    /// name,frequency,chains,chain_len tuples.
#[arg(long, global = true, value_delimiter = ';')]
#[serde(skip_serializing_if = "::std::option::Option::is_none")]
pub layers: Option<Vec<ManagerConfig>>,
/// Type of server
#[arg(long, global = true)]
#[serde(skip_serializing_if = "::std::option::Option::is_none")]
pub server: Option<ServerType>,
/// Version string for the server, e.g. 1.19.4-545
#[arg(long, global = true)]
#[serde(skip_serializing_if = "::std::option::Option::is_none")]
pub server_version: Option<String>,
}
#[derive(Subcommand)]
pub enum Commands {
/// Run the server
Run(RunCli),
/// Interact with the backup system without starting a server
Backup(BackupArgs),
/// Interact with the PaperMC API and download new JARs
#[command(name = "papermc")]
PaperMC(papermc::Cli),
}
impl Cli {
pub fn run(&self) -> crate::Result<()> {
let config = self.config(&self.args)?;
match &self.command {
Commands::Run(args) => args.run(self, &config),
Commands::Backup(args) => Ok(args.run(&config)?),
Commands::PaperMC(cli) => cli.run(),
}
}
pub fn config<T, U>(&self, args: &U) -> crate::Result<T>
where
T: Default + Serialize + for<'de> Deserialize<'de>,
U: Serialize,
{
let toml_file = self
.config_file
.clone()
.unwrap_or(PathBuf::from(Env::var_or("ALEX_CONFIG_FILE", "")));
let mut figment = Figment::new()
.merge(Serialized::defaults(T::default()))
.merge(Toml::file(toml_file))
.merge(Env::prefixed("ALEX_").ignore(&["ALEX_LAYERS"]));
        // Layers need to be parsed separately, as the env var format is different from
        // the one serde expects
if let Some(layers_env) = Env::var("ALEX_LAYERS") {
let res = layers_env
.split(';')
.map(ManagerConfig::from_str)
.collect::<Vec<_>>();
if res.iter().any(|e| e.is_err()) {
return Err(crate::other("Invalid layer configuration").into());
}
let layers: Vec<_> = res.iter().flatten().collect();
figment = figment.merge(Serialized::default("layers", layers));
}
Ok(figment.merge(Serialized::defaults(args)).extract()?)
}
}
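The `ALEX_LAYERS` handling above splits the variable on `;` and parses each `name,frequency,chains,chain_len` tuple via `ManagerConfig::from_str`, rejecting the whole value if any tuple fails. A std-only sketch of that tuple format, using a stand-in `LayerSpec` type (the real `FromStr` lives in the `backup` crate and its field names here are assumptions):

```rust
use std::str::FromStr;

// Stand-in for backup::ManagerConfig; field names are assumptions.
#[derive(Debug, PartialEq)]
struct LayerSpec {
    name: String,
    frequency: u64,
    chains: u64,
    chain_len: u64,
}

impl FromStr for LayerSpec {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let parts: Vec<&str> = s.split(',').collect();
        if parts.len() != 4 {
            return Err(format!("expected 4 comma-separated fields, got {}", parts.len()));
        }
        let num = |v: &str| v.parse::<u64>().map_err(|e| e.to_string());
        Ok(LayerSpec {
            name: parts[0].to_string(),
            frequency: num(parts[1])?,
            chains: num(parts[2])?,
            chain_len: num(parts[3])?,
        })
    }
}

fn main() {
    // Mirrors the env-var path: split on ';', parse each tuple, fail on any error.
    let env = "2min,2,4,4;hourly,60,24,1";
    let layers: Result<Vec<_>, _> = env.split(';').map(LayerSpec::from_str).collect();
    let layers = layers.expect("invalid layer configuration");
    assert_eq!(layers.len(), 2);
    assert_eq!(layers[0].name, "2min");
    assert_eq!(layers[1].frequency, 60);
    assert!(LayerSpec::from_str("broken,1,2").is_err());
}
```

Collecting into `Result<Vec<_>, _>` short-circuits on the first parse error, which matches the "any layer invalid → whole config invalid" behavior of the code above.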

213
alex/src/cli/papermc.rs Normal file
View file

@@ -0,0 +1,213 @@
use std::path::{Path, PathBuf};
use chrono::Local;
use clap::{Args, Subcommand};
#[derive(Args)]
pub struct Cli {
#[command(subcommand)]
pub command: Commands,
}
#[derive(Subcommand)]
pub enum Commands {
    /// Show information for a specific version or build
Show(ShowArgs),
/// List the available versions, or builds for a specific version
List(ListArgs),
/// Download the jar for a specific build
Download(DownloadBuildArgs),
}
#[derive(Args)]
pub struct ShowArgs {
/// Version to show information for
version: String,
/// Build within version to show information for
build: Option<String>,
}
#[derive(Args)]
pub struct ListArgs {
/// If provided, list the available builds for this version
version: Option<String>,
}
#[derive(Args)]
pub struct DownloadBuildArgs {
/// Version of build to download
version: String,
/// Build number for build to download
build: String,
    /// Path to store the new JAR file in; stores the JAR in the current directory if not specified
#[arg(short, long, value_name = "OUT_PATH")]
out: Option<PathBuf>,
}
impl Cli {
pub fn run(&self) -> crate::Result<()> {
match &self.command {
            Commands::Show(ShowArgs {
                version,
                build: None,
            }) => show_version(version),
            Commands::Show(ShowArgs {
                version,
                build: Some(build),
            }) => show_build(version, build),
            Commands::List(ListArgs { version: None }) => list_versions(),
            Commands::List(ListArgs {
                version: Some(version),
            }) => list_builds(version),
Commands::Download(args) => {
download_build(&args.version, &args.build, args.out.as_deref())
}
}
Ok(())
}
}
fn show_version(version_str: &str) {
let client = papermc_api::Client::new();
let version = match client.project("paper").version(version_str).info() {
Ok(version) => version,
Err(err) => {
println!("failed to query API: {err}");
return;
}
};
println!("id : {}", version.id);
println!("status : {}", version.support_status);
println!("builds : {}", version.builds.len());
println!("Min. Java : {}", version.java.minimum_version);
println!("Java flags: {}", version.java.recommended_flags.join(" "))
}
fn show_build(version_str: &str, build_str: &str) {
let client = papermc_api::Client::new();
let build = match client
.project("paper")
.version(version_str)
.build(build_str)
{
Ok(build) => build,
Err(err) => {
println!("failed to query API: {err}");
return;
}
};
println!("id : {}", build.id);
println!("time : {}", build.time.with_timezone(&Local));
println!("channel: {}", build.channel);
println!("commits:");
for commit in build.commits {
println!("- SHA : {}", commit.sha);
println!(" time : {}", commit.time.with_timezone(&Local));
println!(" message: {}", commit.message);
}
println!("downloads:");
for (name, download) in build.downloads.iter() {
println!(" {name}:");
println!(" name: {}", download.name);
println!(" size: {}", download.size);
println!(" URL : {}", download.url);
println!(" checksums:");
for (name, value) in download.checksums.iter() {
println!(" - {}: {}", name, value);
}
}
}
fn list_versions() {
let client = papermc_api::Client::new();
match client.project("paper").versions() {
Ok(versions) => {
for version in versions {
println!("{} ({})", version.id, version.support_status);
}
}
Err(err) => {
println!("failed to query API: {err}");
}
}
}
fn list_builds(version: &str) {
let client = papermc_api::Client::new();
match client.project("paper").version(version).builds() {
Ok(builds) => {
for build in builds {
println!("- id : {}", build.id);
println!(" time : {}", build.time.with_timezone(&Local));
println!(" channel: {}", build.channel);
}
}
Err(err) => {
println!("failed to query API: {err}");
}
}
}
fn download_build(version: &str, build: &str, out_path: Option<&Path>) {
let client = papermc_api::Client::new();
let build = match client.project("paper").version(version).build(build) {
Ok(build) => build,
Err(err) => {
println!("failed to query API: {err}");
return;
}
};
let filename = format!("paper-{}-{}.jar", version, build.id);
let dest_path = match out_path {
Some(path) if path.is_dir() => path.join(filename),
Some(path) => path.to_path_buf(),
None => PathBuf::from(filename),
};
let download_url = match build
.downloads
.get("server:default")
.or(build.downloads.values().next())
{
Some(download) => &download.url,
None => {
println!("no download URLs found for build.");
return;
}
};
let mut f = match std::fs::File::create(dest_path) {
Ok(f) => f,
Err(err) => {
println!("failed to create destination file: {err}");
return;
}
};
let mut res = match ureq::get(download_url).call() {
Ok(res) => res,
Err(err) => {
println!("failed to download file: {err}");
return;
}
};
let mut reader = res.body_mut().as_reader();
if let Err(err) = std::io::copy(&mut reader, &mut f) {
println!("failed to download file: {err}");
}
}
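`download_build` resolves the destination in three ways: an existing directory gets the generated filename appended, any other explicit path is used verbatim, and a missing `--out` falls back to the current directory. The same match in isolation, with the real `path.is_dir()` filesystem check replaced by an explicit flag so the logic can be exercised standalone:

```rust
use std::path::{Path, PathBuf};

// Mirrors the dest_path match in download_build; `path_is_dir` stands in
// for the real `path.is_dir()` call so no filesystem access is needed.
fn resolve_dest(out_path: Option<&Path>, path_is_dir: bool, filename: &str) -> PathBuf {
    match out_path {
        Some(path) if path_is_dir => path.join(filename),
        Some(path) => path.to_path_buf(),
        None => PathBuf::from(filename),
    }
}

fn main() {
    let name = "paper-1.19.4-545.jar";
    // No --out: relative path, i.e. the current working directory.
    assert_eq!(resolve_dest(None, false, name), PathBuf::from(name));
    // --out points at a directory: the generated filename is appended.
    assert_eq!(
        resolve_dest(Some(Path::new("/srv/mc")), true, name),
        PathBuf::from("/srv/mc/paper-1.19.4-545.jar")
    );
    // --out is a file path: used as-is, overriding the generated name.
    assert_eq!(
        resolve_dest(Some(Path::new("custom.jar")), false, name),
        PathBuf::from("custom.jar")
    );
}
```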

View file

@@ -0,0 +1,22 @@
use std::path::PathBuf;
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug)]
pub struct Config {
pub jar: PathBuf,
pub java: String,
pub xms: u64,
pub xmx: u64,
}
impl Default for Config {
fn default() -> Self {
Self {
jar: PathBuf::from("server.jar"),
java: String::from("java"),
xms: 1024,
xmx: 2048,
}
}
}

103
alex/src/cli/run/mod.rs Normal file
View file

@@ -0,0 +1,103 @@
mod config;
use std::{path::PathBuf, sync::Arc};
use clap::Args;
use serde::{Deserialize, Serialize};
use crate::{server, signals, stdin};
use config::Config;
#[derive(Args)]
pub struct RunCli {
#[command(flatten)]
pub args: RunArgs,
    /// Don't actually run the server; simply output the server configuration that would
    /// have been run
#[arg(short, long, default_value_t = false)]
pub dry: bool,
}
#[derive(Args, Serialize, Deserialize, Clone)]
pub struct RunArgs {
/// Server jar to execute
#[arg(long, value_name = "JAR_PATH")]
#[serde(skip_serializing_if = "::std::option::Option::is_none")]
pub jar: Option<PathBuf>,
/// Java command to run the server jar with
#[arg(long, value_name = "JAVA_CMD")]
#[serde(skip_serializing_if = "::std::option::Option::is_none")]
pub java: Option<String>,
/// XMS value in megabytes for the server instance
#[arg(long)]
#[serde(skip_serializing_if = "::std::option::Option::is_none")]
pub xms: Option<u64>,
/// XMX value in megabytes for the server instance
#[arg(long)]
#[serde(skip_serializing_if = "::std::option::Option::is_none")]
pub xmx: Option<u64>,
}
fn backups_thread(server: Arc<server::ServerProcess>) {
loop {
let next_scheduled_time = {
server
.backups
.read()
.unwrap()
.next_scheduled_time()
.unwrap()
};
let now = chrono::offset::Utc::now();
if next_scheduled_time > now {
std::thread::sleep((next_scheduled_time - now).to_std().unwrap());
}
        // We explicitly ignore the error here, as we don't want the thread to fail
let _ = server.backup();
}
}
impl RunCli {
pub fn run(&self, cli: &super::Cli, global: &super::Config) -> crate::Result<()> {
let config: Config = cli.config(&self.args)?;
let (_, mut signals) = signals::install_signal_handlers()?;
let mut cmd = server::ServerCommand::new(global.server, &global.server_version)
.java(&config.java)
.jar(config.jar.clone())
.config(global.config.clone())
.world(global.world.clone())
.backup(global.backup.clone())
.managers(global.layers.clone())
.xms(config.xms)
.xmx(config.xmx);
cmd.canonicalize()?;
if self.dry {
print!("{}", cmd);
return Ok(());
}
let counter = Arc::new(cmd.spawn()?);
if !global.layers.is_empty() {
let clone = Arc::clone(&counter);
std::thread::spawn(move || backups_thread(clone));
}
// Spawn thread that handles the main stdin loop
let clone = Arc::clone(&counter);
std::thread::spawn(move || stdin::handle_stdin(clone));
// Signal handler loop exits the process when necessary
Ok(signals::handle_signals(&mut signals, counter)?)
}
}

34
alex/src/error.rs Normal file
View file

@@ -0,0 +1,34 @@
use std::{fmt, io};
pub type Result<T> = std::result::Result<T, Error>;
#[derive(Debug)]
pub enum Error {
IO(io::Error),
Figment(figment::Error),
Other(Box<dyn std::error::Error>),
}
impl fmt::Display for Error {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Error::IO(err) => write!(fmt, "{}", err),
Error::Figment(err) => write!(fmt, "{}", err),
Error::Other(err) => write!(fmt, "{}", err),
}
}
}
impl std::error::Error for Error {}
impl From<io::Error> for Error {
fn from(err: io::Error) -> Self {
Error::IO(err)
}
}
impl From<figment::Error> for Error {
fn from(err: figment::Error) -> Self {
Error::Figment(err)
}
}

42
alex/src/main.rs Normal file
View file

@@ -0,0 +1,42 @@
mod cli;
mod error;
mod server;
mod signals;
mod stdin;
use std::io;
use clap::Parser;
use crate::cli::Cli;
pub use error::{Error, Result};
pub fn other(msg: &str) -> io::Error {
io::Error::new(io::ErrorKind::Other, msg)
}
// fn commands_backup(cli: &Cli, args: &BackupArgs) -> io::Result<()> {
// let metadata = server::Metadata {
// server_type: cli.server,
// server_version: cli.server_version.clone(),
// };
// let dirs = vec![
// (PathBuf::from("config"), cli.config.clone()),
// (PathBuf::from("worlds"), cli.world.clone()),
// ];
// let mut meta = MetaManager::new(cli.backup.clone(), dirs, metadata);
// meta.add_all(&cli.layers)?;
// match &args.command {
// BackupCommands::List => ()
// }
// // manager.create_backup()?;
// // manager.remove_old_backups()
// }
fn main() -> crate::Result<()> {
let cli = Cli::parse();
cli.run()
}

210
alex/src/server/command.rs Normal file
View file

@@ -0,0 +1,210 @@
use crate::server::{Metadata, ServerProcess, ServerType};
use backup::ManagerConfig;
use backup::MetaManager;
use std::fmt;
use std::fs::File;
use std::io::Write;
use std::path::{Path, PathBuf};
use std::process::{Command, Stdio};
pub struct ServerCommand {
type_: ServerType,
version: String,
java: String,
jar: PathBuf,
config_dir: PathBuf,
world_dir: PathBuf,
backup_dir: PathBuf,
xms: u64,
xmx: u64,
managers: Vec<ManagerConfig>,
}
impl ServerCommand {
pub fn new(type_: ServerType, version: &str) -> Self {
ServerCommand {
type_,
version: String::from(version),
java: String::from("java"),
jar: PathBuf::from("server.jar"),
config_dir: PathBuf::from("config"),
world_dir: PathBuf::from("worlds"),
backup_dir: PathBuf::from("backups"),
xms: 1024,
xmx: 2048,
managers: Vec::new(),
}
}
pub fn java(mut self, java: &str) -> Self {
self.java = String::from(java);
self
}
pub fn jar<T: AsRef<Path>>(mut self, path: T) -> Self {
self.jar = PathBuf::from(path.as_ref());
self
}
pub fn config<T: AsRef<Path>>(mut self, path: T) -> Self {
self.config_dir = PathBuf::from(path.as_ref());
self
}
pub fn world<T: AsRef<Path>>(mut self, path: T) -> Self {
self.world_dir = PathBuf::from(path.as_ref());
self
}
pub fn backup<T: AsRef<Path>>(mut self, path: T) -> Self {
self.backup_dir = PathBuf::from(path.as_ref());
self
}
pub fn xms(mut self, v: u64) -> Self {
self.xms = v;
self
}
pub fn xmx(mut self, v: u64) -> Self {
self.xmx = v;
self
}
pub fn managers(mut self, configs: Vec<ManagerConfig>) -> Self {
self.managers = configs;
self
}
fn accept_eula(&self) -> std::io::Result<()> {
let mut eula_path = self.config_dir.clone();
eula_path.push("eula.txt");
let mut eula_file = File::create(eula_path)?;
eula_file.write_all(b"eula=true")?;
Ok(())
}
    /// Canonicalize all paths to absolute paths. Without this call, all paths are
    /// interpreted relative to the config directory.
pub fn canonicalize(&mut self) -> std::io::Result<()> {
// To avoid any issues, we use absolute paths for everything when spawning the process
self.jar = self.jar.canonicalize()?;
self.config_dir = self.config_dir.canonicalize()?;
self.world_dir = self.world_dir.canonicalize()?;
self.backup_dir = self.backup_dir.canonicalize()?;
Ok(())
}
fn create_cmd(&self) -> std::process::Command {
let mut cmd = Command::new(&self.java);
// Apply JVM optimisation flags
// https://aikar.co/2018/07/02/tuning-the-jvm-g1gc-garbage-collector-flags-for-minecraft/
cmd.arg(format!("-Xms{}M", self.xms))
.arg(format!("-Xmx{}M", self.xmx))
.args([
"-XX:+UseG1GC",
"-XX:+ParallelRefProcEnabled",
"-XX:MaxGCPauseMillis=200",
"-XX:+UnlockExperimentalVMOptions",
"-XX:+DisableExplicitGC",
"-XX:+AlwaysPreTouch",
]);
if self.xms > 12 * 1024 {
cmd.args([
"-XX:G1NewSizePercent=40",
"-XX:G1MaxNewSizePercent=50",
"-XX:G1HeapRegionSize=16M",
"-XX:G1ReservePercent=15",
]);
} else {
cmd.args([
"-XX:G1NewSizePercent=30",
"-XX:G1MaxNewSizePercent=40",
"-XX:G1HeapRegionSize=8M",
"-XX:G1ReservePercent=20",
]);
}
cmd.args(["-XX:G1HeapWastePercent=5", "-XX:G1MixedGCCountTarget=4"]);
if self.xms > 12 * 1024 {
cmd.args(["-XX:InitiatingHeapOccupancyPercent=20"]);
} else {
cmd.args(["-XX:InitiatingHeapOccupancyPercent=15"]);
}
cmd.args([
"-XX:G1MixedGCLiveThresholdPercent=90",
"-XX:G1RSetUpdatingPauseTimePercent=5",
"-XX:SurvivorRatio=32",
"-XX:+PerfDisableSharedMem",
"-XX:MaxTenuringThreshold=1",
"-Dusing.aikars.flags=https://mcflags.emc.gs",
"-Daikars.new.flags=true",
]);
cmd.current_dir(&self.config_dir)
.arg("-jar")
.arg(&self.jar)
.arg("--universe")
.arg(&self.world_dir)
.arg("--nogui")
.stdin(Stdio::piped());
cmd
}
pub fn spawn(&mut self) -> std::io::Result<ServerProcess> {
let metadata = Metadata {
server_type: self.type_,
server_version: self.version.clone(),
};
let dirs = vec![
(PathBuf::from("config"), self.config_dir.clone()),
(PathBuf::from("worlds"), self.world_dir.clone()),
];
let mut meta = MetaManager::new(self.backup_dir.clone(), dirs, metadata);
meta.add_all(&self.managers)?;
let mut cmd = self.create_cmd();
self.accept_eula()?;
let child = cmd.spawn()?;
Ok(ServerProcess::new(meta, child))
}
}
impl fmt::Display for ServerCommand {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let cmd = self.create_cmd();
writeln!(f, "Command: {}", self.java)?;
writeln!(f, "Working dir: {}", self.config_dir.as_path().display())?;
// Print command env vars
writeln!(f, "Environment:")?;
for (key, val) in cmd.get_envs().filter(|(_, v)| v.is_some()) {
let val = val.unwrap();
writeln!(f, " {}={}", key.to_string_lossy(), val.to_string_lossy())?;
}
// Print command arguments
writeln!(f, "Arguments:")?;
for arg in cmd.get_args() {
writeln!(f, " {}", arg.to_string_lossy())?;
}
Ok(())
}
}
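`create_cmd` switches between two Aikar G1GC flag sets depending on whether `-Xms` exceeds 12 GiB (12 * 1024 MiB). A sketch of just that threshold selection, with the flag strings copied from the code above (note the original keys the check off `xms`, not `xmx`):

```rust
// Mirrors the branch in ServerCommand::create_cmd: heaps above 12 GiB get
// the large-heap G1 tuning, everything else the default profile.
fn g1_new_size_flags(xms_mib: u64) -> &'static [&'static str] {
    if xms_mib > 12 * 1024 {
        &[
            "-XX:G1NewSizePercent=40",
            "-XX:G1MaxNewSizePercent=50",
            "-XX:G1HeapRegionSize=16M",
            "-XX:G1ReservePercent=15",
        ]
    } else {
        &[
            "-XX:G1NewSizePercent=30",
            "-XX:G1MaxNewSizePercent=40",
            "-XX:G1HeapRegionSize=8M",
            "-XX:G1ReservePercent=20",
        ]
    }
}

fn main() {
    // The default xms of 1024 MiB stays on the small-heap profile.
    assert!(g1_new_size_flags(1024).contains(&"-XX:G1HeapRegionSize=8M"));
    // 16 GiB crosses the threshold.
    assert!(g1_new_size_flags(16 * 1024).contains(&"-XX:G1HeapRegionSize=16M"));
    // Exactly 12 GiB is not strictly greater, so it keeps the small profile.
    assert!(g1_new_size_flags(12 * 1024).contains(&"-XX:G1ReservePercent=20"));
}
```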

37
alex/src/server/mod.rs Normal file
View file

@@ -0,0 +1,37 @@
mod command;
mod process;
pub use command::ServerCommand;
pub use process::ServerProcess;
use clap::ValueEnum;
use serde::{Deserialize, Serialize};
use std::fmt;
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, ValueEnum, Serialize, Deserialize, Debug)]
#[serde(rename_all = "lowercase")]
pub enum ServerType {
Unknown,
Paper,
Forge,
Vanilla,
}
impl fmt::Display for ServerType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let s = match self {
ServerType::Unknown => "Unknown",
ServerType::Paper => "PaperMC",
ServerType::Forge => "Forge",
ServerType::Vanilla => "Vanilla",
};
write!(f, "{}", s)
}
}
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct Metadata {
pub server_type: ServerType,
pub server_version: String,
}

View file

@@ -0,0 +1,95 @@
use std::{io::Write, process::Child, sync::RwLock};
use crate::server::Metadata;
use backup::MetaManager;
pub struct ServerProcess {
child: RwLock<Child>,
pub backups: RwLock<MetaManager<Metadata>>,
}
impl ServerProcess {
pub fn new(manager: MetaManager<Metadata>, child: Child) -> ServerProcess {
ServerProcess {
child: RwLock::new(child),
backups: RwLock::new(manager),
}
}
pub fn send_command(&self, cmd: &str) -> std::io::Result<()> {
match cmd.trim() {
"stop" | "exit" => self.stop()?,
"backup" => self.backup()?,
s => self.custom(s)?,
}
Ok(())
}
fn custom(&self, cmd: &str) -> std::io::Result<()> {
let child = self.child.write().unwrap();
let mut stdin = child.stdin.as_ref().unwrap();
stdin.write_all(format!("{}\n", cmd.trim()).as_bytes())?;
stdin.flush()?;
Ok(())
}
pub fn stop(&self) -> std::io::Result<()> {
self.custom("stop")?;
self.child.write().unwrap().wait()?;
Ok(())
}
pub fn kill(&self) -> std::io::Result<()> {
self.child.write().unwrap().kill()
}
/// Perform a backup by disabling the server's save feature and flushing its data, before
/// creating an archive file.
pub fn backup(&self) -> std::io::Result<()> {
        // We explicitly lock this entire function to prevent parallel backups
let mut backups = self.backups.write().unwrap();
let layer_name = String::from(backups.next_scheduled_layer().unwrap());
self.custom(&format!("say starting backup for layer '{}'", layer_name))?;
// Make sure the server isn't modifying the files during the backup
self.custom("save-off")?;
self.custom("save-all")?;
// TODO implement a better mechanism
// We wait some time to (hopefully) ensure the save-all call has completed
std::thread::sleep(std::time::Duration::from_secs(10));
let start_time = chrono::offset::Utc::now();
let res = backups.perform_backup_cycle();
// The server's save feature needs to be enabled again even if the archive failed to create
self.custom("save-on")?;
self.custom("save-all")?;
let duration = chrono::offset::Utc::now() - start_time;
let duration_str = format!(
"{}m{}s",
duration.num_seconds() / 60,
duration.num_seconds() % 60
);
if res.is_ok() {
self.custom(&format!(
"say backup created for layer '{}' in {}",
layer_name, duration_str
))?;
} else {
            self.custom(&format!(
                "say an error occurred after {} while creating backup for layer '{}'",
                duration_str, layer_name
            ))?;
}
res
}
}
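The in-game backup announcement formats the elapsed time as `{minutes}m{seconds}s` from the total second count. As a standalone sketch of that `duration_str` computation:

```rust
// Mirrors the duration_str formatting in ServerProcess::backup.
fn format_duration(total_secs: i64) -> String {
    format!("{}m{}s", total_secs / 60, total_secs % 60)
}

fn main() {
    assert_eq!(format_duration(0), "0m0s");
    assert_eq!(format_duration(75), "1m15s");
    assert_eq!(format_duration(600), "10m0s");
}
```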

69
alex/src/signals.rs Normal file
View file

@@ -0,0 +1,69 @@
use std::{
io,
sync::{atomic::AtomicBool, Arc},
};
use signal_hook::{
consts::TERM_SIGNALS,
flag,
iterator::{Signals, SignalsInfo},
};
use crate::server;
/// Install the required signal handlers for terminating signals.
pub fn install_signal_handlers() -> io::Result<(Arc<AtomicBool>, SignalsInfo)> {
let term = Arc::new(AtomicBool::new(false));
// For each terminating signal, we register both a shutdown handler and a handler that sets an
// atomic bool. With this, the process will get killed immediately once it receives a second
// termination signal (e.g. a double ctrl-c).
// https://docs.rs/signal-hook/0.3.15/signal_hook/#a-complex-signal-handling-with-a-background-thread
for sig in TERM_SIGNALS {
        // When terminated by a second term signal, exit with exit code 1.
        // This will do nothing the first time (because term is still false).
        flag::register_conditional_shutdown(*sig, 1, Arc::clone(&term))?;
        // But this will "arm" the above for the second signal, by setting term to true.
        // The registration order matters: if these were registered the other way
        // around, the first signal would arm the shutdown and then trigger it.
        flag::register(*sig, Arc::clone(&term))?;
}
let signals = TERM_SIGNALS;
Ok((term, Signals::new(signals)?))
}
/// Loop that handles terminating signals as they come in.
pub fn handle_signals(
signals: &mut SignalsInfo,
server: Arc<server::ServerProcess>,
) -> io::Result<()> {
let mut force = false;
    // We only register terminating signals, so we don't need to check which kind of
    // signal came in
for _ in signals {
// If term is already true, this is the second signal, meaning we kill the process
// immediately.
// This will currently not work, as the initial stop command will block the kill from
// happening.
if force {
return server.kill();
}
// The stop command runs in a separate thread to avoid blocking the signal handling loop.
// After stopping the server, the thread terminates the process.
else {
let clone = Arc::clone(&server);
std::thread::spawn(move || {
let _ = clone.stop();
std::process::exit(0);
});
}
force = true;
}
Ok(())
}
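The loop above only kills the process on the second terminating signal: the first one spawns a graceful `stop` and arms the `force` flag. That arming logic in isolation, with the real server calls replaced by a hypothetical `Action` enum:

```rust
// Stand-in for the signal loop: the first signal requests a graceful stop,
// any later one requests an immediate kill.
#[derive(Debug, PartialEq)]
enum Action {
    GracefulStop,
    Kill,
}

fn handle_signals(signal_count: usize) -> Vec<Action> {
    let mut actions = Vec::new();
    let mut force = false;
    for _ in 0..signal_count {
        if force {
            actions.push(Action::Kill);
            return actions; // kill ends the loop, like `return server.kill()`
        } else {
            actions.push(Action::GracefulStop);
        }
        force = true;
    }
    actions
}

fn main() {
    assert_eq!(handle_signals(1), vec![Action::GracefulStop]);
    // A double ctrl-c: graceful stop first, then immediate kill.
    assert_eq!(handle_signals(2), vec![Action::GracefulStop, Action::Kill]);
}
```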

24
alex/src/stdin.rs Normal file
View file

@@ -0,0 +1,24 @@
use std::{io, sync::Arc};
use crate::server;
pub fn handle_stdin(server: Arc<server::ServerProcess>) {
let stdin = io::stdin();
let input = &mut String::new();
loop {
input.clear();
if stdin.read_line(input).is_err() {
continue;
};
if let Err(e) = server.send_command(input) {
println!("{}", e);
};
if input.trim() == "stop" {
std::process::exit(0);
}
}
}

547
backup/Cargo.lock generated Normal file
View file

@@ -0,0 +1,547 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
[[package]]
name = "adler2"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "512761e0bb2578dd7380c6baaa0f4ce03e84f95e960231d1dec8bf4d7d6e2627"
[[package]]
name = "android-tzdata"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e999941b234f3131b00bc13c22d06e8c5ff726d1b6318ac7eb276997bbb4fef0"
[[package]]
name = "android_system_properties"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "819e7219dbd41043ac279b19830f2efc897156490d7fd6ea916720117ee66311"
dependencies = [
"libc",
]
[[package]]
name = "autocfg"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"
[[package]]
name = "backup"
version = "0.4.1"
dependencies = [
"chrono",
"flate2",
"serde",
"serde_json",
"tar",
]
[[package]]
name = "bitflags"
version = "2.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c8214115b7bf84099f1309324e63141d4c5d7cc26862f97a0a857dbefe165bd"
[[package]]
name = "bumpalo"
version = "3.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1628fb46dfa0b37568d12e5edd512553eccf6a22a78e8bde00bb4aed84d5bdbf"
[[package]]
name = "cc"
version = "1.2.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "04da6a0d40b948dfc4fa8f5bbf402b0fc1a64a28dbf7d12ffd683550f2c1b63a"
dependencies = [
"shlex",
]
[[package]]
name = "cfg-if"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
name = "chrono"
version = "0.4.41"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c469d952047f47f91b68d1cba3f10d63c11d73e4636f24f08daf0278abf01c4d"
dependencies = [
"android-tzdata",
"iana-time-zone",
"js-sys",
"num-traits",
"serde",
"wasm-bindgen",
"windows-link",
]
[[package]]
name = "core-foundation-sys"
version = "0.8.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
[[package]]
name = "crc32fast"
version = "1.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a97769d94ddab943e4510d138150169a2758b5ef3eb191a9ee688de3e23ef7b3"
dependencies = [
"cfg-if",
]
[[package]]
name = "errno"
version = "0.3.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "976dd42dc7e85965fe702eb8164f21f450704bdde31faefd6471dba214cb594e"
dependencies = [
"libc",
"windows-sys",
]
[[package]]
name = "filetime"
version = "0.2.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "35c0522e981e68cbfa8c3f978441a5f34b30b96e146b33cd3359176b50fe8586"
dependencies = [
"cfg-if",
"libc",
"libredox",
"windows-sys",
]
[[package]]
name = "flate2"
version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ced92e76e966ca2fd84c8f7aa01a4aea65b0eb6648d72f7c8f3e2764a67fece"
dependencies = [
"crc32fast",
"miniz_oxide",
]
[[package]]
name = "iana-time-zone"
version = "0.1.63"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b0c919e5debc312ad217002b8048a17b7d83f80703865bbfcfebb0458b0b27d8"
dependencies = [
"android_system_properties",
"core-foundation-sys",
"iana-time-zone-haiku",
"js-sys",
"log",
"wasm-bindgen",
"windows-core",
]
[[package]]
name = "iana-time-zone-haiku"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f31827a206f56af32e590ba56d5d2d085f558508192593743f16b2306495269f"
dependencies = [
"cc",
]
[[package]]
name = "itoa"
version = "1.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c"
[[package]]
name = "js-sys"
version = "0.3.77"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1cfaf33c695fc6e08064efbc1f72ec937429614f25eef83af942d0e227c3a28f"
dependencies = [
"once_cell",
"wasm-bindgen",
]
[[package]]
name = "libc"
version = "0.2.172"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d750af042f7ef4f724306de029d18836c26c1765a54a6a3f094cbd23a7267ffa"
[[package]]
name = "libredox"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0ff37bd590ca25063e35af745c343cb7a0271906fb7b37e4813e8f79f00268d"
dependencies = [
"bitflags",
"libc",
"redox_syscall",
]
[[package]]
name = "linux-raw-sys"
version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cd945864f07fe9f5371a27ad7b52a172b4b499999f1d97574c9fa68373937e12"
[[package]]
name = "log"
version = "0.4.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "13dc2df351e3202783a1fe0d44375f7295ffb4049267b0f3018346dc122a1d94"
[[package]]
name = "memchr"
version = "2.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3"
[[package]]
name = "miniz_oxide"
version = "0.8.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3be647b768db090acb35d5ec5db2b0e1f1de11133ca123b9eacf5137868f892a"
dependencies = [
"adler2",
]
[[package]]
name = "num-traits"
version = "0.2.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841"
dependencies = [
"autocfg",
]
[[package]]
name = "once_cell"
version = "1.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"
[[package]]
name = "proc-macro2"
version = "1.0.95"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "02b3e5e68a3a1a02aad3ec490a98007cbc13c37cbe84a3cd7b8e406d76e7f778"
dependencies = [
"unicode-ident",
]
[[package]]
name = "quote"
version = "1.0.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1885c039570dc00dcb4ff087a89e185fd56bae234ddc7f056a945bf36467248d"
dependencies = [
"proc-macro2",
]
[[package]]
name = "redox_syscall"
version = "0.5.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2f103c6d277498fbceb16e84d317e2a400f160f46904d5f5410848c829511a3"
dependencies = [
"bitflags",
]
[[package]]
name = "rustix"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d97817398dd4bb2e6da002002db259209759911da105da92bec29ccb12cf58bf"
dependencies = [
"bitflags",
"errno",
"libc",
"linux-raw-sys",
"windows-sys",
]
[[package]]
name = "rustversion"
version = "1.0.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eded382c5f5f786b989652c49544c4877d9f015cc22e145a5ea8ea66c2921cd2"
[[package]]
name = "ryu"
version = "1.0.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "serde"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.140"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "20068b6e96dc6c9bd23e01df8827e6c7e1f2fddd43c21810382803c136b99373"
dependencies = [
"itoa",
"memchr",
"ryu",
"serde",
]
[[package]]
name = "shlex"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "syn"
version = "2.0.101"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ce2b7fc941b3a24138a0a7cf8e858bfc6a992e7978a068a5c760deb0ed43caf"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "tar"
version = "0.4.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d863878d212c87a19c1a610eb53bb01fe12951c0501cf5a0d65f724914a667a"
dependencies = [
"filetime",
"libc",
"xattr",
]
[[package]]
name = "unicode-ident"
version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a5f39404a5da50712a4c1eecf25e90dd62b613502b7e925fd4e4d19b5c96512"
[[package]]
name = "wasm-bindgen"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1edc8929d7499fc4e8f0be2262a241556cfc54a0bea223790e71446f2aab1ef5"
dependencies = [
"cfg-if",
"once_cell",
"rustversion",
"wasm-bindgen-macro",
]
[[package]]
name = "wasm-bindgen-backend"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f0a0651a5c2bc21487bde11ee802ccaf4c51935d0d3d42a6101f98161700bc6"
dependencies = [
"bumpalo",
"log",
"proc-macro2",
"quote",
"syn",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fe63fc6d09ed3792bd0897b314f53de8e16568c2b3f7982f468c0bf9bd0b407"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
]
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ae87ea40c9f689fc23f209965b6fb8a99ad69aeeb0231408be24920604395de"
dependencies = [
"proc-macro2",
"quote",
"syn",
"wasm-bindgen-backend",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a05d73b933a847d6cccdda8f838a22ff101ad9bf93e33684f39c1f5f0eece3d"
dependencies = [
"unicode-ident",
]
[[package]]
name = "windows-core"
version = "0.61.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4763c1de310c86d75a878046489e2e5ba02c649d185f21c67d4cf8a56d098980"
dependencies = [
"windows-implement",
"windows-interface",
"windows-link",
"windows-result",
"windows-strings",
]
[[package]]
name = "windows-implement"
version = "0.60.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a47fddd13af08290e67f4acabf4b459f647552718f683a7b415d290ac744a836"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "windows-interface"
version = "0.59.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bd9211b69f8dcdfa817bfd14bf1c97c9188afa36f4750130fcdf3f400eca9fa8"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "windows-link"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "76840935b766e1b0a05c0066835fb9ec80071d4c09a16f6bd5f7e655e3c14c38"
[[package]]
name = "windows-result"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c64fd11a4fd95df68efcfee5f44a294fe71b8bc6a91993e2791938abcc712252"
dependencies = [
"windows-link",
]
[[package]]
name = "windows-strings"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a2ba9642430ee452d5a7aa78d72907ebe8cfda358e8cb7918a2050581322f97"
dependencies = [
"windows-link",
]
[[package]]
name = "windows-sys"
version = "0.59.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b"
dependencies = [
"windows-targets",
]
[[package]]
name = "windows-targets"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973"
dependencies = [
"windows_aarch64_gnullvm",
"windows_aarch64_msvc",
"windows_i686_gnu",
"windows_i686_gnullvm",
"windows_i686_msvc",
"windows_x86_64_gnu",
"windows_x86_64_gnullvm",
"windows_x86_64_msvc",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3"
[[package]]
name = "windows_aarch64_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469"
[[package]]
name = "windows_i686_gnu"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b"
[[package]]
name = "windows_i686_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66"
[[package]]
name = "windows_i686_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66"
[[package]]
name = "windows_x86_64_gnu"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d"
[[package]]
name = "windows_x86_64_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"
[[package]]
name = "xattr"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0d65cbf2f12c15564212d48f4e3dfb87923d25d611f2aed18f4cb23f0413d89e"
dependencies = [
"libc",
"rustix",
]

14
backup/Cargo.toml Normal file
@@ -0,0 +1,14 @@
[package]
name = "backup"
version.workspace = true
edition.workspace = true
[dependencies]
chrono.workspace = true
serde.workspace = true
# Used for creating tarballs for backups
tar = "0.4.38"
# Used to compress said tarballs using gzip
flate2 = "1.1.1"
serde_json = "1.0.96"

16
backup/Justfile Normal file
@@ -0,0 +1,16 @@
build:
cargo build --frozen
alias b := build
test:
cargo test --frozen
alias t := test
check:
cargo fmt --check
cargo clippy \
--frozen \
-- \
--no-deps \
--deny 'clippy::all'
alias c := check

295
backup/src/delta.rs Normal file
@@ -0,0 +1,295 @@
use std::{borrow::Borrow, fmt, path::PathBuf};
use serde::{Deserialize, Serialize};
use super::State;
/// Represents the changes relative to the previous backup
#[derive(Debug, Serialize, Deserialize, Clone, Default, PartialEq, Eq)]
pub struct Delta {
/// What files were added/modified in each part of the tarball.
pub added: State,
/// What files were removed in this backup, in comparison to the previous backup. For full
/// backups, this will always be empty, as they do not consider previous backups.
/// The map stores a separate list for each top-level directory, as the contents of these
/// directories can come from different source directories.
pub removed: State,
}
impl Delta {
/// Returns whether the delta is empty, i.e. both its added and removed states are empty.
pub fn is_empty(&self) -> bool {
self.added.is_empty() && self.removed.is_empty()
}
/// Calculate the union of this delta with another delta.
///
/// The union of two deltas is a delta that produces the same state as if you were to apply
/// both deltas in-order. Note that this operation is not commutative.
pub fn union(&self, delta: &Self) -> Self {
let mut out = self.clone();
for (dir, added) in delta.added.iter() {
// Files that were removed in the current state, but added in the new state, are no
// longer removed
if let Some(orig_removed) = out.removed.get_mut(dir) {
orig_removed.retain(|k| !added.contains(k));
}
// Newly added files are added to the state as well
if let Some(orig_added) = out.added.get_mut(dir) {
orig_added.extend(added.iter().cloned());
} else {
out.added.insert(dir.clone(), added.clone());
}
}
for (dir, removed) in delta.removed.iter() {
// Files that were originally added, but now deleted are removed from the added list
if let Some(orig_added) = out.added.get_mut(dir) {
orig_added.retain(|k| !removed.contains(k));
}
// Newly removed files are added to the state as well
if let Some(orig_removed) = out.removed.get_mut(dir) {
orig_removed.extend(removed.iter().cloned());
} else {
out.removed.insert(dir.clone(), removed.clone());
}
}
out
}
/// Calculate the difference between this delta and the other delta.
///
/// The difference simply removes all adds and removes that are also performed in the
/// other delta.
pub fn difference(&self, other: &Self) -> Self {
let mut out = self.clone();
for (dir, added) in out.added.iter_mut() {
// If files are added in the other delta, we don't add them in this delta
if let Some(other_added) = other.added.get(dir) {
added.retain(|k| !other_added.contains(k));
};
}
for (dir, removed) in out.removed.iter_mut() {
// If files are removed in the other delta, we don't remove them in this delta either
if let Some(other_removed) = other.removed.get(dir) {
removed.retain(|k| !other_removed.contains(k));
}
}
out
}
/// Calculate the strict difference between this delta and the other delta.
///
/// The strict difference is a difference where all operations that would be overwritten by the
/// other delta are also removed (e.g. adding a file after removing it, or vice versa).
pub fn strict_difference(&self, other: &Self) -> Self {
let mut out = self.difference(other);
for (dir, added) in out.added.iter_mut() {
// Remove additions that are removed in the other delta
if let Some(other_removed) = other.removed.get(dir) {
added.retain(|k| !other_removed.contains(k));
}
}
for (dir, removed) in out.removed.iter_mut() {
// Remove removals that are re-added in the other delta
if let Some(other_added) = other.added.get(dir) {
removed.retain(|k| !other_added.contains(k));
}
}
out
}
/// Given a chain of deltas, calculate the "contribution" for each state.
///
/// For each delta, its contribution is the part of its added and removed files that isn't
/// overwritten by any of its following deltas.
pub fn contributions<I>(deltas: I) -> Vec<State>
where
I: IntoIterator,
I::IntoIter: DoubleEndedIterator,
I::Item: Borrow<Delta>,
{
let mut contributions: Vec<State> = Vec::new();
let mut deltas = deltas.into_iter().rev();
if let Some(first_delta) = deltas.next() {
// From last to first, we calculate the strict difference of the delta with the union of all its
// following deltas. The list of added files of this difference is the contribution for
// that delta.
contributions.push(first_delta.borrow().added.clone());
let mut union_future = first_delta.borrow().clone();
for delta in deltas {
contributions.push(delta.borrow().strict_difference(&union_future).added);
union_future = union_future.union(delta.borrow());
}
}
contributions.reverse();
contributions
}
/// Append the given files to the directory's list of added files
pub fn append_added<I>(&mut self, dir: impl Into<PathBuf>, files: I)
where
I: IntoIterator,
I::Item: Into<PathBuf>,
{
self.added.append_dir(dir, files);
}
/// Wrapper around the `append_added` method for builder-style construction of deltas
pub fn with_added<I>(mut self, dir: impl Into<PathBuf>, files: I) -> Self
where
I: IntoIterator,
I::Item: Into<PathBuf>,
{
self.append_added(dir, files);
self
}
/// Append the given files to the directory's list of removed files
pub fn append_removed<I>(&mut self, dir: impl Into<PathBuf>, files: I)
where
I: IntoIterator,
I::Item: Into<PathBuf>,
{
self.removed.append_dir(dir, files);
}
/// Wrapper around the `append_removed` method for builder-style construction of deltas
pub fn with_removed<I>(mut self, dir: impl Into<PathBuf>, files: I) -> Self
where
I: IntoIterator,
I::Item: Into<PathBuf>,
{
self.append_removed(dir, files);
self
}
}
impl fmt::Display for Delta {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let added_count: usize = self.added.values().map(|s| s.len()).sum();
let removed_count: usize = self.removed.values().map(|s| s.len()).sum();
write!(f, "+{}-{}", added_count, removed_count)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_union_disjunct_dirs() {
let a = Delta::default()
.with_added("dir_added_1", ["file1", "file2"])
.with_removed("dir_removed_1", ["file1", "file2"]);
let b = Delta::default()
.with_added("dir_added_3", ["file1", "file2"])
.with_removed("dir_removed_3", ["file1", "file2"]);
let expected = Delta::default()
.with_added("dir_added_1", ["file1", "file2"])
.with_added("dir_added_3", ["file1", "file2"])
.with_removed("dir_removed_1", ["file1", "file2"])
.with_removed("dir_removed_3", ["file1", "file2"]);
assert_eq!(expected, a.union(&b));
assert_eq!(expected, b.union(&a));
}
#[test]
fn test_union_disjunct_files() {
let a = Delta::default()
.with_added("dir_added_1", ["file1", "file2"])
.with_removed("dir_removed_1", ["file1", "file2"]);
let b = Delta::default()
.with_added("dir_added_1", ["file3", "file4"])
.with_removed("dir_removed_1", ["file3", "file4"]);
let expected = Delta::default()
.with_added("dir_added_1", ["file1", "file2", "file3", "file4"])
.with_removed("dir_removed_1", ["file1", "file2", "file3", "file4"]);
assert_eq!(expected, a.union(&b));
assert_eq!(expected, b.union(&a));
}
#[test]
fn test_union_full_revert() {
let a = Delta::default().with_added("dir_1", ["file1", "file2"]);
let b = Delta::default().with_removed("dir_1", ["file1", "file2"]);
let expected = Delta::default().with_removed("dir_1", ["file1", "file2"]);
assert_eq!(expected, a.union(&b));
let expected = Delta::default().with_added("dir_1", ["file1", "file2"]);
assert_eq!(expected, b.union(&a));
}
#[test]
fn test_difference() {
let a = Delta::default()
.with_added("dir1", ["file1", "file2"])
.with_removed("dir1", ["file3", "file4"]);
let b = Delta::default()
.with_added("dir1", ["file1"])
.with_removed("dir1", ["file3"]);
let expected = Delta::default()
.with_added("dir1", ["file2"])
.with_removed("dir1", ["file4"]);
assert_eq!(a.difference(&b), expected);
assert_eq!(b.difference(&a), Delta::default());
}
#[test]
fn test_strict_difference() {
let a = Delta::default()
.with_added("dir1", ["file1", "file2"])
.with_removed("dir1", ["file3", "file4"]);
let b = Delta::default()
.with_added("dir1", ["file1", "file4"])
.with_removed("dir1", ["file3"]);
let expected = Delta::default().with_added("dir1", ["file2"]);
assert_eq!(a.strict_difference(&b), expected);
assert_eq!(b.strict_difference(&a), Delta::default());
}
#[test]
fn test_contributions() {
let deltas = [
Delta::default().with_added("dir1", ["file4"]),
Delta::default().with_added("dir1", ["file1", "file2"]),
Delta::default()
.with_added("dir1", ["file1"])
.with_added("dir2", ["file3"]),
Delta::default()
.with_added("dir1", ["file2"])
.with_removed("dir2", ["file3"]),
];
let expected = [
State::default().with_dir("dir1", ["file4"]),
State::default(),
State::default().with_dir("dir1", ["file1"]),
State::default().with_dir("dir1", ["file2"]),
];
assert_eq!(Delta::contributions(deltas), expected);
}
}
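
The `union` doc comment above stresses that the operation is not commutative: applying "add then remove" nets a removal, while "remove then add" nets an addition. A minimal standalone sketch of the same semantics, modeling a single directory's delta as an `(added, removed)` pair of sets (this is an illustration, not the crate's `Delta` type):

```rust
use std::collections::HashSet;

type Files = HashSet<&'static str>;

// Apply `next` on top of `prev`, mirroring Delta::union: files re-added are
// no longer removed, and files removed afterwards are no longer added.
fn union(prev: (Files, Files), next: (Files, Files)) -> (Files, Files) {
    let (mut added, mut removed) = prev;
    let (next_added, next_removed) = next;
    removed.retain(|f| !next_added.contains(f));
    added.extend(next_added);
    added.retain(|f| !next_removed.contains(f));
    removed.extend(next_removed);
    (added, removed)
}

fn main() {
    let set = |v: &[&'static str]| v.iter().copied().collect::<Files>();
    // a adds file1; b removes it. a then b nets a removal...
    let a = (set(&["file1"]), set(&[]));
    let b = (set(&[]), set(&["file1"]));
    let (added, removed) = union(a.clone(), b.clone());
    assert!(added.is_empty());
    assert_eq!(removed, set(&["file1"]));
    // ...while b then a nets an addition: union is not commutative.
    let (added, removed) = union(b, a);
    assert_eq!(added, set(&["file1"]));
    assert!(removed.is_empty());
}
```

This is exactly the behavior pinned down by `test_union_full_revert` above, reduced to one directory.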

43
backup/src/io_ext.rs Normal file
@@ -0,0 +1,43 @@
use std::io::{self, Write};
/// Wrapper around the Write trait that counts how many bytes have been written in total.
/// Heavily inspired by <https://stackoverflow.com/a/42189386>
pub struct CountingWrite<W> {
inner: W,
count: usize,
}
impl<W> CountingWrite<W>
where
W: Write,
{
pub fn new(writer: W) -> Self {
Self {
inner: writer,
count: 0,
}
}
pub fn bytes_written(&self) -> usize {
self.count
}
}
impl<W> Write for CountingWrite<W>
where
W: Write,
{
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
let res = self.inner.write(buf);
if let Ok(count) = res {
self.count += count;
}
res
}
fn flush(&mut self) -> io::Result<()> {
self.inner.flush()
}
}
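
Because the wrapper is generic over any `Write`, it composes with an in-memory buffer just as well as with the `File` it wraps in `Backup::create`. The sketch below inlines the same wrapper so it runs standalone:

```rust
use std::io::{self, Write};

// Same shape as the CountingWrite above, inlined so the example is self-contained.
struct CountingWrite<W> {
    inner: W,
    count: usize,
}

impl<W: Write> CountingWrite<W> {
    fn new(writer: W) -> Self {
        Self { inner: writer, count: 0 }
    }
    fn bytes_written(&self) -> usize {
        self.count
    }
}

impl<W: Write> Write for CountingWrite<W> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        // Only count bytes the inner writer actually accepted.
        let n = self.inner.write(buf)?;
        self.count += n;
        Ok(n)
    }
    fn flush(&mut self) -> io::Result<()> {
        self.inner.flush()
    }
}

fn main() {
    // Any Write works; a Vec<u8> keeps the example in memory.
    let mut w = CountingWrite::new(Vec::new());
    w.write_all(b"hello world").unwrap();
    assert_eq!(w.bytes_written(), 11);
}
```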

319
backup/src/lib.rs Normal file
@@ -0,0 +1,319 @@
mod delta;
mod io_ext;
pub mod manager;
mod path;
mod state;
use std::{
collections::HashSet,
fmt,
fs::File,
io,
path::{Path, PathBuf},
};
use chrono::Utc;
use flate2::{read::GzDecoder, write::GzEncoder, Compression};
use serde::{Deserialize, Serialize};
use delta::Delta;
pub use manager::Manager;
pub use manager::ManagerConfig;
pub use manager::MetaManager;
use path::PathExt;
pub use state::State;
const BYTE_SUFFIXES: [&str; 5] = ["B", "KiB", "MiB", "GiB", "TiB"];
pub fn other(msg: &str) -> io::Error {
io::Error::new(io::ErrorKind::Other, msg)
}
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub enum BackupType {
Full,
Incremental,
}
/// Represents a successful backup
#[derive(Serialize, Deserialize, Debug)]
pub struct Backup<T: Clone> {
/// When the backup was started (also corresponds to the name)
pub start_time: chrono::DateTime<Utc>,
/// When the backup finished
pub end_time: chrono::DateTime<Utc>,
pub size: usize,
/// Type of the backup
pub type_: BackupType,
pub delta: Delta,
/// Additional metadata that can be associated with a given backup
pub metadata: Option<T>,
}
impl Backup<()> {
pub const FILENAME_FORMAT: &str = "%Y-%m-%d_%H-%M-%S.tar.gz";
/// Return the path to a backup file by formatting its start time using `FILENAME_FORMAT`.
pub fn path<P: AsRef<Path>>(backup_dir: P, start_time: chrono::DateTime<Utc>) -> PathBuf {
let backup_dir = backup_dir.as_ref();
let filename = format!("{}", start_time.format(Self::FILENAME_FORMAT));
backup_dir.join(filename)
}
/// Extract an archive.
///
/// # Arguments
///
/// * `archive_path` - Path to the archive to extract
/// * `dirs` - list of tuples `(path_in_tar, dst_dir)` with `dst_dir` the directory on-disk
/// where the files stored under `path_in_tar` inside the tarball should be extracted to.
pub fn extract_archive<P: AsRef<Path>>(
archive_path: P,
dirs: &Vec<(PathBuf, PathBuf)>,
) -> io::Result<()> {
let tar_gz = File::open(archive_path)?;
let enc = GzDecoder::new(tar_gz);
let mut ar = tar::Archive::new(enc);
// Unpack each file by matching it with one of the destination directories and extracting
// it to the right path
for entry in ar.entries()? {
let mut entry = entry?;
let entry_path_in_tar = entry.path()?.to_path_buf();
for (path_in_tar, dst_dir) in dirs {
if entry_path_in_tar.starts_with(path_in_tar) {
let dst_path =
dst_dir.join(entry_path_in_tar.strip_prefix(path_in_tar).unwrap());
// Ensure all parent directories are present
std::fs::create_dir_all(dst_path.parent().unwrap())?;
entry.unpack(dst_path)?;
break;
}
}
}
Ok(())
}
}
impl<T: Clone> Backup<T> {
/// Set the backup's metadata.
pub fn set_metadata(&mut self, metadata: T) {
self.metadata = Some(metadata);
}
/// Create a new Full backup, populated with the given directories.
///
/// # Arguments
///
/// * `backup_dir` - Directory to store archive in
/// * `dirs` - list of tuples `(path_in_tar, src_dir)` with `path_in_tar` the directory name
/// under which `src_dir`'s contents should be stored in the archive
///
/// # Returns
///
/// The `Backup` instance describing this new backup.
pub fn create<P: AsRef<Path>>(
backup_dir: P,
dirs: &Vec<(PathBuf, PathBuf)>,
) -> io::Result<Self> {
let start_time = chrono::offset::Utc::now();
let path = Backup::path(backup_dir, start_time);
let tar_gz = io_ext::CountingWrite::new(File::create(path)?);
let enc = GzEncoder::new(tar_gz, Compression::default());
let mut ar = tar::Builder::new(enc);
let mut delta = Delta::default();
for (dir_in_tar, src_dir) in dirs {
let mut added_files: HashSet<PathBuf> = HashSet::new();
for entry in src_dir.read_dir_recursive()?.ignored("cache").files() {
let path = entry?.path();
let stripped = path.strip_prefix(src_dir).unwrap();
ar.append_path_with_name(&path, dir_in_tar.join(stripped))?;
added_files.insert(stripped.to_path_buf());
}
delta.added.insert(dir_in_tar.to_path_buf(), added_files);
}
let mut enc = ar.into_inner()?;
// The docs recommend running try_finish before unwrapping using finish
enc.try_finish()?;
let tar_gz = enc.finish()?;
Ok(Backup {
type_: BackupType::Full,
start_time,
end_time: chrono::Utc::now(),
size: tar_gz.bytes_written(),
delta,
metadata: None,
})
}
/// Create a new Incremental backup from the given state, populated with the given directories.
///
/// # Arguments
///
/// * `previous_state` - State the file system was in during the previous backup in the chain
/// * `previous_start_time` - Start time of the previous backup; used to filter files
/// * `backup_dir` - Directory to store archive in
/// * `dirs` - list of tuples `(path_in_tar, src_dir)` with `path_in_tar` the directory name
/// under which `src_dir`'s contents should be stored in the archive
///
/// # Returns
///
/// The `Backup` instance describing this new backup.
pub fn create_from<P: AsRef<Path>>(
previous_state: State,
previous_start_time: chrono::DateTime<Utc>,
backup_dir: P,
dirs: &Vec<(PathBuf, PathBuf)>,
) -> io::Result<Self> {
let start_time = chrono::offset::Utc::now();
let path = Backup::path(backup_dir, start_time);
let tar_gz = io_ext::CountingWrite::new(File::create(path)?);
let enc = GzEncoder::new(tar_gz, Compression::default());
let mut ar = tar::Builder::new(enc);
let mut delta = Delta::default();
for (dir_in_tar, src_dir) in dirs {
let mut all_files: HashSet<PathBuf> = HashSet::new();
let mut added_files: HashSet<PathBuf> = HashSet::new();
for entry in src_dir.read_dir_recursive()?.ignored("cache").files() {
let path = entry?.path();
let stripped = path.strip_prefix(src_dir).unwrap();
if !path.not_modified_since(previous_start_time) {
ar.append_path_with_name(&path, dir_in_tar.join(stripped))?;
added_files.insert(stripped.to_path_buf());
}
all_files.insert(stripped.to_path_buf());
}
delta.added.insert(dir_in_tar.clone(), added_files);
if let Some(previous_files) = previous_state.get(dir_in_tar) {
delta.removed.insert(
dir_in_tar.to_path_buf(),
previous_files.difference(&all_files).cloned().collect(),
);
}
}
let mut enc = ar.into_inner()?;
// The docs recommend running try_finish before unwrapping using finish
enc.try_finish()?;
let tar_gz = enc.finish()?;
Ok(Backup {
type_: BackupType::Incremental,
start_time,
end_time: chrono::Utc::now(),
size: tar_gz.bytes_written(),
delta,
metadata: None,
})
}
/// Restore the backup by extracting its contents to the respective directories.
///
/// # Arguments
///
/// * `backup_dir` - Backup directory where the file is stored
/// * `dirs` - list of tuples `(path_in_tar, dst_dir)` with `dst_dir` the directory on-disk
/// where the files stored under `path_in_tar` inside the tarball should be extracted to.
pub fn restore<P: AsRef<Path>>(
&self,
backup_dir: P,
dirs: &Vec<(PathBuf, PathBuf)>,
) -> io::Result<()> {
let backup_path = Backup::path(backup_dir, self.start_time);
Backup::extract_archive(backup_path, dirs)?;
// Remove any files
for (path_in_tar, dst_dir) in dirs {
if let Some(removed) = self.delta.removed.get(path_in_tar) {
for path in removed {
let dst_path = dst_dir.join(path);
std::fs::remove_file(dst_path)?;
}
}
}
Ok(())
}
pub fn open<P: AsRef<Path>>(&self, backup_dir: P) -> io::Result<tar::Archive<GzDecoder<File>>> {
let path = Backup::path(backup_dir, self.start_time);
let tar_gz = File::open(path)?;
let enc = GzDecoder::new(tar_gz);
Ok(tar::Archive::new(enc))
}
/// Open this backup's archive and append all its files that are part of the provided state to
/// the archive file.
pub fn append<P: AsRef<Path>>(
&self,
backup_dir: P,
state: &State,
ar: &mut tar::Builder<GzEncoder<File>>,
) -> io::Result<()> {
let mut own_ar = self.open(backup_dir)?;
for entry in own_ar.entries()? {
let entry = entry?;
let entry_path_in_tar = entry.path()?.to_path_buf();
if state.contains(&entry_path_in_tar) {
let header = entry.header().clone();
ar.append(&header, entry)?;
}
}
Ok(())
}
}
impl<T: Clone> fmt::Display for Backup<T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let letter = match self.type_ {
BackupType::Full => 'F',
BackupType::Incremental => 'I',
};
// Pretty-print size
// If your backup is a petabyte or larger, this will crash and you need to re-evaluate your
// life choices
let index = self.size.ilog(1024) as usize;
let size = self.size as f64 / (1024.0_f64.powi(index as i32));
let duration = self.end_time - self.start_time;
write!(
f,
"{} ({}, {}m{}s, {:.2}{}, {})",
self.start_time.format(Backup::FILENAME_FORMAT),
letter,
duration.num_seconds() / 60,
duration.num_seconds() % 60,
size,
BYTE_SUFFIXES[index],
self.delta
)
}
}
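
The size pretty-printing in the `Display` impl picks a suffix via `usize::ilog`. A standalone sketch of that arithmetic, with the zero-byte edge case guarded explicitly (`ilog` panics on 0, a case the impl above does not hit for real archives):

```rust
const BYTE_SUFFIXES: [&str; 5] = ["B", "KiB", "MiB", "GiB", "TiB"];

// Same arithmetic as the Display impl: pick the largest power of 1024 that
// fits into `size`, then divide by it.
fn human_size(size: usize) -> String {
    let index = if size == 0 { 0 } else { size.ilog(1024) as usize };
    let value = size as f64 / 1024f64.powi(index as i32);
    format!("{value:.2}{}", BYTE_SUFFIXES[index])
}

fn main() {
    assert_eq!(human_size(1023), "1023.00B");
    assert_eq!(human_size(1536), "1.50KiB");
    assert_eq!(human_size(5 * 1024 * 1024), "5.00MiB");
}
```

As the comment in the impl notes, anything at or past a pebibyte indexes out of bounds of `BYTE_SUFFIXES`.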

46
backup/src/manager/config.rs Normal file

@@ -0,0 +1,46 @@
use std::{error::Error, fmt, str::FromStr};
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct ManagerConfig {
pub name: String,
pub frequency: u32,
pub chains: u64,
pub chain_len: u64,
}
#[derive(Debug)]
pub struct ParseManagerConfigErr;
impl Error for ParseManagerConfigErr {}
impl fmt::Display for ParseManagerConfigErr {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "parse manager config err")
}
}
impl FromStr for ManagerConfig {
type Err = ParseManagerConfigErr;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let splits: Vec<&str> = s.split(',').collect();
if let [name, frequency, chains, chain_len] = splits[..] {
let name: String = name.parse().map_err(|_| ParseManagerConfigErr)?;
let frequency: u32 = frequency.parse().map_err(|_| ParseManagerConfigErr)?;
let chains: u64 = chains.parse().map_err(|_| ParseManagerConfigErr)?;
let chain_len: u64 = chain_len.parse().map_err(|_| ParseManagerConfigErr)?;
Ok(ManagerConfig {
name,
chains,
chain_len,
frequency,
})
} else {
Err(ParseManagerConfigErr)
}
}
}
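
The `FromStr` impl expects a comma-separated `name,frequency,chains,chain_len` string; matching the collected splits against a four-element slice pattern rejects inputs with too few or too many fields in one step. A self-contained sketch of that parse, using a stand-in struct and `()` as the error type for brevity:

```rust
use std::str::FromStr;

// Minimal stand-in for ManagerConfig, mirroring the slice-pattern parse above.
#[derive(Debug, PartialEq)]
struct Config {
    name: String,
    frequency: u32,
    chains: u64,
    chain_len: u64,
}

impl FromStr for Config {
    type Err = ();

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // The slice pattern only matches exactly four comma-separated fields.
        if let [name, frequency, chains, chain_len] = s.split(',').collect::<Vec<_>>()[..] {
            Ok(Config {
                name: name.to_string(),
                frequency: frequency.parse().map_err(|_| ())?,
                chains: chains.parse().map_err(|_| ())?,
                chain_len: chain_len.parse().map_err(|_| ())?,
            })
        } else {
            Err(())
        }
    }
}

fn main() {
    let c: Config = "daily,60,4,7".parse().unwrap();
    assert_eq!(c.frequency, 60);
    assert_eq!(c.chains, 4);
    // Wrong field count fails to parse.
    assert!("daily,60,4".parse::<Config>().is_err());
}
```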

149
backup/src/manager/meta.rs Normal file
@@ -0,0 +1,149 @@
use std::{
collections::HashMap,
io,
path::{Path, PathBuf},
};
use chrono::Utc;
use serde::{Deserialize, Serialize};
use super::{Manager, ManagerConfig};
/// Manages a collection of backup layers, allowing them to be utilized as a single object.
pub struct MetaManager<T>
where
T: Clone + Serialize + for<'de> Deserialize<'de> + std::fmt::Debug,
{
backup_dir: PathBuf,
dirs: Vec<(PathBuf, PathBuf)>,
default_metadata: T,
managers: HashMap<String, Manager<T>>,
}
impl<T> MetaManager<T>
where
T: Clone + Serialize + for<'de> Deserialize<'de> + std::fmt::Debug,
{
pub fn new<P: Into<PathBuf>>(
backup_dir: P,
dirs: Vec<(PathBuf, PathBuf)>,
default_metadata: T,
) -> Self {
MetaManager {
backup_dir: backup_dir.into(),
dirs,
default_metadata,
managers: HashMap::new(),
}
}
/// Add a new manager to track, initializing it first.
pub fn add(&mut self, config: &ManagerConfig) -> io::Result<()> {
// Backup dir itself should exist, but we control its contents, so we can create
// separate directories for each layer
let path = self.backup_dir.join(&config.name);
// If the directory already exists, that's okay
match std::fs::create_dir(&path) {
Ok(()) => (),
Err(e) => match e.kind() {
io::ErrorKind::AlreadyExists => (),
_ => return Err(e),
},
};
let mut manager = Manager::new(
path,
self.dirs.clone(),
self.default_metadata.clone(),
config.chain_len,
config.chains,
chrono::Duration::minutes(config.frequency.into()),
);
manager.load()?;
self.managers.insert(config.name.clone(), manager);
Ok(())
}
/// Convenience wrapper around `add` for a list of configs.
pub fn add_all(&mut self, configs: &Vec<ManagerConfig>) -> io::Result<()> {
for config in configs {
self.add(config)?;
}
Ok(())
}
/// Return the name of the next scheduled layer, if one or more managers are present.
pub fn next_scheduled_layer(&self) -> Option<&str> {
self.managers
.iter()
.min_by_key(|(_, m)| m.next_scheduled_time())
.map(|(k, _)| k.as_str())
}
/// Return the earliest scheduled time for the underlying managers.
pub fn next_scheduled_time(&self) -> Option<chrono::DateTime<Utc>> {
self.managers
.values()
.map(|m| m.next_scheduled_time())
.min()
}
/// Perform a backup cycle for the earliest scheduled manager.
pub fn perform_backup_cycle(&mut self) -> io::Result<()> {
if let Some(manager) = self
.managers
.values_mut()
.min_by_key(|m| m.next_scheduled_time())
{
manager.create_backup()?;
manager.remove_old_backups()
} else {
Ok(())
}
}
/// Create a manual backup for a specific layer
pub fn create_backup(&mut self, layer: &str) -> Option<io::Result<()>> {
if let Some(manager) = self.managers.get_mut(layer) {
let mut res = manager.create_backup();
if res.is_ok() {
res = manager.remove_old_backups();
}
Some(res)
} else {
None
}
}
/// Restore a backup for a specific layer
pub fn restore_backup(
&self,
layer: &str,
start_time: chrono::DateTime<Utc>,
dirs: &Vec<(PathBuf, PathBuf)>,
) -> Option<io::Result<()>> {
self.managers
.get(layer)
.map(|manager| manager.restore_backup(start_time, dirs))
}
pub fn export_backup<P: AsRef<Path>>(
&self,
layer: &str,
start_time: chrono::DateTime<Utc>,
output_path: P,
) -> Option<io::Result<()>> {
self.managers
.get(layer)
.map(|manager| manager.export_backup(start_time, output_path))
}
pub fn managers(&self) -> &HashMap<String, Manager<T>> {
&self.managers
}
}
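
Both `next_scheduled_layer` and `perform_backup_cycle` reduce to a `min_by_key` over the managers' next scheduled times: the layer with the earliest time runs first. A toy illustration of that selection, with hypothetical layer names and times standing in for real managers:

```rust
use std::collections::HashMap;

fn main() {
    // Hypothetical next-run times (minutes from now) for three layers.
    let next_times: HashMap<&str, i64> =
        [("hourly", 45), ("daily", 30), ("weekly", 300)].into_iter().collect();

    // Same selection as next_scheduled_layer: the smallest next time wins.
    let next_layer = next_times
        .iter()
        .min_by_key(|(_, t)| **t)
        .map(|(name, _)| *name);

    assert_eq!(next_layer, Some("daily"));
}
```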

265
backup/src/manager/mod.rs Normal file
@@ -0,0 +1,265 @@
mod config;
mod meta;
use std::{
fs::{File, OpenOptions},
io,
path::{Path, PathBuf},
};
use chrono::{SubsecRound, Utc};
use flate2::{write::GzEncoder, Compression};
use serde::{Deserialize, Serialize};
use super::{Backup, BackupType, Delta, State};
use crate::other;
pub use config::ManagerConfig;
pub use meta::MetaManager;
/// Manages a single backup layer consisting of one or more chains of backups.
pub struct Manager<T>
where
T: Clone + Serialize + for<'de> Deserialize<'de> + std::fmt::Debug,
{
backup_dir: PathBuf,
dirs: Vec<(PathBuf, PathBuf)>,
default_metadata: T,
chain_len: u64,
chains_to_keep: u64,
frequency: chrono::Duration,
chains: Vec<Vec<Backup<T>>>,
}
impl<T> Manager<T>
where
T: Clone + Serialize + for<'de> Deserialize<'de> + std::fmt::Debug,
{
const METADATA_FILE: &str = "alex.json";
pub fn new<P: Into<PathBuf>>(
backup_dir: P,
dirs: Vec<(PathBuf, PathBuf)>,
metadata: T,
chain_len: u64,
chains_to_keep: u64,
frequency: chrono::Duration,
) -> Self {
Self {
backup_dir: backup_dir.into(),
dirs,
default_metadata: metadata,
chain_len,
chains_to_keep,
frequency,
chains: Vec::new(),
}
}
/// Create a new backup, either full or incremental, depending on the state of the current
/// chain.
pub fn create_backup(&mut self) -> io::Result<()> {
// We start a new chain if the current chain is complete, or if there isn't a first chain
// yet
if let Some(current_chain) = self.chains.last() {
let current_chain_len: u64 = current_chain.len().try_into().unwrap();
if current_chain_len >= self.chain_len {
self.chains.push(Vec::new());
}
} else {
self.chains.push(Vec::new());
}
let current_chain = self.chains.last_mut().unwrap();
let mut backup = if !current_chain.is_empty() {
let previous_backup = current_chain.last().unwrap();
let previous_state = State::from(current_chain.iter().map(|b| &b.delta));
Backup::create_from(
previous_state,
previous_backup.start_time,
&self.backup_dir,
&self.dirs,
)?
} else {
Backup::create(&self.backup_dir, &self.dirs)?
};
backup.set_metadata(self.default_metadata.clone());
current_chain.push(backup);
self.save()?;
Ok(())
}
/// Delete all backups associated with outdated chains, and forget those chains.
pub fn remove_old_backups(&mut self) -> io::Result<()> {
let chains_to_store: usize = self.chains_to_keep.try_into().unwrap();
if chains_to_store < self.chains.len() {
let mut remove_count: usize = self.chains.len() - chains_to_store;
// We only count finished chains towards the list of stored chains
let chain_len: usize = self.chain_len.try_into().unwrap();
if self.chains.last().unwrap().len() < chain_len {
remove_count -= 1;
}
for chain in self.chains.drain(..remove_count) {
for backup in chain {
let path = Backup::path(&self.backup_dir, backup.start_time);
std::fs::remove_file(path)?;
}
}
self.save()?;
}
Ok(())
}
/// Write the in-memory state to disk.
///
/// The state is first written to a temporary file before being (atomically, depending on the
/// file system) renamed to the final path.
pub fn save(&self) -> io::Result<()> {
let dest_path = self.backup_dir.join(Self::METADATA_FILE);
let dest_ext = dest_path
.extension()
.map(|ext| ext.to_string_lossy().to_string())
.unwrap_or(String::new());
let temp_path = dest_path.with_extension(format!("{dest_ext}.temp"));
let json_file = File::create(&temp_path)?;
serde_json::to_writer(json_file, &self.chains)?;
// Rename temp file to the destination path after writing was successful
std::fs::rename(temp_path, dest_path)?;
Ok(())
}
/// Overwrite the in-memory state with the on-disk state.
pub fn load(&mut self) -> io::Result<()> {
let json_file = match File::open(self.backup_dir.join(Self::METADATA_FILE)) {
Ok(f) => f,
Err(e) => {
// Don't error out if the file isn't there; it will be created when necessary
if e.kind() == io::ErrorKind::NotFound {
self.chains = Vec::new();
return Ok(());
} else {
return Err(e);
}
}
};
self.chains = serde_json::from_reader(json_file)?;
Ok(())
}
/// Calculate the next time a backup should be created. If no backup has been created yet, it
/// will return now.
pub fn next_scheduled_time(&self) -> chrono::DateTime<Utc> {
self.chains
.last()
.and_then(|last_chain| last_chain.last())
.map(|last_backup| last_backup.start_time + self.frequency)
.unwrap_or_else(chrono::offset::Utc::now)
}
/// Search for a chain containing a backup with the specified start time.
///
/// # Returns
///
/// A tuple (chain, index) with index being the index of the found backup in the returned
/// chain.
fn find(&self, start_time: chrono::DateTime<Utc>) -> Option<(&Vec<Backup<T>>, usize)> {
for chain in &self.chains {
if let Some(index) = chain
.iter()
.position(|b| b.start_time.trunc_subsecs(0) == start_time)
{
return Some((chain, index));
}
}
None
}
/// Restore the backup with the given start time by restoring its chain up to and including the
/// backup, in order.
pub fn restore_backup(
&self,
start_time: chrono::DateTime<Utc>,
dirs: &Vec<(PathBuf, PathBuf)>,
) -> io::Result<()> {
self.find(start_time)
.ok_or_else(|| other("Unknown layer."))
.and_then(|(chain, index)| {
for backup in chain.iter().take(index + 1) {
backup.restore(&self.backup_dir, dirs)?;
}
Ok(())
})
}
/// Export the backup with the given start time as a new full archive.
pub fn export_backup<P: AsRef<Path>>(
&self,
start_time: chrono::DateTime<Utc>,
output_path: P,
) -> io::Result<()> {
self.find(start_time)
.ok_or_else(|| other("Unknown layer."))
.and_then(|(chain, index)| {
match chain[index].type_ {
// A full backup is simply copied to the output path
BackupType::Full => std::fs::copy(
Backup::path(&self.backup_dir, chain[index].start_time),
output_path,
)
.map(|_| ()),
// Incremental backups are exported one by one according to their contribution
BackupType::Incremental => {
let contributions =
Delta::contributions(chain.iter().take(index + 1).map(|b| &b.delta));
let tar_gz = OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.open(output_path.as_ref())?;
let enc = GzEncoder::new(tar_gz, Compression::default());
let mut ar = tar::Builder::new(enc);
// We only need to consider backups that have a non-empty contribution.
// This allows us to skip reading backups that have been completely
// overwritten by their successors anyway.
for (contribution, backup) in contributions
.iter()
.zip(chain.iter().take(index + 1))
.filter(|(contribution, _)| !contribution.is_empty())
{
println!("{}", &backup);
backup.append(&self.backup_dir, contribution, &mut ar)?;
}
let mut enc = ar.into_inner()?;
enc.try_finish()
}
}
})
}
/// Get a reference to the underlying chains
pub fn chains(&self) -> &Vec<Vec<Backup<T>>> {
&self.chains
}
}
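The `save` method above relies on the write-to-temp-then-rename pattern so a crash mid-write never leaves a corrupt `alex.json` behind. A minimal std-only sketch of the same pattern; `atomic_write` and the demo paths are illustrative names, not part of the crate:

```rust
use std::fs::{self, File};
use std::io::{self, Write};
use std::path::Path;

/// Write `contents` to `dest` by first writing a sibling ".temp" file and then
/// renaming it over the destination, mirroring Manager::save. The rename is
/// atomic on most POSIX file systems, so readers never observe a partial file.
fn atomic_write(dest: &Path, contents: &[u8]) -> io::Result<()> {
    let ext = dest
        .extension()
        .map(|e| e.to_string_lossy().to_string())
        .unwrap_or_default();
    let temp = dest.with_extension(format!("{ext}.temp"));
    let mut f = File::create(&temp)?;
    f.write_all(contents)?;
    f.sync_all()?; // flush data to disk before the rename makes it visible
    fs::rename(&temp, dest)?;
    Ok(())
}
```

Note that, as in `save`, the temp file sits next to the destination: renames are only atomic within one file system, so `std::env::temp_dir()` would not be a safe staging location.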

backup/src/path.rs Normal file

@@ -0,0 +1,149 @@
use std::{
collections::HashSet,
ffi::OsString,
fs::{self, DirEntry},
io,
path::{Path, PathBuf},
};
use chrono::{Local, Utc};
pub struct ReadDirRecursive {
ignored: HashSet<OsString>,
read_dir: fs::ReadDir,
dir_stack: Vec<PathBuf>,
files_only: bool,
}
impl ReadDirRecursive {
/// Start the iterator for a new directory
pub fn start<P: AsRef<Path>>(path: P) -> io::Result<Self> {
let path = path.as_ref();
let read_dir = path.read_dir()?;
Ok(ReadDirRecursive {
ignored: HashSet::new(),
read_dir,
dir_stack: Vec::new(),
files_only: false,
})
}
pub fn ignored<S: Into<OsString>>(mut self, s: S) -> Self {
self.ignored.insert(s.into());
self
}
pub fn files(mut self) -> Self {
self.files_only = true;
self
}
/// Tries to populate the `read_dir` field with a new `ReadDir` instance to consume.
fn next_read_dir(&mut self) -> io::Result<bool> {
if let Some(path) = self.dir_stack.pop() {
self.read_dir = path.read_dir()?;
Ok(true)
} else {
Ok(false)
}
}
/// Convenience method to add a new directory to the stack.
fn push_entry(&mut self, entry: &io::Result<DirEntry>) {
if let Ok(entry) = entry {
if entry.path().is_dir() {
self.dir_stack.push(entry.path());
}
}
}
/// Determine whether an entry should be returned by the iterator.
fn should_return(&self, entry: &io::Result<DirEntry>) -> bool {
if let Ok(entry) = entry {
let mut res = !self.ignored.contains(&entry.file_name());
// Please just let me combine these already
if self.files_only {
if let Ok(file_type) = entry.file_type() {
res = res && file_type.is_file();
}
// We couldn't determine if it's a file, so we don't return it
else {
res = false;
}
}
res
} else {
true
}
}
}
impl Iterator for ReadDirRecursive {
type Item = io::Result<DirEntry>;
fn next(&mut self) -> Option<Self::Item> {
loop {
// First, we try to consume the current directory's items
while let Some(entry) = self.read_dir.next() {
self.push_entry(&entry);
if self.should_return(&entry) {
return Some(entry);
}
}
// If we get an error while setting up a new directory, we return it; otherwise we
// keep trying to consume the directories
match self.next_read_dir() {
Ok(true) => (),
// There's no more directories to traverse, so the iterator is done
Ok(false) => return None,
Err(e) => return Some(Err(e)),
}
}
}
}
pub trait PathExt {
/// Confirm whether the file has not been modified since the given timestamp.
///
/// This function will only return true if it can determine with certainty that the file hasn't
/// been modified.
///
/// # Args
///
/// * `timestamp` - Timestamp to compare modified time with
///
/// # Returns
///
/// True if the file has not been modified for sure, false otherwise.
fn not_modified_since(&self, timestamp: chrono::DateTime<Utc>) -> bool;
/// An extension of the `read_dir` command that runs through the entire underlying directory
/// structure using an iterative, stack-based traversal (depth-first over directories)
fn read_dir_recursive(&self) -> io::Result<ReadDirRecursive>;
}
impl PathExt for Path {
fn not_modified_since(&self, timestamp: chrono::DateTime<Utc>) -> bool {
self.metadata()
.and_then(|m| m.modified())
.map(|last_modified| {
let t: chrono::DateTime<Utc> = last_modified.into();
let t = t.with_timezone(&Local);
t < timestamp
})
.unwrap_or(false)
}
fn read_dir_recursive(&self) -> io::Result<ReadDirRecursive> {
ReadDirRecursive::start(self)
}
}
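The iterator above keeps an explicit `dir_stack` instead of recursing, so arbitrarily deep trees can't overflow the call stack. A self-contained sketch of that traversal shape; `collect_files` is an illustrative helper, not the crate's API:

```rust
use std::collections::HashSet;
use std::ffi::OsString;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Collect every file under `root`, skipping entries whose file name is in
/// `ignored`, using the same explicit directory stack as ReadDirRecursive.
fn collect_files(root: &Path, ignored: &HashSet<OsString>) -> io::Result<Vec<PathBuf>> {
    let mut files = Vec::new();
    let mut dir_stack = vec![root.to_path_buf()];
    while let Some(dir) = dir_stack.pop() {
        for entry in fs::read_dir(&dir)? {
            let entry = entry?;
            // Ignoring matches on the bare file name, like the `ignored` builder method
            if ignored.contains(&entry.file_name()) {
                continue;
            }
            let path = entry.path();
            if path.is_dir() {
                dir_stack.push(path); // descend into this directory later
            } else if path.is_file() {
                files.push(path);
            }
        }
    }
    Ok(files)
}
```

Unlike the lazy `ReadDirRecursive` iterator, this sketch collects eagerly and aborts on the first I/O error rather than yielding `io::Result` items one by one.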

backup/src/state.rs Normal file

@@ -0,0 +1,162 @@
use std::{
borrow::Borrow,
collections::{HashMap, HashSet},
ops::{Deref, DerefMut},
path::{Path, PathBuf},
};
use serde::{Deserialize, Serialize};
use crate::Delta;
/// Struct that represents a current state for a backup. This struct acts as a smart pointer around
/// a HashMap.
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct State(HashMap<PathBuf, HashSet<PathBuf>>);
impl State {
/// Apply the delta to the current state.
pub fn apply(&mut self, delta: &Delta) {
// First we add new files, then we remove the old ones
for (dir, added) in delta.added.iter() {
if let Some(current) = self.0.get_mut(dir) {
current.extend(added.iter().cloned());
} else {
self.0.insert(dir.clone(), added.clone());
}
}
for (dir, removed) in delta.removed.iter() {
if let Some(current) = self.0.get_mut(dir) {
current.retain(|k| !removed.contains(k));
}
}
}
/// Returns whether the provided relative path is part of the given state.
pub fn contains<P: AsRef<Path>>(&self, path: P) -> bool {
let path = path.as_ref();
self.0.iter().any(|(dir, files)| {
path.starts_with(dir) && files.contains(path.strip_prefix(dir).unwrap())
})
}
/// Returns whether the state is empty.
///
/// Note that this does not necessarily mean the state contains no sets, but rather
/// that any sets it does contain are themselves empty.
pub fn is_empty(&self) -> bool {
self.0.values().all(|s| s.is_empty())
}
pub fn append_dir<I>(&mut self, dir: impl Into<PathBuf>, files: I)
where
I: IntoIterator,
I::Item: Into<PathBuf>,
{
let dir = dir.into();
let files = files.into_iter().map(Into::into);
if let Some(dir_files) = self.0.get_mut(&dir) {
dir_files.extend(files);
} else {
self.0.insert(dir, files.collect());
}
}
pub fn with_dir<I>(mut self, dir: impl Into<PathBuf>, files: I) -> Self
where
I: IntoIterator,
I::Item: Into<PathBuf>,
{
self.append_dir(dir, files);
self
}
}
impl PartialEq for State {
fn eq(&self, other: &Self) -> bool {
let self_non_empty = self.0.values().filter(|files| !files.is_empty()).count();
let other_non_empty = other.0.values().filter(|files| !files.is_empty()).count();
if self_non_empty != other_non_empty {
return false;
}
// If both states have the same number of non-empty directories, then comparing each
// directory of one with the other will only be true if their list of non-empty directories
// is identical.
self.0
.iter()
.all(|(dir, files)| files.is_empty() || other.0.get(dir).map_or(false, |v| v == files))
}
}
impl Eq for State {}
impl<T> From<T> for State
where
T: IntoIterator,
T::Item: Borrow<Delta>,
{
fn from(deltas: T) -> Self {
let mut state = State::default();
for delta in deltas {
state.apply(delta.borrow());
}
state
}
}
impl AsRef<HashMap<PathBuf, HashSet<PathBuf>>> for State {
fn as_ref(&self) -> &HashMap<PathBuf, HashSet<PathBuf>> {
&self.0
}
}
impl Deref for State {
type Target = HashMap<PathBuf, HashSet<PathBuf>>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for State {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_eq() {
let a = State::default().with_dir("dir1", ["file1", "file2"]);
let b = State::default().with_dir("dir1", ["file1", "file2"]);
assert_eq!(a, b);
let b = b.with_dir("dir2", ["file3"]);
assert_ne!(a, b);
}
#[test]
fn test_eq_empty_dirs() {
let a = State::default().with_dir("dir1", ["file1", "file2"]);
let b = State::default()
.with_dir("dir1", ["file1", "file2"])
.with_dir("dir2", Vec::<PathBuf>::new());
assert_eq!(a, b);
let b = b.with_dir("dir2", ["file3"]);
assert_ne!(a, b);
}
}
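`State::apply` merges a delta in two passes: union in the added files per directory, then drop the removed ones. A standalone sketch of that merge over plain std maps; the `Delta` here is reduced to two bare maps and `apply_delta` is an illustrative free function, not the crate's:

```rust
use std::collections::{HashMap, HashSet};
use std::path::PathBuf;

type FileMap = HashMap<PathBuf, HashSet<PathBuf>>;

/// Apply one delta to a state map: first extend each directory's set with the
/// added files, then retain only files not listed as removed, mirroring
/// State::apply's two-pass order.
fn apply_delta(state: &mut FileMap, added: &FileMap, removed: &FileMap) {
    for (dir, files) in added {
        state.entry(dir.clone()).or_default().extend(files.iter().cloned());
    }
    for (dir, files) in removed {
        if let Some(current) = state.get_mut(dir) {
            current.retain(|f| !files.contains(f));
        }
    }
}
```

Because removal runs second, a file that appears in both `added` and `removed` of the same delta ends up absent, matching the order used in `State::apply`.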

papermc-api/Cargo.toml Normal file

@@ -0,0 +1,10 @@
[package]
name = "papermc-api"
version.workspace = true
edition.workspace = true
[dependencies]
serde.workspace = true
chrono.workspace = true
serde_json = "1.0.149"
ureq = { version = "3.3.0", features = ["json"] }

@@ -0,0 +1,19 @@
fn main() {
let client = papermc_api::Client::new();
let projects = client.projects().unwrap();
for project in projects {
println!("project: {:?}", project);
}
let versions = client.project("paper").versions().unwrap();
for version in versions {
println!("version: {:?}", version);
}
let latest = client.project("paper").version("1.21.1").latest().unwrap();
println!("latest: {:?}", latest);
let builds = client.project("paper").version("1.21.10").builds().unwrap();
println!("number of builds: {}", builds.len());
}

papermc-api/src/error.rs Normal file

@@ -0,0 +1,22 @@
#[derive(Debug)]
pub enum Error {
Ureq(ureq::Error),
BadBody,
}
impl std::error::Error for Error {}
impl From<ureq::Error> for Error {
fn from(value: ureq::Error) -> Self {
Self::Ureq(value)
}
}
impl std::fmt::Display for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Ureq(err) => err.fmt(f),
Self::BadBody => f.write_str("bad response body"),
}
}
}

papermc-api/src/lib.rs Normal file

@@ -0,0 +1,247 @@
use serde_json::Value;
pub use error::Error;
use crate::models::{Build, BuildCommit, BuildDownload, Java, Project, Version};
mod error;
mod models;
pub struct Client {
agent: ureq::Agent,
}
pub const BASE_URL: &str = "https://fill.papermc.io/v3";
impl Default for Client {
fn default() -> Self {
Self::new()
}
}
impl Client {
pub fn new() -> Self {
Self {
agent: ureq::agent(),
}
}
pub fn projects(&self) -> Result<Vec<Project>, Error> {
let mut res = self.agent.get(format!("{}/projects", BASE_URL)).call()?;
let body_json: Value = res.body_mut().read_json()?;
let projects = body_json["projects"].as_array().ok_or(Error::BadBody)?;
projects.iter().map(parse_project_json).collect()
}
pub fn project<'a>(&'a self, project: &'a str) -> ProjectQuery<'a> {
ProjectQuery {
agent: &self.agent,
project,
}
}
}
pub struct ProjectQuery<'a> {
agent: &'a ureq::Agent,
project: &'a str,
}
impl<'a> ProjectQuery<'a> {
pub fn info(&self) -> Result<Project, Error> {
let mut res = self
.agent
.get(format!("{}/projects/{}", BASE_URL, self.project))
.call()?;
let body_json: Value = res.body_mut().read_json()?;
parse_project_json(&body_json)
}
pub fn versions(&self) -> Result<Vec<Version>, Error> {
let mut res = self
.agent
.get(format!("{}/projects/{}/versions", BASE_URL, self.project))
.call()?;
let body_json: Value = res.body_mut().read_json()?;
let versions = body_json["versions"].as_array().ok_or(Error::BadBody)?;
versions.iter().map(parse_version_json).collect()
}
pub fn version(&self, version: &'a str) -> VersionQuery<'a> {
VersionQuery {
agent: self.agent,
project: self.project,
version,
}
}
}
pub struct VersionQuery<'a> {
agent: &'a ureq::Agent,
project: &'a str,
version: &'a str,
}
impl<'a> VersionQuery<'a> {
pub fn info(&self) -> Result<Version, Error> {
let mut res = self
.agent
.get(format!(
"{}/projects/{}/versions/{}",
BASE_URL, self.project, self.version
))
.call()?;
let body_json: Value = res.body_mut().read_json()?;
parse_version_json(&body_json)
}
pub fn builds(&self) -> Result<Vec<Build>, Error> {
let mut res = self
.agent
.get(format!(
"{}/projects/{}/versions/{}/builds",
BASE_URL, self.project, self.version
))
.call()?;
let body_json: Value = res.body_mut().read_json()?;
body_json
.as_array()
.ok_or(Error::BadBody)?
.iter()
.map(parse_build_json)
.collect()
}
pub fn build(&self, build: &str) -> Result<Build, Error> {
let mut res = self
.agent
.get(format!(
"{}/projects/{}/versions/{}/builds/{}",
BASE_URL, self.project, self.version, build
))
.call()?;
let body_json: Value = res.body_mut().read_json()?;
parse_build_json(&body_json)
}
pub fn latest(&self) -> Result<Build, Error> {
self.build("latest")
}
}
fn parse_project_json(value: &Value) -> Result<Project, Error> {
Ok(Project {
id: value["project"]["id"]
.as_str()
.map(|s| s.to_string())
.ok_or(Error::BadBody)?,
name: value["project"]["name"]
.as_str()
.map(|s| s.to_string())
.ok_or(Error::BadBody)?,
// Flatten map of versions into one array
versions: value["versions"]
.as_object()
.ok_or(Error::BadBody)?
.iter()
.map(|(_, versions)| versions.as_array().ok_or(Error::BadBody))
// Collect into error to propagate error of any of the versions
.collect::<Result<Vec<_>, _>>()?
.into_iter()
.flatten()
.map(|v| v.as_str().ok_or(Error::BadBody).map(|s| s.to_string()))
.collect::<Result<_, _>>()?,
})
}
fn parse_version_json(value: &Value) -> Result<Version, Error> {
Ok(Version {
id: value["version"]["id"]
.as_str()
.map(String::from)
.ok_or(Error::BadBody)?,
support_status: value["version"]["support"]["status"]
.as_str()
.ok_or(Error::BadBody)?
.parse()
.map_err(|_| Error::BadBody)?,
java: Java {
minimum_version: value["version"]["java"]["version"]["minimum"]
.as_u64()
.ok_or(Error::BadBody)?,
recommended_flags: value["version"]["java"]["flags"]["recommended"]
.as_array()
.ok_or(Error::BadBody)?
.iter()
.map(|v| v.as_str().map(String::from).ok_or(Error::BadBody))
.collect::<Result<_, _>>()?,
},
builds: value["builds"]
.as_array()
.ok_or(Error::BadBody)?
.iter()
.map(|v| v.as_u64().ok_or(Error::BadBody))
.collect::<Result<_, _>>()?,
})
}
fn parse_build_json(value: &Value) -> Result<Build, Error> {
Ok(Build {
id: value["id"].as_u64().ok_or(Error::BadBody)?,
time: chrono::DateTime::parse_from_rfc3339(value["time"].as_str().ok_or(Error::BadBody)?)
.map_err(|_| Error::BadBody)?
.into(),
channel: value["channel"]
.as_str()
.ok_or(Error::BadBody)?
.parse()
.map_err(|_| Error::BadBody)?,
commits: value["commits"]
.as_array()
.ok_or(Error::BadBody)?
.iter()
.map(|build| {
Ok(BuildCommit {
sha: build["sha"].as_str().ok_or(Error::BadBody)?.to_string(),
time: chrono::DateTime::parse_from_rfc3339(
build["time"].as_str().ok_or(Error::BadBody)?,
)
.map_err(|_| Error::BadBody)?
.into(),
message: build["message"].as_str().ok_or(Error::BadBody)?.to_string(),
})
})
.collect::<Result<_, Error>>()?,
downloads: value["downloads"]
.as_object()
.ok_or(Error::BadBody)?
.iter()
.map(|(key, build)| {
Ok((
key.to_string(),
BuildDownload {
name: build["name"].as_str().ok_or(Error::BadBody)?.to_string(),
size: build["size"].as_u64().ok_or(Error::BadBody)?,
url: build["url"].as_str().ok_or(Error::BadBody)?.to_string(),
checksums: build["checksums"]
.as_object()
.ok_or(Error::BadBody)?
.into_iter()
.map(|(key, value)| {
Ok((
key.to_string(),
value.as_str().ok_or(Error::BadBody)?.to_string(),
))
})
.collect::<Result<_, Error>>()?,
},
))
})
.collect::<Result<_, Error>>()?,
})
}
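The parser functions above repeatedly use the same trick: map each JSON element to a `Result` and `collect()` the iterator into `Result<Vec<_>, _>`, so the first malformed element aborts the whole parse instead of being silently skipped. A minimal standalone sketch of that pattern; `parse_build_ids` and its error type are illustrative, not part of the crate:

```rust
/// Parse a list of strings as u64 build ids. Collecting an iterator of
/// Results into Result<Vec<_>, _> propagates the first error, the same
/// collect-into-Result idiom used throughout parse_project_json and friends.
fn parse_build_ids(raw: &[&str]) -> Result<Vec<u64>, String> {
    raw.iter()
        .map(|s| s.parse::<u64>().map_err(|_| format!("bad build id: {s}")))
        .collect()
}
```
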

papermc-api/src/models.rs Normal file

@@ -0,0 +1,109 @@
use std::{collections::HashMap, fmt::Display, str::FromStr};
use chrono::{DateTime, Utc};
#[derive(Debug)]
pub struct Project {
pub id: String,
pub name: String,
pub versions: Vec<String>,
}
#[derive(Debug)]
pub enum SupportStatus {
Supported,
Unsupported,
}
pub struct SupportStatusParseError;
impl FromStr for SupportStatus {
type Err = SupportStatusParseError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"SUPPORTED" => Ok(Self::Supported),
"UNSUPPORTED" => Ok(Self::Unsupported),
_ => Err(SupportStatusParseError),
}
}
}
impl Display for SupportStatus {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Supported => f.write_str("SUPPORTED"),
Self::Unsupported => f.write_str("UNSUPPORTED"),
}
}
}
#[derive(Debug)]
pub struct Java {
pub minimum_version: u64,
pub recommended_flags: Vec<String>,
}
#[derive(Debug)]
pub struct Version {
pub id: String,
pub support_status: SupportStatus,
pub java: Java,
pub builds: Vec<u64>,
}
#[derive(Debug)]
pub struct Build {
pub id: u64,
pub time: DateTime<Utc>,
pub channel: BuildChannel,
pub commits: Vec<BuildCommit>,
pub downloads: HashMap<String, BuildDownload>,
}
#[derive(Debug)]
pub enum BuildChannel {
Alpha,
Beta,
Stable,
}
pub struct BuildChannelParseError;
impl FromStr for BuildChannel {
type Err = BuildChannelParseError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"STABLE" => Ok(Self::Stable),
"BETA" => Ok(Self::Beta),
"ALPHA" => Ok(Self::Alpha),
_ => Err(BuildChannelParseError),
}
}
}
impl Display for BuildChannel {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Alpha => f.write_str("ALPHA"),
Self::Beta => f.write_str("BETA"),
Self::Stable => f.write_str("STABLE"),
}
}
}
#[derive(Debug)]
pub struct BuildCommit {
pub sha: String,
pub time: DateTime<Utc>,
pub message: String,
}
#[derive(Debug)]
pub struct BuildDownload {
pub name: String,
pub checksums: HashMap<String, String>,
pub size: u64,
pub url: String,
}

@@ -1,123 +0,0 @@
mod server;
use clap::Parser;
use server::ServerType;
use std::io;
use std::path::PathBuf;
use std::sync::{Arc, Mutex};
#[derive(Parser)]
#[command(author, version, about, long_about = None)]
struct Cli {
/// Type of server
type_: ServerType,
/// Version string for the server, e.g. 1.19.4-545
#[arg(env = "ALEX_SERVER_VERSION")]
server_version: String,
/// Server jar to execute
#[arg(
long,
value_name = "JAR_PATH",
default_value = "server.jar",
env = "ALEX_JAR"
)]
jar: PathBuf,
/// Directory where configs are stored, and where the server will run
#[arg(
long,
value_name = "CONFIG_DIR",
default_value = ".",
env = "ALEX_CONFIG_DIR"
)]
config: PathBuf,
/// Directory where world files will be saved
#[arg(
long,
value_name = "WORLD_DIR",
default_value = "../worlds",
env = "ALEX_WORLD_DIR"
)]
world: PathBuf,
/// Directory where backups will be stored
#[arg(
long,
value_name = "BACKUP_DIR",
default_value = "../backups",
env = "ALEX_BACKUP_DIR"
)]
backup: PathBuf,
/// Java command to run the server jar with
#[arg(long, value_name = "JAVA_CMD", default_value_t = String::from("java"), env = "ALEX_JAVA")]
java: String,
/// XMS value in megabytes for the server instance
#[arg(long, default_value_t = 1024, env = "ALEX_XMS")]
xms: u64,
/// XMX value in megabytes for the server instance
#[arg(long, default_value_t = 2048, env = "ALEX_XMX")]
xmx: u64,
/// How many backups to keep
#[arg(short = 'n', long, default_value_t = 7, env = "ALEX_MAX_BACKUPS")]
max_backups: u64,
/// How frequently to perform a backup, in minutes; 0 to disable.
#[arg(short = 't', long, default_value_t = 0, env = "ALEX_FREQUENCY")]
frequency: u64,
}
fn backups_thread(counter: Arc<Mutex<server::ServerProcess>>, frequency: u64) {
loop {
std::thread::sleep(std::time::Duration::from_secs(frequency * 60));
{
let mut server = counter.lock().unwrap();
// We explicitly ignore the error here, as we don't want the thread to fail
let _ = server.backup();
}
}
}
fn main() {
let cli = Cli::parse();
let cmd = server::ServerCommand::new(cli.type_, &cli.server_version)
.java(&cli.java)
.jar(cli.jar)
.config(cli.config)
.world(cli.world)
.backup(cli.backup)
.xms(cli.xms)
.xmx(cli.xmx)
.max_backups(cli.max_backups);
let counter = Arc::new(Mutex::new(cmd.spawn().expect("Failed to start server.")));
if cli.frequency > 0 {
let clone = Arc::clone(&counter);
std::thread::spawn(move || backups_thread(clone, cli.frequency));
}
let stdin = io::stdin();
let input = &mut String::new();
loop {
input.clear();
if stdin.read_line(input).is_err() {
continue;
};
{
let mut server = counter.lock().unwrap();
if let Err(e) = server.send_command(input) {
println!("{}", e);
};
}
if input.trim() == "stop" {
break;
}
}
}

@@ -1,137 +0,0 @@
use crate::server::ServerProcess;
use clap::ValueEnum;
use std::fmt;
use std::fs::File;
use std::io::Write;
use std::path::{Path, PathBuf};
use std::process::{Command, Stdio};
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, ValueEnum)]
pub enum ServerType {
Paper,
Forge,
Vanilla,
}
impl fmt::Display for ServerType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let s = match self {
ServerType::Paper => "PaperMC",
ServerType::Forge => "Forge",
ServerType::Vanilla => "Vanilla",
};
write!(f, "{}", s)
}
}
pub struct ServerCommand {
type_: ServerType,
version: String,
java: String,
jar: PathBuf,
config_dir: PathBuf,
world_dir: PathBuf,
backup_dir: PathBuf,
xms: u64,
xmx: u64,
max_backups: u64,
}
impl ServerCommand {
pub fn new(type_: ServerType, version: &str) -> Self {
ServerCommand {
type_,
version: String::from(version),
java: String::from("java"),
jar: PathBuf::from("server.jar"),
config_dir: PathBuf::from("config"),
world_dir: PathBuf::from("worlds"),
backup_dir: PathBuf::from("backups"),
xms: 1024,
xmx: 2048,
max_backups: 7,
}
}
pub fn java(mut self, java: &str) -> Self {
self.java = String::from(java);
self
}
pub fn jar<T: AsRef<Path>>(mut self, path: T) -> Self {
self.jar = PathBuf::from(path.as_ref());
self
}
pub fn config<T: AsRef<Path>>(mut self, path: T) -> Self {
self.config_dir = PathBuf::from(path.as_ref());
self
}
pub fn world<T: AsRef<Path>>(mut self, path: T) -> Self {
self.world_dir = PathBuf::from(path.as_ref());
self
}
pub fn backup<T: AsRef<Path>>(mut self, path: T) -> Self {
self.backup_dir = PathBuf::from(path.as_ref());
self
}
pub fn xms(mut self, v: u64) -> Self {
self.xms = v;
self
}
pub fn xmx(mut self, v: u64) -> Self {
self.xmx = v;
self
}
pub fn max_backups(mut self, v: u64) -> Self {
self.max_backups = v;
self
}
fn accept_eula(&self) -> std::io::Result<()> {
let mut eula_path = self.config_dir.clone();
eula_path.push("eula.txt");
let mut eula_file = File::create(eula_path)?;
eula_file.write_all(b"eula=true")?;
Ok(())
}
pub fn spawn(self) -> std::io::Result<ServerProcess> {
// To avoid any issues, we use absolute paths for everything when spawning the process
let jar = self.jar.canonicalize()?;
let config_dir = self.config_dir.canonicalize()?;
let world_dir = self.world_dir.canonicalize()?;
let backup_dir = self.backup_dir.canonicalize()?;
self.accept_eula()?;
let child = Command::new(&self.java)
.current_dir(&config_dir)
.arg("-jar")
.arg(&jar)
.arg("--universe")
.arg(&world_dir)
.arg("--nogui")
.stdin(Stdio::piped())
.spawn()?;
Ok(ServerProcess::new(
self.type_,
self.version,
config_dir,
world_dir,
backup_dir,
self.max_backups,
child,
))
}
}

@@ -1,5 +0,0 @@
mod command;
mod process;
pub use command::{ServerCommand, ServerType};
pub use process::ServerProcess;

@@ -1,159 +0,0 @@
use crate::server::ServerType;
use flate2::write::GzEncoder;
use flate2::Compression;
use std::io::Write;
use std::path::PathBuf;
use std::process::Child;
#[link(name = "c")]
extern "C" {
fn geteuid() -> u32;
fn getegid() -> u32;
}
pub struct ServerProcess {
type_: ServerType,
version: String,
config_dir: PathBuf,
world_dir: PathBuf,
backup_dir: PathBuf,
max_backups: u64,
child: Child,
}
impl ServerProcess {
pub fn new(
type_: ServerType,
version: String,
config_dir: PathBuf,
world_dir: PathBuf,
backup_dir: PathBuf,
max_backups: u64,
child: Child,
) -> ServerProcess {
ServerProcess {
type_,
version,
config_dir,
world_dir,
backup_dir,
max_backups,
child,
}
}
pub fn send_command(&mut self, cmd: &str) -> std::io::Result<()> {
match cmd.trim() {
"stop" | "exit" => self.stop()?,
"backup" => self.backup()?,
s => self.custom(s)?,
}
Ok(())
}
fn custom(&mut self, cmd: &str) -> std::io::Result<()> {
let mut stdin = self.child.stdin.as_ref().unwrap();
stdin.write_all(format!("{}\n", cmd.trim()).as_bytes())?;
stdin.flush()?;
Ok(())
}
pub fn stop(&mut self) -> std::io::Result<()> {
self.custom("stop")?;
self.child.wait()?;
Ok(())
}
/// Perform a backup by disabling the server's save feature and flushing its data, before
/// creating an archive file.
pub fn backup(&mut self) -> std::io::Result<()> {
self.custom("say backing up server")?;
// Make sure the server isn't modifying the files during the backup
self.custom("save-off")?;
self.custom("save-all")?;
// TODO implement a better mechanism
// We wait some time to (hopefully) ensure the save-all call has completed
std::thread::sleep(std::time::Duration::from_secs(10));
let res = self.create_backup_archive();
if res.is_ok() {
self.remove_old_backups()?;
}
// The server's save feature needs to be enabled again even if the archive failed to create
self.custom("save-on")?;
self.custom("say server backed up successfully")?;
res
}
/// Create a new compressed backup archive of the server's data.
fn create_backup_archive(&mut self) -> std::io::Result<()> {
// Create a gzip-compressed tarball of the worlds folder
let filename = format!(
"{}",
chrono::offset::Local::now().format("%Y-%m-%d_%H-%M-%S.tar.gz")
);
let path = self.backup_dir.join(filename);
let tar_gz = std::fs::File::create(path)?;
let enc = GzEncoder::new(tar_gz, Compression::default());
let mut tar = tar::Builder::new(enc);
tar.append_dir_all("worlds", &self.world_dir)?;
// We don't store all files in the config, as this would include caches
tar.append_path_with_name(
self.config_dir.join("server.properties"),
"config/server.properties",
)?;
// We add a file to the backup describing for what version it was made
let info = format!("{} {}", self.type_, self.version);
let info_bytes = info.as_bytes();
let mut header = tar::Header::new_gnu();
header.set_size(info_bytes.len().try_into().unwrap());
header.set_mode(0o100644);
unsafe {
header.set_gid(getegid().into());
header.set_uid(geteuid().into());
}
tar.append_data(&mut header, "info.txt", info_bytes)?;
// tar.append_dir_all("config", &self.config_dir)?;
// Backup file gets finalized in the drop
Ok(())
}
/// Remove the oldest backups
fn remove_old_backups(&mut self) -> std::io::Result<()> {
// The naming format used allows us to sort the backups by name, which also orders
// them by creation time
let mut backups = std::fs::read_dir(&self.backup_dir)?
.filter_map(|res| res.map(|e| e.path()).ok())
.collect::<Vec<PathBuf>>();
backups.sort();
let max_backups: usize = self.max_backups.try_into().unwrap();
if backups.len() > max_backups {
let excess_backups = backups.len() - max_backups;
for backup in &backups[0..excess_backups] {
std::fs::remove_file(backup)?;
}
}
Ok(())
}
}
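`remove_old_backups` works because the `%Y-%m-%d_%H-%M-%S` names produced by `create_backup_archive` are zero-padded, so lexicographic order coincides with chronological order. A small sketch of the pruning decision over bare names; `backups_to_remove` is an illustrative helper, not the crate's API:

```rust
/// Given backup file names in the "%Y-%m-%d_%H-%M-%S.tar.gz" format used by
/// create_backup_archive, return the names to delete so that at most
/// `max_backups` remain. Zero-padded date fields make a plain lexicographic
/// sort equal to sorting by creation time, oldest first.
fn backups_to_remove(mut names: Vec<String>, max_backups: usize) -> Vec<String> {
    names.sort();
    if names.len() > max_backups {
        // Keep only the oldest excess entries: those are the ones to delete
        names.truncate(names.len() - max_backups);
        names
    } else {
        Vec::new()
    }
}
```

This invariant would break with a non-padded or locale-dependent format (e.g. `%-d` or month names), which is worth keeping in mind if the filename format ever changes.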