34 Commits

Author SHA1 Message Date
hdbg
f709650bb1 refactor(server::{user_agent, client}): move auth part to separate function to not to pollute actor session with one-time concerns
Some checks failed
ci/woodpecker/pr/server-lint Pipeline failed
ci/woodpecker/pr/server-audit Pipeline was successful
ci/woodpecker/pr/server-vet Pipeline failed
ci/woodpecker/pr/server-test Pipeline was successful
2026-03-01 23:35:22 +01:00
hdbg
004b14a168 refactor(transport): convert Bi trait to use async_trait 2026-03-01 13:18:48 +01:00
hdbg
4169b2ba42 refactor: consolidate auth messages into client and user_agent packages
Some checks failed
ci/woodpecker/pr/server-lint Pipeline failed
ci/woodpecker/pr/server-audit Pipeline was successful
ci/woodpecker/pr/server-vet Pipeline failed
ci/woodpecker/pr/server-test Pipeline was successful
ci/woodpecker/push/server-lint Pipeline failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-test Pipeline was successful
2026-03-01 12:49:38 +01:00
hdbg
3cc63474a8 chore(server): update Cargo.lock dependencies
Some checks failed
ci/woodpecker/pr/server-audit Pipeline was successful
ci/woodpecker/pr/server-vet Pipeline failed
ci/woodpecker/pr/server-lint Pipeline failed
ci/woodpecker/pr/server-test Pipeline was successful
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-lint Pipeline failed
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-test Pipeline was successful
2026-02-26 22:41:50 +01:00
hdbg
3478204b9f tests(user-agent): basic auth tests similar to server 2026-02-26 22:41:24 +01:00
hdbg
61c65ddbcb feat(useragent): initial connection impl 2026-02-26 22:22:44 +01:00
hdbg
3401205cbd refactor(transport): simplify converters 2026-02-26 21:42:24 +01:00
hdbg
1799aef6f8 refactor(transport): implemented Bi stream based abstraction for actor communication with next loop override 2026-02-26 19:22:33 +01:00
hdbg
fe8c5e1bd2 housekeeping(useragent): rename
Some checks failed
ci/woodpecker/push/server-lint Pipeline failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-test Pipeline was successful
2026-02-21 13:54:47 +01:00
hdbg
cbbe1f8881 feat(proto): add URL parsing and TLS certificate management 2026-02-18 14:09:58 +01:00
hdbg
7438d62695 misc: create license and readme 2026-02-17 22:20:30 +01:00
hdbg
4236f2c36d refactor(server): reogranized actors, context, and db modules into <dir>/mod.rs structure
Some checks failed
ci/woodpecker/push/server-lint Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-test Pipeline was successful
2026-02-16 22:29:48 +01:00
hdbg
76ff535619 refactor(server::tests): moved integration-like tests into tests/ 2026-02-16 22:27:59 +01:00
hdbg
b3566c8af6 refactor(server): separated global actors into their own handle 2026-02-16 21:58:14 +01:00
hdbg
bdb9f01757 refactor(server): actors reorganization & linter fixes 2026-02-16 21:43:59 +01:00
hdbg
0805e7a846 feat(keyholder): add seal method and unseal integration tests 2026-02-16 21:38:29 +01:00
hdbg
eb9cbc88e9 feat(server::user-agent): Unseal implemented 2026-02-16 21:17:06 +01:00
hdbg
dd716da4cd test(keyholder): remove unused imports from test modules 2026-02-16 21:15:13 +01:00
hdbg
1545db7428 fix(ci): add protoc installation for lints
Some checks failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-lint Pipeline failed
ci/woodpecker/push/server-test Pipeline was successful
2026-02-16 21:14:55 +01:00
hdbg
20ac84b60c fix(ci): add clippy installation in mise.toml
Some checks failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-lint Pipeline failed
ci/woodpecker/push/server-test Pipeline was successful
2026-02-16 21:04:13 +01:00
hdbg
8f6dda871b refactor(actors): rename BootstrapActor to Bootstrapper 2026-02-16 21:01:53 +01:00
hdbg
47108ed8ad chore(supply-chain): update cargo-vet audits and trusted publishers
Some checks failed
ci/woodpecker/pr/server-lint Pipeline failed
ci/woodpecker/pr/server-audit Pipeline was successful
ci/woodpecker/pr/server-vet Pipeline failed
ci/woodpecker/pr/server-test Pipeline was successful
ci/woodpecker/push/server-lint Pipeline failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-test Pipeline was successful
2026-02-16 20:52:31 +01:00
hdbg
359df73c2e feat(server::key_holder): unique index on (root_key_id, nonce) to avoid nonce reuse 2026-02-16 20:45:15 +01:00
hdbg
ce03b7e15d feat(server::key_holder): ability to remotely get current state 2026-02-16 20:40:36 +01:00
hdbg
e4038d9188 refactor(keyholder): rename KeyHolderActor to KeyHolder and optimize db connection lifetime 2026-02-16 20:36:47 +01:00
hdbg
c82339d764 security(server::key_holder): replaced nonce-caching with exclusive transaction fetching nonce from the database 2026-02-16 18:23:25 +01:00
hdbg
c5b51f4b70 feat(server): UserAgent seal/unseal
Some checks failed
ci/woodpecker/pr/server-lint Pipeline failed
ci/woodpecker/pr/server-audit Pipeline was successful
ci/woodpecker/pr/server-vet Pipeline failed
ci/woodpecker/pr/server-test Pipeline was successful
2026-02-16 14:00:23 +01:00
hdbg
6b8f8c9ff7 feat(unseal): add unseal protocol support for user agents 2026-02-15 13:04:55 +01:00
hdbg
8263bc6b6f feat(server): boot mechanism 2026-02-15 01:44:12 +01:00
hdbg
a6c849f268 ci: add server linting pipeline for Rust code quality checks 2026-02-14 23:44:16 +01:00
hdbg
d8d65da0b4 test(user-agent): add challenge-response auth flow test 2026-02-14 23:43:36 +01:00
hdbg
abdf4e3893 tests(server): UserAgent invalid bootstrap token 2026-02-14 19:48:37 +01:00
hdbg
4bac70a6e9 security(server): cargo-vet proper init
All checks were successful
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline was successful
ci/woodpecker/push/server-test Pipeline was successful
2026-02-14 19:16:09 +01:00
hdbg
54a41743be housekeeping(server): trimmed-down dependencies 2026-02-14 19:04:50 +01:00
124 changed files with 7705 additions and 1423 deletions

View File

@@ -8,7 +8,7 @@ when:
include: ['.woodpecker/server-*.yaml', 'server/**']
steps:
- name: test
- name: audit
image: jdxcode/mise:latest
directory: server
environment:

View File

@@ -0,0 +1,25 @@
when:
- event: pull_request
path:
include: ['.woodpecker/server-*.yaml', 'server/**']
- event: push
branch: main
path:
include: ['.woodpecker/server-*.yaml', 'server/**']
steps:
- name: lint
image: jdxcode/mise:latest
directory: server
environment:
CARGO_TERM_COLOR: always
CARGO_TARGET_DIR: /usr/local/cargo/target
CARGO_HOME: /usr/local/cargo/registry
volumes:
- cargo-target:/usr/local/cargo/target
- cargo-registry:/usr/local/cargo/registry
commands:
- apt-get update && apt-get install -y pkg-config
- mise install rust
- mise install protoc
- mise exec rust -- cargo clippy --all-targets --all-features -- -D warnings

View File

@@ -0,0 +1,26 @@
when:
- event: pull_request
path:
include: ['.woodpecker/server-*.yaml', 'server/**']
- event: push
branch: main
path:
include: ['.woodpecker/server-*.yaml', 'server/**']
steps:
- name: vet
image: jdxcode/mise:latest
directory: server
environment:
CARGO_TERM_COLOR: always
CARGO_TARGET_DIR: /usr/local/cargo/target
CARGO_HOME: /usr/local/cargo/registry
volumes:
- cargo-target:/usr/local/cargo/target
- cargo-registry:/usr/local/cargo/registry
commands:
- apt-get update && apt-get install -y pkg-config
# Install only the necessary Rust toolchain and test runner to speed up the CI
- mise install rust
- mise install cargo:cargo-vet
- mise exec cargo:cargo-vet -- cargo vet

View File

@@ -3,7 +3,6 @@
Arbiter is a permissioned signing service for cryptocurrency wallets. It runs as a background service on the user's machine with an optional client application for vault management.
**Core principle:** The vault NEVER exposes key material. It only produces signatures when a request satisfies the configured policies.
---
## 1. Peer Types

190
LICENSE Normal file
View File

@@ -0,0 +1,190 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2026 MarketTakers
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

13
README.md Normal file
View File

@@ -0,0 +1,13 @@
# Arbiter
> Policy-first multi-client wallet daemon, allowing permissioned transactions across blockchains
## Security warning
Arbiter cannot meaningfully protect against host compromise. A potential attack flow:
- The attacker steals the TLS keys from the database
- Impersonates the server, simply accepting user-agent challenge solutions
- Pretends to be in a sealed state and performs DH with the client
- Steals the user's password and derives the seal key
While this attack is highly targeted, it is still possible.
> This software is experimental. Do not use with funds you cannot afford to lose.

View File

@@ -1,89 +0,0 @@
name: app
description: "A new Flutter project."
# The following line prevents the package from being accidentally published to
# pub.dev using `flutter pub publish`. This is preferred for private packages.
publish_to: 'none' # Remove this line if you wish to publish to pub.dev
# The following defines the version and build number for your application.
# A version number is three numbers separated by dots, like 1.2.43
# followed by an optional build number separated by a +.
# Both the version and the builder number may be overridden in flutter
# build by specifying --build-name and --build-number, respectively.
# In Android, build-name is used as versionName while build-number used as versionCode.
# Read more about Android versioning at https://developer.android.com/studio/publish/versioning
# In iOS, build-name is used as CFBundleShortVersionString while build-number is used as CFBundleVersion.
# Read more about iOS versioning at
# https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html
# In Windows, build-name is used as the major, minor, and patch parts
# of the product and file versions while build-number is used as the build suffix.
version: 1.0.0+1
environment:
sdk: ^3.10.8
# Dependencies specify other packages that your package needs in order to work.
# To automatically upgrade your package dependencies to the latest versions
# consider running `flutter pub upgrade --major-versions`. Alternatively,
# dependencies can be manually updated by changing the version numbers below to
# the latest version available on pub.dev. To see which dependencies have newer
# versions available, run `flutter pub outdated`.
dependencies:
flutter:
sdk: flutter
# The following adds the Cupertino Icons font to your application.
# Use with the CupertinoIcons class for iOS style icons.
cupertino_icons: ^1.0.8
dev_dependencies:
flutter_test:
sdk: flutter
# The "flutter_lints" package below contains a set of recommended lints to
# encourage good coding practices. The lint set provided by the package is
# activated in the `analysis_options.yaml` file located at the root of your
# package. See that file for information about deactivating specific lint
# rules and activating additional ones.
flutter_lints: ^6.0.0
# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec
# The following section is specific to Flutter packages.
flutter:
# The following line ensures that the Material Icons font is
# included with your application, so that you can use the icons in
# the material Icons class.
uses-material-design: true
# To add assets to your application, add an assets section, like this:
# assets:
# - images/a_dot_burr.jpeg
# - images/a_dot_ham.jpeg
# An image asset can refer to one or more resolution-specific "variants", see
# https://flutter.dev/to/resolution-aware-images
# For details regarding adding assets from package dependencies, see
# https://flutter.dev/to/asset-from-package
# To add custom fonts to your application, add a fonts section here,
# in this "flutter" section. Each entry in this list should have a
# "family" key with the font family name, and a "fonts" key with a
# list giving the asset and other descriptors for the font. For
# example:
# fonts:
# - family: Schyler
# fonts:
# - asset: fonts/Schyler-Regular.ttf
# - asset: fonts/Schyler-Italic.ttf
# style: italic
# - family: Trajan Pro
# fonts:
# - asset: fonts/TrajanPro.ttf
# - asset: fonts/TrajanPro_Bold.ttf
# weight: 700
#
# For details regarding fonts from package dependencies,
# see https://flutter.dev/to/font-from-package

View File

@@ -10,10 +10,18 @@ backend = "cargo:cargo-features"
version = "0.11.1"
backend = "cargo:cargo-features-manager"
[[tools."cargo:cargo-insta"]]
version = "1.46.3"
backend = "cargo:cargo-insta"
[[tools."cargo:cargo-nextest"]]
version = "0.9.126"
backend = "cargo:cargo-nextest"
[[tools."cargo:cargo-shear"]]
version = "1.9.1"
backend = "cargo:cargo-shear"
[[tools."cargo:cargo-vet"]]
version = "0.10.2"
backend = "cargo:cargo-vet"

View File

@@ -2,9 +2,10 @@
"cargo:diesel_cli" = { version = "2.3.6", features = "sqlite,sqlite-bundled", default-features = false }
"cargo:cargo-audit" = "0.22.1"
"cargo:cargo-vet" = "0.10.2"
flutter = "3.38.9-stable"
protoc = "29.6"
rust = "1.93.0"
"rust" = {version = "1.93.0", components = "clippy"}
"cargo:cargo-features-manager" = "0.11.1"
"cargo:cargo-nextest" = "0.9.126"
"cargo:cargo-shear" = "latest"
"cargo:cargo-insta" = "1.46.3"

View File

@@ -2,30 +2,8 @@ syntax = "proto3";
package arbiter;
import "auth.proto";
message ClientRequest {
oneof payload {
arbiter.auth.ClientMessage auth_message = 1;
}
}
message ClientResponse {
oneof payload {
arbiter.auth.ServerMessage auth_message = 1;
}
}
message UserAgentRequest {
oneof payload {
arbiter.auth.ClientMessage auth_message = 1;
}
}
message UserAgentResponse {
oneof payload {
arbiter.auth.ServerMessage auth_message = 1;
}
}
import "client.proto";
import "user_agent.proto";
message ServerInfo {
string version = 1;
@@ -33,6 +11,6 @@ message ServerInfo {
}
service ArbiterService {
rpc Client(stream ClientRequest) returns (stream ClientResponse);
rpc UserAgent(stream UserAgentRequest) returns (stream UserAgentResponse);
rpc Client(stream arbiter.client.ClientRequest) returns (stream arbiter.client.ClientResponse);
rpc UserAgent(stream arbiter.user_agent.UserAgentRequest) returns (stream arbiter.user_agent.UserAgentResponse);
}

View File

@@ -1,12 +1,9 @@
syntax = "proto3";
package arbiter.auth;
import "google/protobuf/timestamp.proto";
package arbiter.client;
message AuthChallengeRequest {
bytes pubkey = 1;
optional string bootstrap_token = 2;
}
message AuthChallenge {
@@ -20,14 +17,14 @@ message AuthChallengeSolution {
message AuthOk {}
message ClientMessage {
message ClientRequest {
oneof payload {
AuthChallengeRequest auth_challenge_request = 1;
AuthChallengeSolution auth_challenge_solution = 2;
}
}
message ServerMessage {
message ClientResponse {
oneof payload {
AuthChallenge auth_challenge = 1;
AuthOk auth_ok = 2;

View File

@@ -1,14 +0,0 @@
syntax = "proto3";
package arbiter.unseal;
message UserAgentKeyRequest {}
message ServerKeyResponse {
bytes pubkey = 1;
}
message UserAgentSealedKey {
bytes sealed_key = 1;
bytes pubkey = 2;
bytes nonce = 3;
}

View File

@@ -0,0 +1,68 @@
syntax = "proto3";
package arbiter.user_agent;
import "google/protobuf/empty.proto";
message AuthChallengeRequest {
bytes pubkey = 1;
optional string bootstrap_token = 2;
}
message AuthChallenge {
bytes pubkey = 1;
int32 nonce = 2;
}
message AuthChallengeSolution {
bytes signature = 1;
}
message AuthOk {}
message UnsealStart {
bytes client_pubkey = 1;
}
message UnsealStartResponse {
bytes server_pubkey = 1;
}
message UnsealEncryptedKey {
bytes nonce = 1;
bytes ciphertext = 2;
bytes associated_data = 3;
}
enum UnsealResult {
UNSEAL_RESULT_UNSPECIFIED = 0;
UNSEAL_RESULT_SUCCESS = 1;
UNSEAL_RESULT_INVALID_KEY = 2;
UNSEAL_RESULT_UNBOOTSTRAPPED = 3;
}
enum VaultState {
VAULT_STATE_UNSPECIFIED = 0;
VAULT_STATE_UNBOOTSTRAPPED = 1;
VAULT_STATE_SEALED = 2;
VAULT_STATE_UNSEALED = 3;
VAULT_STATE_ERROR = 4;
}
message UserAgentRequest {
oneof payload {
AuthChallengeRequest auth_challenge_request = 1;
AuthChallengeSolution auth_challenge_solution = 2;
UnsealStart unseal_start = 3;
UnsealEncryptedKey unseal_encrypted_key = 4;
google.protobuf.Empty query_vault_state = 5;
}
}
message UserAgentResponse {
oneof payload {
AuthChallenge auth_challenge = 1;
AuthOk auth_ok = 2;
UnsealStartResponse unseal_start_response = 3;
UnsealResult unseal_result = 4;
VaultState vault_state = 5;
}
}
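
For orientation, here is a hedged sketch (not part of this diff) of constructing the generated user-agent request type in Rust; the module path and oneof naming assume tonic-prost's default codegen for user_agent.proto and the `arbiter_proto::proto::user_agent` re-export added later in this PR:

```rust
// Hypothetical helper; type and module names assume prost's default oneof codegen
// for user_agent.proto, re-exported from arbiter_proto::proto::user_agent.
use arbiter_proto::proto::user_agent::{
    user_agent_request::Payload, AuthChallengeRequest, UserAgentRequest,
};

/// First message of the handshake: the user agent announces its public key and,
/// on first contact, the bootstrap token.
fn auth_challenge_request(pubkey: Vec<u8>, bootstrap_token: Option<String>) -> UserAgentRequest {
    UserAgentRequest {
        payload: Some(Payload::AuthChallengeRequest(AuthChallengeRequest {
            pubkey,
            bootstrap_token,
        })),
    }
}
```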

990
server/Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -9,15 +9,12 @@ resolver = "3"
[workspace.dependencies]
prost = "0.14.3"
tonic = { version = "0.14.3", features = ["deflate", "gzip", "tls-connect-info", "zstd"] }
tracing = "0.1.44"
tokio = { version = "1.49.0", features = ["full"] }
ed25519 = "3.0.0-rc.4"
ed25519-dalek = { version = "3.0.0-pre.6", features = ["rand_core"] }
chrono = { version = "0.4.43", features = ["serde"] }
rand = "0.10.0"
uuid = "1.20.0"
rustls = "0.23.36"
smlang = "0.8.0"
miette = { version = "7.6.0", features = ["fancy", "serde"] }
@@ -26,4 +23,12 @@ async-trait = "0.1.89"
futures = "0.3.31"
tokio-stream = { version = "0.1.18", features = ["full"] }
kameo = "0.19.2"
prost-types = { version = "0.14.3", features = ["chrono"] }
x25519-dalek = { version = "2.0.1", features = ["getrandom"] }
rstest = "0.26.1"
rustls-pki-types = "1.14.0"
rcgen = { version = "0.14.7", features = [
"aws_lc_rs",
"pem",
"x509-parser",
"zeroize",
], default-features = false }

BIN
server/crates/.DS_Store vendored Normal file

Binary file not shown.

View File

@@ -3,5 +3,6 @@ name = "arbiter-client"
version = "0.1.0"
edition = "2024"
repository = "https://git.markettakers.org/MarketTakers/arbiter"
license = "Apache-2.0"
[dependencies]

View File

@@ -3,23 +3,32 @@ name = "arbiter-proto"
version = "0.1.0"
edition = "2024"
repository = "https://git.markettakers.org/MarketTakers/arbiter"
license = "Apache-2.0"
[dependencies]
tonic.workspace = true
prost.workspace = true
bytes = "1.11.1"
prost-derive = "0.14.3"
prost-types.workspace = true
tonic-prost = "0.14.3"
rkyv = "0.8.15"
tokio.workspace = true
futures.workspace = true
tonic-prost = "0.14.3"
prost = "0.14.3"
kameo.workspace = true
hex = "0.4.3"
url = "2.5.8"
miette.workspace = true
thiserror.workspace = true
rustls-pki-types.workspace = true
base64 = "0.22.1"
tracing.workspace = true
async-trait.workspace = true
[build-dependencies]
prost-build = "0.14.3"
serde_json = "1"
tonic-prost-build = "0.14.3"
[dev-dependencies]
rstest.workspace = true
rand.workspace = true
rcgen.workspace = true
[package.metadata.cargo-shear]
ignored = ["tonic-prost", "prost", "kameo"]

View File

@@ -3,12 +3,16 @@ use tonic_prost_build::configure;
static PROTOBUF_DIR: &str = "../../../protobufs";
fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("cargo::rerun-if-changed={PROTOBUF_DIR}");
configure()
.message_attribute(".", "#[derive(::kameo::Reply)]")
.compile_protos(
&[
format!("{}/arbiter.proto", PROTOBUF_DIR),
format!("{}/auth.proto", PROTOBUF_DIR),
format!("{}/user_agent.proto", PROTOBUF_DIR),
format!("{}/client.proto", PROTOBUF_DIR),
],
&[PROTOBUF_DIR.to_string()],
)

View File

@@ -1,19 +1,24 @@
use crate::proto::auth::AuthChallenge;
pub mod transport;
pub mod url;
use base64::{Engine, prelude::BASE64_STANDARD};
pub mod proto {
tonic::include_proto!("arbiter");
pub mod auth {
tonic::include_proto!("arbiter.auth");
pub mod user_agent {
tonic::include_proto!("arbiter.user_agent");
}
pub mod client {
tonic::include_proto!("arbiter.client");
}
}
pub mod transport;
pub static BOOTSTRAP_TOKEN_PATH: &'static str = "bootstrap_token";
pub static BOOTSTRAP_PATH: &str = "bootstrap_token";
pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {
static ARBITER_HOME: &'static str = ".arbiter";
static ARBITER_HOME: &str = ".arbiter";
let home_dir = std::env::home_dir().ok_or(std::io::Error::new(
std::io::ErrorKind::PermissionDenied,
"can not get home directory",
@@ -25,7 +30,7 @@ pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {
Ok(arbiter_home)
}
pub fn format_challenge(challenge: &AuthChallenge) -> Vec<u8> {
let concat_form = format!("{}:{}", challenge.nonce, hex::encode(&challenge.pubkey));
concat_form.into_bytes().to_vec()
pub fn format_challenge(nonce: i32, pubkey: &[u8]) -> Vec<u8> {
let concat_form = format!("{}:{}", nonce, BASE64_STANDARD.encode(pubkey));
concat_form.into_bytes()
}
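
The new `format_challenge(nonce, pubkey)` signature and base64 encoding suggest the following client-side signing step. This is a hedged sketch (not in the diff), assuming the ed25519-dalek `Signer` API that pairs with the server's `verify_strict` check:

```rust
// Hedged client-side sketch: signing the challenge bytes produced by
// arbiter_proto::format_challenge.
use base64::{Engine as _, prelude::BASE64_STANDARD};
use ed25519_dalek::{Signer as _, SigningKey};

/// Returns the signature bytes the server verifies with `verify_strict`.
fn solve_challenge(signing_key: &SigningKey, nonce: i32, pubkey: &[u8]) -> Vec<u8> {
    // Mirrors format_challenge: "<nonce>:<base64(pubkey)>", where pubkey is the key
    // echoed back in the AuthChallenge message.
    let message = format!("{}:{}", nonce, BASE64_STANDARD.encode(pubkey)).into_bytes();
    signing_key.sign(&message).to_bytes().to_vec()
}
```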

View File

@@ -1,46 +1,293 @@
use futures::{Stream, StreamExt};
use tokio::sync::mpsc::{self, error::SendError};
use tonic::{Status, Streaming};
//! Transport-facing abstractions for protocol/session code.
//!
//! This module separates three concerns:
//!
//! - protocol/session logic wants a small duplex interface ([`Bi`])
//! - transport adapters push concrete stream items to an underlying IO layer
//! - transport boundaries translate between protocol-facing and transport-facing
//! item types via direction-specific converters
//!
//! [`Bi`] is intentionally minimal and transport-agnostic:
//! - [`Bi::recv`] yields inbound protocol messages
//! - [`Bi::send`] accepts outbound protocol/domain items
//!
//! # Generic Ordering Rule
//!
//! This module uses a single convention consistently: when a type or trait is
//! parameterized by protocol message directions, the generic parameters are
//! declared as `Inbound` first, then `Outbound`.
//!
//! For [`Bi`], that means `Bi<Inbound, Outbound>`:
//! - `recv() -> Option<Inbound>`
//! - `send(Outbound)`
//!
//! For adapter types that are parameterized by direction-specific converters,
//! inbound-related converter parameters are declared before outbound-related
//! converter parameters.
//!
//! [`RecvConverter`] and [`SendConverter`] are infallible conversion traits used
//! by adapters to map between protocol-facing and transport-facing item types.
//! The traits themselves are not result-aware; adapters decide how transport
//! errors are handled before (or instead of) conversion.
//!
//! [`grpc::GrpcAdapter`] combines:
//! - a tonic inbound stream
//! - a Tokio sender for outbound transport items
//! - a [`RecvConverter`] for the receive path
//! - a [`SendConverter`] for the send path
//!
//! [`DummyTransport`] is a no-op implementation useful for tests and local actor
//! execution where no real network stream exists.
//!
//! # Component Interaction
//!
//! ```text
//! inbound (network -> protocol)
//! ============================
//!
//! tonic::Streaming<RecvTransport>
//! -> grpc::GrpcAdapter::recv()
//! |
//! +--> on `Ok(item)`: RecvConverter::convert(RecvTransport) -> Inbound
//! +--> on `Err(status)`: log error and close stream (`None`)
//! -> Bi::recv()
//! -> protocol/session actor
//!
//! outbound (protocol -> network)
//! ==============================
//!
//! protocol/session actor
//! -> Bi::send(Outbound)
//! -> grpc::GrpcAdapter::send()
//! |
//! +--> SendConverter::convert(Outbound) -> SendTransport
//! -> Tokio mpsc::Sender<SendTransport>
//! -> tonic response stream
//! ```
//!
//! # Design Notes
//!
//! - `send()` returns [`Error`] only for transport delivery failures (for
//! example, when the outbound channel is closed).
//! - [`grpc::GrpcAdapter`] logs tonic receive errors and treats them as stream
//! closure (`None`).
//! - When protocol-facing and transport-facing types are identical, use
//! [`IdentityRecvConverter`] / [`IdentitySendConverter`].
use std::marker::PhantomData;
// Abstraction for stream for sans-io capabilities
pub trait Bi<T, U>: Stream<Item = Result<T, Status>> + Send + Sync + 'static {
type Error;
fn send(
&mut self,
item: Result<U, Status>,
) -> impl std::future::Future<Output = Result<(), Self::Error>> + Send;
use async_trait::async_trait;
/// Errors returned by transport adapters implementing [`Bi`].
#[derive(thiserror::Error, Debug)]
pub enum Error {
#[error("Transport channel is closed")]
ChannelClosed,
}
// Bi-directional stream abstraction for handling gRPC streaming requests and responses
pub struct BiStream<T, U> {
pub request_stream: Streaming<T>,
pub response_sender: mpsc::Sender<Result<U, Status>>,
/// Minimal bidirectional transport abstraction used by protocol code.
///
/// `Bi<Inbound, Outbound>` models a duplex channel with:
/// - inbound items of type `Inbound` read via [`Bi::recv`]
/// - outbound items of type `Outbound` written via [`Bi::send`]
#[async_trait]
pub trait Bi<Inbound, Outbound>: Send + Sync + 'static {
async fn send(&mut self, item: Outbound) -> Result<(), Error>;
async fn recv(&mut self) -> Option<Inbound>;
}
impl<T, U> Stream for BiStream<T, U>
where
T: Send + 'static,
U: Send + 'static,
{
type Item = Result<T, Status>;
/// Converts transport-facing inbound items into protocol-facing inbound items.
pub trait RecvConverter: Send + Sync + 'static {
type Input;
type Output;
fn poll_next(
mut self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Option<Self::Item>> {
self.request_stream.poll_next_unpin(cx)
fn convert(&self, item: Self::Input) -> Self::Output;
}
/// Converts protocol/domain outbound items into transport-facing outbound items.
pub trait SendConverter: Send + Sync + 'static {
type Input;
type Output;
fn convert(&self, item: Self::Input) -> Self::Output;
}
/// A [`RecvConverter`] that forwards values unchanged.
pub struct IdentityRecvConverter<T> {
_marker: PhantomData<T>,
}
impl<T> IdentityRecvConverter<T> {
pub fn new() -> Self {
Self {
_marker: PhantomData,
}
}
}
impl<T, U> Bi<T, U> for BiStream<T, U>
where
T: Send + 'static,
U: Send + 'static,
{
type Error = SendError<Result<U, Status>>;
async fn send(&mut self, item: Result<U, Status>) -> Result<(), Self::Error> {
self.response_sender.send(item).await
impl<T> Default for IdentityRecvConverter<T> {
fn default() -> Self {
Self::new()
}
}
impl<T> RecvConverter for IdentityRecvConverter<T>
where
T: Send + Sync + 'static,
{
type Input = T;
type Output = T;
fn convert(&self, item: Self::Input) -> Self::Output {
item
}
}
/// A [`SendConverter`] that forwards values unchanged.
pub struct IdentitySendConverter<T> {
_marker: PhantomData<T>,
}
impl<T> IdentitySendConverter<T> {
pub fn new() -> Self {
Self {
_marker: PhantomData,
}
}
}
impl<T> Default for IdentitySendConverter<T> {
fn default() -> Self {
Self::new()
}
}
impl<T> SendConverter for IdentitySendConverter<T>
where
T: Send + Sync + 'static,
{
type Input = T;
type Output = T;
fn convert(&self, item: Self::Input) -> Self::Output {
item
}
}
/// gRPC-specific transport adapters and helpers.
pub mod grpc {
use async_trait::async_trait;
use futures::StreamExt;
use tokio::sync::mpsc;
use tonic::Streaming;
use super::{Bi, Error, RecvConverter, SendConverter};
/// [`Bi`] adapter backed by a tonic gRPC bidirectional stream.
///
/// Tonic receive errors are logged and treated as stream closure (`None`).
/// The receive converter is only invoked for successful inbound transport
/// items.
pub struct GrpcAdapter<InboundConverter, OutboundConverter>
where
InboundConverter: RecvConverter,
OutboundConverter: SendConverter,
{
sender: mpsc::Sender<OutboundConverter::Output>,
receiver: Streaming<InboundConverter::Input>,
inbound_converter: InboundConverter,
outbound_converter: OutboundConverter,
}
impl<InboundTransport, Inbound, InboundConverter, OutboundConverter>
GrpcAdapter<InboundConverter, OutboundConverter>
where
InboundConverter: RecvConverter<Input = InboundTransport, Output = Inbound>,
OutboundConverter: SendConverter,
{
pub fn new(
sender: mpsc::Sender<OutboundConverter::Output>,
receiver: Streaming<InboundTransport>,
inbound_converter: InboundConverter,
outbound_converter: OutboundConverter,
) -> Self {
Self {
sender,
receiver,
inbound_converter,
outbound_converter,
}
}
}
#[async_trait]
impl<InboundConverter, OutboundConverter> Bi<InboundConverter::Output, OutboundConverter::Input>
for GrpcAdapter<InboundConverter, OutboundConverter>
where
InboundConverter: RecvConverter,
OutboundConverter: SendConverter,
OutboundConverter::Input: Send + 'static,
OutboundConverter::Output: Send + 'static,
{
#[tracing::instrument(level = "trace", skip(self, item))]
async fn send(&mut self, item: OutboundConverter::Input) -> Result<(), Error> {
let outbound = self.outbound_converter.convert(item);
self.sender
.send(outbound)
.await
.map_err(|_| Error::ChannelClosed)
}
#[tracing::instrument(level = "trace", skip(self))]
async fn recv(&mut self) -> Option<InboundConverter::Output> {
match self.receiver.next().await {
Some(Ok(item)) => Some(self.inbound_converter.convert(item)),
Some(Err(error)) => {
tracing::error!(error = ?error, "grpc transport recv failed; closing stream");
None
}
None => None,
}
}
}
}
/// No-op [`Bi`] transport for tests and manual actor usage.
///
/// `send` drops all items and succeeds. [`Bi::recv`] never resolves and therefore
/// does not busy-wait or spuriously close the stream.
pub struct DummyTransport<Inbound, Outbound> {
_marker: PhantomData<(Inbound, Outbound)>,
}
impl<Inbound, Outbound> DummyTransport<Inbound, Outbound> {
pub fn new() -> Self {
Self {
_marker: PhantomData,
}
}
}
impl<Inbound, Outbound> Default for DummyTransport<Inbound, Outbound> {
fn default() -> Self {
Self::new()
}
}
#[async_trait]
impl<Inbound, Outbound> Bi<Inbound, Outbound> for DummyTransport<Inbound, Outbound>
where
Inbound: Send + Sync + 'static,
Outbound: Send + Sync + 'static,
{
async fn send(&mut self, _item: Outbound) -> Result<(), Error> {
Ok(())
}
async fn recv(&mut self) -> Option<Inbound> {
std::future::pending::<()>().await;
None
}
}
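
As a usage note for the reworked `Bi` abstraction, here is a minimal sketch (assumed, not part of the diff) of protocol code written against `Bi<Inbound, Outbound>` and exercised with `DummyTransport`:

```rust
// Minimal sketch, assuming the arbiter_proto::transport API exactly as added above.
use std::time::Duration;

use arbiter_proto::transport::{Bi, DummyTransport, Error};

/// Hypothetical protocol step, generic over any transport: echo inbound items back
/// out until the peer closes the stream.
async fn echo_until_closed<T>(transport: &mut T) -> Result<(), Error>
where
    T: Bi<String, String>,
{
    while let Some(item) = transport.recv().await {
        transport.send(item).await?;
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    let mut transport: DummyTransport<String, String> = DummyTransport::new();
    // DummyTransport drops outbound items, so this succeeds immediately.
    transport.send("hello".to_string()).await?;
    // recv() on DummyTransport never resolves, so the echo loop is expected to time out.
    let _ = tokio::time::timeout(
        Duration::from_millis(10),
        echo_until_closed(&mut transport),
    )
    .await;
    Ok(())
}
```

The same function would run unchanged over `grpc::GrpcAdapter` with identity converters, which is the point of keeping the trait transport-agnostic.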

View File

@@ -0,0 +1,128 @@
use std::fmt::Display;
use base64::{Engine as _, prelude::BASE64_URL_SAFE};
use rustls_pki_types::CertificateDer;
const ARBITER_URL_SCHEME: &str = "arbiter";
const CERT_QUERY_KEY: &str = "cert";
const BOOTSTRAP_TOKEN_QUERY_KEY: &str = "bootstrap_token";
pub struct ArbiterUrl {
pub host: String,
pub port: u16,
pub ca_cert: CertificateDer<'static>,
pub bootstrap_token: Option<String>,
}
impl Display for ArbiterUrl {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut base = format!(
"{ARBITER_URL_SCHEME}://{}:{}?{CERT_QUERY_KEY}={}",
self.host,
self.port,
BASE64_URL_SAFE.encode(self.ca_cert.to_vec())
);
if let Some(token) = &self.bootstrap_token {
base.push_str(&format!("&{BOOTSTRAP_TOKEN_QUERY_KEY}={}", token));
}
f.write_str(&base)
}
}
#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum Error {
#[error("Invalid URL scheme, expected '{ARBITER_URL_SCHEME}://'")]
#[diagnostic(
code(arbiter::url::invalid_scheme),
help("The URL must start with '{ARBITER_URL_SCHEME}://'")
)]
InvalidScheme,
#[error("Missing host in URL")]
#[diagnostic(
code(arbiter::url::missing_host),
help("The URL must include a host, e.g., '{ARBITER_URL_SCHEME}://127.0.0.1:<port>'")
)]
MissingHost,
#[error("Missing port in URL")]
#[diagnostic(
code(arbiter::url::missing_port),
help("The URL must include a port, e.g., '{ARBITER_URL_SCHEME}://127.0.0.1:1234'")
)]
MissingPort,
#[error("Missing 'cert' query parameter in URL")]
#[diagnostic(
code(arbiter::url::missing_cert),
help("The URL must include a 'cert' query parameter")
)]
MissingCert,
#[error("Invalid base64 in 'cert' query parameter: {0}")]
#[diagnostic(code(arbiter::url::invalid_cert_base64))]
InvalidCertBase64(#[from] base64::DecodeError),
}
impl<'a> TryFrom<&'a str> for ArbiterUrl {
type Error = Error;
fn try_from(value: &'a str) -> Result<Self, Self::Error> {
let url = url::Url::parse(value).map_err(|_| Error::InvalidScheme)?;
if url.scheme() != ARBITER_URL_SCHEME {
return Err(Error::InvalidScheme);
}
let host = url.host_str().ok_or(Error::MissingHost)?.to_string();
let port = url.port().ok_or(Error::MissingPort)?;
let cert_str = url
.query_pairs()
.find(|(k, _)| k == CERT_QUERY_KEY)
.ok_or(Error::MissingCert)?
.1;
let cert = BASE64_URL_SAFE.decode(cert_str.as_ref())?;
let cert = CertificateDer::from_slice(&cert).into_owned();
let bootstrap_token = url
.query_pairs()
.find(|(k, _)| k == BOOTSTRAP_TOKEN_QUERY_KEY)
.map(|(_, v)| v.to_string());
Ok(ArbiterUrl {
host,
port,
ca_cert: cert,
bootstrap_token,
})
}
}
#[cfg(test)]
mod tests {
use rcgen::generate_simple_self_signed;
use rstest::rstest;
use super::*;
#[rstest]
fn test_parsing_correctness(
#[values("127.0.0.1", "localhost", "192.168.1.1", "some.domain.com")] host: &str,
#[values(None, Some("token123".to_string()))] bootstrap_token: Option<String>,
) {
let cert = generate_simple_self_signed(&["Arbiter CA".into()]).unwrap();
let cert = cert.cert.der();
let url = ArbiterUrl {
host: host.to_string(),
port: 1234,
ca_cert: cert.clone().into_owned(),
bootstrap_token,
};
let url_str = url.to_string();
let parsed_url = ArbiterUrl::try_from(url_str.as_str()).unwrap();
assert_eq!(url.host, parsed_url.host);
assert_eq!(url.port, parsed_url.port);
assert_eq!(url.ca_cert.to_vec(), parsed_url.ca_cert.to_vec());
assert_eq!(url.bootstrap_token, parsed_url.bootstrap_token);
}
}
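
A hedged usage sketch (not in the diff) of consuming such a URL on the client side, assuming the `arbiter_proto::url` module path implied by the `pub mod url;` export above:

```rust
// Hedged usage sketch: parsing a share URL produced by ArbiterUrl's Display impl.
use arbiter_proto::url::{ArbiterUrl, Error};

/// Example shape: arbiter://127.0.0.1:1234?cert=<base64url(DER)>&bootstrap_token=<token>
fn endpoint_from_url(raw: &str) -> Result<(String, u16), Error> {
    let parsed = ArbiterUrl::try_from(raw)?;
    // The CA certificate and optional bootstrap token are also available on the struct.
    Ok((parsed.host, parsed.port))
}
```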

BIN
server/crates/arbiter-server/.DS_Store vendored Normal file

Binary file not shown.

View File

@@ -3,26 +3,22 @@ name = "arbiter-server"
version = "0.1.0"
edition = "2024"
repository = "https://git.markettakers.org/MarketTakers/arbiter"
license = "Apache-2.0"
[dependencies]
diesel = { version = "2.3.6", features = [
"sqlite",
"uuid",
"time",
"chrono",
"serde_json",
] }
diesel = { version = "2.3.6", features = ["chrono", "returning_clauses_for_sqlite_3_35", "serde_json", "time", "uuid"] }
diesel-async = { version = "0.7.4", features = [
"bb8",
"migrations",
"sqlite",
"tokio",
] }
ed25519.workspace = true
ed25519-dalek.workspace = true
arbiter-proto.path = "../arbiter-proto"
tracing.workspace = true
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
tonic.workspace = true
tonic.features = ["tls-aws-lc"]
tokio.workspace = true
rustls.workspace = true
smlang.workspace = true
@@ -30,32 +26,23 @@ miette.workspace = true
thiserror.workspace = true
diesel_migrations = { version = "2.3.1", features = ["sqlite"] }
async-trait.workspace = true
statig = { version = "0.4.1", features = ["async"] }
secrecy = "0.10.3"
futures.workspace = true
tokio-stream.workspace = true
dashmap = "6.1.0"
rand.workspace = true
rcgen = { version = "0.14.7", features = [
"aws_lc_rs",
"pem",
"x509-parser",
"zeroize",
], default-features = false }
rkyv = { version = "0.8.15", features = [
"aligned",
"little_endian",
"pointer_width_64",
] }
restructed = "0.2.2"
rcgen.workspace = true
chrono.workspace = true
bytes = "1.11.1"
memsafe = "0.4.0"
chacha20poly1305 = { version = "0.10.1", features = ["std"] }
zeroize = { version = "1.8.2", features = ["std", "simd"] }
kameo.workspace = true
prost-types.workspace = true
x25519-dalek.workspace = true
chacha20poly1305 = { version = "0.10.1", features = ["std"] }
argon2 = { version = "0.5.3", features = ["zeroize"] }
restructed = "0.2.2"
strum = { version = "0.27.2", features = ["derive"] }
pem = "3.0.6"
[dev-dependencies]
insta = "1.46.3"
test-log = { version = "0.2", default-features = false, features = ["trace"] }
tempfile = "3.25.0"

View File

@@ -1,22 +1,50 @@
create table if not exists aead_encrypted (
create table if not exists root_key_history (
id INTEGER not null PRIMARY KEY,
current_nonce integer not null default(1), -- if re-encrypted, this should be incremented
-- root key stored as aead encrypted artifact, with only difference that it's decrypted by unseal key (derived from user password)
root_key_encryption_nonce blob not null default(1), -- if re-encrypted, this should be incremented. Used for encrypting root key
data_encryption_nonce blob not null default(1), -- nonce used for encrypting with key itself
ciphertext blob not null,
tag blob not null,
schema_version integer not null default(1) -- server would need to reencrypt, because this means that we have changed algorithm
schema_version integer not null default(1), -- server would need to reencrypt, because this means that we have changed algorithm
salt blob not null -- for key derivation
) STRICT;
create table if not exists aead_encrypted (
id INTEGER not null PRIMARY KEY,
current_nonce blob not null default(1), -- if re-encrypted, this should be incremented
ciphertext blob not null,
tag blob not null,
schema_version integer not null default(1), -- server would need to reencrypt, because this means that we have changed algorithm
associated_root_key_id integer not null references root_key_history (id) on delete RESTRICT,
created_at integer not null default(unixepoch ('now'))
) STRICT;
create unique index if not exists uniq_nonce_per_root_key on aead_encrypted (
current_nonce,
associated_root_key_id
);
create table if not exists tls_history (
id INTEGER not null PRIMARY KEY,
cert text not null,
cert_key text not null, -- PEM Encoded private key
ca_cert text not null,
ca_key text not null, -- PEM Encoded private key
created_at integer not null default(unixepoch ('now'))
) STRICT;
-- This is a singleton
create table if not exists arbiter_settings (
id INTEGER not null PRIMARY KEY CHECK (id = 1), -- singleton row, id must be 1
root_key_id integer references aead_encrypted (id) on delete RESTRICT, -- if null, means wasn't bootstrapped yet
cert_key blob not null,
cert blob not null
root_key_id integer references root_key_history (id) on delete RESTRICT, -- if null, means wasn't bootstrapped yet
tls_id integer references tls_history (id) on delete RESTRICT
) STRICT;
insert into arbiter_settings (id) values (1) on conflict do nothing; -- ensure singleton row exists
create table if not exists useragent_client (
id integer not null primary key,
nonce integer not null default (1), -- used for auth challenge
nonce integer not null default(1), -- used for auth challenge
public_key blob not null,
created_at integer not null default(unixepoch ('now')),
updated_at integer not null default(unixepoch ('now'))
@@ -24,7 +52,7 @@ create table if not exists useragent_client (
create table if not exists program_client (
id integer not null primary key,
nonce integer not null default (1), -- used for auth challenge
nonce integer not null default(1), -- used for auth challenge
public_key blob not null,
created_at integer not null default(unixepoch ('now')),
updated_at integer not null default(unixepoch ('now'))


View File

@@ -1,2 +0,0 @@
pub mod user_agent;
pub mod client;

View File

@@ -1,40 +1,37 @@
use arbiter_proto::{BOOTSTRAP_TOKEN_PATH, home_path};
use diesel::{ExpressionMethods, QueryDsl};
use arbiter_proto::{BOOTSTRAP_PATH, home_path};
use diesel::QueryDsl;
use diesel_async::RunQueryDsl;
use kameo::{Actor, messages};
use memsafe::MemSafe;
use miette::Diagnostic;
use rand::{RngExt, distr::StandardUniform, make_rng, rngs::StdRng};
use secrecy::SecretString;
use thiserror::Error;
use tracing::info;
use zeroize::{Zeroize, Zeroizing};
use crate::{
context::{self, ServerContext},
db::{self, DatabasePool, schema},
use rand::{
RngExt,
distr::{Alphanumeric},
make_rng,
rngs::StdRng,
};
use thiserror::Error;
use crate::db::{self, DatabasePool, schema};
const TOKEN_LENGTH: usize = 64;
pub async fn generate_token() -> Result<String, std::io::Error> {
let rng: StdRng = make_rng();
let token: String = rng
.sample_iter::<char, _>(StandardUniform)
.take(TOKEN_LENGTH)
.fold(Default::default(), |mut accum, char| {
let token: String = rng.sample_iter(Alphanumeric).take(TOKEN_LENGTH).fold(
Default::default(),
|mut accum, char| {
accum += char.to_string().as_str();
accum
});
},
);
tokio::fs::write(home_path()?.join(BOOTSTRAP_TOKEN_PATH), token.as_str()).await?;
tokio::fs::write(home_path()?.join(BOOTSTRAP_PATH), token.as_str()).await?;
Ok(token)
}
#[derive(Error, Debug, Diagnostic)]
pub enum BootstrapError {
pub enum Error {
#[error("Database error: {0}")]
#[diagnostic(code(arbiter_server::bootstrap::database))]
Database(#[from] db::PoolError),
@@ -49,12 +46,12 @@ pub enum BootstrapError {
}
#[derive(Actor)]
pub struct BootstrapActor {
pub struct Bootstrapper {
token: Option<String>,
}
impl BootstrapActor {
pub async fn new(db: &DatabasePool) -> Result<Self, BootstrapError> {
impl Bootstrapper {
pub async fn new(db: &DatabasePool) -> Result<Self, Error> {
let mut conn = db.get().await?;
let row_count: i64 = schema::useragent_client::table
@@ -64,10 +61,9 @@ impl BootstrapActor {
drop(conn);
let token = if row_count == 0 {
let token = generate_token().await?;
info!(%token, "Generated bootstrap token");
tokio::fs::write(home_path()?.join(BOOTSTRAP_TOKEN_PATH), token.as_str()).await?;
Some(token)
} else {
None
@@ -75,15 +71,10 @@ impl BootstrapActor {
Ok(Self { token })
}
#[cfg(test)]
pub fn get_token(&self) -> Option<String> {
self.token.clone()
}
}
#[messages]
impl BootstrapActor {
impl Bootstrapper {
#[message]
pub fn is_correct_token(&self, token: String) -> bool {
match &self.token {
@@ -102,3 +93,11 @@ impl BootstrapActor {
}
}
}
#[messages]
impl Bootstrapper {
#[message]
pub fn get_token(&self) -> Option<String> {
self.token.clone()
}
}

View File

@@ -1,12 +0,0 @@
use arbiter_proto::{
proto::{ClientRequest, ClientResponse},
transport::Bi,
};
use crate::ServerContext;
pub(crate) async fn handle_client(
_context: ServerContext,
_bistream: impl Bi<ClientRequest, ClientResponse>,
) {
}

View File

@@ -0,0 +1,101 @@
use arbiter_proto::proto::client::{
AuthChallengeRequest, AuthChallengeSolution, ClientRequest,
client_request::Payload as ClientRequestPayload,
};
use ed25519_dalek::VerifyingKey;
use tracing::error;
use crate::actors::client::{
ConnectionProps,
auth::state::{AuthContext, AuthStateMachine},
session::ClientSession,
};
#[derive(thiserror::Error, Debug, Clone, PartialEq, Eq)]
pub enum Error {
#[error("Unexpected message payload")]
UnexpectedMessagePayload,
#[error("Invalid client public key length")]
InvalidClientPubkeyLength,
#[error("Invalid client public key encoding")]
InvalidAuthPubkeyEncoding,
#[error("Database pool unavailable")]
DatabasePoolUnavailable,
#[error("Database operation failed")]
DatabaseOperationFailed,
#[error("Public key not registered")]
PublicKeyNotRegistered,
#[error("Invalid signature length")]
InvalidSignatureLength,
#[error("Invalid challenge solution")]
InvalidChallengeSolution,
#[error("Transport error")]
Transport,
}
mod state;
use state::*;
fn parse_auth_event(payload: ClientRequestPayload) -> Result<AuthEvents, Error> {
match payload {
ClientRequestPayload::AuthChallengeRequest(AuthChallengeRequest { pubkey }) => {
let pubkey_bytes = pubkey.as_array().ok_or(Error::InvalidClientPubkeyLength)?;
let pubkey = VerifyingKey::from_bytes(pubkey_bytes)
.map_err(|_| Error::InvalidAuthPubkeyEncoding)?;
Ok(AuthEvents::AuthRequest(ChallengeRequest {
pubkey: pubkey.into(),
}))
}
ClientRequestPayload::AuthChallengeSolution(AuthChallengeSolution { signature }) => {
Ok(AuthEvents::ReceivedSolution(ChallengeSolution {
solution: signature,
}))
}
}
}
pub async fn authenticate(props: &mut ConnectionProps) -> Result<VerifyingKey, Error> {
let mut state = AuthStateMachine::new(AuthContext::new(props));
loop {
let transport = state.context_mut().conn.transport.as_mut();
let Some(ClientRequest {
payload: Some(payload),
}) = transport.recv().await
else {
return Err(Error::Transport);
};
let event = parse_auth_event(payload)?;
match state.process_event(event).await {
Ok(AuthStates::AuthOk(key)) => return Ok(key.clone()),
Err(AuthError::ActionFailed(err)) => {
error!(?err, "State machine action failed");
return Err(err);
}
Err(AuthError::GuardFailed(err)) => {
error!(?err, "State machine guard failed");
return Err(err);
}
Err(AuthError::InvalidEvent) => {
error!("Invalid event for current state");
return Err(Error::InvalidChallengeSolution);
}
Err(AuthError::TransitionsFailed) => {
error!("Invalid state transition");
return Err(Error::InvalidChallengeSolution);
}
_ => (),
}
}
}
pub async fn authenticate_and_create(
mut props: ConnectionProps,
) -> Result<ClientSession, Error> {
let key = authenticate(&mut props).await?;
let session = ClientSession::new(props, key);
Ok(session)
}
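
For contrast with the server-side `authenticate` loop above, here is a hedged sketch (not in the diff) of what a client-side counterpart might look like. The transport direction (`Bi<ClientResponse, ClientRequest>`) and the reuse of the transport `Error` for handshake failures are assumptions for illustration only:

```rust
// Hedged client-side counterpart to the server auth loop shown above.
use arbiter_proto::proto::client::{
    client_request::Payload as Req, client_response::Payload as Resp, AuthChallengeRequest,
    AuthChallengeSolution, ClientRequest, ClientResponse,
};
use arbiter_proto::transport::{Bi, Error};
use ed25519_dalek::{Signer as _, SigningKey};

async fn client_authenticate<T>(transport: &mut T, key: &SigningKey) -> Result<(), Error>
where
    T: Bi<ClientResponse, ClientRequest>,
{
    // 1. Announce our public key.
    transport
        .send(ClientRequest {
            payload: Some(Req::AuthChallengeRequest(AuthChallengeRequest {
                pubkey: key.verifying_key().to_bytes().to_vec(),
            })),
        })
        .await?;
    // 2. Receive the challenge and sign format_challenge(nonce, pubkey).
    let Some(ClientResponse {
        payload: Some(Resp::AuthChallenge(challenge)),
    }) = transport.recv().await
    else {
        return Err(Error::ChannelClosed);
    };
    let message = arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
    transport
        .send(ClientRequest {
            payload: Some(Req::AuthChallengeSolution(AuthChallengeSolution {
                signature: key.sign(&message).to_bytes().to_vec(),
            })),
        })
        .await?;
    // 3. Expect AuthOk; anything else means the handshake failed.
    match transport.recv().await {
        Some(ClientResponse {
            payload: Some(Resp::AuthOk(_)),
        }) => Ok(()),
        _ => Err(Error::ChannelClosed),
    }
}
```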

View File

@@ -0,0 +1,136 @@
use arbiter_proto::proto::client::{
AuthChallenge, ClientResponse,
client_response::Payload as ClientResponsePayload,
};
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, update};
use diesel_async::RunQueryDsl;
use ed25519_dalek::VerifyingKey;
use tracing::error;
use super::Error;
use crate::{actors::client::ConnectionProps, db::schema};
pub struct ChallengeRequest {
pub pubkey: VerifyingKey,
}
pub struct ChallengeContext {
pub challenge: AuthChallenge,
pub key: VerifyingKey,
}
pub struct ChallengeSolution {
pub solution: Vec<u8>,
}
smlang::statemachine!(
name: Auth,
custom_error: true,
transitions: {
*Init + AuthRequest(ChallengeRequest) / async prepare_challenge = SentChallenge(ChallengeContext),
SentChallenge(ChallengeContext) + ReceivedSolution(ChallengeSolution) [async verify_solution] / provide_key = AuthOk(VerifyingKey),
}
);
async fn create_nonce(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Result<i32, Error> {
let mut db_conn = db.get().await.map_err(|e| {
error!(error = ?e, "Database pool error");
Error::DatabasePoolUnavailable
})?;
db_conn
.exclusive_transaction(|conn| {
Box::pin(async move {
let current_nonce = schema::program_client::table
.filter(schema::program_client::public_key.eq(pubkey_bytes.to_vec()))
.select(schema::program_client::nonce)
.first::<i32>(conn)
.await?;
update(schema::program_client::table)
.filter(schema::program_client::public_key.eq(pubkey_bytes.to_vec()))
.set(schema::program_client::nonce.eq(current_nonce + 1))
.execute(conn)
.await?;
Result::<_, diesel::result::Error>::Ok(current_nonce)
})
})
.await
.optional()
.map_err(|e| {
error!(error = ?e, "Database error");
Error::DatabaseOperationFailed
})?
.ok_or_else(|| {
error!(?pubkey_bytes, "Public key not found in database");
Error::PublicKeyNotRegistered
})
}
pub struct AuthContext<'a> {
pub(super) conn: &'a mut ConnectionProps,
}
impl<'a> AuthContext<'a> {
pub fn new(conn: &'a mut ConnectionProps) -> Self {
Self { conn }
}
}
impl AuthStateMachineContext for AuthContext<'_> {
type Error = Error;
async fn verify_solution(
&self,
ChallengeContext { challenge, key }: &ChallengeContext,
ChallengeSolution { solution }: &ChallengeSolution,
) -> Result<bool, Self::Error> {
let formatted_challenge =
arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
let signature = solution.as_slice().try_into().map_err(|_| {
error!(?solution, "Invalid signature length");
Error::InvalidChallengeSolution
})?;
let valid = key.verify_strict(&formatted_challenge, &signature).is_ok();
Ok(valid)
}
async fn prepare_challenge(
&mut self,
ChallengeRequest { pubkey }: ChallengeRequest,
) -> Result<ChallengeContext, Self::Error> {
let nonce = create_nonce(&self.conn.db, pubkey.as_bytes()).await?;
let challenge = AuthChallenge {
pubkey: pubkey.as_bytes().to_vec(),
nonce,
};
self.conn
.transport
.send(Ok(ClientResponse {
payload: Some(ClientResponsePayload::AuthChallenge(challenge.clone())),
}))
.await
.map_err(|e| {
error!(?e, "Failed to send auth challenge");
Error::Transport
})?;
Ok(ChallengeContext {
challenge,
key: pubkey,
})
}
fn provide_key(
&mut self,
state_data: &ChallengeContext,
_: ChallengeSolution,
) -> Result<VerifyingKey, Self::Error> {
Ok(state_data.key)
}
}

View File

@@ -0,0 +1,48 @@
use arbiter_proto::{
proto::client::{ClientRequest, ClientResponse},
transport::Bi,
};
use kameo::actor::Spawn;
use tracing::{error, info};
use crate::{actors::client::session::ClientSession, db};
#[derive(Debug, Clone, PartialEq, Eq, thiserror::Error)]
pub enum ClientError {
#[error("Expected message with payload")]
MissingRequestPayload,
#[error("Unexpected request payload")]
UnexpectedRequestPayload,
#[error("State machine error")]
StateTransitionFailed,
#[error(transparent)]
Auth(#[from] auth::Error),
}
pub type Transport = Box<dyn Bi<ClientRequest, Result<ClientResponse, ClientError>> + Send>;
pub struct ConnectionProps {
pub(crate) db: db::DatabasePool,
pub(crate) transport: Transport,
}
impl ConnectionProps {
pub fn new(db: db::DatabasePool, transport: Transport) -> Self {
Self { db, transport }
}
}
pub mod auth;
pub mod session;
pub async fn connect_client(props: ConnectionProps) {
match auth::authenticate_and_create(props).await {
Ok(session) => {
ClientSession::spawn(session);
info!("Client authenticated, session started");
}
Err(err) => {
error!(?err, "Authentication failed, closing connection");
}
}
}

View File

@@ -0,0 +1,90 @@
use arbiter_proto::proto::client::{ClientRequest, ClientResponse};
use ed25519_dalek::VerifyingKey;
use kameo::Actor;
use tokio::select;
use tracing::{error, info};
use crate::actors::client::{ClientError, ConnectionProps};
pub struct ClientSession {
props: ConnectionProps,
key: VerifyingKey,
}
impl ClientSession {
pub(crate) fn new(props: ConnectionProps, key: VerifyingKey) -> Self {
Self { props, key }
}
pub async fn process_transport_inbound(&mut self, req: ClientRequest) -> Output {
let msg = req.payload.ok_or_else(|| {
error!(actor = "client", "Received message with no payload");
ClientError::MissingRequestPayload
})?;
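// No post-auth request types are handled yet, so every payload is rejected for now.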
match msg {
_ => Err(ClientError::UnexpectedRequestPayload),
}
}
}
type Output = Result<ClientResponse, ClientError>;
impl Actor for ClientSession {
type Args = Self;
type Error = ();
async fn on_start(
args: Self::Args,
_: kameo::prelude::ActorRef<Self>,
) -> Result<Self, Self::Error> {
Ok(args)
}
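// Overriding `next` lets the session multiplex the kameo mailbox with the transport
// stream: mailbox signals are passed back to the runtime untouched, inbound requests
// are processed inline, and a failed send or a closed transport stops the actor.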
async fn next(
&mut self,
_actor_ref: kameo::prelude::WeakActorRef<Self>,
mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
) -> Option<kameo::mailbox::Signal<Self>> {
loop {
select! {
signal = mailbox_rx.recv() => {
return signal;
}
msg = self.props.transport.recv() => {
match msg {
Some(request) => {
match self.process_transport_inbound(request).await {
Ok(resp) => {
if self.props.transport.send(Ok(resp)).await.is_err() {
error!(actor = "client", reason = "channel closed", "send.failed");
return Some(kameo::mailbox::Signal::Stop);
}
}
Err(err) => {
let _ = self.props.transport.send(Err(err)).await;
return Some(kameo::mailbox::Signal::Stop);
}
}
}
None => {
info!(actor = "client", "transport.closed");
return Some(kameo::mailbox::Signal::Stop);
}
}
}
}
}
}
}
impl ClientSession {
pub fn new_test(db: crate::db::DatabasePool) -> Self {
use arbiter_proto::transport::DummyTransport;
let transport: super::Transport = Box::new(DummyTransport::new());
let props = ConnectionProps::new(db, transport);
let key = VerifyingKey::from_bytes(&[0u8; 32]).unwrap();
Self { props, key }
}
}

View File

@@ -0,0 +1 @@
pub mod v1;

View File

@@ -0,0 +1,237 @@
use std::ops::Deref as _;
use argon2::{Algorithm, Argon2, password_hash::Salt as ArgonSalt};
use chacha20poly1305::{
AeadInPlace, Key, KeyInit as _, XChaCha20Poly1305, XNonce,
aead::{AeadMut, Error, Payload},
};
use memsafe::MemSafe;
use rand::{
Rng as _, SeedableRng,
rngs::{StdRng, SysRng},
};
pub const ROOT_KEY_TAG: &[u8] = "arbiter/seal/v1".as_bytes();
pub const TAG: &[u8] = "arbiter/private-key/v1".as_bytes();
pub const NONCE_LENGTH: usize = 24;
#[derive(Default)]
pub struct Nonce([u8; NONCE_LENGTH]);
impl Nonce {
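/// Treats the nonce as a big-endian counter: the least significant byte sits at the
/// end of the array, a 0xFF byte resets to zero and carries left, and a fully
/// saturated counter silently wraps back to all zeros.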
pub fn increment(&mut self) {
for i in (0..self.0.len()).rev() {
if self.0[i] == 0xFF {
self.0[i] = 0;
} else {
self.0[i] += 1;
break;
}
}
}
pub fn to_vec(&self) -> Vec<u8> {
self.0.to_vec()
}
}
impl<'a> TryFrom<&'a [u8]> for Nonce {
type Error = ();
fn try_from(value: &'a [u8]) -> Result<Self, Self::Error> {
if value.len() != NONCE_LENGTH {
return Err(());
}
let mut nonce = [0u8; NONCE_LENGTH];
nonce.copy_from_slice(value);
Ok(Self(nonce))
}
}
pub struct KeyCell(pub MemSafe<Key>);
impl From<MemSafe<Key>> for KeyCell {
fn from(value: MemSafe<Key>) -> Self {
Self(value)
}
}
impl TryFrom<MemSafe<Vec<u8>>> for KeyCell {
type Error = ();
fn try_from(mut value: MemSafe<Vec<u8>>) -> Result<Self, Self::Error> {
let value = value.read().unwrap();
if value.len() != size_of::<Key>() {
return Err(());
}
let mut cell = MemSafe::new(Key::default()).unwrap();
{
let mut cell_write = cell.write().unwrap();
let cell_slice: &mut [u8] = cell_write.as_mut();
cell_slice.copy_from_slice(&value);
}
Ok(Self(cell))
}
}
impl KeyCell {
pub fn new_secure_random() -> Self {
let mut key = MemSafe::new(Key::default()).unwrap();
{
let mut key_buffer = key.write().unwrap();
let key_buffer: &mut [u8] = key_buffer.as_mut();
let mut rng = StdRng::try_from_rng(&mut SysRng).unwrap();
rng.fill_bytes(key_buffer);
}
key.into()
}
pub fn encrypt_in_place(
&mut self,
nonce: &Nonce,
associated_data: &[u8],
mut buffer: impl AsMut<Vec<u8>>,
) -> Result<(), Error> {
let key_reader = self.0.read().unwrap();
let key_ref = key_reader.deref();
let cipher = XChaCha20Poly1305::new(key_ref);
let nonce = XNonce::from_slice(nonce.0.as_ref());
let buffer = buffer.as_mut();
cipher.encrypt_in_place(nonce, associated_data, buffer)
}
pub fn decrypt_in_place(
&mut self,
nonce: &Nonce,
associated_data: &[u8],
buffer: &mut MemSafe<Vec<u8>>,
) -> Result<(), Error> {
let key_reader = self.0.read().unwrap();
let key_ref = key_reader.deref();
let cipher = XChaCha20Poly1305::new(key_ref);
let nonce = XNonce::from_slice(nonce.0.as_ref());
let mut buffer = buffer.write().unwrap();
let buffer: &mut Vec<u8> = buffer.as_mut();
cipher.decrypt_in_place(nonce, associated_data, buffer)
}
pub fn encrypt(
&mut self,
nonce: &Nonce,
associated_data: &[u8],
plaintext: impl AsRef<[u8]>,
) -> Result<Vec<u8>, Error> {
let key_reader = self.0.read().unwrap();
let key_ref = key_reader.deref();
let mut cipher = XChaCha20Poly1305::new(key_ref);
let nonce = XNonce::from_slice(nonce.0.as_ref());
let ciphertext = cipher.encrypt(
nonce,
Payload {
msg: plaintext.as_ref(),
aad: associated_data,
},
)?;
Ok(ciphertext)
}
}
pub type Salt = [u8; ArgonSalt::RECOMMENDED_LENGTH];
pub fn generate_salt() -> Salt {
let mut salt = Salt::default();
let mut rng = StdRng::try_from_rng(&mut SysRng).unwrap();
rng.fill_bytes(&mut salt);
salt
}
/// User passwords vary in length and may not contain enough entropy.
/// Derive a fixed-length key from the password using Argon2id, which is designed for password hashing and key derivation.
pub fn derive_seal_key(mut password: MemSafe<Vec<u8>>, salt: &Salt) -> KeyCell {
let params = argon2::Params::new(262_144, 3, 4, None).unwrap();
let hasher = Argon2::new(Algorithm::Argon2id, argon2::Version::V0x13, params);
let mut key = MemSafe::new(Key::default()).unwrap();
{
let password_source = password.read().unwrap();
let mut key_buffer = key.write().unwrap();
let key_buffer: &mut [u8] = key_buffer.as_mut();
hasher
.hash_password_into(password_source.deref(), salt, key_buffer)
.unwrap();
}
key.into()
}
#[cfg(test)]
mod tests {
use super::*;
use memsafe::MemSafe;
#[test]
pub fn derive_seal_key_deterministic() {
static PASSWORD: &[u8] = b"password";
let password = MemSafe::new(PASSWORD.to_vec()).unwrap();
let password2 = MemSafe::new(PASSWORD.to_vec()).unwrap();
let salt = generate_salt();
let mut key1 = derive_seal_key(password, &salt);
let mut key2 = derive_seal_key(password2, &salt);
let key1_reader = key1.0.read().unwrap();
let key2_reader = key2.0.read().unwrap();
assert_eq!(key1_reader.deref(), key2_reader.deref());
}
#[test]
pub fn successful_derive() {
static PASSWORD: &[u8] = b"password";
let password = MemSafe::new(PASSWORD.to_vec()).unwrap();
let salt = generate_salt();
let mut key = derive_seal_key(password, &salt);
let key_reader = key.0.read().unwrap();
let key_ref = key_reader.deref();
assert_ne!(key_ref.as_slice(), &[0u8; 32][..]);
}
#[test]
pub fn encrypt_decrypt() {
static PASSWORD: &[u8] = b"password";
let password = MemSafe::new(PASSWORD.to_vec()).unwrap();
let salt = generate_salt();
let mut key = derive_seal_key(password, &salt);
let nonce = Nonce(*b"unique nonce 123 1231233"); // 24 bytes for XChaCha20Poly1305
let associated_data = b"associated data";
let mut buffer = b"secret data".to_vec();
key.encrypt_in_place(&nonce, associated_data, &mut buffer)
.unwrap();
assert_ne!(buffer, b"secret data");
let mut buffer = MemSafe::new(buffer).unwrap();
key.decrypt_in_place(&nonce, associated_data, &mut buffer)
.unwrap();
let buffer = buffer.read().unwrap();
assert_eq!(*buffer, b"secret data");
}
#[test]
// We should fuzz this
pub fn test_nonce_increment() {
let mut nonce = Nonce([0u8; NONCE_LENGTH]);
nonce.increment();
assert_eq!(
nonce.0,
[
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1
]
);
}
}

View File

@@ -0,0 +1,407 @@
use diesel::{
ExpressionMethods as _, OptionalExtension, QueryDsl, SelectableHelper,
dsl::{insert_into, update},
};
use diesel_async::{AsyncConnection, RunQueryDsl};
use kameo::{Actor, Reply, messages};
use memsafe::MemSafe;
use strum::{EnumDiscriminants, IntoDiscriminant};
use tracing::{error, info};
use crate::db::{
self,
models::{self, RootKeyHistory},
schema::{self},
};
use encryption::v1::{self, KeyCell, Nonce};
pub mod encryption;
#[derive(Default, EnumDiscriminants)]
#[strum_discriminants(derive(Reply), vis(pub))]
enum State {
#[default]
Unbootstrapped,
Sealed {
root_key_history_id: i32,
},
Unsealed {
root_key_history_id: i32,
root_key: KeyCell,
},
}
#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum Error {
#[error("Keyholder is already bootstrapped")]
#[diagnostic(code(arbiter::keyholder::already_bootstrapped))]
AlreadyBootstrapped,
#[error("Keyholder is not bootstrapped")]
#[diagnostic(code(arbiter::keyholder::not_bootstrapped))]
NotBootstrapped,
#[error("Invalid key provided")]
#[diagnostic(code(arbiter::keyholder::invalid_key))]
InvalidKey,
#[error("Requested aead entry not found")]
#[diagnostic(code(arbiter::keyholder::aead_not_found))]
NotFound,
#[error("Encryption error: {0}")]
#[diagnostic(code(arbiter::keyholder::encryption_error))]
Encryption(#[from] chacha20poly1305::aead::Error),
#[error("Database error: {0}")]
#[diagnostic(code(arbiter::keyholder::database_error))]
DatabaseConnection(#[from] db::PoolError),
#[error("Database transaction error: {0}")]
#[diagnostic(code(arbiter::keyholder::database_transaction_error))]
DatabaseTransaction(#[from] diesel::result::Error),
#[error("Broken database")]
#[diagnostic(code(arbiter::keyholder::broken_database))]
BrokenDatabase,
}
/// Manages vault root key and tracks current state of the vault (bootstrapped/unbootstrapped, sealed/unsealed).
/// Provides API for encrypting and decrypting data using the vault root key.
/// An abstraction over the database that ensures nonces are never reused and encryption keys are never exposed in plaintext outside of this actor.
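///
/// A rough usage sketch (assuming a kameo `ActorRef<KeyHolder>` named `key_holder`;
/// `CreateNew` and `Decrypt` are the message structs generated by `#[messages]` below):
///
/// ```ignore
/// // Encrypt some plaintext under the root key and persist it, then read it back.
/// let id = key_holder.ask(CreateNew { plaintext }).await.expect("keyholder reachable");
/// let plaintext = key_holder.ask(Decrypt { aead_id: id }).await.expect("keyholder reachable");
/// ```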
#[derive(Actor)]
pub struct KeyHolder {
db: db::DatabasePool,
state: State,
}
#[messages]
impl KeyHolder {
pub async fn new(db: db::DatabasePool) -> Result<Self, Error> {
let state = {
let mut conn = db.get().await?;
let (root_key_history,) = schema::arbiter_settings::table
.left_join(schema::root_key_history::table)
.select((Option::<RootKeyHistory>::as_select(),))
.get_result::<(Option<RootKeyHistory>,)>(&mut conn)
.await?;
match root_key_history {
Some(root_key_history) => State::Sealed {
root_key_history_id: root_key_history.id,
},
None => State::Unbootstrapped,
}
};
Ok(Self { db, state })
}
// Exclusive transaction to avoid race conditions if multiple keyholders write;
// an additional layer of protection against nonce reuse
async fn get_new_nonce(pool: &db::DatabasePool, root_key_id: i32) -> Result<Nonce, Error> {
let mut conn = pool.get().await?;
let nonce = conn
.exclusive_transaction(|conn| {
Box::pin(async move {
let current_nonce: Vec<u8> = schema::root_key_history::table
.filter(schema::root_key_history::id.eq(root_key_id))
.select(schema::root_key_history::data_encryption_nonce)
.first(conn)
.await?;
let mut nonce =
v1::Nonce::try_from(current_nonce.as_slice()).map_err(|_| {
error!(
"Broken database: invalid nonce for root key history id={}",
root_key_id
);
Error::BrokenDatabase
})?;
nonce.increment();
update(schema::root_key_history::table)
.filter(schema::root_key_history::id.eq(root_key_id))
.set(schema::root_key_history::data_encryption_nonce.eq(nonce.to_vec()))
.execute(conn)
.await?;
Result::<_, Error>::Ok(nonce)
})
})
.await?;
Ok(nonce)
}
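// Bootstrap flow: derive a seal key from the operator-provided secret with Argon2id,
// generate a random root key, encrypt the root key under the seal key, persist the
// ciphertext together with its nonces and salt, and leave the actor in the
// `Unsealed` state.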
#[message]
pub async fn bootstrap(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
if !matches!(self.state, State::Unbootstrapped) {
return Err(Error::AlreadyBootstrapped);
}
let salt = v1::generate_salt();
let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);
let mut root_key = KeyCell::new_secure_random();
// Zero nonces are fine because each is used for exactly one encryption
let root_key_nonce = v1::Nonce::default();
let data_encryption_nonce = v1::Nonce::default();
let root_key_ciphertext: Vec<u8> = {
let root_key_reader = root_key.0.read().unwrap();
let root_key_reader = root_key_reader.as_slice();
seal_key
.encrypt(&root_key_nonce, v1::ROOT_KEY_TAG, root_key_reader)
.map_err(|err| {
error!(?err, "Fatal bootstrap error");
Error::Encryption(err)
})?
};
let mut conn = self.db.get().await?;
let data_encryption_nonce_bytes = data_encryption_nonce.to_vec();
let root_key_history_id = conn
.transaction(|conn| {
Box::pin(async move {
let root_key_history_id: i32 = insert_into(schema::root_key_history::table)
.values(&models::NewRootKeyHistory {
ciphertext: root_key_ciphertext,
tag: v1::ROOT_KEY_TAG.to_vec(),
root_key_encryption_nonce: root_key_nonce.to_vec(),
data_encryption_nonce: data_encryption_nonce_bytes,
schema_version: 1,
salt: salt.to_vec(),
})
.returning(schema::root_key_history::id)
.get_result(conn)
.await?;
update(schema::arbiter_settings::table)
.set(schema::arbiter_settings::root_key_id.eq(root_key_history_id))
.execute(conn)
.await?;
Result::<_, diesel::result::Error>::Ok(root_key_history_id)
})
})
.await?;
self.state = State::Unsealed {
root_key,
root_key_history_id,
};
info!("Keyholder bootstrapped successfully");
Ok(())
}
#[message]
pub async fn try_unseal(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
let State::Sealed {
root_key_history_id,
} = &self.state
else {
return Err(Error::NotBootstrapped);
};
// We don't want to hold connection while doing expensive KDF work
let current_key = {
let mut conn = self.db.get().await?;
schema::root_key_history::table
.filter(schema::root_key_history::id.eq(*root_key_history_id))
.select(RootKeyHistory::as_select())
.first(&mut conn)
.await?
};
let salt = &current_key.salt;
let salt = v1::Salt::try_from(salt.as_slice()).map_err(|_| {
error!("Broken database: invalid salt for root key");
Error::BrokenDatabase
})?;
let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);
let mut root_key = MemSafe::new(current_key.ciphertext.clone()).unwrap();
let nonce = v1::Nonce::try_from(current_key.root_key_encryption_nonce.as_slice()).map_err(
|_| {
error!("Broken database: invalid nonce for root key");
Error::BrokenDatabase
},
)?;
seal_key
.decrypt_in_place(&nonce, v1::ROOT_KEY_TAG, &mut root_key)
.map_err(|err| {
error!(?err, "Failed to unseal root key: invalid seal key");
Error::InvalidKey
})?;
self.state = State::Unsealed {
root_key_history_id: current_key.id,
root_key: v1::KeyCell::try_from(root_key).map_err(|err| {
error!(?err, "Broken database: invalid encryption key size");
Error::BrokenDatabase
})?,
};
info!("Keyholder unsealed successfully");
Ok(())
}
// Decrypts the `aead_encrypted` entry with the given ID and returns the plaintext
#[message]
pub async fn decrypt(&mut self, aead_id: i32) -> Result<MemSafe<Vec<u8>>, Error> {
let State::Unsealed { root_key, .. } = &mut self.state else {
return Err(Error::NotBootstrapped);
};
let row: models::AeadEncrypted = {
let mut conn = self.db.get().await?;
schema::aead_encrypted::table
.select(models::AeadEncrypted::as_select())
.filter(schema::aead_encrypted::id.eq(aead_id))
.first(&mut conn)
.await
.optional()?
.ok_or(Error::NotFound)?
};
let nonce = v1::Nonce::try_from(row.current_nonce.as_slice()).map_err(|_| {
error!(
"Broken database: invalid nonce for aead_encrypted id={}",
aead_id
);
Error::BrokenDatabase
})?;
let mut output = MemSafe::new(row.ciphertext).unwrap();
root_key.decrypt_in_place(&nonce, v1::TAG, &mut output)?;
Ok(output)
}
// Creates a new `aead_encrypted` entry in the database and returns its ID
#[message]
pub async fn create_new(&mut self, mut plaintext: MemSafe<Vec<u8>>) -> Result<i32, Error> {
let State::Unsealed {
root_key,
root_key_history_id,
} = &mut self.state
else {
return Err(Error::NotBootstrapped);
};
// Order matters here - `get_new_nonce` acquires a connection, so it must complete before we acquire another one below
// Borrow checker note: the &mut borrow a few lines above is disjoint from this field
let nonce = Self::get_new_nonce(&self.db, *root_key_history_id).await?;
let mut ciphertext_buffer = plaintext.write().unwrap();
let ciphertext_buffer: &mut Vec<u8> = ciphertext_buffer.as_mut();
root_key.encrypt_in_place(&nonce, v1::TAG, &mut *ciphertext_buffer)?;
let ciphertext = std::mem::take(ciphertext_buffer);
let mut conn = self.db.get().await?;
let aead_id: i32 = insert_into(schema::aead_encrypted::table)
.values(&models::NewAeadEncrypted {
ciphertext,
tag: v1::TAG.to_vec(),
current_nonce: nonce.to_vec(),
schema_version: 1,
associated_root_key_id: *root_key_history_id,
created_at: chrono::Utc::now().timestamp() as i32,
})
.returning(schema::aead_encrypted::id)
.get_result(&mut conn)
.await?;
Ok(aead_id)
}
#[message]
pub fn get_state(&self) -> StateDiscriminants {
self.state.discriminant()
}
#[message]
pub fn seal(&mut self) -> Result<(), Error> {
let State::Unsealed {
root_key_history_id,
..
} = &self.state
else {
return Err(Error::NotBootstrapped);
};
self.state = State::Sealed {
root_key_history_id: *root_key_history_id,
};
Ok(())
}
}
#[cfg(test)]
mod tests {
use diesel::SelectableHelper;
use diesel_async::RunQueryDsl;
use memsafe::MemSafe;
use crate::db::{self};
use super::*;
async fn bootstrapped_actor(db: &db::DatabasePool) -> KeyHolder {
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
actor.bootstrap(seal_key).await.unwrap();
actor
}
#[tokio::test]
#[test_log::test]
async fn nonce_monotonic_even_when_nonce_allocation_interleaves() {
let db = db::create_test_pool().await;
let mut actor = bootstrapped_actor(&db).await;
let root_key_history_id = match actor.state {
State::Unsealed {
root_key_history_id,
..
} => root_key_history_id,
_ => panic!("expected unsealed state"),
};
let n1 = KeyHolder::get_new_nonce(&db, root_key_history_id)
.await
.unwrap();
let n2 = KeyHolder::get_new_nonce(&db, root_key_history_id)
.await
.unwrap();
assert!(n2.to_vec() > n1.to_vec(), "nonce must increase");
let mut conn = db.get().await.unwrap();
let root_row: models::RootKeyHistory = schema::root_key_history::table
.select(models::RootKeyHistory::as_select())
.first(&mut conn)
.await
.unwrap();
assert_eq!(root_row.data_encryption_nonce, n2.to_vec());
let id = actor
.create_new(MemSafe::new(b"post-interleave".to_vec()).unwrap())
.await
.unwrap();
let row: models::AeadEncrypted = schema::aead_encrypted::table
.filter(schema::aead_encrypted::id.eq(id))
.select(models::AeadEncrypted::as_select())
.first(&mut conn)
.await
.unwrap();
assert!(
row.current_nonce > n2.to_vec(),
"next write must advance nonce"
);
}
}

View File

@@ -0,0 +1,40 @@
use kameo::actor::{ActorRef, Spawn};
use miette::Diagnostic;
use thiserror::Error;
use crate::{
actors::{bootstrap::Bootstrapper, keyholder::KeyHolder},
db,
};
pub mod bootstrap;
pub mod client;
pub mod keyholder;
pub mod user_agent;
#[derive(Error, Debug, Diagnostic)]
pub enum SpawnError {
#[error("Failed to spawn Bootstrapper actor")]
#[diagnostic(code(SpawnError::Bootstrapper))]
Bootstrapper(#[from] bootstrap::Error),
#[error("Failed to spawn KeyHolder actor")]
#[diagnostic(code(SpawnError::KeyHolder))]
KeyHolder(#[from] keyholder::Error),
}
/// Long-lived actors that are shared across all connections and handle global state and operations
#[derive(Clone)]
pub struct GlobalActors {
pub key_holder: ActorRef<KeyHolder>,
pub bootstrapper: ActorRef<Bootstrapper>,
}
impl GlobalActors {
pub async fn spawn(db: db::DatabasePool) -> Result<Self, SpawnError> {
Ok(Self {
bootstrapper: Bootstrapper::spawn(Bootstrapper::new(&db).await?),
key_holder: KeyHolder::spawn(KeyHolder::new(db.clone()).await?),
})
}
}

View File

@@ -1,369 +0,0 @@
use arbiter_proto::proto::{
UserAgentRequest, UserAgentResponse,
auth::{
self, AuthChallenge, AuthChallengeRequest, AuthOk, ClientMessage,
ServerMessage as AuthServerMessage, client_message::Payload as ClientAuthPayload,
server_message::Payload as ServerAuthPayload,
},
user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
};
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, dsl::update};
use diesel_async::{AsyncConnection, RunQueryDsl};
use ed25519_dalek::VerifyingKey;
use futures::StreamExt;
use kameo::{
Actor,
actor::{ActorRef, Spawn},
error::SendError,
messages,
prelude::Context,
};
use tokio::sync::mpsc;
use tokio::sync::mpsc::Sender;
use tonic::Status;
use tracing::{error, info};
use crate::{
ServerContext,
context::bootstrap::{BootstrapActor, ConsumeToken},
db::{self, schema},
errors::GrpcStatusExt,
};
/// Context for state machine with validated key and sent challenge
/// Challenge is then transformed to bytes using shared function and verified
#[derive(Clone, Debug)]
pub struct ChallengeContext {
challenge: AuthChallenge,
key: VerifyingKey,
}
// Request context with deserialized public key for state machine.
// This intermediate struct is needed because the state machine branches depending on presence of bootstrap token,
// but we want to have the deserialized key in both branches.
#[derive(Clone, Debug)]
pub struct AuthRequestContext {
pubkey: VerifyingKey,
bootstrap_token: Option<String>,
}
smlang::statemachine!(
name: UserAgent,
derive_states: [Debug],
custom_error: false,
transitions: {
*Init + AuthRequest(AuthRequestContext) / auth_request_context = ReceivedAuthRequest(AuthRequestContext),
ReceivedAuthRequest(AuthRequestContext) + ReceivedBootstrapToken = Authenticated,
ReceivedAuthRequest(AuthRequestContext) + SentChallenge(ChallengeContext) / move_challenge = WaitingForChallengeSolution(ChallengeContext),
WaitingForChallengeSolution(ChallengeContext) + ReceivedGoodSolution = Authenticated,
WaitingForChallengeSolution(ChallengeContext) + ReceivedBadSolution = AuthError, // block further transitions, but connection should close anyway
}
);
pub struct DummyContext;
impl UserAgentStateMachineContext for DummyContext {
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn move_challenge(
&mut self,
state_data: &AuthRequestContext,
event_data: ChallengeContext,
) -> Result<ChallengeContext, ()> {
Ok(event_data)
}
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn auth_request_context(
&mut self,
event_data: AuthRequestContext,
) -> Result<AuthRequestContext, ()> {
Ok(event_data)
}
}
#[derive(Actor)]
pub struct UserAgentActor {
db: db::DatabasePool,
bootstapper: ActorRef<BootstrapActor>,
state: UserAgentStateMachine<DummyContext>,
tx: Sender<Result<UserAgentResponse, Status>>,
}
impl UserAgentActor {
pub(crate) fn new(
context: ServerContext,
tx: Sender<Result<UserAgentResponse, Status>>,
) -> Self {
Self {
db: context.db.clone(),
bootstapper: context.bootstrapper.clone(),
state: UserAgentStateMachine::new(DummyContext),
tx,
}
}
pub(crate) fn new_manual(
db: db::DatabasePool,
bootstapper: ActorRef<BootstrapActor>,
tx: Sender<Result<UserAgentResponse, Status>>,
) -> Self {
Self {
db,
bootstapper,
state: UserAgentStateMachine::new(DummyContext),
tx,
}
}
fn transition(&mut self, event: UserAgentEvents) -> Result<(), Status> {
self.state.process_event(event).map_err(|e| {
error!(?e, "State transition failed");
Status::internal("State machine error")
})?;
Ok(())
}
async fn auth_with_bootstrap_token(
&mut self,
pubkey: ed25519_dalek::VerifyingKey,
token: String,
) -> Result<UserAgentResponse, Status> {
let token_ok: bool = self
.bootstapper
.ask(ConsumeToken { token })
.await
.map_err(|e| {
error!(?pubkey, "Failed to consume bootstrap token: {e}");
Status::internal("Bootstrap token consumption failed")
})?;
if !token_ok {
error!(?pubkey, "Invalid bootstrap token provided");
return Err(Status::invalid_argument("Invalid bootstrap token"));
}
{
let mut conn = self.db.get().await.to_status()?;
diesel::insert_into(schema::useragent_client::table)
.values((
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
schema::useragent_client::nonce.eq(1),
))
.execute(&mut conn)
.await
.to_status()?;
}
self.transition(UserAgentEvents::ReceivedBootstrapToken)?;
Ok(auth_response(ServerAuthPayload::AuthOk(AuthOk {})))
}
async fn auth_with_challenge(&mut self, pubkey: VerifyingKey, pubkey_bytes: Vec<u8>) -> Output {
let nonce: Option<i32> = {
let mut db_conn = self.db.get().await.to_status()?;
db_conn
.transaction(|conn| {
Box::pin(async move {
let current_nonce = schema::useragent_client::table
.filter(
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
)
.select(schema::useragent_client::nonce)
.first::<i32>(conn)
.await?;
update(schema::useragent_client::table)
.filter(
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
)
.set(schema::useragent_client::nonce.eq(current_nonce + 1))
.execute(conn)
.await?;
Result::<_, diesel::result::Error>::Ok(current_nonce)
})
})
.await
.optional()
.to_status()?
};
let Some(nonce) = nonce else {
error!(?pubkey, "Public key not found in database");
return Err(Status::unauthenticated("Public key not registered"));
};
let challenge = auth::AuthChallenge {
pubkey: pubkey_bytes,
nonce: nonce,
};
self.transition(UserAgentEvents::SentChallenge(ChallengeContext {
challenge: challenge.clone(),
key: pubkey,
}))?;
info!(
?pubkey,
?challenge,
"Sent authentication challenge to client"
);
Ok(auth_response(ServerAuthPayload::AuthChallenge(challenge)))
}
fn verify_challenge_solution(
&self,
solution: &auth::AuthChallengeSolution,
) -> Result<(bool, &ChallengeContext), Status> {
let UserAgentStates::WaitingForChallengeSolution(challenge_context) = self.state.state()
else {
error!("Received challenge solution in invalid state");
return Err(Status::invalid_argument(
"Invalid state for challenge solution",
));
};
let formatted_challenge = arbiter_proto::format_challenge(&challenge_context.challenge);
let signature = solution.signature.as_slice().try_into().map_err(|_| {
error!(?solution, "Invalid signature length");
Status::invalid_argument("Invalid signature length")
})?;
let valid = challenge_context
.key
.verify_strict(&formatted_challenge, &signature)
.is_ok();
Ok((valid, challenge_context))
}
}
type Output = Result<UserAgentResponse, Status>;
fn auth_response(payload: ServerAuthPayload) -> UserAgentResponse {
UserAgentResponse {
payload: Some(UserAgentResponsePayload::AuthMessage(AuthServerMessage {
payload: Some(payload),
})),
}
}
#[messages]
impl UserAgentActor {
#[message(ctx)]
pub async fn handle_auth_challenge_request(
&mut self,
req: AuthChallengeRequest,
ctx: &mut Context<Self, Output>,
) -> Output {
let pubkey = req.pubkey.as_array().ok_or(Status::invalid_argument(
"Expected pubkey to have specific length",
))?;
let pubkey = VerifyingKey::from_bytes(pubkey).map_err(|err| {
error!(?pubkey, "Failed to convert to VerifyingKey");
Status::invalid_argument("Failed to convert pubkey to VerifyingKey")
})?;
self.transition(UserAgentEvents::AuthRequest(AuthRequestContext {
pubkey,
bootstrap_token: req.bootstrap_token.clone(),
}))?;
match req.bootstrap_token {
Some(token) => self.auth_with_bootstrap_token(pubkey, token).await,
None => self.auth_with_challenge(pubkey, req.pubkey).await,
}
}
#[message(ctx)]
pub async fn handle_auth_challenge_solution(
&mut self,
solution: auth::AuthChallengeSolution,
ctx: &mut Context<Self, Output>,
) -> Output {
let (valid, challenge_context) = self.verify_challenge_solution(&solution)?;
if valid {
info!(
?challenge_context,
"Client provided valid solution to authentication challenge"
);
self.transition(UserAgentEvents::ReceivedGoodSolution)?;
Ok(auth_response(ServerAuthPayload::AuthOk(AuthOk {})))
} else {
error!("Client provided invalid solution to authentication challenge");
self.transition(UserAgentEvents::ReceivedBadSolution)?;
Err(Status::unauthenticated("Invalid challenge solution"))
}
}
}
#[cfg(test)]
mod tests {
use arbiter_proto::proto::{
UserAgentResponse, auth::{AuthChallengeRequest, AuthOk},
user_agent_response::Payload as UserAgentResponsePayload,
};
use kameo::actor::Spawn;
use crate::{
actors::user_agent::HandleAuthChallengeRequest, context::bootstrap::BootstrapActor, db,
};
use super::UserAgentActor;
#[tokio::test]
#[test_log::test]
pub async fn test_bootstrap_token_auth() {
let db = db::create_test_pool().await;
// explicitly not installing any user_agent pubkeys
let bootstrapper = BootstrapActor::new(&db).await.unwrap(); // this will create bootstrap token
let token = bootstrapper.get_token().unwrap();
let bootstrapper_ref = BootstrapActor::spawn(bootstrapper);
let user_agent = UserAgentActor::new_manual(
db.clone(),
bootstrapper_ref,
tokio::sync::mpsc::channel(1).0, // dummy channel, we won't actually send responses in this test
);
let user_agent_ref = UserAgentActor::spawn(user_agent);
// simulate client sending auth request with bootstrap token
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
let result = user_agent_ref
.ask(HandleAuthChallengeRequest {
req: AuthChallengeRequest {
pubkey: pubkey_bytes,
bootstrap_token: Some(token),
},
})
.await
.expect("Shouldn't fail to send message");
// auth succeeded
assert_eq!(
result,
UserAgentResponse {
payload: Some(UserAgentResponsePayload::AuthMessage(
arbiter_proto::proto::auth::ServerMessage {
payload: Some(arbiter_proto::proto::auth::server_message::Payload::AuthOk(
AuthOk {},
)),
},
)),
}
);
}
}
mod transport;
pub(crate) use transport::handle_user_agent;

View File

@@ -0,0 +1,118 @@
use arbiter_proto::proto::user_agent::{
AuthChallengeRequest, AuthChallengeSolution, UserAgentRequest,
user_agent_request::Payload as UserAgentRequestPayload,
};
use ed25519_dalek::VerifyingKey;
use tracing::error;
use crate::actors::user_agent::{
ConnectionProps,
auth::state::{AuthContext, AuthStateMachine}, session::UserAgentSession,
};
#[derive(thiserror::Error, Debug, PartialEq)]
pub enum Error {
#[error("Unexpected message payload")]
UnexpectedMessagePayload,
#[error("Invalid client public key length")]
InvalidClientPubkeyLength,
#[error("Invalid client public key encoding")]
InvalidAuthPubkeyEncoding,
#[error("Database pool unavailable")]
DatabasePoolUnavailable,
#[error("Database operation failed")]
DatabaseOperationFailed,
#[error("Public key not registered")]
PublicKeyNotRegistered,
#[error("Transport error")]
Transport,
#[error("Invalid bootstrap token")]
InvalidBootstrapToken,
#[error("Bootstrapper actor unreachable")]
BootstrapperActorUnreachable,
#[error("Invalid challenge solution")]
InvalidChallengeSolution,
}
mod state;
use state::*;
fn parse_auth_event(payload: UserAgentRequestPayload) -> Result<AuthEvents, Error> {
match payload {
UserAgentRequestPayload::AuthChallengeRequest(AuthChallengeRequest {
pubkey,
bootstrap_token: None,
}) => {
let pubkey_bytes = pubkey.as_array().ok_or(Error::InvalidClientPubkeyLength)?;
let pubkey = VerifyingKey::from_bytes(pubkey_bytes)
.map_err(|_| Error::InvalidAuthPubkeyEncoding)?;
Ok(AuthEvents::AuthRequest(ChallengeRequest {
pubkey: pubkey.into(),
}))
}
UserAgentRequestPayload::AuthChallengeRequest(AuthChallengeRequest {
pubkey,
bootstrap_token: Some(token),
}) => {
let pubkey_bytes = pubkey.as_array().ok_or(Error::InvalidClientPubkeyLength)?;
let pubkey = VerifyingKey::from_bytes(pubkey_bytes)
.map_err(|_| Error::InvalidAuthPubkeyEncoding)?;
Ok(AuthEvents::BootstrapAuthRequest(BootstrapAuthRequest {
pubkey: pubkey.into(),
token,
}))
}
UserAgentRequestPayload::AuthChallengeSolution(AuthChallengeSolution { signature }) => {
Ok(AuthEvents::ReceivedSolution(ChallengeSolution {
solution: signature,
}))
}
_ => Err(Error::UnexpectedMessagePayload),
}
}
pub async fn authenticate(props: &mut ConnectionProps) -> Result<VerifyingKey, Error> {
let mut state = AuthStateMachine::new(AuthContext::new(props));
loop {
// This is needed because `state` now holds a mutable reference to `ConnectionProps`, so we can't access `props` directly here
let transport = state.context_mut().conn.transport.as_mut();
let Some(UserAgentRequest {
payload: Some(payload),
}) = transport.recv().await
else {
return Err(Error::Transport);
};
let event = parse_auth_event(payload)?;
match state.process_event(event).await {
Ok(AuthStates::AuthOk(key)) => return Ok(key.clone()),
Err(AuthError::ActionFailed(err)) => {
error!(?err, "State machine action failed");
return Err(err);
}
Err(AuthError::GuardFailed(err)) => {
error!(?err, "State machine guard failed");
return Err(err);
}
Err(AuthError::InvalidEvent) => {
error!("Invalid event for current state");
return Err(Error::InvalidChallengeSolution);
}
Err(AuthError::TransitionsFailed) => {
error!("Invalid state transition");
return Err(Error::InvalidChallengeSolution);
}
_ => (),
}
}
}
pub async fn authenticate_and_create(
mut props: ConnectionProps,
) -> Result<UserAgentSession, Error> {
let key = authenticate(&mut props).await?;
let session = UserAgentSession::new(props, key);
Ok(session)
}

View File

@@ -0,0 +1,202 @@
use arbiter_proto::proto::user_agent::{
AuthChallenge, UserAgentResponse,
user_agent_response::Payload as UserAgentResponsePayload,
};
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, update};
use diesel_async::RunQueryDsl;
use ed25519_dalek::VerifyingKey;
use tracing::error;
use super::Error;
use crate::{
actors::{bootstrap::ConsumeToken, user_agent::ConnectionProps},
db::schema,
};
pub struct ChallengeRequest {
pub pubkey: VerifyingKey,
}
pub struct BootstrapAuthRequest {
pub pubkey: VerifyingKey,
pub token: String,
}
pub struct ChallengeContext {
pub challenge: AuthChallenge,
pub key: VerifyingKey,
}
pub struct ChallengeSolution {
pub solution: Vec<u8>,
}
smlang::statemachine!(
name: Auth,
custom_error: true,
transitions: {
*Init + AuthRequest(ChallengeRequest) / async prepare_challenge = SentChallenge(ChallengeContext),
Init + BootstrapAuthRequest(BootstrapAuthRequest) [async verify_bootstrap_token] / provide_key_bootstrap = AuthOk(VerifyingKey),
SentChallenge(ChallengeContext) + ReceivedSolution(ChallengeSolution) [async verify_solution] / provide_key = AuthOk(VerifyingKey),
}
);
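// Unlike the client machine, `Init` has a second exit here: a request carrying a
// bootstrap token skips the challenge entirely once `verify_bootstrap_token`
// accepts the token and registers the public key.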
async fn create_nonce(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Result<i32, Error> {
let mut db_conn = db.get().await.map_err(|e| {
error!(error = ?e, "Database pool error");
Error::DatabasePoolUnavailable
})?;
db_conn
.exclusive_transaction(|conn| {
Box::pin(async move {
let current_nonce = schema::useragent_client::table
.filter(schema::useragent_client::public_key.eq(pubkey_bytes.to_vec()))
.select(schema::useragent_client::nonce)
.first::<i32>(conn)
.await?;
update(schema::useragent_client::table)
.filter(schema::useragent_client::public_key.eq(pubkey_bytes.to_vec()))
.set(schema::useragent_client::nonce.eq(current_nonce + 1))
.execute(conn)
.await?;
Result::<_, diesel::result::Error>::Ok(current_nonce)
})
})
.await
.optional()
.map_err(|e| {
error!(error = ?e, "Database error");
Error::DatabaseOperationFailed
})?
.ok_or_else(|| {
error!(?pubkey_bytes, "Public key not found in database");
Error::PublicKeyNotRegistered
})
}
async fn register_key(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Result<(), Error> {
let mut conn = db.get().await.map_err(|e| {
error!(error = ?e, "Database pool error");
Error::DatabasePoolUnavailable
})?;
diesel::insert_into(schema::useragent_client::table)
.values((
schema::useragent_client::public_key.eq(pubkey_bytes.to_vec()),
schema::useragent_client::nonce.eq(1),
))
.execute(&mut conn)
.await
.map_err(|e| {
error!(error = ?e, "Database error");
Error::DatabaseOperationFailed
})?;
Ok(())
}
pub struct AuthContext<'a> {
pub(super) conn: &'a mut ConnectionProps,
}
impl<'a> AuthContext<'a> {
pub fn new(conn: &'a mut ConnectionProps) -> Self {
Self { conn }
}
}
impl AuthStateMachineContext for AuthContext<'_> {
type Error = Error;
async fn verify_solution(
&self,
ChallengeContext { challenge, key }: &ChallengeContext,
ChallengeSolution { solution }: &ChallengeSolution,
) -> Result<bool, Self::Error> {
let formatted_challenge =
arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
let signature = solution.as_slice().try_into().map_err(|_| {
error!(?solution, "Invalid signature length");
Error::InvalidChallengeSolution
})?;
let valid = key.verify_strict(&formatted_challenge, &signature).is_ok();
Ok(valid)
}
async fn prepare_challenge(
&mut self,
ChallengeRequest { pubkey }: ChallengeRequest,
) -> Result<ChallengeContext, Self::Error> {
let nonce = create_nonce(&self.conn.db, pubkey.as_bytes()).await?;
let challenge = AuthChallenge {
pubkey: pubkey.as_bytes().to_vec(),
nonce,
};
self.conn
.transport
.send(Ok(UserAgentResponse {
payload: Some(UserAgentResponsePayload::AuthChallenge(challenge.clone())),
}))
.await
.map_err(|e| {
error!(?e, "Failed to send auth challenge");
Error::Transport
})?;
Ok(ChallengeContext {
challenge,
key: pubkey,
})
}
#[allow(missing_docs)]
#[allow(clippy::result_unit_err)]
async fn verify_bootstrap_token(
&self,
BootstrapAuthRequest { pubkey, token }: &BootstrapAuthRequest,
) -> Result<bool, Self::Error> {
let token_ok: bool = self
.conn
.actors
.bootstrapper
.ask(ConsumeToken {
token: token.clone(),
})
.await
.map_err(|e| {
error!(?pubkey, "Failed to consume bootstrap token: {e}");
Error::BootstrapperActorUnreachable
})?;
if !token_ok {
error!(?pubkey, "Invalid bootstrap token provided");
return Err(Error::InvalidBootstrapToken);
}
register_key(&self.conn.db, pubkey.as_bytes()).await?;
Ok(true)
}
fn provide_key_bootstrap(
&mut self,
event_data: BootstrapAuthRequest,
) -> Result<VerifyingKey, Self::Error> {
Ok(event_data.pubkey)
}
fn provide_key(
&mut self,
state_data: &ChallengeContext,
_: ChallengeSolution,
) -> Result<VerifyingKey, Self::Error> {
Ok(state_data.key)
}
}

View File

@@ -0,0 +1,60 @@
use arbiter_proto::{
proto::user_agent::{UserAgentRequest, UserAgentResponse},
transport::Bi,
};
use kameo::actor::Spawn;
use tracing::{error, info};
use crate::{actors::{GlobalActors, user_agent::session::UserAgentSession}, db};
#[derive(Debug, thiserror::Error, PartialEq)]
pub enum UserAgentError {
#[error("Expected message with payload")]
MissingRequestPayload,
#[error("Unexpected request payload")]
UnexpectedRequestPayload,
#[error("Invalid state for unseal encrypted key")]
InvalidStateForUnsealEncryptedKey,
#[error("client_pubkey must be 32 bytes")]
InvalidClientPubkeyLength,
#[error("State machine error")]
StateTransitionFailed,
#[error("Vault is not available")]
KeyHolderActorUnreachable,
#[error(transparent)]
Auth(#[from] auth::Error),
}
pub type Transport =
Box<dyn Bi<UserAgentRequest, Result<UserAgentResponse, UserAgentError>> + Send>;
pub struct ConnectionProps {
db: db::DatabasePool,
actors: GlobalActors,
transport: Transport,
}
impl ConnectionProps {
pub fn new(db: db::DatabasePool, actors: GlobalActors, transport: Transport) -> Self {
Self {
db,
actors,
transport,
}
}
}
pub mod session;
pub mod auth;
pub async fn connect_user_agent(props: ConnectionProps) {
match auth::authenticate_and_create(props).await {
Ok(session) => {
UserAgentSession::spawn(session);
info!("User authenticated, session started");
}
Err(err) => {
error!(?err, "Authentication failed, closing connection");
}
}
}

View File

@@ -0,0 +1,241 @@
use std::{ops::DerefMut, sync::Mutex};
use arbiter_proto::proto::user_agent::{
UnsealEncryptedKey, UnsealResult, UnsealStart, UnsealStartResponse, UserAgentRequest,
UserAgentResponse, user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
};
use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
use ed25519_dalek::VerifyingKey;
use kameo::{Actor, error::SendError};
use memsafe::MemSafe;
use tokio::select;
use tracing::{error, info};
use x25519_dalek::{EphemeralSecret, PublicKey};
use crate::actors::{
keyholder::{self, TryUnseal},
user_agent::{ConnectionProps, UserAgentError},
};
mod state;
use state::{DummyContext, UnsealContext, UserAgentEvents, UserAgentStateMachine, UserAgentStates};
pub struct UserAgentSession {
props: ConnectionProps,
key: VerifyingKey,
state: UserAgentStateMachine<DummyContext>,
}
impl UserAgentSession {
pub(crate) fn new(props: ConnectionProps, key: VerifyingKey) -> Self {
Self {
props,
key,
state: UserAgentStateMachine::new(DummyContext),
}
}
fn transition(&mut self, event: UserAgentEvents) -> Result<(), UserAgentError> {
self.state.process_event(event).map_err(|e| {
error!(?e, "State transition failed");
UserAgentError::StateTransitionFailed
})?;
Ok(())
}
pub async fn process_transport_inbound(&mut self, req: UserAgentRequest) -> Output {
let msg = req.payload.ok_or_else(|| {
error!(actor = "useragent", "Received message with no payload");
UserAgentError::MissingRequestPayload
})?;
match msg {
UserAgentRequestPayload::UnsealStart(unseal_start) => {
self.handle_unseal_request(unseal_start).await
}
UserAgentRequestPayload::UnsealEncryptedKey(unseal_encrypted_key) => {
self.handle_unseal_encrypted_key(unseal_encrypted_key).await
}
_ => Err(UserAgentError::UnexpectedRequestPayload),
}
}
}
type Output = Result<UserAgentResponse, UserAgentError>;
fn response(payload: UserAgentResponsePayload) -> UserAgentResponse {
UserAgentResponse {
payload: Some(payload),
}
}
impl UserAgentSession {
async fn handle_unseal_request(&mut self, req: UnsealStart) -> Output {
let secret = EphemeralSecret::random();
let public_key = PublicKey::from(&secret);
let client_pubkey_bytes: [u8; 32] = req
.client_pubkey
.try_into()
.map_err(|_| UserAgentError::InvalidClientPubkeyLength)?;
let client_public_key = PublicKey::from(client_pubkey_bytes);
self.transition(UserAgentEvents::UnsealRequest(UnsealContext {
secret: Mutex::new(Some(secret)),
client_public_key,
}))?;
Ok(response(UserAgentResponsePayload::UnsealStartResponse(
UnsealStartResponse {
server_pubkey: public_key.as_bytes().to_vec(),
},
)))
}
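// The client encrypts the seal key with XChaCha20Poly1305 under the X25519 shared
// secret negotiated in `handle_unseal_request`; here we take the single-use
// ephemeral secret, recompute the shared secret, decrypt the seal key into a
// MemSafe buffer and ask the keyholder to attempt unsealing with it.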
async fn handle_unseal_encrypted_key(&mut self, req: UnsealEncryptedKey) -> Output {
let UserAgentStates::WaitingForUnsealKey(unseal_context) = self.state.state() else {
error!("Received unseal encrypted key in invalid state");
return Err(UserAgentError::InvalidStateForUnsealEncryptedKey);
};
let ephemeral_secret = {
let mut secret_lock = unseal_context.secret.lock().unwrap();
let secret = secret_lock.take();
match secret {
Some(secret) => secret,
None => {
drop(secret_lock);
error!("Ephemeral secret already taken");
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
return Ok(response(UserAgentResponsePayload::UnsealResult(
UnsealResult::InvalidKey.into(),
)));
}
}
};
let nonce = XNonce::from_slice(&req.nonce);
let shared_secret = ephemeral_secret.diffie_hellman(&unseal_context.client_public_key);
let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());
let mut seal_key_buffer = MemSafe::new(req.ciphertext.clone()).unwrap();
let decryption_result = {
let mut write_handle = seal_key_buffer.write().unwrap();
let write_handle = write_handle.deref_mut();
cipher.decrypt_in_place(nonce, &req.associated_data, write_handle)
};
match decryption_result {
Ok(_) => {
match self
.props
.actors
.key_holder
.ask(TryUnseal {
seal_key_raw: seal_key_buffer,
})
.await
{
Ok(_) => {
info!("Successfully unsealed key with client-provided key");
self.transition(UserAgentEvents::ReceivedValidKey)?;
Ok(response(UserAgentResponsePayload::UnsealResult(
UnsealResult::Success.into(),
)))
}
Err(SendError::HandlerError(keyholder::Error::InvalidKey)) => {
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
Ok(response(UserAgentResponsePayload::UnsealResult(
UnsealResult::InvalidKey.into(),
)))
}
Err(SendError::HandlerError(err)) => {
error!(?err, "Keyholder failed to unseal key");
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
Ok(response(UserAgentResponsePayload::UnsealResult(
UnsealResult::InvalidKey.into(),
)))
}
Err(err) => {
error!(?err, "Failed to send unseal request to keyholder");
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
Err(UserAgentError::KeyHolderActorUnreachable)
}
}
}
Err(err) => {
error!(?err, "Failed to decrypt unseal key");
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
Ok(response(UserAgentResponsePayload::UnsealResult(
UnsealResult::InvalidKey.into(),
)))
}
}
}
}
impl Actor for UserAgentSession {
type Args = Self;
type Error = ();
async fn on_start(
args: Self::Args,
_: kameo::prelude::ActorRef<Self>,
) -> Result<Self, Self::Error> {
Ok(args)
}
async fn next(
&mut self,
_actor_ref: kameo::prelude::WeakActorRef<Self>,
mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
) -> Option<kameo::mailbox::Signal<Self>> {
loop {
select! {
signal = mailbox_rx.recv() => {
return signal;
}
msg = self.props.transport.recv() => {
match msg {
Some(request) => {
match self.process_transport_inbound(request).await {
Ok(response) => {
if self.props.transport.send(Ok(response)).await.is_err() {
error!(actor = "useragent", reason = "channel closed", "send.failed");
return Some(kameo::mailbox::Signal::Stop);
}
}
Err(err) => {
let _ = self.props.transport.send(Err(err)).await;
return Some(kameo::mailbox::Signal::Stop);
}
}
}
None => {
info!(actor = "useragent", "transport.closed");
return Some(kameo::mailbox::Signal::Stop);
}
}
}
}
}
}
}
impl UserAgentSession {
pub fn new_test(db: crate::db::DatabasePool, actors: crate::actors::GlobalActors) -> Self {
use arbiter_proto::transport::DummyTransport;
let transport: super::Transport = Box::new(DummyTransport::new());
let props = ConnectionProps::new(db, actors, transport);
let key = VerifyingKey::from_bytes(&[0u8; 32]).unwrap();
Self {
props,
key,
state: UserAgentStateMachine::new(DummyContext),
}
}
}

View File

@@ -0,0 +1,27 @@
use std::sync::Mutex;
use x25519_dalek::{EphemeralSecret, PublicKey};
pub struct UnsealContext {
pub client_public_key: PublicKey,
pub secret: Mutex<Option<EphemeralSecret>>,
}
smlang::statemachine!(
name: UserAgent,
custom_error: false,
transitions: {
*Idle + UnsealRequest(UnsealContext) / generate_temp_keypair = WaitingForUnsealKey(UnsealContext),
WaitingForUnsealKey(UnsealContext) + ReceivedValidKey = Unsealed,
WaitingForUnsealKey(UnsealContext) + ReceivedInvalidKey = Idle,
}
);
pub struct DummyContext;
impl UserAgentStateMachineContext for DummyContext {
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn generate_temp_keypair(&mut self, event_data: UnsealContext) -> Result<UnsealContext, ()> {
Ok(event_data)
}
}

View File

@@ -1,95 +0,0 @@
use super::UserAgentActor;
use arbiter_proto::proto::{
UserAgentRequest, UserAgentResponse,
auth::{
self, AuthChallenge, AuthChallengeRequest, AuthOk, ClientMessage,
ServerMessage as AuthServerMessage, client_message::Payload as ClientAuthPayload,
server_message::Payload as ServerAuthPayload,
},
user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
};
use futures::StreamExt;
use kameo::{
actor::{ActorRef, Spawn as _},
error::SendError,
};
use tokio::sync::mpsc;
use tonic::Status;
use tracing::error;
use crate::{
actors::user_agent::{HandleAuthChallengeRequest, HandleAuthChallengeSolution},
context::ServerContext,
};
pub(crate) async fn handle_user_agent(
context: ServerContext,
mut req_stream: tonic::Streaming<UserAgentRequest>,
tx: mpsc::Sender<Result<UserAgentResponse, Status>>,
) {
let actor = UserAgentActor::spawn(UserAgentActor::new(context, tx.clone()));
while let Some(Ok(req)) = req_stream.next().await
&& actor.is_alive()
{
match process_message(&actor, req).await {
Ok(resp) => {
if tx.send(Ok(resp)).await.is_err() {
error!(actor = "useragent", "Failed to send response to client");
break;
}
}
Err(status) => {
let _ = tx.send(Err(status)).await;
break;
}
}
}
actor.kill();
}
async fn process_message(
actor: &ActorRef<UserAgentActor>,
req: UserAgentRequest,
) -> Result<UserAgentResponse, Status> {
let msg = req.payload.ok_or_else(|| {
error!(actor = "useragent", "Received message with no payload");
Status::invalid_argument("Expected message with payload")
})?;
let UserAgentRequestPayload::AuthMessage(ClientMessage {
payload: Some(client_message),
}) = msg
else {
error!(
actor = "useragent",
"Received unexpected message type during authentication"
);
return Err(Status::invalid_argument(
"Expected AuthMessage with ClientMessage payload",
));
};
match client_message {
ClientAuthPayload::AuthChallengeRequest(req) => actor
.ask(HandleAuthChallengeRequest { req })
.await
.map_err(into_status),
ClientAuthPayload::AuthChallengeSolution(solution) => actor
.ask(HandleAuthChallengeSolution { solution })
.await
.map_err(into_status),
}
}
fn into_status<M>(e: SendError<M, Status>) -> Status {
match e {
SendError::HandlerError(status) => status,
_ => {
error!(actor = "useragent", "Failed to send message to actor");
Status::internal("session failure")
}
}
}

View File

@@ -1,162 +0,0 @@
use std::sync::Arc;
use diesel::OptionalExtension as _;
use diesel_async::RunQueryDsl as _;
use ed25519_dalek::VerifyingKey;
use kameo::actor::{ActorRef, Spawn};
use miette::Diagnostic;
use rand::rngs::StdRng;
use smlang::statemachine;
use thiserror::Error;
use tokio::sync::RwLock;
use crate::{
context::{
bootstrap::{BootstrapActor, generate_token},
lease::LeaseHandler,
tls::{TlsDataRaw, TlsManager},
},
db::{
self,
models::ArbiterSetting,
schema::{self, arbiter_settings},
},
};
pub(crate) mod bootstrap;
pub(crate) mod lease;
pub(crate) mod tls;
#[derive(Error, Debug, Diagnostic)]
pub enum InitError {
#[error("Database setup failed: {0}")]
#[diagnostic(code(arbiter_server::init::database_setup))]
DatabaseSetup(#[from] db::DatabaseSetupError),
#[error("Connection acquire failed: {0}")]
#[diagnostic(code(arbiter_server::init::database_pool))]
DatabasePool(#[from] db::PoolError),
#[error("Database query error: {0}")]
#[diagnostic(code(arbiter_server::init::database_query))]
DatabaseQuery(#[from] diesel::result::Error),
#[error("TLS initialization failed: {0}")]
#[diagnostic(code(arbiter_server::init::tls_init))]
Tls(#[from] tls::TlsInitError),
#[error("Bootstrap token generation failed: {0}")]
#[diagnostic(code(arbiter_server::init::bootstrap_token))]
BootstrapToken(#[from] bootstrap::BootstrapError),
#[error("I/O Error: {0}")]
#[diagnostic(code(arbiter_server::init::io))]
Io(#[from] std::io::Error),
}
// TODO: Placeholder for secure root key cell implementation
pub struct KeyStorage;
statemachine! {
name: Server,
transitions: {
*NotBootstrapped + Bootstrapped = Sealed,
Sealed + Unsealed(KeyStorage) / move_key = Ready(KeyStorage),
Ready(KeyStorage) + Sealed / dispose_key = Sealed,
}
}
pub struct _Context;
impl ServerStateMachineContext for _Context {
fn move_key(&mut self, _event_data: KeyStorage) -> Result<KeyStorage, ()> {
todo!()
}
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn dispose_key(&mut self, _state_data: &KeyStorage) -> Result<(), ()> {
todo!()
}
}
pub(crate) struct _ServerContextInner {
pub db: db::DatabasePool,
pub state: RwLock<ServerStateMachine<_Context>>,
pub rng: StdRng,
pub tls: TlsManager,
pub bootstrapper: ActorRef<BootstrapActor>,
}
#[derive(Clone)]
pub(crate) struct ServerContext(Arc<_ServerContextInner>);
impl std::ops::Deref for ServerContext {
type Target = _ServerContextInner;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl ServerContext {
async fn load_tls(
db: &mut db::DatabaseConnection,
settings: Option<&ArbiterSetting>,
) -> Result<TlsManager, InitError> {
match &settings {
Some(settings) => {
let tls_data_raw = TlsDataRaw {
cert: settings.cert.clone(),
key: settings.cert_key.clone(),
};
Ok(TlsManager::new(Some(tls_data_raw)).await?)
}
None => {
let tls = TlsManager::new(None).await?;
let tls_data_raw = tls.bytes();
diesel::insert_into(arbiter_settings::table)
.values(&ArbiterSetting {
id: 1,
root_key_id: None,
cert_key: tls_data_raw.key,
cert: tls_data_raw.cert,
})
.execute(db)
.await?;
Ok(tls)
}
}
}
pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
let mut conn = db.get().await?;
let rng = rand::make_rng();
let settings = arbiter_settings::table
.first::<ArbiterSetting>(&mut conn)
.await
.optional()?;
let tls = Self::load_tls(&mut conn, settings.as_ref()).await?;
drop(conn);
let mut state = ServerStateMachine::new(_Context);
if let Some(settings) = &settings
&& settings.root_key_id.is_some()
{
// TODO: pass the encrypted root key to the state machine and let it handle decryption and transition to Sealed
let _ = state.process_event(ServerEvents::Bootstrapped);
}
Ok(Self(Arc::new(_ServerContextInner {
bootstrapper: BootstrapActor::spawn(BootstrapActor::new(&db).await?),
db,
rng,
tls,
state: RwLock::new(state),
})))
}
}

View File

@@ -1,41 +0,0 @@
use std::sync::Arc;
use dashmap::DashSet;
#[derive(Clone, Default)]
struct LeaseStorage<T: Eq + std::hash::Hash>(Arc<DashSet<T>>);
// A lease that automatically releases the item when dropped
pub struct Lease<T: Clone + std::hash::Hash + Eq> {
item: T,
storage: LeaseStorage<T>,
}
impl<T: Clone + std::hash::Hash + Eq> Drop for Lease<T> {
fn drop(&mut self) {
self.storage.0.remove(&self.item);
}
}
#[derive(Clone, Default)]
pub struct LeaseHandler<T: Clone + std::hash::Hash + Eq> {
storage: LeaseStorage<T>,
}
impl<T: Clone + std::hash::Hash + Eq> LeaseHandler<T> {
pub fn new() -> Self {
Self {
storage: LeaseStorage(Arc::new(DashSet::new())),
}
}
pub fn acquire(&self, item: T) -> Result<Lease<T>, ()> {
if self.storage.0.insert(item.clone()) {
Ok(Lease {
item,
storage: self.storage.clone(),
})
} else {
Err(())
}
}
}

View File

@@ -0,0 +1,65 @@
use std::sync::Arc;
use miette::Diagnostic;
use thiserror::Error;
use crate::{
actors::GlobalActors,
context::tls::TlsManager,
db::{self},
};
pub mod tls;
#[derive(Error, Debug, Diagnostic)]
pub enum InitError {
#[error("Database setup failed: {0}")]
#[diagnostic(code(arbiter_server::init::database_setup))]
DatabaseSetup(#[from] db::DatabaseSetupError),
#[error("Connection acquire failed: {0}")]
#[diagnostic(code(arbiter_server::init::database_pool))]
DatabasePool(#[from] db::PoolError),
#[error("Database query error: {0}")]
#[diagnostic(code(arbiter_server::init::database_query))]
DatabaseQuery(#[from] diesel::result::Error),
#[error("TLS initialization failed: {0}")]
#[diagnostic(code(arbiter_server::init::tls_init))]
Tls(#[from] tls::InitError),
#[error("Actor spawn failed: {0}")]
#[diagnostic(code(arbiter_server::init::actor_spawn))]
ActorSpawn(#[from] crate::actors::SpawnError),
#[error("I/O Error: {0}")]
#[diagnostic(code(arbiter_server::init::io))]
Io(#[from] std::io::Error),
}
pub struct _ServerContextInner {
pub db: db::DatabasePool,
pub tls: TlsManager,
pub actors: GlobalActors,
}
#[derive(Clone)]
pub struct ServerContext(Arc<_ServerContextInner>);
impl std::ops::Deref for ServerContext {
type Target = _ServerContextInner;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl ServerContext {
pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
Ok(Self(Arc::new(_ServerContextInner {
actors: GlobalActors::spawn(db.clone()).await?,
tls: TlsManager::new(db.clone()).await?,
db,
})))
}
}

View File

@@ -1,13 +1,36 @@
use std::string::FromUtf8Error;
use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper as _};
use diesel_async::{AsyncConnection, RunQueryDsl};
use miette::Diagnostic;
use rcgen::{Certificate, KeyPair};
use rustls::pki_types::CertificateDer;
use pem::Pem;
use rcgen::{
BasicConstraints, Certificate, CertificateParams, CertifiedIssuer, DistinguishedName, DnType,
IsCa, Issuer, KeyPair, KeyUsagePurpose,
};
use rustls::pki_types::{pem::PemObject};
use thiserror::Error;
use tonic::transport::CertificateDer;
use crate::db::{
self,
models::{NewTlsHistory, TlsHistory},
schema::{
arbiter_settings,
tls_history::{self},
},
};
const ENCODE_CONFIG: pem::EncodeConfig = {
let line_ending = match cfg!(target_family = "windows") {
true => pem::LineEnding::CRLF,
false => pem::LineEnding::LF,
};
pem::EncodeConfig::new().set_line_ending(line_ending)
};
#[derive(Error, Debug, Diagnostic)]
pub enum TlsInitError {
pub enum InitError {
#[error("Key generation error during TLS initialization: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_generation))]
KeyGeneration(#[from] rcgen::Error),
@@ -19,71 +42,211 @@ pub enum TlsInitError {
#[error("Key deserialization error: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_deserialization))]
KeyDeserializationError(rcgen::Error),
#[error("Database error during TLS initialization: {0}")]
#[diagnostic(code(arbiter_server::tls_init::database_error))]
DatabaseError(#[from] diesel::result::Error),
#[error("Pem deserialization error during TLS initialization: {0}")]
#[diagnostic(code(arbiter_server::tls_init::pem_deserialization))]
PemDeserializationError(#[from] rustls::pki_types::pem::Error),
#[error("Database pool acquire error during TLS initialization: {0}")]
#[diagnostic(code(arbiter_server::tls_init::database_pool_acquire))]
DatabasePoolAcquire(#[from] db::PoolError),
}
pub struct TlsData {
pub cert: CertificateDer<'static>,
pub keypair: KeyPair,
pub type PemCert = String;
pub fn encode_cert_to_pem(cert: &CertificateDer) -> PemCert {
pem::encode_config(
&Pem::new("CERTIFICATE", cert.to_vec()),
ENCODE_CONFIG,
)
}
pub struct TlsDataRaw {
pub cert: Vec<u8>,
pub key: Vec<u8>,
#[allow(unused)]
struct SerializedTls {
cert_pem: PemCert,
cert_key_pem: String,
}
impl TlsDataRaw {
pub fn serialize(cert: &TlsData) -> Self {
Self {
cert: cert.cert.as_ref().to_vec(),
key: cert.keypair.serialize_pem().as_bytes().to_vec(),
struct TlsCa {
issuer: Issuer<'static, KeyPair>,
cert: CertificateDer<'static>,
}
impl TlsCa {
fn generate() -> Result<Self, InitError> {
let keypair = KeyPair::generate()?;
let mut params = CertificateParams::new(["Arbiter Instance CA".into()])?;
params.is_ca = IsCa::Ca(BasicConstraints::Unconstrained);
params.key_usages = vec![
KeyUsagePurpose::KeyCertSign,
KeyUsagePurpose::CrlSign,
KeyUsagePurpose::DigitalSignature,
];
let mut dn = DistinguishedName::new();
dn.push(DnType::CommonName, "Arbiter Instance CA");
params.distinguished_name = dn;
let certified_issuer = CertifiedIssuer::self_signed(params, keypair)?;
let cert_key_pem = certified_issuer.key().serialize_pem();
let issuer = Issuer::from_ca_cert_pem(
&certified_issuer.pem(),
KeyPair::from_pem(cert_key_pem.as_ref()).unwrap(),
)
.unwrap();
Ok(Self {
issuer,
cert: certified_issuer.der().clone(),
})
}
fn generate_leaf(&self) -> Result<TlsCert, InitError> {
let cert_key = KeyPair::generate()?;
let mut params = CertificateParams::new(["Arbiter Instance Leaf".into()])?;
params.is_ca = IsCa::NoCa;
params.key_usages = vec![
KeyUsagePurpose::DigitalSignature,
KeyUsagePurpose::KeyEncipherment,
];
let mut dn = DistinguishedName::new();
dn.push(DnType::CommonName, "Arbiter Instance Leaf");
params.distinguished_name = dn;
let new_cert = params.signed_by(&cert_key, &self.issuer)?;
Ok(TlsCert {
cert: new_cert,
cert_key,
})
}
pub fn deserialize(&self) -> Result<TlsData, TlsInitError> {
let cert = CertificateDer::from_slice(&self.cert).into_owned();
#[allow(unused)]
fn serialize(&self) -> Result<SerializedTls, InitError> {
let cert_key_pem = self.issuer.key().serialize_pem();
Ok(SerializedTls {
cert_pem: encode_cert_to_pem(&self.cert),
cert_key_pem,
})
}
let key =
String::from_utf8(self.key.clone()).map_err(TlsInitError::KeyInvalidFormat)?;
let keypair = KeyPair::from_pem(&key).map_err(TlsInitError::KeyDeserializationError)?;
Ok(TlsData { cert, keypair })
#[allow(unused)]
fn try_deserialize(cert_pem: &str, cert_key_pem: &str) -> Result<Self, InitError> {
let keypair =
KeyPair::from_pem(cert_key_pem).map_err(InitError::KeyDeserializationError)?;
let issuer = Issuer::from_ca_cert_pem(cert_pem, keypair)?;
Ok(Self {
issuer,
cert: CertificateDer::from_pem_slice(cert_pem.as_bytes())?,
})
}
}
fn generate_cert(key: &KeyPair) -> Result<Certificate, rcgen::Error> {
let params = rcgen::CertificateParams::new(vec![
"arbiter.local".to_string(),
"localhost".to_string(),
])?;
params.self_signed(key)
struct TlsCert {
cert: Certificate,
cert_key: KeyPair,
}
// TODO: Implement cert rotation
pub(crate) struct TlsManager {
data: TlsData,
pub struct TlsManager {
cert: CertificateDer<'static>,
keypair: KeyPair,
ca_cert: CertificateDer<'static>,
_db: db::DatabasePool,
}
impl TlsManager {
pub async fn new(data: Option<TlsDataRaw>) -> Result<Self, TlsInitError> {
match data {
Some(raw) => {
let tls_data = raw.deserialize()?;
Ok(Self { data: tls_data })
}
None => {
let keypair = KeyPair::generate()?;
let cert = generate_cert(&keypair)?;
let tls_data = TlsData {
cert: cert.der().clone(),
keypair,
pub async fn generate_new(db: &db::DatabasePool) -> Result<Self, InitError> {
let ca = TlsCa::generate()?;
let new_cert = ca.generate_leaf()?;
{
let mut conn = db.get().await?;
conn.transaction(|conn| {
Box::pin(async {
let new_tls_history = NewTlsHistory {
cert: new_cert.cert.pem(),
cert_key: new_cert.cert_key.serialize_pem(),
ca_cert: encode_cert_to_pem(&ca.cert),
ca_key: ca.issuer.key().serialize_pem(),
};
Ok(Self { data: tls_data })
let inserted_tls_history: i32 = diesel::insert_into(tls_history::table)
.values(&new_tls_history)
.returning(tls_history::id)
.get_result(conn)
.await?;
diesel::update(arbiter_settings::table)
.set(arbiter_settings::tls_id.eq(inserted_tls_history))
.execute(conn)
.await?;
Result::<_, diesel::result::Error>::Ok(())
})
})
.await?;
}
Ok(Self {
cert: new_cert.cert.der().clone(),
keypair: new_cert.cert_key,
ca_cert: ca.cert,
_db: db.clone(),
})
}
pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
let cert_data: Option<TlsHistory> = {
let mut conn = db.get().await?;
arbiter_settings::table
.left_join(tls_history::table)
.select(Option::<TlsHistory>::as_select())
.first(&mut conn)
.await?
};
match cert_data {
Some(data) => {
let try_load = || -> Result<_, Box<dyn std::error::Error>> {
let keypair = KeyPair::from_pem(&data.cert_key)?;
let cert = CertificateDer::from_pem_slice(data.cert.as_bytes())?;
let ca_cert = CertificateDer::from_pem_slice(data.ca_cert.as_bytes())?;
Ok(Self {
cert,
keypair,
ca_cert,
_db: db.clone(),
})
};
match try_load() {
Ok(manager) => Ok(manager),
Err(e) => {
eprintln!("Failed to load existing TLS certs: {e}. Generating new ones.");
Self::generate_new(&db).await
}
}
}
None => Self::generate_new(&db).await,
}
}
pub fn bytes(&self) -> TlsDataRaw {
TlsDataRaw::serialize(&self.data)
pub fn cert(&self) -> &CertificateDer<'static> {
&self.cert
}
pub fn ca_cert(&self) -> &CertificateDer<'static> {
&self.ca_cert
}
pub fn cert_pem(&self) -> PemCert {
encode_cert_to_pem(&self.cert)
}
pub fn key_pem(&self) -> String {
self.keypair.serialize_pem()
}
}
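To show how these accessors are consumed, a minimal sketch of wiring TlsManager into tonic's server-side TLS (this mirrors the server binary further down in this change set; `db` is assumed to be an already-created pool):
let tls = TlsManager::new(db.clone()).await?;
let identity = tonic::transport::Identity::from_pem(tls.cert_pem(), tls.key_pem());
let tls_config = tonic::transport::ServerTlsConfig::new().identity(identity);
// Clients trust tls.ca_cert() as their root; the served leaf certificate is signed by that instance CA.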

View File

@@ -1,12 +1,7 @@
use std::sync::Arc;
use diesel::{
Connection as _, SqliteConnection,
connection::{SimpleConnection as _, TransactionManager},
};
use diesel::{Connection as _, SqliteConnection, connection::SimpleConnection as _};
use diesel_async::{
AsyncConnection, SimpleAsyncConnection,
pooled_connection::{AsyncDieselConnectionManager, ManagerConfig, RecyclingMethod},
pooled_connection::{AsyncDieselConnectionManager, ManagerConfig},
sync_connection_wrapper::SyncConnectionWrapper,
};
use diesel_migrations::{EmbeddedMigrations, MigrationHarness, embed_migrations};
@@ -22,30 +17,30 @@ pub type DatabasePool = diesel_async::pooled_connection::bb8::Pool<DatabaseConne
pub type PoolInitError = diesel_async::pooled_connection::PoolError;
pub type PoolError = diesel_async::pooled_connection::bb8::RunError;
static DB_FILE: &'static str = "arbiter.sqlite";
static DB_FILE: &str = "arbiter.sqlite";
const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");
#[derive(Error, Diagnostic, Debug)]
pub enum DatabaseSetupError {
#[error("Failed to determine home directory")]
#[diagnostic(code(arbiter::db::home_dir_error))]
#[diagnostic(code(arbiter::db::home_dir))]
HomeDir(std::io::Error),
#[error(transparent)]
#[diagnostic(code(arbiter::db::connection_error))]
#[diagnostic(code(arbiter::db::connection))]
Connection(diesel::ConnectionError),
#[error(transparent)]
#[diagnostic(code(arbiter::db::concurrency_error))]
#[diagnostic(code(arbiter::db::concurrency))]
ConcurrencySetup(diesel::result::Error),
#[error(transparent)]
#[diagnostic(code(arbiter::db::migration_error))]
#[diagnostic(code(arbiter::db::migration))]
Migration(Box<dyn std::error::Error + Send + Sync>),
#[error(transparent)]
#[diagnostic(code(arbiter::db::pool_error))]
#[diagnostic(code(arbiter::db::pool))]
Pool(#[from] PoolInitError),
}
@@ -96,12 +91,12 @@ fn initialize_database(url: &str) -> Result<(), DatabaseSetupError> {
#[tracing::instrument(level = "info")]
pub async fn create_pool(url: Option<&str>) -> Result<DatabasePool, DatabaseSetupError> {
let database_url = url.map(String::from).unwrap_or(format!(
"{}?mode=rwc",
(database_path()?
let database_url = url.map(String::from).unwrap_or(
database_path()?
.to_str()
.expect("database path is not valid UTF-8"))
));
.expect("database path is not valid UTF-8")
.to_string(),
);
initialize_database(&database_url)?;
@@ -134,7 +129,6 @@ pub async fn create_pool(url: Option<&str>) -> Result<DatabasePool, DatabaseSetu
Ok(pool)
}
#[cfg(test)]
pub async fn create_test_pool() -> DatabasePool {
use rand::distr::{Alphanumeric, SampleString as _};

View File

@@ -1,31 +1,74 @@
#![allow(unused)]
#![allow(clippy::all)]
use crate::db::schema::{self, aead_encrypted, arbiter_settings};
use crate::db::schema::{self, aead_encrypted, arbiter_settings, root_key_history, tls_history};
use diesel::{prelude::*, sqlite::Sqlite};
use restructed::Models;
pub mod types {
use chrono::{DateTime, Utc};
pub struct SqliteTimestamp(DateTime<Utc>);
}
#[derive(Queryable, Debug, Insertable)]
#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[view(
NewAeadEncrypted,
derive(Insertable),
omit(id),
attributes_with = "deriveless"
)]
#[diesel(table_name = aead_encrypted, check_for_backend(Sqlite))]
pub struct AeadEncrypted {
pub id: i32,
pub ciphertext: Vec<u8>,
pub tag: Vec<u8>,
pub current_nonce: i32,
pub current_nonce: Vec<u8>,
pub schema_version: i32,
pub associated_root_key_id: i32, // references root_key_history.id
pub created_at: i32,
}
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = arbiter_settings, check_for_backend(Sqlite))]
pub struct ArbiterSetting {
#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = root_key_history, check_for_backend(Sqlite))]
#[view(
NewRootKeyHistory,
derive(Insertable),
omit(id),
attributes_with = "deriveless"
)]
pub struct RootKeyHistory {
pub id: i32,
pub root_key_id: Option<i32>, // references aead_encrypted.id
pub cert_key: Vec<u8>,
pub cert: Vec<u8>,
pub ciphertext: Vec<u8>,
pub tag: Vec<u8>,
pub root_key_encryption_nonce: Vec<u8>,
pub data_encryption_nonce: Vec<u8>,
pub schema_version: i32,
pub salt: Vec<u8>,
}
#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = tls_history, check_for_backend(Sqlite))]
#[view(
NewTlsHistory,
derive(Insertable),
omit(id, created_at),
attributes_with = "deriveless"
)]
pub struct TlsHistory {
pub id: i32,
pub cert: String,
pub cert_key: String, // PEM Encoded private key
pub ca_cert: String, // PEM Encoded certificate for cert signing
pub ca_key: String, // PEM Encoded CA private key used for cert signing
pub created_at: i32,
}
#[derive(Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = arbiter_settings, check_for_backend(Sqlite))]
pub struct ArbiterSettings {
pub id: i32,
pub root_key_id: Option<i32>, // references root_key_history.id
pub tls_id: Option<i32>, // references tls_history.id
}
#[derive(Queryable, Debug)]

View File

@@ -3,10 +3,12 @@
diesel::table! {
aead_encrypted (id) {
id -> Integer,
current_nonce -> Integer,
current_nonce -> Binary,
ciphertext -> Binary,
tag -> Binary,
schema_version -> Integer,
associated_root_key_id -> Integer,
created_at -> Integer,
}
}
@@ -14,8 +16,7 @@ diesel::table! {
arbiter_settings (id) {
id -> Integer,
root_key_id -> Nullable<Integer>,
cert_key -> Binary,
cert -> Binary,
tls_id -> Nullable<Integer>,
}
}
@@ -29,6 +30,29 @@ diesel::table! {
}
}
diesel::table! {
root_key_history (id) {
id -> Integer,
root_key_encryption_nonce -> Binary,
data_encryption_nonce -> Binary,
ciphertext -> Binary,
tag -> Binary,
schema_version -> Integer,
salt -> Binary,
}
}
diesel::table! {
tls_history (id) {
id -> Integer,
cert -> Text,
cert_key -> Text,
ca_cert -> Text,
ca_key -> Text,
created_at -> Integer,
}
}
diesel::table! {
useragent_client (id) {
id -> Integer,
@@ -39,11 +63,15 @@ diesel::table! {
}
}
diesel::joinable!(arbiter_settings -> aead_encrypted (root_key_id));
diesel::joinable!(aead_encrypted -> root_key_history (associated_root_key_id));
diesel::joinable!(arbiter_settings -> root_key_history (root_key_id));
diesel::joinable!(arbiter_settings -> tls_history (tls_id));
diesel::allow_tables_to_appear_in_same_query!(
aead_encrypted,
arbiter_settings,
program_client,
root_key_history,
tls_history,
useragent_client,
);

View File

@@ -1,24 +0,0 @@
use tonic::Status;
use tracing::error;
pub trait GrpcStatusExt<T> {
fn to_status(self) -> Result<T, Status>;
}
impl<T> GrpcStatusExt<T> for Result<T, diesel::result::Error> {
fn to_status(self) -> Result<T, Status> {
self.map_err(|e| {
error!(error = ?e, "Database error");
Status::internal("Database error")
})
}
}
impl<T> GrpcStatusExt<T> for Result<T, crate::db::PoolError> {
fn to_status(self) -> Result<T, Status> {
self.map_err(|e| {
error!(error = ?e, "Database pool error");
Status::internal("Database pool error")
})
}
}

View File

@@ -1,62 +1,188 @@
#![allow(unused)]
use std::sync::Arc;
#![forbid(unsafe_code)]
use arbiter_proto::{
proto::{ClientRequest, ClientResponse, UserAgentRequest, UserAgentResponse},
transport::BiStream,
proto::{
client::{ClientRequest, ClientResponse},
user_agent::{UserAgentRequest, UserAgentResponse},
},
transport::{IdentityRecvConverter, SendConverter, grpc},
};
use async_trait::async_trait;
use tokio_stream::wrappers::ReceiverStream;
use tokio::sync::mpsc;
use tonic::{Request, Response, Status};
use tracing::info;
use crate::{
actors::{client::handle_client, user_agent::handle_user_agent},
actors::{
client::{self, ClientError, ConnectionProps as ClientConnectionProps, connect_client},
user_agent::{self, ConnectionProps, UserAgentError, connect_user_agent},
},
context::ServerContext,
};
pub mod actors;
mod context;
mod db;
mod errors;
pub mod context;
pub mod db;
const DEFAULT_CHANNEL_SIZE: usize = 1000;
struct UserAgentGrpcSender;
impl SendConverter for UserAgentGrpcSender {
type Input = Result<UserAgentResponse, UserAgentError>;
type Output = Result<UserAgentResponse, Status>;
fn convert(&self, item: Self::Input) -> Self::Output {
match item {
Ok(message) => Ok(message),
Err(err) => Err(user_agent_error_status(err)),
}
}
}
struct ClientGrpcSender;
impl SendConverter for ClientGrpcSender {
type Input = Result<ClientResponse, ClientError>;
type Output = Result<ClientResponse, Status>;
fn convert(&self, item: Self::Input) -> Self::Output {
match item {
Ok(message) => Ok(message),
Err(err) => Err(client_error_status(err)),
}
}
}
fn client_error_status(value: ClientError) -> Status {
match value {
ClientError::MissingRequestPayload | ClientError::UnexpectedRequestPayload => {
Status::invalid_argument("Expected message with payload")
}
ClientError::StateTransitionFailed => Status::internal("State machine error"),
ClientError::Auth(ref err) => client_auth_error_status(err),
}
}
fn client_auth_error_status(value: &client::auth::Error) -> Status {
use client::auth::Error;
match value {
Error::UnexpectedMessagePayload | Error::InvalidClientPubkeyLength => {
Status::invalid_argument(value.to_string())
}
Error::InvalidAuthPubkeyEncoding => {
Status::invalid_argument("Failed to convert pubkey to VerifyingKey")
}
Error::InvalidSignatureLength => Status::invalid_argument("Invalid signature length"),
Error::PublicKeyNotRegistered | Error::InvalidChallengeSolution => {
Status::unauthenticated(value.to_string())
}
Error::Transport => Status::internal("Transport error"),
Error::DatabasePoolUnavailable => Status::internal("Database pool error"),
Error::DatabaseOperationFailed => Status::internal("Database error"),
}
}
fn user_agent_error_status(value: UserAgentError) -> Status {
match value {
UserAgentError::MissingRequestPayload | UserAgentError::UnexpectedRequestPayload => {
Status::invalid_argument("Expected message with payload")
}
UserAgentError::InvalidStateForUnsealEncryptedKey => {
Status::failed_precondition("Invalid state for unseal encrypted key")
}
UserAgentError::InvalidClientPubkeyLength => {
Status::invalid_argument("client_pubkey must be 32 bytes")
}
UserAgentError::StateTransitionFailed => Status::internal("State machine error"),
UserAgentError::KeyHolderActorUnreachable => Status::internal("Vault is not available"),
UserAgentError::Auth(ref err) => auth_error_status(err),
}
}
fn auth_error_status(value: &user_agent::auth::Error) -> Status {
use user_agent::auth::Error;
match value {
Error::UnexpectedMessagePayload | Error::InvalidClientPubkeyLength => {
Status::invalid_argument(value.to_string())
}
Error::InvalidAuthPubkeyEncoding => {
Status::invalid_argument("Failed to convert pubkey to VerifyingKey")
}
Error::PublicKeyNotRegistered | Error::InvalidChallengeSolution => {
Status::unauthenticated(value.to_string())
}
Error::InvalidBootstrapToken => Status::invalid_argument("Invalid bootstrap token"),
Error::Transport => Status::internal("Transport error"),
Error::BootstrapperActorUnreachable => {
Status::internal("Bootstrap token consumption failed")
}
Error::DatabasePoolUnavailable => Status::internal("Database pool error"),
Error::DatabaseOperationFailed => Status::internal("Database error"),
}
}
pub struct Server {
context: ServerContext,
}
impl Server {
pub fn new(context: ServerContext) -> Self {
Self { context }
}
}
#[async_trait]
impl arbiter_proto::proto::arbiter_service_server::ArbiterService for Server {
type UserAgentStream = ReceiverStream<Result<UserAgentResponse, Status>>;
type ClientStream = ReceiverStream<Result<ClientResponse, Status>>;
#[tracing::instrument(level = "debug", skip(self))]
async fn client(
&self,
request: Request<tonic::Streaming<ClientRequest>>,
) -> Result<Response<Self::ClientStream>, Status> {
let req_stream = request.into_inner();
let (tx, rx) = mpsc::channel(DEFAULT_CHANNEL_SIZE);
tokio::spawn(handle_client(
self.context.clone(),
BiStream {
request_stream: req_stream,
response_sender: tx,
},
));
let transport = grpc::GrpcAdapter::new(
tx,
req_stream,
IdentityRecvConverter::<ClientRequest>::new(),
ClientGrpcSender,
);
let props = ClientConnectionProps::new(self.context.db.clone(), Box::new(transport));
tokio::spawn(connect_client(props));
info!(event = "connection established", "grpc.client");
Ok(Response::new(ReceiverStream::new(rx)))
}
#[tracing::instrument(level = "debug", skip(self))]
async fn user_agent(
&self,
request: Request<tonic::Streaming<UserAgentRequest>>,
) -> Result<Response<Self::UserAgentStream>, Status> {
let req_stream = request.into_inner();
let (tx, rx) = mpsc::channel(DEFAULT_CHANNEL_SIZE);
tokio::spawn(handle_user_agent(self.context.clone(), req_stream, tx));
let transport = grpc::GrpcAdapter::new(
tx,
req_stream,
IdentityRecvConverter::<UserAgentRequest>::new(),
UserAgentGrpcSender,
);
let props = ConnectionProps::new(
self.context.db.clone(),
self.context.actors.clone(),
Box::new(transport),
);
tokio::spawn(connect_user_agent(props));
info!(event = "connection established", "grpc.user_agent");
Ok(Response::new(ReceiverStream::new(rx)))
}
}

View File

@@ -0,0 +1,53 @@
use std::net::SocketAddr;
use arbiter_proto::{proto::arbiter_service_server::ArbiterServiceServer, url::ArbiterUrl};
use arbiter_server::{Server, actors::bootstrap::GetToken, context::ServerContext, db};
use miette::miette;
use tonic::transport::{Identity, ServerTlsConfig};
use tracing::info;
const PORT: u16 = 50051;
#[tokio::main]
async fn main() -> miette::Result<()> {
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::try_from_default_env()
.unwrap_or_else(|_| tracing_subscriber::EnvFilter::new("info")),
)
.init();
info!("Starting arbiter server");
let db = db::create_pool(None).await?;
info!("Database ready");
let context = ServerContext::new(db).await?;
let addr: SocketAddr = format!("127.0.0.1:{PORT}").parse().expect("valid address");
info!(%addr, "Starting gRPC server");
let url = ArbiterUrl {
host: addr.ip().to_string(),
port: addr.port(),
ca_cert: context.tls.ca_cert().clone().into_owned(),
bootstrap_token: context.actors.bootstrapper.ask(GetToken).await.unwrap(),
};
info!(%url, "Server URL");
let tls = ServerTlsConfig::new().identity(Identity::from_pem(
context.tls.cert_pem(),
context.tls.key_pem(),
));
tonic::transport::Server::builder()
.tls_config(tls)
.map_err(|err| miette!("Faild to setup TLS: {err}"))?
.add_service(ArbiterServiceServer::new(Server::new(context)))
.serve(addr)
.await
.map_err(|e| miette::miette!("gRPC server error: {e}"))?;
unreachable!("gRPC server should run indefinitely");
}

View File

@@ -0,0 +1,4 @@
mod common;
#[path = "client/auth.rs"]
mod auth;

View File

@@ -0,0 +1,107 @@
use arbiter_proto::transport::Bi;
use arbiter_proto::proto::client::{
AuthChallengeRequest, AuthChallengeSolution, ClientRequest,
client_request::Payload as ClientRequestPayload,
client_response::Payload as ClientResponsePayload,
};
use arbiter_server::{
actors::client::{ConnectionProps, connect_client},
db::{self, schema},
};
use diesel::{ExpressionMethods as _, insert_into};
use diesel_async::RunQueryDsl;
use ed25519_dalek::Signer as _;
use super::common::ChannelTransport;
#[tokio::test]
#[test_log::test]
pub async fn test_unregistered_pubkey_rejected() {
let db = db::create_test_pool().await;
let (server_transport, mut test_transport) = ChannelTransport::new();
let props = ConnectionProps::new(db.clone(), Box::new(server_transport));
let task = tokio::spawn(connect_client(props));
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
test_transport
.send(ClientRequest {
payload: Some(ClientRequestPayload::AuthChallengeRequest(
AuthChallengeRequest {
pubkey: pubkey_bytes,
},
)),
})
.await
.unwrap();
// Auth fails, connect_client returns, transport drops
task.await.unwrap();
}
#[tokio::test]
#[test_log::test]
pub async fn test_challenge_auth() {
let db = db::create_test_pool().await;
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
{
let mut conn = db.get().await.unwrap();
insert_into(schema::program_client::table)
.values(schema::program_client::public_key.eq(pubkey_bytes.clone()))
.execute(&mut conn)
.await
.unwrap();
}
let (server_transport, mut test_transport) = ChannelTransport::new();
let props = ConnectionProps::new(db.clone(), Box::new(server_transport));
let task = tokio::spawn(connect_client(props));
// Send challenge request
test_transport
.send(ClientRequest {
payload: Some(ClientRequestPayload::AuthChallengeRequest(
AuthChallengeRequest {
pubkey: pubkey_bytes,
},
)),
})
.await
.unwrap();
// Read the challenge response
let response = test_transport
.recv()
.await
.expect("should receive challenge");
let challenge = match response {
Ok(resp) => match resp.payload {
Some(ClientResponsePayload::AuthChallenge(c)) => c,
other => panic!("Expected AuthChallenge, got {other:?}"),
},
Err(err) => panic!("Expected Ok response, got Err({err:?})"),
};
// Sign the challenge and send solution
let formatted_challenge = arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
let signature = new_key.sign(&formatted_challenge);
test_transport
.send(ClientRequest {
payload: Some(ClientRequestPayload::AuthChallengeSolution(
AuthChallengeSolution {
signature: signature.to_bytes().to_vec(),
},
)),
})
.await
.unwrap();
// Auth completes, session spawned
task.await.unwrap();
}
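The server side of this exchange is outside this diff; presumably it rebuilds the same challenge bytes and checks the signature against the registered key, roughly like this (hypothetical sketch; `stored_pubkey`, `issued_nonce`, `challenge_pubkey`, and `solution` are placeholders):
let verifying_key = ed25519_dalek::VerifyingKey::from_bytes(&stored_pubkey)?; // 32-byte key loaded from program_client
let signature = ed25519_dalek::Signature::from_slice(&solution.signature)?;
let expected = arbiter_proto::format_challenge(issued_nonce, &challenge_pubkey);
verifying_key.verify_strict(&expected, &signature)?; // reject the session on mismatch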

View File

@@ -0,0 +1,75 @@
use arbiter_proto::transport::{Bi, Error};
use arbiter_server::{
actors::keyholder::KeyHolder,
db::{self, schema},
};
use async_trait::async_trait;
use diesel::QueryDsl;
use diesel_async::RunQueryDsl;
use memsafe::MemSafe;
use tokio::sync::mpsc;
#[allow(dead_code)]
pub async fn bootstrapped_keyholder(db: &db::DatabasePool) -> KeyHolder {
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
actor
.bootstrap(MemSafe::new(b"test-seal-key".to_vec()).unwrap())
.await
.unwrap();
actor
}
#[allow(dead_code)]
pub async fn root_key_history_id(db: &db::DatabasePool) -> i32 {
let mut conn = db.get().await.unwrap();
let id = schema::arbiter_settings::table
.select(schema::arbiter_settings::root_key_id)
.first::<Option<i32>>(&mut conn)
.await
.unwrap();
id.expect("root_key_id should be set after bootstrap")
}
pub struct ChannelTransport<T, Y> {
receiver: mpsc::Receiver<T>,
sender: mpsc::Sender<Y>,
}
impl<T, Y> ChannelTransport<T, Y> {
pub fn new() -> (Self, ChannelTransport<Y, T>) {
let (tx1, rx1) = mpsc::channel(10);
let (tx2, rx2) = mpsc::channel(10);
(
Self {
receiver: rx1,
sender: tx2,
},
ChannelTransport {
receiver: rx2,
sender: tx1,
},
)
}
}
#[async_trait]
impl<T, Y> Bi<T, Y> for ChannelTransport<T, Y>
where
T: Send + 'static,
Y: Send + 'static,
{
async fn send(&mut self, item: Y) -> Result<(), Error> {
self.sender
.send(item)
.await
.map_err(|_| Error::ChannelClosed)
}
async fn recv(&mut self) -> Option<T> {
self.receiver.recv().await
}
}
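A quick sketch of how the paired transports behave (hypothetical; the auth tests in this suite hand one half to the server actor and drive the other):
let (mut server_end, mut test_end) = ChannelTransport::<String, String>::new();
test_end.send("ping".to_string()).await.unwrap(); // what one half sends, the other receives
assert_eq!(server_end.recv().await.as_deref(), Some("ping"));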

View File

@@ -0,0 +1,8 @@
mod common;
#[path = "keyholder/concurrency.rs"]
mod concurrency;
#[path = "keyholder/lifecycle.rs"]
mod lifecycle;
#[path = "keyholder/storage.rs"]
mod storage;

View File

@@ -0,0 +1,173 @@
use std::collections::{HashMap, HashSet};
use arbiter_server::{
actors::keyholder::{CreateNew, Error, KeyHolder},
db::{self, models, schema},
};
use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::sql_query};
use diesel_async::RunQueryDsl;
use kameo::actor::{ActorRef, Spawn as _};
use memsafe::MemSafe;
use tokio::task::JoinSet;
use crate::common;
async fn write_concurrently(
actor: ActorRef<KeyHolder>,
prefix: &'static str,
count: usize,
) -> Vec<(i32, Vec<u8>)> {
let mut set = JoinSet::new();
for i in 0..count {
let actor = actor.clone();
set.spawn(async move {
let plaintext = format!("{prefix}-{i}").into_bytes();
let id = actor
.ask(CreateNew {
plaintext: MemSafe::new(plaintext.clone()).unwrap(),
})
.await
.unwrap();
(id, plaintext)
});
}
let mut out = Vec::with_capacity(count);
while let Some(res) = set.join_next().await {
out.push(res.unwrap());
}
out
}
#[tokio::test]
#[test_log::test]
async fn concurrent_create_new_no_duplicate_nonces() {
let db = db::create_test_pool().await;
let actor = KeyHolder::spawn(common::bootstrapped_keyholder(&db).await);
let writes = write_concurrently(actor, "nonce-unique", 32).await;
assert_eq!(writes.len(), 32);
let mut conn = db.get().await.unwrap();
let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
.select(models::AeadEncrypted::as_select())
.load(&mut conn)
.await
.unwrap();
assert_eq!(rows.len(), 32);
let nonces: Vec<&Vec<u8>> = rows.iter().map(|r| &r.current_nonce).collect();
let unique: HashSet<&Vec<u8>> = nonces.iter().copied().collect();
assert_eq!(nonces.len(), unique.len(), "all nonces must be unique");
}
#[tokio::test]
#[test_log::test]
async fn concurrent_create_new_root_nonce_never_moves_backward() {
let db = db::create_test_pool().await;
let actor = KeyHolder::spawn(common::bootstrapped_keyholder(&db).await);
write_concurrently(actor, "root-max", 24).await;
let mut conn = db.get().await.unwrap();
let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
.select(models::AeadEncrypted::as_select())
.load(&mut conn)
.await
.unwrap();
let max_nonce = rows
.iter()
.map(|r| r.current_nonce.clone())
.max()
.expect("at least one row");
let root_row: models::RootKeyHistory = schema::root_key_history::table
.select(models::RootKeyHistory::as_select())
.first(&mut conn)
.await
.unwrap();
assert_eq!(root_row.data_encryption_nonce, max_nonce);
}
#[tokio::test]
#[test_log::test]
async fn insert_failure_does_not_create_partial_row() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let root_key_history_id = common::root_key_history_id(&db).await;
let mut conn = db.get().await.unwrap();
let before_count: i64 = schema::aead_encrypted::table
.count()
.get_result(&mut conn)
.await
.unwrap();
let before_root_nonce: Vec<u8> = schema::root_key_history::table
.filter(schema::root_key_history::id.eq(root_key_history_id))
.select(schema::root_key_history::data_encryption_nonce)
.first(&mut conn)
.await
.unwrap();
sql_query(
"CREATE TRIGGER fail_aead_insert BEFORE INSERT ON aead_encrypted BEGIN SELECT RAISE(ABORT, 'forced test failure'); END;",
)
.execute(&mut conn)
.await
.unwrap();
drop(conn);
let err = actor
.create_new(MemSafe::new(b"should fail".to_vec()).unwrap())
.await
.unwrap_err();
assert!(matches!(err, Error::DatabaseTransaction(_)));
let mut conn = db.get().await.unwrap();
sql_query("DROP TRIGGER fail_aead_insert;")
.execute(&mut conn)
.await
.unwrap();
let after_count: i64 = schema::aead_encrypted::table
.count()
.get_result(&mut conn)
.await
.unwrap();
assert_eq!(
before_count, after_count,
"failed insert must not create row"
);
let after_root_nonce: Vec<u8> = schema::root_key_history::table
.filter(schema::root_key_history::id.eq(root_key_history_id))
.select(schema::root_key_history::data_encryption_nonce)
.first(&mut conn)
.await
.unwrap();
assert!(
after_root_nonce > before_root_nonce,
"current behavior allows nonce gap on failed insert"
);
}
#[tokio::test]
#[test_log::test]
async fn decrypt_roundtrip_after_high_concurrency() {
let db = db::create_test_pool().await;
let actor = KeyHolder::spawn(common::bootstrapped_keyholder(&db).await);
let writes = write_concurrently(actor, "roundtrip", 40).await;
let expected: HashMap<i32, Vec<u8>> = writes.into_iter().collect();
let mut decryptor = KeyHolder::new(db.clone()).await.unwrap();
decryptor
.try_unseal(MemSafe::new(b"test-seal-key".to_vec()).unwrap())
.await
.unwrap();
for (id, plaintext) in expected {
let mut decrypted = decryptor.decrypt(id).await.unwrap();
assert_eq!(*decrypted.read().unwrap(), plaintext);
}
}

View File

@@ -0,0 +1,131 @@
use arbiter_server::{
actors::keyholder::{Error, KeyHolder},
db::{self, models, schema},
};
use diesel::{QueryDsl, SelectableHelper};
use diesel_async::RunQueryDsl;
use memsafe::MemSafe;
use crate::common;
#[tokio::test]
#[test_log::test]
async fn test_bootstrap() {
let db = db::create_test_pool().await;
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
actor.bootstrap(seal_key).await.unwrap();
let mut conn = db.get().await.unwrap();
let row: models::RootKeyHistory = schema::root_key_history::table
.select(models::RootKeyHistory::as_select())
.first(&mut conn)
.await
.unwrap();
assert_eq!(row.schema_version, 1);
assert_eq!(
row.tag,
arbiter_server::actors::keyholder::encryption::v1::ROOT_KEY_TAG
);
assert!(!row.ciphertext.is_empty());
assert!(!row.salt.is_empty());
assert_eq!(
row.data_encryption_nonce,
arbiter_server::actors::keyholder::encryption::v1::Nonce::default().to_vec()
);
}
#[tokio::test]
#[test_log::test]
async fn test_bootstrap_rejects_double() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let seal_key2 = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
let err = actor.bootstrap(seal_key2).await.unwrap_err();
assert!(matches!(err, Error::AlreadyBootstrapped));
}
#[tokio::test]
#[test_log::test]
async fn test_create_new_before_bootstrap_fails() {
let db = db::create_test_pool().await;
let mut actor = KeyHolder::new(db).await.unwrap();
let err = actor
.create_new(MemSafe::new(b"data".to_vec()).unwrap())
.await
.unwrap_err();
assert!(matches!(err, Error::NotBootstrapped));
}
#[tokio::test]
#[test_log::test]
async fn test_decrypt_before_bootstrap_fails() {
let db = db::create_test_pool().await;
let mut actor = KeyHolder::new(db).await.unwrap();
let err = actor.decrypt(1).await.unwrap_err();
assert!(matches!(err, Error::NotBootstrapped));
}
#[tokio::test]
#[test_log::test]
async fn test_new_restores_sealed_state() {
let db = db::create_test_pool().await;
let actor = common::bootstrapped_keyholder(&db).await;
drop(actor);
let mut actor2 = KeyHolder::new(db).await.unwrap();
let err = actor2.decrypt(1).await.unwrap_err();
assert!(matches!(err, Error::NotBootstrapped));
}
#[tokio::test]
#[test_log::test]
async fn test_unseal_correct_password() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let plaintext = b"survive a restart";
let aead_id = actor
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
.await
.unwrap();
drop(actor);
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
actor.try_unseal(seal_key).await.unwrap();
let mut decrypted = actor.decrypt(aead_id).await.unwrap();
assert_eq!(*decrypted.read().unwrap(), plaintext);
}
#[tokio::test]
#[test_log::test]
async fn test_unseal_wrong_then_correct_password() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let plaintext = b"important data";
let aead_id = actor
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
.await
.unwrap();
drop(actor);
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
let bad_key = MemSafe::new(b"wrong-password".to_vec()).unwrap();
let err = actor.try_unseal(bad_key).await.unwrap_err();
assert!(matches!(err, Error::InvalidKey));
let good_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
actor.try_unseal(good_key).await.unwrap();
let mut decrypted = actor.decrypt(aead_id).await.unwrap();
assert_eq!(*decrypted.read().unwrap(), plaintext);
}

View File

@@ -0,0 +1,161 @@
use std::collections::HashSet;
use arbiter_server::{
actors::keyholder::{Error, encryption::v1},
db::{self, models, schema},
};
use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::update};
use diesel_async::RunQueryDsl;
use memsafe::MemSafe;
use crate::common;
#[tokio::test]
#[test_log::test]
async fn test_create_decrypt_roundtrip() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let plaintext = b"hello arbiter";
let aead_id = actor
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
.await
.unwrap();
let mut decrypted = actor.decrypt(aead_id).await.unwrap();
assert_eq!(*decrypted.read().unwrap(), plaintext);
}
#[tokio::test]
#[test_log::test]
async fn test_decrypt_nonexistent_returns_not_found() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let err = actor.decrypt(9999).await.unwrap_err();
assert!(matches!(err, Error::NotFound));
}
#[tokio::test]
#[test_log::test]
async fn test_ciphertext_differs_across_entries() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let plaintext = b"same content";
let id1 = actor
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
.await
.unwrap();
let id2 = actor
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
.await
.unwrap();
let mut conn = db.get().await.unwrap();
let row1: models::AeadEncrypted = schema::aead_encrypted::table
.filter(schema::aead_encrypted::id.eq(id1))
.select(models::AeadEncrypted::as_select())
.first(&mut conn)
.await
.unwrap();
let row2: models::AeadEncrypted = schema::aead_encrypted::table
.filter(schema::aead_encrypted::id.eq(id2))
.select(models::AeadEncrypted::as_select())
.first(&mut conn)
.await
.unwrap();
assert_ne!(row1.ciphertext, row2.ciphertext);
let mut d1 = actor.decrypt(id1).await.unwrap();
let mut d2 = actor.decrypt(id2).await.unwrap();
assert_eq!(*d1.read().unwrap(), plaintext);
assert_eq!(*d2.read().unwrap(), plaintext);
}
#[tokio::test]
#[test_log::test]
async fn test_nonce_never_reused() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let n = 5;
for i in 0..n {
actor
.create_new(MemSafe::new(format!("secret {i}").into_bytes()).unwrap())
.await
.unwrap();
}
let mut conn = db.get().await.unwrap();
let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
.select(models::AeadEncrypted::as_select())
.load(&mut conn)
.await
.unwrap();
assert_eq!(rows.len(), n);
let nonces: Vec<&Vec<u8>> = rows.iter().map(|r| &r.current_nonce).collect();
let unique: HashSet<&Vec<u8>> = nonces.iter().copied().collect();
assert_eq!(nonces.len(), unique.len(), "all nonces must be unique");
for (i, row) in rows.iter().enumerate() {
let mut expected = v1::Nonce::default();
for _ in 0..=i {
expected.increment();
}
assert_eq!(row.current_nonce, expected.to_vec(), "nonce {i} mismatch");
}
let root_row: models::RootKeyHistory = schema::root_key_history::table
.select(models::RootKeyHistory::as_select())
.first(&mut conn)
.await
.unwrap();
let last_nonce = &rows.last().unwrap().current_nonce;
assert_eq!(&root_row.data_encryption_nonce, last_nonce);
}
#[tokio::test]
#[test_log::test]
async fn broken_db_nonce_format_fails_closed() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let root_key_history_id = common::root_key_history_id(&db).await;
let mut conn = db.get().await.unwrap();
update(
schema::root_key_history::table
.filter(schema::root_key_history::id.eq(root_key_history_id)),
)
.set(schema::root_key_history::data_encryption_nonce.eq(vec![1, 2, 3]))
.execute(&mut conn)
.await
.unwrap();
drop(conn);
let err = actor
.create_new(MemSafe::new(b"must fail".to_vec()).unwrap())
.await
.unwrap_err();
assert!(matches!(err, Error::BrokenDatabase));
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let id = actor
.create_new(MemSafe::new(b"decrypt target".to_vec()).unwrap())
.await
.unwrap();
let mut conn = db.get().await.unwrap();
update(schema::aead_encrypted::table.filter(schema::aead_encrypted::id.eq(id)))
.set(schema::aead_encrypted::current_nonce.eq(vec![7, 8]))
.execute(&mut conn)
.await
.unwrap();
drop(conn);
let err = actor.decrypt(id).await.unwrap_err();
assert!(matches!(err, Error::BrokenDatabase));
}

View File

@@ -0,0 +1,6 @@
mod common;
#[path = "user_agent/auth.rs"]
mod auth;
#[path = "user_agent/unseal.rs"]
mod unseal;

View File

@@ -0,0 +1,161 @@
use arbiter_proto::proto::user_agent::{
AuthChallengeRequest, AuthChallengeSolution, UserAgentRequest,
user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
};
use arbiter_proto::transport::Bi;
use arbiter_server::{
actors::{
GlobalActors,
bootstrap::GetToken,
user_agent::{ConnectionProps, connect_user_agent},
},
db::{self, schema},
};
use diesel::{ExpressionMethods as _, QueryDsl, insert_into};
use diesel_async::RunQueryDsl;
use ed25519_dalek::Signer as _;
use super::common::ChannelTransport;
#[tokio::test]
#[test_log::test]
pub async fn test_bootstrap_token_auth() {
let db = db::create_test_pool().await;
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
let token = actors.bootstrapper.ask(GetToken).await.unwrap().unwrap();
let (server_transport, mut test_transport) = ChannelTransport::new();
let props = ConnectionProps::new(db.clone(), actors, Box::new(server_transport));
let task = tokio::spawn(connect_user_agent(props));
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
test_transport
.send(UserAgentRequest {
payload: Some(UserAgentRequestPayload::AuthChallengeRequest(
AuthChallengeRequest {
pubkey: pubkey_bytes,
bootstrap_token: Some(token),
},
)),
})
.await
.unwrap();
task.await.unwrap();
let mut conn = db.get().await.unwrap();
let stored_pubkey: Vec<u8> = schema::useragent_client::table
.select(schema::useragent_client::public_key)
.first::<Vec<u8>>(&mut conn)
.await
.unwrap();
assert_eq!(stored_pubkey, new_key.verifying_key().to_bytes().to_vec());
}
#[tokio::test]
#[test_log::test]
pub async fn test_bootstrap_invalid_token_auth() {
let db = db::create_test_pool().await;
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
let (server_transport, mut test_transport) = ChannelTransport::new();
let props = ConnectionProps::new(db.clone(), actors, Box::new(server_transport));
let task = tokio::spawn(connect_user_agent(props));
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
test_transport
.send(UserAgentRequest {
payload: Some(UserAgentRequestPayload::AuthChallengeRequest(
AuthChallengeRequest {
pubkey: pubkey_bytes,
bootstrap_token: Some("invalid_token".to_string()),
},
)),
})
.await
.unwrap();
// Auth fails, connect_user_agent returns, transport drops
task.await.unwrap();
// Verify no key was registered
let mut conn = db.get().await.unwrap();
let count: i64 = schema::useragent_client::table
.count()
.get_result::<i64>(&mut conn)
.await
.unwrap();
assert_eq!(count, 0);
}
#[tokio::test]
#[test_log::test]
pub async fn test_challenge_auth() {
let db = db::create_test_pool().await;
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
{
let mut conn = db.get().await.unwrap();
insert_into(schema::useragent_client::table)
.values(schema::useragent_client::public_key.eq(pubkey_bytes.clone()))
.execute(&mut conn)
.await
.unwrap();
}
let (server_transport, mut test_transport) = ChannelTransport::new();
let props = ConnectionProps::new(db.clone(), actors, Box::new(server_transport));
let task = tokio::spawn(connect_user_agent(props));
// Send challenge request
test_transport
.send(UserAgentRequest {
payload: Some(UserAgentRequestPayload::AuthChallengeRequest(
AuthChallengeRequest {
pubkey: pubkey_bytes,
bootstrap_token: None,
},
)),
})
.await
.unwrap();
// Read the challenge response
let response = test_transport
.recv()
.await
.expect("should receive challenge");
let challenge = match response {
Ok(resp) => match resp.payload {
Some(UserAgentResponsePayload::AuthChallenge(c)) => c,
other => panic!("Expected AuthChallenge, got {other:?}"),
},
Err(err) => panic!("Expected Ok response, got Err({err:?})"),
};
// Sign the challenge and send solution
let formatted_challenge = arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
let signature = new_key.sign(&formatted_challenge);
test_transport
.send(UserAgentRequest {
payload: Some(UserAgentRequestPayload::AuthChallengeSolution(
AuthChallengeSolution {
signature: signature.to_bytes().to_vec(),
},
)),
})
.await
.unwrap();
// Auth completes, session spawned
task.await.unwrap();
}

View File

@@ -0,0 +1,184 @@
use arbiter_proto::proto::user_agent::{
UnsealEncryptedKey, UnsealResult, UnsealStart, UserAgentRequest,
user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
};
use arbiter_server::{
actors::{
GlobalActors,
keyholder::{Bootstrap, Seal},
user_agent::session::UserAgentSession,
},
db,
};
use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
use memsafe::MemSafe;
use x25519_dalek::{EphemeralSecret, PublicKey};
async fn setup_sealed_user_agent(
seal_key: &[u8],
) -> (db::DatabasePool, UserAgentSession) {
let db = db::create_test_pool().await;
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
actors
.key_holder
.ask(Bootstrap {
seal_key_raw: MemSafe::new(seal_key.to_vec()).unwrap(),
})
.await
.unwrap();
actors.key_holder.ask(Seal).await.unwrap();
let session = UserAgentSession::new_test(db.clone(), actors);
(db, session)
}
async fn client_dh_encrypt(
user_agent: &mut UserAgentSession,
key_to_send: &[u8],
) -> UnsealEncryptedKey {
let client_secret = EphemeralSecret::random();
let client_public = PublicKey::from(&client_secret);
let response = user_agent
.process_transport_inbound(UserAgentRequest {
payload: Some(UserAgentRequestPayload::UnsealStart(UnsealStart {
client_pubkey: client_public.as_bytes().to_vec(),
})),
})
.await
.unwrap();
let server_pubkey = match response.payload.unwrap() {
UserAgentResponsePayload::UnsealStartResponse(resp) => resp.server_pubkey,
other => panic!("Expected UnsealStartResponse, got {other:?}"),
};
let server_public = PublicKey::from(<[u8; 32]>::try_from(server_pubkey.as_slice()).unwrap());
let shared_secret = client_secret.diffie_hellman(&server_public);
let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());
let nonce = XNonce::from([0u8; 24]);
let associated_data = b"unseal";
let mut ciphertext = key_to_send.to_vec();
cipher
.encrypt_in_place(&nonce, associated_data, &mut ciphertext)
.unwrap();
UnsealEncryptedKey {
nonce: nonce.to_vec(),
ciphertext,
associated_data: associated_data.to_vec(),
}
}
fn unseal_key_request(req: UnsealEncryptedKey) -> UserAgentRequest {
UserAgentRequest {
payload: Some(UserAgentRequestPayload::UnsealEncryptedKey(req)),
}
}
#[tokio::test]
#[test_log::test]
pub async fn test_unseal_success() {
let seal_key = b"test-seal-key";
let (_db, mut user_agent) = setup_sealed_user_agent(seal_key).await;
let encrypted_key = client_dh_encrypt(&mut user_agent, seal_key).await;
let response = user_agent
.process_transport_inbound(unseal_key_request(encrypted_key))
.await
.unwrap();
assert_eq!(
response.payload.unwrap(),
UserAgentResponsePayload::UnsealResult(UnsealResult::Success.into()),
);
}
#[tokio::test]
#[test_log::test]
pub async fn test_unseal_wrong_seal_key() {
let (_db, mut user_agent) = setup_sealed_user_agent(b"correct-key").await;
let encrypted_key = client_dh_encrypt(&mut user_agent, b"wrong-key").await;
let response = user_agent
.process_transport_inbound(unseal_key_request(encrypted_key))
.await
.unwrap();
assert_eq!(
response.payload.unwrap(),
UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
);
}
#[tokio::test]
#[test_log::test]
pub async fn test_unseal_corrupted_ciphertext() {
let (_db, mut user_agent) = setup_sealed_user_agent(b"test-key").await;
let client_secret = EphemeralSecret::random();
let client_public = PublicKey::from(&client_secret);
user_agent
.process_transport_inbound(UserAgentRequest {
payload: Some(UserAgentRequestPayload::UnsealStart(UnsealStart {
client_pubkey: client_public.as_bytes().to_vec(),
})),
})
.await
.unwrap();
let response = user_agent
.process_transport_inbound(unseal_key_request(UnsealEncryptedKey {
nonce: vec![0u8; 24],
ciphertext: vec![0u8; 32],
associated_data: vec![],
}))
.await
.unwrap();
assert_eq!(
response.payload.unwrap(),
UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
);
}
#[tokio::test]
#[test_log::test]
pub async fn test_unseal_retry_after_invalid_key() {
let seal_key = b"real-seal-key";
let (_db, mut user_agent) = setup_sealed_user_agent(seal_key).await;
{
let encrypted_key = client_dh_encrypt(&mut user_agent, b"wrong-key").await;
let response = user_agent
.process_transport_inbound(unseal_key_request(encrypted_key))
.await
.unwrap();
assert_eq!(
response.payload.unwrap(),
UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
);
}
{
let encrypted_key = client_dh_encrypt(&mut user_agent, seal_key).await;
let response = user_agent
.process_transport_inbound(unseal_key_request(encrypted_key))
.await
.unwrap();
assert_eq!(
response.payload.unwrap(),
UserAgentResponsePayload::UnsealResult(UnsealResult::Success.into()),
);
}
}

View File

@@ -2,5 +2,20 @@
name = "arbiter-useragent"
version = "0.1.0"
edition = "2024"
license = "Apache-2.0"
[dependencies]
arbiter-proto.path = "../arbiter-proto"
kameo.workspace = true
tokio = {workspace = true, features = ["net"]}
tonic.workspace = true
tonic.features = ["tls-aws-lc"]
tracing.workspace = true
ed25519-dalek.workspace = true
smlang.workspace = true
x25519-dalek.workspace = true
thiserror.workspace = true
tokio-stream.workspace = true
http = "1.4.0"
rustls-webpki = { version = "0.103.9", features = ["aws-lc-rs"] }
async-trait.workspace = true

View File

@@ -0,0 +1,72 @@
use arbiter_proto::{
proto::{
user_agent::{UserAgentRequest, UserAgentResponse},
arbiter_service_client::ArbiterServiceClient,
},
transport::{IdentityRecvConverter, IdentitySendConverter, grpc},
url::ArbiterUrl,
};
use ed25519_dalek::SigningKey;
use kameo::actor::{ActorRef, Spawn};
use tokio::sync::mpsc;
use tokio_stream::wrappers::ReceiverStream;
use tonic::transport::ClientTlsConfig;
#[derive(Debug, thiserror::Error)]
pub enum ConnectError {
#[error("Could establish connection")]
Connection(#[from] tonic::transport::Error),
#[error("Invalid server URI")]
InvalidUri(#[from] http::uri::InvalidUri),
#[error("Invalid CA certificate")]
InvalidCaCert(#[from] webpki::Error),
#[error("gRPC error")]
Grpc(#[from] tonic::Status),
}
use super::UserAgentActor;
pub type UserAgentGrpc = ActorRef<
UserAgentActor<
grpc::GrpcAdapter<
IdentityRecvConverter<UserAgentResponse>,
IdentitySendConverter<UserAgentRequest>,
>,
>,
>;
pub async fn connect_grpc(
url: ArbiterUrl,
key: SigningKey,
) -> Result<UserAgentGrpc, ConnectError> {
let bootstrap_token = url.bootstrap_token.clone();
let anchor = webpki::anchor_from_trusted_cert(&url.ca_cert)?.to_owned();
let tls = ClientTlsConfig::new().trust_anchor(anchor);
// TODO: if `host` is localhost, we need to verify server's process authenticity
let channel = tonic::transport::Channel::from_shared(format!("{}:{}", url.host, url.port))?
.tls_config(tls)?
.connect()
.await?;
let mut client = ArbiterServiceClient::new(channel);
let (tx, rx) = mpsc::channel(16);
let bistream = client.user_agent(ReceiverStream::new(rx)).await?;
let bistream = bistream.into_inner();
let adapter = grpc::GrpcAdapter::new(
tx,
bistream,
IdentityRecvConverter::new(),
IdentitySendConverter::new(),
);
let actor = UserAgentActor::spawn(UserAgentActor::new(key, bootstrap_token, adapter));
Ok(actor)
}
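A minimal sketch of how a user-agent binary might drive this (hypothetical: `ca_cert` and `token` are obtained out of band, e.g. from the server's printed URL, and the key would normally be persisted rather than generated per run):
let key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let url = ArbiterUrl {
    host: "127.0.0.1".into(),
    port: 50051,
    ca_cert,                      // CertificateDer<'static> of the instance CA
    bootstrap_token: Some(token), // only required for first-time registration
};
let actor = connect_grpc(url, key).await?;
// `actor` is an ActorRef<UserAgentActor<_>>; the auth handshake starts automatically in on_start.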

View File

@@ -1,14 +1,195 @@
pub fn add(left: u64, right: u64) -> u64 {
left + right
}
use arbiter_proto::{
format_challenge,
proto::user_agent::{
AuthChallengeRequest, AuthChallengeSolution, AuthOk,
UserAgentRequest, UserAgentResponse,
user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
},
transport::Bi,
};
use ed25519_dalek::{Signer, SigningKey};
use kameo::{Actor, actor::ActorRef};
use smlang::statemachine;
use tokio::select;
use tracing::{error, info};
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn it_works() {
let result = add(2, 2);
assert_eq!(result, 4);
statemachine! {
name: UserAgent,
custom_error: false,
transitions: {
*Init + SentAuthChallengeRequest = WaitingForServerAuth,
WaitingForServerAuth + ReceivedAuthChallenge = WaitingForAuthOk,
WaitingForServerAuth + ReceivedAuthOk = Authenticated,
WaitingForAuthOk + ReceivedAuthOk = Authenticated,
}
}
pub struct DummyContext;
impl UserAgentStateMachineContext for DummyContext {}
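The transition table above encodes both auth paths; a minimal sketch of how the generated machine behaves under it (assuming smlang's default error handling, since custom_error is false):
let mut sm = UserAgentStateMachine::new(DummyContext);
assert!(sm.process_event(UserAgentEvents::ReceivedAuthOk).is_err()); // no such transition from Init
assert!(sm.process_event(UserAgentEvents::SentAuthChallengeRequest).is_ok()); // Init -> WaitingForServerAuth
assert!(sm.process_event(UserAgentEvents::ReceivedAuthOk).is_ok()); // bootstrap path: server may skip the challenge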
#[derive(Debug, thiserror::Error)]
pub enum InboundError {
#[error("Invalid user agent response")]
InvalidResponse,
#[error("Expected response payload")]
MissingResponsePayload,
#[error("Unexpected response payload")]
UnexpectedResponsePayload,
#[error("Invalid state for auth challenge")]
InvalidStateForAuthChallenge,
#[error("Invalid state for auth ok")]
InvalidStateForAuthOk,
#[error("State machine error")]
StateTransitionFailed,
#[error("Transport send failed")]
TransportSendFailed,
}
pub struct UserAgentActor<Transport>
where
Transport: Bi<UserAgentResponse, UserAgentRequest>,
{
key: SigningKey,
bootstrap_token: Option<String>,
state: UserAgentStateMachine<DummyContext>,
transport: Transport,
}
impl<Transport> UserAgentActor<Transport>
where
Transport: Bi<UserAgentResponse, UserAgentRequest>,
{
pub fn new(key: SigningKey, bootstrap_token: Option<String>, transport: Transport) -> Self {
Self {
key,
bootstrap_token,
state: UserAgentStateMachine::new(DummyContext),
transport,
}
}
fn transition(&mut self, event: UserAgentEvents) -> Result<(), InboundError> {
self.state.process_event(event).map_err(|e| {
error!(?e, "useragent state transition failed");
InboundError::StateTransitionFailed
})?;
Ok(())
}
async fn send_auth_challenge_request(&mut self) -> Result<(), InboundError> {
let req = AuthChallengeRequest {
pubkey: self.key.verifying_key().to_bytes().to_vec(),
bootstrap_token: self.bootstrap_token.take(),
};
self.transition(UserAgentEvents::SentAuthChallengeRequest)?;
self.transport
.send(UserAgentRequest {
payload: Some(UserAgentRequestPayload::AuthChallengeRequest(req)),
})
.await
.map_err(|_| InboundError::TransportSendFailed)?;
info!(actor = "useragent", "auth.request.sent");
Ok(())
}
async fn handle_auth_challenge(
&mut self,
challenge: arbiter_proto::proto::user_agent::AuthChallenge,
) -> Result<(), InboundError> {
self.transition(UserAgentEvents::ReceivedAuthChallenge)?;
let formatted = format_challenge(challenge.nonce, &challenge.pubkey);
let signature = self.key.sign(&formatted);
let solution = AuthChallengeSolution {
signature: signature.to_bytes().to_vec(),
};
self.transport
.send(UserAgentRequest {
payload: Some(UserAgentRequestPayload::AuthChallengeSolution(solution)),
})
.await
.map_err(|_| InboundError::TransportSendFailed)?;
info!(actor = "useragent", "auth.solution.sent");
Ok(())
}
fn handle_auth_ok(&mut self, _ok: AuthOk) -> Result<(), InboundError> {
self.transition(UserAgentEvents::ReceivedAuthOk)?;
info!(actor = "useragent", "auth.ok");
Ok(())
}
pub async fn process_inbound_transport(
&mut self,
inbound: UserAgentResponse
) -> Result<(), InboundError> {
let payload = inbound
.payload
.ok_or(InboundError::MissingResponsePayload)?;
match payload {
UserAgentResponsePayload::AuthChallenge(challenge) => {
self.handle_auth_challenge(challenge).await
}
UserAgentResponsePayload::AuthOk(ok) => self.handle_auth_ok(ok),
_ => Err(InboundError::UnexpectedResponsePayload),
}
}
}
impl<Transport> Actor for UserAgentActor<Transport>
where
Transport: Bi<UserAgentResponse, UserAgentRequest>,
{
type Args = Self;
type Error = ();
async fn on_start(
mut args: Self::Args,
_actor_ref: ActorRef<Self>,
) -> Result<Self, Self::Error> {
if let Err(err) = args.send_auth_challenge_request().await {
error!(?err, actor = "useragent", "auth.start.failed");
return Err(());
}
Ok(args)
}
async fn next(
&mut self,
_actor_ref: kameo::prelude::WeakActorRef<Self>,
mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
) -> Option<kameo::mailbox::Signal<Self>> {
loop {
select! {
signal = mailbox_rx.recv() => {
return signal;
}
inbound = self.transport.recv() => {
match inbound {
Some(inbound) => {
if let Err(err) = self.process_inbound_transport(inbound).await {
error!(?err, actor = "useragent", "transport.inbound.failed");
return Some(kameo::mailbox::Signal::Stop);
}
}
None => {
info!(actor = "useragent", "transport.closed");
return Some(kameo::mailbox::Signal::Stop);
}
}
}
}
}
}
}
mod grpc;
pub use grpc::{connect_grpc, ConnectError};

View File

@@ -0,0 +1,141 @@
use arbiter_proto::{
    format_challenge,
    proto::user_agent::{
        AuthChallenge, AuthOk, UserAgentRequest, UserAgentResponse,
        user_agent_request::Payload as UserAgentRequestPayload,
        user_agent_response::Payload as UserAgentResponsePayload,
    },
    transport::Bi,
};
use arbiter_useragent::UserAgentActor;
use async_trait::async_trait;
use ed25519_dalek::SigningKey;
use kameo::actor::Spawn;
use tokio::sync::mpsc;
use tokio::time::{Duration, timeout};

struct TestTransport {
    inbound_rx: mpsc::Receiver<UserAgentResponse>,
    outbound_tx: mpsc::Sender<UserAgentRequest>,
}

#[async_trait]
impl Bi<UserAgentResponse, UserAgentRequest> for TestTransport {
    async fn send(&mut self, item: UserAgentRequest) -> Result<(), arbiter_proto::transport::Error> {
        self.outbound_tx
            .send(item)
            .await
            .map_err(|_| arbiter_proto::transport::Error::ChannelClosed)
    }

    async fn recv(&mut self) -> Option<UserAgentResponse> {
        self.inbound_rx.recv().await
    }
}

fn make_transport() -> (
    TestTransport,
    mpsc::Sender<UserAgentResponse>,
    mpsc::Receiver<UserAgentRequest>,
) {
    let (inbound_tx, inbound_rx) = mpsc::channel(8);
    let (outbound_tx, outbound_rx) = mpsc::channel(8);
    (
        TestTransport {
            inbound_rx,
            outbound_tx,
        },
        inbound_tx,
        outbound_rx,
    )
}

fn test_key() -> SigningKey {
    SigningKey::from_bytes(&[7u8; 32])
}

#[tokio::test]
async fn sends_auth_request_on_start_with_bootstrap_token() {
    let key = test_key();
    let pubkey = key.verifying_key().to_bytes().to_vec();
    let bootstrap_token = Some("bootstrap-123".to_string());
    let (transport, inbound_tx, mut outbound_rx) = make_transport();
    let actor = UserAgentActor::spawn(UserAgentActor::new(key, bootstrap_token.clone(), transport));

    let outbound = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for auth request")
        .expect("channel closed before auth request");
    let UserAgentRequest {
        payload: Some(UserAgentRequestPayload::AuthChallengeRequest(req)),
    } = outbound
    else {
        panic!("expected auth challenge request");
    };
    assert_eq!(req.pubkey, pubkey);
    assert_eq!(req.bootstrap_token, bootstrap_token);

    drop(inbound_tx);
    drop(actor);
}

#[tokio::test]
async fn challenge_flow_sends_solution_from_transport_inbound() {
    let key = test_key();
    let verify_key = key.verifying_key();
    let (transport, inbound_tx, mut outbound_rx) = make_transport();
    let actor = UserAgentActor::spawn(UserAgentActor::new(key, None, transport));

    let _initial_auth_request = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for initial auth request")
        .expect("missing initial auth request");

    let challenge = AuthChallenge {
        pubkey: verify_key.to_bytes().to_vec(),
        nonce: 42,
    };
    inbound_tx
        .send(UserAgentResponse {
            payload: Some(UserAgentResponsePayload::AuthChallenge(challenge.clone())),
        })
        .await
        .unwrap();

    let outbound = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for challenge solution")
        .expect("missing challenge solution");
    let UserAgentRequest {
        payload: Some(UserAgentRequestPayload::AuthChallengeSolution(solution)),
    } = outbound
    else {
        panic!("expected auth challenge solution");
    };

    let formatted = format_challenge(challenge.nonce, &challenge.pubkey);
    let sig: ed25519_dalek::Signature = solution
        .signature
        .as_slice()
        .try_into()
        .expect("signature bytes length");
    verify_key
        .verify_strict(&formatted, &sig)
        .expect("solution signature should verify");

    inbound_tx
        .send(UserAgentResponse {
            payload: Some(UserAgentResponsePayload::AuthOk(AuthOk {})),
        })
        .await
        .unwrap();

    drop(inbound_tx);
    drop(actor);
}
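A possible follow-up test under the same harness, sketched here rather than taken from the diff: it relies on the actor treating transport closure as a stop signal (per the None arm in next()), and observes shutdown indirectly by waiting for the outbound channel to close once the actor's state, including its sender, is dropped.

#[tokio::test]
async fn stops_when_transport_closes() {
    let key = test_key();
    let (transport, inbound_tx, mut outbound_rx) = make_transport();
    let _actor = UserAgentActor::spawn(UserAgentActor::new(key, None, transport));

    // Consume the initial auth request produced by on_start.
    let _ = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for initial auth request");

    // Closing the inbound side makes transport.recv() yield None; next() then
    // returns Signal::Stop, the actor state (and its outbound sender) is dropped,
    // and the outbound channel closes. Assumption: the actor state is dropped
    // promptly after the stop signal.
    drop(inbound_tx);
    let closed = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for outbound channel to close");
    assert!(closed.is_none(), "outbound side should close after the actor stops");
}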


@@ -1,4 +1,95 @@
# cargo-vet audits file
[audits]
[[audits.similar]]
who = "hdbg <httpdebugger@protonmail.com>"
criteria = "safe-to-deploy"
version = "2.2.1"
[[audits.test-log]]
who = "hdbg <httpdebugger@protonmail.com>"
criteria = "safe-to-deploy"
delta = "0.2.18 -> 0.2.19"
[[audits.test-log-macros]]
who = "hdbg <httpdebugger@protonmail.com>"
criteria = "safe-to-deploy"
delta = "0.2.18 -> 0.2.19"
[[trusted.cc]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2022-10-29"
end = "2027-02-16"
[[trusted.h2]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2019-03-13"
end = "2027-02-14"
[[trusted.hashbrown]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2025-04-30"
end = "2027-02-14"
[[trusted.hyper-util]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2022-01-15"
end = "2027-02-14"
[[trusted.libc]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2024-08-15"
end = "2027-02-16"
[[trusted.rustix]]
criteria = "safe-to-deploy"
user-id = 6825 # Dan Gohman (sunfishcode)
start = "2021-10-29"
end = "2027-02-14"
[[trusted.serde_json]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-02-28"
end = "2027-02-14"
[[trusted.syn]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-03-01"
end = "2027-02-14"
[[trusted.thread_local]]
criteria = "safe-to-deploy"
user-id = 2915 # Amanieu d'Antras (Amanieu)
start = "2019-09-07"
end = "2027-02-16"
[[trusted.toml]]
criteria = "safe-to-deploy"
user-id = 6743 # Ed Page (epage)
start = "2022-12-14"
end = "2027-02-16"
[[trusted.toml_parser]]
criteria = "safe-to-deploy"
user-id = 6743 # Ed Page (epage)
start = "2025-07-08"
end = "2027-02-16"
[[trusted.tonic-build]]
criteria = "safe-to-deploy"
user-id = 10
start = "2019-09-10"
end = "2027-02-16"
[[trusted.windows-sys]]
criteria = "safe-to-deploy"
user-id = 64539 # Kenny Kerr (kennykerr)
start = "2021-11-15"
end = "2027-02-16"


@@ -3,3 +3,879 @@
[cargo-vet]
version = "0.10"
[imports.bytecode-alliance]
url = "https://raw.githubusercontent.com/bytecodealliance/wasmtime/main/supply-chain/audits.toml"
[imports.google]
url = "https://raw.githubusercontent.com/google/supply-chain/main/audits.toml"
[imports.mozilla]
url = "https://raw.githubusercontent.com/mozilla/supply-chain/main/audits.toml"
[imports.zcash]
url = "https://raw.githubusercontent.com/zcash/rust-ecosystem/main/supply-chain/audits.toml"
[[exemptions.addr2line]]
version = "0.25.1"
criteria = "safe-to-deploy"
[[exemptions.aho-corasick]]
version = "1.1.4"
criteria = "safe-to-deploy"
[[exemptions.anyhow]]
version = "1.0.101"
criteria = "safe-to-deploy"
[[exemptions.asn1-rs]]
version = "0.7.1"
criteria = "safe-to-deploy"
[[exemptions.asn1-rs-derive]]
version = "0.6.0"
criteria = "safe-to-deploy"
[[exemptions.asn1-rs-impl]]
version = "0.2.0"
criteria = "safe-to-deploy"
[[exemptions.async-trait]]
version = "0.1.89"
criteria = "safe-to-deploy"
[[exemptions.aws-lc-rs]]
version = "1.15.4"
criteria = "safe-to-deploy"
[[exemptions.aws-lc-sys]]
version = "0.37.0"
criteria = "safe-to-deploy"
[[exemptions.axum]]
version = "0.8.8"
criteria = "safe-to-deploy"
[[exemptions.axum-core]]
version = "0.5.6"
criteria = "safe-to-deploy"
[[exemptions.backtrace]]
version = "0.3.76"
criteria = "safe-to-deploy"
[[exemptions.backtrace-ext]]
version = "0.2.1"
criteria = "safe-to-deploy"
[[exemptions.bb8]]
version = "0.9.1"
criteria = "safe-to-deploy"
[[exemptions.bitflags]]
version = "2.10.0"
criteria = "safe-to-deploy"
[[exemptions.block-buffer]]
version = "0.11.0"
criteria = "safe-to-deploy"
[[exemptions.bytes]]
version = "1.11.1"
criteria = "safe-to-deploy"
[[exemptions.cc]]
version = "1.2.55"
criteria = "safe-to-deploy"
[[exemptions.cfg-if]]
version = "1.0.4"
criteria = "safe-to-deploy"
[[exemptions.chacha20]]
version = "0.10.0"
criteria = "safe-to-deploy"
[[exemptions.chrono]]
version = "0.4.43"
criteria = "safe-to-deploy"
[[exemptions.cmake]]
version = "0.1.57"
criteria = "safe-to-deploy"
[[exemptions.cpufeatures]]
version = "0.2.17"
criteria = "safe-to-deploy"
[[exemptions.cpufeatures]]
version = "0.3.0"
criteria = "safe-to-deploy"
[[exemptions.crc32fast]]
version = "1.5.0"
criteria = "safe-to-deploy"
[[exemptions.crossbeam-utils]]
version = "0.8.21"
criteria = "safe-to-deploy"
[[exemptions.crypto-common]]
version = "0.2.0"
criteria = "safe-to-deploy"
[[exemptions.curve25519-dalek]]
version = "5.0.0-pre.6"
criteria = "safe-to-deploy"
[[exemptions.curve25519-dalek-derive]]
version = "0.1.1"
criteria = "safe-to-deploy"
[[exemptions.darling]]
version = "0.21.3"
criteria = "safe-to-deploy"
[[exemptions.darling_core]]
version = "0.21.3"
criteria = "safe-to-deploy"
[[exemptions.darling_macro]]
version = "0.21.3"
criteria = "safe-to-deploy"
[[exemptions.dashmap]]
version = "6.1.0"
criteria = "safe-to-deploy"
[[exemptions.data-encoding]]
version = "2.10.0"
criteria = "safe-to-deploy"
[[exemptions.der-parser]]
version = "10.0.0"
criteria = "safe-to-deploy"
[[exemptions.deranged]]
version = "0.5.5"
criteria = "safe-to-deploy"
[[exemptions.diesel]]
version = "2.3.6"
criteria = "safe-to-deploy"
[[exemptions.diesel-async]]
version = "0.7.4"
criteria = "safe-to-deploy"
[[exemptions.diesel_derives]]
version = "2.3.7"
criteria = "safe-to-deploy"
[[exemptions.diesel_migrations]]
version = "2.3.1"
criteria = "safe-to-deploy"
[[exemptions.diesel_table_macro_syntax]]
version = "0.3.0"
criteria = "safe-to-deploy"
[[exemptions.digest]]
version = "0.11.0-rc.11"
criteria = "safe-to-deploy"
[[exemptions.downcast-rs]]
version = "2.0.2"
criteria = "safe-to-deploy"
[[exemptions.dsl_auto_type]]
version = "0.2.0"
criteria = "safe-to-deploy"
[[exemptions.dyn-clone]]
version = "1.0.20"
criteria = "safe-to-deploy"
[[exemptions.ed25519]]
version = "3.0.0-rc.4"
criteria = "safe-to-deploy"
[[exemptions.ed25519-dalek]]
version = "3.0.0-pre.6"
criteria = "safe-to-deploy"
[[exemptions.fiat-crypto]]
version = "0.3.0"
criteria = "safe-to-deploy"
[[exemptions.find-msvc-tools]]
version = "0.1.9"
criteria = "safe-to-deploy"
[[exemptions.fixedbitset]]
version = "0.5.7"
criteria = "safe-to-deploy"
[[exemptions.flate2]]
version = "1.1.9"
criteria = "safe-to-deploy"
[[exemptions.fs_extra]]
version = "1.3.0"
criteria = "safe-to-deploy"
[[exemptions.futures-task]]
version = "0.3.31"
criteria = "safe-to-deploy"
[[exemptions.futures-util]]
version = "0.3.31"
criteria = "safe-to-deploy"
[[exemptions.getrandom]]
version = "0.2.17"
criteria = "safe-to-deploy"
[[exemptions.getrandom]]
version = "0.3.4"
criteria = "safe-to-deploy"
[[exemptions.getrandom]]
version = "0.4.1"
criteria = "safe-to-deploy"
[[exemptions.hashbrown]]
version = "0.14.5"
criteria = "safe-to-deploy"
[[exemptions.http]]
version = "1.4.0"
criteria = "safe-to-deploy"
[[exemptions.http-body-util]]
version = "0.1.3"
criteria = "safe-to-deploy"
[[exemptions.httparse]]
version = "1.10.1"
criteria = "safe-to-deploy"
[[exemptions.hybrid-array]]
version = "0.4.7"
criteria = "safe-to-deploy"
[[exemptions.hyper]]
version = "1.8.1"
criteria = "safe-to-deploy"
[[exemptions.hyper-timeout]]
version = "0.5.2"
criteria = "safe-to-deploy"
[[exemptions.iana-time-zone]]
version = "0.1.65"
criteria = "safe-to-deploy"
[[exemptions.id-arena]]
version = "2.3.0"
criteria = "safe-to-deploy"
[[exemptions.ident_case]]
version = "1.0.1"
criteria = "safe-to-deploy"
[[exemptions.indexmap]]
version = "2.13.0"
criteria = "safe-to-deploy"
[[exemptions.is_ci]]
version = "1.2.0"
criteria = "safe-to-deploy"
[[exemptions.itertools]]
version = "0.14.0"
criteria = "safe-to-deploy"
[[exemptions.itoa]]
version = "1.0.17"
criteria = "safe-to-deploy"
[[exemptions.jobserver]]
version = "0.1.34"
criteria = "safe-to-deploy"
[[exemptions.js-sys]]
version = "0.3.85"
criteria = "safe-to-deploy"
[[exemptions.kameo]]
version = "0.19.2"
criteria = "safe-to-deploy"
[[exemptions.kameo_macros]]
version = "0.19.0"
criteria = "safe-to-deploy"
[[exemptions.libsqlite3-sys]]
version = "0.35.0"
criteria = "safe-to-deploy"
[[exemptions.linux-raw-sys]]
version = "0.11.0"
criteria = "safe-to-deploy"
[[exemptions.lock_api]]
version = "0.4.14"
criteria = "safe-to-deploy"
[[exemptions.log]]
version = "0.4.29"
criteria = "safe-to-deploy"
[[exemptions.matchit]]
version = "0.8.4"
criteria = "safe-to-deploy"
[[exemptions.memchr]]
version = "2.8.0"
criteria = "safe-to-deploy"
[[exemptions.memsafe]]
version = "0.4.0"
criteria = "safe-to-deploy"
[[exemptions.miette]]
version = "7.6.0"
criteria = "safe-to-deploy"
[[exemptions.miette-derive]]
version = "7.6.0"
criteria = "safe-to-deploy"
[[exemptions.migrations_internals]]
version = "2.3.0"
criteria = "safe-to-deploy"
[[exemptions.migrations_macros]]
version = "2.3.0"
criteria = "safe-to-deploy"
[[exemptions.mime]]
version = "0.3.17"
criteria = "safe-to-deploy"
[[exemptions.minimal-lexical]]
version = "0.2.1"
criteria = "safe-to-deploy"
[[exemptions.mio]]
version = "1.1.1"
criteria = "safe-to-deploy"
[[exemptions.multimap]]
version = "0.10.1"
criteria = "safe-to-deploy"
[[exemptions.num-bigint]]
version = "0.4.6"
criteria = "safe-to-deploy"
[[exemptions.num-conv]]
version = "0.2.0"
criteria = "safe-to-deploy"
[[exemptions.object]]
version = "0.37.3"
criteria = "safe-to-deploy"
[[exemptions.oid-registry]]
version = "0.8.1"
criteria = "safe-to-deploy"
[[exemptions.once_cell]]
version = "1.21.3"
criteria = "safe-to-deploy"
[[exemptions.owo-colors]]
version = "4.2.3"
criteria = "safe-to-deploy"
[[exemptions.parking_lot]]
version = "0.12.5"
criteria = "safe-to-deploy"
[[exemptions.parking_lot_core]]
version = "0.9.12"
criteria = "safe-to-deploy"
[[exemptions.pem]]
version = "3.0.6"
criteria = "safe-to-deploy"
[[exemptions.petgraph]]
version = "0.8.3"
criteria = "safe-to-deploy"
[[exemptions.pin-project]]
version = "1.1.10"
criteria = "safe-to-deploy"
[[exemptions.pin-project-internal]]
version = "1.1.10"
criteria = "safe-to-deploy"
[[exemptions.portable-atomic]]
version = "1.13.1"
criteria = "safe-to-deploy"
[[exemptions.prettyplease]]
version = "0.2.37"
criteria = "safe-to-deploy"
[[exemptions.proc-macro2]]
version = "1.0.106"
criteria = "safe-to-deploy"
[[exemptions.prost]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.prost-build]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.prost-derive]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.prost-types]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.pulldown-cmark]]
version = "0.13.0"
criteria = "safe-to-deploy"
[[exemptions.pulldown-cmark-to-cmark]]
version = "22.0.0"
criteria = "safe-to-deploy"
[[exemptions.quote]]
version = "1.0.44"
criteria = "safe-to-deploy"
[[exemptions.r-efi]]
version = "5.3.0"
criteria = "safe-to-deploy"
[[exemptions.rand]]
version = "0.10.0"
criteria = "safe-to-deploy"
[[exemptions.rand_core]]
version = "0.10.0"
criteria = "safe-to-deploy"
[[exemptions.rcgen]]
version = "0.14.7"
criteria = "safe-to-deploy"
[[exemptions.redox_syscall]]
version = "0.5.18"
criteria = "safe-to-deploy"
[[exemptions.regex]]
version = "1.12.3"
criteria = "safe-to-deploy"
[[exemptions.regex-automata]]
version = "0.4.14"
criteria = "safe-to-deploy"
[[exemptions.regex-syntax]]
version = "0.8.9"
criteria = "safe-to-deploy"
[[exemptions.ring]]
version = "0.17.14"
criteria = "safe-to-deploy"
[[exemptions.rsqlite-vfs]]
version = "0.1.0"
criteria = "safe-to-deploy"
[[exemptions.rustc-demangle]]
version = "0.1.27"
criteria = "safe-to-deploy"
[[exemptions.rusticata-macros]]
version = "4.1.0"
criteria = "safe-to-deploy"
[[exemptions.rustls]]
version = "0.23.36"
criteria = "safe-to-deploy"
[[exemptions.rustls-pki-types]]
version = "1.14.0"
criteria = "safe-to-deploy"
[[exemptions.rustls-webpki]]
version = "0.103.9"
criteria = "safe-to-deploy"
[[exemptions.scoped-futures]]
version = "0.1.4"
criteria = "safe-to-deploy"
[[exemptions.scopeguard]]
version = "1.2.0"
criteria = "safe-to-deploy"
[[exemptions.secrecy]]
version = "0.10.3"
criteria = "safe-to-deploy"
[[exemptions.semver]]
version = "1.0.27"
criteria = "safe-to-deploy"
[[exemptions.serde]]
version = "1.0.228"
criteria = "safe-to-deploy"
[[exemptions.serde_core]]
version = "1.0.228"
criteria = "safe-to-deploy"
[[exemptions.serde_derive]]
version = "1.0.228"
criteria = "safe-to-deploy"
[[exemptions.sha2]]
version = "0.11.0-rc.5"
criteria = "safe-to-deploy"
[[exemptions.signal-hook-registry]]
version = "1.4.8"
criteria = "safe-to-deploy"
[[exemptions.signature]]
version = "3.0.0-rc.10"
criteria = "safe-to-deploy"
[[exemptions.simd-adler32]]
version = "0.3.8"
criteria = "safe-to-deploy"
[[exemptions.slab]]
version = "0.4.12"
criteria = "safe-to-deploy"
[[exemptions.smlang]]
version = "0.8.0"
criteria = "safe-to-deploy"
[[exemptions.smlang-macros]]
version = "0.8.0"
criteria = "safe-to-deploy"
[[exemptions.socket2]]
version = "0.6.2"
criteria = "safe-to-deploy"
[[exemptions.sqlite-wasm-rs]]
version = "0.5.2"
criteria = "safe-to-deploy"
[[exemptions.string_morph]]
version = "0.1.0"
criteria = "safe-to-deploy"
[[exemptions.subtle]]
version = "2.6.1"
criteria = "safe-to-deploy"
[[exemptions.supports-color]]
version = "3.0.2"
criteria = "safe-to-deploy"
[[exemptions.supports-hyperlinks]]
version = "3.2.0"
criteria = "safe-to-deploy"
[[exemptions.supports-unicode]]
version = "3.0.0"
criteria = "safe-to-deploy"
[[exemptions.sync_wrapper]]
version = "1.0.2"
criteria = "safe-to-deploy"
[[exemptions.tempfile]]
version = "3.25.0"
criteria = "safe-to-deploy"
[[exemptions.terminal_size]]
version = "0.4.3"
criteria = "safe-to-deploy"
[[exemptions.thiserror]]
version = "2.0.18"
criteria = "safe-to-deploy"
[[exemptions.thiserror-impl]]
version = "2.0.18"
criteria = "safe-to-deploy"
[[exemptions.time]]
version = "0.3.47"
criteria = "safe-to-deploy"
[[exemptions.time-core]]
version = "0.1.8"
criteria = "safe-to-deploy"
[[exemptions.time-macros]]
version = "0.2.27"
criteria = "safe-to-deploy"
[[exemptions.tokio]]
version = "1.49.0"
criteria = "safe-to-deploy"
[[exemptions.tokio-macros]]
version = "2.6.0"
criteria = "safe-to-deploy"
[[exemptions.tokio-rustls]]
version = "0.26.4"
criteria = "safe-to-deploy"
[[exemptions.tokio-stream]]
version = "0.1.18"
criteria = "safe-to-deploy"
[[exemptions.tokio-util]]
version = "0.7.18"
criteria = "safe-to-deploy"
[[exemptions.tonic]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.tonic-build]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.tonic-prost]]
version = "0.14.4"
criteria = "safe-to-deploy"
[[exemptions.tonic-prost-build]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.tower]]
version = "0.5.3"
criteria = "safe-to-deploy"
[[exemptions.tower-layer]]
version = "0.3.3"
criteria = "safe-to-deploy"
[[exemptions.tower-service]]
version = "0.3.3"
criteria = "safe-to-deploy"
[[exemptions.tracing]]
version = "0.1.44"
criteria = "safe-to-deploy"
[[exemptions.tracing-attributes]]
version = "0.1.31"
criteria = "safe-to-deploy"
[[exemptions.tracing-core]]
version = "0.1.36"
criteria = "safe-to-deploy"
[[exemptions.tracing-subscriber]]
version = "0.3.22"
criteria = "safe-to-run"
[[exemptions.typenum]]
version = "1.19.0"
criteria = "safe-to-deploy"
[[exemptions.unicase]]
version = "2.9.0"
criteria = "safe-to-deploy"
[[exemptions.unicode-ident]]
version = "1.0.23"
criteria = "safe-to-deploy"
[[exemptions.untrusted]]
version = "0.7.1"
criteria = "safe-to-deploy"
[[exemptions.untrusted]]
version = "0.9.0"
criteria = "safe-to-deploy"
[[exemptions.uuid]]
version = "1.20.0"
criteria = "safe-to-deploy"
[[exemptions.wasi]]
version = "0.11.1+wasi-snapshot-preview1"
criteria = "safe-to-deploy"
[[exemptions.wasm-bindgen]]
version = "0.2.108"
criteria = "safe-to-deploy"
[[exemptions.wasm-bindgen-macro]]
version = "0.2.108"
criteria = "safe-to-deploy"
[[exemptions.wasm-bindgen-macro-support]]
version = "0.2.108"
criteria = "safe-to-deploy"
[[exemptions.wasm-bindgen-shared]]
version = "0.2.108"
criteria = "safe-to-deploy"
[[exemptions.winapi]]
version = "0.3.9"
criteria = "safe-to-deploy"
[[exemptions.winapi-i686-pc-windows-gnu]]
version = "0.4.0"
criteria = "safe-to-deploy"
[[exemptions.winapi-x86_64-pc-windows-gnu]]
version = "0.4.0"
criteria = "safe-to-deploy"
[[exemptions.windows-core]]
version = "0.62.2"
criteria = "safe-to-deploy"
[[exemptions.windows-implement]]
version = "0.60.2"
criteria = "safe-to-deploy"
[[exemptions.windows-interface]]
version = "0.59.3"
criteria = "safe-to-deploy"
[[exemptions.windows-result]]
version = "0.4.1"
criteria = "safe-to-deploy"
[[exemptions.windows-strings]]
version = "0.5.1"
criteria = "safe-to-deploy"
[[exemptions.windows-targets]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows-targets]]
version = "0.53.5"
criteria = "safe-to-deploy"
[[exemptions.windows_aarch64_gnullvm]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_aarch64_gnullvm]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_aarch64_msvc]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_aarch64_msvc]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_gnu]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_gnu]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_gnullvm]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_gnullvm]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_msvc]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_msvc]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_gnu]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_gnu]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_gnullvm]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_gnullvm]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_msvc]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_msvc]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.winnow]]
version = "0.7.14"
criteria = "safe-to-deploy"
[[exemptions.x509-parser]]
version = "0.18.1"
criteria = "safe-to-deploy"
[[exemptions.yasna]]
version = "0.5.2"
criteria = "safe-to-deploy"
[[exemptions.zmij]]
version = "1.0.20"
criteria = "safe-to-deploy"
[[exemptions.zstd]]
version = "0.13.3"
criteria = "safe-to-deploy"
[[exemptions.zstd-safe]]
version = "7.2.4"
criteria = "safe-to-deploy"
[[exemptions.zstd-sys]]
version = "2.0.16+zstd.1.5.7"
criteria = "safe-to-deploy"

One file diff was suppressed because it is too large and is not shown here.

Two image files changed; their sizes are unchanged (1.0 KiB and 2.2 KiB before and after).

Some files were not shown because too many files have changed in this diff.