28 Commits

Author SHA1 Message Date
hdbg
1b4369b1cb feat(transport): add domain error type to GrpcTransportActor
Some checks failed
ci/woodpecker/pr/server-audit Pipeline was successful
ci/woodpecker/pr/server-lint Pipeline failed
ci/woodpecker/pr/server-vet Pipeline failed
ci/woodpecker/pr/server-test Pipeline failed
2026-02-26 15:07:11 +01:00
hdbg
7bd37b3c4a refactor: introduce TransportActor abstraction 2026-02-25 21:44:01 +01:00
hdbg
fe8c5e1bd2 housekeeping(useragent): rename
Some checks failed
ci/woodpecker/push/server-lint Pipeline failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-test Pipeline was successful
2026-02-21 13:54:47 +01:00
hdbg
cbbe1f8881 feat(proto): add URL parsing and TLS certificate management 2026-02-18 14:09:58 +01:00
hdbg
7438d62695 misc: create license and readme 2026-02-17 22:20:30 +01:00
hdbg
4236f2c36d refactor(server): reorganized actors, context, and db modules into <dir>/mod.rs structure
Some checks failed
ci/woodpecker/push/server-lint Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-test Pipeline was successful
2026-02-16 22:29:48 +01:00
hdbg
76ff535619 refactor(server::tests): moved integration-like tests into tests/ 2026-02-16 22:27:59 +01:00
hdbg
b3566c8af6 refactor(server): separated global actors into their own handle 2026-02-16 21:58:14 +01:00
hdbg
bdb9f01757 refactor(server): actors reorganization & linter fixes 2026-02-16 21:43:59 +01:00
hdbg
0805e7a846 feat(keyholder): add seal method and unseal integration tests 2026-02-16 21:38:29 +01:00
hdbg
eb9cbc88e9 feat(server::user-agent): Unseal implemented 2026-02-16 21:17:06 +01:00
hdbg
dd716da4cd test(keyholder): remove unused imports from test modules 2026-02-16 21:15:13 +01:00
hdbg
1545db7428 fix(ci): add protoc installation for lints
Some checks failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-lint Pipeline failed
ci/woodpecker/push/server-test Pipeline was successful
2026-02-16 21:14:55 +01:00
hdbg
20ac84b60c fix(ci): add clippy installation in mise.toml
Some checks failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-lint Pipeline failed
ci/woodpecker/push/server-test Pipeline was successful
2026-02-16 21:04:13 +01:00
hdbg
8f6dda871b refactor(actors): rename BootstrapActor to Bootstrapper 2026-02-16 21:01:53 +01:00
hdbg
47108ed8ad chore(supply-chain): update cargo-vet audits and trusted publishers
Some checks failed
ci/woodpecker/pr/server-lint Pipeline failed
ci/woodpecker/pr/server-audit Pipeline was successful
ci/woodpecker/pr/server-vet Pipeline failed
ci/woodpecker/pr/server-test Pipeline was successful
ci/woodpecker/push/server-lint Pipeline failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-test Pipeline was successful
2026-02-16 20:52:31 +01:00
hdbg
359df73c2e feat(server::key_holder): unique index on (root_key_id, nonce) to avoid nonce reuse 2026-02-16 20:45:15 +01:00
hdbg
ce03b7e15d feat(server::key_holder): ability to remotely get current state 2026-02-16 20:40:36 +01:00
hdbg
e4038d9188 refactor(keyholder): rename KeyHolderActor to KeyHolder and optimize db connection lifetime 2026-02-16 20:36:47 +01:00
hdbg
c82339d764 security(server::key_holder): replaced nonce-caching with exclusive transaction fetching nonce from the database 2026-02-16 18:23:25 +01:00
hdbg
c5b51f4b70 feat(server): UserAgent seal/unseal
Some checks failed
ci/woodpecker/pr/server-lint Pipeline failed
ci/woodpecker/pr/server-audit Pipeline was successful
ci/woodpecker/pr/server-vet Pipeline failed
ci/woodpecker/pr/server-test Pipeline was successful
2026-02-16 14:00:23 +01:00
hdbg
6b8f8c9ff7 feat(unseal): add unseal protocol support for user agents 2026-02-15 13:04:55 +01:00
hdbg
8263bc6b6f feat(server): boot mechanism 2026-02-15 01:44:12 +01:00
hdbg
a6c849f268 ci: add server linting pipeline for Rust code quality checks 2026-02-14 23:44:16 +01:00
hdbg
d8d65da0b4 test(user-agent): add challenge-response auth flow test 2026-02-14 23:43:36 +01:00
hdbg
abdf4e3893 tests(server): UserAgent invalid bootstrap token 2026-02-14 19:48:37 +01:00
hdbg
4bac70a6e9 security(server): cargo-vet proper init
All checks were successful
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline was successful
ci/woodpecker/push/server-test Pipeline was successful
2026-02-14 19:16:09 +01:00
hdbg
54a41743be housekeeping(server): trimmed-down dependencies 2026-02-14 19:04:50 +01:00
133 changed files with 4720 additions and 3212 deletions
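Two of the commits above harden nonce handling in the key holder: a unique index on `(root_key_id, nonce)` to make reuse impossible at the database level, and an exclusive transaction replacing the old nonce cache. A minimal SQLite sketch of both ideas (table and column names are hypothetical; SQLite itself is grounded in the repo's `diesel_cli` sqlite tooling):

```python
import sqlite3

# isolation_level=None gives autocommit, so explicit BEGIN EXCLUSIVE works.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE nonces (root_key_id INTEGER, nonce BLOB)")
# The unique index turns nonce reuse into a hard database error.
conn.execute("CREATE UNIQUE INDEX nonce_unique ON nonces (root_key_id, nonce)")

def claim_nonce(root_key_id: int, nonce: bytes) -> bool:
    try:
        # BEGIN EXCLUSIVE serializes writers, so two concurrent callers
        # can never both claim the same (root_key_id, nonce) pair.
        conn.execute("BEGIN EXCLUSIVE")
        conn.execute(
            "INSERT INTO nonces (root_key_id, nonce) VALUES (?, ?)",
            (root_key_id, nonce),
        )
        conn.execute("COMMIT")
        return True
    except sqlite3.IntegrityError:
        conn.execute("ROLLBACK")
        return False

assert claim_nonce(1, b"\x00" * 12)
assert not claim_nonce(1, b"\x00" * 12)  # reuse rejected
assert claim_nonce(2, b"\x00" * 12)      # different root key is fine
```

The design choice mirrored here: pushing uniqueness into the schema means a race or a stale cache degrades into a visible constraint violation rather than silent nonce reuse.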

.vscode/settings.json vendored Normal file

@@ -0,0 +1,3 @@
{
"git.enabled": false
}


@@ -8,7 +8,7 @@ when:
include: ['.woodpecker/server-*.yaml', 'server/**']
steps:
- name: test
- name: audit
image: jdxcode/mise:latest
directory: server
environment:


@@ -0,0 +1,25 @@
when:
- event: pull_request
path:
include: ['.woodpecker/server-*.yaml', 'server/**']
- event: push
branch: main
path:
include: ['.woodpecker/server-*.yaml', 'server/**']
steps:
- name: lint
image: jdxcode/mise:latest
directory: server
environment:
CARGO_TERM_COLOR: always
CARGO_TARGET_DIR: /usr/local/cargo/target
CARGO_HOME: /usr/local/cargo/registry
volumes:
- cargo-target:/usr/local/cargo/target
- cargo-registry:/usr/local/cargo/registry
commands:
- apt-get update && apt-get install -y pkg-config
- mise install rust
- mise install protoc
- mise exec rust -- cargo clippy --all-targets --all-features -- -D warnings


@@ -8,7 +8,7 @@ when:
include: ['.woodpecker/server-*.yaml', 'server/**']
steps:
- name: test
- name: vet
image: jdxcode/mise:latest
directory: server
environment:


@@ -3,7 +3,6 @@
Arbiter is a permissioned signing service for cryptocurrency wallets. It runs as a background service on the user's machine with an optional client application for vault management.
**Core principle:** The vault NEVER exposes key material. It only produces signatures when a request satisfies the configured policies.
---
## 1. Peer Types

LICENSE Normal file

@@ -0,0 +1,190 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2026 MarketTakers
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md Normal file

@@ -0,0 +1,13 @@
# Arbiter
> Policy-first multi-client wallet daemon, allowing permissioned transactions across blockchains
## Security warning
Arbiter cannot meaningfully protect against a compromised host. A potential attack flow:
- The attacker steals the TLS keys from the database
- Impersonates the server and simply accepts the user agent's challenge solutions
- Pretends to be in the sealed state and performs a DH exchange with the client
- Steals the user's password and derives the seal key
While this attack requires targeting a specific host, it is still possible.
> This software is experimental. Do not use with funds you cannot afford to lose.


@@ -31,6 +31,12 @@
"packageUri": "lib/",
"languageVersion": "3.4"
},
{
"name": "cupertino_icons",
"rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/cupertino_icons-1.0.8",
"packageUri": "lib/",
"languageVersion": "3.1"
},
{
"name": "fake_async",
"rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/fake_async-1.3.3",
@@ -158,7 +164,7 @@
"languageVersion": "3.5"
},
{
"name": "arbiter",
"name": "app",
"rootUri": "../",
"packageUri": "lib/",
"languageVersion": "3.10"


@@ -1,12 +1,13 @@
{
"roots": [
"arbiter"
"app"
],
"packages": [
{
"name": "arbiter",
"version": "0.1.0",
"name": "app",
"version": "1.0.0+1",
"dependencies": [
"cupertino_icons",
"flutter"
],
"devDependencies": [
@@ -39,6 +40,11 @@
"vector_math"
]
},
{
"name": "cupertino_icons",
"version": "1.0.8",
"dependencies": []
},
{
"name": "flutter",
"version": "0.0.0",


@@ -1,10 +1,10 @@
// This is a generated file; do not edit or check into version control.
FLUTTER_ROOT=/Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable
FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/useragent
FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/app
COCOAPODS_PARALLEL_CODE_SIGN=true
FLUTTER_BUILD_DIR=build
FLUTTER_BUILD_NAME=0.1.0
FLUTTER_BUILD_NUMBER=0.1.0
FLUTTER_BUILD_NAME=1.0.0
FLUTTER_BUILD_NUMBER=1
DART_OBFUSCATION=false
TRACK_WIDGET_CREATION=true
TREE_SHAKE_ICONS=false


@@ -1,11 +1,11 @@
#!/bin/sh
# This is a generated file; do not edit or check into version control.
export "FLUTTER_ROOT=/Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable"
export "FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/useragent"
export "FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/app"
export "COCOAPODS_PARALLEL_CODE_SIGN=true"
export "FLUTTER_BUILD_DIR=build"
export "FLUTTER_BUILD_NAME=0.1.0"
export "FLUTTER_BUILD_NUMBER=0.1.0"
export "FLUTTER_BUILD_NAME=1.0.0"
export "FLUTTER_BUILD_NUMBER=1"
export "DART_OBFUSCATION=false"
export "TRACK_WIDGET_CREATION=true"
export "TREE_SHAKE_ICONS=false"


@@ -1,89 +0,0 @@
name: app
description: "A new Flutter project."
# The following line prevents the package from being accidentally published to
# pub.dev using `flutter pub publish`. This is preferred for private packages.
publish_to: 'none' # Remove this line if you wish to publish to pub.dev
# The following defines the version and build number for your application.
# A version number is three numbers separated by dots, like 1.2.43
# followed by an optional build number separated by a +.
# Both the version and the builder number may be overridden in flutter
# build by specifying --build-name and --build-number, respectively.
# In Android, build-name is used as versionName while build-number used as versionCode.
# Read more about Android versioning at https://developer.android.com/studio/publish/versioning
# In iOS, build-name is used as CFBundleShortVersionString while build-number is used as CFBundleVersion.
# Read more about iOS versioning at
# https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html
# In Windows, build-name is used as the major, minor, and patch parts
# of the product and file versions while build-number is used as the build suffix.
version: 1.0.0+1
environment:
sdk: ^3.10.8
# Dependencies specify other packages that your package needs in order to work.
# To automatically upgrade your package dependencies to the latest versions
# consider running `flutter pub upgrade --major-versions`. Alternatively,
# dependencies can be manually updated by changing the version numbers below to
# the latest version available on pub.dev. To see which dependencies have newer
# versions available, run `flutter pub outdated`.
dependencies:
flutter:
sdk: flutter
# The following adds the Cupertino Icons font to your application.
# Use with the CupertinoIcons class for iOS style icons.
cupertino_icons: ^1.0.8
dev_dependencies:
flutter_test:
sdk: flutter
# The "flutter_lints" package below contains a set of recommended lints to
# encourage good coding practices. The lint set provided by the package is
# activated in the `analysis_options.yaml` file located at the root of your
# package. See that file for information about deactivating specific lint
# rules and activating additional ones.
flutter_lints: ^6.0.0
# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec
# The following section is specific to Flutter packages.
flutter:
# The following line ensures that the Material Icons font is
# included with your application, so that you can use the icons in
# the material Icons class.
uses-material-design: true
# To add assets to your application, add an assets section, like this:
# assets:
# - images/a_dot_burr.jpeg
# - images/a_dot_ham.jpeg
# An image asset can refer to one or more resolution-specific "variants", see
# https://flutter.dev/to/resolution-aware-images
# For details regarding adding assets from package dependencies, see
# https://flutter.dev/to/asset-from-package
# To add custom fonts to your application, add a fonts section here,
# in this "flutter" section. Each entry in this list should have a
# "family" key with the font family name, and a "fonts" key with a
# list giving the asset and other descriptors for the font. For
# example:
# fonts:
# - family: Schyler
# fonts:
# - asset: fonts/Schyler-Regular.ttf
# - asset: fonts/Schyler-Italic.ttf
# style: italic
# - family: Trajan Pro
# fonts:
# - asset: fonts/TrajanPro.ttf
# - asset: fonts/TrajanPro_Bold.ttf
# weight: 700
#
# For details regarding fonts from package dependencies,
# see https://flutter.dev/to/font-from-package


@@ -10,6 +10,10 @@ backend = "cargo:cargo-features"
version = "0.11.1"
backend = "cargo:cargo-features-manager"
[[tools."cargo:cargo-insta"]]
version = "1.46.3"
backend = "cargo:cargo-insta"
[[tools."cargo:cargo-nextest"]]
version = "0.9.126"
backend = "cargo:cargo-nextest"


@@ -2,10 +2,10 @@
"cargo:diesel_cli" = { version = "2.3.6", features = "sqlite,sqlite-bundled", default-features = false }
"cargo:cargo-audit" = "0.22.1"
"cargo:cargo-vet" = "0.10.2"
flutter = "3.38.9-stable"
protoc = "29.6"
rust = "1.93.1"
"rust" = {version = "1.93.0", components = "clippy"}
"cargo:cargo-features-manager" = "0.11.1"
"cargo:cargo-nextest" = "0.9.126"
"cargo:cargo-shear" = "latest"
"cargo:cargo-insta" = "1.46.3"


@@ -3,111 +3,14 @@ syntax = "proto3";
package arbiter;
import "auth.proto";
message ClientRequest {
oneof payload {
arbiter.auth.ClientMessage auth_message = 1;
CertRotationAck cert_rotation_ack = 2;
}
}
message ClientResponse {
oneof payload {
arbiter.auth.ServerMessage auth_message = 1;
CertRotationNotification cert_rotation_notification = 2;
}
}
message UserAgentRequest {
oneof payload {
arbiter.auth.ClientMessage auth_message = 1;
CertRotationAck cert_rotation_ack = 2;
UnsealRequest unseal_request = 3;
}
}
message UserAgentResponse {
oneof payload {
arbiter.auth.ServerMessage auth_message = 1;
CertRotationNotification cert_rotation_notification = 2;
UnsealResponse unseal_response = 3;
}
}
import "client.proto";
import "user_agent.proto";
message ServerInfo {
string version = 1;
bytes cert_public_key = 2;
}
// TLS Certificate Rotation Protocol
message CertRotationNotification {
// New public certificate (DER-encoded)
bytes new_cert = 1;
// Unix timestamp when rotation will be executed (if all ACKs received)
int64 rotation_scheduled_at = 2;
// Unix timestamp deadline for ACK (7 days from now)
int64 ack_deadline = 3;
// Rotation ID for tracking
int32 rotation_id = 4;
}
message CertRotationAck {
// Rotation ID (from CertRotationNotification)
int32 rotation_id = 1;
// Client public key for identification
bytes client_public_key = 2;
// Confirmation that client saved the new certificate
bool cert_saved = 3;
}
// Vault Unseal Protocol (X25519 ECDH + ChaCha20Poly1305)
message UnsealRequest {
oneof payload {
EphemeralKeyRequest ephemeral_key_request = 1;
SealedPassword sealed_password = 2;
}
}
message UnsealResponse {
oneof payload {
EphemeralKeyResponse ephemeral_key_response = 1;
UnsealResult unseal_result = 2;
}
}
message EphemeralKeyRequest {}
message EphemeralKeyResponse {
// Server's X25519 ephemeral public key (32 bytes)
bytes server_pubkey = 1;
// Unix timestamp when this key expires (60 seconds from generation)
int64 expires_at = 2;
}
message SealedPassword {
// Client's X25519 ephemeral public key (32 bytes)
bytes client_pubkey = 1;
// ChaCha20Poly1305 encrypted password (ciphertext + tag)
bytes encrypted_password = 2;
// 12-byte nonce for ChaCha20Poly1305
bytes nonce = 3;
}
message UnsealResult {
// Whether unseal was successful
bool success = 1;
// Error message if unseal failed
optional string error_message = 2;
}
service ArbiterService {
rpc Client(stream ClientRequest) returns (stream ClientResponse);
rpc UserAgent(stream UserAgentRequest) returns (stream UserAgentResponse);

protobufs/client.proto Normal file

@@ -0,0 +1,17 @@
syntax = "proto3";
package arbiter;
import "auth.proto";
message ClientRequest {
oneof payload {
arbiter.auth.ClientMessage auth_message = 1;
}
}
message ClientResponse {
oneof payload {
arbiter.auth.ServerMessage auth_message = 1;
}
}


@@ -1,46 +0,0 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
syntax = "proto3";
package google.protobuf;
option csharp_namespace = "Google.Protobuf.WellKnownTypes";
option cc_enable_arenas = true;
option go_package = "google.golang.org/protobuf/types/known/timestamppb";
option java_package = "com.google.protobuf";
option java_outer_classname = "TimestampProto";
option java_multiple_files = true;
option objc_class_prefix = "GPB";
// A Timestamp represents a point in time independent of any time zone or local
// calendar, encoded as a count of seconds and fractions of seconds at
// nanosecond resolution. The count is relative to an epoch at UTC midnight on
// January 1, 1970, in the proleptic Gregorian calendar which extends the
// Gregorian calendar backwards to year one.
message Timestamp {
// Represents seconds of UTC time since Unix epoch
// 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
// 9999-12-31T23:59:59Z inclusive.
int64 seconds = 1;
// Non-negative fractions of a second at nanosecond resolution. Negative
// second values with fractions must still have non-negative nanos values
// that count forward in time. Must be from 0 to 999,999,999
// inclusive.
int32 nanos = 2;
}


@@ -1,14 +0,0 @@
syntax = "proto3";
package arbiter.unseal;
message UserAgentKeyRequest {}
message ServerKeyResponse {
bytes pubkey = 1;
}
message UserAgentSealedKey {
bytes sealed_key = 1;
bytes pubkey = 2;
bytes nonce = 3;
}


@@ -0,0 +1,51 @@
syntax = "proto3";
package arbiter;
import "auth.proto";
import "google/protobuf/empty.proto";
message UnsealStart {
bytes client_pubkey = 1;
}
message UnsealStartResponse {
bytes server_pubkey = 1;
}
message UnsealEncryptedKey {
bytes nonce = 1;
bytes ciphertext = 2;
bytes associated_data = 3;
}
enum UnsealResult {
UNSEAL_RESULT_UNSPECIFIED = 0;
UNSEAL_RESULT_SUCCESS = 1;
UNSEAL_RESULT_INVALID_KEY = 2;
UNSEAL_RESULT_UNBOOTSTRAPPED = 3;
}
enum VaultState {
VAULT_STATE_UNSPECIFIED = 0;
VAULT_STATE_UNBOOTSTRAPPED = 1;
VAULT_STATE_SEALED = 2;
VAULT_STATE_UNSEALED = 3;
VAULT_STATE_ERROR = 4;
}
message UserAgentRequest {
oneof payload {
arbiter.auth.ClientMessage auth_message = 1;
UnsealStart unseal_start = 2;
UnsealEncryptedKey unseal_encrypted_key = 3;
google.protobuf.Empty query_vault_state = 4;
}
}
message UserAgentResponse {
oneof payload {
arbiter.auth.ServerMessage auth_message = 1;
UnsealStartResponse unseal_start_response = 2;
UnsealResult unseal_result = 3;
VaultState vault_state = 4;
}
}
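The request and response `oneof`s above pair up into a simple handshake: each user-agent payload has exactly one expected reply type. A hypothetical client-side sketch of that sequencing (payload names taken from the proto; transport stubbed out entirely):

```python
# Expected reply payload for each UserAgentRequest payload, per the proto above.
EXPECTED_REPLY = {
    "auth_message": "auth_message",
    "unseal_start": "unseal_start_response",
    "unseal_encrypted_key": "unseal_result",
    "query_vault_state": "vault_state",
}

def reply_matches(request_payload: str, response_payload: str) -> bool:
    # A server that answers an UnsealStart with anything but an
    # UnsealStartResponse violates the pairing sketched here.
    return EXPECTED_REPLY.get(request_payload) == response_payload

assert reply_matches("unseal_start", "unseal_start_response")
assert not reply_matches("unseal_start", "vault_state")
```

This is one plausible way a client could validate the bidirectional stream; the proto itself does not enforce the pairing, so any actual enforcement lives in the actors on either end.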

server/Cargo.lock generated

File diff suppressed because it is too large


@@ -23,4 +23,12 @@ async-trait = "0.1.89"
futures = "0.3.31"
tokio-stream = { version = "0.1.18", features = ["full"] }
kameo = "0.19.2"
prost-types = { version = "0.14.3", features = ["chrono"] }
x25519-dalek = { version = "2.0.1", features = ["getrandom"] }
rstest = "0.26.1"
rustls-pki-types = "1.14.0"
rcgen = { version = "0.14.7", features = [
"aws_lc_rs",
"pem",
"x509-parser",
"zeroize",
], default-features = false }

server/crates/.DS_Store vendored Normal file

Binary file not shown.


@@ -3,5 +3,6 @@ name = "arbiter-client"
version = "0.1.0"
edition = "2024"
repository = "https://git.markettakers.org/MarketTakers/arbiter"
license = "Apache-2.0"
[dependencies]


@@ -3,20 +3,32 @@ name = "arbiter-proto"
version = "0.1.0"
edition = "2024"
repository = "https://git.markettakers.org/MarketTakers/arbiter"
license = "Apache-2.0"
[dependencies]
tonic.workspace = true
tokio.workspace = true
futures.workspace = true
hex = "0.4.3"
tonic-prost = "0.14.3"
prost = "0.14.3"
kameo.workspace = true
prost-types.workspace = true
url = "2.5.8"
miette.workspace = true
thiserror.workspace = true
rustls-pki-types.workspace = true
base64 = "0.22.1"
tracing.workspace = true
[build-dependencies]
prost-build = "0.14.3"
serde_json = "1"
tonic-prost-build = "0.14.3"
[dev-dependencies]
rstest.workspace = true
rand.workspace = true
rcgen.workspace = true
[package.metadata.cargo-shear]
ignored = ["tonic-prost", "prost", "kameo"]


@@ -1,15 +1,21 @@
use tonic_prost_build::configure;
static PROTOBUF_DIR: &str = "../../../protobufs";
fn main() -> Result<(), Box<dyn std::error::Error>> {
let proto_files = vec![
format!("{}/arbiter.proto", PROTOBUF_DIR),
format!("{}/auth.proto", PROTOBUF_DIR),
];
// Compile the protobufs (tonic-prost-build automatically uses prost_types for google.protobuf)
tonic_prost_build::configure()
println!("cargo::rerun-if-changed={PROTOBUF_DIR}");
configure()
.message_attribute(".", "#[derive(::kameo::Reply)]")
.compile_protos(&proto_files, &[PROTOBUF_DIR.to_string()])?;
.compile_protos(
&[
format!("{}/arbiter.proto", PROTOBUF_DIR),
format!("{}/auth.proto", PROTOBUF_DIR),
],
&[PROTOBUF_DIR.to_string()],
)
.unwrap();
Ok(())
}


@@ -1,3 +1,8 @@
pub mod transport;
pub mod url;
use base64::{Engine, prelude::BASE64_STANDARD};
use crate::proto::auth::AuthChallenge;
pub mod proto {
@@ -8,9 +13,7 @@ pub mod proto {
}
}
pub mod transport;
pub static BOOTSTRAP_TOKEN_PATH: &str = "bootstrap_token";
pub static BOOTSTRAP_PATH: &str = "bootstrap_token";
pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {
static ARBITER_HOME: &str = ".arbiter";
@@ -26,6 +29,6 @@ pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {
}
pub fn format_challenge(challenge: &AuthChallenge) -> Vec<u8> {
let concat_form = format!("{}:{}", challenge.nonce, hex::encode(&challenge.pubkey));
let concat_form = format!("{}:{}", challenge.nonce, BASE64_STANDARD.encode(&challenge.pubkey));
concat_form.into_bytes()
}


@@ -1,9 +1,75 @@
//! Transport abstraction layer for bridging gRPC bidirectional streaming with kameo actors.
//!
//! This module provides a clean separation between the gRPC transport layer and business logic
//! by modeling the connection as two linked kameo actors:
//!
//! - A **transport actor** ([`GrpcTransportActor`]) that owns the gRPC stream and channel,
//! forwarding inbound messages to the business actor and outbound messages to the client.
//! - A **business logic actor** that receives inbound messages from the transport actor and
//! sends outbound messages back through the transport actor.
//!
//! The [`wire()`] function sets up bidirectional linking between the two actors, ensuring
//! that if either actor dies, the other is notified and can shut down gracefully.
//!
//! # Terminology
//!
//! - **InboundMessage**: a message received by the transport actor from the channel/socket
//! and forwarded to the business actor.
//! - **OutboundMessage**: a message produced by the business actor and sent to the transport
//! actor to be forwarded to the channel/socket.
//!
//! # Architecture
//!
//! ```text
//! gRPC Stream ──InboundMessage──▶ GrpcTransportActor ──tell(InboundMessage)──▶ BusinessActor
//!                                         ▲ │                                       │
//!                                         │ └──▶ mpsc::Sender ──▶ Client            │
//!                                         │                                         │
//!                                         └───tell(Result<OutboundMessage, _>)──────┘
//! ```
//!
//! # Example
//!
//! ```rust,ignore
//! let (tx, rx) = mpsc::channel(1000);
//! let context = server_context.clone();
//!
//! wire(
//!     |prepared_business, transport_recipient| {
//!         // spawn the prepared business actor with `context` and `transport_recipient`
//!     },
//!     |prepared_transport, business_recipient| {
//!         // spawn the prepared transport actor wrapping `tx` and `grpc_stream`,
//!         // e.g. via GrpcTransportActor::new(tx, grpc_stream, business_recipient)
//!     },
//! ).await;
//!
//! Ok(Response::new(ReceiverStream::new(rx)))
//! ```
use futures::{Stream, StreamExt};
use tokio::sync::mpsc::{self, error::SendError};
use kameo::{
Actor,
actor::{ActorRef, PreparedActor, Recipient, Spawn, WeakActorRef},
mailbox::Signal,
prelude::Message,
};
use tokio::{
select,
sync::mpsc::{self, error::SendError},
};
use tonic::{Status, Streaming};
use tracing::{debug, error};
// Abstraction for stream for sans-io capabilities
/// A bidirectional stream abstraction for sans-io testing.
///
/// Combines a [`Stream`] of incoming messages with the ability to [`send`](Bi::send)
/// outgoing responses. This trait allows business logic to be tested without a real
/// gRPC connection by swapping in an in-memory implementation.
///
/// # Type Parameters
/// - `T`: `InboundMessage` received from the channel/socket (e.g., `UserAgentRequest`)
/// - `U`: `OutboundMessage` sent to the channel/socket (e.g., `UserAgentResponse`)
pub trait Bi<T, U>: Stream<Item = Result<T, Status>> + Send + Sync + 'static {
type Error;
fn send(
@@ -12,7 +78,10 @@ pub trait Bi<T, U>: Stream<Item = Result<T, Status>> + Send + Sync + 'static {
) -> impl std::future::Future<Output = Result<(), Self::Error>> + Send;
}
// Bi-directional stream abstraction for handling gRPC streaming requests and responses
/// Concrete [`Bi`] implementation backed by a tonic gRPC [`Streaming`] and an [`mpsc::Sender`].
///
/// This is the production implementation used in gRPC service handlers. The `request_stream`
/// receives messages from the client, and `response_sender` sends responses back.
pub struct BiStream<T, U> {
pub request_stream: Streaming<T>,
pub response_sender: mpsc::Sender<Result<U, Status>>,
@@ -44,3 +113,259 @@ where
self.response_sender.send(item).await
}
}
/// Marker trait for transport actors that can receive outbound messages of type `T`.
///
/// Implement this on your transport actor to indicate it can handle outbound messages
/// produced by the business actor. Requires the actor to implement [`Message<Result<T, E>>`]
/// so business logic can forward responses via [`tell()`](ActorRef::tell).
///
/// # Example
///
/// ```rust,ignore
/// #[derive(Actor)]
/// struct MyTransportActor { /* ... */ }
///
/// impl Message<Result<MyResponse, MyError>> for MyTransportActor {
/// type Reply = ();
/// async fn handle(&mut self, msg: Result<MyResponse, MyError>, _ctx: &mut Context<Self, Self::Reply>) -> Self::Reply {
/// // forward outbound message to channel/socket
/// }
/// }
///
/// impl TransportActor<MyResponse, MyError> for MyTransportActor {}
/// ```
pub trait TransportActor<Outbound: Send + 'static, DomainError: Send + 'static>:
Actor + Send + Message<Result<Outbound, DomainError>>
{
}
/// A kameo actor that bridges a gRPC bidirectional stream with a business logic actor.
///
/// This actor owns the gRPC [`Streaming`] receiver and an [`mpsc::Sender`] for responses.
/// It multiplexes between its own mailbox (for outbound messages from the business actor)
/// and the gRPC stream (for inbound client messages) using [`tokio::select!`].
///
/// # Message Flow
///
/// - **Inbound**: Messages from the gRPC stream are forwarded to `business_logic_actor`
/// via [`tell()`](Recipient::tell).
/// - **Outbound**: The business actor sends `Result<Outbound, DomainError>` messages to this
/// actor, which forwards them through the `sender` channel to the gRPC response stream.
///
/// # Lifecycle
///
/// - If the business logic actor dies (detected via actor linking), this actor stops,
/// which closes the gRPC stream.
/// - If the gRPC stream closes or errors, this actor stops, which (via linking) notifies
/// the business actor.
/// - Error responses (`Err(DomainError)`) are forwarded to the client and then the actor stops,
/// closing the connection.
///
/// # Type Parameters
/// - `Outbound`: `OutboundMessage` sent to the client (e.g., `UserAgentResponse`)
/// - `Inbound`: `InboundMessage` received from the client (e.g., `UserAgentRequest`)
/// - `E`: The domain error type, must implement `Into<tonic::Status>` for gRPC conversion
pub struct GrpcTransportActor<Outbound, Inbound, DomainError>
where
Outbound: Send + 'static,
Inbound: Send + 'static,
DomainError: Into<tonic::Status> + Send + 'static,
{
sender: mpsc::Sender<Result<Outbound, tonic::Status>>,
receiver: tonic::Streaming<Inbound>,
business_logic_actor: Recipient<Inbound>,
_error: std::marker::PhantomData<DomainError>,
}
impl<Outbound, Inbound, DomainError> GrpcTransportActor<Outbound, Inbound, DomainError>
where
Outbound: Send + 'static,
Inbound: Send + 'static,
DomainError: Into<tonic::Status> + Send + 'static,
{
pub fn new(
sender: mpsc::Sender<Result<Outbound, tonic::Status>>,
receiver: tonic::Streaming<Inbound>,
business_logic_actor: Recipient<Inbound>,
) -> Self {
Self {
sender,
receiver,
business_logic_actor,
_error: std::marker::PhantomData,
}
}
}
impl<Outbound, Inbound, E> Actor for GrpcTransportActor<Outbound, Inbound, E>
where
Outbound: Send + 'static,
Inbound: Send + 'static,
E: Into<tonic::Status> + Send + 'static,
{
type Args = Self;
type Error = ();
async fn on_start(args: Self::Args, _: ActorRef<Self>) -> Result<Self, Self::Error> {
Ok(args)
}
fn on_link_died(
&mut self,
_: WeakActorRef<Self>,
id: kameo::prelude::ActorId,
_: kameo::prelude::ActorStopReason,
) -> impl Future<
Output = Result<std::ops::ControlFlow<kameo::prelude::ActorStopReason>, Self::Error>,
> + Send {
async move {
if id == self.business_logic_actor.id() {
error!("Business logic actor died, stopping GrpcTransportActor");
Ok(std::ops::ControlFlow::Break(
kameo::prelude::ActorStopReason::Normal,
))
} else {
debug!(
"Linked actor {} died, but it's not the business logic actor, ignoring",
id
);
Ok(std::ops::ControlFlow::Continue(()))
}
}
}
async fn next(
&mut self,
_: WeakActorRef<Self>,
mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
) -> Option<kameo::mailbox::Signal<Self>> {
select! {
msg = mailbox_rx.recv() => {
msg
}
recv_msg = self.receiver.next() => {
match recv_msg {
Some(Ok(msg)) => {
match self.business_logic_actor.tell(msg).await {
Ok(_) => None,
Err(e) => {
// TODO: this would probably require better error handling - or resending if backpressure is the issue
error!("Failed to send message to business logic actor: {}", e);
Some(Signal::Stop)
}
}
}
Some(Err(e)) => {
error!("Received error from stream: {}, stopping GrpcTransportActor", e);
Some(Signal::Stop)
}
None => {
error!("Receiver channel closed, stopping GrpcTransportActor");
Some(Signal::Stop)
}
}
}
}
}
}
impl<Outbound, Inbound, E> Message<Result<Outbound, E>> for GrpcTransportActor<Outbound, Inbound, E>
where
Outbound: Send + 'static,
Inbound: Send + 'static,
E: Into<tonic::Status> + Send + 'static,
{
type Reply = ();
async fn handle(
&mut self,
msg: Result<Outbound, E>,
ctx: &mut kameo::prelude::Context<Self, Self::Reply>,
) -> Self::Reply {
let is_err = msg.is_err();
let grpc_msg = msg.map_err(Into::into);
match self.sender.send(grpc_msg).await {
Ok(_) => {
if is_err {
ctx.stop();
}
}
Err(e) => {
error!("Failed to send message: {}", e);
ctx.stop();
}
}
}
}
impl<Outbound, Inbound, E> TransportActor<Outbound, E> for GrpcTransportActor<Outbound, Inbound, E>
where
Outbound: Send + 'static,
Inbound: Send + 'static,
E: Into<tonic::Status> + Send + 'static,
{
}
/// Wires together a transport actor and a business logic actor with bidirectional linking.
///
/// This function handles the chicken-and-egg problem of two actors that need references
/// to each other at construction time. It uses kameo's [`PreparedActor`] to obtain
/// [`ActorRef`]s before spawning, then links both actors so that if either dies,
/// the other is notified via [`on_link_died`](Actor::on_link_died).
///
/// The business actor receives a type-erased [`Recipient<Result<Outbound, DomainError>>`] instead of an
/// `ActorRef<Transport>`, keeping it decoupled from the concrete transport implementation.
///
/// # Type Parameters
/// - `Transport`: The transport actor type (e.g., [`GrpcTransportActor`])
/// - `Inbound`: `InboundMessage` received by the business actor from the transport
/// - `Outbound`: `OutboundMessage` sent by the business actor back to the transport
/// - `Business`: The business logic actor
/// - `BusinessCtor`: Closure that receives the prepared business actor and a recipient for
///   outbound messages on the transport, then spawns the business actor
/// - `TransportCtor`: Closure that receives the prepared transport actor and a recipient for
///   inbound messages on the business actor, then spawns the transport actor
///
/// # Returns
/// A tuple of `(transport_ref, business_ref)` — actor references for both spawned actors.
pub async fn wire<
Transport,
Inbound,
Outbound,
DomainError,
Business,
BusinessCtor,
TransportCtor,
>(
business_ctor: BusinessCtor,
transport_ctor: TransportCtor,
) -> (ActorRef<Transport>, ActorRef<Business>)
where
Transport: TransportActor<Outbound, DomainError>,
Inbound: Send + 'static,
Outbound: Send + 'static,
DomainError: Send + 'static,
Business: Actor + Message<Inbound> + Send + 'static,
BusinessCtor: FnOnce(PreparedActor<Business>, Recipient<Result<Outbound, DomainError>>),
TransportCtor:
FnOnce(PreparedActor<Transport>, Recipient<Inbound>),
{
let prepared_business: PreparedActor<Business> = Spawn::prepare();
let prepared_transport: PreparedActor<Transport> = Spawn::prepare();
let business_ref = prepared_business.actor_ref().clone();
let transport_ref = prepared_transport.actor_ref().clone();
transport_ref.link(&business_ref).await;
business_ref.link(&transport_ref).await;
let recipient = transport_ref.clone().recipient();
business_ctor(prepared_business, recipient);
let business_recipient = business_ref.clone().recipient();
transport_ctor(prepared_transport, business_recipient);
(transport_ref, business_ref)
}


@@ -0,0 +1,128 @@
use std::fmt::Display;
use base64::{Engine as _, prelude::BASE64_URL_SAFE};
use rustls_pki_types::CertificateDer;
const ARBITER_URL_SCHEME: &str = "arbiter";
const CERT_QUERY_KEY: &str = "cert";
const BOOTSTRAP_TOKEN_QUERY_KEY: &str = "bootstrap_token";
pub struct ArbiterUrl {
pub host: String,
pub port: u16,
pub ca_cert: CertificateDer<'static>,
pub bootstrap_token: Option<String>,
}
impl Display for ArbiterUrl {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut base = format!(
"{ARBITER_URL_SCHEME}://{}:{}?{CERT_QUERY_KEY}={}",
self.host,
self.port,
BASE64_URL_SAFE.encode(self.ca_cert.to_vec())
);
if let Some(token) = &self.bootstrap_token {
base.push_str(&format!("&{BOOTSTRAP_TOKEN_QUERY_KEY}={}", token));
}
f.write_str(&base)
}
}
#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum Error {
#[error("Invalid URL scheme, expected '{ARBITER_URL_SCHEME}://'")]
#[diagnostic(
code(arbiter::url::invalid_scheme),
help("The URL must start with '{ARBITER_URL_SCHEME}://'")
)]
InvalidScheme,
#[error("Missing host in URL")]
#[diagnostic(
code(arbiter::url::missing_host),
help("The URL must include a host, e.g., '{ARBITER_URL_SCHEME}://127.0.0.1:<port>'")
)]
MissingHost,
#[error("Missing port in URL")]
#[diagnostic(
code(arbiter::url::missing_port),
help("The URL must include a port, e.g., '{ARBITER_URL_SCHEME}://127.0.0.1:1234'")
)]
MissingPort,
#[error("Missing 'cert' query parameter in URL")]
#[diagnostic(
code(arbiter::url::missing_cert),
help("The URL must include a 'cert' query parameter")
)]
MissingCert,
#[error("Invalid base64 in 'cert' query parameter: {0}")]
#[diagnostic(code(arbiter::url::invalid_cert_base64))]
InvalidCertBase64(#[from] base64::DecodeError),
}
impl<'a> TryFrom<&'a str> for ArbiterUrl {
type Error = Error;
fn try_from(value: &'a str) -> Result<Self, Self::Error> {
let url = url::Url::parse(value).map_err(|_| Error::InvalidScheme)?;
if url.scheme() != ARBITER_URL_SCHEME {
return Err(Error::InvalidScheme);
}
let host = url.host_str().ok_or(Error::MissingHost)?.to_string();
let port = url.port().ok_or(Error::MissingPort)?;
let cert_str = url
.query_pairs()
.find(|(k, _)| k == CERT_QUERY_KEY)
.ok_or(Error::MissingCert)?
.1;
let cert = BASE64_URL_SAFE.decode(cert_str.as_ref())?;
let cert = CertificateDer::from_slice(&cert).into_owned();
let bootstrap_token = url
.query_pairs()
.find(|(k, _)| k == BOOTSTRAP_TOKEN_QUERY_KEY)
.map(|(_, v)| v.to_string());
Ok(ArbiterUrl {
host,
port,
ca_cert: cert,
bootstrap_token,
})
}
}
#[cfg(test)]
mod tests {
use rcgen::generate_simple_self_signed;
use rstest::rstest;
use super::*;
#[rstest]
fn test_parsing_correctness(
#[values("127.0.0.1", "localhost", "192.168.1.1", "some.domain.com")] host: &str,
#[values(None, Some("token123".to_string()))] bootstrap_token: Option<String>,
) {
let cert = generate_simple_self_signed(&["Arbiter CA".into()]).unwrap();
let cert = cert.cert.der();
let url = ArbiterUrl {
host: host.to_string(),
port: 1234,
ca_cert: cert.clone().into_owned(),
bootstrap_token,
};
let url_str = url.to_string();
let parsed_url = ArbiterUrl::try_from(url_str.as_str()).unwrap();
assert_eq!(url.host, parsed_url.host);
assert_eq!(url.port, parsed_url.port);
assert_eq!(url.ca_cert.to_vec(), parsed_url.ca_cert.to_vec());
assert_eq!(url.bootstrap_token, parsed_url.bootstrap_token);
}
}
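For reference, the URL shape produced by `Display` above is `arbiter://<host>:<port>?cert=<base64url-der>[&bootstrap_token=<token>]`. A minimal std-only sketch of pulling the pieces apart (hand-rolled splitting stands in for the `url` crate, and `"QUJD"` is a made-up placeholder for a real base64url-encoded DER certificate):

```rust
fn main() {
    // hypothetical URL; "QUJD" stands in for a base64url-encoded DER certificate
    let url = "arbiter://127.0.0.1:1234?cert=QUJD&bootstrap_token=tok123";
    let rest = url.strip_prefix("arbiter://").expect("wrong scheme");
    let (authority, query) = rest.split_once('?').expect("missing query");
    let (host, port) = authority.split_once(':').expect("missing port");
    // split the query string into (key, value) pairs
    let pairs: Vec<(&str, &str)> = query
        .split('&')
        .filter_map(|pair| pair.split_once('='))
        .collect();
    assert_eq!(host, "127.0.0.1");
    assert_eq!(port.parse::<u16>().unwrap(), 1234);
    assert_eq!(pairs, vec![("cert", "QUJD"), ("bootstrap_token", "tok123")]);
}
```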

server/crates/arbiter-server/.DS_Store vendored (binary file, not shown)


@@ -3,15 +3,10 @@ name = "arbiter-server"
version = "0.1.0"
edition = "2024"
repository = "https://git.markettakers.org/MarketTakers/arbiter"
license = "Apache-2.0"
[dependencies]
diesel = { version = "2.3.6", features = [
"sqlite",
"uuid",
"time",
"chrono",
"serde_json",
] }
diesel = { version = "2.3.6", features = ["chrono", "returning_clauses_for_sqlite_3_35", "serde_json", "time", "uuid"] }
diesel-async = { version = "0.7.4", features = [
"bb8",
"migrations",
@@ -21,7 +16,9 @@ diesel-async = { version = "0.7.4", features = [
ed25519-dalek.workspace = true
arbiter-proto.path = "../arbiter-proto"
tracing.workspace = true
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
tonic.workspace = true
tonic.features = ["tls-aws-lc"]
tokio.workspace = true
rustls.workspace = true
smlang.workspace = true
@@ -34,20 +31,18 @@ futures.workspace = true
tokio-stream.workspace = true
dashmap = "6.1.0"
rand.workspace = true
rcgen = { version = "0.14.7", features = [
"aws_lc_rs",
"pem",
"x509-parser",
"zeroize",
], default-features = false }
rcgen.workspace = true
chrono.workspace = true
memsafe = "0.4.0"
zeroize = { version = "1.8.2", features = ["std", "simd"] }
argon2 = { version = "0.5", features = ["std"] }
kameo.workspace = true
hex = "0.4.3"
chacha20poly1305 = "0.10.1"
x25519-dalek = { version = "2.0", features = ["static_secrets"] }
x25519-dalek.workspace = true
chacha20poly1305 = { version = "0.10.1", features = ["std"] }
argon2 = { version = "0.5.3", features = ["zeroize"] }
restructed = "0.2.2"
strum = { version = "0.27.2", features = ["derive"] }
pem = "3.0.6"
[dev-dependencies]
insta = "1.46.3"
test-log = { version = "0.2", default-features = false, features = ["trace"] }


@@ -1,11 +0,0 @@
-- Rollback TLS rotation tables
-- Drop the added column from arbiter_settings
ALTER TABLE arbiter_settings DROP COLUMN current_cert_id;
-- Drop the tables in reverse order
DROP TABLE IF EXISTS tls_rotation_history;
DROP TABLE IF EXISTS rotation_client_acks;
DROP TABLE IF EXISTS tls_rotation_state;
DROP INDEX IF EXISTS idx_tls_certificates_active;
DROP TABLE IF EXISTS tls_certificates;


@@ -1,57 +0,0 @@
-- History of all certificates
CREATE TABLE IF NOT EXISTS tls_certificates (
id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
cert BLOB NOT NULL, -- DER-encoded
cert_key BLOB NOT NULL, -- PEM-encoded
not_before INTEGER NOT NULL, -- Unix timestamp
not_after INTEGER NOT NULL, -- Unix timestamp
created_at INTEGER NOT NULL DEFAULT(unixepoch('now')),
is_active BOOLEAN NOT NULL DEFAULT 0 -- Only one row may have active=1
) STRICT;
CREATE INDEX idx_tls_certificates_active ON tls_certificates(is_active, not_after);
-- Tracks the rotation process
CREATE TABLE IF NOT EXISTS tls_rotation_state (
id INTEGER NOT NULL PRIMARY KEY CHECK(id = 1), -- Singleton
state TEXT NOT NULL DEFAULT('normal') CHECK(state IN ('normal', 'initiated', 'waiting_acks', 'ready')),
new_cert_id INTEGER REFERENCES tls_certificates(id),
initiated_at INTEGER,
timeout_at INTEGER -- Timeout while waiting for ACKs (initiated_at + 7 days)
) STRICT;
-- Tracks ACKs from clients
CREATE TABLE IF NOT EXISTS rotation_client_acks (
rotation_id INTEGER NOT NULL, -- References new_cert_id
client_key TEXT NOT NULL, -- Client public key (hex)
ack_received_at INTEGER NOT NULL DEFAULT(unixepoch('now')),
PRIMARY KEY (rotation_id, client_key)
) STRICT;
-- Audit trail of rotation events
CREATE TABLE IF NOT EXISTS tls_rotation_history (
id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
cert_id INTEGER NOT NULL REFERENCES tls_certificates(id),
event_type TEXT NOT NULL CHECK(event_type IN ('created', 'rotation_initiated', 'acks_complete', 'activated', 'timeout')),
timestamp INTEGER NOT NULL DEFAULT(unixepoch('now')),
details TEXT -- JSON with additional details
) STRICT;
-- Migrate the existing certificate
INSERT INTO tls_certificates (id, cert, cert_key, not_before, not_after, is_active, created_at)
SELECT
1,
cert,
cert_key,
unixepoch('now') as not_before,
unixepoch('now') + (90 * 24 * 60 * 60) as not_after, -- 90 days
1 as is_active,
unixepoch('now')
FROM arbiter_settings WHERE id = 1;
-- Initialize rotation_state
INSERT INTO tls_rotation_state (id, state) VALUES (1, 'normal');
-- Add a reference to the current certificate
ALTER TABLE arbiter_settings ADD COLUMN current_cert_id INTEGER REFERENCES tls_certificates(id);
UPDATE arbiter_settings SET current_cert_id = 1 WHERE id = 1;


@@ -1,22 +1,50 @@
create table if not exists aead_encrypted (
create table if not exists root_key_history (
id INTEGER not null PRIMARY KEY,
current_nonce integer not null default(1), -- if re-encrypted, this should be incremented
-- the root key is stored as an AEAD-encrypted artifact; the only difference is that it is decrypted by the unseal key (derived from the user password)
root_key_encryption_nonce blob not null default(1), -- if re-encrypted, this should be incremented. Used for encrypting root key
data_encryption_nonce blob not null default(1), -- nonce used for encrypting with key itself
ciphertext blob not null,
tag blob not null,
schema_version integer not null default(1) -- server would need to reencrypt, because this means that we have changed algorithm
schema_version integer not null default(1), -- server would need to reencrypt, because this means that we have changed algorithm
salt blob not null -- for key derivation
) STRICT;
create table if not exists aead_encrypted (
id INTEGER not null PRIMARY KEY,
current_nonce blob not null default(1), -- if re-encrypted, this should be incremented
ciphertext blob not null,
tag blob not null,
schema_version integer not null default(1), -- server would need to reencrypt, because this means that we have changed algorithm
associated_root_key_id integer not null references root_key_history (id) on delete RESTRICT,
created_at integer not null default(unixepoch ('now'))
) STRICT;
create unique index if not exists uniq_nonce_per_root_key on aead_encrypted (
current_nonce,
associated_root_key_id
);
create table if not exists tls_history (
id INTEGER not null PRIMARY KEY,
cert text not null,
cert_key text not null, -- PEM Encoded private key
ca_cert text not null,
ca_key text not null, -- PEM Encoded private key
created_at integer not null default(unixepoch ('now'))
) STRICT;
-- This is a singleton
create table if not exists arbiter_settings (
id INTEGER not null PRIMARY KEY CHECK (id = 1), -- singleton row, id must be 1
root_key_id integer references aead_encrypted (id) on delete RESTRICT, -- if null, means wasn't bootstrapped yet
cert_key blob not null,
cert blob not null
root_key_id integer references root_key_history (id) on delete RESTRICT, -- if null, means wasn't bootstrapped yet
tls_id integer references tls_history (id) on delete RESTRICT
) STRICT;
insert into arbiter_settings (id) values (1) on conflict do nothing; -- ensure singleton row exists
create table if not exists useragent_client (
id integer not null primary key,
nonce integer not null default (1), -- used for auth challenge
nonce integer not null default(1), -- used for auth challenge
public_key blob not null,
created_at integer not null default(unixepoch ('now')),
updated_at integer not null default(unixepoch ('now'))
@@ -24,7 +52,7 @@ create table if not exists useragent_client (
create table if not exists program_client (
id integer not null primary key,
nonce integer not null default (1), -- used for auth challenge
nonce integer not null default(1), -- used for auth challenge
public_key blob not null,
created_at integer not null default(unixepoch ('now')),
updated_at integer not null default(unixepoch ('now'))


@@ -1,2 +0,0 @@
-- Remove argon2_salt column
ALTER TABLE aead_encrypted DROP COLUMN argon2_salt;


@@ -1,2 +0,0 @@
-- Add argon2_salt column to store password derivation salt
ALTER TABLE aead_encrypted ADD COLUMN argon2_salt TEXT;



@@ -1,2 +0,0 @@
pub mod user_agent;
pub mod client;


@@ -1,40 +1,37 @@
use arbiter_proto::{BOOTSTRAP_TOKEN_PATH, home_path};
use diesel::{ExpressionMethods, QueryDsl};
use arbiter_proto::{BOOTSTRAP_PATH, home_path};
use diesel::QueryDsl;
use diesel_async::RunQueryDsl;
use kameo::{Actor, messages};
use memsafe::MemSafe;
use miette::Diagnostic;
use rand::{RngExt, distr::StandardUniform, make_rng, rngs::StdRng};
use secrecy::SecretString;
use thiserror::Error;
use tracing::info;
use zeroize::{Zeroize, Zeroizing};
use crate::{
context::{self, ServerContext},
db::{self, DatabasePool, schema},
use rand::{
RngExt,
distr::Alphanumeric,
make_rng,
rngs::StdRng,
};
use thiserror::Error;
use crate::db::{self, DatabasePool, schema};
const TOKEN_LENGTH: usize = 64;
pub async fn generate_token() -> Result<String, std::io::Error> {
let rng: StdRng = make_rng();
let token: String = rng
.sample_iter::<char, _>(StandardUniform)
.take(TOKEN_LENGTH)
.fold(Default::default(), |mut accum, char| {
let token: String = rng.sample_iter(Alphanumeric).take(TOKEN_LENGTH).fold(
Default::default(),
|mut accum, char| {
accum += char.to_string().as_str();
accum
});
},
);
tokio::fs::write(home_path()?.join(BOOTSTRAP_TOKEN_PATH), token.as_str()).await?;
tokio::fs::write(home_path()?.join(BOOTSTRAP_PATH), token.as_str()).await?;
Ok(token)
}
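As an aside, the fold above can be written more directly with `collect`. A std-only sketch, where a fixed byte sequence stands in for the sampled `Alphanumeric` bytes (upstream `rand` yields them as `u8`):

```rust
fn main() {
    // stand-in for bytes sampled from the Alphanumeric distribution
    let sampled: [u8; 4] = [b'a', b'Z', b'0', b'9'];
    // map each sampled byte to a char and collect straight into the token string
    let token: String = sampled.iter().copied().map(char::from).collect();
    assert_eq!(token, "aZ09");
}
```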
#[derive(Error, Debug, Diagnostic)]
pub enum BootstrapError {
pub enum Error {
#[error("Database error: {0}")]
#[diagnostic(code(arbiter_server::bootstrap::database))]
Database(#[from] db::PoolError),
@@ -49,12 +46,12 @@ pub enum BootstrapError {
}
#[derive(Actor)]
pub struct BootstrapActor {
pub struct Bootstrapper {
token: Option<String>,
}
impl BootstrapActor {
pub async fn new(db: &DatabasePool) -> Result<Self, BootstrapError> {
impl Bootstrapper {
pub async fn new(db: &DatabasePool) -> Result<Self, Error> {
let mut conn = db.get().await?;
let row_count: i64 = schema::useragent_client::table
@@ -64,10 +61,9 @@ impl BootstrapActor {
drop(conn);
let token = if row_count == 0 {
let token = generate_token().await?;
info!(%token, "Generated bootstrap token");
tokio::fs::write(home_path()?.join(BOOTSTRAP_TOKEN_PATH), token.as_str()).await?;
Some(token)
} else {
None
@@ -75,15 +71,10 @@ impl BootstrapActor {
Ok(Self { token })
}
#[cfg(test)]
pub fn get_token(&self) -> Option<String> {
self.token.clone()
}
}
#[messages]
impl BootstrapActor {
impl Bootstrapper {
#[message]
pub fn is_correct_token(&self, token: String) -> bool {
match &self.token {
@@ -102,3 +93,11 @@ impl BootstrapActor {
}
}
}
#[messages]
impl Bootstrapper {
#[message]
pub fn get_token(&self) -> Option<String> {
self.token.clone()
}
}


@@ -0,0 +1 @@
pub mod v1;


@@ -0,0 +1,237 @@
use std::ops::Deref as _;
use argon2::{Algorithm, Argon2, password_hash::Salt as ArgonSalt};
use chacha20poly1305::{
AeadInPlace, Key, KeyInit as _, XChaCha20Poly1305, XNonce,
aead::{AeadMut, Error, Payload},
};
use memsafe::MemSafe;
use rand::{
Rng as _, SeedableRng,
rngs::{StdRng, SysRng},
};
pub const ROOT_KEY_TAG: &[u8] = "arbiter/seal/v1".as_bytes();
pub const TAG: &[u8] = "arbiter/private-key/v1".as_bytes();
pub const NONCE_LENGTH: usize = 24;
#[derive(Default)]
pub struct Nonce([u8; NONCE_LENGTH]);
impl Nonce {
pub fn increment(&mut self) {
for i in (0..self.0.len()).rev() {
if self.0[i] == 0xFF {
self.0[i] = 0;
} else {
self.0[i] += 1;
break;
}
}
}
pub fn to_vec(&self) -> Vec<u8> {
self.0.to_vec()
}
}
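The increment above is a big-endian counter with carry: the rightmost byte is incremented first, and a 0xFF byte rolls over to zero while the carry propagates left. A standalone sketch of the same logic on a 4-byte counter:

```rust
// big-endian increment with carry, mirroring Nonce::increment above
fn increment(counter: &mut [u8]) {
    // walk from the least significant (rightmost) byte, carrying past 0xFF
    for i in (0..counter.len()).rev() {
        if counter[i] == 0xFF {
            counter[i] = 0; // roll over and keep carrying left
        } else {
            counter[i] += 1;
            break; // no carry needed, done
        }
    }
}

fn main() {
    let mut c = [0x00, 0x00, 0x00, 0xFF];
    increment(&mut c);
    assert_eq!(c, [0x00, 0x00, 0x01, 0x00]); // carry into the next byte
    let mut all_ff = [0xFF; 4];
    increment(&mut all_ff);
    assert_eq!(all_ff, [0x00; 4]); // wraps around at the maximum value
}
```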
impl<'a> TryFrom<&'a [u8]> for Nonce {
type Error = ();
fn try_from(value: &'a [u8]) -> Result<Self, Self::Error> {
if value.len() != NONCE_LENGTH {
return Err(());
}
let mut nonce = [0u8; NONCE_LENGTH];
nonce.copy_from_slice(value);
Ok(Self(nonce))
}
}
pub struct KeyCell(pub MemSafe<Key>);
impl From<MemSafe<Key>> for KeyCell {
fn from(value: MemSafe<Key>) -> Self {
Self(value)
}
}
impl TryFrom<MemSafe<Vec<u8>>> for KeyCell {
type Error = ();
fn try_from(mut value: MemSafe<Vec<u8>>) -> Result<Self, Self::Error> {
let value = value.read().unwrap();
if value.len() != size_of::<Key>() {
return Err(());
}
let mut cell = MemSafe::new(Key::default()).unwrap();
{
let mut cell_write = cell.write().unwrap();
let cell_slice: &mut [u8] = cell_write.as_mut();
cell_slice.copy_from_slice(&value);
}
Ok(Self(cell))
}
}
impl KeyCell {
pub fn new_secure_random() -> Self {
let mut key = MemSafe::new(Key::default()).unwrap();
{
let mut key_buffer = key.write().unwrap();
let key_buffer: &mut [u8] = key_buffer.as_mut();
let mut rng = StdRng::try_from_rng(&mut SysRng).unwrap();
rng.fill_bytes(key_buffer);
}
key.into()
}
pub fn encrypt_in_place(
&mut self,
nonce: &Nonce,
associated_data: &[u8],
mut buffer: impl AsMut<Vec<u8>>,
) -> Result<(), Error> {
let key_reader = self.0.read().unwrap();
let key_ref = key_reader.deref();
let cipher = XChaCha20Poly1305::new(key_ref);
let nonce = XNonce::from_slice(nonce.0.as_ref());
let buffer = buffer.as_mut();
cipher.encrypt_in_place(nonce, associated_data, buffer)
}
pub fn decrypt_in_place(
&mut self,
nonce: &Nonce,
associated_data: &[u8],
buffer: &mut MemSafe<Vec<u8>>,
) -> Result<(), Error> {
let key_reader = self.0.read().unwrap();
let key_ref = key_reader.deref();
let cipher = XChaCha20Poly1305::new(key_ref);
let nonce = XNonce::from_slice(nonce.0.as_ref());
let mut buffer = buffer.write().unwrap();
let buffer: &mut Vec<u8> = buffer.as_mut();
cipher.decrypt_in_place(nonce, associated_data, buffer)
}
pub fn encrypt(
&mut self,
nonce: &Nonce,
associated_data: &[u8],
plaintext: impl AsRef<[u8]>,
) -> Result<Vec<u8>, Error> {
let key_reader = self.0.read().unwrap();
let key_ref = key_reader.deref();
let mut cipher = XChaCha20Poly1305::new(key_ref);
let nonce = XNonce::from_slice(nonce.0.as_ref());
let ciphertext = cipher.encrypt(
nonce,
Payload {
msg: plaintext.as_ref(),
aad: associated_data,
},
)?;
Ok(ciphertext)
}
}
pub type Salt = [u8; ArgonSalt::RECOMMENDED_LENGTH];
pub fn generate_salt() -> Salt {
let mut salt = Salt::default();
let mut rng = StdRng::try_from_rng(&mut SysRng).unwrap();
rng.fill_bytes(&mut salt);
salt
}
/// User passwords vary in length and may not carry enough entropy.
/// Derive a fixed-length key from the password using Argon2id, which is designed for password hashing and key derivation.
pub fn derive_seal_key(mut password: MemSafe<Vec<u8>>, salt: &Salt) -> KeyCell {
let params = argon2::Params::new(262_144, 3, 4, None).unwrap();
let hasher = Argon2::new(Algorithm::Argon2id, argon2::Version::V0x13, params);
let mut key = MemSafe::new(Key::default()).unwrap();
{
let password_source = password.read().unwrap();
let mut key_buffer = key.write().unwrap();
let key_buffer: &mut [u8] = key_buffer.as_mut();
hasher
.hash_password_into(password_source.deref(), salt, key_buffer)
.unwrap();
}
key.into()
}
#[cfg(test)]
mod tests {
use super::*;
use memsafe::MemSafe;
#[test]
pub fn derive_seal_key_deterministic() {
static PASSWORD: &[u8] = b"password";
let password = MemSafe::new(PASSWORD.to_vec()).unwrap();
let password2 = MemSafe::new(PASSWORD.to_vec()).unwrap();
let salt = generate_salt();
let mut key1 = derive_seal_key(password, &salt);
let mut key2 = derive_seal_key(password2, &salt);
let key1_reader = key1.0.read().unwrap();
let key2_reader = key2.0.read().unwrap();
assert_eq!(key1_reader.deref(), key2_reader.deref());
}
#[test]
pub fn successful_derive() {
static PASSWORD: &[u8] = b"password";
let password = MemSafe::new(PASSWORD.to_vec()).unwrap();
let salt = generate_salt();
let mut key = derive_seal_key(password, &salt);
let key_reader = key.0.read().unwrap();
let key_ref = key_reader.deref();
assert_ne!(key_ref.as_slice(), &[0u8; 32][..]);
}
#[test]
pub fn encrypt_decrypt() {
static PASSWORD: &[u8] = b"password";
let password = MemSafe::new(PASSWORD.to_vec()).unwrap();
let salt = generate_salt();
let mut key = derive_seal_key(password, &salt);
let nonce = Nonce(*b"unique nonce 123 1231233"); // 24 bytes for XChaCha20Poly1305
let associated_data = b"associated data";
let mut buffer = b"secret data".to_vec();
key.encrypt_in_place(&nonce, associated_data, &mut buffer)
.unwrap();
assert_ne!(buffer, b"secret data");
let mut buffer = MemSafe::new(buffer).unwrap();
key.decrypt_in_place(&nonce, associated_data, &mut buffer)
.unwrap();
let buffer = buffer.read().unwrap();
assert_eq!(*buffer, b"secret data");
}
#[test]
// We should fuzz this
pub fn test_nonce_increment() {
let mut nonce = Nonce([0u8; NONCE_LENGTH]);
nonce.increment();
assert_eq!(
nonce.0,
[
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1
]
);
}
}


@@ -0,0 +1,407 @@
use diesel::{
ExpressionMethods as _, OptionalExtension, QueryDsl, SelectableHelper,
dsl::{insert_into, update},
};
use diesel_async::{AsyncConnection, RunQueryDsl};
use kameo::{Actor, Reply, messages};
use memsafe::MemSafe;
use strum::{EnumDiscriminants, IntoDiscriminant};
use tracing::{error, info};
use crate::db::{
self,
models::{self, RootKeyHistory},
schema::{self},
};
use encryption::v1::{self, KeyCell, Nonce};
pub mod encryption;
#[derive(Default, EnumDiscriminants)]
#[strum_discriminants(derive(Reply), vis(pub))]
enum State {
#[default]
Unbootstrapped,
Sealed {
root_key_history_id: i32,
},
Unsealed {
root_key_history_id: i32,
root_key: KeyCell,
},
}
#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum Error {
#[error("Keyholder is already bootstrapped")]
#[diagnostic(code(arbiter::keyholder::already_bootstrapped))]
AlreadyBootstrapped,
#[error("Keyholder is not bootstrapped")]
#[diagnostic(code(arbiter::keyholder::not_bootstrapped))]
NotBootstrapped,
#[error("Invalid key provided")]
#[diagnostic(code(arbiter::keyholder::invalid_key))]
InvalidKey,
#[error("Requested aead entry not found")]
#[diagnostic(code(arbiter::keyholder::aead_not_found))]
NotFound,
#[error("Encryption error: {0}")]
#[diagnostic(code(arbiter::keyholder::encryption_error))]
Encryption(#[from] chacha20poly1305::aead::Error),
#[error("Database error: {0}")]
#[diagnostic(code(arbiter::keyholder::database_error))]
DatabaseConnection(#[from] db::PoolError),
#[error("Database transaction error: {0}")]
#[diagnostic(code(arbiter::keyholder::database_transaction_error))]
DatabaseTransaction(#[from] diesel::result::Error),
#[error("Broken database")]
#[diagnostic(code(arbiter::keyholder::broken_database))]
BrokenDatabase,
}
/// Manages the vault root key and tracks the vault's current state (bootstrapped/unbootstrapped, sealed/unsealed).
/// Provides an API for encrypting and decrypting data using the vault root key.
/// Acts as an abstraction over the database, ensuring nonces are never reused and encryption keys are never exposed in plaintext outside this actor.
#[derive(Actor)]
pub struct KeyHolder {
db: db::DatabasePool,
state: State,
}
#[messages]
impl KeyHolder {
pub async fn new(db: db::DatabasePool) -> Result<Self, Error> {
let state = {
let mut conn = db.get().await?;
let (root_key_history,) = schema::arbiter_settings::table
.left_join(schema::root_key_history::table)
.select((Option::<RootKeyHistory>::as_select(),))
.get_result::<(Option<RootKeyHistory>,)>(&mut conn)
.await?;
match root_key_history {
Some(root_key_history) => State::Sealed {
root_key_history_id: root_key_history.id,
},
None => State::Unbootstrapped,
}
};
Ok(Self { db, state })
}
// Exclusive transaction to avoid race conditions if multiple keyholders write;
// an additional layer of protection against nonce reuse.
async fn get_new_nonce(pool: &db::DatabasePool, root_key_id: i32) -> Result<Nonce, Error> {
let mut conn = pool.get().await?;
let nonce = conn
.exclusive_transaction(|conn| {
Box::pin(async move {
let current_nonce: Vec<u8> = schema::root_key_history::table
.filter(schema::root_key_history::id.eq(root_key_id))
.select(schema::root_key_history::data_encryption_nonce)
.first(conn)
.await?;
let mut nonce =
v1::Nonce::try_from(current_nonce.as_slice()).map_err(|_| {
error!(
"Broken database: invalid nonce for root key history id={}",
root_key_id
);
Error::BrokenDatabase
})?;
nonce.increment();
update(schema::root_key_history::table)
.filter(schema::root_key_history::id.eq(root_key_id))
.set(schema::root_key_history::data_encryption_nonce.eq(nonce.to_vec()))
.execute(conn)
.await?;
Result::<_, Error>::Ok(nonce)
})
})
.await?;
Ok(nonce)
}
#[message]
pub async fn bootstrap(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
if !matches!(self.state, State::Unbootstrapped) {
return Err(Error::AlreadyBootstrapped);
}
let salt = v1::generate_salt();
let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);
let mut root_key = KeyCell::new_secure_random();
// Zero nonces are fine here: each nonce is used exactly once with a freshly generated key
let root_key_nonce = v1::Nonce::default();
let data_encryption_nonce = v1::Nonce::default();
let root_key_ciphertext: Vec<u8> = {
let root_key_reader = root_key.0.read().unwrap();
let root_key_reader = root_key_reader.as_slice();
seal_key
.encrypt(&root_key_nonce, v1::ROOT_KEY_TAG, root_key_reader)
.map_err(|err| {
error!(?err, "Fatal bootstrap error");
Error::Encryption(err)
})?
};
let mut conn = self.db.get().await?;
let data_encryption_nonce_bytes = data_encryption_nonce.to_vec();
let root_key_history_id = conn
.transaction(|conn| {
Box::pin(async move {
let root_key_history_id: i32 = insert_into(schema::root_key_history::table)
.values(&models::NewRootKeyHistory {
ciphertext: root_key_ciphertext,
tag: v1::ROOT_KEY_TAG.to_vec(),
root_key_encryption_nonce: root_key_nonce.to_vec(),
data_encryption_nonce: data_encryption_nonce_bytes,
schema_version: 1,
salt: salt.to_vec(),
})
.returning(schema::root_key_history::id)
.get_result(conn)
.await?;
update(schema::arbiter_settings::table)
.set(schema::arbiter_settings::root_key_id.eq(root_key_history_id))
.execute(conn)
.await?;
Result::<_, diesel::result::Error>::Ok(root_key_history_id)
})
})
.await?;
self.state = State::Unsealed {
root_key,
root_key_history_id,
};
info!("Keyholder bootstrapped successfully");
Ok(())
}
#[message]
pub async fn try_unseal(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
let State::Sealed {
root_key_history_id,
} = &self.state
else {
return Err(Error::NotBootstrapped);
};
// We don't want to hold a connection while doing expensive KDF work
let current_key = {
let mut conn = self.db.get().await?;
schema::root_key_history::table
.filter(schema::root_key_history::id.eq(*root_key_history_id))
.select(RootKeyHistory::as_select())
.first(&mut conn)
.await?
};
let salt = &current_key.salt;
let salt = v1::Salt::try_from(salt.as_slice()).map_err(|_| {
error!("Broken database: invalid salt for root key");
Error::BrokenDatabase
})?;
let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);
let mut root_key = MemSafe::new(current_key.ciphertext.clone()).unwrap();
let nonce = v1::Nonce::try_from(current_key.root_key_encryption_nonce.as_slice()).map_err(
|_| {
error!("Broken database: invalid nonce for root key");
Error::BrokenDatabase
},
)?;
seal_key
.decrypt_in_place(&nonce, v1::ROOT_KEY_TAG, &mut root_key)
.map_err(|err| {
error!(?err, "Failed to unseal root key: invalid seal key");
Error::InvalidKey
})?;
self.state = State::Unsealed {
root_key_history_id: current_key.id,
root_key: v1::KeyCell::try_from(root_key).map_err(|err| {
error!(?err, "Broken database: invalid encryption key size");
Error::BrokenDatabase
})?,
};
info!("Keyholder unsealed successfully");
Ok(())
}
/// Decrypts the `aead_encrypted` entry with the given ID and returns the plaintext
#[message]
pub async fn decrypt(&mut self, aead_id: i32) -> Result<MemSafe<Vec<u8>>, Error> {
let State::Unsealed { root_key, .. } = &mut self.state else {
return Err(Error::NotBootstrapped);
};
let row: models::AeadEncrypted = {
let mut conn = self.db.get().await?;
schema::aead_encrypted::table
.select(models::AeadEncrypted::as_select())
.filter(schema::aead_encrypted::id.eq(aead_id))
.first(&mut conn)
.await
.optional()?
.ok_or(Error::NotFound)?
};
let nonce = v1::Nonce::try_from(row.current_nonce.as_slice()).map_err(|_| {
error!(
"Broken database: invalid nonce for aead_encrypted id={}",
aead_id
);
Error::BrokenDatabase
})?;
let mut output = MemSafe::new(row.ciphertext).unwrap();
root_key.decrypt_in_place(&nonce, v1::TAG, &mut output)?;
Ok(output)
}
/// Creates a new `aead_encrypted` entry in the database and returns its ID
#[message]
pub async fn create_new(&mut self, mut plaintext: MemSafe<Vec<u8>>) -> Result<i32, Error> {
let State::Unsealed {
root_key,
root_key_history_id,
} = &mut self.state
else {
return Err(Error::NotBootstrapped);
};
// Order matters here: `get_new_nonce` acquires a connection, so it must be called before the next acquire
// Borrow checker note: the &mut borrow a few lines above is disjoint from this field
let nonce = Self::get_new_nonce(&self.db, *root_key_history_id).await?;
let mut ciphertext_buffer = plaintext.write().unwrap();
let ciphertext_buffer: &mut Vec<u8> = ciphertext_buffer.as_mut();
root_key.encrypt_in_place(&nonce, v1::TAG, &mut *ciphertext_buffer)?;
let ciphertext = std::mem::take(ciphertext_buffer);
let mut conn = self.db.get().await?;
let aead_id: i32 = insert_into(schema::aead_encrypted::table)
.values(&models::NewAeadEncrypted {
ciphertext,
tag: v1::TAG.to_vec(),
current_nonce: nonce.to_vec(),
schema_version: 1,
associated_root_key_id: *root_key_history_id,
created_at: chrono::Utc::now().timestamp() as i32,
})
.returning(schema::aead_encrypted::id)
.get_result(&mut conn)
.await?;
Ok(aead_id)
}
#[message]
pub fn get_state(&self) -> StateDiscriminants {
self.state.discriminant()
}
#[message]
pub fn seal(&mut self) -> Result<(), Error> {
let State::Unsealed {
root_key_history_id,
..
} = &self.state
else {
return Err(Error::NotBootstrapped);
};
self.state = State::Sealed {
root_key_history_id: *root_key_history_id,
};
Ok(())
}
}
#[cfg(test)]
mod tests {
use diesel::SelectableHelper;
use diesel_async::RunQueryDsl;
use memsafe::MemSafe;
use crate::db::{self};
use super::*;
async fn bootstrapped_actor(db: &db::DatabasePool) -> KeyHolder {
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
actor.bootstrap(seal_key).await.unwrap();
actor
}
#[test_log::test(tokio::test)]
async fn nonce_monotonic_even_when_nonce_allocation_interleaves() {
let db = db::create_test_pool().await;
let mut actor = bootstrapped_actor(&db).await;
let root_key_history_id = match actor.state {
State::Unsealed {
root_key_history_id,
..
} => root_key_history_id,
_ => panic!("expected unsealed state"),
};
let n1 = KeyHolder::get_new_nonce(&db, root_key_history_id)
.await
.unwrap();
let n2 = KeyHolder::get_new_nonce(&db, root_key_history_id)
.await
.unwrap();
assert!(n2.to_vec() > n1.to_vec(), "nonce must increase");
let mut conn = db.get().await.unwrap();
let root_row: models::RootKeyHistory = schema::root_key_history::table
.select(models::RootKeyHistory::as_select())
.first(&mut conn)
.await
.unwrap();
assert_eq!(root_row.data_encryption_nonce, n2.to_vec());
let id = actor
.create_new(MemSafe::new(b"post-interleave".to_vec()).unwrap())
.await
.unwrap();
let row: models::AeadEncrypted = schema::aead_encrypted::table
.filter(schema::aead_encrypted::id.eq(id))
.select(models::AeadEncrypted::as_select())
.first(&mut conn)
.await
.unwrap();
assert!(
row.current_nonce > n2.to_vec(),
"next write must advance nonce"
);
}
}
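The `get_new_nonce` read-increment-write cycle above relies on the database's exclusive transaction for atomicity. The same invariant can be modeled in memory with a `Mutex` standing in for the transaction (an illustrative sketch with hypothetical names, not the actor's actual code):

```rust
use std::sync::Mutex;

// In-memory analogue of the `data_encryption_nonce` column; the Mutex plays
// the role of the exclusive database transaction.
struct NonceStore {
    current: Mutex<u64>,
}

impl NonceStore {
    /// Read, increment, and write back under one lock, so interleaved
    /// allocators can never hand out the same nonce twice.
    fn allocate(&self) -> u64 {
        let mut guard = self.current.lock().unwrap();
        *guard += 1;
        *guard
    }
}

fn main() {
    let store = NonceStore { current: Mutex::new(0) };
    let n1 = store.allocate();
    let n2 = store.allocate();
    assert!(n2 > n1, "nonces must be strictly monotonic");
    println!("ok");
}
```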


@@ -0,0 +1,40 @@
use kameo::actor::{ActorRef, Spawn};
use miette::Diagnostic;
use thiserror::Error;
use crate::{
actors::{bootstrap::Bootstrapper, keyholder::KeyHolder},
db,
};
pub mod bootstrap;
pub mod client;
pub mod keyholder;
pub mod user_agent;
#[derive(Error, Debug, Diagnostic)]
pub enum SpawnError {
#[error("Failed to spawn Bootstrapper actor")]
#[diagnostic(code(SpawnError::Bootstrapper))]
Bootstrapper(#[from] bootstrap::Error),
#[error("Failed to spawn KeyHolder actor")]
#[diagnostic(code(SpawnError::KeyHolder))]
KeyHolder(#[from] keyholder::Error),
}
/// Long-lived actors that are shared across all connections and handle global state and operations
#[derive(Clone)]
pub struct GlobalActors {
pub key_holder: ActorRef<KeyHolder>,
pub bootstrapper: ActorRef<Bootstrapper>,
}
impl GlobalActors {
pub async fn spawn(db: db::DatabasePool) -> Result<Self, SpawnError> {
Ok(Self {
bootstrapper: Bootstrapper::spawn(Bootstrapper::new(&db).await?),
key_holder: KeyHolder::spawn(KeyHolder::new(db.clone()).await?),
})
}
}


@@ -1,498 +0,0 @@
use std::sync::Arc;
use arbiter_proto::{
proto::{
UserAgentRequest, UserAgentResponse,
auth::{
self, AuthChallengeRequest, ClientMessage, ServerMessage as AuthServerMessage,
client_message::Payload as ClientAuthPayload,
server_message::Payload as ServerAuthPayload,
},
user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
},
transport::Bi,
};
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, dsl::update};
use diesel_async::{AsyncConnection, RunQueryDsl};
use ed25519_dalek::VerifyingKey;
use futures::StreamExt;
use kameo::{
Actor,
actor::{ActorRef, Spawn},
error::SendError,
messages,
prelude::Context,
};
use tokio::sync::mpsc;
use tokio::sync::mpsc::Sender;
use tonic::{Status, transport::Server};
use tracing::{debug, error, info};
use crate::{
ServerContext,
actors::user_agent::auth::AuthChallenge,
context::bootstrap::{BootstrapActor, ConsumeToken},
db::{self, schema},
errors::GrpcStatusExt,
};
#[derive(Debug)]
pub struct ChallengeContext {
challenge: AuthChallenge,
key: VerifyingKey,
}
// Request context with the deserialized public key for the state machine.
// This intermediate struct exists because the state machine branches on the presence
// of a bootstrap token, and both branches need the deserialized key.
#[derive(Clone, Debug)]
pub struct AuthRequestContext {
pubkey: VerifyingKey,
bootstrap_token: Option<String>,
}
smlang::statemachine!(
name: UserAgent,
derive_states: [Debug],
custom_error: false,
transitions: {
*Init + AuthRequest(AuthRequestContext) / auth_request_context = ReceivedAuthRequest(AuthRequestContext),
ReceivedAuthRequest(AuthRequestContext) + ReceivedBootstrapToken = Authenticated,
ReceivedAuthRequest(AuthRequestContext) + SentChallenge(ChallengeContext) / move_challenge = WaitingForChallengeSolution(ChallengeContext),
WaitingForChallengeSolution(ChallengeContext) + ReceivedGoodSolution = Authenticated,
WaitingForChallengeSolution(ChallengeContext) + ReceivedBadSolution = AuthError, // blocks further transitions, but the connection should close anyway
}
);
pub struct DummyContext;
impl UserAgentStateMachineContext for DummyContext {
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn move_challenge(
&mut self,
state_data: &AuthRequestContext,
event_data: ChallengeContext,
) -> Result<ChallengeContext, ()> {
Ok(event_data)
}
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn auth_request_context(
&mut self,
event_data: AuthRequestContext,
) -> Result<AuthRequestContext, ()> {
Ok(event_data)
}
}
#[derive(Actor)]
pub struct UserAgentActor {
db: db::DatabasePool,
bootstapper: ActorRef<BootstrapActor>,
state: UserAgentStateMachine<DummyContext>,
tx: Sender<Result<UserAgentResponse, Status>>,
context: ServerContext,
ephemeral_key: Option<crate::context::unseal::EphemeralKeyPair>,
}
impl UserAgentActor {
pub(crate) fn new(
context: ServerContext,
tx: Sender<Result<UserAgentResponse, Status>>,
) -> Self {
Self {
db: context.db.clone(),
bootstapper: context.bootstrapper.clone(),
state: UserAgentStateMachine::new(DummyContext),
tx,
context,
ephemeral_key: None,
}
}
pub(crate) fn new_manual(
db: db::DatabasePool,
bootstapper: ActorRef<BootstrapActor>,
context: ServerContext,
tx: Sender<Result<UserAgentResponse, Status>>,
) -> Self {
Self {
db,
bootstapper,
state: UserAgentStateMachine::new(DummyContext),
tx,
context,
ephemeral_key: None,
}
}
fn transition(&mut self, event: UserAgentEvents) -> Result<(), Status> {
self.state.process_event(event).map_err(|e| {
error!(?e, "State transition failed");
Status::internal("State machine error")
})?;
Ok(())
}
async fn auth_with_bootstrap_token(
&mut self,
pubkey: ed25519_dalek::VerifyingKey,
token: String,
) -> Result<UserAgentResponse, Status> {
let token_ok: bool = self
.bootstapper
.ask(ConsumeToken { token })
.await
.map_err(|e| {
error!(?pubkey, "Failed to consume bootstrap token: {e}");
Status::internal("Bootstrap token consumption failed")
})?;
if !token_ok {
error!(?pubkey, "Invalid bootstrap token provided");
return Err(Status::invalid_argument("Invalid bootstrap token"));
}
{
let mut conn = self.db.get().await.to_status()?;
diesel::insert_into(schema::useragent_client::table)
.values((
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
schema::useragent_client::nonce.eq(1),
))
.execute(&mut conn)
.await
.to_status()?;
}
self.transition(UserAgentEvents::ReceivedBootstrapToken)?;
Ok(auth_response(ServerAuthPayload::AuthOk(auth::AuthOk {})))
}
async fn auth_with_challenge(&mut self, pubkey: VerifyingKey, pubkey_bytes: Vec<u8>) -> Output {
let nonce: Option<i32> = {
let mut db_conn = self.db.get().await.to_status()?;
db_conn
.transaction(|conn| {
Box::pin(async move {
let current_nonce = schema::useragent_client::table
.filter(
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
)
.select(schema::useragent_client::nonce)
.first::<i32>(conn)
.await?;
update(schema::useragent_client::table)
.filter(
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
)
.set(schema::useragent_client::nonce.eq(current_nonce + 1))
.execute(conn)
.await?;
Result::<_, diesel::result::Error>::Ok(current_nonce)
})
})
.await
.optional()
.to_status()?
};
let Some(nonce) = nonce else {
error!(?pubkey, "Public key not found in database");
return Err(Status::unauthenticated("Public key not registered"));
};
let challenge = auth::AuthChallenge {
pubkey: pubkey_bytes,
nonce,
};
self.transition(UserAgentEvents::SentChallenge(ChallengeContext {
challenge: challenge.clone(),
key: pubkey,
}))?;
info!(
?pubkey,
?challenge,
"Sent authentication challenge to client"
);
Ok(auth_response(ServerAuthPayload::AuthChallenge(challenge)))
}
fn verify_challenge_solution(
&self,
solution: &auth::AuthChallengeSolution,
) -> Result<(bool, &ChallengeContext), Status> {
let UserAgentStates::WaitingForChallengeSolution(challenge_context) = self.state.state()
else {
error!("Received challenge solution in invalid state");
return Err(Status::invalid_argument(
"Invalid state for challenge solution",
));
};
let formatted_challenge = arbiter_proto::format_challenge(&challenge_context.challenge);
let signature = solution.signature.as_slice().try_into().map_err(|_| {
error!(?solution, "Invalid signature length");
Status::invalid_argument("Invalid signature length")
})?;
let valid = challenge_context
.key
.verify_strict(&formatted_challenge, &signature)
.is_ok();
Ok((valid, challenge_context))
}
}
type Output = Result<UserAgentResponse, Status>;
fn auth_response(payload: ServerAuthPayload) -> UserAgentResponse {
UserAgentResponse {
payload: Some(UserAgentResponsePayload::AuthMessage(AuthServerMessage {
payload: Some(payload),
})),
}
}
#[messages]
impl UserAgentActor {
#[message(ctx)]
pub async fn handle_auth_challenge_request(
&mut self,
req: AuthChallengeRequest,
ctx: &mut Context<Self, Output>,
) -> Output {
let pubkey = req.pubkey.as_array().ok_or(Status::invalid_argument(
"Expected pubkey to have specific length",
))?;
let pubkey = VerifyingKey::from_bytes(pubkey).map_err(|err| {
error!(?err, ?pubkey, "Failed to convert to VerifyingKey");
Status::invalid_argument("Failed to convert pubkey to VerifyingKey")
})?;
self.transition(UserAgentEvents::AuthRequest(AuthRequestContext {
pubkey,
bootstrap_token: req.bootstrap_token.clone(),
}))?;
match req.bootstrap_token {
Some(token) => self.auth_with_bootstrap_token(pubkey, token).await,
None => self.auth_with_challenge(pubkey, req.pubkey).await,
}
}
#[message(ctx)]
pub async fn handle_auth_challenge_solution(
&mut self,
solution: auth::AuthChallengeSolution,
ctx: &mut Context<Self, Output>,
) -> Output {
let (valid, challenge_context) = self.verify_challenge_solution(&solution)?;
if valid {
info!(
?challenge_context,
"Client provided valid solution to authentication challenge"
);
self.transition(UserAgentEvents::ReceivedGoodSolution)?;
Ok(auth_response(ServerAuthPayload::AuthOk(auth::AuthOk {})))
} else {
error!("Client provided invalid solution to authentication challenge");
self.transition(UserAgentEvents::ReceivedBadSolution)?;
Err(Status::unauthenticated("Invalid challenge solution"))
}
}
#[message(ctx)]
pub async fn handle_unseal_request(
&mut self,
request: arbiter_proto::proto::UnsealRequest,
ctx: &mut Context<Self, Output>,
) -> Output {
use arbiter_proto::proto::{
EphemeralKeyResponse, SealedPassword, UnsealResponse, UnsealResult,
unseal_request::Payload as ReqPayload,
unseal_response::Payload as RespPayload,
};
match request.payload {
Some(ReqPayload::EphemeralKeyRequest(_)) => {
// Generate new ephemeral keypair
let keypair = crate::context::unseal::EphemeralKeyPair::generate();
let expires_at = keypair.expires_at() as i64;
let public_bytes = keypair.public_bytes();
// Store for later use
self.ephemeral_key = Some(keypair);
info!("Generated ephemeral X25519 keypair for unseal, expires at {}", expires_at);
Ok(UserAgentResponse {
payload: Some(UserAgentResponsePayload::UnsealResponse(UnsealResponse {
payload: Some(RespPayload::EphemeralKeyResponse(EphemeralKeyResponse {
server_pubkey: public_bytes,
expires_at,
})),
})),
})
}
Some(ReqPayload::SealedPassword(sealed)) => {
// Get and consume ephemeral key
let keypair = self
.ephemeral_key
.take()
.ok_or_else(|| Status::failed_precondition("No ephemeral key generated"))?;
// Check expiration
if keypair.is_expired() {
error!("Ephemeral key expired before sealed password was received");
return Err(Status::deadline_exceeded("Ephemeral key expired"));
}
// Perform ECDH
let shared_secret = keypair
.perform_dh(&sealed.client_pubkey)
.map_err(|e| Status::invalid_argument(format!("Invalid client pubkey: {}", e)))?;
// Decrypt password
let nonce: [u8; 12] = sealed
.nonce
.as_slice()
.try_into()
.map_err(|_| Status::invalid_argument("Nonce must be 12 bytes"))?;
let password_bytes = crate::crypto::aead::decrypt(
&sealed.encrypted_password,
&shared_secret,
&nonce,
)
.map_err(|e| {
error!("Failed to decrypt password: {}", e);
Status::internal(format!("Decryption failed: {}", e))
})?;
let password = String::from_utf8(password_bytes).map_err(|_| {
error!("Password is not valid UTF-8");
Status::invalid_argument("Password must be UTF-8")
})?;
// Call unseal on context
info!("Attempting to unseal vault with decrypted password");
let result = self.context.unseal(&password).await;
match result {
Ok(()) => {
info!("Vault unsealed successfully");
Ok(UserAgentResponse {
payload: Some(UserAgentResponsePayload::UnsealResponse(
UnsealResponse {
payload: Some(RespPayload::UnsealResult(UnsealResult {
success: true,
error_message: None,
})),
},
)),
})
}
Err(e) => {
error!("Unseal failed: {}", e);
Ok(UserAgentResponse {
payload: Some(UserAgentResponsePayload::UnsealResponse(
UnsealResponse {
payload: Some(RespPayload::UnsealResult(UnsealResult {
success: false,
error_message: Some(e.to_string()),
})),
},
)),
})
}
}
}
None => {
error!("Received empty unseal request");
Err(Status::invalid_argument("Empty unseal request"))
}
}
}
}
#[cfg(test)]
mod tests {
use arbiter_proto::proto::{
UserAgentResponse,
auth::{AuthChallengeRequest, AuthOk},
user_agent_response::Payload as UserAgentResponsePayload,
};
use kameo::actor::Spawn;
use crate::{
actors::user_agent::HandleAuthChallengeRequest, context::bootstrap::BootstrapActor, db,
};
use super::UserAgentActor;
#[test_log::test(tokio::test)]
pub async fn test_bootstrap_token_auth() {
let db = db::create_test_pool().await;
// explicitly not installing any user_agent pubkeys
let bootstrapper = BootstrapActor::new(&db).await.unwrap(); // this will create bootstrap token
let token = bootstrapper.get_token().unwrap();
let bootstrapper_ref = BootstrapActor::spawn(bootstrapper);
let context = crate::ServerContext::new(db.clone()).await.unwrap();
let user_agent = UserAgentActor::new_manual(
db.clone(),
bootstrapper_ref,
context,
tokio::sync::mpsc::channel(1).0, // dummy channel, we won't actually send responses in this test
);
let user_agent_ref = UserAgentActor::spawn(user_agent);
// simulate client sending auth request with bootstrap token
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
let result = user_agent_ref
.ask(HandleAuthChallengeRequest {
req: AuthChallengeRequest {
pubkey: pubkey_bytes,
bootstrap_token: Some(token),
},
})
.await
.expect("Shouldn't fail to send message");
// auth succeeded
assert_eq!(
result,
UserAgentResponse {
payload: Some(UserAgentResponsePayload::AuthMessage(
arbiter_proto::proto::auth::ServerMessage {
payload: Some(arbiter_proto::proto::auth::server_message::Payload::AuthOk(
AuthOk {},
)),
},
)),
}
);
}
}
mod transport;
pub(crate) use transport::handle_user_agent;


@@ -0,0 +1,57 @@
use tonic::Status;
use crate::db;
#[derive(Debug, thiserror::Error)]
pub enum UserAgentError {
#[error("Missing payload in request")]
MissingPayload,
#[error("Invalid bootstrap token")]
InvalidBootstrapToken,
#[error("Public key not registered")]
PubkeyNotRegistered,
#[error("Invalid public key format")]
InvalidPubkey,
#[error("Invalid signature length")]
InvalidSignatureLength,
#[error("Invalid challenge solution")]
InvalidChallengeSolution,
#[error("Invalid state for operation")]
InvalidState,
#[error("Actor unavailable")]
ActorUnavailable,
#[error("Database error")]
Database(#[from] diesel::result::Error),
#[error("Database pool error")]
DatabasePool(#[from] db::PoolError),
}
impl From<UserAgentError> for Status {
fn from(err: UserAgentError) -> Self {
match err {
UserAgentError::MissingPayload
| UserAgentError::InvalidBootstrapToken
| UserAgentError::InvalidPubkey
| UserAgentError::InvalidSignatureLength => Status::invalid_argument(err.to_string()),
UserAgentError::PubkeyNotRegistered | UserAgentError::InvalidChallengeSolution => {
Status::unauthenticated(err.to_string())
}
UserAgentError::InvalidState => Status::failed_precondition(err.to_string()),
UserAgentError::ActorUnavailable
| UserAgentError::Database(_)
| UserAgentError::DatabasePool(_) => Status::internal(err.to_string()),
}
}
}
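The `From<UserAgentError> for Status` mapping above groups domain errors into four gRPC classes. That grouping can be shown without `tonic` by mapping to the numeric status codes from the gRPC specification (a dependency-free sketch; the enum is trimmed to one representative variant per class):

```rust
#[derive(Debug)]
enum UserAgentError {
    MissingPayload,      // client sent a malformed request
    PubkeyNotRegistered, // client failed authentication
    InvalidState,        // request arrived in a state that cannot serve it
    ActorUnavailable,    // server-side fault the client cannot fix
}

/// Numeric gRPC status codes, as defined by the gRPC specification.
fn grpc_code(err: &UserAgentError) -> u32 {
    match err {
        UserAgentError::MissingPayload => 3,       // INVALID_ARGUMENT
        UserAgentError::PubkeyNotRegistered => 16, // UNAUTHENTICATED
        UserAgentError::InvalidState => 9,         // FAILED_PRECONDITION
        UserAgentError::ActorUnavailable => 13,    // INTERNAL
    }
}

fn main() {
    assert_eq!(grpc_code(&UserAgentError::MissingPayload), 3);
    assert_eq!(grpc_code(&UserAgentError::ActorUnavailable), 13);
    println!("ok");
}
```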


@@ -0,0 +1,401 @@
use std::{ops::DerefMut, sync::Mutex};
use arbiter_proto::proto::{
UnsealEncryptedKey, UnsealResult, UnsealStart, UnsealStartResponse, UserAgentRequest,
UserAgentResponse,
auth::{
self, AuthChallengeRequest, AuthOk, ClientMessage as ClientAuthMessage,
ServerMessage as AuthServerMessage,
client_message::Payload as ClientAuthPayload,
server_message::Payload as ServerAuthPayload,
},
user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
};
use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, dsl::update};
use diesel_async::RunQueryDsl;
use ed25519_dalek::VerifyingKey;
use kameo::{Actor, actor::Recipient, error::SendError, messages, prelude::Message};
use memsafe::MemSafe;
use tracing::{error, info};
use x25519_dalek::{EphemeralSecret, PublicKey};
use crate::{
ServerContext,
actors::{
GlobalActors,
bootstrap::ConsumeToken,
keyholder::{self, TryUnseal},
user_agent::state::{
ChallengeContext, DummyContext, UnsealContext, UserAgentEvents, UserAgentStateMachine,
UserAgentStates,
},
},
db::{self, schema},
};
mod error;
mod state;
pub use error::UserAgentError;
#[derive(Actor)]
pub struct UserAgentActor {
db: db::DatabasePool,
actors: GlobalActors,
state: UserAgentStateMachine<DummyContext>,
transport: Recipient<Result<UserAgentResponse, UserAgentError>>,
}
impl UserAgentActor {
pub(crate) fn new(
context: ServerContext,
transport: Recipient<Result<UserAgentResponse, UserAgentError>>,
) -> Self {
Self {
db: context.db.clone(),
actors: context.actors.clone(),
state: UserAgentStateMachine::new(DummyContext),
transport,
}
}
pub fn new_manual(
db: db::DatabasePool,
actors: GlobalActors,
transport: Recipient<Result<UserAgentResponse, UserAgentError>>,
) -> Self {
Self {
db,
actors,
state: UserAgentStateMachine::new(DummyContext),
transport,
}
}
async fn process_request(&mut self, req: UserAgentRequest) -> Output {
let msg = req.payload.ok_or_else(|| {
error!(actor = "useragent", "Received message with no payload");
UserAgentError::MissingPayload
})?;
match msg {
UserAgentRequestPayload::AuthMessage(ClientAuthMessage {
payload: Some(ClientAuthPayload::AuthChallengeRequest(req)),
}) => self.handle_auth_challenge_request(req).await,
UserAgentRequestPayload::AuthMessage(ClientAuthMessage {
payload: Some(ClientAuthPayload::AuthChallengeSolution(solution)),
}) => self.handle_auth_challenge_solution(solution).await,
UserAgentRequestPayload::UnsealStart(unseal_start) => {
self.handle_unseal_request(unseal_start).await
}
UserAgentRequestPayload::UnsealEncryptedKey(unseal_encrypted_key) => {
self.handle_unseal_encrypted_key(unseal_encrypted_key).await
}
_ => Err(UserAgentError::MissingPayload),
}
}
fn transition(&mut self, event: UserAgentEvents) -> Result<(), UserAgentError> {
self.state.process_event(event).map_err(|e| {
error!(?e, "State transition failed");
UserAgentError::InvalidState
})?;
Ok(())
}
async fn auth_with_bootstrap_token(
&mut self,
pubkey: ed25519_dalek::VerifyingKey,
token: String,
) -> Output {
let token_ok: bool = self
.actors
.bootstrapper
.ask(ConsumeToken { token })
.await
.map_err(|e| {
error!(?pubkey, "Failed to consume bootstrap token: {e}");
UserAgentError::ActorUnavailable
})?;
if !token_ok {
error!(?pubkey, "Invalid bootstrap token provided");
return Err(UserAgentError::InvalidBootstrapToken);
}
{
let mut conn = self.db.get().await?;
diesel::insert_into(schema::useragent_client::table)
.values((
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
schema::useragent_client::nonce.eq(1),
))
.execute(&mut conn)
.await?;
}
self.transition(UserAgentEvents::ReceivedBootstrapToken)?;
Ok(auth_response(ServerAuthPayload::AuthOk(AuthOk {})))
}
async fn auth_with_challenge(&mut self, pubkey: VerifyingKey, pubkey_bytes: Vec<u8>) -> Output {
let nonce: Option<i32> = {
let mut db_conn = self.db.get().await?;
db_conn
.exclusive_transaction(|conn| {
Box::pin(async move {
let current_nonce = schema::useragent_client::table
.filter(
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
)
.select(schema::useragent_client::nonce)
.first::<i32>(conn)
.await?;
update(schema::useragent_client::table)
.filter(
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
)
.set(schema::useragent_client::nonce.eq(current_nonce + 1))
.execute(conn)
.await?;
Result::<_, diesel::result::Error>::Ok(current_nonce)
})
})
.await
.optional()?
};
let Some(nonce) = nonce else {
error!(?pubkey, "Public key not found in database");
return Err(UserAgentError::PubkeyNotRegistered);
};
let challenge = auth::AuthChallenge {
pubkey: pubkey_bytes,
nonce,
};
self.transition(UserAgentEvents::SentChallenge(ChallengeContext {
challenge: challenge.clone(),
key: pubkey,
}))?;
info!(
?pubkey,
?challenge,
"Sent authentication challenge to client"
);
Ok(auth_response(ServerAuthPayload::AuthChallenge(challenge)))
}
fn verify_challenge_solution(
&self,
solution: &auth::AuthChallengeSolution,
) -> Result<(bool, &ChallengeContext), UserAgentError> {
let UserAgentStates::WaitingForChallengeSolution(challenge_context) = self.state.state()
else {
error!("Received challenge solution in invalid state");
return Err(UserAgentError::InvalidState);
};
let formatted_challenge = arbiter_proto::format_challenge(&challenge_context.challenge);
let signature = solution.signature.as_slice().try_into().map_err(|_| {
error!(?solution, "Invalid signature length");
UserAgentError::InvalidSignatureLength
})?;
let valid = challenge_context
.key
.verify_strict(&formatted_challenge, &signature)
.is_ok();
Ok((valid, challenge_context))
}
}
type Output = Result<UserAgentResponse, UserAgentError>;
fn auth_response(payload: ServerAuthPayload) -> UserAgentResponse {
UserAgentResponse {
payload: Some(UserAgentResponsePayload::AuthMessage(AuthServerMessage {
payload: Some(payload),
})),
}
}
fn unseal_response(payload: UserAgentResponsePayload) -> UserAgentResponse {
UserAgentResponse {
payload: Some(payload),
}
}
#[messages]
impl UserAgentActor {
#[message]
pub async fn handle_unseal_request(&mut self, req: UnsealStart) -> Output {
let secret = EphemeralSecret::random();
let public_key = PublicKey::from(&secret);
let client_pubkey_bytes: [u8; 32] = req
.client_pubkey
.try_into()
.map_err(|_| UserAgentError::InvalidPubkey)?;
let client_public_key = PublicKey::from(client_pubkey_bytes);
self.transition(UserAgentEvents::UnsealRequest(UnsealContext {
secret: Mutex::new(Some(secret)),
client_public_key,
}))?;
Ok(unseal_response(
UserAgentResponsePayload::UnsealStartResponse(UnsealStartResponse {
server_pubkey: public_key.as_bytes().to_vec(),
}),
))
}
#[message]
pub async fn handle_unseal_encrypted_key(&mut self, req: UnsealEncryptedKey) -> Output {
let UserAgentStates::WaitingForUnsealKey(unseal_context) = self.state.state() else {
error!("Received unseal encrypted key in invalid state");
return Err(UserAgentError::InvalidState);
};
let ephemeral_secret = {
let mut secret_lock = unseal_context.secret.lock().unwrap();
let secret = secret_lock.take();
match secret {
Some(secret) => secret,
None => {
drop(secret_lock);
error!("Ephemeral secret already taken");
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
return Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
UnsealResult::InvalidKey.into(),
)));
}
}
};
let nonce = XNonce::from_slice(&req.nonce);
let shared_secret = ephemeral_secret.diffie_hellman(&unseal_context.client_public_key);
let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());
let mut seal_key_buffer = MemSafe::new(req.ciphertext.clone()).unwrap();
let decryption_result = {
let mut write_handle = seal_key_buffer.write().unwrap();
let write_handle = write_handle.deref_mut();
cipher.decrypt_in_place(nonce, &req.associated_data, write_handle)
};
match decryption_result {
Ok(_) => {
match self
.actors
.key_holder
.ask(TryUnseal {
seal_key_raw: seal_key_buffer,
})
.await
{
Ok(_) => {
info!("Successfully unsealed key with client-provided key");
self.transition(UserAgentEvents::ReceivedValidKey)?;
Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
UnsealResult::Success.into(),
)))
}
Err(SendError::HandlerError(keyholder::Error::InvalidKey)) => {
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
UnsealResult::InvalidKey.into(),
)))
}
Err(SendError::HandlerError(err)) => {
error!(?err, "Keyholder failed to unseal key");
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
UnsealResult::InvalidKey.into(),
)))
}
Err(err) => {
error!(?err, "Failed to send unseal request to keyholder");
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
Err(UserAgentError::ActorUnavailable)
}
}
}
Err(err) => {
error!(?err, "Failed to decrypt unseal key");
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
UnsealResult::InvalidKey.into(),
)))
}
}
}
#[message]
pub async fn handle_auth_challenge_request(&mut self, req: AuthChallengeRequest) -> Output {
let pubkey = req
.pubkey
.as_array()
.ok_or(UserAgentError::InvalidPubkey)?;
let pubkey = VerifyingKey::from_bytes(pubkey).map_err(|_err| {
error!(?pubkey, "Failed to convert to VerifyingKey");
UserAgentError::InvalidPubkey
})?;
self.transition(UserAgentEvents::AuthRequest)?;
match req.bootstrap_token {
Some(token) => self.auth_with_bootstrap_token(pubkey, token).await,
None => self.auth_with_challenge(pubkey, req.pubkey).await,
}
}
#[message]
pub async fn handle_auth_challenge_solution(
&mut self,
solution: auth::AuthChallengeSolution,
) -> Output {
let (valid, challenge_context) = self.verify_challenge_solution(&solution)?;
if valid {
info!(
?challenge_context,
"Client provided valid solution to authentication challenge"
);
self.transition(UserAgentEvents::ReceivedGoodSolution)?;
Ok(auth_response(ServerAuthPayload::AuthOk(AuthOk {})))
} else {
error!("Client provided invalid solution to authentication challenge");
self.transition(UserAgentEvents::ReceivedBadSolution)?;
Err(UserAgentError::InvalidChallengeSolution)
}
}
}
impl Message<UserAgentRequest> for UserAgentActor {
type Reply = ();
async fn handle(
&mut self,
msg: UserAgentRequest,
_ctx: &mut kameo::prelude::Context<Self, Self::Reply>,
) -> Self::Reply {
let result = self.process_request(msg).await;
if let Err(e) = self.transport.tell(result).await {
error!(actor = "useragent", "Failed to send response to transport: {}", e);
}
}
}


@@ -0,0 +1,51 @@
use std::sync::Mutex;
use arbiter_proto::proto::auth::AuthChallenge;
use ed25519_dalek::VerifyingKey;
use x25519_dalek::{EphemeralSecret, PublicKey};
/// Context for the state machine: holds the validated key and the challenge that was sent.
/// The challenge is later serialized to bytes via a shared helper and verified against the signature.
#[derive(Clone, Debug)]
pub struct ChallengeContext {
pub challenge: AuthChallenge,
pub key: VerifyingKey,
}
pub struct UnsealContext {
pub client_public_key: PublicKey,
pub secret: Mutex<Option<EphemeralSecret>>,
}
smlang::statemachine!(
name: UserAgent,
custom_error: false,
transitions: {
*Init + AuthRequest = ReceivedAuthRequest,
ReceivedAuthRequest + ReceivedBootstrapToken = Idle,
ReceivedAuthRequest + SentChallenge(ChallengeContext) / move_challenge = WaitingForChallengeSolution(ChallengeContext),
WaitingForChallengeSolution(ChallengeContext) + ReceivedGoodSolution = Idle,
WaitingForChallengeSolution(ChallengeContext) + ReceivedBadSolution = AuthError, // block further transitions, but connection should close anyway
Idle + UnsealRequest(UnsealContext) / generate_temp_keypair = WaitingForUnsealKey(UnsealContext),
WaitingForUnsealKey(UnsealContext) + ReceivedValidKey = Unsealed,
WaitingForUnsealKey(UnsealContext) + ReceivedInvalidKey = Idle,
}
);
pub struct DummyContext;
impl UserAgentStateMachineContext for DummyContext {
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn generate_temp_keypair(&mut self, event_data: UnsealContext) -> Result<UnsealContext, ()> {
Ok(event_data)
}
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn move_challenge(&mut self, event_data: ChallengeContext) -> Result<ChallengeContext, ()> {
Ok(event_data)
}
}
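The transition table above can be sketched as a hand-rolled `match`, which is roughly what the `smlang::statemachine!` macro expands to. This is an illustrative, std-only sketch — `State`, `Event`, and `step` are made-up names, not the generated API, and the context payloads are reduced to unit states:

```rust
#[derive(Debug, PartialEq)]
enum State {
    Init,
    ReceivedAuthRequest,
    WaitingForChallengeSolution,
    Idle,
    AuthError,
    WaitingForUnsealKey,
    Unsealed,
}

#[derive(Debug)]
enum Event {
    AuthRequest,
    ReceivedBootstrapToken,
    SentChallenge,
    ReceivedGoodSolution,
    ReceivedBadSolution,
    UnsealRequest,
    ReceivedValidKey,
    ReceivedInvalidKey,
}

// Returns the next state, or Err with the unchanged state on an invalid pairing
fn step(state: State, event: Event) -> Result<State, State> {
    match (state, event) {
        (State::Init, Event::AuthRequest) => Ok(State::ReceivedAuthRequest),
        (State::ReceivedAuthRequest, Event::ReceivedBootstrapToken) => Ok(State::Idle),
        (State::ReceivedAuthRequest, Event::SentChallenge) => Ok(State::WaitingForChallengeSolution),
        (State::WaitingForChallengeSolution, Event::ReceivedGoodSolution) => Ok(State::Idle),
        // AuthError is terminal: no arm ever leaves it
        (State::WaitingForChallengeSolution, Event::ReceivedBadSolution) => Ok(State::AuthError),
        (State::Idle, Event::UnsealRequest) => Ok(State::WaitingForUnsealKey),
        (State::WaitingForUnsealKey, Event::ReceivedValidKey) => Ok(State::Unsealed),
        // An invalid key drops the session back to Idle so unseal can be retried
        (State::WaitingForUnsealKey, Event::ReceivedInvalidKey) => Ok(State::Idle),
        (s, _) => Err(s),
    }
}

fn main() {
    // Happy path: auth request -> challenge -> good solution -> Idle
    let s = step(State::Init, Event::AuthRequest).unwrap();
    let s = step(s, Event::SentChallenge).unwrap();
    let s = step(s, Event::ReceivedGoodSolution).unwrap();
    assert_eq!(s, State::Idle);
    // A solution sent while Idle is rejected, mirroring UserAgentError::InvalidState
    assert!(step(State::Idle, Event::ReceivedGoodSolution).is_err());
}
```

The macro additionally threads the `ChallengeContext`/`UnsealContext` payloads through the `move_challenge` and `generate_temp_keypair` callbacks, which the sketch omits.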


@@ -1,95 +0,0 @@
use super::UserAgentActor;
use arbiter_proto::proto::{
UserAgentRequest, UserAgentResponse,
auth::{
self, AuthChallenge, AuthChallengeRequest, AuthOk, ClientMessage,
ServerMessage as AuthServerMessage, client_message::Payload as ClientAuthPayload,
server_message::Payload as ServerAuthPayload,
},
user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
};
use futures::StreamExt;
use kameo::{
actor::{ActorRef, Spawn as _},
error::SendError,
};
use tokio::sync::mpsc;
use tonic::Status;
use tracing::error;
use crate::{
actors::user_agent::{HandleAuthChallengeRequest, HandleAuthChallengeSolution},
context::ServerContext,
};
pub(crate) async fn handle_user_agent(
context: ServerContext,
mut req_stream: tonic::Streaming<UserAgentRequest>,
tx: mpsc::Sender<Result<UserAgentResponse, Status>>,
) {
let actor = UserAgentActor::spawn(UserAgentActor::new(context, tx.clone()));
while let Some(Ok(req)) = req_stream.next().await
&& actor.is_alive()
{
match process_message(&actor, req).await {
Ok(resp) => {
if tx.send(Ok(resp)).await.is_err() {
error!(actor = "useragent", "Failed to send response to client");
break;
}
}
Err(status) => {
let _ = tx.send(Err(status)).await;
break;
}
}
}
actor.kill();
}
async fn process_message(
actor: &ActorRef<UserAgentActor>,
req: UserAgentRequest,
) -> Result<UserAgentResponse, Status> {
let msg = req.payload.ok_or_else(|| {
error!(actor = "useragent", "Received message with no payload");
Status::invalid_argument("Expected message with payload")
})?;
let UserAgentRequestPayload::AuthMessage(ClientMessage {
payload: Some(client_message),
}) = msg
else {
error!(
actor = "useragent",
"Received unexpected message type during authentication"
);
return Err(Status::invalid_argument(
"Expected AuthMessage with ClientMessage payload",
));
};
match client_message {
ClientAuthPayload::AuthChallengeRequest(req) => actor
.ask(HandleAuthChallengeRequest { req })
.await
.map_err(into_status),
ClientAuthPayload::AuthChallengeSolution(solution) => actor
.ask(HandleAuthChallengeSolution { solution })
.await
.map_err(into_status),
}
}
fn into_status<M>(e: SendError<M, Status>) -> Status {
match e {
SendError::HandlerError(status) => status,
_ => {
error!(actor = "useragent", "Failed to send message to actor");
Status::internal("session failure")
}
}
}


@@ -1,403 +0,0 @@
use std::collections::HashSet;
use std::sync::Arc;
use std::time::Duration;
use diesel::OptionalExtension as _;
use diesel_async::RunQueryDsl as _;
use ed25519_dalek::VerifyingKey;
use kameo::actor::{ActorRef, Spawn};
use miette::Diagnostic;
use rand::rngs::StdRng;
use secrecy::{ExposeSecret, SecretBox};
use smlang::statemachine;
use thiserror::Error;
use tokio::sync::{watch, RwLock};
use zeroize::Zeroizing;
use crate::{
context::{
bootstrap::{BootstrapActor, generate_token},
lease::LeaseHandler,
tls::{RotationState, RotationTask, TlsDataRaw, TlsManager},
},
db::{
self,
models::ArbiterSetting,
schema::{self, arbiter_settings},
},
};
pub(crate) mod bootstrap;
pub(crate) mod lease;
pub(crate) mod tls;
pub(crate) mod unseal;
#[derive(Error, Debug, Diagnostic)]
pub enum InitError {
#[error("Database setup failed: {0}")]
#[diagnostic(code(arbiter_server::init::database_setup))]
DatabaseSetup(#[from] db::DatabaseSetupError),
#[error("Connection acquire failed: {0}")]
#[diagnostic(code(arbiter_server::init::database_pool))]
DatabasePool(#[from] db::PoolError),
#[error("Database query error: {0}")]
#[diagnostic(code(arbiter_server::init::database_query))]
DatabaseQuery(#[from] diesel::result::Error),
#[error("TLS initialization failed: {0}")]
#[diagnostic(code(arbiter_server::init::tls_init))]
Tls(#[from] tls::TlsInitError),
#[error("Bootstrap token generation failed: {0}")]
#[diagnostic(code(arbiter_server::init::bootstrap_token))]
BootstrapToken(#[from] bootstrap::BootstrapError),
#[error("I/O Error: {0}")]
#[diagnostic(code(arbiter_server::init::io))]
Io(#[from] std::io::Error),
}
#[derive(Error, Debug, Diagnostic)]
pub enum UnsealError {
#[error("Database error: {0}")]
#[diagnostic(code(arbiter_server::unseal::database_pool))]
Database(#[from] db::PoolError),
#[error("Query error: {0}")]
#[diagnostic(code(arbiter_server::unseal::database_query))]
Query(#[from] diesel::result::Error),
#[error("Decryption failed: {0}")]
#[diagnostic(code(arbiter_server::unseal::decryption))]
DecryptionFailed(#[from] crate::crypto::CryptoError),
#[error("Invalid state for unseal")]
#[diagnostic(code(arbiter_server::unseal::invalid_state))]
InvalidState,
#[error("Missing salt in database")]
#[diagnostic(code(arbiter_server::unseal::missing_salt))]
MissingSalt,
#[error("No root key configured in database")]
#[diagnostic(code(arbiter_server::unseal::no_root_key))]
NoRootKey,
}
#[derive(Error, Debug, Diagnostic)]
pub enum SealError {
#[error("Invalid state for seal")]
#[diagnostic(code(arbiter_server::seal::invalid_state))]
InvalidState,
}
/// Secure in-memory storage for root encryption key
///
/// Uses `secrecy` crate for automatic zeroization on drop to prevent key material
/// from remaining in memory after use. SecretBox provides heap-allocated secret
/// storage that implements Send + Sync for safe use in async contexts.
pub struct KeyStorage {
/// 32-byte root key protected by SecretBox
key: SecretBox<[u8; 32]>,
}
impl KeyStorage {
/// Create new KeyStorage from a 32-byte root key
pub fn new(key: [u8; 32]) -> Self {
Self {
key: SecretBox::new(Box::new(key)),
}
}
/// Access the key for cryptographic operations
pub fn key(&self) -> &[u8; 32] {
self.key.expose_secret()
}
}
// Drop is implemented automatically via secrecy::Zeroize,
// which zeroes the memory when it is freed
statemachine! {
name: Server,
transitions: {
*NotBootstrapped + Bootstrapped = Sealed,
Sealed + Unsealed(KeyStorage) / move_key = Ready(KeyStorage),
Ready(KeyStorage) + Sealed / dispose_key = Sealed,
}
}
pub struct _Context;
impl ServerStateMachineContext for _Context {
/// Move key from unseal event into Ready state
fn move_key(&mut self, event_data: KeyStorage) -> Result<KeyStorage, ()> {
// Simply move the KeyStorage from the event into the state
// No cloning - the event data is consumed
Ok(event_data)
}
/// Securely dispose of key when sealing
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn dispose_key(&mut self, _state_data: &KeyStorage) -> Result<(), ()> {
// The KeyStorage will be dropped after the state transition
// secrecy::Zeroize zeroes the memory automatically
Ok(())
}
}
pub(crate) struct _ServerContextInner {
pub db: db::DatabasePool,
pub state: RwLock<ServerStateMachine<_Context>>,
pub rng: StdRng,
pub tls: Arc<TlsManager>,
pub bootstrapper: ActorRef<BootstrapActor>,
pub rotation_state: RwLock<RotationState>,
pub rotation_acks: Arc<RwLock<HashSet<VerifyingKey>>>,
pub user_agent_leases: LeaseHandler<VerifyingKey>,
pub client_leases: LeaseHandler<VerifyingKey>,
}
#[derive(Clone)]
pub(crate) struct ServerContext(Arc<_ServerContextInner>);
impl std::ops::Deref for ServerContext {
type Target = _ServerContextInner;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl ServerContext {
/// Check if all active clients have acknowledged the rotation
pub async fn check_rotation_ready(&self) -> bool {
// TODO: Implement proper rotation readiness check
// For now, return false as placeholder
false
}
async fn load_tls(
db: &db::DatabasePool,
settings: Option<&ArbiterSetting>,
) -> Result<TlsManager, InitError> {
match settings {
Some(s) if s.current_cert_id.is_some() => {
// Load active certificate from tls_certificates table
Ok(TlsManager::load_from_db(
db.clone(),
s.current_cert_id.unwrap(),
)
.await?)
}
Some(s) => {
// Legacy migration: extract validity and save to new table
let tls_data_raw = TlsDataRaw {
cert: s.cert.clone(),
key: s.cert_key.clone(),
};
// For legacy certificates, use current time as not_before
// and current time + 90 days as not_after
let not_before = chrono::Utc::now().timestamp();
let not_after = not_before + (90 * 24 * 60 * 60); // 90 days
Ok(TlsManager::new_from_legacy(
db.clone(),
tls_data_raw,
not_before,
not_after,
)
.await?)
}
None => {
// First startup - generate new certificate
Ok(TlsManager::new(db.clone()).await?)
}
}
}
pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
let mut conn = db.get().await?;
let rng = rand::make_rng();
let settings = arbiter_settings::table
.first::<ArbiterSetting>(&mut conn)
.await
.optional()?;
drop(conn);
// Load TLS manager
let tls = Self::load_tls(&db, settings.as_ref()).await?;
// Load rotation state from database
let rotation_state = RotationState::load_from_db(&db)
.await
.unwrap_or(RotationState::Normal);
let bootstrap_token = generate_token().await?;
let mut state = ServerStateMachine::new(_Context);
if let Some(settings) = &settings
&& settings.root_key_id.is_some()
{
// TODO: pass the encrypted root key to the state machine and let it handle decryption and transition to Sealed
let _ = state.process_event(ServerEvents::Bootstrapped);
}
// Create shutdown channel for rotation task
let (rotation_shutdown_tx, rotation_shutdown_rx) = watch::channel(false);
// Initialize bootstrap actor
let bootstrapper = BootstrapActor::spawn(BootstrapActor::new(&db).await?);
let context = Arc::new(_ServerContextInner {
db: db.clone(),
rng,
tls: Arc::new(tls),
state: RwLock::new(state),
bootstrapper,
rotation_state: RwLock::new(rotation_state),
rotation_acks: Arc::new(RwLock::new(HashSet::new())),
user_agent_leases: Default::default(),
client_leases: Default::default(),
});
Ok(Self(context))
}
/// Unseal vault with password
pub async fn unseal(&self, password: &str) -> Result<(), UnsealError> {
use crate::crypto::root_key;
use diesel::QueryDsl as _;
// 1. Get root_key_id from settings
let mut conn = self.db.get().await?;
let settings: db::models::ArbiterSetting = schema::arbiter_settings::table
.first(&mut conn)
.await?;
let root_key_id = settings.root_key_id.ok_or(UnsealError::NoRootKey)?;
// 2. Load encrypted root key
let encrypted: db::models::AeadEncrypted = schema::aead_encrypted::table
.find(root_key_id)
.first(&mut conn)
.await?;
let salt = encrypted
.argon2_salt
.as_ref()
.ok_or(UnsealError::MissingSalt)?;
drop(conn);
// 3. Decrypt root key using password
let root_key = root_key::decrypt_root_key(&encrypted, password, salt)
.map_err(UnsealError::DecryptionFailed)?;
// 4. Create secure storage
let key_storage = KeyStorage::new(root_key);
// 5. Transition state machine
let mut state = self.state.write().await;
state
.process_event(ServerEvents::Unsealed(key_storage))
.map_err(|_| UnsealError::InvalidState)?;
Ok(())
}
/// Seal the server (lock the key)
pub async fn seal(&self) -> Result<(), SealError> {
let mut state = self.state.write().await;
state
.process_event(ServerEvents::Sealed)
.map_err(|_| SealError::InvalidState)?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_keystorage_creation() {
let key = [42u8; 32];
let storage = KeyStorage::new(key);
assert_eq!(storage.key()[0], 42);
assert_eq!(storage.key().len(), 32);
}
#[test]
fn test_keystorage_zeroization() {
let key = [99u8; 32];
{
let _storage = KeyStorage::new(key);
// storage is dropped here
}
// After the drop, SecretBox should have zeroed the memory
// This is verified automatically via secrecy::Zeroize
}
#[test]
fn test_state_machine_transitions() {
let mut state = ServerStateMachine::new(_Context);
// Initial state
assert!(matches!(state.state(), &ServerStates::NotBootstrapped));
// Bootstrapped transition
state.process_event(ServerEvents::Bootstrapped).unwrap();
assert!(matches!(state.state(), &ServerStates::Sealed));
// Unsealed transition
let key_storage = KeyStorage::new([1u8; 32]);
state
.process_event(ServerEvents::Unsealed(key_storage))
.unwrap();
assert!(matches!(state.state(), &ServerStates::Ready(_)));
// Sealed transition
state.process_event(ServerEvents::Sealed).unwrap();
assert!(matches!(state.state(), &ServerStates::Sealed));
}
#[test]
fn test_move_key_callback() {
let mut ctx = _Context;
let key_storage = KeyStorage::new([7u8; 32]);
let result = ctx.move_key(key_storage);
assert!(result.is_ok());
assert_eq!(result.unwrap().key()[0], 7);
}
#[test]
fn test_dispose_key_callback() {
let mut ctx = _Context;
let key_storage = KeyStorage::new([13u8; 32]);
let result = ctx.dispose_key(&key_storage);
assert!(result.is_ok());
}
#[test]
fn test_invalid_state_transitions() {
let mut state = ServerStateMachine::new(_Context);
// Attempt to unseal without bootstrapping first
let key_storage = KeyStorage::new([1u8; 32]);
let result = state.process_event(ServerEvents::Unsealed(key_storage));
assert!(result.is_err());
// The correct path
state.process_event(ServerEvents::Bootstrapped).unwrap();
// Attempt to bootstrap a second time
let result = state.process_event(ServerEvents::Bootstrapped);
assert!(result.is_err());
}
}


@@ -1,46 +0,0 @@
use std::sync::Arc;
use dashmap::DashSet;
#[derive(Clone, Default)]
struct LeaseStorage<T: Eq + std::hash::Hash>(Arc<DashSet<T>>);
/// A lease that automatically releases the item when dropped
pub struct Lease<T: Clone + std::hash::Hash + Eq> {
item: T,
storage: LeaseStorage<T>,
}
impl<T: Clone + std::hash::Hash + Eq> Drop for Lease<T> {
fn drop(&mut self) {
self.storage.0.remove(&self.item);
}
}
#[derive(Clone, Default)]
pub struct LeaseHandler<T: Clone + std::hash::Hash + Eq> {
storage: LeaseStorage<T>,
}
impl<T: Clone + std::hash::Hash + Eq> LeaseHandler<T> {
pub fn new() -> Self {
Self {
storage: LeaseStorage(Arc::new(DashSet::new())),
}
}
pub fn acquire(&self, item: T) -> Result<Lease<T>, ()> {
if self.storage.0.insert(item.clone()) {
Ok(Lease {
item,
storage: self.storage.clone(),
})
} else {
Err(())
}
}
/// Get all currently leased items
pub fn get_all(&self) -> Vec<T> {
self.storage.0.iter().map(|entry| entry.clone()).collect()
}
}
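The RAII lease pattern above can be sketched with std-only types — `DashSet` swapped for `Mutex<HashSet>` so the example compiles without dependencies. Names mirror the file above, but this is an illustration, not the crate's implementation:

```rust
use std::collections::HashSet;
use std::sync::{Arc, Mutex};

struct Lease<T: Clone + std::hash::Hash + Eq> {
    item: T,
    storage: Arc<Mutex<HashSet<T>>>,
}

impl<T: Clone + std::hash::Hash + Eq> Drop for Lease<T> {
    fn drop(&mut self) {
        // Releasing the lease is just removing the item from the shared set
        self.storage.lock().unwrap().remove(&self.item);
    }
}

#[derive(Clone, Default)]
struct LeaseHandler<T: Clone + std::hash::Hash + Eq> {
    storage: Arc<Mutex<HashSet<T>>>,
}

impl<T: Clone + std::hash::Hash + Eq> LeaseHandler<T> {
    fn acquire(&self, item: T) -> Option<Lease<T>> {
        // insert() returns false when the item is already leased
        if self.storage.lock().unwrap().insert(item.clone()) {
            Some(Lease { item, storage: self.storage.clone() })
        } else {
            None
        }
    }
}

fn main() {
    let leases: LeaseHandler<&str> = LeaseHandler::default();
    let first = leases.acquire("agent-1");
    assert!(first.is_some()); // first acquisition succeeds
    assert!(leases.acquire("agent-1").is_none()); // double-lease is refused
    drop(first); // Drop releases the lease...
    assert!(leases.acquire("agent-1").is_some()); // ...so it can be re-acquired
}
```

The real implementation uses `DashSet` to avoid taking a whole-set lock on every acquire and release.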


@@ -0,0 +1,65 @@
use std::sync::Arc;
use miette::Diagnostic;
use thiserror::Error;
use crate::{
actors::GlobalActors,
context::tls::TlsManager,
db::{self},
};
pub mod tls;
#[derive(Error, Debug, Diagnostic)]
pub enum InitError {
#[error("Database setup failed: {0}")]
#[diagnostic(code(arbiter_server::init::database_setup))]
DatabaseSetup(#[from] db::DatabaseSetupError),
#[error("Connection acquire failed: {0}")]
#[diagnostic(code(arbiter_server::init::database_pool))]
DatabasePool(#[from] db::PoolError),
#[error("Database query error: {0}")]
#[diagnostic(code(arbiter_server::init::database_query))]
DatabaseQuery(#[from] diesel::result::Error),
#[error("TLS initialization failed: {0}")]
#[diagnostic(code(arbiter_server::init::tls_init))]
Tls(#[from] tls::InitError),
#[error("Actor spawn failed: {0}")]
#[diagnostic(code(arbiter_server::init::actor_spawn))]
ActorSpawn(#[from] crate::actors::SpawnError),
#[error("I/O Error: {0}")]
#[diagnostic(code(arbiter_server::init::io))]
Io(#[from] std::io::Error),
}
pub struct _ServerContextInner {
pub db: db::DatabasePool,
pub tls: TlsManager,
pub actors: GlobalActors,
}
#[derive(Clone)]
pub struct ServerContext(Arc<_ServerContextInner>);
impl std::ops::Deref for ServerContext {
type Target = _ServerContextInner;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl ServerContext {
pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
Ok(Self(Arc::new(_ServerContextInner {
actors: GlobalActors::spawn(db.clone()).await?,
tls: TlsManager::new(db.clone()).await?,
db,
})))
}
}


@@ -0,0 +1,252 @@
use std::string::FromUtf8Error;
use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper as _};
use diesel_async::{AsyncConnection, RunQueryDsl};
use miette::Diagnostic;
use pem::Pem;
use rcgen::{
BasicConstraints, Certificate, CertificateParams, CertifiedIssuer, DistinguishedName, DnType,
IsCa, Issuer, KeyPair, KeyUsagePurpose,
};
use rustls::pki_types::pem::PemObject;
use thiserror::Error;
use tonic::transport::CertificateDer;
use crate::db::{
self,
models::{NewTlsHistory, TlsHistory},
schema::{
arbiter_settings,
tls_history::{self},
},
};
const ENCODE_CONFIG: pem::EncodeConfig = {
let line_ending = match cfg!(target_family = "windows") {
true => pem::LineEnding::CRLF,
false => pem::LineEnding::LF,
};
pem::EncodeConfig::new().set_line_ending(line_ending)
};
#[derive(Error, Debug, Diagnostic)]
pub enum InitError {
#[error("Key generation error during TLS initialization: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_generation))]
KeyGeneration(#[from] rcgen::Error),
#[error("Key invalid format: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_invalid_format))]
KeyInvalidFormat(#[from] FromUtf8Error),
#[error("Key deserialization error: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_deserialization))]
KeyDeserializationError(rcgen::Error),
#[error("Database error during TLS initialization: {0}")]
#[diagnostic(code(arbiter_server::tls_init::database_error))]
DatabaseError(#[from] diesel::result::Error),
#[error("Pem deserialization error during TLS initialization: {0}")]
#[diagnostic(code(arbiter_server::tls_init::pem_deserialization))]
PemDeserializationError(#[from] rustls::pki_types::pem::Error),
#[error("Database pool acquire error during TLS initialization: {0}")]
#[diagnostic(code(arbiter_server::tls_init::database_pool_acquire))]
DatabasePoolAcquire(#[from] db::PoolError),
}
pub type PemCert = String;
pub fn encode_cert_to_pem(cert: &CertificateDer) -> PemCert {
pem::encode_config(
&Pem::new("CERTIFICATE", cert.to_vec()),
ENCODE_CONFIG,
)
}
#[allow(unused)]
struct SerializedTls {
cert_pem: PemCert,
cert_key_pem: String,
}
struct TlsCa {
issuer: Issuer<'static, KeyPair>,
cert: CertificateDer<'static>,
}
impl TlsCa {
fn generate() -> Result<Self, InitError> {
let keypair = KeyPair::generate()?;
let mut params = CertificateParams::new(["Arbiter Instance CA".into()])?;
params.is_ca = IsCa::Ca(BasicConstraints::Unconstrained);
params.key_usages = vec![
KeyUsagePurpose::KeyCertSign,
KeyUsagePurpose::CrlSign,
KeyUsagePurpose::DigitalSignature,
];
let mut dn = DistinguishedName::new();
dn.push(DnType::CommonName, "Arbiter Instance CA");
params.distinguished_name = dn;
let certified_issuer = CertifiedIssuer::self_signed(params, keypair)?;
let cert_key_pem = certified_issuer.key().serialize_pem();
let issuer = Issuer::from_ca_cert_pem(
&certified_issuer.pem(),
KeyPair::from_pem(cert_key_pem.as_ref()).map_err(InitError::KeyDeserializationError)?,
)?;
Ok(Self {
issuer,
cert: certified_issuer.der().clone(),
})
}
fn generate_leaf(&self) -> Result<TlsCert, InitError> {
let cert_key = KeyPair::generate()?;
let mut params = CertificateParams::new(["Arbiter Instance Leaf".into()])?;
params.is_ca = IsCa::NoCa;
params.key_usages = vec![
KeyUsagePurpose::DigitalSignature,
KeyUsagePurpose::KeyEncipherment,
];
let mut dn = DistinguishedName::new();
dn.push(DnType::CommonName, "Arbiter Instance Leaf");
params.distinguished_name = dn;
let new_cert = params.signed_by(&cert_key, &self.issuer)?;
Ok(TlsCert {
cert: new_cert,
cert_key,
})
}
#[allow(unused)]
fn serialize(&self) -> Result<SerializedTls, InitError> {
let cert_key_pem = self.issuer.key().serialize_pem();
Ok(SerializedTls {
cert_pem: encode_cert_to_pem(&self.cert),
cert_key_pem,
})
}
#[allow(unused)]
fn try_deserialize(cert_pem: &str, cert_key_pem: &str) -> Result<Self, InitError> {
let keypair =
KeyPair::from_pem(cert_key_pem).map_err(InitError::KeyDeserializationError)?;
let issuer = Issuer::from_ca_cert_pem(cert_pem, keypair)?;
Ok(Self {
issuer,
cert: CertificateDer::from_pem_slice(cert_pem.as_bytes())?,
})
}
}
struct TlsCert {
cert: Certificate,
cert_key: KeyPair,
}
// TODO: Implement cert rotation
pub struct TlsManager {
cert: CertificateDer<'static>,
keypair: KeyPair,
ca_cert: CertificateDer<'static>,
_db: db::DatabasePool,
}
impl TlsManager {
pub async fn generate_new(db: &db::DatabasePool) -> Result<Self, InitError> {
let ca = TlsCa::generate()?;
let new_cert = ca.generate_leaf()?;
{
let mut conn = db.get().await?;
conn.transaction(|conn| {
Box::pin(async {
let new_tls_history = NewTlsHistory {
cert: new_cert.cert.pem(),
cert_key: new_cert.cert_key.serialize_pem(),
ca_cert: encode_cert_to_pem(&ca.cert),
ca_key: ca.issuer.key().serialize_pem(),
};
let inserted_tls_history: i32 = diesel::insert_into(tls_history::table)
.values(&new_tls_history)
.returning(tls_history::id)
.get_result(conn)
.await?;
diesel::update(arbiter_settings::table)
.set(arbiter_settings::tls_id.eq(inserted_tls_history))
.execute(conn)
.await?;
Result::<_, diesel::result::Error>::Ok(())
})
})
.await?;
}
Ok(Self {
cert: new_cert.cert.der().clone(),
keypair: new_cert.cert_key,
ca_cert: ca.cert,
_db: db.clone(),
})
}
pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
let cert_data: Option<TlsHistory> = {
let mut conn = db.get().await?;
arbiter_settings::table
.left_join(tls_history::table)
.select(Option::<TlsHistory>::as_select())
.first(&mut conn)
.await?
};
match cert_data {
Some(data) => {
let try_load = || -> Result<_, Box<dyn std::error::Error>> {
let keypair = KeyPair::from_pem(&data.cert_key)?;
let cert = CertificateDer::from_pem_slice(data.cert.as_bytes())?;
let ca_cert = CertificateDer::from_pem_slice(data.ca_cert.as_bytes())?;
Ok(Self {
cert,
keypair,
ca_cert,
_db: db.clone(),
})
};
match try_load() {
Ok(manager) => Ok(manager),
Err(e) => {
eprintln!("Failed to load existing TLS certs: {e}. Generating new ones.");
Self::generate_new(&db).await
}
}
}
None => Self::generate_new(&db).await,
}
}
pub fn cert(&self) -> &CertificateDer<'static> {
&self.cert
}
pub fn ca_cert(&self) -> &CertificateDer<'static> {
&self.ca_cert
}
pub fn cert_pem(&self) -> PemCert {
encode_cert_to_pem(&self.cert)
}
pub fn key_pem(&self) -> String {
self.keypair.serialize_pem()
}
}


@@ -1,192 +0,0 @@
use std::sync::Arc;
use std::string::FromUtf8Error;
use miette::Diagnostic;
use rcgen::{Certificate, KeyPair};
use rustls::pki_types::CertificateDer;
use thiserror::Error;
use tokio::sync::RwLock;
use crate::db;
pub mod rotation;
pub use rotation::{RotationError, RotationState, RotationTask};
#[derive(Error, Debug, Diagnostic)]
#[expect(clippy::enum_variant_names)]
pub enum TlsInitError {
#[error("Key generation error during TLS initialization: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_generation))]
KeyGeneration(#[from] rcgen::Error),
#[error("Key invalid format: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_invalid_format))]
KeyInvalidFormat(#[from] FromUtf8Error),
#[error("Key deserialization error: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_deserialization))]
KeyDeserializationError(rcgen::Error),
}
pub struct TlsData {
pub cert: CertificateDer<'static>,
pub keypair: KeyPair,
}
pub struct TlsDataRaw {
pub cert: Vec<u8>,
pub key: Vec<u8>,
}
impl TlsDataRaw {
pub fn serialize(cert: &TlsData) -> Self {
Self {
cert: cert.cert.as_ref().to_vec(),
key: cert.keypair.serialize_pem().as_bytes().to_vec(),
}
}
pub fn deserialize(&self) -> Result<TlsData, TlsInitError> {
let cert = CertificateDer::from_slice(&self.cert).into_owned();
let key =
String::from_utf8(self.key.clone()).map_err(TlsInitError::KeyInvalidFormat)?;
let keypair = KeyPair::from_pem(&key).map_err(TlsInitError::KeyDeserializationError)?;
Ok(TlsData { cert, keypair })
}
}
/// Metadata about a certificate including validity period
pub struct CertificateMetadata {
pub cert_id: i32,
pub cert: CertificateDer<'static>,
pub keypair: Arc<KeyPair>,
pub not_before: i64,
pub not_after: i64,
pub created_at: i64,
}
pub(crate) fn generate_cert(key: &KeyPair) -> Result<(Certificate, i64, i64), rcgen::Error> {
let params = rcgen::CertificateParams::new(vec![
"arbiter.local".to_string(),
"localhost".to_string(),
])?;
// Set validity period: 90 days from now
let not_before = chrono::Utc::now();
let not_after = not_before + chrono::Duration::days(90);
// Note: rcgen doesn't directly expose not_before/not_after setting in all versions
// For now, we'll generate the cert and track validity separately
let cert = params.self_signed(key)?;
Ok((cert, not_before.timestamp(), not_after.timestamp()))
}
// Certificate rotation enabled
pub(crate) struct TlsManager {
// Current active certificate (atomic replacement via RwLock)
current_cert: Arc<RwLock<CertificateMetadata>>,
// Database pool for persistence
db: db::DatabasePool,
}
impl TlsManager {
/// Create new TlsManager with a generated certificate
pub async fn new(db: db::DatabasePool) -> Result<Self, TlsInitError> {
let keypair = KeyPair::generate()?;
let (cert, not_before, not_after) = generate_cert(&keypair)?;
let cert_der = cert.der().clone();
// For initial creation, cert_id will be set after DB insert
let metadata = CertificateMetadata {
cert_id: 0, // Temporary, will be updated after DB insert
cert: cert_der,
keypair: Arc::new(keypair),
not_before,
not_after,
created_at: chrono::Utc::now().timestamp(),
};
Ok(Self {
current_cert: Arc::new(RwLock::new(metadata)),
db,
})
}
/// Load TlsManager from database with specific certificate ID
pub async fn load_from_db(_db: db::DatabasePool, _cert_id: i32) -> Result<Self, TlsInitError> {
// TODO: Load the certificate from the database
// For now, return an error - this will be implemented once database access is ready
Err(TlsInitError::KeyGeneration(rcgen::Error::CouldNotParseCertificate))
}
/// Create from legacy TlsDataRaw format
pub async fn new_from_legacy(
db: db::DatabasePool,
data: TlsDataRaw,
not_before: i64,
not_after: i64,
) -> Result<Self, TlsInitError> {
let tls_data = data.deserialize()?;
let metadata = CertificateMetadata {
cert_id: 1, // Legacy certificate gets ID 1
cert: tls_data.cert,
keypair: Arc::new(tls_data.keypair),
not_before,
not_after,
created_at: chrono::Utc::now().timestamp(),
};
Ok(Self {
current_cert: Arc::new(RwLock::new(metadata)),
db,
})
}
/// Get current certificate data
pub async fn get_certificate(&self) -> (CertificateDer<'static>, Arc<KeyPair>) {
let cert = self.current_cert.read().await;
(cert.cert.clone(), cert.keypair.clone())
}
/// Replace certificate atomically
pub async fn replace_certificate(&self, new_cert: CertificateMetadata) -> Result<(), TlsInitError> {
let mut cert = self.current_cert.write().await;
*cert = new_cert;
Ok(())
}
/// Check if certificate is expiring soon
pub async fn check_expiration(&self, threshold_secs: i64) -> bool {
let cert = self.current_cert.read().await;
let now = chrono::Utc::now().timestamp();
cert.not_after - now < threshold_secs
}
/// Get certificate metadata for rotation logic
pub async fn get_certificate_metadata(&self) -> CertificateMetadata {
let cert = self.current_cert.read().await;
CertificateMetadata {
cert_id: cert.cert_id,
cert: cert.cert.clone(),
keypair: cert.keypair.clone(),
not_before: cert.not_before,
not_after: cert.not_after,
created_at: cert.created_at,
}
}
pub fn bytes(&self) -> TlsDataRaw {
// Stub: returns empty data until callers migrate to the async
// get_certificate() accessor.
// TODO: make this async or remove it entirely.
TlsDataRaw {
cert: vec![],
key: vec![],
}
}
}
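The TlsManager above keeps the whole CertificateMetadata behind one RwLock so a replacement is all-or-nothing. A minimal, dependency-free sketch of that swap pattern, using `std::sync::RwLock` in place of tokio's async lock and a hypothetical `CertMeta` stand-in for the real metadata struct:

```rust
use std::sync::{Arc, RwLock};

// Simplified stand-in for CertificateMetadata; the real struct also carries
// the DER certificate and the keypair.
#[derive(Clone, Debug, PartialEq)]
struct CertMeta {
    cert_id: i32,
    not_after: i64,
}

struct Manager {
    current: Arc<RwLock<CertMeta>>,
}

impl Manager {
    // Readers clone the metadata under a short read lock.
    fn get(&self) -> CertMeta {
        self.current.read().unwrap().clone()
    }
    // A writer replaces the whole struct in one step, so readers observe
    // either the old certificate or the new one, never a mix of fields.
    fn replace(&self, new: CertMeta) {
        *self.current.write().unwrap() = new;
    }
    fn expiring_within(&self, now: i64, threshold_secs: i64) -> bool {
        self.current.read().unwrap().not_after - now < threshold_secs
    }
}

fn main() {
    let mgr = Manager {
        current: Arc::new(RwLock::new(CertMeta { cert_id: 1, not_after: 100 })),
    };
    assert!(mgr.expiring_within(50, 60)); // 100 - 50 < 60
    mgr.replace(CertMeta { cert_id: 2, not_after: 1_000 });
    assert_eq!(mgr.get().cert_id, 2);
    assert!(!mgr.expiring_within(50, 60));
}
```

Cloning under the read lock keeps lock hold times short, at the cost of copying the metadata on every `get`; that trade-off matches the `get_certificate`/`replace_certificate` pair above.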

View File

@@ -1,552 +0,0 @@
use std::collections::HashSet;
use std::sync::Arc;
use std::time::Duration;
use diesel::prelude::*;
use diesel_async::RunQueryDsl;
use ed25519_dalek::VerifyingKey;
use miette::Diagnostic;
use rcgen::KeyPair;
use thiserror::Error;
use tokio::sync::watch;
use tracing::{debug, error, info, warn};
use crate::context::ServerContext;
use crate::db::models::{NewRotationClientAck, NewTlsCertificate, NewTlsRotationHistory};
use crate::db::schema::{rotation_client_acks, tls_certificates, tls_rotation_history, tls_rotation_state};
use crate::db::DatabasePool;
use super::{generate_cert, CertificateMetadata, TlsInitError};
#[derive(Error, Debug, Diagnostic)]
pub enum RotationError {
#[error("Certificate generation failed: {0}")]
#[diagnostic(code(arbiter_server::rotation::cert_generation))]
CertGeneration(#[from] rcgen::Error),
#[error("Database error: {0}")]
#[diagnostic(code(arbiter_server::rotation::database))]
Database(#[from] diesel::result::Error),
#[error("TLS initialization error: {0}")]
#[diagnostic(code(arbiter_server::rotation::tls_init))]
TlsInit(#[from] TlsInitError),
#[error("Invalid rotation state: {0}")]
#[diagnostic(code(arbiter_server::rotation::invalid_state))]
InvalidState(String),
#[error("No active certificate found")]
#[diagnostic(code(arbiter_server::rotation::no_active_cert))]
NoActiveCertificate,
}
/// State of the certificate rotation process
#[derive(Debug, Clone)]
pub enum RotationState {
/// Normal operation, no rotation required
Normal,
/// Rotation initiated, new certificate generated
RotationInitiated {
initiated_at: i64,
new_cert_id: i32,
},
/// Waiting for acknowledgements (ACKs) from clients
WaitingForAcks {
new_cert_id: i32,
initiated_at: i64,
timeout_at: i64,
},
/// All ACKs received or the timeout expired; ready to rotate
ReadyToRotate {
new_cert_id: i32,
},
}
impl RotationState {
/// Load the state from the database
pub async fn load_from_db(db: &DatabasePool) -> Result<Self, RotationError> {
use crate::db::schema::tls_rotation_state::dsl::*;
let mut conn = db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
let state_record: (i32, String, Option<i32>, Option<i32>, Option<i32>) =
tls_rotation_state
.select((id, state, new_cert_id, initiated_at, timeout_at))
.filter(id.eq(1))
.first(&mut conn)
.await?;
let rotation_state = match state_record.1.as_str() {
"normal" => RotationState::Normal,
"initiated" => {
let cert_id = state_record.2.ok_or_else(|| {
RotationError::InvalidState("Initiated state missing new_cert_id".into())
})?;
let init_at = state_record.3.ok_or_else(|| {
RotationError::InvalidState("Initiated state missing initiated_at".into())
})?;
RotationState::RotationInitiated {
initiated_at: init_at as i64,
new_cert_id: cert_id,
}
}
"waiting_acks" => {
let cert_id = state_record.2.ok_or_else(|| {
RotationError::InvalidState("WaitingForAcks state missing new_cert_id".into())
})?;
let init_at = state_record.3.ok_or_else(|| {
RotationError::InvalidState("WaitingForAcks state missing initiated_at".into())
})?;
let timeout = state_record.4.ok_or_else(|| {
RotationError::InvalidState("WaitingForAcks state missing timeout_at".into())
})?;
RotationState::WaitingForAcks {
new_cert_id: cert_id,
initiated_at: init_at as i64,
timeout_at: timeout as i64,
}
}
"ready" => {
let cert_id = state_record.2.ok_or_else(|| {
RotationError::InvalidState("Ready state missing new_cert_id".into())
})?;
RotationState::ReadyToRotate {
new_cert_id: cert_id,
}
}
other => {
return Err(RotationError::InvalidState(format!(
"Unknown state: {}",
other
)))
}
};
Ok(rotation_state)
}
/// Save the state to the database
pub async fn save_to_db(&self, db: &DatabasePool) -> Result<(), RotationError> {
use crate::db::schema::tls_rotation_state::dsl::*;
let mut conn = db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
let (state_str, cert_id, init_at, timeout) = match self {
RotationState::Normal => ("normal", None, None, None),
RotationState::RotationInitiated {
initiated_at: init,
new_cert_id: cert,
} => ("initiated", Some(*cert), Some(*init as i32), None),
RotationState::WaitingForAcks {
new_cert_id: cert,
initiated_at: init,
timeout_at: timeout_val,
} => (
"waiting_acks",
Some(*cert),
Some(*init as i32),
Some(*timeout_val as i32),
),
RotationState::ReadyToRotate { new_cert_id: cert } => ("ready", Some(*cert), None, None),
};
diesel::update(tls_rotation_state.filter(id.eq(1)))
.set((
state.eq(state_str),
new_cert_id.eq(cert_id),
initiated_at.eq(init_at),
timeout_at.eq(timeout),
))
.execute(&mut conn)
.await?;
Ok(())
}
}
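The load/save pair above persists each RotationState variant as a row of `(state, new_cert_id, initiated_at, timeout_at)`, rejecting rows whose required columns are NULL. A standalone sketch of that encoding, with plain tuples standing in for the diesel row (the mirror `State` enum here is illustrative, not the real type):

```rust
// Dependency-free mirror of RotationState, for illustration only.
#[derive(Clone, Debug, PartialEq)]
enum State {
    Normal,
    RotationInitiated { initiated_at: i64, new_cert_id: i32 },
    WaitingForAcks { new_cert_id: i32, initiated_at: i64, timeout_at: i64 },
    ReadyToRotate { new_cert_id: i32 },
}

// Encode a state as the (state, new_cert_id, initiated_at, timeout_at) row.
fn encode(s: &State) -> (&'static str, Option<i32>, Option<i32>, Option<i32>) {
    match s {
        State::Normal => ("normal", None, None, None),
        State::RotationInitiated { initiated_at, new_cert_id } =>
            ("initiated", Some(*new_cert_id), Some(*initiated_at as i32), None),
        State::WaitingForAcks { new_cert_id, initiated_at, timeout_at } =>
            ("waiting_acks", Some(*new_cert_id), Some(*initiated_at as i32), Some(*timeout_at as i32)),
        State::ReadyToRotate { new_cert_id } => ("ready", Some(*new_cert_id), None, None),
    }
}

// Decode a row, failing if a required column is NULL.
fn decode(row: (&str, Option<i32>, Option<i32>, Option<i32>)) -> Result<State, String> {
    match row.0 {
        "normal" => Ok(State::Normal),
        "initiated" => Ok(State::RotationInitiated {
            new_cert_id: row.1.ok_or("missing new_cert_id")?,
            initiated_at: row.2.ok_or("missing initiated_at")? as i64,
        }),
        "waiting_acks" => Ok(State::WaitingForAcks {
            new_cert_id: row.1.ok_or("missing new_cert_id")?,
            initiated_at: row.2.ok_or("missing initiated_at")? as i64,
            timeout_at: row.3.ok_or("missing timeout_at")? as i64,
        }),
        "ready" => Ok(State::ReadyToRotate { new_cert_id: row.1.ok_or("missing new_cert_id")? }),
        other => Err(format!("Unknown state: {}", other)),
    }
}

fn main() {
    let s = State::WaitingForAcks { new_cert_id: 7, initiated_at: 100, timeout_at: 400 };
    assert_eq!(decode(encode(&s)).unwrap(), s);
    assert!(decode(("waiting_acks", Some(7), None, None)).is_err());
}
```

Note that the real schema stores the i64 timestamps in i32 columns via `as i32` casts, which silently truncates timestamps past 2038; widening those columns would remove the casts entirely.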
/// Background task for automatic certificate rotation
pub struct RotationTask {
context: Arc<crate::context::_ServerContextInner>,
check_interval: Duration,
rotation_threshold: Duration,
ack_timeout: Duration,
shutdown_rx: watch::Receiver<bool>,
}
impl RotationTask {
/// Create a new rotation task
pub fn new(
context: Arc<crate::context::_ServerContextInner>,
check_interval: Duration,
rotation_threshold: Duration,
ack_timeout: Duration,
shutdown_rx: watch::Receiver<bool>,
) -> Self {
Self {
context,
check_interval,
rotation_threshold,
ack_timeout,
shutdown_rx,
}
}
/// Run the background monitoring and rotation task
pub async fn run(mut self) -> Result<(), RotationError> {
info!("Starting TLS certificate rotation task");
loop {
tokio::select! {
_ = tokio::time::sleep(self.check_interval) => {
if let Err(e) = self.check_and_process().await {
error!("Rotation task error: {}", e);
}
}
_ = self.shutdown_rx.changed() => {
info!("Rotation task shutting down");
break;
}
}
}
Ok(())
}
/// Check the current state and perform any required actions
async fn check_and_process(&self) -> Result<(), RotationError> {
let state = self.context.rotation_state.read().await.clone();
match state {
RotationState::Normal => {
// Check whether rotation is needed
self.check_expiration_and_initiate().await?;
}
RotationState::RotationInitiated { new_cert_id, .. } => {
// Automatically transition to WaitingForAcks
self.transition_to_waiting_acks(new_cert_id).await?;
}
RotationState::WaitingForAcks {
new_cert_id,
timeout_at,
..
} => {
self.handle_waiting_for_acks(new_cert_id, timeout_at).await?;
}
RotationState::ReadyToRotate { new_cert_id } => {
self.execute_rotation(new_cert_id).await?;
}
}
Ok(())
}
/// Check certificate expiration and initiate rotation if needed
async fn check_expiration_and_initiate(&self) -> Result<(), RotationError> {
let threshold_secs = self.rotation_threshold.as_secs() as i64;
if self.context.tls.check_expiration(threshold_secs).await {
info!("Certificate expiring soon, initiating rotation");
self.initiate_rotation().await?;
}
Ok(())
}
/// Initiate rotation: generate a new certificate and save it to the database
pub async fn initiate_rotation(&self) -> Result<i32, RotationError> {
info!("Initiating certificate rotation");
// 1. Generate the new certificate
let keypair = KeyPair::generate()?;
let (cert, not_before, not_after) = generate_cert(&keypair)?;
let cert_der = cert.der().clone();
// 2. Save to the database (is_active = false until activated)
let new_cert_id = self
.save_new_certificate(&cert_der, &keypair, not_before, not_after)
.await?;
info!(new_cert_id, "New certificate generated and saved");
// 3. Update rotation_state
let new_state = RotationState::RotationInitiated {
initiated_at: chrono::Utc::now().timestamp(),
new_cert_id,
};
*self.context.rotation_state.write().await = new_state.clone();
new_state.save_to_db(&self.context.db).await?;
// 4. Log to the audit trail
self.log_rotation_event(new_cert_id, "rotation_initiated", None)
.await?;
Ok(new_cert_id)
}
/// Transition to the WaitingForAcks state and broadcast notifications
async fn transition_to_waiting_acks(&self, new_cert_id: i32) -> Result<(), RotationError> {
info!(new_cert_id, "Transitioning to WaitingForAcks state");
let initiated_at = chrono::Utc::now().timestamp();
let timeout_at = initiated_at + self.ack_timeout.as_secs() as i64;
// Update the state
let new_state = RotationState::WaitingForAcks {
new_cert_id,
initiated_at,
timeout_at,
};
*self.context.rotation_state.write().await = new_state.clone();
new_state.save_to_db(&self.context.db).await?;
// TODO: Broadcast notifications to clients
// self.broadcast_rotation_notification(new_cert_id, timeout_at).await?;
info!(timeout_at, "Rotation notifications sent, waiting for ACKs");
Ok(())
}
/// Handle the WaitingForAcks state: check ACKs and the timeout
async fn handle_waiting_for_acks(
&self,
new_cert_id: i32,
timeout_at: i64,
) -> Result<(), RotationError> {
let now = chrono::Utc::now().timestamp();
// Check for timeout
if now > timeout_at {
let missing = self.get_missing_acks(new_cert_id).await?;
warn!(
missing_count = missing.len(),
"Rotation ACK timeout reached, proceeding with rotation"
);
// Transition to ReadyToRotate
let new_state = RotationState::ReadyToRotate { new_cert_id };
*self.context.rotation_state.write().await = new_state.clone();
new_state.save_to_db(&self.context.db).await?;
self.log_rotation_event(
new_cert_id,
"timeout",
Some(format!("Missing ACKs from {} clients", missing.len())),
)
.await?;
return Ok(());
}
// Check whether all ACKs have been received
let missing = self.get_missing_acks(new_cert_id).await?;
if missing.is_empty() {
info!("All clients acknowledged, ready to rotate");
let new_state = RotationState::ReadyToRotate { new_cert_id };
*self.context.rotation_state.write().await = new_state.clone();
new_state.save_to_db(&self.context.db).await?;
self.log_rotation_event(new_cert_id, "acks_complete", None)
.await?;
} else {
let time_remaining = timeout_at - now;
debug!(
missing_count = missing.len(),
time_remaining,
"Waiting for rotation ACKs"
);
}
Ok(())
}
/// Perform an atomic certificate rotation
async fn execute_rotation(&self, new_cert_id: i32) -> Result<(), RotationError> {
info!(new_cert_id, "Executing certificate rotation");
// 1. Load the new certificate from the database
let new_cert = self.load_certificate(new_cert_id).await?;
// 2. Atomically swap it into the TlsManager
self.context
.tls
.replace_certificate(new_cert)
.await
.map_err(RotationError::TlsInit)?;
// 3. Update the database: old cert is_active=false, new cert is_active=true
self.activate_certificate(new_cert_id).await?;
// 4. TODO: Disconnect all clients
// self.disconnect_all_clients().await?;
// 5. Reset rotation_state
let new_state = RotationState::Normal;
*self.context.rotation_state.write().await = new_state.clone();
new_state.save_to_db(&self.context.db).await?;
// 6. Clear ACKs
self.context.rotation_acks.write().await.clear();
self.clear_rotation_acks(new_cert_id).await?;
// 7. Log the event
self.log_rotation_event(new_cert_id, "activated", None)
.await?;
info!(new_cert_id, "Certificate rotation completed successfully");
Ok(())
}
/// Save the new certificate to the database
async fn save_new_certificate(
&self,
cert_der: &[u8],
keypair: &KeyPair,
cert_not_before: i64,
cert_not_after: i64,
) -> Result<i32, RotationError> {
use crate::db::schema::tls_certificates::dsl::*;
let mut conn = self.context.db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
let new_cert = NewTlsCertificate {
cert: cert_der.to_vec(),
cert_key: keypair.serialize_pem().as_bytes().to_vec(),
not_before: cert_not_before as i32,
not_after: cert_not_after as i32,
is_active: false,
};
diesel::insert_into(tls_certificates)
.values(&new_cert)
.execute(&mut conn)
.await?;
// Fetch the ID of the last inserted row
let cert_id: i32 = diesel::select(diesel::dsl::sql::<diesel::sql_types::Integer>(
"last_insert_rowid()",
))
.first(&mut conn)
.await?;
self.log_rotation_event(cert_id, "created", None).await?;
Ok(cert_id)
}
/// Load a certificate from the database
async fn load_certificate(&self, cert_id: i32) -> Result<CertificateMetadata, RotationError> {
use crate::db::schema::tls_certificates::dsl::*;
let mut conn = self.context.db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
let cert_record: (Vec<u8>, Vec<u8>, i32, i32, i32) = tls_certificates
.select((cert, cert_key, not_before, not_after, created_at))
.filter(id.eq(cert_id))
.first(&mut conn)
.await?;
let cert_der = rustls::pki_types::CertificateDer::from(cert_record.0);
let key_pem = String::from_utf8(cert_record.1)
.map_err(|e| RotationError::InvalidState(format!("Invalid key encoding: {}", e)))?;
let keypair = KeyPair::from_pem(&key_pem)?;
Ok(CertificateMetadata {
cert_id,
cert: cert_der,
keypair: Arc::new(keypair),
not_before: cert_record.2 as i64,
not_after: cert_record.3 as i64,
created_at: cert_record.4 as i64,
})
}
/// Activate a certificate (set is_active=true)
async fn activate_certificate(&self, cert_id: i32) -> Result<(), RotationError> {
use crate::db::schema::tls_certificates::dsl::*;
let mut conn = self.context.db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
// Deactivate all certificates
diesel::update(tls_certificates)
.set(is_active.eq(false))
.execute(&mut conn)
.await?;
// Activate the new one
diesel::update(tls_certificates.filter(id.eq(cert_id)))
.set(is_active.eq(true))
.execute(&mut conn)
.await?;
Ok(())
}
/// Get the list of clients that have not yet sent an ACK
async fn get_missing_acks(&self, _rotation_id: i32) -> Result<Vec<VerifyingKey>, RotationError> {
// TODO: fetch the list of all active clients and subtract
// those that have already sent an ACK.
// For now, return an empty list.
Ok(Vec::new())
}
/// Clear the ACKs for the given rotation from the database
async fn clear_rotation_acks(&self, target_rotation_id: i32) -> Result<(), RotationError> {
use crate::db::schema::rotation_client_acks::dsl::*;
let mut conn = self.context.db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
// Note: a parameter named `rotation_id` would shadow the glob-imported
// column, so the filter would compare the value to itself (always true)
// and delete every ACK row; hence the distinct parameter name.
diesel::delete(rotation_client_acks.filter(rotation_id.eq(target_rotation_id)))
.execute(&mut conn)
.await?;
Ok(())
}
/// Record an event in the audit trail
async fn log_rotation_event(
&self,
history_cert_id: i32,
history_event_type: &str,
history_details: Option<String>,
) -> Result<(), RotationError> {
use crate::db::schema::tls_rotation_history::dsl::*;
let mut conn = self.context.db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
let new_history = NewTlsRotationHistory {
cert_id: history_cert_id,
event_type: history_event_type.to_string(),
details: history_details,
};
diesel::insert_into(tls_rotation_history)
.values(&new_history)
.execute(&mut conn)
.await?;
Ok(())
}
}
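The check_and_process loop above drives a four-state machine: Normal initiates when the certificate nears expiry, RotationInitiated moves to WaitingForAcks, which advances on full acknowledgement or timeout, and ReadyToRotate performs the swap and returns to Normal. Its transition logic can be condensed into one pure function; this sketch passes `expiring` and `acks_missing` in as parameters instead of querying the TlsManager and database, and hard-codes a placeholder cert ID:

```rust
#[derive(Clone, Debug, PartialEq)]
enum State {
    Normal,
    RotationInitiated { new_cert_id: i32 },
    WaitingForAcks { new_cert_id: i32, timeout_at: i64 },
    ReadyToRotate { new_cert_id: i32 },
}

// One tick of the rotation loop. `expiring` and `acks_missing` stand in for
// the TlsManager / database queries made by the real task.
fn tick(state: State, now: i64, ack_timeout: i64, expiring: bool, acks_missing: usize) -> State {
    match state {
        // Placeholder cert ID; the real code uses the ID from the DB insert.
        State::Normal if expiring => State::RotationInitiated { new_cert_id: 1 },
        State::Normal => State::Normal,
        State::RotationInitiated { new_cert_id } =>
            State::WaitingForAcks { new_cert_id, timeout_at: now + ack_timeout },
        State::WaitingForAcks { new_cert_id, timeout_at }
            if now > timeout_at || acks_missing == 0 =>
            State::ReadyToRotate { new_cert_id },
        s @ State::WaitingForAcks { .. } => s, // keep waiting for ACKs
        State::ReadyToRotate { .. } => State::Normal, // rotation executed
    }
}

fn main() {
    let mut s = State::Normal;
    s = tick(s, 0, 300, true, 2); // expiring -> initiate
    s = tick(s, 1, 300, true, 2); // initiated -> waiting (timeout_at = 301)
    s = tick(s, 2, 300, true, 0); // all ACKs received -> ready
    assert_eq!(s, State::ReadyToRotate { new_cert_id: 1 });
    s = tick(s, 3, 300, true, 0); // rotate, back to normal
    assert_eq!(s, State::Normal);
}
```

Keeping the transitions pure like this makes the timeout-versus-ACK race easy to unit test without a database or clock.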

View File

@@ -1,161 +0,0 @@
use std::time::{SystemTime, UNIX_EPOCH};
use miette::Diagnostic;
use secrecy::{ExposeSecret, SecretBox};
use thiserror::Error;
use x25519_dalek::{PublicKey, StaticSecret};
const EPHEMERAL_KEY_LIFETIME_SECS: u64 = 60;
#[derive(Error, Debug, Diagnostic)]
pub enum UnsealError {
#[error("Invalid public key")]
#[diagnostic(code(arbiter_server::unseal::invalid_pubkey))]
InvalidPublicKey,
}
/// Ephemeral X25519 keypair for secure password transmission
///
/// Generated on-demand when client requests unseal. Expires after 60 seconds.
/// Uses StaticSecret stored in SecretBox for automatic zeroization on drop.
pub struct EphemeralKeyPair {
/// Secret key stored securely
secret: SecretBox<StaticSecret>,
public: PublicKey,
expires_at: u64,
}
impl EphemeralKeyPair {
/// Generate new ephemeral X25519 keypair
pub fn generate() -> Self {
// Generate random 32 bytes
let secret_bytes = rand::random::<[u8; 32]>();
let secret = StaticSecret::from(secret_bytes);
let public = PublicKey::from(&secret);
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("System time before UNIX epoch")
.as_secs();
Self {
secret: SecretBox::new(Box::new(secret)),
public,
expires_at: now + EPHEMERAL_KEY_LIFETIME_SECS,
}
}
/// Check if this ephemeral key has expired
pub fn is_expired(&self) -> bool {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("System time before UNIX epoch")
.as_secs();
now > self.expires_at
}
/// Get expiration timestamp (Unix epoch seconds)
pub fn expires_at(&self) -> u64 {
self.expires_at
}
/// Get public key as bytes for transmission to client
pub fn public_bytes(&self) -> Vec<u8> {
self.public.as_bytes().to_vec()
}
/// Perform Diffie-Hellman key exchange with client's public key
///
/// Returns 32-byte shared secret for ChaCha20Poly1305 encryption
pub fn perform_dh(&self, client_pubkey: &[u8]) -> Result<[u8; 32], UnsealError> {
// Parse client public key
let client_public = PublicKey::from(
<[u8; 32]>::try_from(client_pubkey).map_err(|_| UnsealError::InvalidPublicKey)?,
);
// Perform ECDH
let shared_secret = self.secret.expose_secret().diffie_hellman(&client_public);
Ok(shared_secret.to_bytes())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_ephemeral_keypair_generation() {
let keypair = EphemeralKeyPair::generate();
// Public key should be 32 bytes
assert_eq!(keypair.public_bytes().len(), 32);
// Should not be expired immediately
assert!(!keypair.is_expired());
// Expiration should be ~60 seconds in future
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs();
let time_until_expiry = keypair.expires_at() - now;
assert!((59..=61).contains(&time_until_expiry));
}
#[test]
fn test_perform_dh_with_valid_key() {
let server_keypair = EphemeralKeyPair::generate();
let client_secret_bytes = rand::random::<[u8; 32]>();
let client_secret = StaticSecret::from(client_secret_bytes);
let client_public = PublicKey::from(&client_secret);
// Server performs DH
let server_shared_secret = server_keypair
.perform_dh(client_public.as_bytes())
.expect("DH should succeed");
// Client performs DH
let client_shared_secret = client_secret.diffie_hellman(&server_keypair.public);
// Shared secrets should match
assert_eq!(server_shared_secret, client_shared_secret.to_bytes());
assert_eq!(server_shared_secret.len(), 32);
}
#[test]
fn test_perform_dh_with_invalid_key() {
let keypair = EphemeralKeyPair::generate();
// Try with invalid length
let invalid_key = vec![1, 2, 3];
let result = keypair.perform_dh(&invalid_key);
assert!(result.is_err());
// Try with wrong length (not 32 bytes)
let invalid_key = vec![0u8; 16];
let result = keypair.perform_dh(&invalid_key);
assert!(result.is_err());
}
#[test]
fn test_different_keypairs_produce_different_shared_secrets() {
let server_keypair1 = EphemeralKeyPair::generate();
let server_keypair2 = EphemeralKeyPair::generate();
let client_secret_bytes = rand::random::<[u8; 32]>();
let client_secret = StaticSecret::from(client_secret_bytes);
let client_public = PublicKey::from(&client_secret);
let shared1 = server_keypair1
.perform_dh(client_public.as_bytes())
.unwrap();
let shared2 = server_keypair2
.perform_dh(client_public.as_bytes())
.unwrap();
// Different server keys should produce different shared secrets
assert_ne!(shared1, shared2);
}
}

View File

@@ -1,139 +0,0 @@
use chacha20poly1305::{
aead::{Aead, KeyInit},
ChaCha20Poly1305, Key, Nonce,
};
use super::CryptoError;
/// Encrypt plaintext with AEAD (ChaCha20Poly1305)
///
/// Returns (ciphertext, tag) on success
pub fn encrypt(
plaintext: &[u8],
key: &[u8; 32],
nonce: &[u8; 12],
) -> Result<Vec<u8>, CryptoError> {
let cipher_key = Key::from_slice(key);
let cipher = ChaCha20Poly1305::new(cipher_key);
let nonce_array = Nonce::from_slice(nonce);
cipher
.encrypt(nonce_array, plaintext)
.map_err(|e| CryptoError::AeadEncryption(e.to_string()))
}
/// Decrypt ciphertext with AEAD (ChaCha20Poly1305)
///
/// The ciphertext must include the authentication tag (last 16 bytes)
pub fn decrypt(
ciphertext_with_tag: &[u8],
key: &[u8; 32],
nonce: &[u8; 12],
) -> Result<Vec<u8>, CryptoError> {
let cipher_key = Key::from_slice(key);
let cipher = ChaCha20Poly1305::new(cipher_key);
let nonce_array = Nonce::from_slice(nonce);
cipher
.decrypt(nonce_array, ciphertext_with_tag)
.map_err(|e| CryptoError::AeadDecryption(e.to_string()))
}
/// Generate nonce from counter
///
/// Converts i32 counter to 12-byte nonce (big-endian encoding)
pub fn nonce_from_counter(counter: i32) -> [u8; 12] {
let mut nonce = [0u8; 12];
nonce[8..12].copy_from_slice(&counter.to_be_bytes());
nonce
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_aead_encrypt_decrypt_round_trip() {
let plaintext = b"Hello, World! This is a secret message.";
let key = [42u8; 32];
let nonce = nonce_from_counter(1);
// Encrypt
let ciphertext = encrypt(plaintext, &key, &nonce).expect("Encryption failed");
// Verify ciphertext is different from plaintext
assert_ne!(ciphertext.as_slice(), plaintext);
// Decrypt
let decrypted = decrypt(&ciphertext, &key, &nonce).expect("Decryption failed");
// Verify round-trip
assert_eq!(decrypted.as_slice(), plaintext);
}
#[test]
fn test_aead_decrypt_with_wrong_key() {
let plaintext = b"Secret data";
let key = [1u8; 32];
let wrong_key = [2u8; 32];
let nonce = nonce_from_counter(1);
let ciphertext = encrypt(plaintext, &key, &nonce).expect("Encryption failed");
// Attempt decrypt with wrong key
let result = decrypt(&ciphertext, &wrong_key, &nonce);
// Should fail
assert!(result.is_err());
}
#[test]
fn test_aead_decrypt_with_wrong_nonce() {
let plaintext = b"Secret data";
let key = [1u8; 32];
let nonce = nonce_from_counter(1);
let wrong_nonce = nonce_from_counter(2);
let ciphertext = encrypt(plaintext, &key, &nonce).expect("Encryption failed");
// Attempt decrypt with wrong nonce
let result = decrypt(&ciphertext, &key, &wrong_nonce);
// Should fail
assert!(result.is_err());
}
#[test]
fn test_nonce_generation_from_counter() {
let nonce1 = nonce_from_counter(1);
let nonce2 = nonce_from_counter(2);
let nonce_max = nonce_from_counter(i32::MAX);
// Verify nonces are different
assert_ne!(nonce1, nonce2);
// Verify nonce format (first 8 bytes should be zero, last 4 contain counter)
assert_eq!(&nonce1[0..8], &[0u8; 8]);
assert_eq!(&nonce1[8..12], &1i32.to_be_bytes());
assert_eq!(&nonce_max[8..12], &i32::MAX.to_be_bytes());
}
#[test]
fn test_aead_tampered_ciphertext() {
let plaintext = b"Important message";
let key = [7u8; 32];
let nonce = nonce_from_counter(5);
let mut ciphertext = encrypt(plaintext, &key, &nonce).expect("Encryption failed");
// Tamper with ciphertext (flip a bit)
if let Some(byte) = ciphertext.get_mut(5) {
*byte ^= 0x01;
}
// Attempt decrypt - should fail due to authentication tag mismatch
let result = decrypt(&ciphertext, &key, &nonce);
assert!(result.is_err());
}
}

View File

@@ -1,28 +0,0 @@
pub mod aead;
pub mod root_key;
use miette::Diagnostic;
use thiserror::Error;
#[derive(Error, Debug, Diagnostic)]
pub enum CryptoError {
#[error("AEAD encryption failed: {0}")]
#[diagnostic(code(arbiter_server::crypto::aead_encryption))]
AeadEncryption(String),
#[error("AEAD decryption failed: {0}")]
#[diagnostic(code(arbiter_server::crypto::aead_decryption))]
AeadDecryption(String),
#[error("Key derivation failed: {0}")]
#[diagnostic(code(arbiter_server::crypto::key_derivation))]
KeyDerivation(String),
#[error("Invalid nonce: {0}")]
#[diagnostic(code(arbiter_server::crypto::invalid_nonce))]
InvalidNonce(String),
#[error("Invalid key format: {0}")]
#[diagnostic(code(arbiter_server::crypto::invalid_key))]
InvalidKey(String),
}

View File

@@ -1,240 +0,0 @@
use argon2::{
password_hash::{rand_core::OsRng, PasswordHasher, SaltString},
Argon2, PasswordHash, PasswordVerifier,
};
use crate::db::models::AeadEncrypted;
use super::{aead, CryptoError};
/// Encrypt root key with user password
///
/// Uses Argon2id for password derivation and ChaCha20Poly1305 for encryption
pub fn encrypt_root_key(
root_key: &[u8; 32],
password: &str,
nonce_counter: i32,
) -> Result<(AeadEncrypted, String), CryptoError> {
// Derive key from password using Argon2
let (derived_key, salt) = derive_key_from_password(password)?;
// Generate nonce from counter
let nonce = aead::nonce_from_counter(nonce_counter);
// Encrypt root key
let ciphertext_with_tag = aead::encrypt(root_key, &derived_key, &nonce)?;
// Extract tag (last 16 bytes)
let tag_start = ciphertext_with_tag
.len()
.checked_sub(16)
.ok_or_else(|| CryptoError::AeadEncryption("Ciphertext too short".into()))?;
let ciphertext = ciphertext_with_tag[..tag_start].to_vec();
let tag = ciphertext_with_tag[tag_start..].to_vec();
let aead_encrypted = AeadEncrypted {
id: 1, // Will be set by database
current_nonce: nonce_counter,
ciphertext,
tag,
schema_version: 1, // Current version
argon2_salt: Some(salt.clone()),
};
Ok((aead_encrypted, salt))
}
/// Decrypt root key with user password
///
/// Verifies password hash and decrypts using ChaCha20Poly1305
pub fn decrypt_root_key(
encrypted: &AeadEncrypted,
password: &str,
salt: &str,
) -> Result<[u8; 32], CryptoError> {
// Derive key from password using stored salt
let derived_key = derive_key_with_salt(password, salt)?;
// Generate nonce from counter
let nonce = aead::nonce_from_counter(encrypted.current_nonce);
// Reconstruct ciphertext with tag
let mut ciphertext_with_tag = encrypted.ciphertext.clone();
ciphertext_with_tag.extend_from_slice(&encrypted.tag);
// Decrypt
let plaintext = aead::decrypt(&ciphertext_with_tag, &derived_key, &nonce)?;
// Verify length
if plaintext.len() != 32 {
return Err(CryptoError::InvalidKey(format!(
"Expected 32 bytes, got {}",
plaintext.len()
)));
}
// Convert to fixed-size array
let mut root_key = [0u8; 32];
root_key.copy_from_slice(&plaintext);
Ok(root_key)
}
/// Derive 32-byte key from password using Argon2id
///
/// Generates new random salt and returns (derived_key, salt_string)
fn derive_key_from_password(password: &str) -> Result<([u8; 32], String), CryptoError> {
let salt = SaltString::generate(&mut OsRng);
let argon2 = Argon2::default();
let password_hash = argon2
.hash_password(password.as_bytes(), &salt)
.map_err(|e| CryptoError::KeyDerivation(e.to_string()))?;
// Extract hash output (32 bytes)
let hash_output = password_hash
.hash
.ok_or_else(|| CryptoError::KeyDerivation("No hash output".into()))?;
let hash_bytes = hash_output.as_bytes();
if hash_bytes.len() != 32 {
return Err(CryptoError::KeyDerivation(format!(
"Expected 32 bytes, got {}",
hash_bytes.len()
)));
}
let mut key = [0u8; 32];
key.copy_from_slice(hash_bytes);
Ok((key, salt.to_string()))
}
/// Derive 32-byte key from password using existing salt
fn derive_key_with_salt(password: &str, salt_str: &str) -> Result<[u8; 32], CryptoError> {
let argon2 = Argon2::default();
// Parse salt
let salt =
SaltString::from_b64(salt_str).map_err(|e| CryptoError::InvalidKey(e.to_string()))?;
let password_hash = argon2
.hash_password(password.as_bytes(), &salt)
.map_err(|e| CryptoError::KeyDerivation(e.to_string()))?;
// Extract hash output
let hash_output = password_hash
.hash
.ok_or_else(|| CryptoError::KeyDerivation("No hash output".into()))?;
let hash_bytes = hash_output.as_bytes();
if hash_bytes.len() != 32 {
return Err(CryptoError::KeyDerivation(format!(
"Expected 32 bytes, got {}",
hash_bytes.len()
)));
}
let mut key = [0u8; 32];
key.copy_from_slice(hash_bytes);
Ok(key)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_root_key_encrypt_decrypt_round_trip() {
let root_key = [42u8; 32];
let password = "super_secret_password_123";
let nonce_counter = 1;
// Encrypt
let (encrypted, salt) =
encrypt_root_key(&root_key, password, nonce_counter).expect("Encryption failed");
// Verify structure
assert_eq!(encrypted.current_nonce, nonce_counter);
assert_eq!(encrypted.schema_version, 1);
assert_eq!(encrypted.tag.len(), 16); // AEAD tag size
// Decrypt
let decrypted =
decrypt_root_key(&encrypted, password, &salt).expect("Decryption failed");
// Verify round-trip
assert_eq!(decrypted, root_key);
}
#[test]
fn test_decrypt_with_wrong_password() {
let root_key = [99u8; 32];
let correct_password = "correct_password";
let wrong_password = "wrong_password";
let nonce_counter = 1;
// Encrypt with correct password
let (encrypted, salt) =
encrypt_root_key(&root_key, correct_password, nonce_counter).expect("Encryption failed");
// Attempt decrypt with wrong password
let result = decrypt_root_key(&encrypted, wrong_password, &salt);
// Should fail due to authentication tag mismatch
assert!(result.is_err());
}
#[test]
fn test_password_derivation_different_salts() {
let password = "same_password";
// Derive key twice - should produce different salts
let (key1, salt1) = derive_key_from_password(password).expect("Derivation 1 failed");
let (key2, salt2) = derive_key_from_password(password).expect("Derivation 2 failed");
// Salts should be different (randomly generated)
assert_ne!(salt1, salt2);
// Keys should be different (due to different salts)
assert_ne!(key1, key2);
}
#[test]
fn test_password_derivation_with_same_salt() {
let password = "test_password";
// Generate key and salt
let (key1, salt) = derive_key_from_password(password).expect("Derivation failed");
// Derive key again with same salt
let key2 = derive_key_with_salt(password, &salt).expect("Re-derivation failed");
// Keys should be identical
assert_eq!(key1, key2);
}
#[test]
fn test_different_nonce_produces_different_ciphertext() {
let root_key = [77u8; 32];
let password = "password123";
let (encrypted1, salt1) = encrypt_root_key(&root_key, password, 1).expect("Encryption 1 failed");
let (encrypted2, salt2) = encrypt_root_key(&root_key, password, 2).expect("Encryption 2 failed");
// Different nonces should produce different ciphertexts
assert_ne!(encrypted1.ciphertext, encrypted2.ciphertext);
// But both should decrypt correctly
let decrypted1 = decrypt_root_key(&encrypted1, password, &salt1).expect("Decryption 1 failed");
let decrypted2 = decrypt_root_key(&encrypted2, password, &salt2).expect("Decryption 2 failed");
assert_eq!(decrypted1, root_key);
assert_eq!(decrypted2, root_key);
}
}
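encrypt_root_key above splits ChaCha20Poly1305's combined output into separate `ciphertext` and 16-byte `tag` columns, and decrypt_root_key re-joins them before decryption. That framing step can be shown on its own, without the AEAD itself (`split_tag`/`join_tag` are illustrative helper names, not part of the module):

```rust
const TAG_LEN: usize = 16;

// Split combined AEAD output (ciphertext || tag) for column-wise storage,
// rejecting buffers too short to contain a tag.
fn split_tag(ciphertext_with_tag: &[u8]) -> Result<(Vec<u8>, Vec<u8>), String> {
    let tag_start = ciphertext_with_tag
        .len()
        .checked_sub(TAG_LEN)
        .ok_or("ciphertext too short")?;
    Ok((
        ciphertext_with_tag[..tag_start].to_vec(),
        ciphertext_with_tag[tag_start..].to_vec(),
    ))
}

// Rebuild the combined buffer before handing it to the AEAD decrypt call.
fn join_tag(ciphertext: &[u8], tag: &[u8]) -> Vec<u8> {
    let mut out = ciphertext.to_vec();
    out.extend_from_slice(tag);
    out
}

fn main() {
    // A 32-byte "root key" ciphertext followed by its 16-byte tag.
    let combined: Vec<u8> = (0u8..48).collect();
    let (ct, tag) = split_tag(&combined).unwrap();
    assert_eq!(ct.len(), 32);
    assert_eq!(tag.len(), 16);
    assert_eq!(join_tag(&ct, &tag), combined);
    assert!(split_tag(&[0u8; 8]).is_err());
}
```

Using `checked_sub` mirrors the guard in encrypt_root_key: a malformed or truncated row fails loudly instead of panicking on a slice out of range.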

View File

@@ -1,12 +1,7 @@
use std::sync::Arc;
use diesel::{
Connection as _, SqliteConnection,
connection::{SimpleConnection as _, TransactionManager},
};
use diesel::{Connection as _, SqliteConnection, connection::SimpleConnection as _};
use diesel_async::{
AsyncConnection, SimpleAsyncConnection,
pooled_connection::{AsyncDieselConnectionManager, ManagerConfig, RecyclingMethod},
pooled_connection::{AsyncDieselConnectionManager, ManagerConfig},
sync_connection_wrapper::SyncConnectionWrapper,
};
use diesel_migrations::{EmbeddedMigrations, MigrationHarness, embed_migrations};
@@ -29,23 +24,23 @@ const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");
#[derive(Error, Diagnostic, Debug)]
pub enum DatabaseSetupError {
#[error("Failed to determine home directory")]
#[diagnostic(code(arbiter::db::home_dir_error))]
#[diagnostic(code(arbiter::db::home_dir))]
HomeDir(std::io::Error),
#[error(transparent)]
#[diagnostic(code(arbiter::db::connection_error))]
#[diagnostic(code(arbiter::db::connection))]
Connection(diesel::ConnectionError),
#[error(transparent)]
#[diagnostic(code(arbiter::db::concurrency_error))]
#[diagnostic(code(arbiter::db::concurrency))]
ConcurrencySetup(diesel::result::Error),
#[error(transparent)]
#[diagnostic(code(arbiter::db::migration_error))]
#[diagnostic(code(arbiter::db::migration))]
Migration(Box<dyn std::error::Error + Send + Sync>),
#[error(transparent)]
#[diagnostic(code(arbiter::db::pool_error))]
#[diagnostic(code(arbiter::db::pool))]
Pool(#[from] PoolInitError),
}
@@ -96,12 +91,12 @@ fn initialize_database(url: &str) -> Result<(), DatabaseSetupError> {
#[tracing::instrument(level = "info")]
pub async fn create_pool(url: Option<&str>) -> Result<DatabasePool, DatabaseSetupError> {
let database_url = url.map(String::from).unwrap_or(format!(
"{}?mode=rwc",
(database_path()?
let database_url = url.map(String::from).unwrap_or(
database_path()?
.to_str()
.expect("database path is not valid UTF-8"))
));
.expect("database path is not valid UTF-8")
.to_string(),
);
initialize_database(&database_url)?;
@@ -134,7 +129,6 @@ pub async fn create_pool(url: Option<&str>) -> Result<DatabasePool, DatabaseSetu
Ok(pool)
}
#[cfg(test)]
pub async fn create_test_pool() -> DatabasePool {
use rand::distr::{Alphanumeric, SampleString as _};

View File

@@ -1,33 +1,74 @@
#![allow(unused)]
#![allow(clippy::all)]
use crate::db::schema::{self, aead_encrypted, arbiter_settings};
use crate::db::schema::{self, aead_encrypted, arbiter_settings, root_key_history, tls_history};
use diesel::{prelude::*, sqlite::Sqlite};
use restructed::Models;
pub mod types {
use chrono::{DateTime, Utc};
pub struct SqliteTimestamp(DateTime<Utc>);
}
#[derive(Queryable, Selectable, Debug, Insertable)]
#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[view(
NewAeadEncrypted,
derive(Insertable),
omit(id),
attributes_with = "deriveless"
)]
#[diesel(table_name = aead_encrypted, check_for_backend(Sqlite))]
pub struct AeadEncrypted {
pub id: i32,
pub current_nonce: i32,
pub ciphertext: Vec<u8>,
pub tag: Vec<u8>,
pub current_nonce: Vec<u8>,
pub schema_version: i32,
pub argon2_salt: Option<String>,
pub associated_root_key_id: i32, // references root_key_history.id
pub created_at: i32,
}
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = arbiter_settings, check_for_backend(Sqlite))]
pub struct ArbiterSetting {
#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = root_key_history, check_for_backend(Sqlite))]
#[view(
NewRootKeyHistory,
derive(Insertable),
omit(id),
attributes_with = "deriveless"
)]
pub struct RootKeyHistory {
pub id: i32,
pub root_key_id: Option<i32>, // references aead_encrypted.id
pub cert_key: Vec<u8>,
pub cert: Vec<u8>,
pub current_cert_id: Option<i32>, // references tls_certificates.id
pub ciphertext: Vec<u8>,
pub tag: Vec<u8>,
pub root_key_encryption_nonce: Vec<u8>,
pub data_encryption_nonce: Vec<u8>,
pub schema_version: i32,
pub salt: Vec<u8>,
}
#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = tls_history, check_for_backend(Sqlite))]
#[view(
NewTlsHistory,
derive(Insertable),
omit(id, created_at),
attributes_with = "deriveless"
)]
pub struct TlsHistory {
pub id: i32,
pub cert: String,
pub cert_key: String, // PEM Encoded private key
pub ca_cert: String, // PEM Encoded certificate for cert signing
pub ca_key: String, // PEM Encoded public key for cert signing
pub created_at: i32,
}
#[derive(Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = arbiter_settings, check_for_backend(Sqlite))]
pub struct ArbiterSettings {
pub id: i32,
pub root_key_id: Option<i32>, // references root_key_history.id
pub tls_id: Option<i32>, // references tls_history.id
}
#[derive(Queryable, Debug)]
@@ -49,70 +90,3 @@ pub struct UseragentClient {
pub created_at: i32,
pub updated_at: i32,
}
// TLS Certificate Rotation Models
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = schema::tls_certificates, check_for_backend(Sqlite))]
pub struct TlsCertificate {
pub id: i32,
pub cert: Vec<u8>,
pub cert_key: Vec<u8>,
pub not_before: i32,
pub not_after: i32,
pub created_at: i32,
pub is_active: bool,
}
#[derive(Insertable)]
#[diesel(table_name = schema::tls_certificates)]
pub struct NewTlsCertificate {
pub cert: Vec<u8>,
pub cert_key: Vec<u8>,
pub not_before: i32,
pub not_after: i32,
pub is_active: bool,
}
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = schema::tls_rotation_state, check_for_backend(Sqlite))]
pub struct TlsRotationState {
pub id: i32,
pub state: String,
pub new_cert_id: Option<i32>,
pub initiated_at: Option<i32>,
pub timeout_at: Option<i32>,
}
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = schema::rotation_client_acks, check_for_backend(Sqlite))]
pub struct RotationClientAck {
pub rotation_id: i32,
pub client_key: String,
pub ack_received_at: i32,
}
#[derive(Insertable)]
#[diesel(table_name = schema::rotation_client_acks)]
pub struct NewRotationClientAck {
pub rotation_id: i32,
pub client_key: String,
}
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = schema::tls_rotation_history, check_for_backend(Sqlite))]
pub struct TlsRotationHistory {
pub id: i32,
pub cert_id: i32,
pub event_type: String,
pub timestamp: i32,
pub details: Option<String>,
}
#[derive(Insertable)]
#[diesel(table_name = schema::tls_rotation_history)]
pub struct NewTlsRotationHistory {
pub cert_id: i32,
pub event_type: String,
pub details: Option<String>,
}
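The `restructed` `#[view(NewTlsHistory, omit(id, created_at))]` attributes above generate insert-side companion structs. A hand-written sketch of the pattern they are assumed to produce (names and the `from_new` helper are illustrative, not part of the crate's API):

```rust
// Full row as read back from the database.
#[derive(Debug)]
struct TlsHistory {
    id: i32,
    cert: String,
    cert_key: String,
    created_at: i32,
}

// Insertable view: same fields minus the database-assigned columns,
// which is what `omit(id, created_at)` expresses declaratively.
#[derive(Debug)]
struct NewTlsHistory {
    cert: String,
    cert_key: String,
}

impl TlsHistory {
    // In real use the database assigns `id` and `created_at`; faked here.
    fn from_new(new: NewTlsHistory, id: i32, created_at: i32) -> Self {
        TlsHistory {
            id,
            cert: new.cert,
            cert_key: new.cert_key,
            created_at,
        }
    }
}

fn main() {
    let new = NewTlsHistory {
        cert: "---BEGIN CERT---".to_string(),
        cert_key: "---BEGIN KEY---".to_string(),
    };
    let row = TlsHistory::from_new(new, 1, 1_700_000_000);
    assert_eq!(row.id, 1);
}
```

Keeping the insert struct derived from the row struct avoids the two drifting apart when columns are added.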

View File

@@ -3,11 +3,12 @@
diesel::table! {
aead_encrypted (id) {
id -> Integer,
current_nonce -> Integer,
current_nonce -> Binary,
ciphertext -> Binary,
tag -> Binary,
schema_version -> Integer,
argon2_salt -> Nullable<Text>,
associated_root_key_id -> Integer,
created_at -> Integer,
}
}
@@ -15,9 +16,7 @@ diesel::table! {
arbiter_settings (id) {
id -> Integer,
root_key_id -> Nullable<Integer>,
cert_key -> Binary,
cert -> Binary,
current_cert_id -> Nullable<Integer>,
tls_id -> Nullable<Integer>,
}
}
@@ -31,6 +30,29 @@ diesel::table! {
}
}
diesel::table! {
root_key_history (id) {
id -> Integer,
root_key_encryption_nonce -> Binary,
data_encryption_nonce -> Binary,
ciphertext -> Binary,
tag -> Binary,
schema_version -> Integer,
salt -> Binary,
}
}
diesel::table! {
tls_history (id) {
id -> Integer,
cert -> Text,
cert_key -> Text,
ca_cert -> Text,
ca_key -> Text,
created_at -> Integer,
}
}
diesel::table! {
useragent_client (id) {
id -> Integer,
@@ -41,59 +63,15 @@ diesel::table! {
}
}
diesel::table! {
tls_certificates (id) {
id -> Integer,
cert -> Binary,
cert_key -> Binary,
not_before -> Integer,
not_after -> Integer,
created_at -> Integer,
is_active -> Bool,
}
}
diesel::table! {
tls_rotation_state (id) {
id -> Integer,
state -> Text,
new_cert_id -> Nullable<Integer>,
initiated_at -> Nullable<Integer>,
timeout_at -> Nullable<Integer>,
}
}
diesel::table! {
rotation_client_acks (rotation_id, client_key) {
rotation_id -> Integer,
client_key -> Text,
ack_received_at -> Integer,
}
}
diesel::table! {
tls_rotation_history (id) {
id -> Integer,
cert_id -> Integer,
event_type -> Text,
timestamp -> Integer,
details -> Nullable<Text>,
}
}
diesel::joinable!(arbiter_settings -> aead_encrypted (root_key_id));
diesel::joinable!(arbiter_settings -> tls_certificates (current_cert_id));
diesel::joinable!(tls_rotation_state -> tls_certificates (new_cert_id));
diesel::joinable!(rotation_client_acks -> tls_certificates (rotation_id));
diesel::joinable!(tls_rotation_history -> tls_certificates (cert_id));
diesel::joinable!(aead_encrypted -> root_key_history (associated_root_key_id));
diesel::joinable!(arbiter_settings -> root_key_history (root_key_id));
diesel::joinable!(arbiter_settings -> tls_history (tls_id));
diesel::allow_tables_to_appear_in_same_query!(
aead_encrypted,
arbiter_settings,
program_client,
root_key_history,
tls_history,
useragent_client,
tls_certificates,
tls_rotation_state,
rotation_client_acks,
tls_rotation_history,
);

View File

@@ -1,24 +0,0 @@
use tonic::Status;
use tracing::error;
pub trait GrpcStatusExt<T> {
fn to_status(self) -> Result<T, Status>;
}
impl<T> GrpcStatusExt<T> for Result<T, diesel::result::Error> {
fn to_status(self) -> Result<T, Status> {
self.map_err(|e| {
error!(error = ?e, "Database error");
Status::internal("Database error")
})
}
}
impl<T> GrpcStatusExt<T> for Result<T, crate::db::PoolError> {
fn to_status(self) -> Result<T, Status> {
self.map_err(|e| {
error!(error = ?e, "Database pool error");
Status::internal("Database pool error")
})
}
}
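The deleted `GrpcStatusExt` above is an instance of the extension-trait pattern for error mapping: log the detailed cause server-side, return an opaque transport error to the client. A self-contained sketch of the same shape, with a stand-in `Status` type instead of `tonic::Status`:

```rust
use std::fmt;

// Stand-in for tonic::Status: an opaque error that hides internals from clients.
#[derive(Debug, PartialEq)]
struct Status(String);

impl Status {
    fn internal(msg: &str) -> Self {
        Status(msg.to_string())
    }
}

impl fmt::Display for Status {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

// One impl per domain error type converts it to the transport error,
// logging the real cause before discarding it.
trait StatusExt<T> {
    fn to_status(self) -> Result<T, Status>;
}

impl<T> StatusExt<T> for Result<T, std::num::ParseIntError> {
    fn to_status(self) -> Result<T, Status> {
        self.map_err(|e| {
            eprintln!("parse error: {e:?}"); // stand-in for tracing::error!
            Status::internal("Internal error")
        })
    }
}

fn main() {
    let ok = "42".parse::<i32>().to_status();
    assert_eq!(ok.unwrap(), 42);
    let err = "nope".parse::<i32>().to_status().unwrap_err();
    assert_eq!(err, Status(String::from("Internal error")));
}
```

Because the trait is implemented per error type, call sites stay a uniform `?`-friendly `.to_status()` regardless of which layer failed.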

View File

@@ -1,27 +1,26 @@
#![allow(unused)]
use std::sync::Arc;
#![forbid(unsafe_code)]
use arbiter_proto::{
proto::{ClientRequest, ClientResponse, UserAgentRequest, UserAgentResponse},
transport::BiStream,
transport::{BiStream, GrpcTransportActor, wire},
};
use async_trait::async_trait;
use kameo::actor::PreparedActor;
use tokio_stream::wrappers::ReceiverStream;
use tokio::sync::mpsc;
use tonic::{Request, Response, Status};
use crate::{
actors::{client::handle_client, user_agent::handle_user_agent},
actors::{
client::handle_client,
user_agent::UserAgentActor,
},
context::ServerContext,
};
pub mod actors;
mod context;
mod crypto;
mod db;
mod errors;
pub mod context;
pub mod db;
const DEFAULT_CHANNEL_SIZE: usize = 1000;
@@ -29,6 +28,12 @@ pub struct Server {
context: ServerContext,
}
impl Server {
pub fn new(context: ServerContext) -> Self {
Self { context }
}
}
#[async_trait]
impl arbiter_proto::proto::arbiter_service_server::ArbiterService for Server {
type UserAgentStream = ReceiverStream<Result<UserAgentResponse, Status>>;
@@ -57,7 +62,22 @@ impl arbiter_proto::proto::arbiter_service_server::ArbiterService for Server {
) -> Result<Response<Self::UserAgentStream>, Status> {
let req_stream = request.into_inner();
let (tx, rx) = mpsc::channel(DEFAULT_CHANNEL_SIZE);
tokio::spawn(handle_user_agent(self.context.clone(), req_stream, tx));
let context = self.context.clone();
wire(
|prepared: PreparedActor<UserAgentActor>, recipient| {
prepared.spawn(UserAgentActor::new(context, recipient));
},
|prepared: PreparedActor<GrpcTransportActor<_, _, _>>, business_recipient| {
prepared.spawn(GrpcTransportActor::new(
tx,
req_stream,
business_recipient,
));
},
)
.await;
Ok(Response::new(ReceiverStream::new(rx)))
}
}
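The `wire` helper above solves a circular-handle problem: the transport actor needs the business actor's recipient and vice versa, so both must be prepared before either starts. A minimal std-only sketch of that two-phase wiring, using plain channels and threads in place of kameo actors (an illustration of the shape, not the crate's API):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Phase one: create both endpoints before starting either side,
    // so each side can be handed the other's sender up front.
    let (to_business, business_rx) = mpsc::channel::<String>();
    let (to_transport, transport_rx) = mpsc::channel::<String>();

    // Phase two: start the "business" side holding the transport handle.
    let business = thread::spawn(move || {
        let req = business_rx.recv().unwrap();
        to_transport.send(format!("handled: {req}")).unwrap();
    });

    // The "transport" side forwards one inbound message and reads the reply.
    to_business.send("ping".to_string()).unwrap();
    let reply = transport_rx.recv().unwrap();
    business.join().unwrap();
    assert_eq!(reply, "handled: ping");
}
```

Creating the handles first and spawning second is what `PreparedActor` appears to encode at the type level: neither actor runs until both recipients exist.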

View File

@@ -0,0 +1,53 @@
use std::net::SocketAddr;
use arbiter_proto::{proto::arbiter_service_server::ArbiterServiceServer, url::ArbiterUrl};
use arbiter_server::{Server, actors::bootstrap::GetToken, context::ServerContext, db};
use miette::miette;
use tonic::transport::{Identity, ServerTlsConfig};
use tracing::info;
const PORT: u16 = 50051;
#[tokio::main]
async fn main() -> miette::Result<()> {
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::try_from_default_env()
.unwrap_or_else(|_| tracing_subscriber::EnvFilter::new("info")),
)
.init();
info!("Starting arbiter server");
let db = db::create_pool(None).await?;
info!("Database ready");
let context = ServerContext::new(db).await?;
let addr: SocketAddr = format!("127.0.0.1:{PORT}").parse().expect("valid address");
info!(%addr, "Starting gRPC server");
let url = ArbiterUrl {
host: addr.ip().to_string(),
port: addr.port(),
ca_cert: context.tls.ca_cert().clone().into_owned(),
bootstrap_token: context.actors.bootstrapper.ask(GetToken).await.unwrap(),
};
info!(%url, "Server URL");
let tls = ServerTlsConfig::new().identity(Identity::from_pem(
context.tls.cert_pem(),
context.tls.key_pem(),
));
tonic::transport::Server::builder()
.tls_config(tls)
.map_err(|err| miette!("Failed to set up TLS: {err}"))?
.add_service(ArbiterServiceServer::new(Server::new(context)))
.serve(addr)
.await
.map_err(|e| miette::miette!("gRPC server error: {e}"))?;
unreachable!("gRPC server should run indefinitely");
}

View File

@@ -0,0 +1,28 @@
use arbiter_server::{
actors::keyholder::KeyHolder,
db::{self, schema},
};
use diesel::QueryDsl;
use diesel_async::RunQueryDsl;
use memsafe::MemSafe;
#[allow(dead_code)]
pub async fn bootstrapped_keyholder(db: &db::DatabasePool) -> KeyHolder {
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
actor
.bootstrap(MemSafe::new(b"test-seal-key".to_vec()).unwrap())
.await
.unwrap();
actor
}
#[allow(dead_code)]
pub async fn root_key_history_id(db: &db::DatabasePool) -> i32 {
let mut conn = db.get().await.unwrap();
let id = schema::arbiter_settings::table
.select(schema::arbiter_settings::root_key_id)
.first::<Option<i32>>(&mut conn)
.await
.unwrap();
id.expect("root_key_id should be set after bootstrap")
}

View File

@@ -0,0 +1,8 @@
mod common;
#[path = "keyholder/concurrency.rs"]
mod concurrency;
#[path = "keyholder/lifecycle.rs"]
mod lifecycle;
#[path = "keyholder/storage.rs"]
mod storage;

View File

@@ -0,0 +1,173 @@
use std::collections::{HashMap, HashSet};
use arbiter_server::{
actors::keyholder::{CreateNew, Error, KeyHolder},
db::{self, models, schema},
};
use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::sql_query};
use diesel_async::RunQueryDsl;
use kameo::actor::{ActorRef, Spawn as _};
use memsafe::MemSafe;
use tokio::task::JoinSet;
use crate::common;
async fn write_concurrently(
actor: ActorRef<KeyHolder>,
prefix: &'static str,
count: usize,
) -> Vec<(i32, Vec<u8>)> {
let mut set = JoinSet::new();
for i in 0..count {
let actor = actor.clone();
set.spawn(async move {
let plaintext = format!("{prefix}-{i}").into_bytes();
let id = actor
.ask(CreateNew {
plaintext: MemSafe::new(plaintext.clone()).unwrap(),
})
.await
.unwrap();
(id, plaintext)
});
}
let mut out = Vec::with_capacity(count);
while let Some(res) = set.join_next().await {
out.push(res.unwrap());
}
out
}
#[tokio::test]
#[test_log::test]
async fn concurrent_create_new_no_duplicate_nonces() {
let db = db::create_test_pool().await;
let actor = KeyHolder::spawn(common::bootstrapped_keyholder(&db).await);
let writes = write_concurrently(actor, "nonce-unique", 32).await;
assert_eq!(writes.len(), 32);
let mut conn = db.get().await.unwrap();
let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
.select(models::AeadEncrypted::as_select())
.load(&mut conn)
.await
.unwrap();
assert_eq!(rows.len(), 32);
let nonces: Vec<&Vec<u8>> = rows.iter().map(|r| &r.current_nonce).collect();
let unique: HashSet<&Vec<u8>> = nonces.iter().copied().collect();
assert_eq!(nonces.len(), unique.len(), "all nonces must be unique");
}
#[tokio::test]
#[test_log::test]
async fn concurrent_create_new_root_nonce_never_moves_backward() {
let db = db::create_test_pool().await;
let actor = KeyHolder::spawn(common::bootstrapped_keyholder(&db).await);
write_concurrently(actor, "root-max", 24).await;
let mut conn = db.get().await.unwrap();
let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
.select(models::AeadEncrypted::as_select())
.load(&mut conn)
.await
.unwrap();
let max_nonce = rows
.iter()
.map(|r| r.current_nonce.clone())
.max()
.expect("at least one row");
let root_row: models::RootKeyHistory = schema::root_key_history::table
.select(models::RootKeyHistory::as_select())
.first(&mut conn)
.await
.unwrap();
assert_eq!(root_row.data_encryption_nonce, max_nonce);
}
#[tokio::test]
#[test_log::test]
async fn insert_failure_does_not_create_partial_row() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let root_key_history_id = common::root_key_history_id(&db).await;
let mut conn = db.get().await.unwrap();
let before_count: i64 = schema::aead_encrypted::table
.count()
.get_result(&mut conn)
.await
.unwrap();
let before_root_nonce: Vec<u8> = schema::root_key_history::table
.filter(schema::root_key_history::id.eq(root_key_history_id))
.select(schema::root_key_history::data_encryption_nonce)
.first(&mut conn)
.await
.unwrap();
sql_query(
"CREATE TRIGGER fail_aead_insert BEFORE INSERT ON aead_encrypted BEGIN SELECT RAISE(ABORT, 'forced test failure'); END;",
)
.execute(&mut conn)
.await
.unwrap();
drop(conn);
let err = actor
.create_new(MemSafe::new(b"should fail".to_vec()).unwrap())
.await
.unwrap_err();
assert!(matches!(err, Error::DatabaseTransaction(_)));
let mut conn = db.get().await.unwrap();
sql_query("DROP TRIGGER fail_aead_insert;")
.execute(&mut conn)
.await
.unwrap();
let after_count: i64 = schema::aead_encrypted::table
.count()
.get_result(&mut conn)
.await
.unwrap();
assert_eq!(
before_count, after_count,
"failed insert must not create row"
);
let after_root_nonce: Vec<u8> = schema::root_key_history::table
.filter(schema::root_key_history::id.eq(root_key_history_id))
.select(schema::root_key_history::data_encryption_nonce)
.first(&mut conn)
.await
.unwrap();
assert!(
after_root_nonce > before_root_nonce,
"current behavior allows nonce gap on failed insert"
);
}
#[tokio::test]
#[test_log::test]
async fn decrypt_roundtrip_after_high_concurrency() {
let db = db::create_test_pool().await;
let actor = KeyHolder::spawn(common::bootstrapped_keyholder(&db).await);
let writes = write_concurrently(actor, "roundtrip", 40).await;
let expected: HashMap<i32, Vec<u8>> = writes.into_iter().collect();
let mut decryptor = KeyHolder::new(db.clone()).await.unwrap();
decryptor
.try_unseal(MemSafe::new(b"test-seal-key".to_vec()).unwrap())
.await
.unwrap();
for (id, plaintext) in expected {
let mut decrypted = decryptor.decrypt(id).await.unwrap();
assert_eq!(*decrypted.read().unwrap(), plaintext);
}
}
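The concurrency tests above all reduce to one invariant: parallel writers must never observe the same counter value. A std-only sketch of that invariant, with an atomic `fetch_add` standing in for the serialized nonce counter:

```rust
use std::collections::HashSet;
use std::sync::Arc;
use std::sync::atomic::{AtomicI32, Ordering};
use std::thread;

fn main() {
    let counter = Arc::new(AtomicI32::new(0));

    // 32 concurrent "writers", each taking the next counter value.
    let handles: Vec<_> = (0..32)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || counter.fetch_add(1, Ordering::SeqCst))
        })
        .collect();

    let ids: Vec<i32> = handles.into_iter().map(|h| h.join().unwrap()).collect();

    // Same check as the nonce tests: every value handed out is distinct.
    let unique: HashSet<i32> = ids.iter().copied().collect();
    assert_eq!(ids.len(), unique.len(), "all ids must be unique");
}
```

`fetch_add` is atomic read-modify-write, so no interleaving can hand the same value to two writers; the actor mailbox serializing `CreateNew` plays the analogous role in the tests.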

View File

@@ -0,0 +1,131 @@
use arbiter_server::{
actors::keyholder::{Error, KeyHolder},
db::{self, models, schema},
};
use diesel::{QueryDsl, SelectableHelper};
use diesel_async::RunQueryDsl;
use memsafe::MemSafe;
use crate::common;
#[tokio::test]
#[test_log::test]
async fn test_bootstrap() {
let db = db::create_test_pool().await;
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
actor.bootstrap(seal_key).await.unwrap();
let mut conn = db.get().await.unwrap();
let row: models::RootKeyHistory = schema::root_key_history::table
.select(models::RootKeyHistory::as_select())
.first(&mut conn)
.await
.unwrap();
assert_eq!(row.schema_version, 1);
assert_eq!(
row.tag,
arbiter_server::actors::keyholder::encryption::v1::ROOT_KEY_TAG
);
assert!(!row.ciphertext.is_empty());
assert!(!row.salt.is_empty());
assert_eq!(
row.data_encryption_nonce,
arbiter_server::actors::keyholder::encryption::v1::Nonce::default().to_vec()
);
}
#[tokio::test]
#[test_log::test]
async fn test_bootstrap_rejects_double() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let seal_key2 = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
let err = actor.bootstrap(seal_key2).await.unwrap_err();
assert!(matches!(err, Error::AlreadyBootstrapped));
}
#[tokio::test]
#[test_log::test]
async fn test_create_new_before_bootstrap_fails() {
let db = db::create_test_pool().await;
let mut actor = KeyHolder::new(db).await.unwrap();
let err = actor
.create_new(MemSafe::new(b"data".to_vec()).unwrap())
.await
.unwrap_err();
assert!(matches!(err, Error::NotBootstrapped));
}
#[tokio::test]
#[test_log::test]
async fn test_decrypt_before_bootstrap_fails() {
let db = db::create_test_pool().await;
let mut actor = KeyHolder::new(db).await.unwrap();
let err = actor.decrypt(1).await.unwrap_err();
assert!(matches!(err, Error::NotBootstrapped));
}
#[tokio::test]
#[test_log::test]
async fn test_new_restores_sealed_state() {
let db = db::create_test_pool().await;
let actor = common::bootstrapped_keyholder(&db).await;
drop(actor);
let mut actor2 = KeyHolder::new(db).await.unwrap();
let err = actor2.decrypt(1).await.unwrap_err();
assert!(matches!(err, Error::NotBootstrapped));
}
#[tokio::test]
#[test_log::test]
async fn test_unseal_correct_password() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let plaintext = b"survive a restart";
let aead_id = actor
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
.await
.unwrap();
drop(actor);
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
actor.try_unseal(seal_key).await.unwrap();
let mut decrypted = actor.decrypt(aead_id).await.unwrap();
assert_eq!(*decrypted.read().unwrap(), plaintext);
}
#[tokio::test]
#[test_log::test]
async fn test_unseal_wrong_then_correct_password() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let plaintext = b"important data";
let aead_id = actor
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
.await
.unwrap();
drop(actor);
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
let bad_key = MemSafe::new(b"wrong-password".to_vec()).unwrap();
let err = actor.try_unseal(bad_key).await.unwrap_err();
assert!(matches!(err, Error::InvalidKey));
let good_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
actor.try_unseal(good_key).await.unwrap();
let mut decrypted = actor.decrypt(aead_id).await.unwrap();
assert_eq!(*decrypted.read().unwrap(), plaintext);
}

View File

@@ -0,0 +1,161 @@
use std::collections::HashSet;
use arbiter_server::{
actors::keyholder::{Error, encryption::v1},
db::{self, models, schema},
};
use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::update};
use diesel_async::RunQueryDsl;
use memsafe::MemSafe;
use crate::common;
#[tokio::test]
#[test_log::test]
async fn test_create_decrypt_roundtrip() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let plaintext = b"hello arbiter";
let aead_id = actor
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
.await
.unwrap();
let mut decrypted = actor.decrypt(aead_id).await.unwrap();
assert_eq!(*decrypted.read().unwrap(), plaintext);
}
#[tokio::test]
#[test_log::test]
async fn test_decrypt_nonexistent_returns_not_found() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let err = actor.decrypt(9999).await.unwrap_err();
assert!(matches!(err, Error::NotFound));
}
#[tokio::test]
#[test_log::test]
async fn test_ciphertext_differs_across_entries() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let plaintext = b"same content";
let id1 = actor
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
.await
.unwrap();
let id2 = actor
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
.await
.unwrap();
let mut conn = db.get().await.unwrap();
let row1: models::AeadEncrypted = schema::aead_encrypted::table
.filter(schema::aead_encrypted::id.eq(id1))
.select(models::AeadEncrypted::as_select())
.first(&mut conn)
.await
.unwrap();
let row2: models::AeadEncrypted = schema::aead_encrypted::table
.filter(schema::aead_encrypted::id.eq(id2))
.select(models::AeadEncrypted::as_select())
.first(&mut conn)
.await
.unwrap();
assert_ne!(row1.ciphertext, row2.ciphertext);
let mut d1 = actor.decrypt(id1).await.unwrap();
let mut d2 = actor.decrypt(id2).await.unwrap();
assert_eq!(*d1.read().unwrap(), plaintext);
assert_eq!(*d2.read().unwrap(), plaintext);
}
#[tokio::test]
#[test_log::test]
async fn test_nonce_never_reused() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let n = 5;
for i in 0..n {
actor
.create_new(MemSafe::new(format!("secret {i}").into_bytes()).unwrap())
.await
.unwrap();
}
let mut conn = db.get().await.unwrap();
let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
.select(models::AeadEncrypted::as_select())
.load(&mut conn)
.await
.unwrap();
assert_eq!(rows.len(), n);
let nonces: Vec<&Vec<u8>> = rows.iter().map(|r| &r.current_nonce).collect();
let unique: HashSet<&Vec<u8>> = nonces.iter().copied().collect();
assert_eq!(nonces.len(), unique.len(), "all nonces must be unique");
for (i, row) in rows.iter().enumerate() {
let mut expected = v1::Nonce::default();
for _ in 0..=i {
expected.increment();
}
assert_eq!(row.current_nonce, expected.to_vec(), "nonce {i} mismatch");
}
let root_row: models::RootKeyHistory = schema::root_key_history::table
.select(models::RootKeyHistory::as_select())
.first(&mut conn)
.await
.unwrap();
let last_nonce = &rows.last().unwrap().current_nonce;
assert_eq!(&root_row.data_encryption_nonce, last_nonce);
}
#[tokio::test]
#[test_log::test]
async fn broken_db_nonce_format_fails_closed() {
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let root_key_history_id = common::root_key_history_id(&db).await;
let mut conn = db.get().await.unwrap();
update(
schema::root_key_history::table
.filter(schema::root_key_history::id.eq(root_key_history_id)),
)
.set(schema::root_key_history::data_encryption_nonce.eq(vec![1, 2, 3]))
.execute(&mut conn)
.await
.unwrap();
drop(conn);
let err = actor
.create_new(MemSafe::new(b"must fail".to_vec()).unwrap())
.await
.unwrap_err();
assert!(matches!(err, Error::BrokenDatabase));
let db = db::create_test_pool().await;
let mut actor = common::bootstrapped_keyholder(&db).await;
let id = actor
.create_new(MemSafe::new(b"decrypt target".to_vec()).unwrap())
.await
.unwrap();
let mut conn = db.get().await.unwrap();
update(schema::aead_encrypted::table.filter(schema::aead_encrypted::id.eq(id)))
.set(schema::aead_encrypted::current_nonce.eq(vec![7, 8]))
.execute(&mut conn)
.await
.unwrap();
drop(conn);
let err = actor.decrypt(id).await.unwrap_err();
assert!(matches!(err, Error::BrokenDatabase));
}
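The expected-nonce loop in `test_nonce_never_reused` assumes the nonce behaves like a fixed-width big-endian counter, so byte-wise comparison orders nonces chronologically. A toy sketch of that counter (an assumption about `v1::Nonce`; the real type may differ in width or layout):

```rust
// Toy fixed-width big-endian counter nonce. Width and carry direction are
// assumptions for illustration, not the confirmed v1::Nonce layout.
#[derive(Default, Clone)]
struct Nonce([u8; 24]);

impl Nonce {
    // Increment with carry, starting from the least-significant (last) byte.
    fn increment(&mut self) {
        for byte in self.0.iter_mut().rev() {
            let (next, overflowed) = byte.overflowing_add(1);
            *byte = next;
            if !overflowed {
                return;
            }
        }
        panic!("nonce space exhausted; key must be rotated");
    }

    fn to_vec(&self) -> Vec<u8> {
        self.0.to_vec()
    }
}

fn main() {
    let mut n = Nonce::default();
    n.increment();
    assert_eq!(n.to_vec()[23], 1);
    n.increment();
    assert_eq!(n.to_vec()[23], 2);

    // Big-endian layout makes later nonces compare greater byte-wise,
    // which is what "never moves backward" relies on.
    let mut later = n.clone();
    later.increment();
    assert!(later.to_vec() > n.to_vec());
}
```

Failing closed on exhaustion (rather than wrapping) is essential: a wrapped counter would silently reuse a nonce under the same key.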

View File

@@ -0,0 +1,31 @@
mod common;
use arbiter_proto::proto::UserAgentResponse;
use arbiter_server::actors::user_agent::UserAgentError;
use kameo::{Actor, actor::Recipient, actor::Spawn, prelude::Message};
/// A no-op actor that discards any messages it receives.
#[derive(Actor)]
struct NullSink;
impl Message<Result<UserAgentResponse, UserAgentError>> for NullSink {
type Reply = ();
async fn handle(
&mut self,
_msg: Result<UserAgentResponse, UserAgentError>,
_ctx: &mut kameo::prelude::Context<Self, Self::Reply>,
) -> Self::Reply {
}
}
/// Creates a `Recipient` that silently discards all messages.
fn null_recipient() -> Recipient<Result<UserAgentResponse, UserAgentError>> {
let actor_ref = NullSink::spawn(NullSink);
actor_ref.recipient()
}
#[path = "user_agent/auth.rs"]
mod auth;
#[path = "user_agent/unseal.rs"]
mod unseal;

View File

@@ -0,0 +1,171 @@
use arbiter_proto::proto::{
UserAgentResponse,
auth::{self, AuthChallengeRequest, AuthOk},
user_agent_response::Payload as UserAgentResponsePayload,
};
use arbiter_server::{
actors::{
GlobalActors,
bootstrap::GetToken,
user_agent::{HandleAuthChallengeRequest, HandleAuthChallengeSolution, UserAgentActor},
},
db::{self, schema},
};
use diesel::{ExpressionMethods as _, QueryDsl, insert_into};
use diesel_async::RunQueryDsl;
use ed25519_dalek::Signer as _;
use kameo::actor::Spawn;
#[tokio::test]
#[test_log::test]
pub async fn test_bootstrap_token_auth() {
let db = db::create_test_pool().await;
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
let token = actors.bootstrapper.ask(GetToken).await.unwrap().unwrap();
let user_agent =
UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
let user_agent_ref = UserAgentActor::spawn(user_agent);
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
let result = user_agent_ref
.ask(HandleAuthChallengeRequest {
req: AuthChallengeRequest {
pubkey: pubkey_bytes,
bootstrap_token: Some(token),
},
})
.await
.expect("Shouldn't fail to send message");
assert_eq!(
result,
UserAgentResponse {
payload: Some(UserAgentResponsePayload::AuthMessage(
arbiter_proto::proto::auth::ServerMessage {
payload: Some(arbiter_proto::proto::auth::server_message::Payload::AuthOk(
AuthOk {},
)),
},
)),
}
);
let mut conn = db.get().await.unwrap();
let stored_pubkey: Vec<u8> = schema::useragent_client::table
.select(schema::useragent_client::public_key)
.first::<Vec<u8>>(&mut conn)
.await
.unwrap();
assert_eq!(stored_pubkey, new_key.verifying_key().to_bytes().to_vec());
}
#[tokio::test]
#[test_log::test]
pub async fn test_bootstrap_invalid_token_auth() {
let db = db::create_test_pool().await;
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
let user_agent =
UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
let user_agent_ref = UserAgentActor::spawn(user_agent);
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
let result = user_agent_ref
.ask(HandleAuthChallengeRequest {
req: AuthChallengeRequest {
pubkey: pubkey_bytes,
bootstrap_token: Some("invalid_token".to_string()),
},
})
.await;
match result {
Err(kameo::error::SendError::HandlerError(err)) => {
assert!(
matches!(err, arbiter_server::actors::user_agent::UserAgentError::InvalidBootstrapToken),
"Expected InvalidBootstrapToken, got {err:?}"
);
}
Err(other) => {
panic!("Expected SendError::HandlerError, got {other:?}");
}
Ok(_) => {
panic!("Expected error due to invalid bootstrap token, but got success");
}
}
}
#[tokio::test]
#[test_log::test]
pub async fn test_challenge_auth() {
let db = db::create_test_pool().await;
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
let user_agent =
UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
let user_agent_ref = UserAgentActor::spawn(user_agent);
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
{
let mut conn = db.get().await.unwrap();
insert_into(schema::useragent_client::table)
.values(schema::useragent_client::public_key.eq(pubkey_bytes.clone()))
.execute(&mut conn)
.await
.unwrap();
}
let result = user_agent_ref
.ask(HandleAuthChallengeRequest {
req: AuthChallengeRequest {
pubkey: pubkey_bytes,
bootstrap_token: None,
},
})
.await
.expect("Shouldn't fail to send message");
let UserAgentResponse {
payload:
Some(UserAgentResponsePayload::AuthMessage(arbiter_proto::proto::auth::ServerMessage {
payload:
Some(arbiter_proto::proto::auth::server_message::Payload::AuthChallenge(challenge)),
})),
} = result
else {
panic!("Expected auth challenge response, got {result:?}");
};
let formatted_challenge = arbiter_proto::format_challenge(&challenge);
let signature = new_key.sign(&formatted_challenge);
let serialized_signature = signature.to_bytes().to_vec();
let result = user_agent_ref
.ask(HandleAuthChallengeSolution {
solution: auth::AuthChallengeSolution {
signature: serialized_signature,
},
})
.await
.expect("Shouldn't fail to send message");
assert_eq!(
result,
UserAgentResponse {
payload: Some(UserAgentResponsePayload::AuthMessage(
arbiter_proto::proto::auth::ServerMessage {
payload: Some(arbiter_proto::proto::auth::server_message::Payload::AuthOk(
AuthOk {},
)),
},
)),
}
);
}

View File

@@ -0,0 +1,230 @@
use arbiter_proto::proto::{
UnsealEncryptedKey, UnsealResult, UnsealStart, auth::AuthChallengeRequest,
user_agent_response::Payload as UserAgentResponsePayload,
};
use arbiter_server::{
actors::{
GlobalActors,
bootstrap::GetToken,
keyholder::{Bootstrap, Seal},
user_agent::{
HandleAuthChallengeRequest, HandleUnsealEncryptedKey, HandleUnsealRequest,
UserAgentActor,
},
},
db,
};
use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
use kameo::actor::{ActorRef, Spawn};
use memsafe::MemSafe;
use x25519_dalek::{EphemeralSecret, PublicKey};
async fn setup_authenticated_user_agent(
seal_key: &[u8],
) -> (arbiter_server::db::DatabasePool, ActorRef<UserAgentActor>) {
let db = db::create_test_pool().await;
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
actors
.key_holder
.ask(Bootstrap {
seal_key_raw: MemSafe::new(seal_key.to_vec()).unwrap(),
})
.await
.unwrap();
actors.key_holder.ask(Seal).await.unwrap();
let user_agent =
UserAgentActor::new_manual(db.clone(), actors.clone(), super::null_recipient());
let user_agent_ref = UserAgentActor::spawn(user_agent);
let token = actors.bootstrapper.ask(GetToken).await.unwrap().unwrap();
let auth_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
user_agent_ref
.ask(HandleAuthChallengeRequest {
req: AuthChallengeRequest {
pubkey: auth_key.verifying_key().to_bytes().to_vec(),
bootstrap_token: Some(token),
},
})
.await
.unwrap();
(db, user_agent_ref)
}
async fn client_dh_encrypt(
user_agent_ref: &ActorRef<UserAgentActor>,
key_to_send: &[u8],
) -> UnsealEncryptedKey {
let client_secret = EphemeralSecret::random();
let client_public = PublicKey::from(&client_secret);
let response = user_agent_ref
.ask(HandleUnsealRequest {
req: UnsealStart {
client_pubkey: client_public.as_bytes().to_vec(),
},
})
.await
.unwrap();
let server_pubkey = match response.payload.unwrap() {
UserAgentResponsePayload::UnsealStartResponse(resp) => resp.server_pubkey,
other => panic!("Expected UnsealStartResponse, got {other:?}"),
};
let server_public = PublicKey::from(<[u8; 32]>::try_from(server_pubkey.as_slice()).unwrap());
let shared_secret = client_secret.diffie_hellman(&server_public);
let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());
let nonce = XNonce::from([0u8; 24]);
let associated_data = b"unseal";
let mut ciphertext = key_to_send.to_vec();
cipher
.encrypt_in_place(&nonce, associated_data, &mut ciphertext)
.unwrap();
UnsealEncryptedKey {
nonce: nonce.to_vec(),
ciphertext,
associated_data: associated_data.to_vec(),
}
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_success() {
let seal_key = b"test-seal-key";
let (_db, user_agent_ref) = setup_authenticated_user_agent(seal_key).await;
let encrypted_key = client_dh_encrypt(&user_agent_ref, seal_key).await;
let response = user_agent_ref
.ask(HandleUnsealEncryptedKey { req: encrypted_key })
.await
.unwrap();
assert_eq!(
response.payload.unwrap(),
UserAgentResponsePayload::UnsealResult(UnsealResult::Success.into()),
);
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_wrong_seal_key() {
let (_db, user_agent_ref) = setup_authenticated_user_agent(b"correct-key").await;
let encrypted_key = client_dh_encrypt(&user_agent_ref, b"wrong-key").await;
let response = user_agent_ref
.ask(HandleUnsealEncryptedKey { req: encrypted_key })
.await
.unwrap();
assert_eq!(
response.payload.unwrap(),
UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
);
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_corrupted_ciphertext() {
let (_db, user_agent_ref) = setup_authenticated_user_agent(b"test-key").await;
let client_secret = EphemeralSecret::random();
let client_public = PublicKey::from(&client_secret);
user_agent_ref
.ask(HandleUnsealRequest {
req: UnsealStart {
client_pubkey: client_public.as_bytes().to_vec(),
},
})
.await
.unwrap();
let response = user_agent_ref
.ask(HandleUnsealEncryptedKey {
req: UnsealEncryptedKey {
nonce: vec![0u8; 24],
ciphertext: vec![0u8; 32],
associated_data: vec![],
},
})
.await
.unwrap();
assert_eq!(
response.payload.unwrap(),
UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
);
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_start_without_auth_fails() {
let db = db::create_test_pool().await;
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
let user_agent =
UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
let user_agent_ref = UserAgentActor::spawn(user_agent);
let client_secret = EphemeralSecret::random();
let client_public = PublicKey::from(&client_secret);
let result = user_agent_ref
.ask(HandleUnsealRequest {
req: UnsealStart {
client_pubkey: client_public.as_bytes().to_vec(),
},
})
.await;
match result {
Err(kameo::error::SendError::HandlerError(err)) => {
assert!(
matches!(err, arbiter_server::actors::user_agent::UserAgentError::InvalidState),
"Expected InvalidState, got {err:?}"
);
}
other => panic!("Expected state machine error, got {other:?}"),
}
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_retry_after_invalid_key() {
let seal_key = b"real-seal-key";
let (_db, user_agent_ref) = setup_authenticated_user_agent(seal_key).await;
{
let encrypted_key = client_dh_encrypt(&user_agent_ref, b"wrong-key").await;
let response = user_agent_ref
.ask(HandleUnsealEncryptedKey { req: encrypted_key })
.await
.unwrap();
assert_eq!(
response.payload.unwrap(),
UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
);
}
{
let encrypted_key = client_dh_encrypt(&user_agent_ref, seal_key).await;
let response = user_agent_ref
.ask(HandleUnsealEncryptedKey { req: encrypted_key })
.await
.unwrap();
assert_eq!(
response.payload.unwrap(),
UserAgentResponsePayload::UnsealResult(UnsealResult::Success.into()),
);
}
}
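The unseal tests above hinge on X25519 Diffie-Hellman: `client_dh_encrypt` and the server each combine their own ephemeral secret with the peer's public key and arrive at the same shared secret, which then keys XChaCha20-Poly1305. As a toy illustration of that symmetry only — using small-integer finite-field DH with made-up demo parameters, not X25519 or anything cryptographically meaningful:

```python
# Toy finite-field Diffie-Hellman illustrating the key agreement the
# unseal handshake relies on. NOT X25519; parameters are for demo only.
p = 0xFFFFFFFB  # a small prime modulus (hypothetical demo value)
g = 5           # generator (hypothetical demo value)

client_secret = 123456789
server_secret = 987654321

# Each side publishes g^secret mod p.
client_public = pow(g, client_secret, p)
server_public = pow(g, server_secret, p)

# Each side combines its own secret with the peer's public value;
# both compute g^(client_secret * server_secret) mod p.
client_shared = pow(server_public, client_secret, p)
server_shared = pow(client_public, server_secret, p)

assert client_shared == server_shared
print(client_shared == server_shared)  # True
```

In the real tests this shared secret is fed directly into `XChaCha20Poly1305::new`, so a mismatched key (the `wrong-key` cases) surfaces as an AEAD decryption failure rather than an explicit key comparison.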


@@ -2,5 +2,14 @@
name = "arbiter-useragent"
version = "0.1.0"
edition = "2024"
license = "Apache-2.0"

[dependencies]
arbiter-proto.path = "../arbiter-proto"
kameo.workspace = true
tokio = {workspace = true, features = ["net"]}
tonic.workspace = true
tracing.workspace = true
ed25519-dalek.workspace = true
smlang.workspace = true
x25519-dalek.workspace = true


@@ -1,14 +1,66 @@
use arbiter_proto::{proto::UserAgentRequest, transport::TransportActor};
use ed25519_dalek::SigningKey;
use kameo::{
    Actor, Reply,
    actor::{ActorRef, WeakActorRef},
    prelude::Message,
};
use smlang::statemachine;
use tonic::transport::CertificateDer;
use tracing::{debug, error};

struct Storage {
    pub identity: SigningKey,
    pub server_ca_cert: CertificateDer<'static>,
}

#[derive(Debug)]
pub enum InitError {
    StorageError,
    Other(String),
}

statemachine! {
    name: UserAgentStateMachine,
    custom_error: false,
    transitions: {
        *Init + SendAuthChallenge = WaitingForAuthSolution
    }
}

pub struct UserAgentActor<A: TransportActor<UserAgentRequest>> {
    key: SigningKey,
    server_ca_cert: CertificateDer<'static>,
    sender: ActorRef<A>,
}

impl<A: TransportActor<UserAgentRequest>> Actor for UserAgentActor<A> {
    type Args = Self;
    type Error = InitError;

    async fn on_start(args: Self::Args, actor_ref: ActorRef<Self>) -> Result<Self, Self::Error> {
        todo!()
    }

    async fn on_link_died(
        &mut self,
        _: WeakActorRef<Self>,
        id: kameo::prelude::ActorId,
        _: kameo::prelude::ActorStopReason,
    ) -> Result<std::ops::ControlFlow<kameo::prelude::ActorStopReason>, Self::Error> {
        if id == self.sender.id() {
            error!("Transport actor died, stopping UserAgentActor");
            Ok(std::ops::ControlFlow::Break(
                kameo::prelude::ActorStopReason::Normal,
            ))
        } else {
            debug!(
                "Linked actor {} died, but it's not the transport actor, ignoring",
                id
            );
            Ok(std::ops::ControlFlow::Continue(()))
        }
    }
}


@@ -1,6 +1,11 @@
# cargo-vet audits file
[[audits.similar]]
who = "hdbg <httpdebugger@protonmail.com>"
criteria = "safe-to-deploy"
version = "2.2.1"
[[audits.test-log]]
who = "hdbg <httpdebugger@protonmail.com>"
criteria = "safe-to-deploy"
@@ -11,6 +16,12 @@ who = "hdbg <httpdebugger@protonmail.com>"
criteria = "safe-to-deploy"
delta = "0.2.18 -> 0.2.19"
[[trusted.cc]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2022-10-29"
end = "2027-02-16"
[[trusted.h2]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
@@ -29,6 +40,12 @@ user-id = 359 # Sean McArthur (seanmonstar)
start = "2022-01-15"
end = "2027-02-14"
[[trusted.libc]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2024-08-15"
end = "2027-02-16"
[[trusted.rustix]]
criteria = "safe-to-deploy"
user-id = 6825 # Dan Gohman (sunfishcode)
@@ -46,3 +63,33 @@ criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-03-01"
end = "2027-02-14"
[[trusted.thread_local]]
criteria = "safe-to-deploy"
user-id = 2915 # Amanieu d'Antras (Amanieu)
start = "2019-09-07"
end = "2027-02-16"
[[trusted.toml]]
criteria = "safe-to-deploy"
user-id = 6743 # Ed Page (epage)
start = "2022-12-14"
end = "2027-02-16"
[[trusted.toml_parser]]
criteria = "safe-to-deploy"
user-id = 6743 # Ed Page (epage)
start = "2025-07-08"
end = "2027-02-16"
[[trusted.tonic-build]]
criteria = "safe-to-deploy"
user-id = 10
start = "2019-09-10"
end = "2027-02-16"
[[trusted.windows-sys]]
criteria = "safe-to-deploy"
user-id = 64539 # Kenny Kerr (kennykerr)
start = "2021-11-15"
end = "2027-02-16"


@@ -13,6 +13,9 @@ url = "https://raw.githubusercontent.com/google/supply-chain/main/audits.toml"
[imports.mozilla]
url = "https://raw.githubusercontent.com/mozilla/supply-chain/main/audits.toml"
[imports.zcash]
url = "https://raw.githubusercontent.com/zcash/rust-ecosystem/main/supply-chain/audits.toml"
[[exemptions.addr2line]]
version = "0.25.1"
criteria = "safe-to-deploy"
@@ -41,10 +44,6 @@ criteria = "safe-to-deploy"
version = "0.1.89"
criteria = "safe-to-deploy"
[[exemptions.autocfg]]
version = "1.5.0"
criteria = "safe-to-deploy"
[[exemptions.aws-lc-rs]]
version = "1.15.4"
criteria = "safe-to-deploy"
@@ -193,10 +192,6 @@ criteria = "safe-to-deploy"
version = "0.2.0"
criteria = "safe-to-deploy"
[[exemptions.dunce]]
version = "1.0.5"
criteria = "safe-to-deploy"
[[exemptions.dyn-clone]]
version = "1.0.20"
criteria = "safe-to-deploy"
@@ -209,10 +204,6 @@ criteria = "safe-to-deploy"
version = "3.0.0-pre.6"
criteria = "safe-to-deploy"
[[exemptions.errno]]
version = "0.3.14"
criteria = "safe-to-deploy"
[[exemptions.fiat-crypto]]
version = "0.3.0"
criteria = "safe-to-deploy"
@@ -261,10 +252,6 @@ criteria = "safe-to-deploy"
version = "1.4.0"
criteria = "safe-to-deploy"
[[exemptions.http-body]]
version = "1.0.1"
criteria = "safe-to-deploy"
[[exemptions.http-body-util]]
version = "0.1.3"
criteria = "safe-to-deploy"
@@ -329,10 +316,6 @@ criteria = "safe-to-deploy"
version = "0.19.0"
criteria = "safe-to-deploy"
[[exemptions.libc]]
version = "0.2.181"
criteria = "safe-to-deploy"
[[exemptions.libsqlite3-sys]]
version = "0.35.0"
criteria = "safe-to-deploy"
@@ -525,10 +508,6 @@ criteria = "safe-to-deploy"
version = "0.1.27"
criteria = "safe-to-deploy"
[[exemptions.rustc_version]]
version = "0.4.1"
criteria = "safe-to-deploy"
[[exemptions.rusticata-macros]]
version = "4.1.0"
criteria = "safe-to-deploy"
@@ -545,10 +524,6 @@ criteria = "safe-to-deploy"
version = "0.103.9"
criteria = "safe-to-deploy"
[[exemptions.rustversion]]
version = "1.0.22"
criteria = "safe-to-deploy"
[[exemptions.scoped-futures]]
version = "0.1.4"
criteria = "safe-to-deploy"
@@ -653,10 +628,6 @@ criteria = "safe-to-deploy"
version = "2.0.18"
criteria = "safe-to-deploy"
[[exemptions.thread_local]]
version = "1.1.9"
criteria = "safe-to-run"
[[exemptions.time]]
version = "0.3.47"
criteria = "safe-to-deploy"
@@ -689,14 +660,6 @@ criteria = "safe-to-deploy"
version = "0.7.18"
criteria = "safe-to-deploy"
[[exemptions.toml]]
version = "0.9.11+spec-1.1.0"
criteria = "safe-to-deploy"
[[exemptions.toml_parser]]
version = "1.0.6+spec-1.1.0"
criteria = "safe-to-deploy"
[[exemptions.tonic]]
version = "0.14.3"
criteria = "safe-to-deploy"
@@ -741,10 +704,6 @@ criteria = "safe-to-deploy"
version = "0.3.22"
criteria = "safe-to-run"
[[exemptions.try-lock]]
version = "0.2.5"
criteria = "safe-to-deploy"
[[exemptions.typenum]]
version = "1.19.0"
criteria = "safe-to-deploy"
@@ -769,10 +728,6 @@ criteria = "safe-to-deploy"
version = "1.20.0"
criteria = "safe-to-deploy"
[[exemptions.want]]
version = "0.3.1"
criteria = "safe-to-deploy"
[[exemptions.wasi]]
version = "0.11.1+wasi-snapshot-preview1"
criteria = "safe-to-deploy"
@@ -817,10 +772,6 @@ criteria = "safe-to-deploy"
version = "0.59.3"
criteria = "safe-to-deploy"
[[exemptions.windows-link]]
version = "0.2.1"
criteria = "safe-to-deploy"
[[exemptions.windows-result]]
version = "0.4.1"
criteria = "safe-to-deploy"
@@ -829,18 +780,6 @@ criteria = "safe-to-deploy"
version = "0.5.1"
criteria = "safe-to-deploy"
[[exemptions.windows-sys]]
version = "0.52.0"
criteria = "safe-to-deploy"
[[exemptions.windows-sys]]
version = "0.60.2"
criteria = "safe-to-deploy"
[[exemptions.windows-sys]]
version = "0.61.2"
criteria = "safe-to-deploy"
[[exemptions.windows-targets]]
version = "0.52.6"
criteria = "safe-to-deploy"
@@ -925,10 +864,6 @@ criteria = "safe-to-deploy"
version = "0.5.2"
criteria = "safe-to-deploy"
[[exemptions.zeroize]]
version = "1.8.2"
criteria = "safe-to-deploy"
[[exemptions.zmij]]
version = "1.0.20"
criteria = "safe-to-deploy"


@@ -41,6 +41,12 @@ user-id = 359
user-login = "seanmonstar"
user-name = "Sean McArthur"
[[publisher.libc]]
version = "0.2.182"
when = "2026-02-13"
user-id = 55123
user-login = "rust-lang-owner"
[[publisher.rustix]]
version = "1.1.3"
when = "2025-12-23"
@@ -63,12 +69,33 @@ user-login = "dtolnay"
user-name = "David Tolnay"
[[publisher.syn]]
version = "2.0.114"
when = "2026-01-07"
version = "2.0.115"
when = "2026-02-12"
user-id = 3618
user-login = "dtolnay"
user-name = "David Tolnay"
[[publisher.thread_local]]
version = "1.1.9"
when = "2025-06-12"
user-id = 2915
user-login = "Amanieu"
user-name = "Amanieu d'Antras"
[[publisher.toml]]
version = "0.9.12+spec-1.1.0"
when = "2026-02-10"
user-id = 6743
user-login = "epage"
user-name = "Ed Page"
[[publisher.toml_parser]]
version = "1.0.8+spec-1.1.0"
when = "2026-02-12"
user-id = 6743
user-login = "epage"
user-name = "Ed Page"
[[publisher.unicode-width]]
version = "0.1.14"
when = "2024-09-19"
@@ -120,6 +147,34 @@ version = "0.244.0"
when = "2026-01-06"
trusted-publisher = "github:bytecodealliance/wasm-tools"
[[publisher.windows-sys]]
version = "0.52.0"
when = "2023-11-15"
user-id = 64539
user-login = "kennykerr"
user-name = "Kenny Kerr"
[[publisher.windows-sys]]
version = "0.59.0"
when = "2024-07-30"
user-id = 64539
user-login = "kennykerr"
user-name = "Kenny Kerr"
[[publisher.windows-sys]]
version = "0.60.2"
when = "2025-06-12"
user-id = 64539
user-login = "kennykerr"
user-name = "Kenny Kerr"
[[publisher.windows-sys]]
version = "0.61.2"
when = "2025-10-06"
user-id = 64539
user-login = "kennykerr"
user-name = "Kenny Kerr"
[[publisher.wit-bindgen]]
version = "0.51.0"
when = "2026-01-12"
@@ -265,6 +320,12 @@ criteria = "safe-to-deploy"
version = "1.1.2"
notes = "Contains `unsafe` code but it's well-documented and scoped to what it's intended to be doing. Otherwise a well-focused and straightforward crate."
[[audits.bytecode-alliance.audits.cipher]]
who = "Andrew Brown <andrew.brown@intel.com>"
criteria = "safe-to-deploy"
version = "0.4.4"
notes = "Most unsafe is hidden by `inout` dependency; only remaining unsafe is raw-splitting a slice and an unreachable hint. Older versions of this regularly reach ~150k daily downloads."
[[audits.bytecode-alliance.audits.core-foundation-sys]]
who = "Dan Gohman <dev@sunfishcode.online>"
criteria = "safe-to-deploy"
@@ -279,6 +340,23 @@ who = "Nick Fitzgerald <fitzgen@gmail.com>"
criteria = "safe-to-deploy"
delta = "0.2.4 -> 0.2.5"
[[audits.bytecode-alliance.audits.errno]]
who = "Dan Gohman <dev@sunfishcode.online>"
criteria = "safe-to-deploy"
version = "0.3.0"
notes = "This crate uses libc and windows-sys APIs to get and set the raw OS error value."
[[audits.bytecode-alliance.audits.errno]]
who = "Dan Gohman <dev@sunfishcode.online>"
criteria = "safe-to-deploy"
delta = "0.3.0 -> 0.3.1"
notes = "Just a dependency version bump and a bug fix for redox"
[[audits.bytecode-alliance.audits.errno]]
who = "Dan Gohman <dev@sunfishcode.online>"
criteria = "safe-to-deploy"
delta = "0.3.9 -> 0.3.10"
[[audits.bytecode-alliance.audits.fastrand]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
@@ -385,11 +463,28 @@ criteria = "safe-to-deploy"
delta = "0.4.1 -> 0.5.0"
notes = "Minor changes for a `no_std` upgrade but otherwise everything looks as expected."
[[audits.bytecode-alliance.audits.http-body]]
who = "Pat Hickey <phickey@fastly.com>"
criteria = "safe-to-deploy"
version = "1.0.0-rc.2"
[[audits.bytecode-alliance.audits.http-body]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
delta = "1.0.0-rc.2 -> 1.0.0"
notes = "Only minor changes made for a stable release."
[[audits.bytecode-alliance.audits.iana-time-zone-haiku]]
who = "Dan Gohman <dev@sunfishcode.online>"
criteria = "safe-to-deploy"
version = "0.1.2"
[[audits.bytecode-alliance.audits.inout]]
who = "Andrew Brown <andrew.brown@intel.com>"
criteria = "safe-to-deploy"
version = "0.1.3"
notes = "A part of RustCrypto/utils, this crate is designed to handle unsafe buffers and carefully documents the safety concerns throughout. Older versions of this tally up to ~130k daily downloads."
[[audits.bytecode-alliance.audits.leb128fmt]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
@@ -443,6 +538,24 @@ criteria = "safe-to-deploy"
delta = "0.8.5 -> 0.8.9"
notes = "No new unsafe code, just refactorings."
[[audits.bytecode-alliance.audits.nu-ansi-term]]
who = "Pat Hickey <phickey@fastly.com>"
criteria = "safe-to-deploy"
version = "0.46.0"
notes = "one use of unsafe to call windows specific api to get console handle."
[[audits.bytecode-alliance.audits.nu-ansi-term]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
delta = "0.46.0 -> 0.50.1"
notes = "Lots of stylistic/rust-related changes, plus new features, but nothing out of the ordinary."
[[audits.bytecode-alliance.audits.nu-ansi-term]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
delta = "0.50.1 -> 0.50.3"
notes = "CI changes, Rust changes, nothing out of the ordinary."
[[audits.bytecode-alliance.audits.num-traits]]
who = "Andrew Brown <andrew.brown@intel.com>"
criteria = "safe-to-deploy"
@@ -537,12 +650,38 @@ criteria = "safe-to-run"
delta = "0.2.16 -> 0.2.18"
notes = "Standard macro changes, nothing out of place"
[[audits.bytecode-alliance.audits.tracing-log]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
version = "0.1.3"
notes = """
This is a standard adapter between the `log` ecosystem and the `tracing`
ecosystem. There's one `unsafe` block in this crate and it's well-scoped.
"""
[[audits.bytecode-alliance.audits.tracing-log]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
delta = "0.1.3 -> 0.2.0"
notes = "Nothing out of the ordinary, a typical major version update and nothing awry."
[[audits.bytecode-alliance.audits.try-lock]]
who = "Pat Hickey <phickey@fastly.com>"
criteria = "safe-to-deploy"
version = "0.2.4"
notes = "Implements a concurrency primitive with atomics, and is not obviously incorrect"
[[audits.bytecode-alliance.audits.vcpkg]]
who = "Pat Hickey <phickey@fastly.com>"
criteria = "safe-to-deploy"
version = "0.2.15"
notes = "no build.rs, no macros, no unsafe. It reads the filesystem and makes copies of DLLs into OUT_DIR."
[[audits.bytecode-alliance.audits.want]]
who = "Pat Hickey <phickey@fastly.com>"
criteria = "safe-to-deploy"
version = "0.3.0"
[[audits.bytecode-alliance.audits.wasm-metadata]]
who = "Alex Crichton <alex@alexcrichton.com>"
criteria = "safe-to-deploy"
@@ -591,6 +730,13 @@ criteria = "safe-to-deploy"
delta = "0.243.0 -> 0.244.0"
notes = "The Bytecode Alliance is the author of this crate"
[[audits.google.audits.autocfg]]
who = "Manish Goregaokar <manishearth@google.com>"
criteria = "safe-to-deploy"
version = "1.4.0"
notes = "Contains no unsafe"
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.base64]]
who = "amarjotgill <amarjotgill@google.com>"
criteria = "safe-to-deploy"
@@ -719,6 +865,89 @@ delta = "0.2.9 -> 0.2.13"
notes = "Audited at https://fxrev.dev/946396"
aggregated-from = "https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/third_party/rust_crates/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.proc-macro-error-attr]]
who = "George Burgess IV <gbiv@google.com>"
criteria = "safe-to-deploy"
version = "1.0.4"
aggregated-from = "https://chromium.googlesource.com/chromiumos/third_party/rust_crates/+/refs/heads/main/cargo-vet/audits.toml?format=TEXT"
[[audits.google.audits.rand_core]]
who = "Lukasz Anforowicz <lukasza@chromium.org>"
criteria = "safe-to-deploy"
version = "0.6.4"
notes = """
For more detailed unsafe review notes please see https://crrev.com/c/6362797
"""
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.rustversion]]
who = "Lukasz Anforowicz <lukasza@chromium.org>"
criteria = "safe-to-deploy"
version = "1.0.14"
notes = """
Grepped for `-i cipher`, `-i crypto`, `'\bfs\b'``, `'\bnet\b'``, `'\bunsafe\b'``
and there were no hits except for:
* Using trivially-safe `unsafe` in test code:
```
tests/test_const.rs:unsafe fn _unsafe() {}
tests/test_const.rs:const _UNSAFE: () = unsafe { _unsafe() };
```
* Using `unsafe` in a string:
```
src/constfn.rs: "unsafe" => Qualifiers::Unsafe,
```
* Using `std::fs` in `build/build.rs` to write `${OUT_DIR}/version.expr`
which is later read back via `include!` used in `src/lib.rs`.
Version `1.0.6` of this crate has been added to Chromium in
https://source.chromium.org/chromium/chromium/src/+/28841c33c77833cc30b286f9ae24c97e7a8f4057
"""
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.rustversion]]
who = "Adrian Taylor <adetaylor@chromium.org>"
criteria = "safe-to-deploy"
delta = "1.0.14 -> 1.0.15"
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.rustversion]]
who = "danakj <danakj@chromium.org>"
criteria = "safe-to-deploy"
delta = "1.0.15 -> 1.0.16"
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.rustversion]]
who = "Dustin J. Mitchell <djmitche@chromium.org>"
criteria = "safe-to-deploy"
delta = "1.0.16 -> 1.0.17"
notes = "Just updates windows compat"
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.rustversion]]
who = "Liza Burakova <liza@chromium.org>"
criteria = "safe-to-deploy"
delta = "1.0.17 -> 1.0.18"
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.rustversion]]
who = "Dustin J. Mitchell <djmitche@chromium.org>"
criteria = "safe-to-deploy"
delta = "1.0.18 -> 1.0.19"
notes = "No unsafe, just doc changes"
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.rustversion]]
who = "Daniel Cheng <dcheng@chromium.org>"
criteria = "safe-to-deploy"
delta = "1.0.19 -> 1.0.20"
notes = "Only minor updates to documentation and the mock today used for testing."
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.smallvec]]
who = "Manish Goregaokar <manishearth@google.com>"
criteria = "safe-to-deploy"
@@ -736,6 +965,28 @@ Previously reviewed during security review and the audit is grandparented in.
"""
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.strum]]
who = "danakj@chromium.org"
criteria = "safe-to-deploy"
version = "0.25.0"
notes = """
Reviewed in https://crrev.com/c/5171063
Previously reviewed during security review and the audit is grandparented in.
"""
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.google.audits.strum_macros]]
who = "danakj@chromium.org"
criteria = "safe-to-deploy"
version = "0.25.3"
notes = """
Reviewed in https://crrev.com/c/5171063
Previously reviewed during security review and the audit is grandparented in.
"""
aggregated-from = "https://chromium.googlesource.com/chromium/src/+/main/third_party/rust/chromium_crates_io/supply-chain/audits.toml?format=TEXT"
[[audits.mozilla.wildcard-audits.core-foundation-sys]]
who = "Bobby Holley <bobbyholley@gmail.com>"
criteria = "safe-to-deploy"
@@ -812,6 +1063,12 @@ criteria = "safe-to-deploy"
delta = "0.2.3 -> 0.2.4"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.errno]]
who = "Mike Hommey <mh+mozilla@glandium.org>"
criteria = "safe-to-deploy"
delta = "0.3.1 -> 0.3.3"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.fastrand]]
who = "Mike Hommey <mh+mozilla@glandium.org>"
criteria = "safe-to-deploy"
@@ -929,6 +1186,16 @@ yet, but it's all valid. Otherwise it's a pretty simple crate.
"""
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.rustc_version]]
who = "Nika Layzell <nika@thelayzells.com>"
criteria = "safe-to-deploy"
version = "0.4.0"
notes = """
Use of powerful capabilities is limited to invoking `rustc -vV` to get version
information for parsing version information.
"""
aggregated-from = "https://raw.githubusercontent.com/mozilla/cargo-vet/main/supply-chain/audits.toml"
[[audits.mozilla.audits.serde_spanned]]
who = "Ben Dean-Kawamura <bdk@mozilla.com>"
criteria = "safe-to-deploy"
@@ -955,6 +1222,12 @@ criteria = "safe-to-deploy"
delta = "1.1.0 -> 1.3.0"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.similar]]
who = "Nika Layzell <nika@thelayzells.com>"
criteria = "safe-to-deploy"
delta = "2.2.1 -> 2.7.0"
aggregated-from = "https://raw.githubusercontent.com/mozilla/cargo-vet/main/supply-chain/audits.toml"
[[audits.mozilla.audits.smallvec]]
who = "Erich Gubler <erichdongubler@gmail.com>"
criteria = "safe-to-deploy"
@@ -967,6 +1240,30 @@ criteria = "safe-to-deploy"
delta = "0.10.0 -> 0.11.1"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.strum]]
who = "Teodor Tanasoaia <ttanasoaia@mozilla.com>"
criteria = "safe-to-deploy"
delta = "0.25.0 -> 0.26.3"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.strum]]
who = "Erich Gubler <erichdongubler@gmail.com>"
criteria = "safe-to-deploy"
delta = "0.26.3 -> 0.27.1"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.strum_macros]]
who = "Teodor Tanasoaia <ttanasoaia@mozilla.com>"
criteria = "safe-to-deploy"
delta = "0.25.3 -> 0.26.4"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.strum_macros]]
who = "Erich Gubler <erichdongubler@gmail.com>"
criteria = "safe-to-deploy"
delta = "0.26.4 -> 0.27.1"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.synstructure]]
who = "Nika Layzell <nika@thelayzells.com>"
criteria = "safe-to-deploy"
@@ -1038,3 +1335,153 @@ who = "Jan-Erik Rediger <jrediger@mozilla.com>"
criteria = "safe-to-deploy"
version = "0.1.5"
aggregated-from = "https://raw.githubusercontent.com/mozilla/glean/main/supply-chain/audits.toml"
[[audits.mozilla.audits.windows-link]]
who = "Mark Hammond <mhammond@skippinet.com.au>"
criteria = "safe-to-deploy"
version = "0.1.1"
notes = "A microsoft crate allowing unsafe calls to windows apis."
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.windows-link]]
who = "Erich Gubler <erichdongubler@gmail.com>"
criteria = "safe-to-deploy"
delta = "0.1.1 -> 0.2.0"
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.mozilla.audits.zeroize]]
who = "Benjamin Beurdouche <beurdouche@mozilla.com>"
criteria = "safe-to-deploy"
version = "1.8.1"
notes = """
This code DOES contain unsafe code required to internally call volatiles
for deleting data. This is expected and documented behavior.
"""
aggregated-from = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
[[audits.zcash.audits.autocfg]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "1.4.0 -> 1.5.0"
notes = "Filesystem change is to remove the generated LLVM IR output file after probing."
aggregated-from = "https://raw.githubusercontent.com/zcash/zcash/master/qa/supply-chain/audits.toml"
[[audits.zcash.audits.dunce]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
version = "1.0.5"
notes = """
Does what it says on the tin. No `unsafe`, and the only IO is `std::fs::canonicalize`.
Path and string handling looks plausibly correct.
"""
aggregated-from = "https://raw.githubusercontent.com/zcash/librustzcash/main/supply-chain/audits.toml"
[[audits.zcash.audits.errno]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.3.3 -> 0.3.8"
aggregated-from = "https://raw.githubusercontent.com/zcash/zcash/master/qa/supply-chain/audits.toml"
[[audits.zcash.audits.errno]]
who = "Daira-Emma Hopwood <daira@jacaranda.org>"
criteria = "safe-to-deploy"
delta = "0.3.8 -> 0.3.9"
aggregated-from = "https://raw.githubusercontent.com/zcash/librustzcash/main/supply-chain/audits.toml"
[[audits.zcash.audits.errno]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.3.10 -> 0.3.11"
notes = "The `__errno` location for vxworks and cygwin looks correct from a quick search."
aggregated-from = "https://raw.githubusercontent.com/zcash/wallet/main/supply-chain/audits.toml"
[[audits.zcash.audits.errno]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.3.11 -> 0.3.13"
aggregated-from = "https://raw.githubusercontent.com/zcash/zcash/master/qa/supply-chain/audits.toml"
[[audits.zcash.audits.errno]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.3.13 -> 0.3.14"
aggregated-from = "https://raw.githubusercontent.com/zcash/librustzcash/main/supply-chain/audits.toml"
[[audits.zcash.audits.http-body]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "1.0.0 -> 1.0.1"
aggregated-from = "https://raw.githubusercontent.com/zcash/librustzcash/main/supply-chain/audits.toml"
[[audits.zcash.audits.inout]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.1.3 -> 0.1.4"
aggregated-from = "https://raw.githubusercontent.com/zcash/wallet/main/supply-chain/audits.toml"
[[audits.zcash.audits.rustc_version]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.4.0 -> 0.4.1"
notes = "Changes to `Command` usage are to add support for `RUSTC_WRAPPER`."
aggregated-from = "https://raw.githubusercontent.com/zcash/zcash/master/qa/supply-chain/audits.toml"
[[audits.zcash.audits.rustversion]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "1.0.20 -> 1.0.21"
notes = "Build script change is to fix building with `-Zfmt-debug=none`."
aggregated-from = "https://raw.githubusercontent.com/zcash/zcash/master/qa/supply-chain/audits.toml"
[[audits.zcash.audits.rustversion]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "1.0.21 -> 1.0.22"
notes = "Changes to generated code are to prepend a clippy annotation."
aggregated-from = "https://raw.githubusercontent.com/zcash/wallet/main/supply-chain/audits.toml"
[[audits.zcash.audits.strum]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.27.1 -> 0.27.2"
aggregated-from = "https://raw.githubusercontent.com/zcash/librustzcash/main/supply-chain/audits.toml"
[[audits.zcash.audits.strum_macros]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.27.1 -> 0.27.2"
aggregated-from = "https://raw.githubusercontent.com/zcash/librustzcash/main/supply-chain/audits.toml"
[[audits.zcash.audits.try-lock]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.2.4 -> 0.2.5"
notes = "Bumps MSRV to remove unsafe code block."
aggregated-from = "https://raw.githubusercontent.com/zcash/zcash/master/qa/supply-chain/audits.toml"
[[audits.zcash.audits.want]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.3.0 -> 0.3.1"
notes = """
Migrates to `try-lock 0.2.4` to replace some unsafe APIs that were not marked
`unsafe` (but that were being used safely).
"""
aggregated-from = "https://raw.githubusercontent.com/zcash/zcash/master/qa/supply-chain/audits.toml"
[[audits.zcash.audits.windows-link]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "0.2.0 -> 0.2.1"
notes = "No code changes at all."
aggregated-from = "https://raw.githubusercontent.com/zcash/librustzcash/main/supply-chain/audits.toml"
[[audits.zcash.audits.zeroize]]
who = "Jack Grigg <jack@electriccoin.co>"
criteria = "safe-to-deploy"
delta = "1.8.1 -> 1.8.2"
notes = """
Changes to `unsafe` code are to alter how `core::mem::size_of` is named; no actual changes
to the `unsafe` logic.
"""
aggregated-from = "https://raw.githubusercontent.com/zcash/wallet/main/supply-chain/audits.toml"


@@ -1,31 +0,0 @@
Extension Discovery Cache
=========================
This folder is used by `package:extension_discovery` to cache lists of
packages that contains extensions for other packages.
DO NOT USE THIS FOLDER
----------------------
* Do not read (or rely) the contents of this folder.
* Do not write to this folder.
If you're interested in the lists of extensions stored in this folder use the
API offered by package `extension_discovery` to get this information.
If this package doesn't work for your use-case, then don't try to read the
contents of this folder. It may change, and will not remain stable.
Use package `extension_discovery`
---------------------------------
If you want to access information from this folder.
Feel free to delete this folder
-------------------------------
Files in this folder act as a cache, and the cache is discarded if the files
are older than the modification time of `.dart_tool/package_config.json`.
Hence, it should never be necessary to clear this cache manually; if you find a
need to do so, please file a bug.


@@ -1 +0,0 @@
{"version":2,"entries":[{"package":"arbiter","rootUri":"../","packageUri":"lib/"}]}

Some files were not shown because too many files have changed in this diff.