19 Commits

Author SHA1 Message Date
075d33219e feat(server): implement KeyStorage and state machine lifecycle
Some checks failed
ci/woodpecker/pr/server-audit Pipeline was successful
ci/woodpecker/pr/server-vet Pipeline failed
ci/woodpecker/pr/server-test Pipeline failed
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline failed
ci/woodpecker/push/server-test Pipeline failed
2026-02-16 15:57:14 +01:00
8cb6f4abe0 feat(tls): implement TLS certificate management and rotation 2026-02-16 15:57:00 +01:00
hdbg
4bac70a6e9 security(server): cargo-vet proper init
All checks were successful
ci/woodpecker/push/server-audit Pipeline was successful
ci/woodpecker/push/server-vet Pipeline was successful
ci/woodpecker/push/server-test Pipeline was successful
2026-02-14 19:16:09 +01:00
hdbg
54a41743be housekeeping(server): trimmed-down dependencies 2026-02-14 19:04:50 +01:00
hdbg
02ed243810 ci(server): introduce cargo-audit pipeline 2026-02-14 19:04:50 +01:00
hdbg
93005199e9 fix(ci): protoc installation for arbiter-proto compilation
All checks were successful
ci/woodpecker/push/server-test Pipeline was successful
2026-02-14 19:00:34 +01:00
hdbg
72b680f103 fix(ci): mise docker image
Some checks failed
ci/woodpecker/push/server-test Pipeline failed
2026-02-14 18:40:53 +01:00
hdbg
90f2476f3d ci(server): introduce tests pipeline
Some checks failed
ci/woodpecker/push/server-test Pipeline failed
2026-02-14 18:39:57 +01:00
hdbg
81a55d28f0 test(db): add create_test_pool and use in tests 2026-02-14 18:33:33 +01:00
hdbg
69dd8f57ca tests(server): UserAgent bootstrap token auth flow test 2026-02-14 18:16:19 +01:00
hdbg
345a967c13 refactor(server): separated UserAgentActor gRPC transport related things into separate module 2026-02-14 17:58:25 +01:00
hdbg
069a997691 feat(server): UserAgent auth flow implemented 2026-02-14 17:53:58 +01:00
hdbg
ffa60c90b1 feat(auth): simplify auth model and implement bootstrap flow
Remove key_identity indirection table, storing public keys and nonces
directly on client tables. Replace AuthResponse with AuthOk, add a
BootstrapActor to manage token lifecycle, and move user agent stream
handling into the actor module.
2026-02-14 12:03:14 +01:00
hdbg
8fb7a04102 misc: spec refactor :) 2026-02-13 17:24:20 +01:00
hdbg
056cd4af40 misc: initial spec 2026-02-13 17:18:50 +01:00
hdbg
832d884457 feat(auth): implement bootstrap token auth handling 2026-02-13 16:35:54 +01:00
hdbg
208bbbd540 feat: actors experiment 2026-02-13 13:37:58 +01:00
hdbg
bbbb4feaa0 feat(unseal): add unseal protocol and crypto infrastructure 2026-02-12 18:49:43 +01:00
hdbg
8dd0276185 feat(proto): add separate client/user-agent gRPC services 2026-02-11 13:31:39 +01:00
52 changed files with 6725 additions and 186 deletions


@@ -1,3 +0,0 @@
{
"git.enabled": false
}


@@ -0,0 +1,26 @@
when:
  - event: pull_request
    path:
      include: ['.woodpecker/server-*.yaml', 'server/**']
  - event: push
    branch: main
    path:
      include: ['.woodpecker/server-*.yaml', 'server/**']

steps:
  - name: audit
    image: jdxcode/mise:latest
    directory: server
    environment:
      CARGO_TERM_COLOR: always
      CARGO_TARGET_DIR: /usr/local/cargo/target
      CARGO_HOME: /usr/local/cargo/registry
    volumes:
      - cargo-target:/usr/local/cargo/target
      - cargo-registry:/usr/local/cargo/registry
    commands:
      - apt-get update && apt-get install -y pkg-config
      # Install only the necessary Rust toolchain and cargo-audit to speed up the CI
      - mise install rust
      - mise install cargo:cargo-audit
      - mise exec cargo:cargo-audit -- cargo audit


@@ -0,0 +1,27 @@
when:
  - event: pull_request
    path:
      include: ['.woodpecker/server-*.yaml', 'server/**']
  - event: push
    branch: main
    path:
      include: ['.woodpecker/server-*.yaml', 'server/**']

steps:
  - name: test
    image: jdxcode/mise:latest
    directory: server
    environment:
      CARGO_TERM_COLOR: always
      CARGO_TARGET_DIR: /usr/local/cargo/target
      CARGO_HOME: /usr/local/cargo/registry
    volumes:
      - cargo-target:/usr/local/cargo/target
      - cargo-registry:/usr/local/cargo/registry
    commands:
      - apt-get update && apt-get install -y pkg-config
      # Install only the necessary Rust toolchain and test runner to speed up the CI
      - mise install rust
      - mise install protoc
      - mise install cargo:cargo-nextest
      - mise exec cargo:cargo-nextest -- cargo nextest run --no-fail-fast


@@ -0,0 +1,26 @@
when:
  - event: pull_request
    path:
      include: ['.woodpecker/server-*.yaml', 'server/**']
  - event: push
    branch: main
    path:
      include: ['.woodpecker/server-*.yaml', 'server/**']

steps:
  - name: vet
    image: jdxcode/mise:latest
    directory: server
    environment:
      CARGO_TERM_COLOR: always
      CARGO_TARGET_DIR: /usr/local/cargo/target
      CARGO_HOME: /usr/local/cargo/registry
    volumes:
      - cargo-target:/usr/local/cargo/target
      - cargo-registry:/usr/local/cargo/registry
    commands:
      - apt-get update && apt-get install -y pkg-config
      # Install only the necessary Rust toolchain and cargo-vet to speed up the CI
      - mise install rust
      - mise install cargo:cargo-vet
      - mise exec cargo:cargo-vet -- cargo vet

ARCHITECTURE.md (new file, 156 lines)

@@ -0,0 +1,156 @@
# Arbiter

Arbiter is a permissioned signing service for cryptocurrency wallets. It runs as a background service on the user's machine with an optional client application for vault management.

**Core principle:** The vault NEVER exposes key material. It only produces signatures when a request satisfies the configured policies.

---

## 1. Peer Types

Arbiter distinguishes two kinds of peers:

- **User Agent** — A client application used by the owner to manage the vault (create wallets, approve SDK clients, configure policies).
- **SDK Client** — A consumer of signing capabilities, typically an automation tool. In the future, this could include a browser-based wallet.

---

## 2. Authentication

### 2.1 Challenge-Response

All peers authenticate via public-key cryptography using a challenge-response protocol:

1. The peer sends its public key and requests a challenge.
2. The server looks up the key in its database. If found, it increments the nonce and returns a challenge (replay-attack protection).
3. The peer signs the challenge with its private key and sends the signature back.
4. The server verifies the signature:
   - **Pass:** The connection is considered authenticated.
   - **Fail:** The server closes the connection.
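The exact bytes a peer signs follow the `format_challenge` helper that appears later in this diff: `"<nonce>:<hex(pubkey)>"`. A minimal std-only sketch of the replay protection in steps 2–4; the actual ed25519 signing (done with ed25519-dalek in the server) is omitted:

```rust
// Sketch of the challenge construction, mirroring `format_challenge` in
// arbiter-proto. Signature creation/verification itself is out of scope here.
fn hex_encode(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

/// The byte string the peer must sign: "<nonce>:<hex(pubkey)>".
fn challenge_bytes(nonce: i32, pubkey: &[u8]) -> Vec<u8> {
    format!("{}:{}", nonce, hex_encode(pubkey)).into_bytes()
}

/// Server side: every issued challenge bumps the per-client nonce,
/// so a captured signature can never be replayed.
fn next_challenge(stored_nonce: &mut i32, pubkey: &[u8]) -> Vec<u8> {
    *stored_nonce += 1;
    challenge_bytes(*stored_nonce, pubkey)
}

fn main() {
    let mut stored_nonce = 1; // per-client nonce persisted in the database
    let pubkey = [0xab, 0xcd];
    let c1 = next_challenge(&mut stored_nonce, &pubkey);
    let c2 = next_challenge(&mut stored_nonce, &pubkey);
    assert_eq!(c1, b"2:abcd".to_vec());
    assert_ne!(c1, c2); // fresh nonce on each challenge
}
```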
### 2.2 User Agent Bootstrap

On first run — when no User Agents are registered — the server generates a one-time bootstrap token. It is made available in two ways:

- **Local setup:** Written to `~/.arbiter/bootstrap_token` for automatic discovery by a co-located User Agent.
- **Remote setup:** Printed to the server's console output.

The first User Agent must present this token alongside the standard challenge-response to complete registration.

### 2.3 SDK Client Registration

There is no bootstrap mechanism for SDK clients. They must be explicitly approved by an already-registered User Agent.

---

## 3. Server Identity

The server proves its identity using TLS with a self-signed certificate. The TLS private key is generated on first run and is long-term; no rotation mechanism exists yet due to the complexity of multi-peer coordination.

Peers verify the server by its **public key fingerprint**:

- **User Agent (local):** Receives the fingerprint automatically through the bootstrap token.
- **User Agent (remote) / SDK Client:** Must receive the fingerprint out-of-band.

> A streamlined setup mechanism using a single connection string is planned but not yet implemented.

---

## 4. Key Management

### 4.1 Key Hierarchy

There are three layers of keys:

| Key | Encrypts | Encrypted by |
|---|---|---|
| **User key** (password) | Root key | — (derived from user input) |
| **Root key** | Wallet keys | User key |
| **Wallet keys** | — (used for signing) | Root key |

This layered design enables:

- **Password rotation** without re-encrypting every wallet key (only the root key is re-encrypted).
- **Root key rotation** without requiring the user to change their password.
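A toy illustration of why this layering makes password rotation cheap. A XOR "cipher" stands in for the real Argon2 + XChaCha20-Poly1305 stack (see IMPLEMENTATION.md); only the re-encryption pattern is the point:

```rust
// Toy stand-in cipher: XOR with a one-byte key. NOT real cryptography;
// it only makes the layered re-encryption pattern visible.
fn seal(key: u8, plaintext: &[u8]) -> Vec<u8> {
    plaintext.iter().map(|b| b ^ key).collect()
}

fn open(key: u8, ciphertext: &[u8]) -> Vec<u8> {
    seal(key, ciphertext) // XOR is its own inverse
}

fn main() {
    let user_key = 0x11; // derived from the password
    let root_key = 0x42;
    let wallet_key = b"wallet-secret";

    let sealed_root = seal(user_key, &[root_key]); // user key encrypts root key
    let sealed_wallet = seal(root_key, wallet_key); // root key encrypts wallet key

    // Password rotation: re-encrypt ONLY the root key.
    let new_user_key = 0x99;
    let root = open(user_key, &sealed_root)[0];
    let resealed_root = seal(new_user_key, &[root]);

    // Wallet ciphertexts are untouched and still decrypt correctly.
    assert_eq!(open(root_key, &sealed_wallet), wallet_key.to_vec());
    assert_eq!(open(new_user_key, &resealed_root)[0], root_key);
}
```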
### 4.2 Encryption at Rest

The database stores everything in encrypted form using symmetric AEAD. The encryption scheme is versioned to support transparent migration — when the vault unseals, Arbiter automatically re-encrypts any entries that are behind the current scheme version. See [IMPLEMENTATION.md](IMPLEMENTATION.md) for the specific scheme and versioning mechanism.

---

## 5. Vault Lifecycle

### 5.1 Sealed State

On boot, the root key is available only in encrypted form, so the server cannot perform any signing operations. This state is called **Sealed**.

### 5.2 Unseal Flow

To transition to the **Unsealed** state, a User Agent must provide the password:

1. The User Agent initiates an unseal request.
2. The server generates a one-time key pair and returns the public key.
3. The User Agent encrypts the user's password with this one-time public key and sends the ciphertext to the server.
4. The server decrypts and verifies the password:
   - **Success:** The root key is decrypted and placed into a hardened memory cell. The server transitions to `Unsealed`. Any entries pending encryption scheme migration are re-encrypted.
   - **Failure:** The server returns an error indicating the password is incorrect.
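The Sealed/Unsealed transition can be sketched as a small state machine (the real server drives its lifecycle with the `smlang` macro; the password check and AEAD decryption here are stand-ins):

```rust
#[derive(Debug, PartialEq)]
enum VaultState {
    Sealed,
    Unsealed { root_key: Vec<u8> },
}

impl VaultState {
    /// Step 4 of the unseal flow: verify the password, then "decrypt" the
    /// stored root key (decryption stubbed out in this sketch).
    fn unseal(
        &mut self,
        password: &str,
        expected: &str,
        encrypted_root: &[u8],
    ) -> Result<(), &'static str> {
        match self {
            VaultState::Unsealed { .. } => Err("already unsealed"),
            VaultState::Sealed => {
                if password != expected {
                    return Err("incorrect password"); // stays Sealed
                }
                *self = VaultState::Unsealed { root_key: encrypted_root.to_vec() };
                Ok(())
            }
        }
    }
}

fn main() {
    let mut vault = VaultState::Sealed;
    assert!(vault.unseal("wrong", "hunter2", b"root").is_err());
    assert_eq!(vault, VaultState::Sealed); // failed attempts leave it sealed
    assert!(vault.unseal("hunter2", "hunter2", b"root").is_ok());
    assert!(matches!(vault, VaultState::Unsealed { .. }));
}
```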
### 5.3 Memory Protection

Once unsealed, the root key must be protected in memory against:

- Memory dumps
- Page swaps to disk
- Hibernation files

See [IMPLEMENTATION.md](IMPLEMENTATION.md) for the current and planned memory protection approaches.

---

## 6. Permission Engine

### 6.1 Fundamental Rules

- SDK clients have **no access by default**.
- Access is granted **explicitly** by a User Agent.
- Grants are scoped to **specific wallets** and governed by **policies**.

Each blockchain requires its own policy system due to differences in static transaction analysis. Currently, only EVM is supported; Solana support is planned.

Arbiter is also responsible for ensuring that **transaction nonces are never reused**.

### 6.2 EVM Policies

Every EVM grant is scoped to a specific **wallet** and **chain ID**.

#### 6.2.1 Transaction Sub-Grants

Arbiter maintains an ever-expanding database of known contracts and their ABIs. Based on contract knowledge, transaction requests fall into three categories:

**1. Known contract (ABI available)**

The transaction can be decoded and presented with semantic meaning. For example: *"Client X wants to transfer Y USDT to address Z."*

Available restrictions:

- Volume limits (e.g., "no more than 10,000 tokens ever")
- Rate limits (e.g., "no more than 100 tokens per hour")

**2. Unknown contract (no ABI)**

The transaction cannot be decoded, so its effects are opaque — it could do anything, including draining all tokens. The user is warned, and if approved, access is granted to all interactions with the contract (matched by the `to` field).

Available restrictions:

- Transaction count limits (e.g., "no more than 100 transactions ever")
- Rate limits (e.g., "no more than 5 transactions per hour")

**3. Plain ether transfer (no calldata)**

These transactions have no `calldata` and therefore cannot interact with contracts. They can be subject to the same volume and rate restrictions as above.

#### 6.2.2 Global Limits

In addition to sub-grant-specific restrictions, the following limits can be applied across all grant types:

- **Gas limit** — Maximum gas per transaction.
- **Time-window restrictions** — e.g., signing allowed only 08:00–20:00 on Mondays and Thursdays.
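A hedged sketch of how a combined volume and rate limit check might look; the policy engine is not part of this change set, so the names (`Policy`, `GrantUsage`, `check`) are hypothetical:

```rust
struct Policy {
    max_total: u64,      // volume limit: "no more than 10,000 tokens ever"
    max_per_window: u64, // rate limit: "no more than 100 tokens per hour"
}

struct GrantUsage {
    total: u64,
    window_start: u64, // unix seconds
    window_used: u64,
}

const WINDOW_SECS: u64 = 3600;

/// Returns true and records the spend if both limits allow it.
fn check(policy: &Policy, usage: &mut GrantUsage, amount: u64, now: u64) -> bool {
    // Roll the rate window once an hour has passed.
    if now - usage.window_start >= WINDOW_SECS {
        usage.window_start = now;
        usage.window_used = 0;
    }
    if usage.total + amount > policy.max_total {
        return false; // lifetime volume exhausted
    }
    if usage.window_used + amount > policy.max_per_window {
        return false; // hourly rate exhausted
    }
    usage.total += amount;
    usage.window_used += amount;
    true
}

fn main() {
    let policy = Policy { max_total: 10_000, max_per_window: 100 };
    let mut usage = GrantUsage { total: 0, window_start: 0, window_used: 0 };
    assert!(check(&policy, &mut usage, 100, 10)); // fills this hour's budget
    assert!(!check(&policy, &mut usage, 1, 20)); // rate limit hit
    assert!(check(&policy, &mut usage, 1, 4000)); // next window: allowed again
}
```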

IMPLEMENTATION.md (new file, 35 lines)

@@ -0,0 +1,35 @@
# Implementation Details

This document covers concrete technology choices and dependencies. For the architectural design, see [ARCHITECTURE.md](ARCHITECTURE.md).

---

## Cryptography

### Authentication

- **Signature scheme:** ed25519

### Encryption at Rest

- **Scheme:** Symmetric AEAD — currently **XChaCha20-Poly1305**
- **Version tracking:** Each `aead_encrypted` database entry carries a `scheme` field denoting the version, enabling transparent migration on unseal

### Server Identity

- **Transport:** TLS with a self-signed certificate
- **Key type:** Generated on first run; long-term (no rotation mechanism yet)

---

## Communication

- **Protocol:** gRPC with Protocol Buffers
- **Server identity distribution:** `ServerInfo` protobuf struct containing the TLS public key fingerprint
- **Future consideration:** grpc-web lacks bidirectional stream support, so a browser-based wallet may require protojson over WebSocket

---

## Memory Protection

The unsealed root key must be held in a hardened memory cell resistant to dumps, page swaps, and hibernation.

- **Current:** Using the `memsafe` crate as an interim solution
- **Planned:** Custom implementation based on `mlock` (Unix) and `VirtualProtect` (Windows)
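For contrast, a minimal zero-on-drop cell in plain std Rust. This is explicitly NOT a substitute for `memsafe` or `mlock`/`VirtualProtect`: it only guarantees the secret is wiped when dropped, and does nothing about swap or hibernation files:

```rust
struct SecretCell {
    bytes: Vec<u8>,
}

impl SecretCell {
    fn new(bytes: Vec<u8>) -> Self {
        Self { bytes }
    }

    /// Borrow the secret without copying it out.
    fn expose(&self) -> &[u8] {
        &self.bytes
    }
}

impl Drop for SecretCell {
    fn drop(&mut self) {
        // Volatile writes so the compiler cannot elide the wipe
        // as a dead store before the allocation is freed.
        for b in self.bytes.iter_mut() {
            unsafe { std::ptr::write_volatile(b, 0) };
        }
    }
}

fn main() {
    let cell = SecretCell::new(b"root-key".to_vec());
    assert_eq!(cell.expose(), b"root-key");
    drop(cell); // key material is zeroed here
}
```

A production cell additionally pins the page (`mlock`) and marks it non-dumpable, which is what the planned custom implementation targets.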


@@ -2,6 +2,22 @@
 version = "0.22.1"
 backend = "cargo:cargo-audit"

+[[tools."cargo:cargo-features"]]
+version = "1.0.0"
+backend = "cargo:cargo-features"
+
+[[tools."cargo:cargo-features-manager"]]
+version = "0.11.1"
+backend = "cargo:cargo-features-manager"
+
+[[tools."cargo:cargo-nextest"]]
+version = "0.9.126"
+backend = "cargo:cargo-nextest"
+
+[[tools."cargo:cargo-shear"]]
+version = "1.9.1"
+backend = "cargo:cargo-shear"
+
+[[tools."cargo:cargo-vet"]]
+version = "0.10.2"
+backend = "cargo:cargo-vet"


@@ -5,4 +5,7 @@
 flutter = "3.38.9-stable"
 protoc = "29.6"
-rust = "1.93.0"
+rust = "1.93.1"
+"cargo:cargo-features-manager" = "0.11.1"
+"cargo:cargo-nextest" = "0.9.126"
+"cargo:cargo-shear" = "latest"


@@ -4,20 +4,65 @@ package arbiter;
 import "auth.proto";

-message ClientMessage {
+message ClientRequest {
   oneof payload {
-    arbiter.auth.AuthChallengeRequest auth_challenge_request = 1;
-    arbiter.auth.AuthChallengeSolution auth_challenge_solution = 2;
+    arbiter.auth.ClientMessage auth_message = 1;
+    CertRotationAck cert_rotation_ack = 2;
   }
 }

-message ServerMessage {
+message ClientResponse {
   oneof payload {
-    arbiter.auth.AuthChallenge auth_challenge = 1;
-    arbiter.auth.AuthResponse auth_response = 2;
+    arbiter.auth.ServerMessage auth_message = 1;
+    CertRotationNotification cert_rotation_notification = 2;
   }
 }

-service Server {
-  rpc Communicate(stream ClientMessage) returns (stream ServerMessage);
+message UserAgentRequest {
+  oneof payload {
+    arbiter.auth.ClientMessage auth_message = 1;
+    CertRotationAck cert_rotation_ack = 2;
+  }
+}
+
+message UserAgentResponse {
+  oneof payload {
+    arbiter.auth.ServerMessage auth_message = 1;
+    CertRotationNotification cert_rotation_notification = 2;
+  }
+}
+
+message ServerInfo {
+  string version = 1;
+  bytes cert_public_key = 2;
+}
+
+// TLS Certificate Rotation Protocol
+message CertRotationNotification {
+  // New public certificate (DER-encoded)
+  bytes new_cert = 1;
+  // Unix timestamp when rotation will be executed (if all ACKs received)
+  int64 rotation_scheduled_at = 2;
+  // Unix timestamp deadline for ACK (7 days from now)
+  int64 ack_deadline = 3;
+  // Rotation ID for tracking
+  int32 rotation_id = 4;
+}
+
+message CertRotationAck {
+  // Rotation ID (from CertRotationNotification)
+  int32 rotation_id = 1;
+  // Client public key for identification
+  bytes client_public_key = 2;
+  // Confirmation that client saved the new certificate
+  bool cert_saved = 3;
+}
+
+service ArbiterService {
+  rpc Client(stream ClientRequest) returns (stream ClientResponse);
+  rpc UserAgent(stream UserAgentRequest) returns (stream UserAgentResponse);
 }
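The ACK-gating decision implied by the rotation messages above can be sketched as follows; field names follow `CertRotationNotification`/`CertRotationAck`, but the `decide` function itself is hypothetical:

```rust
use std::collections::HashSet;

enum Decision {
    Wait,   // still collecting ACKs before the deadline
    Rotate, // every known client confirmed it saved the new cert
    Abort,  // ack_deadline passed without full coverage
}

fn decide(
    known_clients: &HashSet<&str>,
    acked: &HashSet<&str>,
    now: i64,
    ack_deadline: i64,
) -> Decision {
    if known_clients.is_subset(acked) {
        Decision::Rotate
    } else if now >= ack_deadline {
        Decision::Abort
    } else {
        Decision::Wait
    }
}

fn main() {
    let known: HashSet<&str> = ["ua-1", "sdk-1"].into_iter().collect();
    let mut acked: HashSet<&str> = HashSet::new();

    acked.insert("ua-1");
    assert!(matches!(decide(&known, &acked, 100, 1000), Decision::Wait));
    assert!(matches!(decide(&known, &acked, 1000, 1000), Decision::Abort));

    acked.insert("sdk-1");
    assert!(matches!(decide(&known, &acked, 100, 1000), Decision::Rotate));
}
```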


@@ -6,22 +6,30 @@ import "google/protobuf/timestamp.proto";
 message AuthChallengeRequest {
   bytes pubkey = 1;
   optional string bootstrap_token = 2;
 }

 message AuthChallenge {
   bytes pubkey = 1;
-  bytes nonce = 2;
-  google.protobuf.Timestamp minted = 3;
+  int32 nonce = 2;
 }

 message AuthChallengeSolution {
-  AuthChallenge challenge = 1;
-  bytes signature = 2;
+  bytes signature = 1;
 }

-message AuthResponse {
-  string token = 1;
-  string refresh_token = 2;
-  google.protobuf.Timestamp expires_at = 3;
-  google.protobuf.Timestamp refresh_expires_at = 4;
-}
+message AuthOk {}
+
+message ClientMessage {
+  oneof payload {
+    AuthChallengeRequest auth_challenge_request = 1;
+    AuthChallengeSolution auth_challenge_solution = 2;
+  }
+}
+
+message ServerMessage {
+  oneof payload {
+    AuthChallenge auth_challenge = 1;
+    AuthOk auth_ok = 2;
+  }
+}


@@ -0,0 +1,46 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
syntax = "proto3";
package google.protobuf;
option csharp_namespace = "Google.Protobuf.WellKnownTypes";
option cc_enable_arenas = true;
option go_package = "google.golang.org/protobuf/types/known/timestamppb";
option java_package = "com.google.protobuf";
option java_outer_classname = "TimestampProto";
option java_multiple_files = true;
option objc_class_prefix = "GPB";
// A Timestamp represents a point in time independent of any time zone or local
// calendar, encoded as a count of seconds and fractions of seconds at
// nanosecond resolution. The count is relative to an epoch at UTC midnight on
// January 1, 1970, in the proleptic Gregorian calendar which extends the
// Gregorian calendar backwards to year one.
message Timestamp {
  // Represents seconds of UTC time since Unix epoch
  // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
  // 9999-12-31T23:59:59Z inclusive.
  int64 seconds = 1;

  // Non-negative fractions of a second at nanosecond resolution. Negative
  // second values with fractions must still have non-negative nanos values
  // that count forward in time. Must be from 0 to 999,999,999
  // inclusive.
  int32 nanos = 2;
}

protobufs/unseal.proto (new file, 14 lines)

@@ -0,0 +1,14 @@
syntax = "proto3";

package arbiter.unseal;

message UserAgentKeyRequest {}

message ServerKeyResponse {
  bytes pubkey = 1;
}

message UserAgentSealedKey {
  bytes sealed_key = 1;
  bytes pubkey = 2;
  bytes nonce = 3;
}

server/Cargo.lock (generated, 1467 lines): diff suppressed because it is too large


@@ -3,31 +3,23 @@ members = [
"crates/arbiter-client",
"crates/arbiter-proto",
"crates/arbiter-server",
"crates/arbiter-useragent",
]
resolver = "3"
[workspace.dependencies]
prost = "0.14.3"
tonic = "0.14.3"
tonic = { version = "0.14.3", features = ["deflate", "gzip", "tls-connect-info", "zstd"] }
tracing = "0.1.44"
tokio = { version = "1.49.0", features = ["full"] }
ed25519 = "3.0.0-rc.4"
ed25519-dalek = "3.0.0-pre.6"
ed25519-dalek = { version = "3.0.0-pre.6", features = ["rand_core"] }
chrono = { version = "0.4.43", features = ["serde"] }
rand = "0.10.0"
uuid = "1.20.0"
[package]
name = "arbiter"
version = "0.1.0"
edition = "2024"
repository = "https://git.markettakers.org/MarketTakers/arbiter"
[dependencies]
rustls = "0.23.36"
smlang = "0.8.0"
miette = { version = "7.6.0", features = ["fancy", "serde"] }
thiserror = "2.0.18"
async-trait = "0.1.89"
futures = "0.3.31"
tokio-stream = { version = "0.1.18", features = ["full"] }
kameo = "0.19.2"


@@ -2,5 +2,6 @@
name = "arbiter-client"
version = "0.1.0"
edition = "2024"
repository = "https://git.markettakers.org/MarketTakers/arbiter"
[dependencies]


@@ -2,16 +2,20 @@
name = "arbiter-proto"
version = "0.1.0"
edition = "2024"
repository = "https://git.markettakers.org/MarketTakers/arbiter"
[dependencies]
tonic.workspace = true
prost.workspace = true
bytes = "1.11.1"
prost-derive = "0.14.3"
prost-types = { version = "0.14.3", features = ["chrono"] }
tokio.workspace = true
futures.workspace = true
hex = "0.4.3"
tonic-prost = "0.14.3"
prost = "0.14.3"
kameo.workspace = true
[build-dependencies]
prost-build = "0.14.3"
serde_json = "1"
tonic-prost-build = "0.14.3"


@@ -1,16 +1,15 @@
-use tonic_prost_build::configure;
-
 static PROTOBUF_DIR: &str = "../../../protobufs";

 fn main() -> Result<(), Box<dyn std::error::Error>> {
-    configure()
-        .compile_protos(
-            &[
-                format!("{}/arbiter.proto", PROTOBUF_DIR),
-                format!("{}/auth.proto", PROTOBUF_DIR),
-            ],
-            &[PROTOBUF_DIR.to_string()],
-        )
-        .unwrap();
+    let proto_files = vec![
+        format!("{}/arbiter.proto", PROTOBUF_DIR),
+        format!("{}/auth.proto", PROTOBUF_DIR),
+    ];
+    // Compile the protobufs (tonic-prost-build automatically uses prost_types for google.protobuf)
+    tonic_prost_build::configure()
+        .message_attribute(".", "#[derive(::kameo::Reply)]")
+        .compile_protos(&proto_files, &[PROTOBUF_DIR.to_string()])?;
     Ok(())
 }


@@ -1,3 +1,5 @@
+use crate::proto::auth::AuthChallenge;
+
 pub mod proto {
     tonic::include_proto!("arbiter");

@@ -5,3 +7,25 @@ pub mod proto {
         tonic::include_proto!("arbiter.auth");
     }
 }
+
+pub mod transport;
+
+pub static BOOTSTRAP_TOKEN_PATH: &str = "bootstrap_token";
+
+pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {
+    static ARBITER_HOME: &str = ".arbiter";
+    let home_dir = std::env::home_dir().ok_or(std::io::Error::new(
+        std::io::ErrorKind::PermissionDenied,
+        "can not get home directory",
+    ))?;
+    let arbiter_home = home_dir.join(ARBITER_HOME);
+    std::fs::create_dir_all(&arbiter_home)?;
+    Ok(arbiter_home)
+}
+
+pub fn format_challenge(challenge: &AuthChallenge) -> Vec<u8> {
+    let concat_form = format!("{}:{}", challenge.nonce, hex::encode(&challenge.pubkey));
+    concat_form.into_bytes().to_vec()
+}


@@ -0,0 +1,46 @@
use futures::{Stream, StreamExt};
use tokio::sync::mpsc::{self, error::SendError};
use tonic::{Status, Streaming};

// Abstraction for stream for sans-io capabilities
pub trait Bi<T, U>: Stream<Item = Result<T, Status>> + Send + Sync + 'static {
    type Error;

    fn send(
        &mut self,
        item: Result<U, Status>,
    ) -> impl std::future::Future<Output = Result<(), Self::Error>> + Send;
}

// Bi-directional stream abstraction for handling gRPC streaming requests and responses
pub struct BiStream<T, U> {
    pub request_stream: Streaming<T>,
    pub response_sender: mpsc::Sender<Result<U, Status>>,
}

impl<T, U> Stream for BiStream<T, U>
where
    T: Send + 'static,
    U: Send + 'static,
{
    type Item = Result<T, Status>;

    fn poll_next(
        mut self: std::pin::Pin<&mut Self>,
        cx: &mut std::task::Context<'_>,
    ) -> std::task::Poll<Option<Self::Item>> {
        self.request_stream.poll_next_unpin(cx)
    }
}

impl<T, U> Bi<T, U> for BiStream<T, U>
where
    T: Send + 'static,
    U: Send + 'static,
{
    type Error = SendError<Result<U, Status>>;

    async fn send(&mut self, item: Result<U, Status>) -> Result<(), Self::Error> {
        self.response_sender.send(item).await
    }
}


@@ -2,10 +2,51 @@
name = "arbiter-server"
version = "0.1.0"
edition = "2024"
repository = "https://git.markettakers.org/MarketTakers/arbiter"
[dependencies]
diesel = { version = "2.3.6", features = ["sqlite", "uuid", "time", "chrono", "serde_json"] }
diesel-async = { version = "0.7.4", features = ["sqlite", "tokio", "migrations", "pool", "deadpool"] }
diesel = { version = "2.3.6", features = [
"sqlite",
"uuid",
"time",
"chrono",
"serde_json",
] }
diesel-async = { version = "0.7.4", features = [
"bb8",
"migrations",
"sqlite",
"tokio",
] }
ed25519-dalek.workspace = true
arbiter-proto.path = "../arbiter-proto"
tracing.workspace = true
tonic.workspace = true
tokio.workspace = true
rustls.workspace = true
smlang.workspace = true
miette.workspace = true
thiserror.workspace = true
diesel_migrations = { version = "2.3.1", features = ["sqlite"] }
async-trait.workspace = true
secrecy = "0.10.3"
futures.workspace = true
tokio-stream.workspace = true
dashmap = "6.1.0"
rand.workspace = true
rcgen = { version = "0.14.7", features = [
"aws_lc_rs",
"pem",
"x509-parser",
"zeroize",
], default-features = false }
chrono.workspace = true
memsafe = "0.4.0"
zeroize = { version = "1.8.2", features = ["std", "simd"] }
argon2 = { version = "0.5", features = ["std"] }
kameo.workspace = true
hex = "0.4.3"
chacha20poly1305 = "0.10.1"
[dev-dependencies]
test-log = { version = "0.2", default-features = false, features = ["trace"] }


@@ -2,7 +2,7 @@
 # see https://diesel.rs/guides/configuring-diesel-cli

 [print_schema]
-file = "src/schema.rs"
+file = "src/db/schema.rs"
 custom_type_derives = ["diesel::query_builder::QueryId", "Clone"]

 [migrations_directory]


@@ -1 +0,0 @@
-- Your SQL goes here


@@ -0,0 +1,11 @@
-- Rollback TLS rotation tables

-- Drop the column that was added to arbiter_settings
ALTER TABLE arbiter_settings DROP COLUMN current_cert_id;

-- Drop the tables in reverse order
DROP TABLE IF EXISTS tls_rotation_history;
DROP TABLE IF EXISTS rotation_client_acks;
DROP TABLE IF EXISTS tls_rotation_state;

DROP INDEX IF EXISTS idx_tls_certificates_active;
DROP TABLE IF EXISTS tls_certificates;


@@ -0,0 +1,57 @@
-- History of all certificates
CREATE TABLE IF NOT EXISTS tls_certificates (
    id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
    cert BLOB NOT NULL, -- DER-encoded
    cert_key BLOB NOT NULL, -- PEM-encoded
    not_before INTEGER NOT NULL, -- Unix timestamp
    not_after INTEGER NOT NULL, -- Unix timestamp
    created_at INTEGER NOT NULL DEFAULT(unixepoch('now')),
    is_active BOOLEAN NOT NULL DEFAULT 0 -- Only one row may have active=1
) STRICT;

CREATE INDEX idx_tls_certificates_active ON tls_certificates(is_active, not_after);

-- Tracks the rotation process
CREATE TABLE IF NOT EXISTS tls_rotation_state (
    id INTEGER NOT NULL PRIMARY KEY CHECK(id = 1), -- Singleton
    state TEXT NOT NULL DEFAULT('normal') CHECK(state IN ('normal', 'initiated', 'waiting_acks', 'ready')),
    new_cert_id INTEGER REFERENCES tls_certificates(id),
    initiated_at INTEGER,
    timeout_at INTEGER -- Deadline for collecting ACKs (initiated_at + 7 days)
) STRICT;

-- Tracks ACKs from clients
CREATE TABLE IF NOT EXISTS rotation_client_acks (
    rotation_id INTEGER NOT NULL, -- References new_cert_id
    client_key TEXT NOT NULL, -- Client public key (hex)
    ack_received_at INTEGER NOT NULL DEFAULT(unixepoch('now')),
    PRIMARY KEY (rotation_id, client_key)
) STRICT;

-- Audit trail of rotation events
CREATE TABLE IF NOT EXISTS tls_rotation_history (
    id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
    cert_id INTEGER NOT NULL REFERENCES tls_certificates(id),
    event_type TEXT NOT NULL CHECK(event_type IN ('created', 'rotation_initiated', 'acks_complete', 'activated', 'timeout')),
    timestamp INTEGER NOT NULL DEFAULT(unixepoch('now')),
    details TEXT -- JSON with additional details
) STRICT;

-- Migrate the existing certificate
INSERT INTO tls_certificates (id, cert, cert_key, not_before, not_after, is_active, created_at)
SELECT
    1,
    cert,
    cert_key,
    unixepoch('now') as not_before,
    unixepoch('now') + (90 * 24 * 60 * 60) as not_after, -- 90 days
    1 as is_active,
    unixepoch('now')
FROM arbiter_settings WHERE id = 1;

-- Initialize rotation_state
INSERT INTO tls_rotation_state (id, state) VALUES (1, 'normal');

-- Add a reference to the current certificate
ALTER TABLE arbiter_settings ADD COLUMN current_cert_id INTEGER REFERENCES tls_certificates(id);
UPDATE arbiter_settings SET current_cert_id = 1 WHERE id = 1;


@@ -0,0 +1,31 @@
create table if not exists aead_encrypted (
    id INTEGER not null PRIMARY KEY,
    current_nonce integer not null default(1), -- incremented on every re-encryption
    ciphertext blob not null,
    tag blob not null,
    schema_version integer not null default(1) -- a value behind the current version means the algorithm changed and the server must re-encrypt
) STRICT;

-- This is a singleton
create table if not exists arbiter_settings (
    id INTEGER not null PRIMARY KEY CHECK (id = 1), -- singleton row, id must be 1
    root_key_id integer references aead_encrypted (id) on delete RESTRICT, -- if null, the vault wasn't bootstrapped yet
    cert_key blob not null,
    cert blob not null
) STRICT;

create table if not exists useragent_client (
    id integer not null primary key,
    nonce integer not null default (1), -- used for auth challenge
    public_key blob not null,
    created_at integer not null default(unixepoch ('now')),
    updated_at integer not null default(unixepoch ('now'))
) STRICT;

create table if not exists program_client (
    id integer not null primary key,
    nonce integer not null default (1), -- used for auth challenge
    public_key blob not null,
    created_at integer not null default(unixepoch ('now')),
    updated_at integer not null default(unixepoch ('now'))
) STRICT;


@@ -0,0 +1,2 @@
-- Remove argon2_salt column
ALTER TABLE aead_encrypted DROP COLUMN argon2_salt;


@@ -0,0 +1,2 @@
-- Add argon2_salt column to store password derivation salt
ALTER TABLE aead_encrypted ADD COLUMN argon2_salt TEXT;


@@ -0,0 +1,2 @@
pub mod user_agent;
pub mod client;


@@ -0,0 +1,12 @@
use arbiter_proto::{
    proto::{ClientRequest, ClientResponse},
    transport::Bi,
};

use crate::ServerContext;

pub(crate) async fn handle_client(
    _context: ServerContext,
    _bistream: impl Bi<ClientRequest, ClientResponse>,
) {
}


@@ -0,0 +1,374 @@
use std::sync::Arc;
use arbiter_proto::{
proto::{
UserAgentRequest, UserAgentResponse,
auth::{
self, AuthChallengeRequest, ClientMessage, ServerMessage as AuthServerMessage,
client_message::Payload as ClientAuthPayload,
server_message::Payload as ServerAuthPayload,
},
user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
},
transport::Bi,
};
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, dsl::update};
use diesel_async::{AsyncConnection, RunQueryDsl};
use ed25519_dalek::VerifyingKey;
use futures::StreamExt;
use kameo::{
Actor,
actor::{ActorRef, Spawn},
error::SendError,
messages,
prelude::Context,
};
use tokio::sync::mpsc;
use tokio::sync::mpsc::Sender;
use tonic::{Status, transport::Server};
use tracing::{debug, error, info};
use crate::{
ServerContext,
actors::user_agent::auth::AuthChallenge,
context::bootstrap::{BootstrapActor, ConsumeToken},
db::{self, schema},
errors::GrpcStatusExt,
};
#[derive(Debug)]
pub struct ChallengeContext {
challenge: AuthChallenge,
key: VerifyingKey,
}
// Request context with deserialized public key for state machine.
// This intermediate struct is needed because the state machine branches depending on presence of bootstrap token,
// but we want to have the deserialized key in both branches.
#[derive(Clone, Debug)]
pub struct AuthRequestContext {
pubkey: VerifyingKey,
bootstrap_token: Option<String>,
}
smlang::statemachine!(
name: UserAgent,
derive_states: [Debug],
custom_error: false,
transitions: {
*Init + AuthRequest(AuthRequestContext) / auth_request_context = ReceivedAuthRequest(AuthRequestContext),
ReceivedAuthRequest(AuthRequestContext) + ReceivedBootstrapToken = Authenticated,
ReceivedAuthRequest(AuthRequestContext) + SentChallenge(ChallengeContext) / move_challenge = WaitingForChallengeSolution(ChallengeContext),
WaitingForChallengeSolution(ChallengeContext) + ReceivedGoodSolution = Authenticated,
WaitingForChallengeSolution(ChallengeContext) + ReceivedBadSolution = AuthError, // block further transitions, but connection should close anyway
}
);
pub struct DummyContext;
impl UserAgentStateMachineContext for DummyContext {
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn move_challenge(
&mut self,
state_data: &AuthRequestContext,
event_data: ChallengeContext,
) -> Result<ChallengeContext, ()> {
Ok(event_data)
}
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn auth_request_context(
&mut self,
event_data: AuthRequestContext,
) -> Result<AuthRequestContext, ()> {
Ok(event_data)
}
}
#[derive(Actor)]
pub struct UserAgentActor {
db: db::DatabasePool,
bootstrapper: ActorRef<BootstrapActor>,
state: UserAgentStateMachine<DummyContext>,
tx: Sender<Result<UserAgentResponse, Status>>,
}
impl UserAgentActor {
pub(crate) fn new(
context: ServerContext,
tx: Sender<Result<UserAgentResponse, Status>>,
) -> Self {
Self {
db: context.db.clone(),
bootstrapper: context.bootstrapper.clone(),
state: UserAgentStateMachine::new(DummyContext),
tx,
}
}
pub(crate) fn new_manual(
db: db::DatabasePool,
bootstrapper: ActorRef<BootstrapActor>,
tx: Sender<Result<UserAgentResponse, Status>>,
) -> Self {
Self {
db,
bootstrapper,
state: UserAgentStateMachine::new(DummyContext),
tx,
}
}
fn transition(&mut self, event: UserAgentEvents) -> Result<(), Status> {
self.state.process_event(event).map_err(|e| {
error!(?e, "State transition failed");
Status::internal("State machine error")
})?;
Ok(())
}
async fn auth_with_bootstrap_token(
&mut self,
pubkey: ed25519_dalek::VerifyingKey,
token: String,
) -> Result<UserAgentResponse, Status> {
let token_ok: bool = self
.bootstrapper
.ask(ConsumeToken { token })
.await
.map_err(|e| {
error!(?pubkey, "Failed to consume bootstrap token: {e}");
Status::internal("Bootstrap token consumption failed")
})?;
if !token_ok {
error!(?pubkey, "Invalid bootstrap token provided");
return Err(Status::invalid_argument("Invalid bootstrap token"));
}
{
let mut conn = self.db.get().await.to_status()?;
diesel::insert_into(schema::useragent_client::table)
.values((
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
schema::useragent_client::nonce.eq(1),
))
.execute(&mut conn)
.await
.to_status()?;
}
self.transition(UserAgentEvents::ReceivedBootstrapToken)?;
Ok(auth_response(ServerAuthPayload::AuthOk(auth::AuthOk {})))
}
async fn auth_with_challenge(&mut self, pubkey: VerifyingKey, pubkey_bytes: Vec<u8>) -> Output {
let nonce: Option<i32> = {
let mut db_conn = self.db.get().await.to_status()?;
db_conn
.transaction(|conn| {
Box::pin(async move {
let current_nonce = schema::useragent_client::table
.filter(
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
)
.select(schema::useragent_client::nonce)
.first::<i32>(conn)
.await?;
update(schema::useragent_client::table)
.filter(
schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
)
.set(schema::useragent_client::nonce.eq(current_nonce + 1))
.execute(conn)
.await?;
Result::<_, diesel::result::Error>::Ok(current_nonce)
})
})
.await
.optional()
.to_status()?
};
let Some(nonce) = nonce else {
error!(?pubkey, "Public key not found in database");
return Err(Status::unauthenticated("Public key not registered"));
};
let challenge = auth::AuthChallenge {
pubkey: pubkey_bytes,
nonce,
};
self.transition(UserAgentEvents::SentChallenge(ChallengeContext {
challenge: challenge.clone(),
key: pubkey,
}))?;
info!(
?pubkey,
?challenge,
"Sent authentication challenge to client"
);
Ok(auth_response(ServerAuthPayload::AuthChallenge(challenge)))
}
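The nonce handling above is replay protection: each issued challenge consumes the stored nonce and bumps the counter, so a captured solution can never be replayed against a later challenge. A minimal std-only sketch of that rule (a `HashMap` standing in for the `useragent_client` table; all names here are hypothetical):

```rust
use std::collections::HashMap;

/// Issue a challenge nonce for `pubkey`, bumping the stored counter so the
/// same nonce is never handed out twice. Returns None for unknown keys.
fn issue_challenge_nonce(nonces: &mut HashMap<Vec<u8>, i32>, pubkey: &[u8]) -> Option<i32> {
    let counter = nonces.get_mut(pubkey)?;
    let issued = *counter;
    *counter += 1; // the next challenge for this key will be different
    Some(issued)
}
```

The real implementation does the read and increment inside one database transaction so two concurrent auth attempts cannot receive the same nonce.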
fn verify_challenge_solution(
&self,
solution: &auth::AuthChallengeSolution,
) -> Result<(bool, &ChallengeContext), Status> {
let UserAgentStates::WaitingForChallengeSolution(challenge_context) = self.state.state()
else {
error!("Received challenge solution in invalid state");
return Err(Status::invalid_argument(
"Invalid state for challenge solution",
));
};
let formatted_challenge = arbiter_proto::format_challenge(&challenge_context.challenge);
let signature = solution.signature.as_slice().try_into().map_err(|_| {
error!(?solution, "Invalid signature length");
Status::invalid_argument("Invalid signature length")
})?;
let valid = challenge_context
.key
.verify_strict(&formatted_challenge, &signature)
.is_ok();
Ok((valid, challenge_context))
}
}
type Output = Result<UserAgentResponse, Status>;
fn auth_response(payload: ServerAuthPayload) -> UserAgentResponse {
UserAgentResponse {
payload: Some(UserAgentResponsePayload::AuthMessage(AuthServerMessage {
payload: Some(payload),
})),
}
}
#[messages]
impl UserAgentActor {
#[message(ctx)]
pub async fn handle_auth_challenge_request(
&mut self,
req: AuthChallengeRequest,
ctx: &mut Context<Self, Output>,
) -> Output {
let pubkey = req.pubkey.as_array().ok_or(Status::invalid_argument(
"Expected pubkey to have specific length",
))?;
let pubkey = VerifyingKey::from_bytes(pubkey).map_err(|err| {
error!(?pubkey, ?err, "Failed to convert pubkey to VerifyingKey");
Status::invalid_argument("Failed to convert pubkey to VerifyingKey")
})?;
self.transition(UserAgentEvents::AuthRequest(AuthRequestContext {
pubkey,
bootstrap_token: req.bootstrap_token.clone(),
}))?;
match req.bootstrap_token {
Some(token) => self.auth_with_bootstrap_token(pubkey, token).await,
None => self.auth_with_challenge(pubkey, req.pubkey).await,
}
}
#[message(ctx)]
pub async fn handle_auth_challenge_solution(
&mut self,
solution: auth::AuthChallengeSolution,
ctx: &mut Context<Self, Output>,
) -> Output {
let (valid, challenge_context) = self.verify_challenge_solution(&solution)?;
if valid {
info!(
?challenge_context,
"Client provided valid solution to authentication challenge"
);
self.transition(UserAgentEvents::ReceivedGoodSolution)?;
Ok(auth_response(ServerAuthPayload::AuthOk(auth::AuthOk {})))
} else {
error!("Client provided invalid solution to authentication challenge");
self.transition(UserAgentEvents::ReceivedBadSolution)?;
Err(Status::unauthenticated("Invalid challenge solution"))
}
}
}
#[cfg(test)]
mod tests {
use arbiter_proto::proto::{
UserAgentResponse,
auth::{AuthChallengeRequest, AuthOk},
user_agent_response::Payload as UserAgentResponsePayload,
};
use kameo::actor::Spawn;
use crate::{
actors::user_agent::HandleAuthChallengeRequest, context::bootstrap::BootstrapActor, db,
};
use super::UserAgentActor;
#[tokio::test]
#[test_log::test]
pub async fn test_bootstrap_token_auth() {
let db = db::create_test_pool().await;
// explicitly not installing any user_agent pubkeys
let bootstrapper = BootstrapActor::new(&db).await.unwrap(); // the table is empty, so this generates a bootstrap token
let token = bootstrapper.get_token().unwrap();
let bootstrapper_ref = BootstrapActor::spawn(bootstrapper);
let user_agent = UserAgentActor::new_manual(
db.clone(),
bootstrapper_ref,
tokio::sync::mpsc::channel(1).0, // dummy channel, we won't actually send responses in this test
);
let user_agent_ref = UserAgentActor::spawn(user_agent);
// simulate client sending auth request with bootstrap token
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
let result = user_agent_ref
.ask(HandleAuthChallengeRequest {
req: AuthChallengeRequest {
pubkey: pubkey_bytes,
bootstrap_token: Some(token),
},
})
.await
.expect("Shouldn't fail to send message");
// auth succeeded
assert_eq!(
result,
UserAgentResponse {
payload: Some(UserAgentResponsePayload::AuthMessage(
arbiter_proto::proto::auth::ServerMessage {
payload: Some(arbiter_proto::proto::auth::server_message::Payload::AuthOk(
AuthOk {},
)),
},
)),
}
);
}
}
mod transport;
pub(crate) use transport::handle_user_agent;


@@ -0,0 +1,95 @@
use super::UserAgentActor;
use arbiter_proto::proto::{
UserAgentRequest, UserAgentResponse,
auth::{
self, AuthChallenge, AuthChallengeRequest, AuthOk, ClientMessage,
ServerMessage as AuthServerMessage, client_message::Payload as ClientAuthPayload,
server_message::Payload as ServerAuthPayload,
},
user_agent_request::Payload as UserAgentRequestPayload,
user_agent_response::Payload as UserAgentResponsePayload,
};
use futures::StreamExt;
use kameo::{
actor::{ActorRef, Spawn as _},
error::SendError,
};
use tokio::sync::mpsc;
use tonic::Status;
use tracing::error;
use crate::{
actors::user_agent::{HandleAuthChallengeRequest, HandleAuthChallengeSolution},
context::ServerContext,
};
pub(crate) async fn handle_user_agent(
context: ServerContext,
mut req_stream: tonic::Streaming<UserAgentRequest>,
tx: mpsc::Sender<Result<UserAgentResponse, Status>>,
) {
let actor = UserAgentActor::spawn(UserAgentActor::new(context, tx.clone()));
while let Some(Ok(req)) = req_stream.next().await
&& actor.is_alive()
{
match process_message(&actor, req).await {
Ok(resp) => {
if tx.send(Ok(resp)).await.is_err() {
error!(actor = "useragent", "Failed to send response to client");
break;
}
}
Err(status) => {
let _ = tx.send(Err(status)).await;
break;
}
}
}
actor.kill();
}
async fn process_message(
actor: &ActorRef<UserAgentActor>,
req: UserAgentRequest,
) -> Result<UserAgentResponse, Status> {
let msg = req.payload.ok_or_else(|| {
error!(actor = "useragent", "Received message with no payload");
Status::invalid_argument("Expected message with payload")
})?;
let UserAgentRequestPayload::AuthMessage(ClientMessage {
payload: Some(client_message),
}) = msg
else {
error!(
actor = "useragent",
"Received unexpected message type during authentication"
);
return Err(Status::invalid_argument(
"Expected AuthMessage with ClientMessage payload",
));
};
match client_message {
ClientAuthPayload::AuthChallengeRequest(req) => actor
.ask(HandleAuthChallengeRequest { req })
.await
.map_err(into_status),
ClientAuthPayload::AuthChallengeSolution(solution) => actor
.ask(HandleAuthChallengeSolution { solution })
.await
.map_err(into_status),
}
}
fn into_status<M>(e: SendError<M, Status>) -> Status {
match e {
SendError::HandlerError(status) => status,
_ => {
error!(actor = "useragent", "Failed to send message to actor");
Status::internal("session failure")
}
}
}


@@ -0,0 +1,402 @@
use std::collections::HashSet;
use std::sync::Arc;
use std::time::Duration;
use diesel::OptionalExtension as _;
use diesel_async::RunQueryDsl as _;
use ed25519_dalek::VerifyingKey;
use kameo::actor::{ActorRef, Spawn};
use miette::Diagnostic;
use rand::rngs::StdRng;
use secrecy::{ExposeSecret, SecretBox};
use smlang::statemachine;
use thiserror::Error;
use tokio::sync::{watch, RwLock};
use zeroize::Zeroizing;
use crate::{
context::{
bootstrap::{BootstrapActor, generate_token},
lease::LeaseHandler,
tls::{RotationState, RotationTask, TlsDataRaw, TlsManager},
},
db::{
self,
models::ArbiterSetting,
schema::{self, arbiter_settings},
},
};
pub(crate) mod bootstrap;
pub(crate) mod lease;
pub(crate) mod tls;
#[derive(Error, Debug, Diagnostic)]
pub enum InitError {
#[error("Database setup failed: {0}")]
#[diagnostic(code(arbiter_server::init::database_setup))]
DatabaseSetup(#[from] db::DatabaseSetupError),
#[error("Connection acquire failed: {0}")]
#[diagnostic(code(arbiter_server::init::database_pool))]
DatabasePool(#[from] db::PoolError),
#[error("Database query error: {0}")]
#[diagnostic(code(arbiter_server::init::database_query))]
DatabaseQuery(#[from] diesel::result::Error),
#[error("TLS initialization failed: {0}")]
#[diagnostic(code(arbiter_server::init::tls_init))]
Tls(#[from] tls::TlsInitError),
#[error("Bootstrap token generation failed: {0}")]
#[diagnostic(code(arbiter_server::init::bootstrap_token))]
BootstrapToken(#[from] bootstrap::BootstrapError),
#[error("I/O Error: {0}")]
#[diagnostic(code(arbiter_server::init::io))]
Io(#[from] std::io::Error),
}
#[derive(Error, Debug, Diagnostic)]
pub enum UnsealError {
#[error("Database error: {0}")]
#[diagnostic(code(arbiter_server::unseal::database_pool))]
Database(#[from] db::PoolError),
#[error("Query error: {0}")]
#[diagnostic(code(arbiter_server::unseal::database_query))]
Query(#[from] diesel::result::Error),
#[error("Decryption failed: {0}")]
#[diagnostic(code(arbiter_server::unseal::decryption))]
DecryptionFailed(#[from] crate::crypto::CryptoError),
#[error("Invalid state for unseal")]
#[diagnostic(code(arbiter_server::unseal::invalid_state))]
InvalidState,
#[error("Missing salt in database")]
#[diagnostic(code(arbiter_server::unseal::missing_salt))]
MissingSalt,
#[error("No root key configured in database")]
#[diagnostic(code(arbiter_server::unseal::no_root_key))]
NoRootKey,
}
#[derive(Error, Debug, Diagnostic)]
pub enum SealError {
#[error("Invalid state for seal")]
#[diagnostic(code(arbiter_server::seal::invalid_state))]
InvalidState,
}
/// Secure in-memory storage for root encryption key
///
/// Uses `secrecy` crate for automatic zeroization on drop to prevent key material
/// from remaining in memory after use. SecretBox provides heap-allocated secret
/// storage that implements Send + Sync for safe use in async contexts.
pub struct KeyStorage {
/// 32-byte root key protected by SecretBox
key: SecretBox<[u8; 32]>,
}
impl KeyStorage {
/// Create new KeyStorage from a 32-byte root key
pub fn new(key: [u8; 32]) -> Self {
Self {
key: SecretBox::new(Box::new(key)),
}
}
/// Access the key for cryptographic operations
pub fn key(&self) -> &[u8; 32] {
self.key.expose_secret()
}
}
// Drop is implemented automatically via secrecy's Zeroize,
// which zeroes the memory when the key is freed
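What `SecretBox` provides here can be sketched with a std-only wrapper that wipes its buffer on drop. This is illustrative only, under the assumption that zero-on-drop is the property we care about; the real `secrecy` type additionally guards against accidental `Debug`/`Clone`/serialization leaks:

```rust
/// Std-only stand-in for SecretBox<[u8; 32]>: zeroes the key on drop.
struct ZeroizingKey(Box<[u8; 32]>);

impl ZeroizingKey {
    fn new(key: [u8; 32]) -> Self {
        Self(Box::new(key))
    }
    /// Borrow the key material for a cryptographic operation.
    fn expose(&self) -> &[u8; 32] {
        &self.0
    }
}

impl Drop for ZeroizingKey {
    fn drop(&mut self) {
        // Overwrite every byte before the heap allocation is freed.
        for b in self.0.iter_mut() {
            *b = 0;
        }
    }
}
```

Note that a production wipe must also prevent the compiler from optimizing the writes away (e.g. via volatile writes), which is exactly what `zeroize` handles.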
statemachine! {
name: Server,
transitions: {
*NotBootstrapped + Bootstrapped = Sealed,
Sealed + Unsealed(KeyStorage) / move_key = Ready(KeyStorage),
Ready(KeyStorage) + Sealed / dispose_key = Sealed,
}
}
pub struct _Context;
impl ServerStateMachineContext for _Context {
/// Move key from unseal event into Ready state
fn move_key(&mut self, event_data: KeyStorage) -> Result<KeyStorage, ()> {
// Move the KeyStorage from the event straight into the state:
// no cloning, the event data is consumed
Ok(event_data)
}
/// Securely dispose of key when sealing
#[allow(missing_docs)]
#[allow(clippy::unused_unit)]
fn dispose_key(&mut self, _state_data: &KeyStorage) -> Result<(), ()> {
// The KeyStorage is dropped after the state transition;
// secrecy's Zeroize wipes its memory automatically
Ok(())
}
}
pub(crate) struct _ServerContextInner {
pub db: db::DatabasePool,
pub state: RwLock<ServerStateMachine<_Context>>,
pub rng: StdRng,
pub tls: Arc<TlsManager>,
pub bootstrapper: ActorRef<BootstrapActor>,
pub rotation_state: RwLock<RotationState>,
pub rotation_acks: Arc<RwLock<HashSet<VerifyingKey>>>,
pub user_agent_leases: LeaseHandler<VerifyingKey>,
pub client_leases: LeaseHandler<VerifyingKey>,
}
#[derive(Clone)]
pub(crate) struct ServerContext(Arc<_ServerContextInner>);
impl std::ops::Deref for ServerContext {
type Target = _ServerContextInner;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl ServerContext {
/// Check if all active clients have acknowledged the rotation
pub async fn check_rotation_ready(&self) -> bool {
// TODO: Implement proper rotation readiness check
// For now, return false as placeholder
false
}
async fn load_tls(
db: &db::DatabasePool,
settings: Option<&ArbiterSetting>,
) -> Result<TlsManager, InitError> {
match settings {
Some(s) if s.current_cert_id.is_some() => {
// Load active certificate from tls_certificates table
Ok(TlsManager::load_from_db(
db.clone(),
s.current_cert_id.unwrap(),
)
.await?)
}
Some(s) => {
// Legacy migration: extract validity and save to new table
let tls_data_raw = TlsDataRaw {
cert: s.cert.clone(),
key: s.cert_key.clone(),
};
// For legacy certificates, use current time as not_before
// and current time + 90 days as not_after
let not_before = chrono::Utc::now().timestamp();
let not_after = not_before + (90 * 24 * 60 * 60); // 90 days
Ok(TlsManager::new_from_legacy(
db.clone(),
tls_data_raw,
not_before,
not_after,
)
.await?)
}
None => {
// First startup - generate new certificate
Ok(TlsManager::new(db.clone()).await?)
}
}
}
pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
let mut conn = db.get().await?;
let rng = rand::make_rng();
let settings = arbiter_settings::table
.first::<ArbiterSetting>(&mut conn)
.await
.optional()?;
drop(conn);
// Load TLS manager
let tls = Self::load_tls(&db, settings.as_ref()).await?;
// Load rotation state from database
let rotation_state = RotationState::load_from_db(&db)
.await
.unwrap_or(RotationState::Normal);
// Bootstrap token generation and persistence is handled by BootstrapActor::new below
let mut state = ServerStateMachine::new(_Context);
if let Some(settings) = &settings
&& settings.root_key_id.is_some()
{
// TODO: pass the encrypted root key to the state machine and let it handle decryption and transition to Sealed
let _ = state.process_event(ServerEvents::Bootstrapped);
}
// TODO: wire this shutdown channel into a spawned RotationTask
let (_rotation_shutdown_tx, _rotation_shutdown_rx) = watch::channel(false);
// Initialize bootstrap actor
let bootstrapper = BootstrapActor::spawn(BootstrapActor::new(&db).await?);
let context = Arc::new(_ServerContextInner {
db: db.clone(),
rng,
tls: Arc::new(tls),
state: RwLock::new(state),
bootstrapper,
rotation_state: RwLock::new(rotation_state),
rotation_acks: Arc::new(RwLock::new(HashSet::new())),
user_agent_leases: Default::default(),
client_leases: Default::default(),
});
Ok(Self(context))
}
/// Unseal vault with password
pub async fn unseal(&self, password: &str) -> Result<(), UnsealError> {
use crate::crypto::root_key;
use diesel::QueryDsl as _;
// 1. Get root_key_id from settings
let mut conn = self.db.get().await?;
let settings: db::models::ArbiterSetting = schema::arbiter_settings::table
.first(&mut conn)
.await?;
let root_key_id = settings.root_key_id.ok_or(UnsealError::NoRootKey)?;
// 2. Load encrypted root key
let encrypted: db::models::AeadEncrypted = schema::aead_encrypted::table
.find(root_key_id)
.first(&mut conn)
.await?;
let salt = encrypted
.argon2_salt
.as_ref()
.ok_or(UnsealError::MissingSalt)?;
drop(conn);
// 3. Decrypt root key using password
let root_key = root_key::decrypt_root_key(&encrypted, password, salt)
.map_err(UnsealError::DecryptionFailed)?;
// 4. Create secure storage
let key_storage = KeyStorage::new(root_key);
// 5. Transition state machine
let mut state = self.state.write().await;
state
.process_event(ServerEvents::Unsealed(key_storage))
.map_err(|_| UnsealError::InvalidState)?;
Ok(())
}
/// Seal the server (lock the key)
pub async fn seal(&self) -> Result<(), SealError> {
let mut state = self.state.write().await;
state
.process_event(ServerEvents::Sealed)
.map_err(|_| SealError::InvalidState)?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_keystorage_creation() {
let key = [42u8; 32];
let storage = KeyStorage::new(key);
assert_eq!(storage.key()[0], 42);
assert_eq!(storage.key().len(), 32);
}
#[test]
fn test_keystorage_zeroization() {
let key = [99u8; 32];
{
let _storage = KeyStorage::new(key);
// storage is dropped here
}
// After the drop, SecretBox must have zeroed the backing memory;
// this happens automatically via secrecy's Zeroize integration
}
#[test]
fn test_state_machine_transitions() {
let mut state = ServerStateMachine::new(_Context);
// Initial state
assert!(matches!(state.state(), &ServerStates::NotBootstrapped));
// Bootstrapped transition
state.process_event(ServerEvents::Bootstrapped).unwrap();
assert!(matches!(state.state(), &ServerStates::Sealed));
// Unsealed transition
let key_storage = KeyStorage::new([1u8; 32]);
state
.process_event(ServerEvents::Unsealed(key_storage))
.unwrap();
assert!(matches!(state.state(), &ServerStates::Ready(_)));
// Sealed transition
state.process_event(ServerEvents::Sealed).unwrap();
assert!(matches!(state.state(), &ServerStates::Sealed));
}
#[test]
fn test_move_key_callback() {
let mut ctx = _Context;
let key_storage = KeyStorage::new([7u8; 32]);
let result = ctx.move_key(key_storage);
assert!(result.is_ok());
assert_eq!(result.unwrap().key()[0], 7);
}
#[test]
fn test_dispose_key_callback() {
let mut ctx = _Context;
let key_storage = KeyStorage::new([13u8; 32]);
let result = ctx.dispose_key(&key_storage);
assert!(result.is_ok());
}
#[test]
fn test_invalid_state_transitions() {
let mut state = ServerStateMachine::new(_Context);
// Attempting to unseal without bootstrapping first
let key_storage = KeyStorage::new([1u8; 32]);
let result = state.process_event(ServerEvents::Unsealed(key_storage));
assert!(result.is_err());
// The correct path
state.process_event(ServerEvents::Bootstrapped).unwrap();
// Attempting a second bootstrap
let result = state.process_event(ServerEvents::Bootstrapped);
assert!(result.is_err());
}
}


@@ -0,0 +1,104 @@
use arbiter_proto::{BOOTSTRAP_TOKEN_PATH, home_path};
use diesel::{ExpressionMethods, QueryDsl};
use diesel_async::RunQueryDsl;
use kameo::{Actor, messages};
use memsafe::MemSafe;
use miette::Diagnostic;
use rand::{RngExt, distr::StandardUniform, make_rng, rngs::StdRng};
use secrecy::SecretString;
use thiserror::Error;
use tracing::info;
use zeroize::{Zeroize, Zeroizing};
use crate::{
context::{self, ServerContext},
db::{self, DatabasePool, schema},
};
const TOKEN_LENGTH: usize = 64;
pub async fn generate_token() -> Result<String, std::io::Error> {
let rng: StdRng = make_rng();
let token: String = rng
.sample_iter::<char, _>(StandardUniform)
.take(TOKEN_LENGTH)
.collect();
tokio::fs::write(home_path()?.join(BOOTSTRAP_TOKEN_PATH), token.as_str()).await?;
Ok(token)
}
#[derive(Error, Debug, Diagnostic)]
pub enum BootstrapError {
#[error("Database error: {0}")]
#[diagnostic(code(arbiter_server::bootstrap::database))]
Database(#[from] db::PoolError),
#[error("Database query error: {0}")]
#[diagnostic(code(arbiter_server::bootstrap::database_query))]
Query(#[from] diesel::result::Error),
#[error("I/O error: {0}")]
#[diagnostic(code(arbiter_server::bootstrap::io))]
Io(#[from] std::io::Error),
}
#[derive(Actor)]
pub struct BootstrapActor {
token: Option<String>,
}
impl BootstrapActor {
pub async fn new(db: &DatabasePool) -> Result<Self, BootstrapError> {
let mut conn = db.get().await?;
let row_count: i64 = schema::useragent_client::table
.count()
.get_result(&mut conn)
.await?;
drop(conn);
let token = if row_count == 0 {
let token = generate_token().await?;
info!(%token, "Generated bootstrap token");
Some(token)
} else {
None
};
Ok(Self { token })
}
#[cfg(test)]
pub fn get_token(&self) -> Option<String> {
self.token.clone()
}
}
#[messages]
impl BootstrapActor {
#[message]
pub fn is_correct_token(&self, token: String) -> bool {
match &self.token {
Some(expected) => *expected == token,
None => false,
}
}
#[message]
pub fn consume_token(&mut self, token: String) -> bool {
if self.is_correct_token(token) {
self.token = None;
true
} else {
false
}
}
}


@@ -0,0 +1,46 @@
use std::sync::Arc;
use dashmap::DashSet;
#[derive(Clone, Default)]
struct LeaseStorage<T: Eq + std::hash::Hash>(Arc<DashSet<T>>);
/// A lease that automatically releases the item when dropped
pub struct Lease<T: Clone + std::hash::Hash + Eq> {
item: T,
storage: LeaseStorage<T>,
}
impl<T: Clone + std::hash::Hash + Eq> Drop for Lease<T> {
fn drop(&mut self) {
self.storage.0.remove(&self.item);
}
}
#[derive(Clone, Default)]
pub struct LeaseHandler<T: Clone + std::hash::Hash + Eq> {
storage: LeaseStorage<T>,
}
impl<T: Clone + std::hash::Hash + Eq> LeaseHandler<T> {
pub fn new() -> Self {
Self {
storage: LeaseStorage(Arc::new(DashSet::new())),
}
}
pub fn acquire(&self, item: T) -> Result<Lease<T>, ()> {
if self.storage.0.insert(item.clone()) {
Ok(Lease {
item,
storage: self.storage.clone(),
})
} else {
Err(())
}
}
/// Get all currently leased items
pub fn get_all(&self) -> Vec<T> {
self.storage.0.iter().map(|entry| entry.clone()).collect()
}
}
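The drop-based release above is the classic RAII guard pattern: holding a `Lease` is holding the slot, and dropping it frees the slot even on panic or early return. A std-only sketch of the same idea (`Mutex<HashSet>` standing in for `DashSet`; the names are hypothetical):

```rust
use std::collections::HashSet;
use std::hash::Hash;
use std::sync::{Arc, Mutex};

/// Guard that frees its slot when dropped.
struct SketchLease<T: Clone + Eq + Hash> {
    item: T,
    set: Arc<Mutex<HashSet<T>>>,
}

impl<T: Clone + Eq + Hash> Drop for SketchLease<T> {
    fn drop(&mut self) {
        self.set.lock().unwrap().remove(&self.item);
    }
}

/// Acquire an exclusive lease on `item`; None if it is already leased.
fn sketch_acquire<T: Clone + Eq + Hash>(
    set: &Arc<Mutex<HashSet<T>>>,
    item: T,
) -> Option<SketchLease<T>> {
    // insert() returns false when the item is already present.
    if set.lock().unwrap().insert(item.clone()) {
        Some(SketchLease { item, set: Arc::clone(set) })
    } else {
        None
    }
}
```

`DashSet` in the real code removes the single global lock, but the acquire/release semantics are the same.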


@@ -0,0 +1,192 @@
use std::sync::Arc;
use std::string::FromUtf8Error;
use miette::Diagnostic;
use rcgen::{Certificate, KeyPair};
use rustls::pki_types::CertificateDer;
use thiserror::Error;
use tokio::sync::RwLock;
use crate::db;
pub mod rotation;
pub use rotation::{RotationError, RotationState, RotationTask};
#[derive(Error, Debug, Diagnostic)]
#[expect(clippy::enum_variant_names)]
pub enum TlsInitError {
#[error("Key generation error during TLS initialization: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_generation))]
KeyGeneration(#[from] rcgen::Error),
#[error("Key invalid format: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_invalid_format))]
KeyInvalidFormat(#[from] FromUtf8Error),
#[error("Key deserialization error: {0}")]
#[diagnostic(code(arbiter_server::tls_init::key_deserialization))]
KeyDeserializationError(rcgen::Error),
}
pub struct TlsData {
pub cert: CertificateDer<'static>,
pub keypair: KeyPair,
}
pub struct TlsDataRaw {
pub cert: Vec<u8>,
pub key: Vec<u8>,
}
impl TlsDataRaw {
pub fn serialize(cert: &TlsData) -> Self {
Self {
cert: cert.cert.as_ref().to_vec(),
key: cert.keypair.serialize_pem().as_bytes().to_vec(),
}
}
pub fn deserialize(&self) -> Result<TlsData, TlsInitError> {
let cert = CertificateDer::from_slice(&self.cert).into_owned();
let key =
String::from_utf8(self.key.clone()).map_err(TlsInitError::KeyInvalidFormat)?;
let keypair = KeyPair::from_pem(&key).map_err(TlsInitError::KeyDeserializationError)?;
Ok(TlsData { cert, keypair })
}
}
/// Metadata about a certificate including validity period
pub struct CertificateMetadata {
pub cert_id: i32,
pub cert: CertificateDer<'static>,
pub keypair: Arc<KeyPair>,
pub not_before: i64,
pub not_after: i64,
pub created_at: i64,
}
pub(crate) fn generate_cert(key: &KeyPair) -> Result<(Certificate, i64, i64), rcgen::Error> {
let params = rcgen::CertificateParams::new(vec![
"arbiter.local".to_string(),
"localhost".to_string(),
])?;
// Set validity period: 90 days from now
let not_before = chrono::Utc::now();
let not_after = not_before + chrono::Duration::days(90);
// Note: rcgen doesn't directly expose not_before/not_after setting in all versions
// For now, we'll generate the cert and track validity separately
let cert = params.self_signed(key)?;
Ok((cert, not_before.timestamp(), not_after.timestamp()))
}
/// Manages the active TLS certificate; supports rotation
pub(crate) struct TlsManager {
// Current active certificate (atomic replacement via RwLock)
current_cert: Arc<RwLock<CertificateMetadata>>,
// Database pool for persistence
db: db::DatabasePool,
}
impl TlsManager {
/// Create new TlsManager with a generated certificate
pub async fn new(db: db::DatabasePool) -> Result<Self, TlsInitError> {
let keypair = KeyPair::generate()?;
let (cert, not_before, not_after) = generate_cert(&keypair)?;
let cert_der = cert.der().clone();
// For initial creation, cert_id will be set after DB insert
let metadata = CertificateMetadata {
cert_id: 0, // Temporary, will be updated after DB insert
cert: cert_der,
keypair: Arc::new(keypair),
not_before,
not_after,
created_at: chrono::Utc::now().timestamp(),
};
Ok(Self {
current_cert: Arc::new(RwLock::new(metadata)),
db,
})
}
/// Load TlsManager from database with specific certificate ID
pub async fn load_from_db(_db: db::DatabasePool, _cert_id: i32) -> Result<Self, TlsInitError> {
// TODO: Load certificate from database
// For now, return an error; to be implemented once database access is ready
Err(TlsInitError::KeyGeneration(rcgen::Error::CouldNotParseCertificate))
}
/// Create from legacy TlsDataRaw format
pub async fn new_from_legacy(
db: db::DatabasePool,
data: TlsDataRaw,
not_before: i64,
not_after: i64,
) -> Result<Self, TlsInitError> {
let tls_data = data.deserialize()?;
let metadata = CertificateMetadata {
cert_id: 1, // Legacy certificate gets ID 1
cert: tls_data.cert,
keypair: Arc::new(tls_data.keypair),
not_before,
not_after,
created_at: chrono::Utc::now().timestamp(),
};
Ok(Self {
current_cert: Arc::new(RwLock::new(metadata)),
db,
})
}
/// Get current certificate data
pub async fn get_certificate(&self) -> (CertificateDer<'static>, Arc<KeyPair>) {
let cert = self.current_cert.read().await;
(cert.cert.clone(), cert.keypair.clone())
}
/// Replace certificate atomically
pub async fn replace_certificate(&self, new_cert: CertificateMetadata) -> Result<(), TlsInitError> {
let mut cert = self.current_cert.write().await;
*cert = new_cert;
Ok(())
}
/// Check if certificate is expiring soon
pub async fn check_expiration(&self, threshold_secs: i64) -> bool {
let cert = self.current_cert.read().await;
let now = chrono::Utc::now().timestamp();
cert.not_after - now < threshold_secs
}
/// Get certificate metadata for rotation logic
pub async fn get_certificate_metadata(&self) -> CertificateMetadata {
let cert = self.current_cert.read().await;
CertificateMetadata {
cert_id: cert.cert_id,
cert: cert.cert.clone(),
keypair: cert.keypair.clone(),
not_before: cert.not_before,
not_after: cert.not_after,
created_at: cert.created_at,
}
}
pub fn bytes(&self) -> TlsDataRaw {
// TODO: the certificate now lives behind an async RwLock, so this sync
// accessor cannot read it; make it async or remove it
TlsDataRaw {
cert: vec![],
key: vec![],
}
}
}


@@ -0,0 +1,552 @@
use std::collections::HashSet;
use std::sync::Arc;
use std::time::Duration;
use diesel::prelude::*;
use diesel_async::RunQueryDsl;
use ed25519_dalek::VerifyingKey;
use miette::Diagnostic;
use rcgen::KeyPair;
use thiserror::Error;
use tokio::sync::watch;
use tracing::{debug, error, info, warn};
use crate::context::ServerContext;
use crate::db::models::{NewRotationClientAck, NewTlsCertificate, NewTlsRotationHistory};
use crate::db::schema::{rotation_client_acks, tls_certificates, tls_rotation_history, tls_rotation_state};
use crate::db::DatabasePool;
use super::{generate_cert, CertificateMetadata, TlsInitError};
#[derive(Error, Debug, Diagnostic)]
pub enum RotationError {
#[error("Certificate generation failed: {0}")]
#[diagnostic(code(arbiter_server::rotation::cert_generation))]
CertGeneration(#[from] rcgen::Error),
#[error("Database error: {0}")]
#[diagnostic(code(arbiter_server::rotation::database))]
Database(#[from] diesel::result::Error),
#[error("TLS initialization error: {0}")]
#[diagnostic(code(arbiter_server::rotation::tls_init))]
TlsInit(#[from] TlsInitError),
#[error("Invalid rotation state: {0}")]
#[diagnostic(code(arbiter_server::rotation::invalid_state))]
InvalidState(String),
#[error("No active certificate found")]
#[diagnostic(code(arbiter_server::rotation::no_active_cert))]
NoActiveCertificate,
}
/// State of the certificate rotation process
#[derive(Debug, Clone)]
pub enum RotationState {
/// Normal operation; no rotation in progress
Normal,
/// Rotation initiated; a new certificate has been generated
RotationInitiated {
initiated_at: i64,
new_cert_id: i32,
},
/// Waiting for acknowledgements (ACKs) from clients
WaitingForAcks {
new_cert_id: i32,
initiated_at: i64,
timeout_at: i64,
},
/// All ACKs received or the timeout expired; ready to rotate
ReadyToRotate {
new_cert_id: i32,
},
}
impl RotationState {
/// Load the state from the database
pub async fn load_from_db(db: &DatabasePool) -> Result<Self, RotationError> {
use crate::db::schema::tls_rotation_state::dsl::*;
let mut conn = db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
let state_record: (i32, String, Option<i32>, Option<i32>, Option<i32>) =
tls_rotation_state
.select((id, state, new_cert_id, initiated_at, timeout_at))
.filter(id.eq(1))
.first(&mut conn)
.await?;
let rotation_state = match state_record.1.as_str() {
"normal" => RotationState::Normal,
"initiated" => {
let cert_id = state_record.2.ok_or_else(|| {
RotationError::InvalidState("Initiated state missing new_cert_id".into())
})?;
let init_at = state_record.3.ok_or_else(|| {
RotationError::InvalidState("Initiated state missing initiated_at".into())
})?;
RotationState::RotationInitiated {
initiated_at: init_at as i64,
new_cert_id: cert_id,
}
}
"waiting_acks" => {
let cert_id = state_record.2.ok_or_else(|| {
RotationError::InvalidState("WaitingForAcks state missing new_cert_id".into())
})?;
let init_at = state_record.3.ok_or_else(|| {
RotationError::InvalidState("WaitingForAcks state missing initiated_at".into())
})?;
let timeout = state_record.4.ok_or_else(|| {
RotationError::InvalidState("WaitingForAcks state missing timeout_at".into())
})?;
RotationState::WaitingForAcks {
new_cert_id: cert_id,
initiated_at: init_at as i64,
timeout_at: timeout as i64,
}
}
"ready" => {
let cert_id = state_record.2.ok_or_else(|| {
RotationError::InvalidState("Ready state missing new_cert_id".into())
})?;
RotationState::ReadyToRotate {
new_cert_id: cert_id,
}
}
other => {
return Err(RotationError::InvalidState(format!(
"Unknown state: {}",
other
)))
}
};
Ok(rotation_state)
}
/// Save the state to the database
pub async fn save_to_db(&self, db: &DatabasePool) -> Result<(), RotationError> {
use crate::db::schema::tls_rotation_state::dsl::*;
let mut conn = db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
let (state_str, cert_id, init_at, timeout) = match self {
RotationState::Normal => ("normal", None, None, None),
RotationState::RotationInitiated {
initiated_at: init,
new_cert_id: cert,
} => ("initiated", Some(*cert), Some(*init as i32), None),
RotationState::WaitingForAcks {
new_cert_id: cert,
initiated_at: init,
timeout_at: timeout_val,
} => (
"waiting_acks",
Some(*cert),
Some(*init as i32),
Some(*timeout_val as i32),
),
RotationState::ReadyToRotate { new_cert_id: cert } => ("ready", Some(*cert), None, None),
};
diesel::update(tls_rotation_state.filter(id.eq(1)))
.set((
state.eq(state_str),
new_cert_id.eq(cert_id),
initiated_at.eq(init_at),
timeout_at.eq(timeout),
))
.execute(&mut conn)
.await?;
Ok(())
}
}
/// Background task for automatic certificate rotation
pub struct RotationTask {
context: Arc<crate::context::_ServerContextInner>,
check_interval: Duration,
rotation_threshold: Duration,
ack_timeout: Duration,
shutdown_rx: watch::Receiver<bool>,
}
impl RotationTask {
/// Create a new rotation task
pub fn new(
context: Arc<crate::context::_ServerContextInner>,
check_interval: Duration,
rotation_threshold: Duration,
ack_timeout: Duration,
shutdown_rx: watch::Receiver<bool>,
) -> Self {
Self {
context,
check_interval,
rotation_threshold,
ack_timeout,
shutdown_rx,
}
}
/// Run the background monitoring and rotation loop
pub async fn run(mut self) -> Result<(), RotationError> {
info!("Starting TLS certificate rotation task");
loop {
tokio::select! {
_ = tokio::time::sleep(self.check_interval) => {
if let Err(e) = self.check_and_process().await {
error!("Rotation task error: {}", e);
}
}
_ = self.shutdown_rx.changed() => {
info!("Rotation task shutting down");
break;
}
}
}
Ok(())
}
/// Inspect the current state and perform any required transition
async fn check_and_process(&self) -> Result<(), RotationError> {
let state = self.context.rotation_state.read().await.clone();
match state {
RotationState::Normal => {
// Check whether rotation is needed
self.check_expiration_and_initiate().await?;
}
RotationState::RotationInitiated { new_cert_id, .. } => {
// Automatically advance to WaitingForAcks
self.transition_to_waiting_acks(new_cert_id).await?;
}
RotationState::WaitingForAcks {
new_cert_id,
timeout_at,
..
} => {
self.handle_waiting_for_acks(new_cert_id, timeout_at).await?;
}
RotationState::ReadyToRotate { new_cert_id } => {
self.execute_rotation(new_cert_id).await?;
}
}
Ok(())
}
/// Check certificate expiration and initiate rotation if needed
async fn check_expiration_and_initiate(&self) -> Result<(), RotationError> {
let threshold_secs = self.rotation_threshold.as_secs() as i64;
if self.context.tls.check_expiration(threshold_secs).await {
info!("Certificate expiring soon, initiating rotation");
self.initiate_rotation().await?;
}
Ok(())
}
/// Initiate rotation: generate a new certificate and persist it to the DB
pub async fn initiate_rotation(&self) -> Result<i32, RotationError> {
info!("Initiating certificate rotation");
// 1. Generate the new certificate
let keypair = KeyPair::generate()?;
let (cert, not_before, not_after) = generate_cert(&keypair)?;
let cert_der = cert.der().clone();
// 2. Persist to the DB (is_active = false until activated)
let new_cert_id = self
.save_new_certificate(&cert_der, &keypair, not_before, not_after)
.await?;
info!(new_cert_id, "New certificate generated and saved");
// 3. Update rotation_state
let new_state = RotationState::RotationInitiated {
initiated_at: chrono::Utc::now().timestamp(),
new_cert_id,
};
*self.context.rotation_state.write().await = new_state.clone();
new_state.save_to_db(&self.context.db).await?;
// 4. Log to the audit trail
self.log_rotation_event(new_cert_id, "rotation_initiated", None)
.await?;
Ok(new_cert_id)
}
/// Transition to the WaitingForAcks state and broadcast notifications
async fn transition_to_waiting_acks(&self, new_cert_id: i32) -> Result<(), RotationError> {
info!(new_cert_id, "Transitioning to WaitingForAcks state");
let initiated_at = chrono::Utc::now().timestamp();
let timeout_at = initiated_at + self.ack_timeout.as_secs() as i64;
// Update the state
let new_state = RotationState::WaitingForAcks {
new_cert_id,
initiated_at,
timeout_at,
};
*self.context.rotation_state.write().await = new_state.clone();
new_state.save_to_db(&self.context.db).await?;
// TODO: Broadcast rotation notifications to clients
// self.broadcast_rotation_notification(new_cert_id, timeout_at).await?;
info!(timeout_at, "Rotation notifications sent, waiting for ACKs");
Ok(())
}
/// Handle the WaitingForAcks state: check ACKs and the timeout
async fn handle_waiting_for_acks(
&self,
new_cert_id: i32,
timeout_at: i64,
) -> Result<(), RotationError> {
let now = chrono::Utc::now().timestamp();
// Check the timeout
if now > timeout_at {
let missing = self.get_missing_acks(new_cert_id).await?;
warn!(
missing_count = missing.len(),
"Rotation ACK timeout reached, proceeding with rotation"
);
// Transition to ReadyToRotate
let new_state = RotationState::ReadyToRotate { new_cert_id };
*self.context.rotation_state.write().await = new_state.clone();
new_state.save_to_db(&self.context.db).await?;
self.log_rotation_event(
new_cert_id,
"timeout",
Some(format!("Missing ACKs from {} clients", missing.len())),
)
.await?;
return Ok(());
}
// Check whether all ACKs have been received
let missing = self.get_missing_acks(new_cert_id).await?;
if missing.is_empty() {
info!("All clients acknowledged, ready to rotate");
let new_state = RotationState::ReadyToRotate { new_cert_id };
*self.context.rotation_state.write().await = new_state.clone();
new_state.save_to_db(&self.context.db).await?;
self.log_rotation_event(new_cert_id, "acks_complete", None)
.await?;
} else {
let time_remaining = timeout_at - now;
debug!(
missing_count = missing.len(),
time_remaining,
"Waiting for rotation ACKs"
);
}
Ok(())
}
/// Perform the atomic certificate rotation
async fn execute_rotation(&self, new_cert_id: i32) -> Result<(), RotationError> {
info!(new_cert_id, "Executing certificate rotation");
// 1. Load the new certificate from the DB
let new_cert = self.load_certificate(new_cert_id).await?;
// 2. Atomically swap it into the TlsManager
self.context
.tls
.replace_certificate(new_cert)
.await
.map_err(RotationError::TlsInit)?;
// 3. Update the DB: old cert is_active=false, new cert is_active=true
self.activate_certificate(new_cert_id).await?;
// 4. TODO: Disconnect all clients
// self.disconnect_all_clients().await?;
// 5. Reset rotation_state
let new_state = RotationState::Normal;
*self.context.rotation_state.write().await = new_state.clone();
new_state.save_to_db(&self.context.db).await?;
// 6. Clear the ACKs
self.context.rotation_acks.write().await.clear();
self.clear_rotation_acks(new_cert_id).await?;
// 7. Audit logging
self.log_rotation_event(new_cert_id, "activated", None)
.await?;
info!(new_cert_id, "Certificate rotation completed successfully");
Ok(())
}
/// Persist the new certificate to the DB
async fn save_new_certificate(
&self,
cert_der: &[u8],
keypair: &KeyPair,
cert_not_before: i64,
cert_not_after: i64,
) -> Result<i32, RotationError> {
use crate::db::schema::tls_certificates::dsl::*;
let mut conn = self.context.db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
let new_cert = NewTlsCertificate {
cert: cert_der.to_vec(),
cert_key: keypair.serialize_pem().as_bytes().to_vec(),
not_before: cert_not_before as i32,
not_after: cert_not_after as i32,
is_active: false,
};
diesel::insert_into(tls_certificates)
.values(&new_cert)
.execute(&mut conn)
.await?;
// Fetch the ID of the last inserted row
let cert_id: i32 = diesel::select(diesel::dsl::sql::<diesel::sql_types::Integer>(
"last_insert_rowid()",
))
.first(&mut conn)
.await?;
self.log_rotation_event(cert_id, "created", None).await?;
Ok(cert_id)
}
/// Load a certificate from the DB
async fn load_certificate(&self, cert_id: i32) -> Result<CertificateMetadata, RotationError> {
use crate::db::schema::tls_certificates::dsl::*;
let mut conn = self.context.db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
let cert_record: (Vec<u8>, Vec<u8>, i32, i32, i32) = tls_certificates
.select((cert, cert_key, not_before, not_after, created_at))
.filter(id.eq(cert_id))
.first(&mut conn)
.await?;
let cert_der = rustls::pki_types::CertificateDer::from(cert_record.0);
let key_pem = String::from_utf8(cert_record.1)
.map_err(|e| RotationError::InvalidState(format!("Invalid key encoding: {}", e)))?;
let keypair = KeyPair::from_pem(&key_pem)?;
Ok(CertificateMetadata {
cert_id,
cert: cert_der,
keypair: Arc::new(keypair),
not_before: cert_record.2 as i64,
not_after: cert_record.3 as i64,
created_at: cert_record.4 as i64,
})
}
/// Activate a certificate (set is_active=true)
async fn activate_certificate(&self, cert_id: i32) -> Result<(), RotationError> {
use crate::db::schema::tls_certificates::dsl::*;
let mut conn = self.context.db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
// Deactivate all certificates
diesel::update(tls_certificates)
.set(is_active.eq(false))
.execute(&mut conn)
.await?;
// Activate the new one
diesel::update(tls_certificates.filter(id.eq(cert_id)))
.set(is_active.eq(true))
.execute(&mut conn)
.await?;
Ok(())
}
/// Get the list of clients that have not yet sent an ACK
async fn get_missing_acks(&self, _rotation_id: i32) -> Result<Vec<VerifyingKey>, RotationError> {
// TODO: Fetch the list of all active clients and subtract
// those that have already sent an ACK.
// For now, return an empty list.
Ok(Vec::new())
}
/// Clear the ACKs for the given rotation from the DB
async fn clear_rotation_acks(&self, completed_rotation_id: i32) -> Result<(), RotationError> {
use crate::db::schema::rotation_client_acks::dsl::*;
let mut conn = self.context.db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
// The parameter is named `completed_rotation_id` so it does not shadow
// the `rotation_id` column brought in by the dsl glob import above
diesel::delete(rotation_client_acks.filter(rotation_id.eq(completed_rotation_id)))
.execute(&mut conn)
.await?;
Ok(())
}
/// Record an event in the audit trail
async fn log_rotation_event(
&self,
history_cert_id: i32,
history_event_type: &str,
history_details: Option<String>,
) -> Result<(), RotationError> {
use crate::db::schema::tls_rotation_history::dsl::*;
let mut conn = self.context.db.get().await.map_err(|e| {
RotationError::InvalidState(format!("Failed to get DB connection: {}", e))
})?;
let new_history = NewTlsRotationHistory {
cert_id: history_cert_id,
event_type: history_event_type.to_string(),
details: history_details,
};
diesel::insert_into(tls_rotation_history)
.values(&new_history)
.execute(&mut conn)
.await?;
Ok(())
}
}
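The lifecycle driven by `check_and_process` above (Normal, RotationInitiated, WaitingForAcks, ReadyToRotate, back to Normal) can be sketched as a pure state machine. This is a simplified illustration only: persistence, broadcasting, and certificate generation are omitted, and the timeout value and certificate id below are placeholders.

```rust
// Simplified model of the rotation lifecycle (assumption: timestamps are
// seconds; DB access, ACK broadcasting, and cert generation are omitted).
#[derive(Debug, Clone, PartialEq)]
enum State {
    Normal,
    Initiated { new_cert_id: i32 },
    WaitingForAcks { new_cert_id: i32, timeout_at: i64 },
    Ready { new_cert_id: i32 },
}

fn step(state: State, now: i64, expiring: bool, all_acked: bool) -> State {
    match state {
        // Normal: only leave when the active cert is close to expiry
        State::Normal if expiring => State::Initiated { new_cert_id: 1 },
        // Initiated: immediately start waiting for client ACKs
        State::Initiated { new_cert_id } => State::WaitingForAcks {
            new_cert_id,
            timeout_at: now + 300, // placeholder ack timeout
        },
        // WaitingForAcks: proceed once everyone ACKed or the timeout passed
        State::WaitingForAcks { new_cert_id, timeout_at }
            if all_acked || now > timeout_at =>
        {
            State::Ready { new_cert_id }
        }
        // Ready: the swap happens, then we return to Normal
        State::Ready { .. } => State::Normal,
        // No transition applies this tick
        other => other,
    }
}

fn main() {
    let mut s = State::Normal;
    s = step(s, 0, true, false);  // cert expiring: move to Initiated
    s = step(s, 0, false, false); // move to WaitingForAcks
    s = step(s, 0, false, true);  // all ACKs in: move to Ready
    s = step(s, 0, false, false); // rotation executed: back to Normal
    assert_eq!(s, State::Normal);
    println!("lifecycle complete: {:?}", s);
}
```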


@@ -0,0 +1,139 @@
use chacha20poly1305::{
aead::{Aead, KeyInit},
ChaCha20Poly1305, Key, Nonce,
};
use super::CryptoError;
/// Encrypt plaintext with AEAD (ChaCha20Poly1305)
///
/// Returns (ciphertext, tag) on success
pub fn encrypt(
plaintext: &[u8],
key: &[u8; 32],
nonce: &[u8; 12],
) -> Result<Vec<u8>, CryptoError> {
let cipher_key = Key::from_slice(key);
let cipher = ChaCha20Poly1305::new(cipher_key);
let nonce_array = Nonce::from_slice(nonce);
cipher
.encrypt(nonce_array, plaintext)
.map_err(|e| CryptoError::AeadEncryption(e.to_string()))
}
/// Decrypt ciphertext with AEAD (ChaCha20Poly1305)
///
/// The ciphertext must include the authentication tag (last 16 bytes)
pub fn decrypt(
ciphertext_with_tag: &[u8],
key: &[u8; 32],
nonce: &[u8; 12],
) -> Result<Vec<u8>, CryptoError> {
let cipher_key = Key::from_slice(key);
let cipher = ChaCha20Poly1305::new(cipher_key);
let nonce_array = Nonce::from_slice(nonce);
cipher
.decrypt(nonce_array, ciphertext_with_tag)
.map_err(|e| CryptoError::AeadDecryption(e.to_string()))
}
/// Generate nonce from counter
///
/// Converts i32 counter to 12-byte nonce (big-endian encoding)
pub fn nonce_from_counter(counter: i32) -> [u8; 12] {
let mut nonce = [0u8; 12];
nonce[8..12].copy_from_slice(&counter.to_be_bytes());
nonce
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_aead_encrypt_decrypt_round_trip() {
let plaintext = b"Hello, World! This is a secret message.";
let key = [42u8; 32];
let nonce = nonce_from_counter(1);
// Encrypt
let ciphertext = encrypt(plaintext, &key, &nonce).expect("Encryption failed");
// Verify ciphertext is different from plaintext
assert_ne!(ciphertext.as_slice(), plaintext);
// Decrypt
let decrypted = decrypt(&ciphertext, &key, &nonce).expect("Decryption failed");
// Verify round-trip
assert_eq!(decrypted.as_slice(), plaintext);
}
#[test]
fn test_aead_decrypt_with_wrong_key() {
let plaintext = b"Secret data";
let key = [1u8; 32];
let wrong_key = [2u8; 32];
let nonce = nonce_from_counter(1);
let ciphertext = encrypt(plaintext, &key, &nonce).expect("Encryption failed");
// Attempt decrypt with wrong key
let result = decrypt(&ciphertext, &wrong_key, &nonce);
// Should fail
assert!(result.is_err());
}
#[test]
fn test_aead_decrypt_with_wrong_nonce() {
let plaintext = b"Secret data";
let key = [1u8; 32];
let nonce = nonce_from_counter(1);
let wrong_nonce = nonce_from_counter(2);
let ciphertext = encrypt(plaintext, &key, &nonce).expect("Encryption failed");
// Attempt decrypt with wrong nonce
let result = decrypt(&ciphertext, &key, &wrong_nonce);
// Should fail
assert!(result.is_err());
}
#[test]
fn test_nonce_generation_from_counter() {
let nonce1 = nonce_from_counter(1);
let nonce2 = nonce_from_counter(2);
let nonce_max = nonce_from_counter(i32::MAX);
// Verify nonces are different
assert_ne!(nonce1, nonce2);
// Verify nonce format (first 8 bytes should be zero, last 4 contain counter)
assert_eq!(&nonce1[0..8], &[0u8; 8]);
assert_eq!(&nonce1[8..12], &1i32.to_be_bytes());
assert_eq!(&nonce_max[8..12], &i32::MAX.to_be_bytes());
}
#[test]
fn test_aead_tampered_ciphertext() {
let plaintext = b"Important message";
let key = [7u8; 32];
let nonce = nonce_from_counter(5);
let mut ciphertext = encrypt(plaintext, &key, &nonce).expect("Encryption failed");
// Tamper with ciphertext (flip a bit)
if let Some(byte) = ciphertext.get_mut(5) {
*byte ^= 0x01;
}
// Attempt decrypt - should fail due to authentication tag mismatch
let result = decrypt(&ciphertext, &key, &nonce);
assert!(result.is_err());
}
}
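The counter-based scheme above relies on the counter being unique per key: a repeated (key, nonce) pair breaks ChaCha20Poly1305's confidentiality and authenticity guarantees. A std-only sketch of the nonce layout, mirroring `nonce_from_counter`:

```rust
// Mirror of the counter-to-nonce layout above (assumption: callers persist
// and strictly increment the counter so no (key, nonce) pair repeats).
fn nonce_from_counter(counter: i32) -> [u8; 12] {
    let mut nonce = [0u8; 12];
    // first 8 bytes stay zero; last 4 carry the counter big-endian
    nonce[8..12].copy_from_slice(&counter.to_be_bytes());
    nonce
}

fn main() {
    let n = nonce_from_counter(0x0102_0304);
    assert_eq!(&n[..8], &[0u8; 8]);                 // zero padding
    assert_eq!(&n[8..], &[0x01, 0x02, 0x03, 0x04]); // big-endian counter
    println!("nonce = {:02x?}", n);
}
```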


@@ -0,0 +1,28 @@
pub mod aead;
pub mod root_key;
use miette::Diagnostic;
use thiserror::Error;
#[derive(Error, Debug, Diagnostic)]
pub enum CryptoError {
#[error("AEAD encryption failed: {0}")]
#[diagnostic(code(arbiter_server::crypto::aead_encryption))]
AeadEncryption(String),
#[error("AEAD decryption failed: {0}")]
#[diagnostic(code(arbiter_server::crypto::aead_decryption))]
AeadDecryption(String),
#[error("Key derivation failed: {0}")]
#[diagnostic(code(arbiter_server::crypto::key_derivation))]
KeyDerivation(String),
#[error("Invalid nonce: {0}")]
#[diagnostic(code(arbiter_server::crypto::invalid_nonce))]
InvalidNonce(String),
#[error("Invalid key format: {0}")]
#[diagnostic(code(arbiter_server::crypto::invalid_key))]
InvalidKey(String),
}


@@ -0,0 +1,240 @@
use argon2::{
password_hash::{rand_core::OsRng, PasswordHasher, SaltString},
Argon2, PasswordHash, PasswordVerifier,
};
use crate::db::models::AeadEncrypted;
use super::{aead, CryptoError};
/// Encrypt root key with user password
///
/// Uses Argon2id for password derivation and ChaCha20Poly1305 for encryption
pub fn encrypt_root_key(
root_key: &[u8; 32],
password: &str,
nonce_counter: i32,
) -> Result<(AeadEncrypted, String), CryptoError> {
// Derive key from password using Argon2
let (derived_key, salt) = derive_key_from_password(password)?;
// Generate nonce from counter
let nonce = aead::nonce_from_counter(nonce_counter);
// Encrypt root key
let ciphertext_with_tag = aead::encrypt(root_key, &derived_key, &nonce)?;
// Extract tag (last 16 bytes)
let tag_start = ciphertext_with_tag
.len()
.checked_sub(16)
.ok_or_else(|| CryptoError::AeadEncryption("Ciphertext too short".into()))?;
let ciphertext = ciphertext_with_tag[..tag_start].to_vec();
let tag = ciphertext_with_tag[tag_start..].to_vec();
let aead_encrypted = AeadEncrypted {
id: 1, // Will be set by database
current_nonce: nonce_counter,
ciphertext,
tag,
schema_version: 1, // Current version
argon2_salt: Some(salt.clone()),
};
Ok((aead_encrypted, salt))
}
/// Decrypt root key with user password
///
/// Verifies password hash and decrypts using ChaCha20Poly1305
pub fn decrypt_root_key(
encrypted: &AeadEncrypted,
password: &str,
salt: &str,
) -> Result<[u8; 32], CryptoError> {
// Derive key from password using stored salt
let derived_key = derive_key_with_salt(password, salt)?;
// Generate nonce from counter
let nonce = aead::nonce_from_counter(encrypted.current_nonce);
// Reconstruct ciphertext with tag
let mut ciphertext_with_tag = encrypted.ciphertext.clone();
ciphertext_with_tag.extend_from_slice(&encrypted.tag);
// Decrypt
let plaintext = aead::decrypt(&ciphertext_with_tag, &derived_key, &nonce)?;
// Verify length
if plaintext.len() != 32 {
return Err(CryptoError::InvalidKey(format!(
"Expected 32 bytes, got {}",
plaintext.len()
)));
}
// Convert to fixed-size array
let mut root_key = [0u8; 32];
root_key.copy_from_slice(&plaintext);
Ok(root_key)
}
/// Derive 32-byte key from password using Argon2id
///
/// Generates new random salt and returns (derived_key, salt_string)
fn derive_key_from_password(password: &str) -> Result<([u8; 32], String), CryptoError> {
let salt = SaltString::generate(&mut OsRng);
let argon2 = Argon2::default();
let password_hash = argon2
.hash_password(password.as_bytes(), &salt)
.map_err(|e| CryptoError::KeyDerivation(e.to_string()))?;
// Extract hash output (32 bytes)
let hash_output = password_hash
.hash
.ok_or_else(|| CryptoError::KeyDerivation("No hash output".into()))?;
let hash_bytes = hash_output.as_bytes();
if hash_bytes.len() != 32 {
return Err(CryptoError::KeyDerivation(format!(
"Expected 32 bytes, got {}",
hash_bytes.len()
)));
}
let mut key = [0u8; 32];
key.copy_from_slice(hash_bytes);
Ok((key, salt.to_string()))
}
/// Derive 32-byte key from password using existing salt
fn derive_key_with_salt(password: &str, salt_str: &str) -> Result<[u8; 32], CryptoError> {
let argon2 = Argon2::default();
// Parse salt
let salt =
SaltString::from_b64(salt_str).map_err(|e| CryptoError::InvalidKey(e.to_string()))?;
let password_hash = argon2
.hash_password(password.as_bytes(), &salt)
.map_err(|e| CryptoError::KeyDerivation(e.to_string()))?;
// Extract hash output
let hash_output = password_hash
.hash
.ok_or_else(|| CryptoError::KeyDerivation("No hash output".into()))?;
let hash_bytes = hash_output.as_bytes();
if hash_bytes.len() != 32 {
return Err(CryptoError::KeyDerivation(format!(
"Expected 32 bytes, got {}",
hash_bytes.len()
)));
}
let mut key = [0u8; 32];
key.copy_from_slice(hash_bytes);
Ok(key)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_root_key_encrypt_decrypt_round_trip() {
let root_key = [42u8; 32];
let password = "super_secret_password_123";
let nonce_counter = 1;
// Encrypt
let (encrypted, salt) =
encrypt_root_key(&root_key, password, nonce_counter).expect("Encryption failed");
// Verify structure
assert_eq!(encrypted.current_nonce, nonce_counter);
assert_eq!(encrypted.schema_version, 1);
assert_eq!(encrypted.tag.len(), 16); // AEAD tag size
// Decrypt
let decrypted =
decrypt_root_key(&encrypted, password, &salt).expect("Decryption failed");
// Verify round-trip
assert_eq!(decrypted, root_key);
}
#[test]
fn test_decrypt_with_wrong_password() {
let root_key = [99u8; 32];
let correct_password = "correct_password";
let wrong_password = "wrong_password";
let nonce_counter = 1;
// Encrypt with correct password
let (encrypted, salt) =
encrypt_root_key(&root_key, correct_password, nonce_counter).expect("Encryption failed");
// Attempt decrypt with wrong password
let result = decrypt_root_key(&encrypted, wrong_password, &salt);
// Should fail due to authentication tag mismatch
assert!(result.is_err());
}
#[test]
fn test_password_derivation_different_salts() {
let password = "same_password";
// Derive key twice - should produce different salts
let (key1, salt1) = derive_key_from_password(password).expect("Derivation 1 failed");
let (key2, salt2) = derive_key_from_password(password).expect("Derivation 2 failed");
// Salts should be different (randomly generated)
assert_ne!(salt1, salt2);
// Keys should be different (due to different salts)
assert_ne!(key1, key2);
}
#[test]
fn test_password_derivation_with_same_salt() {
let password = "test_password";
// Generate key and salt
let (key1, salt) = derive_key_from_password(password).expect("Derivation failed");
// Derive key again with same salt
let key2 = derive_key_with_salt(password, &salt).expect("Re-derivation failed");
// Keys should be identical
assert_eq!(key1, key2);
}
#[test]
fn test_different_nonce_produces_different_ciphertext() {
let root_key = [77u8; 32];
let password = "password123";
let (encrypted1, salt1) = encrypt_root_key(&root_key, password, 1).expect("Encryption 1 failed");
let (encrypted2, salt2) = encrypt_root_key(&root_key, password, 2).expect("Encryption 2 failed");
// Different nonces should produce different ciphertexts
assert_ne!(encrypted1.ciphertext, encrypted2.ciphertext);
// But both should decrypt correctly
let decrypted1 = decrypt_root_key(&encrypted1, password, &salt1).expect("Decryption 1 failed");
let decrypted2 = decrypt_root_key(&encrypted2, password, &salt2).expect("Decryption 2 failed");
assert_eq!(decrypted1, root_key);
assert_eq!(decrypted2, root_key);
}
}
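The `encrypt_root_key`/`decrypt_root_key` pair above splits the AEAD output into separate `ciphertext` and `tag` columns and re-joins them before decryption. A std-only sketch of that framing convention (16 bytes matches the ChaCha20Poly1305 tag size):

```rust
// Sketch of the ciphertext || tag framing used above: the AEAD output ends
// with a 16-byte authentication tag stored in its own column.
fn split_ct_and_tag(ct_with_tag: &[u8]) -> Option<(Vec<u8>, Vec<u8>)> {
    // checked_sub rejects frames shorter than the tag itself
    let tag_start = ct_with_tag.len().checked_sub(16)?;
    Some((
        ct_with_tag[..tag_start].to_vec(),
        ct_with_tag[tag_start..].to_vec(),
    ))
}

fn main() {
    // 5 bytes of "ciphertext" followed by a 16-byte tag
    let framed: Vec<u8> = [vec![1, 2, 3, 4, 5], vec![0xAA; 16]].concat();
    let (ct, tag) = split_ct_and_tag(&framed).expect("frame long enough");
    assert_eq!(ct, vec![1, 2, 3, 4, 5]);
    assert_eq!(tag.len(), 16);
    // a frame shorter than the tag is rejected
    assert!(split_ct_and_tag(&[0u8; 15]).is_none());
    // re-joining restores the original framing for decryption
    let rejoined: Vec<u8> = [ct, tag].concat();
    assert_eq!(rejoined, framed);
}
```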


@@ -0,0 +1,152 @@
use std::sync::Arc;
use diesel::{
Connection as _, SqliteConnection,
connection::{SimpleConnection as _, TransactionManager},
};
use diesel_async::{
AsyncConnection, SimpleAsyncConnection,
pooled_connection::{AsyncDieselConnectionManager, ManagerConfig, RecyclingMethod},
sync_connection_wrapper::SyncConnectionWrapper,
};
use diesel_migrations::{EmbeddedMigrations, MigrationHarness, embed_migrations};
use miette::Diagnostic;
use thiserror::Error;
use tracing::info;
pub mod models;
pub mod schema;
pub type DatabaseConnection = SyncConnectionWrapper<SqliteConnection>;
pub type DatabasePool = diesel_async::pooled_connection::bb8::Pool<DatabaseConnection>;
pub type PoolInitError = diesel_async::pooled_connection::PoolError;
pub type PoolError = diesel_async::pooled_connection::bb8::RunError;
static DB_FILE: &str = "arbiter.sqlite";
const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");
#[derive(Error, Diagnostic, Debug)]
pub enum DatabaseSetupError {
#[error("Failed to determine home directory")]
#[diagnostic(code(arbiter::db::home_dir_error))]
HomeDir(std::io::Error),
#[error(transparent)]
#[diagnostic(code(arbiter::db::connection_error))]
Connection(diesel::ConnectionError),
#[error(transparent)]
#[diagnostic(code(arbiter::db::concurrency_error))]
ConcurrencySetup(diesel::result::Error),
#[error(transparent)]
#[diagnostic(code(arbiter::db::migration_error))]
Migration(Box<dyn std::error::Error + Send + Sync>),
#[error(transparent)]
#[diagnostic(code(arbiter::db::pool_error))]
Pool(#[from] PoolInitError),
}
#[tracing::instrument(level = "info")]
fn database_path() -> Result<std::path::PathBuf, DatabaseSetupError> {
let arbiter_home = arbiter_proto::home_path().map_err(DatabaseSetupError::HomeDir)?;
let db_path = arbiter_home.join(DB_FILE);
Ok(db_path)
}
#[tracing::instrument(level = "info", skip(conn))]
fn db_config(conn: &mut SqliteConnection) -> Result<(), diesel::result::Error> {
// fsync only in critical moments
conn.batch_execute("PRAGMA synchronous = NORMAL;")?;
// write WAL changes back every 1000 pages, i.e. roughly a 1 MB WAL file on average.
// May affect readers if this number is increased
conn.batch_execute("PRAGMA wal_autocheckpoint = 1000;")?;
// free some space by truncating possibly massive WAL files from the last run
conn.batch_execute("PRAGMA wal_checkpoint(TRUNCATE);")?;
// sqlite foreign keys are disabled by default, enable them for safety
conn.batch_execute("PRAGMA foreign_keys = ON;")?;
// better space reclamation
conn.batch_execute("PRAGMA auto_vacuum = FULL;")?;
// secure delete, overwrite deleted content with zeros to prevent recovery
conn.batch_execute("PRAGMA secure_delete = ON;")?;
Ok(())
}
#[tracing::instrument(level = "info", skip(url))]
fn initialize_database(url: &str) -> Result<(), DatabaseSetupError> {
let mut conn = SqliteConnection::establish(url).map_err(DatabaseSetupError::Connection)?;
db_config(&mut conn).map_err(DatabaseSetupError::ConcurrencySetup)?;
conn.run_pending_migrations(MIGRATIONS)
.map_err(DatabaseSetupError::Migration)?;
info!(%url, "Database initialized successfully");
Ok(())
}
#[tracing::instrument(level = "info")]
pub async fn create_pool(url: Option<&str>) -> Result<DatabasePool, DatabaseSetupError> {
let database_url = match url {
Some(u) => u.to_string(),
None => format!(
"{}?mode=rwc",
database_path()?
.to_str()
.expect("database path is not valid UTF-8")
),
};
initialize_database(&database_url)?;
let mut config = ManagerConfig::default();
config.custom_setup = Box::new(|url| {
Box::pin(async move {
let mut conn = DatabaseConnection::establish(url).await?;
// see https://fractaledmind.github.io/2023/09/07/enhancing-rails-sqlite-fine-tuning/
// sleep if the database is busy, this corresponds to up to 9 seconds sleeping time.
conn.batch_execute("PRAGMA busy_timeout = 9000;")
.await
.map_err(diesel::ConnectionError::CouldntSetupConfiguration)?;
// better write-concurrency
conn.batch_execute("PRAGMA journal_mode = WAL;")
.await
.map_err(diesel::ConnectionError::CouldntSetupConfiguration)?;
Ok(conn)
})
});
let pool = DatabasePool::builder()
.build(AsyncDieselConnectionManager::new_with_config(
database_url,
config,
))
.await?;
Ok(pool)
}
#[cfg(test)]
pub async fn create_test_pool() -> DatabasePool {
use rand::distr::{Alphanumeric, SampleString as _};
let tempfile_name = Alphanumeric.sample_string(&mut rand::rng(), 16);
let file = std::env::temp_dir().join(tempfile_name);
let url = format!(
"{}?mode=rwc",
file.to_str().expect("temp file path is not valid UTF-8")
);
create_pool(Some(&url))
.await
.expect("Failed to create test database pool")
}


@@ -0,0 +1,118 @@
#![allow(unused)]
#![allow(clippy::all)]
use crate::db::schema::{self, aead_encrypted, arbiter_settings};
use diesel::{prelude::*, sqlite::Sqlite};
pub mod types {
use chrono::{DateTime, Utc};
pub struct SqliteTimestamp(DateTime<Utc>);
}
#[derive(Queryable, Selectable, Debug, Insertable)]
#[diesel(table_name = aead_encrypted, check_for_backend(Sqlite))]
pub struct AeadEncrypted {
pub id: i32,
pub current_nonce: i32,
pub ciphertext: Vec<u8>,
pub tag: Vec<u8>,
pub schema_version: i32,
pub argon2_salt: Option<String>,
}
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = arbiter_settings, check_for_backend(Sqlite))]
pub struct ArbiterSetting {
pub id: i32,
pub root_key_id: Option<i32>, // references aead_encrypted.id
pub cert_key: Vec<u8>,
pub cert: Vec<u8>,
pub current_cert_id: Option<i32>, // references tls_certificates.id
}
#[derive(Queryable, Debug)]
#[diesel(table_name = schema::program_client, check_for_backend(Sqlite))]
pub struct ProgramClient {
pub id: i32,
pub public_key: Vec<u8>,
pub nonce: i32,
pub created_at: i32,
pub updated_at: i32,
}
#[derive(Queryable, Debug)]
#[diesel(table_name = schema::useragent_client, check_for_backend(Sqlite))]
pub struct UseragentClient {
pub id: i32,
pub public_key: Vec<u8>,
pub nonce: i32,
pub created_at: i32,
pub updated_at: i32,
}
// TLS Certificate Rotation Models
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = schema::tls_certificates, check_for_backend(Sqlite))]
pub struct TlsCertificate {
pub id: i32,
pub cert: Vec<u8>,
pub cert_key: Vec<u8>,
pub not_before: i32,
pub not_after: i32,
pub created_at: i32,
pub is_active: bool,
}
#[derive(Insertable)]
#[diesel(table_name = schema::tls_certificates)]
pub struct NewTlsCertificate {
pub cert: Vec<u8>,
pub cert_key: Vec<u8>,
pub not_before: i32,
pub not_after: i32,
pub is_active: bool,
}
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = schema::tls_rotation_state, check_for_backend(Sqlite))]
pub struct TlsRotationState {
pub id: i32,
pub state: String,
pub new_cert_id: Option<i32>,
pub initiated_at: Option<i32>,
pub timeout_at: Option<i32>,
}
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = schema::rotation_client_acks, check_for_backend(Sqlite))]
pub struct RotationClientAck {
pub rotation_id: i32,
pub client_key: String,
pub ack_received_at: i32,
}
#[derive(Insertable)]
#[diesel(table_name = schema::rotation_client_acks)]
pub struct NewRotationClientAck {
pub rotation_id: i32,
pub client_key: String,
}
#[derive(Queryable, Debug, Insertable)]
#[diesel(table_name = schema::tls_rotation_history, check_for_backend(Sqlite))]
pub struct TlsRotationHistory {
pub id: i32,
pub cert_id: i32,
pub event_type: String,
pub timestamp: i32,
pub details: Option<String>,
}
#[derive(Insertable)]
#[diesel(table_name = schema::tls_rotation_history)]
pub struct NewTlsRotationHistory {
pub cert_id: i32,
pub event_type: String,
pub details: Option<String>,
}


@@ -0,0 +1,99 @@
// @generated automatically by Diesel CLI.
diesel::table! {
aead_encrypted (id) {
id -> Integer,
current_nonce -> Integer,
ciphertext -> Binary,
tag -> Binary,
schema_version -> Integer,
argon2_salt -> Nullable<Text>,
}
}
diesel::table! {
arbiter_settings (id) {
id -> Integer,
root_key_id -> Nullable<Integer>,
cert_key -> Binary,
cert -> Binary,
current_cert_id -> Nullable<Integer>,
}
}
diesel::table! {
program_client (id) {
id -> Integer,
nonce -> Integer,
public_key -> Binary,
created_at -> Integer,
updated_at -> Integer,
}
}
diesel::table! {
useragent_client (id) {
id -> Integer,
nonce -> Integer,
public_key -> Binary,
created_at -> Integer,
updated_at -> Integer,
}
}
diesel::table! {
tls_certificates (id) {
id -> Integer,
cert -> Binary,
cert_key -> Binary,
not_before -> Integer,
not_after -> Integer,
created_at -> Integer,
is_active -> Bool,
}
}
diesel::table! {
tls_rotation_state (id) {
id -> Integer,
state -> Text,
new_cert_id -> Nullable<Integer>,
initiated_at -> Nullable<Integer>,
timeout_at -> Nullable<Integer>,
}
}
diesel::table! {
rotation_client_acks (rotation_id, client_key) {
rotation_id -> Integer,
client_key -> Text,
ack_received_at -> Integer,
}
}
diesel::table! {
tls_rotation_history (id) {
id -> Integer,
cert_id -> Integer,
event_type -> Text,
timestamp -> Integer,
details -> Nullable<Text>,
}
}
diesel::joinable!(arbiter_settings -> aead_encrypted (root_key_id));
diesel::joinable!(arbiter_settings -> tls_certificates (current_cert_id));
diesel::joinable!(tls_rotation_state -> tls_certificates (new_cert_id));
diesel::joinable!(rotation_client_acks -> tls_certificates (rotation_id));
diesel::joinable!(tls_rotation_history -> tls_certificates (cert_id));
diesel::allow_tables_to_appear_in_same_query!(
aead_encrypted,
arbiter_settings,
program_client,
useragent_client,
tls_certificates,
tls_rotation_state,
rotation_client_acks,
tls_rotation_history,
);


@@ -0,0 +1,24 @@
use tonic::Status;
use tracing::error;
pub trait GrpcStatusExt<T> {
fn to_status(self) -> Result<T, Status>;
}
impl<T> GrpcStatusExt<T> for Result<T, diesel::result::Error> {
fn to_status(self) -> Result<T, Status> {
self.map_err(|e| {
error!(error = ?e, "Database error");
Status::internal("Database error")
})
}
}
impl<T> GrpcStatusExt<T> for Result<T, crate::db::PoolError> {
fn to_status(self) -> Result<T, Status> {
self.map_err(|e| {
error!(error = ?e, "Database pool error");
Status::internal("Database pool error")
})
}
}


@@ -1,5 +1,63 @@
#![allow(unused)]

use std::sync::Arc;

use arbiter_proto::{
    proto::{ClientRequest, ClientResponse, UserAgentRequest, UserAgentResponse},
    transport::BiStream,
};
use async_trait::async_trait;
use tokio::sync::mpsc;
use tokio_stream::wrappers::ReceiverStream;
use tonic::{Request, Response, Status};

use crate::{
    actors::{client::handle_client, user_agent::handle_user_agent},
    context::ServerContext,
};

pub mod actors;
mod context;
mod crypto;
mod db;
mod errors;

const DEFAULT_CHANNEL_SIZE: usize = 1000;

pub struct Server {
    context: ServerContext,
}

#[async_trait]
impl arbiter_proto::proto::arbiter_service_server::ArbiterService for Server {
    type UserAgentStream = ReceiverStream<Result<UserAgentResponse, Status>>;
    type ClientStream = ReceiverStream<Result<ClientResponse, Status>>;

    async fn client(
        &self,
        request: Request<tonic::Streaming<ClientRequest>>,
    ) -> Result<Response<Self::ClientStream>, Status> {
        let req_stream = request.into_inner();
        let (tx, rx) = mpsc::channel(DEFAULT_CHANNEL_SIZE);
        tokio::spawn(handle_client(
            self.context.clone(),
            BiStream {
                request_stream: req_stream,
                response_sender: tx,
            },
        ));
        Ok(Response::new(ReceiverStream::new(rx)))
    }

    async fn user_agent(
        &self,
        request: Request<tonic::Streaming<UserAgentRequest>>,
    ) -> Result<Response<Self::UserAgentStream>, Status> {
        let req_stream = request.into_inner();
        let (tx, rx) = mpsc::channel(DEFAULT_CHANNEL_SIZE);
        tokio::spawn(handle_user_agent(self.context.clone(), req_stream, tx));
        Ok(Response::new(ReceiverStream::new(rx)))
    }
}
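Each accepted bidirectional stream above follows the same handoff pattern: create a bounded channel of `DEFAULT_CHANNEL_SIZE`, spawn a handler task that owns the sender, and return the receiver to tonic as the response stream, so a slow client applies backpressure once the channel fills. A simplified synchronous analogue using only the standard library (names and the request list are illustrative, not from the real handlers):

```rust
use std::sync::mpsc::{sync_channel, SyncSender};
use std::thread;

const DEFAULT_CHANNEL_SIZE: usize = 1000;

// Analogue of handle_client: the spawned worker owns the sender and
// pushes one response per request; sends block once the channel holds
// DEFAULT_CHANNEL_SIZE unread responses (backpressure).
fn handle_client(requests: Vec<String>, tx: SyncSender<String>) {
    for req in requests {
        tx.send(format!("handled: {req}")).expect("receiver dropped");
    }
    // Dropping `tx` here ends the response stream for the consumer.
}

fn main() {
    let (tx, rx) = sync_channel(DEFAULT_CHANNEL_SIZE);
    let requests = vec!["a".to_string(), "b".to_string()];
    thread::spawn(move || handle_client(requests, tx));

    // Analogue of ReceiverStream: iterate until the worker drops its sender.
    let responses: Vec<String> = rx.into_iter().collect();
    assert_eq!(responses, vec!["handled: a", "handled: b"]);
    println!("ok");
}
```

The key property carried over from the async version is ownership transfer: once the sender moves into the worker, the connection's lifetime is tied to that task, and the receiver ends cleanly when the task finishes.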


@@ -0,0 +1,6 @@
[package]
name = "arbiter-useragent"
version = "0.1.0"
edition = "2024"
[dependencies]


@@ -0,0 +1,14 @@
pub fn add(left: u64, right: u64) -> u64 {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}


@@ -1,9 +0,0 @@
# For documentation on how to configure this file,
# see https://diesel.rs/guides/configuring-diesel-cli
[print_schema]
file = "src/schema.rs"
custom_type_derives = ["diesel::query_builder::QueryId", "Clone"]
[migrations_directory]
dir = "migrations"


@@ -1,3 +0,0 @@
fn main() {
    println!("Hello, world!");
}


@@ -1,4 +1,48 @@
# cargo-vet audits file
[audits]
[[audits.test-log]]
who = "hdbg <httpdebugger@protonmail.com>"
criteria = "safe-to-deploy"
delta = "0.2.18 -> 0.2.19"
[[audits.test-log-macros]]
who = "hdbg <httpdebugger@protonmail.com>"
criteria = "safe-to-deploy"
delta = "0.2.18 -> 0.2.19"
[[trusted.h2]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2019-03-13"
end = "2027-02-14"
[[trusted.hashbrown]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2025-04-30"
end = "2027-02-14"
[[trusted.hyper-util]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2022-01-15"
end = "2027-02-14"
[[trusted.rustix]]
criteria = "safe-to-deploy"
user-id = 6825 # Dan Gohman (sunfishcode)
start = "2021-10-29"
end = "2027-02-14"
[[trusted.serde_json]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-02-28"
end = "2027-02-14"
[[trusted.syn]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-03-01"
end = "2027-02-14"


@@ -3,3 +3,944 @@
[cargo-vet]
version = "0.10"
[imports.bytecode-alliance]
url = "https://raw.githubusercontent.com/bytecodealliance/wasmtime/main/supply-chain/audits.toml"
[imports.google]
url = "https://raw.githubusercontent.com/google/supply-chain/main/audits.toml"
[imports.mozilla]
url = "https://raw.githubusercontent.com/mozilla/supply-chain/main/audits.toml"
[[exemptions.addr2line]]
version = "0.25.1"
criteria = "safe-to-deploy"
[[exemptions.aho-corasick]]
version = "1.1.4"
criteria = "safe-to-deploy"
[[exemptions.anyhow]]
version = "1.0.101"
criteria = "safe-to-deploy"
[[exemptions.asn1-rs]]
version = "0.7.1"
criteria = "safe-to-deploy"
[[exemptions.asn1-rs-derive]]
version = "0.6.0"
criteria = "safe-to-deploy"
[[exemptions.asn1-rs-impl]]
version = "0.2.0"
criteria = "safe-to-deploy"
[[exemptions.async-trait]]
version = "0.1.89"
criteria = "safe-to-deploy"
[[exemptions.autocfg]]
version = "1.5.0"
criteria = "safe-to-deploy"
[[exemptions.aws-lc-rs]]
version = "1.15.4"
criteria = "safe-to-deploy"
[[exemptions.aws-lc-sys]]
version = "0.37.0"
criteria = "safe-to-deploy"
[[exemptions.axum]]
version = "0.8.8"
criteria = "safe-to-deploy"
[[exemptions.axum-core]]
version = "0.5.6"
criteria = "safe-to-deploy"
[[exemptions.backtrace]]
version = "0.3.76"
criteria = "safe-to-deploy"
[[exemptions.backtrace-ext]]
version = "0.2.1"
criteria = "safe-to-deploy"
[[exemptions.bb8]]
version = "0.9.1"
criteria = "safe-to-deploy"
[[exemptions.bitflags]]
version = "2.10.0"
criteria = "safe-to-deploy"
[[exemptions.block-buffer]]
version = "0.11.0"
criteria = "safe-to-deploy"
[[exemptions.bytes]]
version = "1.11.1"
criteria = "safe-to-deploy"
[[exemptions.cc]]
version = "1.2.55"
criteria = "safe-to-deploy"
[[exemptions.cfg-if]]
version = "1.0.4"
criteria = "safe-to-deploy"
[[exemptions.chacha20]]
version = "0.10.0"
criteria = "safe-to-deploy"
[[exemptions.chrono]]
version = "0.4.43"
criteria = "safe-to-deploy"
[[exemptions.cmake]]
version = "0.1.57"
criteria = "safe-to-deploy"
[[exemptions.cpufeatures]]
version = "0.2.17"
criteria = "safe-to-deploy"
[[exemptions.cpufeatures]]
version = "0.3.0"
criteria = "safe-to-deploy"
[[exemptions.crc32fast]]
version = "1.5.0"
criteria = "safe-to-deploy"
[[exemptions.crossbeam-utils]]
version = "0.8.21"
criteria = "safe-to-deploy"
[[exemptions.crypto-common]]
version = "0.2.0"
criteria = "safe-to-deploy"
[[exemptions.curve25519-dalek]]
version = "5.0.0-pre.6"
criteria = "safe-to-deploy"
[[exemptions.curve25519-dalek-derive]]
version = "0.1.1"
criteria = "safe-to-deploy"
[[exemptions.darling]]
version = "0.21.3"
criteria = "safe-to-deploy"
[[exemptions.darling_core]]
version = "0.21.3"
criteria = "safe-to-deploy"
[[exemptions.darling_macro]]
version = "0.21.3"
criteria = "safe-to-deploy"
[[exemptions.dashmap]]
version = "6.1.0"
criteria = "safe-to-deploy"
[[exemptions.data-encoding]]
version = "2.10.0"
criteria = "safe-to-deploy"
[[exemptions.der-parser]]
version = "10.0.0"
criteria = "safe-to-deploy"
[[exemptions.deranged]]
version = "0.5.5"
criteria = "safe-to-deploy"
[[exemptions.diesel]]
version = "2.3.6"
criteria = "safe-to-deploy"
[[exemptions.diesel-async]]
version = "0.7.4"
criteria = "safe-to-deploy"
[[exemptions.diesel_derives]]
version = "2.3.7"
criteria = "safe-to-deploy"
[[exemptions.diesel_migrations]]
version = "2.3.1"
criteria = "safe-to-deploy"
[[exemptions.diesel_table_macro_syntax]]
version = "0.3.0"
criteria = "safe-to-deploy"
[[exemptions.digest]]
version = "0.11.0-rc.11"
criteria = "safe-to-deploy"
[[exemptions.downcast-rs]]
version = "2.0.2"
criteria = "safe-to-deploy"
[[exemptions.dsl_auto_type]]
version = "0.2.0"
criteria = "safe-to-deploy"
[[exemptions.dunce]]
version = "1.0.5"
criteria = "safe-to-deploy"
[[exemptions.dyn-clone]]
version = "1.0.20"
criteria = "safe-to-deploy"
[[exemptions.ed25519]]
version = "3.0.0-rc.4"
criteria = "safe-to-deploy"
[[exemptions.ed25519-dalek]]
version = "3.0.0-pre.6"
criteria = "safe-to-deploy"
[[exemptions.errno]]
version = "0.3.14"
criteria = "safe-to-deploy"
[[exemptions.fiat-crypto]]
version = "0.3.0"
criteria = "safe-to-deploy"
[[exemptions.find-msvc-tools]]
version = "0.1.9"
criteria = "safe-to-deploy"
[[exemptions.fixedbitset]]
version = "0.5.7"
criteria = "safe-to-deploy"
[[exemptions.flate2]]
version = "1.1.9"
criteria = "safe-to-deploy"
[[exemptions.fs_extra]]
version = "1.3.0"
criteria = "safe-to-deploy"
[[exemptions.futures-task]]
version = "0.3.31"
criteria = "safe-to-deploy"
[[exemptions.futures-util]]
version = "0.3.31"
criteria = "safe-to-deploy"
[[exemptions.getrandom]]
version = "0.2.17"
criteria = "safe-to-deploy"
[[exemptions.getrandom]]
version = "0.3.4"
criteria = "safe-to-deploy"
[[exemptions.getrandom]]
version = "0.4.1"
criteria = "safe-to-deploy"
[[exemptions.hashbrown]]
version = "0.14.5"
criteria = "safe-to-deploy"
[[exemptions.http]]
version = "1.4.0"
criteria = "safe-to-deploy"
[[exemptions.http-body]]
version = "1.0.1"
criteria = "safe-to-deploy"
[[exemptions.http-body-util]]
version = "0.1.3"
criteria = "safe-to-deploy"
[[exemptions.httparse]]
version = "1.10.1"
criteria = "safe-to-deploy"
[[exemptions.hybrid-array]]
version = "0.4.7"
criteria = "safe-to-deploy"
[[exemptions.hyper]]
version = "1.8.1"
criteria = "safe-to-deploy"
[[exemptions.hyper-timeout]]
version = "0.5.2"
criteria = "safe-to-deploy"
[[exemptions.iana-time-zone]]
version = "0.1.65"
criteria = "safe-to-deploy"
[[exemptions.id-arena]]
version = "2.3.0"
criteria = "safe-to-deploy"
[[exemptions.ident_case]]
version = "1.0.1"
criteria = "safe-to-deploy"
[[exemptions.indexmap]]
version = "2.13.0"
criteria = "safe-to-deploy"
[[exemptions.is_ci]]
version = "1.2.0"
criteria = "safe-to-deploy"
[[exemptions.itertools]]
version = "0.14.0"
criteria = "safe-to-deploy"
[[exemptions.itoa]]
version = "1.0.17"
criteria = "safe-to-deploy"
[[exemptions.jobserver]]
version = "0.1.34"
criteria = "safe-to-deploy"
[[exemptions.js-sys]]
version = "0.3.85"
criteria = "safe-to-deploy"
[[exemptions.kameo]]
version = "0.19.2"
criteria = "safe-to-deploy"
[[exemptions.kameo_macros]]
version = "0.19.0"
criteria = "safe-to-deploy"
[[exemptions.libc]]
version = "0.2.181"
criteria = "safe-to-deploy"
[[exemptions.libsqlite3-sys]]
version = "0.35.0"
criteria = "safe-to-deploy"
[[exemptions.linux-raw-sys]]
version = "0.11.0"
criteria = "safe-to-deploy"
[[exemptions.lock_api]]
version = "0.4.14"
criteria = "safe-to-deploy"
[[exemptions.log]]
version = "0.4.29"
criteria = "safe-to-deploy"
[[exemptions.matchit]]
version = "0.8.4"
criteria = "safe-to-deploy"
[[exemptions.memchr]]
version = "2.8.0"
criteria = "safe-to-deploy"
[[exemptions.memsafe]]
version = "0.4.0"
criteria = "safe-to-deploy"
[[exemptions.miette]]
version = "7.6.0"
criteria = "safe-to-deploy"
[[exemptions.miette-derive]]
version = "7.6.0"
criteria = "safe-to-deploy"
[[exemptions.migrations_internals]]
version = "2.3.0"
criteria = "safe-to-deploy"
[[exemptions.migrations_macros]]
version = "2.3.0"
criteria = "safe-to-deploy"
[[exemptions.mime]]
version = "0.3.17"
criteria = "safe-to-deploy"
[[exemptions.minimal-lexical]]
version = "0.2.1"
criteria = "safe-to-deploy"
[[exemptions.mio]]
version = "1.1.1"
criteria = "safe-to-deploy"
[[exemptions.multimap]]
version = "0.10.1"
criteria = "safe-to-deploy"
[[exemptions.num-bigint]]
version = "0.4.6"
criteria = "safe-to-deploy"
[[exemptions.num-conv]]
version = "0.2.0"
criteria = "safe-to-deploy"
[[exemptions.object]]
version = "0.37.3"
criteria = "safe-to-deploy"
[[exemptions.oid-registry]]
version = "0.8.1"
criteria = "safe-to-deploy"
[[exemptions.once_cell]]
version = "1.21.3"
criteria = "safe-to-deploy"
[[exemptions.owo-colors]]
version = "4.2.3"
criteria = "safe-to-deploy"
[[exemptions.parking_lot]]
version = "0.12.5"
criteria = "safe-to-deploy"
[[exemptions.parking_lot_core]]
version = "0.9.12"
criteria = "safe-to-deploy"
[[exemptions.pem]]
version = "3.0.6"
criteria = "safe-to-deploy"
[[exemptions.petgraph]]
version = "0.8.3"
criteria = "safe-to-deploy"
[[exemptions.pin-project]]
version = "1.1.10"
criteria = "safe-to-deploy"
[[exemptions.pin-project-internal]]
version = "1.1.10"
criteria = "safe-to-deploy"
[[exemptions.portable-atomic]]
version = "1.13.1"
criteria = "safe-to-deploy"
[[exemptions.prettyplease]]
version = "0.2.37"
criteria = "safe-to-deploy"
[[exemptions.proc-macro2]]
version = "1.0.106"
criteria = "safe-to-deploy"
[[exemptions.prost]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.prost-build]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.prost-derive]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.prost-types]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.pulldown-cmark]]
version = "0.13.0"
criteria = "safe-to-deploy"
[[exemptions.pulldown-cmark-to-cmark]]
version = "22.0.0"
criteria = "safe-to-deploy"
[[exemptions.quote]]
version = "1.0.44"
criteria = "safe-to-deploy"
[[exemptions.r-efi]]
version = "5.3.0"
criteria = "safe-to-deploy"
[[exemptions.rand]]
version = "0.10.0"
criteria = "safe-to-deploy"
[[exemptions.rand_core]]
version = "0.10.0"
criteria = "safe-to-deploy"
[[exemptions.rcgen]]
version = "0.14.7"
criteria = "safe-to-deploy"
[[exemptions.redox_syscall]]
version = "0.5.18"
criteria = "safe-to-deploy"
[[exemptions.regex]]
version = "1.12.3"
criteria = "safe-to-deploy"
[[exemptions.regex-automata]]
version = "0.4.14"
criteria = "safe-to-deploy"
[[exemptions.regex-syntax]]
version = "0.8.9"
criteria = "safe-to-deploy"
[[exemptions.ring]]
version = "0.17.14"
criteria = "safe-to-deploy"
[[exemptions.rsqlite-vfs]]
version = "0.1.0"
criteria = "safe-to-deploy"
[[exemptions.rustc-demangle]]
version = "0.1.27"
criteria = "safe-to-deploy"
[[exemptions.rustc_version]]
version = "0.4.1"
criteria = "safe-to-deploy"
[[exemptions.rusticata-macros]]
version = "4.1.0"
criteria = "safe-to-deploy"
[[exemptions.rustls]]
version = "0.23.36"
criteria = "safe-to-deploy"
[[exemptions.rustls-pki-types]]
version = "1.14.0"
criteria = "safe-to-deploy"
[[exemptions.rustls-webpki]]
version = "0.103.9"
criteria = "safe-to-deploy"
[[exemptions.rustversion]]
version = "1.0.22"
criteria = "safe-to-deploy"
[[exemptions.scoped-futures]]
version = "0.1.4"
criteria = "safe-to-deploy"
[[exemptions.scopeguard]]
version = "1.2.0"
criteria = "safe-to-deploy"
[[exemptions.secrecy]]
version = "0.10.3"
criteria = "safe-to-deploy"
[[exemptions.semver]]
version = "1.0.27"
criteria = "safe-to-deploy"
[[exemptions.serde]]
version = "1.0.228"
criteria = "safe-to-deploy"
[[exemptions.serde_core]]
version = "1.0.228"
criteria = "safe-to-deploy"
[[exemptions.serde_derive]]
version = "1.0.228"
criteria = "safe-to-deploy"
[[exemptions.sha2]]
version = "0.11.0-rc.5"
criteria = "safe-to-deploy"
[[exemptions.signal-hook-registry]]
version = "1.4.8"
criteria = "safe-to-deploy"
[[exemptions.signature]]
version = "3.0.0-rc.10"
criteria = "safe-to-deploy"
[[exemptions.simd-adler32]]
version = "0.3.8"
criteria = "safe-to-deploy"
[[exemptions.slab]]
version = "0.4.12"
criteria = "safe-to-deploy"
[[exemptions.smlang]]
version = "0.8.0"
criteria = "safe-to-deploy"
[[exemptions.smlang-macros]]
version = "0.8.0"
criteria = "safe-to-deploy"
[[exemptions.socket2]]
version = "0.6.2"
criteria = "safe-to-deploy"
[[exemptions.sqlite-wasm-rs]]
version = "0.5.2"
criteria = "safe-to-deploy"
[[exemptions.string_morph]]
version = "0.1.0"
criteria = "safe-to-deploy"
[[exemptions.subtle]]
version = "2.6.1"
criteria = "safe-to-deploy"
[[exemptions.supports-color]]
version = "3.0.2"
criteria = "safe-to-deploy"
[[exemptions.supports-hyperlinks]]
version = "3.2.0"
criteria = "safe-to-deploy"
[[exemptions.supports-unicode]]
version = "3.0.0"
criteria = "safe-to-deploy"
[[exemptions.sync_wrapper]]
version = "1.0.2"
criteria = "safe-to-deploy"
[[exemptions.tempfile]]
version = "3.25.0"
criteria = "safe-to-deploy"
[[exemptions.terminal_size]]
version = "0.4.3"
criteria = "safe-to-deploy"
[[exemptions.thiserror]]
version = "2.0.18"
criteria = "safe-to-deploy"
[[exemptions.thiserror-impl]]
version = "2.0.18"
criteria = "safe-to-deploy"
[[exemptions.thread_local]]
version = "1.1.9"
criteria = "safe-to-run"
[[exemptions.time]]
version = "0.3.47"
criteria = "safe-to-deploy"
[[exemptions.time-core]]
version = "0.1.8"
criteria = "safe-to-deploy"
[[exemptions.time-macros]]
version = "0.2.27"
criteria = "safe-to-deploy"
[[exemptions.tokio]]
version = "1.49.0"
criteria = "safe-to-deploy"
[[exemptions.tokio-macros]]
version = "2.6.0"
criteria = "safe-to-deploy"
[[exemptions.tokio-rustls]]
version = "0.26.4"
criteria = "safe-to-deploy"
[[exemptions.tokio-stream]]
version = "0.1.18"
criteria = "safe-to-deploy"
[[exemptions.tokio-util]]
version = "0.7.18"
criteria = "safe-to-deploy"
[[exemptions.toml]]
version = "0.9.11+spec-1.1.0"
criteria = "safe-to-deploy"
[[exemptions.toml_parser]]
version = "1.0.6+spec-1.1.0"
criteria = "safe-to-deploy"
[[exemptions.tonic]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.tonic-build]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.tonic-prost]]
version = "0.14.4"
criteria = "safe-to-deploy"
[[exemptions.tonic-prost-build]]
version = "0.14.3"
criteria = "safe-to-deploy"
[[exemptions.tower]]
version = "0.5.3"
criteria = "safe-to-deploy"
[[exemptions.tower-layer]]
version = "0.3.3"
criteria = "safe-to-deploy"
[[exemptions.tower-service]]
version = "0.3.3"
criteria = "safe-to-deploy"
[[exemptions.tracing]]
version = "0.1.44"
criteria = "safe-to-deploy"
[[exemptions.tracing-attributes]]
version = "0.1.31"
criteria = "safe-to-deploy"
[[exemptions.tracing-core]]
version = "0.1.36"
criteria = "safe-to-deploy"
[[exemptions.tracing-subscriber]]
version = "0.3.22"
criteria = "safe-to-run"
[[exemptions.try-lock]]
version = "0.2.5"
criteria = "safe-to-deploy"
[[exemptions.typenum]]
version = "1.19.0"
criteria = "safe-to-deploy"
[[exemptions.unicase]]
version = "2.9.0"
criteria = "safe-to-deploy"
[[exemptions.unicode-ident]]
version = "1.0.23"
criteria = "safe-to-deploy"
[[exemptions.untrusted]]
version = "0.7.1"
criteria = "safe-to-deploy"
[[exemptions.untrusted]]
version = "0.9.0"
criteria = "safe-to-deploy"
[[exemptions.uuid]]
version = "1.20.0"
criteria = "safe-to-deploy"
[[exemptions.want]]
version = "0.3.1"
criteria = "safe-to-deploy"
[[exemptions.wasi]]
version = "0.11.1+wasi-snapshot-preview1"
criteria = "safe-to-deploy"
[[exemptions.wasm-bindgen]]
version = "0.2.108"
criteria = "safe-to-deploy"
[[exemptions.wasm-bindgen-macro]]
version = "0.2.108"
criteria = "safe-to-deploy"
[[exemptions.wasm-bindgen-macro-support]]
version = "0.2.108"
criteria = "safe-to-deploy"
[[exemptions.wasm-bindgen-shared]]
version = "0.2.108"
criteria = "safe-to-deploy"
[[exemptions.winapi]]
version = "0.3.9"
criteria = "safe-to-deploy"
[[exemptions.winapi-i686-pc-windows-gnu]]
version = "0.4.0"
criteria = "safe-to-deploy"
[[exemptions.winapi-x86_64-pc-windows-gnu]]
version = "0.4.0"
criteria = "safe-to-deploy"
[[exemptions.windows-core]]
version = "0.62.2"
criteria = "safe-to-deploy"
[[exemptions.windows-implement]]
version = "0.60.2"
criteria = "safe-to-deploy"
[[exemptions.windows-interface]]
version = "0.59.3"
criteria = "safe-to-deploy"
[[exemptions.windows-link]]
version = "0.2.1"
criteria = "safe-to-deploy"
[[exemptions.windows-result]]
version = "0.4.1"
criteria = "safe-to-deploy"
[[exemptions.windows-strings]]
version = "0.5.1"
criteria = "safe-to-deploy"
[[exemptions.windows-sys]]
version = "0.52.0"
criteria = "safe-to-deploy"
[[exemptions.windows-sys]]
version = "0.60.2"
criteria = "safe-to-deploy"
[[exemptions.windows-sys]]
version = "0.61.2"
criteria = "safe-to-deploy"
[[exemptions.windows-targets]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows-targets]]
version = "0.53.5"
criteria = "safe-to-deploy"
[[exemptions.windows_aarch64_gnullvm]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_aarch64_gnullvm]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_aarch64_msvc]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_aarch64_msvc]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_gnu]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_gnu]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_gnullvm]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_gnullvm]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_msvc]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_i686_msvc]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_gnu]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_gnu]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_gnullvm]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_gnullvm]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_msvc]]
version = "0.52.6"
criteria = "safe-to-deploy"
[[exemptions.windows_x86_64_msvc]]
version = "0.53.1"
criteria = "safe-to-deploy"
[[exemptions.winnow]]
version = "0.7.14"
criteria = "safe-to-deploy"
[[exemptions.x509-parser]]
version = "0.18.1"
criteria = "safe-to-deploy"
[[exemptions.yasna]]
version = "0.5.2"
criteria = "safe-to-deploy"
[[exemptions.zeroize]]
version = "1.8.2"
criteria = "safe-to-deploy"
[[exemptions.zmij]]
version = "1.0.20"
criteria = "safe-to-deploy"
[[exemptions.zstd]]
version = "0.13.3"
criteria = "safe-to-deploy"
[[exemptions.zstd-safe]]
version = "7.2.4"
criteria = "safe-to-deploy"
[[exemptions.zstd-sys]]
version = "2.0.16+zstd.1.5.7"
criteria = "safe-to-deploy"

File diff suppressed because it is too large