# Compare commits

`f6116b03e7...terrors-re` — 114 commits
| Author | SHA1 | Date |
|---|---|---|
| | efb11d2271 | |
| | 2148faa376 | |
| | eb37ee0a0c | |
| | 1f07fd6a98 | |
| | e135519c06 | |
| | f015d345f4 | |
| | 784261f4d8 | |
| | 971db0e919 | |
| | e1a8553142 | |
| | ec70561c93 | |
| | 3993d3a8cc | |
| | c87456ae2f | |
| | e89983de3a | |
| | f56668d9f6 | |
| | 434738bae5 | |
| | 915540de32 | |
| | 5a5008080a | |
| | 3bc423f9b2 | |
| | f2c33a5bf4 | |
| | 3e8b26418a | |
| | 60ce1cc110 | |
| | 2ff4d0961c | |
| | d61dab3285 | |
| | c439c9645d | |
| | c2883704e6 | |
| | 47caec38a6 | |
| | 77c3babec7 | |
| | 6f03ce4d1d | |
| | 712f114763 | |
| | c56184d30b | |
| | 9017ea4017 | |
| | 088fa6fe72 | |
| | c90af9c196 | |
| | a5a9bc73b0 | |
| | 6ed8150e48 | |
| | fac312d860 | |
| | 549a0f5f52 | |
| | 4db102b3d1 | |
| | c61a9e30ac | |
| | 27836beb75 | |
| | 099f76166e | |
| | 66026e903a | |
| | 3360d3c8c7 | |
| | 02980468db | |
| | ec0e8a980c | |
| | 16d5b9a233 | |
| | 62c4bc5ade | |
| | ccd657c9ec | |
| | 013af7e65f | |
| | 84978afd58 | |
| | 4cb5b303dc | |
| | 8fde3cec41 | |
| | 17ac195c5d | |
| | c1c5d14133 | |
| | 47144bdf81 | |
| | 42760bbd79 | |
| | d29bca853b | |
| | f8d27a1454 | |
| | 6030f30901 | |
| | a3c401194f | |
| | 6386510f52 | |
| | ec36e5c2ea | |
| | ba86d18250 | |
| | 606a1f3774 | |
| | b3a67ffc00 | |
| | 168290040c | |
| | 2b27da224e | |
| | 9e92b168ba | |
| | bd159c35e8 | |
| | b3e378b5fc | |
| | b7c4f2e735 | |
| | 4a5dd3eea7 | |
| | 5af6d8dd9c | |
| | 5dfe390ac3 | |
| | 43c7b211c3 | |
| | c5f9cfcaa0 | |
| | 67fce6f06a | |
| | 191b126462 | |
| | cb05407bb6 | |
| | 4beb34764d | |
| | 4b4a8f4489 | |
| | 54d0fe0505 | |
| | 06f4d628db | |
| | 657f47e32f | |
| | 86f8feb291 | |
| | 6deec731e2 | |
| | f5a5c62181 | |
| | b8afd94b21 | |
| | 7b57965952 | |
| | 9dca7aff27 | |
| | 4d1f047baf | |
| | 925c7a211f | |
| | d81120f59c | |
| | e118eceb85 | |
| | 4a84fe9339 | |
| | c6e13dc476 | |
| | 8f5d4cc385 | |
| | 2ffd60973d | |
| | 08af101b2e | |
| | bb58868333 | |
| | b05cdeec66 | |
| | 9ec465706a | |
| | 46a3c1768c | |
| | 6c8a67c520 | |
| | bbaed3fb97 | |
| | 4700bc407e | |
| | 281fbcb31d | |
| | a55221573b | |
| | 45acb45a05 | |
| | 11f1caa6da | |
| | f769c9119b | |
| | 1145642255 | |
| | 9f33277a4f | |
| | 0a8e1dce3f | |
## `.gitignore` (vendored, +2)

```diff
@@ -1,3 +1,5 @@
 target/
 scripts/__pycache__/
 .DS_Store
+.cargo/config.toml
+.vscode/
```
## `.vscode/settings.json` (vendored, deleted, −3)

```diff
@@ -1,3 +0,0 @@
-{
-    "git.enabled": false
-}
```
## CI pipeline file (name not captured)

```diff
@@ -22,4 +22,4 @@ steps:
 - apt-get update && apt-get install -y pkg-config
 - mise install rust
 - mise install protoc
-- mise exec rust -- cargo clippy --all-targets --all-features -- -D warnings
+- mise exec rust -- cargo clippy --all -- -D warnings
```
## `.woodpecker/useragent-analyze.yaml` (new file, +18)

```yaml
when:
  - event: pull_request
    path:
      include: ['.woodpecker/useragent-*.yaml', 'useragent/**']
  - event: push
    branch: main
    path:
      include: ['.woodpecker/useragent-*.yaml', 'useragent/**']

steps:
  - name: analyze
    image: jdxcode/mise:latest
    commands:
      - mise install flutter
      - mise install protoc
      # Reruns codegen to catch protocol drift
      - mise codegen
      - cd useragent/ && flutter analyze
```
## `AGENTS.md` (new file, +128)

````markdown
# AGENTS.md

This file provides guidance to Codex (Codex.ai/code) when working with code in this repository.

## Project Overview

Arbiter is a **permissioned signing service** for cryptocurrency wallets. It consists of:
- **`server/`** — Rust gRPC daemon that holds encrypted keys and enforces policies
- **`useragent/`** — Flutter desktop app (macOS/Windows) with a Rust backend via Rinf
- **`protobufs/`** — Protocol Buffer definitions shared between server and client

The vault never exposes key material; it only produces signatures when requests satisfy configured policies.

## Toolchain Setup

Tools are managed via [mise](https://mise.jdx.dev/). Install all required tools:

```sh
mise install
```

Key versions: Rust 1.93.0 (with clippy), Flutter 3.38.9-stable, protoc 29.6, diesel_cli 2.3.6 (sqlite).

## Server (Rust workspace at `server/`)

### Crates

| Crate | Purpose |
|---|---|
| `arbiter-proto` | Generated gRPC stubs + protobuf types; compiled from `protobufs/*.proto` via `tonic-prost-build` |
| `arbiter-server` | Main daemon — actors, DB, EVM policy engine, gRPC service implementation |
| `arbiter-useragent` | Rust client library for the user agent side of the gRPC protocol |
| `arbiter-client` | Rust client library for SDK clients |

### Common Commands

```sh
cd server

# Build
cargo build

# Run the server daemon
cargo run -p arbiter-server

# Run all tests (preferred over cargo test)
cargo nextest run

# Run a single test
cargo nextest run <test_name>

# Lint
cargo clippy

# Security audit
cargo audit

# Check unused dependencies
cargo shear

# Run snapshot tests and update snapshots
cargo insta review
```

### Architecture

The server is actor-based using the **kameo** crate. All long-lived state lives in `GlobalActors`:

- **`Bootstrapper`** — Manages the one-time bootstrap token written to `~/.arbiter/bootstrap_token` on first run.
- **`KeyHolder`** — Holds the encrypted root key and manages the Sealed/Unsealed vault state machine. On unseal, decrypts the root key into a `memsafe` hardened memory cell.
- **`MessageRouter`** — Coordinates streaming messages between user agents and SDK clients.
- **`EvmActor`** — Handles EVM transaction policy enforcement and signing.

Per-connection actors live under `actors/user_agent/` and `actors/client/`, each with `auth` (challenge-response authentication) and `session` (post-auth operations) sub-modules.

**Database:** SQLite via `diesel-async` + `bb8` connection pool. Schema managed by embedded Diesel migrations in `crates/arbiter-server/migrations/`. DB file lives at `~/.arbiter/arbiter.sqlite`. Tests use a temp-file DB via `db::create_test_pool()`.

**Cryptography:**
- Authentication: ed25519 (challenge-response, nonce-tracked per peer)
- Encryption at rest: XChaCha20-Poly1305 (versioned via `scheme` field for transparent migration on unseal)
- Password KDF: Argon2
- Unseal transport: X25519 ephemeral key exchange
- TLS: self-signed certificate (aws-lc-rs backend), fingerprint distributed via `ArbiterUrl`

**Protocol:** gRPC with Protocol Buffers. The `ArbiterUrl` type encodes host, port, CA cert, and bootstrap token into a single shareable string (printed to console on first run).

### Proto Regeneration

When `.proto` files in `protobufs/` change, rebuild to regenerate:

```sh
cd server && cargo build -p arbiter-proto
```

### Database Migrations

```sh
# Create a new migration
diesel migration generate <name> --migration-dir crates/arbiter-server/migrations

# Run migrations manually (server also runs them on startup)
diesel migration run --migration-dir crates/arbiter-server/migrations
```

## User Agent (Flutter + Rinf at `useragent/`)

The Flutter app uses [Rinf](https://rinf.cunarist.org) to call Rust code. The Rust logic lives in `useragent/native/hub/` as a separate crate that uses `arbiter-useragent` for the gRPC client.

Communication between Dart and Rust uses typed **signals** defined in `useragent/native/hub/src/signals/`. After modifying signal structs, regenerate Dart bindings:

```sh
cd useragent && rinf gen
```

### Common Commands

```sh
cd useragent

# Run the app (macOS or Windows)
flutter run

# Regenerate Rust↔Dart signal bindings
rinf gen

# Analyze Dart code
flutter analyze
```

The Rinf Rust entry point is `useragent/native/hub/src/lib.rs`. It spawns actors defined in `useragent/native/hub/src/actors/` which handle Dart↔server communication via signals.
````
## `CLAUDE.md` (new file, +128)

Identical to `AGENTS.md` above, except for the title (`# CLAUDE.md`) and the first sentence, which reads: "This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository."
## Technology-choices document

```diff
@@ -4,6 +4,66 @@ This document covers concrete technology choices and dependencies. For the archi
 
 ---
+
+## Client Connection Flow
+
+### Authentication Result Semantics
+
+Authentication no longer uses an implicit success-only response shape. Both `client` and `user-agent` return explicit auth status enums over the wire.
+
+- **Client:** `AuthResult` may return `SUCCESS`, `INVALID_KEY`, `INVALID_SIGNATURE`, `APPROVAL_DENIED`, `NO_USER_AGENTS_ONLINE`, or `INTERNAL`
+- **User-agent:** `AuthResult` may return `SUCCESS`, `INVALID_KEY`, `INVALID_SIGNATURE`, `BOOTSTRAP_REQUIRED`, `TOKEN_INVALID`, or `INTERNAL`
+
+This makes transport-level failures and actor/domain-level auth failures distinct:
+
+- **Transport/protocol failures** are surfaced as stream/status errors
+- **Authentication failures** are surfaced as successful protocol responses carrying an explicit auth status
+
+Clients are expected to handle these status codes directly and present the concrete failure reason to the user.
+
+### New Client Approval
+
+When a client whose public key is not yet in the database connects, all connected user agents are asked to approve the connection. The first agent to respond determines the outcome; remaining requests are cancelled via a watch channel.
+
+```mermaid
+flowchart TD
+    A([Client connects]) --> B[Receive AuthChallengeRequest]
+    B --> C{pubkey in DB?}
+
+    C -- yes --> D[Read nonce\nIncrement nonce in DB]
+    D --> G
+
+    C -- no --> E[Ask all UserAgents:\nClientConnectionRequest]
+    E --> F{First response}
+    F -- denied --> Z([Reject connection])
+    F -- approved --> F2[Cancel remaining\nUserAgent requests]
+    F2 --> F3[INSERT client\nnonce = 1]
+    F3 --> G[Send AuthChallenge\nwith nonce]
+
+    G --> H[Receive AuthChallengeSolution]
+    H --> I{Signature valid?}
+    I -- no --> Z
+    I -- yes --> J([Session started])
+```
+
+### Known Issue: Concurrent Registration Race (TOCTOU)
+
+Two connections presenting the same previously-unknown public key can race through the approval flow simultaneously:
+
+1. Both check the DB → neither is registered.
+2. Both request approval from user agents → both receive approval.
+3. Both `INSERT` the client record → the second insert silently overwrites the first, resetting the nonce.
+
+This means the first connection's nonce is invalidated by the second, causing its challenge verification to fail. A fix requires either serialising new-client registration (e.g. an in-memory lock keyed on pubkey) or replacing the separate check + insert with an `INSERT OR IGNORE` / upsert guarded by a unique constraint on `public_key`.
+
+### Nonce Semantics
+
+The `program_client.nonce` column stores the **next usable nonce** — i.e. it is always one ahead of the nonce last issued in a challenge.
+
+- **New client:** inserted with `nonce = 1`; the first challenge is issued with `nonce = 0`.
+- **Existing client:** the current DB value is read and used as the challenge nonce, then immediately incremented within the same exclusive transaction, preventing replay.
+
+---
+
 ## Cryptography
 
 ### Authentication
```
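The nonce bookkeeping and the proposed race fix can be sketched together as an in-memory model. This is an illustration only, not the server's Diesel code: `ClientRegistry` and its method are hypothetical names standing in for `program_client` rows, with the lock held across check-and-insert playing the role of the exclusive transaction / `INSERT OR IGNORE`.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

/// Hypothetical stand-in for the `program_client` table. The map value is
/// the *next usable* nonce: always one ahead of the last issued challenge.
struct ClientRegistry {
    clients: Mutex<HashMap<String, u64>>,
}

impl ClientRegistry {
    fn new() -> Self {
        Self { clients: Mutex::new(HashMap::new()) }
    }

    /// Issue a challenge nonce for `pubkey`, registering the client if it is
    /// unknown. Holding the lock across the check and the insert models the
    /// proposed fix: a racing second connection can no longer overwrite the
    /// first row and reset its nonce.
    fn issue_challenge_nonce(&self, pubkey: &str) -> u64 {
        let mut clients = self.clients.lock().unwrap();
        match clients.get_mut(pubkey) {
            // Existing client: use the stored value as the challenge nonce,
            // then bump it in the same critical section to prevent replay.
            Some(next) => {
                let nonce = *next;
                *next += 1;
                nonce
            }
            // New client: the first challenge uses nonce 0; store 1 as the
            // next usable value.
            None => {
                clients.insert(pubkey.to_string(), 1);
                0
            }
        }
    }
}

fn main() {
    let registry = ClientRegistry::new();
    for _ in 0..3 {
        println!("alice -> {}", registry.issue_challenge_nonce("alice"));
    }
}
```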
```diff
@@ -22,9 +82,97 @@
 ## Communication
 
 - **Protocol:** gRPC with Protocol Buffers
+- **Request/response matching:** multiplexed over a single bidirectional stream using per-connection request IDs
 - **Server identity distribution:** `ServerInfo` protobuf struct containing the TLS public key fingerprint
 - **Future consideration:** grpc-web lacks bidirectional stream support, so a browser-based wallet may require protojson over WebSocket
+
+### Request Multiplexing
+
+Both `client` and `user-agent` connections support multiple in-flight requests over one gRPC bidi stream.
+
+- Every request carries a monotonically increasing request ID
+- Every normal response echoes the request ID it corresponds to
+- Out-of-band server messages omit the response ID entirely
+- The server rejects already-seen request IDs at the transport adapter boundary before business logic sees the message
+
+This keeps request correlation entirely in transport/client connection code while leaving actor and domain handlers unaware of request IDs.
+
+---
+
```
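Those multiplexing rules amount to a small correlation table kept at the transport boundary. The sketch below is a toy model under assumed names (`Frame`, `Demux`, `Routed` are invented here; the server's actual adapter and message types differ): requests get monotonically increasing IDs, responses are matched and consumed by ID, ID-less frames route as out-of-band, and an ID with no pending entry is rejected before any handler sees it.

```rust
use std::collections::HashMap;

/// Invented frame shape: responses echo a request ID, out-of-band
/// server-initiated messages carry none.
struct Frame {
    request_id: Option<u64>,
    payload: &'static str,
}

#[derive(Debug, PartialEq)]
enum Routed {
    Matched(&'static str, &'static str), // (waiting call, payload)
    OutOfBand(&'static str),
    Rejected(u64), // unknown or already-answered request ID
}

/// Toy per-connection correlation state.
struct Demux {
    next_id: u64,
    pending: HashMap<u64, &'static str>, // request ID -> which call awaits it
}

impl Demux {
    fn new() -> Self {
        Self { next_id: 0, pending: HashMap::new() }
    }

    /// Sending a request allocates a monotonically increasing ID.
    fn send(&mut self, label: &'static str) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.pending.insert(id, label);
        id
    }

    /// Correlation happens entirely here; handlers never see request IDs.
    fn route(&mut self, frame: Frame) -> Routed {
        match frame.request_id {
            None => Routed::OutOfBand(frame.payload),
            Some(id) => match self.pending.remove(&id) {
                Some(label) => Routed::Matched(label, frame.payload),
                // Duplicate or never-issued ID: reject at the boundary.
                None => Routed::Rejected(id),
            },
        }
    }
}

fn main() {
    let mut demux = Demux::new();
    let id = demux.send("sign_tx");
    let routed = demux.route(Frame { request_id: Some(id), payload: "sig" });
    println!("{routed:?}");
}
```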
```diff
+## EVM Policy Engine
+
+### Overview
+
+The EVM engine classifies incoming transactions, enforces grant constraints, and records executions. It is the sole path through which a wallet key is used for signing.
+
+The central abstraction is the `Policy` trait. Each implementation handles one semantic transaction category and owns its own database tables for grant storage and transaction logging.
+
+### Transaction Evaluation Flow
+
+`Engine::evaluate_transaction` runs the following steps in order:
+
+1. **Classify** — Each registered policy's `analyze(context)` inspects the transaction fields (`chain`, `to`, `value`, `calldata`). The first one returning `Some(meaning)` wins. If none match, the transaction is rejected as `UnsupportedTransactionType`.
+2. **Find grant** — `Policy::try_find_grant` queries for a non-revoked grant covering this wallet, client, chain, and target address.
+3. **Check shared constraints** — `check_shared_constraints` runs in the engine before any policy-specific logic. It enforces the validity window, gas fee caps, and transaction count rate limit (see below).
+4. **Evaluate** — `Policy::evaluate` checks the decoded meaning against the grant's policy-specific constraints and returns any violations.
+5. **Record** — If `RunKind::Execution` and there are no violations, the engine writes to `evm_transaction_log` and calls `Policy::record_transaction` for any policy-specific logging (e.g., token transfer volume).
+
+### Policy Trait
+
+| Method | Purpose |
+|---|---|
+| `analyze` | Pure — classifies a transaction into a typed `Meaning`, or `None` if this policy doesn't apply |
+| `evaluate` | Checks the `Meaning` against a `Grant`; returns a list of `EvalViolation`s |
+| `create_grant` | Inserts policy-specific rows; returns the specific grant ID |
+| `try_find_grant` | Finds a matching non-revoked grant for the given `EvalContext` |
+| `find_all_grants` | Returns all non-revoked grants (used for listing) |
+| `record_transaction` | Persists policy-specific data after execution |
+
+`analyze` and `evaluate` are intentionally separate: classification is pure and cheap, while evaluation may involve DB queries (e.g., fetching past transfer volume).
+
+### Registered Policies
+
+**EtherTransfer** — plain ETH transfers (empty calldata)
+
+- Grant requires: allowlist of recipient addresses + one volumetric rate limit (max ETH over a time window)
+- Violations: recipient not in allowlist, cumulative ETH volume exceeded
+
+**TokenTransfer** — ERC-20 `transfer(address,uint256)` calls
+
+- Recognised by ABI-decoding the `transfer(address,uint256)` selector against a static registry of known token contracts (`arbiter_tokens_registry`)
+- Grant requires: token contract address, optional recipient restriction, zero or more volumetric rate limits
+- Violations: recipient mismatch, any volumetric limit exceeded
+
+### Grant Model
+
+Every grant has two layers:
+
+- **Shared (`evm_basic_grant`)** — wallet, chain, validity period, gas fee caps, transaction count rate limit. One row per grant regardless of type.
+- **Specific** — policy-owned tables (`evm_ether_transfer_grant`, `evm_token_transfer_grant`, etc.) holding type-specific configuration.
+
+`find_all_grants` uses a `#[diesel::auto_type]` base join between the specific and shared tables, then batch-loads related rows (targets, volume limits) in two additional queries to avoid N+1.
+
+The engine exposes `list_all_grants` which collects across all policy types into `Vec<Grant<SpecificGrant>>` via a blanket `From<Grant<S>> for Grant<SpecificGrant>` conversion.
```
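The split between a pure `analyze` and the first-match-wins dispatch can be illustrated with a trimmed-down version of the trait. Everything here is simplified and hypothetical (the real `Meaning`, `EvalContext`, and policy types differ); the ERC-20 `transfer(address,uint256)` selector `0xa9059cbb` is a standard fact, not quoted from this document.

```rust
/// Toy transaction view; the field names follow the document.
struct Tx<'a> {
    to: &'a str,
    value: u128,
    calldata: &'a [u8],
}

#[derive(Debug, PartialEq)]
enum Meaning {
    EtherTransfer { to: String, wei: u128 },
    TokenTransfer { token: String },
}

trait Policy {
    /// Pure classification: `Some(meaning)` if this policy applies.
    fn analyze(&self, tx: &Tx) -> Option<Meaning>;
}

struct EtherTransferPolicy;
impl Policy for EtherTransferPolicy {
    fn analyze(&self, tx: &Tx) -> Option<Meaning> {
        // Plain ETH transfer: empty calldata.
        tx.calldata.is_empty().then(|| Meaning::EtherTransfer {
            to: tx.to.to_string(),
            wei: tx.value,
        })
    }
}

struct TokenTransferPolicy;
impl Policy for TokenTransferPolicy {
    fn analyze(&self, tx: &Tx) -> Option<Meaning> {
        // ERC-20 transfer(address,uint256) has selector 0xa9059cbb.
        (tx.calldata.len() >= 4 && tx.calldata[..4] == [0xa9, 0x05, 0x9c, 0xbb])
            .then(|| Meaning::TokenTransfer { token: tx.to.to_string() })
    }
}

/// The first policy returning `Some` wins; no match means the transaction
/// type is unsupported and the request is rejected.
fn classify(policies: &[&dyn Policy], tx: &Tx) -> Option<Meaning> {
    policies.iter().find_map(|p| p.analyze(tx))
}

fn main() {
    let policies: Vec<&dyn Policy> = vec![&EtherTransferPolicy, &TokenTransferPolicy];
    let tx = Tx { to: "0xabc", value: 7, calldata: &[] };
    println!("{:?}", classify(&policies, &tx));
}
```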
```diff
+### Shared Constraints (enforced by the engine)
+
+These are checked centrally in `check_shared_constraints` before policy evaluation:
+
+| Constraint | Fields | Behaviour |
+|---|---|---|
+| Validity window | `valid_from`, `valid_until` | Emits `InvalidTime` if current time is outside the range |
+| Gas fee cap | `max_gas_fee_per_gas`, `max_priority_fee_per_gas` | Emits `GasLimitExceeded` if either cap is breached |
+| Tx count rate limit | `rate_limit` (`count` + `window`) | Counts rows in `evm_transaction_log` within the window; emits `RateLimitExceeded` if at or above the limit |
+
+---
+
+### Known Limitations
+
+- **Only EIP-1559 transactions are supported.** Legacy and EIP-2930 types are rejected outright.
+- **No opaque-calldata (unknown contract) grant type.** The architecture describes a category for unrecognised contracts, but no policy implements it yet. Any transaction that is not a plain ETH transfer or a known ERC-20 transfer is unconditionally rejected.
+- **Token registry is static.** Tokens are recognised only if they appear in the hard-coded `arbiter_tokens_registry` crate. There is no mechanism to register additional contracts at runtime.
+- **Nonce management is not implemented.** The architecture lists nonce deduplication as a core responsibility, but no nonce tracking or enforcement exists yet.
+
 ---
 
 ## Memory Protection
```
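The shared-constraint table above maps directly onto a check that accumulates violations. The sketch below is an assumed, simplified model (the struct shapes and the slice standing in for `evm_transaction_log` rows are invented here), not the server's actual `check_shared_constraints`.

```rust
#[derive(Debug, PartialEq)]
enum EvalViolation {
    InvalidTime,
    GasLimitExceeded,
    RateLimitExceeded,
}

/// Simplified stand-in for the shared `evm_basic_grant` row.
struct BasicGrant {
    valid_from: u64,  // unix seconds
    valid_until: u64, // unix seconds
    max_gas_fee_per_gas: u128,
    max_priority_fee_per_gas: u128,
    rate_limit: Option<(u32, u64)>, // (count, window in seconds)
}

struct TxAttempt {
    now: u64,
    gas_fee_per_gas: u128,
    priority_fee_per_gas: u128,
}

/// `past_tx_times` stands in for timestamps of rows in `evm_transaction_log`.
fn check_shared_constraints(
    grant: &BasicGrant,
    tx: &TxAttempt,
    past_tx_times: &[u64],
) -> Vec<EvalViolation> {
    let mut violations = Vec::new();

    // Validity window: current time must fall inside the range.
    if tx.now < grant.valid_from || tx.now > grant.valid_until {
        violations.push(EvalViolation::InvalidTime);
    }
    // Gas fee caps: breaching either cap is a violation.
    if tx.gas_fee_per_gas > grant.max_gas_fee_per_gas
        || tx.priority_fee_per_gas > grant.max_priority_fee_per_gas
    {
        violations.push(EvalViolation::GasLimitExceeded);
    }
    // Tx count rate limit: at or above the limit within the window.
    if let Some((count, window)) = grant.rate_limit {
        let recent = past_tx_times.iter().filter(|&&t| t + window >= tx.now).count();
        if recent >= count as usize {
            violations.push(EvalViolation::RateLimitExceeded);
        }
    }
    violations
}

fn main() {
    let grant = BasicGrant {
        valid_from: 100,
        valid_until: 200,
        max_gas_fee_per_gas: 50,
        max_priority_fee_per_gas: 5,
        rate_limit: Some((2, 60)),
    };
    let tx = TxAttempt { now: 150, gas_fee_per_gas: 40, priority_fee_per_gas: 2 };
    println!("{:?}", check_shared_constraints(&grant, &tx, &[]));
}
```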
## `app/.dart_tool/extension_discovery/README.md` (new file, +31)

```markdown
Extension Discovery Cache
=========================

This folder is used by `package:extension_discovery` to cache lists of
packages that contains extensions for other packages.

DO NOT USE THIS FOLDER
----------------------

 * Do not read (or rely) the contents of this folder.
 * Do write to this folder.

If you're interested in the lists of extensions stored in this folder use the
API offered by package `extension_discovery` to get this information.

If this package doesn't work for your use-case, then don't try to read the
contents of this folder. It may change, and will not remain stable.

Use package `extension_discovery`
---------------------------------

If you want to access information from this folder.

Feel free to delete this folder
-------------------------------

Files in this folder act as a cache, and the cache is discarded if the files
are older than the modification time of `.dart_tool/package_config.json`.

Hence, it should never be necessary to clear this cache manually, if you find a
need to do please file a bug.
```
## `app/.dart_tool/extension_discovery/vs_code.json` (new file, +1)

```json
{"version":2,"entries":[{"package":"app","rootUri":"../","packageUri":"lib/"}]}
```
## `app/.dart_tool/package_config.json` (new file, +178; capture ends mid-file)

```json
{
  "configVersion": 2,
  "packages": [
    {
      "name": "async",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/async-2.13.0",
      "packageUri": "lib/",
      "languageVersion": "3.4"
    },
    {
      "name": "boolean_selector",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/boolean_selector-2.1.2",
      "packageUri": "lib/",
      "languageVersion": "3.1"
    },
    {
      "name": "characters",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/characters-1.4.0",
      "packageUri": "lib/",
      "languageVersion": "3.4"
    },
    {
      "name": "clock",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/clock-1.1.2",
      "packageUri": "lib/",
      "languageVersion": "3.4"
    },
    {
      "name": "collection",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/collection-1.19.1",
      "packageUri": "lib/",
      "languageVersion": "3.4"
    },
    {
      "name": "cupertino_icons",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/cupertino_icons-1.0.8",
      "packageUri": "lib/",
      "languageVersion": "3.1"
    },
    {
      "name": "fake_async",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/fake_async-1.3.3",
      "packageUri": "lib/",
      "languageVersion": "3.3"
    },
    {
      "name": "flutter",
      "rootUri": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable/packages/flutter",
      "packageUri": "lib/",
      "languageVersion": "3.8"
    },
    {
      "name": "flutter_lints",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/flutter_lints-6.0.0",
      "packageUri": "lib/",
      "languageVersion": "3.8"
    },
    {
      "name": "flutter_test",
      "rootUri": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable/packages/flutter_test",
      "packageUri": "lib/",
      "languageVersion": "3.8"
    },
    {
      "name": "leak_tracker",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/leak_tracker-11.0.2",
      "packageUri": "lib/",
      "languageVersion": "3.2"
    },
    {
      "name": "leak_tracker_flutter_testing",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/leak_tracker_flutter_testing-3.0.10",
      "packageUri": "lib/",
      "languageVersion": "3.2"
    },
    {
      "name": "leak_tracker_testing",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/leak_tracker_testing-3.0.2",
      "packageUri": "lib/",
      "languageVersion": "3.2"
    },
    {
      "name": "lints",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/lints-6.1.0",
      "packageUri": "lib/",
      "languageVersion": "3.8"
    },
    {
      "name": "matcher",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/matcher-0.12.17",
      "packageUri": "lib/",
      "languageVersion": "3.4"
    },
    {
      "name": "material_color_utilities",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/material_color_utilities-0.11.1",
      "packageUri": "lib/",
      "languageVersion": "2.17"
    },
    {
      "name": "meta",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/meta-1.17.0",
      "packageUri": "lib/",
      "languageVersion": "3.5"
    },
    {
      "name": "path",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/path-1.9.1",
      "packageUri": "lib/",
      "languageVersion": "3.4"
    },
    {
      "name": "sky_engine",
      "rootUri": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable/bin/cache/pkg/sky_engine",
      "packageUri": "lib/",
      "languageVersion": "3.8"
    },
    {
      "name": "source_span",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/source_span-1.10.2",
      "packageUri": "lib/",
      "languageVersion": "3.1"
    },
    {
      "name": "stack_trace",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/stack_trace-1.12.1",
      "packageUri": "lib/",
      "languageVersion": "3.4"
    },
    {
      "name": "stream_channel",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/stream_channel-2.1.4",
      "packageUri": "lib/",
      "languageVersion": "3.3"
    },
    {
      "name": "string_scanner",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/string_scanner-1.4.1",
      "packageUri": "lib/",
      "languageVersion": "3.1"
    },
    {
      "name": "term_glyph",
      "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/term_glyph-1.2.2",
      "packageUri": "lib/",
      "languageVersion": "3.1"
```
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "test_api",
|
||||||
|
"rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/test_api-0.7.7",
|
||||||
|
"packageUri": "lib/",
|
||||||
|
"languageVersion": "3.5"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "vector_math",
|
||||||
|
"rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/vector_math-2.2.0",
|
||||||
|
"packageUri": "lib/",
|
||||||
|
"languageVersion": "3.1"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "vm_service",
|
||||||
|
"rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/vm_service-15.0.2",
|
||||||
|
"packageUri": "lib/",
|
||||||
|
"languageVersion": "3.5"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "app",
|
||||||
|
"rootUri": "../",
|
||||||
|
"packageUri": "lib/",
|
||||||
|
"languageVersion": "3.10"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"generator": "pub",
|
||||||
|
"generatorVersion": "3.10.8",
|
||||||
|
"flutterRoot": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable",
|
||||||
|
"flutterVersion": "3.38.9",
|
||||||
|
"pubCache": "file:///Users/kaska/.pub-cache"
|
||||||
|
}
|
||||||
230
app/.dart_tool/package_graph.json
Normal file
@@ -0,0 +1,230 @@
{
  "roots": [
    "app"
  ],
  "packages": [
    {
      "name": "app",
      "version": "1.0.0+1",
      "dependencies": [
        "cupertino_icons",
        "flutter"
      ],
      "devDependencies": [
        "flutter_lints",
        "flutter_test"
      ]
    },
    {
      "name": "flutter_lints",
      "version": "6.0.0",
      "dependencies": [
        "lints"
      ]
    },
    {
      "name": "flutter_test",
      "version": "0.0.0",
      "dependencies": [
        "clock",
        "collection",
        "fake_async",
        "flutter",
        "leak_tracker_flutter_testing",
        "matcher",
        "meta",
        "path",
        "stack_trace",
        "stream_channel",
        "test_api",
        "vector_math"
      ]
    },
    {
      "name": "cupertino_icons",
      "version": "1.0.8",
      "dependencies": []
    },
    {
      "name": "flutter",
      "version": "0.0.0",
      "dependencies": [
        "characters",
        "collection",
        "material_color_utilities",
        "meta",
        "sky_engine",
        "vector_math"
      ]
    },
    {
      "name": "lints",
      "version": "6.1.0",
      "dependencies": []
    },
    {
      "name": "stream_channel",
      "version": "2.1.4",
      "dependencies": [
        "async"
      ]
    },
    {
      "name": "meta",
      "version": "1.17.0",
      "dependencies": []
    },
    {
      "name": "collection",
      "version": "1.19.1",
      "dependencies": []
    },
    {
      "name": "leak_tracker_flutter_testing",
      "version": "3.0.10",
      "dependencies": [
        "flutter",
        "leak_tracker",
        "leak_tracker_testing",
        "matcher",
        "meta"
      ]
    },
    {
      "name": "vector_math",
      "version": "2.2.0",
      "dependencies": []
    },
    {
      "name": "stack_trace",
      "version": "1.12.1",
      "dependencies": [
        "path"
      ]
    },
    {
      "name": "clock",
      "version": "1.1.2",
      "dependencies": []
    },
    {
      "name": "fake_async",
      "version": "1.3.3",
      "dependencies": [
        "clock",
        "collection"
      ]
    },
    {
      "name": "path",
      "version": "1.9.1",
      "dependencies": []
    },
    {
      "name": "matcher",
      "version": "0.12.17",
      "dependencies": [
        "async",
        "meta",
        "stack_trace",
        "term_glyph",
        "test_api"
      ]
    },
    {
      "name": "test_api",
      "version": "0.7.7",
      "dependencies": [
        "async",
        "boolean_selector",
        "collection",
        "meta",
        "source_span",
        "stack_trace",
        "stream_channel",
        "string_scanner",
        "term_glyph"
      ]
    },
    {
      "name": "sky_engine",
      "version": "0.0.0",
      "dependencies": []
    },
    {
      "name": "material_color_utilities",
      "version": "0.11.1",
      "dependencies": [
        "collection"
      ]
    },
    {
      "name": "characters",
      "version": "1.4.0",
      "dependencies": []
    },
    {
      "name": "async",
      "version": "2.13.0",
      "dependencies": [
        "collection",
        "meta"
      ]
    },
    {
      "name": "leak_tracker_testing",
      "version": "3.0.2",
      "dependencies": [
        "leak_tracker",
        "matcher",
        "meta"
      ]
    },
    {
      "name": "leak_tracker",
      "version": "11.0.2",
      "dependencies": [
        "clock",
        "collection",
        "meta",
        "path",
        "vm_service"
      ]
    },
    {
      "name": "term_glyph",
      "version": "1.2.2",
      "dependencies": []
    },
    {
      "name": "string_scanner",
      "version": "1.4.1",
      "dependencies": [
        "source_span"
      ]
    },
    {
      "name": "source_span",
      "version": "1.10.2",
      "dependencies": [
        "collection",
        "path",
        "term_glyph"
      ]
    },
    {
      "name": "boolean_selector",
      "version": "2.1.2",
      "dependencies": [
        "source_span",
        "string_scanner"
      ]
    },
    {
      "name": "vm_service",
      "version": "15.0.2",
      "dependencies": []
    }
  ],
  "configVersion": 1
}
1
app/.dart_tool/version
Normal file
@@ -0,0 +1 @@
3.38.9
11
app/macos/Flutter/ephemeral/Flutter-Generated.xcconfig
Normal file
@@ -0,0 +1,11 @@
// This is a generated file; do not edit or check into version control.
FLUTTER_ROOT=/Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable
FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/app
COCOAPODS_PARALLEL_CODE_SIGN=true
FLUTTER_BUILD_DIR=build
FLUTTER_BUILD_NAME=1.0.0
FLUTTER_BUILD_NUMBER=1
DART_OBFUSCATION=false
TRACK_WIDGET_CREATION=true
TREE_SHAKE_ICONS=false
PACKAGE_CONFIG=.dart_tool/package_config.json
12
app/macos/Flutter/ephemeral/flutter_export_environment.sh
Executable file
@@ -0,0 +1,12 @@
#!/bin/sh
# This is a generated file; do not edit or check into version control.
export "FLUTTER_ROOT=/Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable"
export "FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/app"
export "COCOAPODS_PARALLEL_CODE_SIGN=true"
export "FLUTTER_BUILD_DIR=build"
export "FLUTTER_BUILD_NAME=1.0.0"
export "FLUTTER_BUILD_NUMBER=1"
export "DART_OBFUSCATION=false"
export "TRACK_WIDGET_CREATION=true"
export "TREE_SHAKE_ICONS=false"
export "PACKAGE_CONFIG=.dart_tool/package_config.json"
84
mise.lock
@@ -1,7 +1,37 @@
 # @generated - this file is auto-generated by `mise lock` https://mise.jdx.dev/dev-tools/mise-lock.html
+
+[[tools.ast-grep]]
+version = "0.42.0"
+backend = "aqua:ast-grep/ast-grep"
+
+[tools.ast-grep."platforms.linux-arm64"]
+checksum = "sha256:5c830eae8456569e2f7212434ed9c238f58dca412d76045418ed6d394a755836"
+url = "https://github.com/ast-grep/ast-grep/releases/download/0.42.0/app-aarch64-unknown-linux-gnu.zip"
+
+[tools.ast-grep."platforms.linux-x64"]
+checksum = "sha256:e825a05603f0bcc4cd9076c4cc8c9abd6d008b7cd07d9aa3cc323ba4b8606651"
+url = "https://github.com/ast-grep/ast-grep/releases/download/0.42.0/app-x86_64-unknown-linux-gnu.zip"
+
+[tools.ast-grep."platforms.macos-arm64"]
+checksum = "sha256:fc300d5293b1c770a5aece03a8a193b92e71e87cec726c28096990691a582620"
+url = "https://github.com/ast-grep/ast-grep/releases/download/0.42.0/app-aarch64-apple-darwin.zip"
+
+[tools.ast-grep."platforms.macos-x64"]
+checksum = "sha256:979ffe611327056f4730a1ae71b0209b3b830f58b22c6ed194cda34f55400db2"
+url = "https://github.com/ast-grep/ast-grep/releases/download/0.42.0/app-x86_64-apple-darwin.zip"
+
+[tools.ast-grep."platforms.windows-x64"]
+checksum = "sha256:55836fa1b2c65dc7d61615a4d9368622a0d2371a76d28b9a165e5a3ab6ae32a4"
+url = "https://github.com/ast-grep/ast-grep/releases/download/0.42.0/app-x86_64-pc-windows-msvc.zip"
+
 [[tools."cargo:cargo-audit"]]
 version = "0.22.1"
 backend = "cargo:cargo-audit"
+
+[[tools."cargo:cargo-edit"]]
+version = "0.13.9"
+backend = "cargo:cargo-edit"
+
 [[tools."cargo:cargo-features"]]
 version = "1.0.0"
 backend = "cargo:cargo-features"
@@ -42,6 +72,10 @@ backend = "cargo:diesel_cli"
 default-features = "false"
 features = "sqlite,sqlite-bundled"
+
+[[tools."cargo:rinf_cli"]]
+version = "8.9.1"
+backend = "cargo:rinf_cli"
+
 [[tools.flutter]]
 version = "3.38.9-stable"
 backend = "asdf:flutter"
@@ -49,20 +83,50 @@ backend = "asdf:flutter"
 [[tools.protoc]]
 version = "29.6"
 backend = "aqua:protocolbuffers/protobuf/protoc"
-"platforms.linux-arm64" = { checksum = "sha256:2594ff4fcae8cb57310d394d0961b236190ad9c5efbfdf1f597ea471d424fe79", url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-linux-aarch_64.zip"}
-"platforms.linux-x64" = { checksum = "sha256:48785a926e73ffa3f68e2f22b14e7b849620c7a1d36809ac9249a5495e280323", url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-linux-x86_64.zip"}
-"platforms.macos-arm64" = { checksum = "sha256:b9576b5fa1a1ef3fe13a8c91d9d8204b46545759bea5ae155cd6ba2ea4cdaeed", url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-osx-aarch_64.zip"}
-"platforms.macos-x64" = { checksum = "sha256:312f04713946921cc0187ef34df80241ddca1bab6f564c636885fd2cc90d3f88", url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-osx-x86_64.zip"}
-"platforms.windows-x64" = { checksum = "sha256:1ebd7c87baffb9f1c47169b640872bf5fb1e4408079c691af527be9561d8f6f7", url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-win64.zip"}
+
+[tools.protoc."platforms.linux-arm64"]
+checksum = "sha256:2594ff4fcae8cb57310d394d0961b236190ad9c5efbfdf1f597ea471d424fe79"
+url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-linux-aarch_64.zip"
+
+[tools.protoc."platforms.linux-x64"]
+checksum = "sha256:48785a926e73ffa3f68e2f22b14e7b849620c7a1d36809ac9249a5495e280323"
+url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-linux-x86_64.zip"
+
+[tools.protoc."platforms.macos-arm64"]
+checksum = "sha256:b9576b5fa1a1ef3fe13a8c91d9d8204b46545759bea5ae155cd6ba2ea4cdaeed"
+url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-osx-aarch_64.zip"
+
+[tools.protoc."platforms.macos-x64"]
+checksum = "sha256:312f04713946921cc0187ef34df80241ddca1bab6f564c636885fd2cc90d3f88"
+url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-osx-x86_64.zip"
+
+[tools.protoc."platforms.windows-x64"]
+checksum = "sha256:1ebd7c87baffb9f1c47169b640872bf5fb1e4408079c691af527be9561d8f6f7"
+url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-win64.zip"
+
 [[tools.python]]
 version = "3.14.3"
 backend = "core:python"
-"platforms.linux-arm64" = { checksum = "sha256:be0f4dc2932f762292b27d46ea7d3e8e66ddf3969a5eb0254a229015ed402625", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-aarch64-unknown-linux-gnu-install_only_stripped.tar.gz"}
-"platforms.linux-x64" = { checksum = "sha256:0a73413f89efd417871876c9accaab28a9d1e3cd6358fbfff171a38ec99302f0", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-unknown-linux-gnu-install_only_stripped.tar.gz"}
-"platforms.macos-arm64" = { checksum = "sha256:4703cdf18b26798fde7b49b6b66149674c25f97127be6a10dbcf29309bdcdcdb", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-aarch64-apple-darwin-install_only_stripped.tar.gz"}
-"platforms.macos-x64" = { checksum = "sha256:76f1cc26e3d262eae8ca546a93e8bded10cf0323613f7e246fea2e10a8115eb7", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-apple-darwin-install_only_stripped.tar.gz"}
-"platforms.windows-x64" = { checksum = "sha256:950c5f21a015c1bdd1337f233456df2470fab71e4d794407d27a84cb8b9909a0", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-pc-windows-msvc-install_only_stripped.tar.gz"}
+
+[tools.python."platforms.linux-arm64"]
+checksum = "sha256:be0f4dc2932f762292b27d46ea7d3e8e66ddf3969a5eb0254a229015ed402625"
+url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-aarch64-unknown-linux-gnu-install_only_stripped.tar.gz"
+
+[tools.python."platforms.linux-x64"]
+checksum = "sha256:0a73413f89efd417871876c9accaab28a9d1e3cd6358fbfff171a38ec99302f0"
+url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-unknown-linux-gnu-install_only_stripped.tar.gz"
+
+[tools.python."platforms.macos-arm64"]
+checksum = "sha256:4703cdf18b26798fde7b49b6b66149674c25f97127be6a10dbcf29309bdcdcdb"
+url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-aarch64-apple-darwin-install_only_stripped.tar.gz"
+
+[tools.python."platforms.macos-x64"]
+checksum = "sha256:76f1cc26e3d262eae8ca546a93e8bded10cf0323613f7e246fea2e10a8115eb7"
+url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-apple-darwin-install_only_stripped.tar.gz"
+
+[tools.python."platforms.windows-x64"]
+checksum = "sha256:950c5f21a015c1bdd1337f233456df2470fab71e4d794407d27a84cb8b9909a0"
+url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-pc-windows-msvc-install_only_stripped.tar.gz"
+
 [[tools.rust]]
 version = "1.93.0"
10
mise.toml
@@ -10,3 +10,13 @@ protoc = "29.6"
 "cargo:cargo-shear" = "latest"
 "cargo:cargo-insta" = "1.46.3"
 python = "3.14.3"
+ast-grep = "0.42.0"
+"cargo:cargo-edit" = "0.13.9"
+
+[tasks.codegen]
+sources = ['protobufs/*.proto']
+outputs = ['useragent/lib/proto/*']
+run = '''
+dart pub global activate protoc_plugin && \
+protoc --dart_out=grpc:useragent/lib/proto --proto_path=protobufs/ protobufs/*.proto
+'''
@@ -2,6 +2,9 @@ syntax = "proto3";
 
 package arbiter.client;
+
+import "evm.proto";
+import "google/protobuf/empty.proto";
+
 message AuthChallengeRequest {
   bytes pubkey = 1;
 }
@@ -15,18 +18,40 @@ message AuthChallengeSolution {
   bytes signature = 1;
 }
 
-message AuthOk {}
+enum AuthResult {
+  AUTH_RESULT_UNSPECIFIED = 0;
+  AUTH_RESULT_SUCCESS = 1;
+  AUTH_RESULT_INVALID_KEY = 2;
+  AUTH_RESULT_INVALID_SIGNATURE = 3;
+  AUTH_RESULT_APPROVAL_DENIED = 4;
+  AUTH_RESULT_NO_USER_AGENTS_ONLINE = 5;
+  AUTH_RESULT_INTERNAL = 6;
+}
+
+enum VaultState {
+  VAULT_STATE_UNSPECIFIED = 0;
+  VAULT_STATE_UNBOOTSTRAPPED = 1;
+  VAULT_STATE_SEALED = 2;
+  VAULT_STATE_UNSEALED = 3;
+  VAULT_STATE_ERROR = 4;
+}
 
 message ClientRequest {
+  int32 request_id = 4;
   oneof payload {
     AuthChallengeRequest auth_challenge_request = 1;
     AuthChallengeSolution auth_challenge_solution = 2;
+    google.protobuf.Empty query_vault_state = 3;
   }
 }
 
 message ClientResponse {
+  optional int32 request_id = 7;
   oneof payload {
     AuthChallenge auth_challenge = 1;
-    AuthOk auth_ok = 2;
+    AuthResult auth_result = 2;
+    arbiter.evm.EvmSignTransactionResponse evm_sign_transaction = 3;
+    arbiter.evm.EvmAnalyzeTransactionResponse evm_analyze_transaction = 4;
+    VaultState vault_state = 6;
   }
 }
@@ -2,6 +2,9 @@ syntax = "proto3";
 
 package arbiter.evm;
+
+import "google/protobuf/empty.proto";
+import "google/protobuf/timestamp.proto";
+
 enum EvmError {
   EVM_ERROR_UNSPECIFIED = 0;
   EVM_ERROR_VAULT_SEALED = 1;
@@ -29,3 +32,185 @@ message WalletListResponse {
     EvmError error = 2;
   }
 }
+
+// --- Grant types ---
+
+message TransactionRateLimit {
+  uint32 count = 1;
+  int64 window_secs = 2;
+}
+
+message VolumeRateLimit {
+  bytes max_volume = 1; // U256 as big-endian bytes
+  int64 window_secs = 2;
+}
+
+message SharedSettings {
+  int32 wallet_id = 1;
+  uint64 chain_id = 2;
+  optional google.protobuf.Timestamp valid_from = 3;
+  optional google.protobuf.Timestamp valid_until = 4;
+  optional bytes max_gas_fee_per_gas = 5; // U256 as big-endian bytes
+  optional bytes max_priority_fee_per_gas = 6; // U256 as big-endian bytes
+  optional TransactionRateLimit rate_limit = 7;
+}
+
+message EtherTransferSettings {
+  repeated bytes targets = 1; // list of 20-byte Ethereum addresses
+  VolumeRateLimit limit = 2;
+}
+
+message TokenTransferSettings {
+  bytes token_contract = 1; // 20-byte Ethereum address
+  optional bytes target = 2; // 20-byte Ethereum address; absent means any recipient allowed
+  repeated VolumeRateLimit volume_limits = 3;
+}
+
+message SpecificGrant {
+  oneof grant {
+    EtherTransferSettings ether_transfer = 1;
+    TokenTransferSettings token_transfer = 2;
+  }
+}
+
+message EtherTransferMeaning {
+  bytes to = 1; // 20-byte Ethereum address
+  bytes value = 2; // U256 as big-endian bytes
+}
+
+message TokenInfo {
+  string symbol = 1;
+  bytes address = 2; // 20-byte Ethereum address
+  uint64 chain_id = 3;
+}
+
+// Mirror of token_transfers::Meaning
+message TokenTransferMeaning {
+  TokenInfo token = 1;
+  bytes to = 2; // 20-byte Ethereum address
+  bytes value = 3; // U256 as big-endian bytes
+}
+
+// Mirror of policies::SpecificMeaning
+message SpecificMeaning {
+  oneof meaning {
+    EtherTransferMeaning ether_transfer = 1;
+    TokenTransferMeaning token_transfer = 2;
+  }
+}
+
+// --- Eval error types ---
+message GasLimitExceededViolation {
+  optional bytes max_gas_fee_per_gas = 1; // U256 as big-endian bytes
+  optional bytes max_priority_fee_per_gas = 2; // U256 as big-endian bytes
+}
+
+message EvalViolation {
+  oneof kind {
+    bytes invalid_target = 1; // 20-byte Ethereum address
+    GasLimitExceededViolation gas_limit_exceeded = 2;
+    google.protobuf.Empty rate_limit_exceeded = 3;
+    google.protobuf.Empty volumetric_limit_exceeded = 4;
+    google.protobuf.Empty invalid_time = 5;
+    google.protobuf.Empty invalid_transaction_type = 6;
+  }
+}
+
+// Transaction was classified but no grant covers it
+message NoMatchingGrantError {
+  SpecificMeaning meaning = 1;
+}
+
+// Transaction was classified and a grant was found, but constraints were violated
+message PolicyViolationsError {
+  SpecificMeaning meaning = 1;
+  repeated EvalViolation violations = 2;
+}
+
+// top-level error returned when transaction evaluation fails
+message TransactionEvalError {
+  oneof kind {
+    google.protobuf.Empty contract_creation_not_supported = 1;
+    google.protobuf.Empty unsupported_transaction_type = 2;
+    NoMatchingGrantError no_matching_grant = 3;
+    PolicyViolationsError policy_violations = 4;
+  }
+}
+
+// --- UserAgent grant management ---
+message EvmGrantCreateRequest {
+  int32 client_id = 1;
+  SharedSettings shared = 2;
+  SpecificGrant specific = 3;
+}
+
+message EvmGrantCreateResponse {
+  oneof result {
+    int32 grant_id = 1;
+    EvmError error = 2;
+  }
+}
+
+message EvmGrantDeleteRequest {
+  int32 grant_id = 1;
+}
+
+message EvmGrantDeleteResponse {
+  oneof result {
+    google.protobuf.Empty ok = 1;
+    EvmError error = 2;
+  }
+}
+
+// Basic grant info returned in grant listings
+message GrantEntry {
+  int32 id = 1;
+  int32 client_id = 2;
+  SharedSettings shared = 3;
+  SpecificGrant specific = 4;
+}
+
+message EvmGrantListRequest {
+  optional int32 wallet_id = 1;
+}
+
+message EvmGrantListResponse {
+  oneof result {
+    EvmGrantList grants = 1;
+    EvmError error = 2;
+  }
+}
+
+message EvmGrantList {
+  repeated GrantEntry grants = 1;
+}
+
+// --- Client transaction operations ---
+
+message EvmSignTransactionRequest {
+  bytes wallet_address = 1; // 20-byte Ethereum address
+  bytes rlp_transaction = 2; // RLP-encoded EIP-1559 transaction (unsigned)
+}
+
+// oneof because signing and evaluation happen atomically — a signing failure
+// is always either an eval error or an internal error, never a partial success
+message EvmSignTransactionResponse {
+  oneof result {
+    bytes signature = 1; // 65-byte signature: r[32] || s[32] || v[1]
+    TransactionEvalError eval_error = 2;
+    EvmError error = 3;
+  }
+}
+
+message EvmAnalyzeTransactionRequest {
+  bytes wallet_address = 1; // 20-byte Ethereum address
+  bytes rlp_transaction = 2; // RLP-encoded EIP-1559 transaction
+}
+
+message EvmAnalyzeTransactionResponse {
+  oneof result {
+    SpecificMeaning meaning = 1;
+    TransactionEvalError eval_error = 2;
+    EvmError error = 3;
+  }
+}
@@ -2,24 +2,89 @@ syntax = "proto3";
 
 package arbiter.user_agent;
 
-import "google/protobuf/empty.proto";
 import "evm.proto";
+import "google/protobuf/empty.proto";
+
+enum KeyType {
+  KEY_TYPE_UNSPECIFIED = 0;
+  KEY_TYPE_ED25519 = 1;
+  KEY_TYPE_ECDSA_SECP256K1 = 2;
+  KEY_TYPE_RSA = 3;
+}
+
+// --- SDK client management ---
+
+enum SdkClientError {
+  SDK_CLIENT_ERROR_UNSPECIFIED = 0;
+  SDK_CLIENT_ERROR_ALREADY_EXISTS = 1;
+  SDK_CLIENT_ERROR_NOT_FOUND = 2;
+  SDK_CLIENT_ERROR_HAS_RELATED_DATA = 3; // hard-delete blocked by FK (client has grants or transaction logs)
+  SDK_CLIENT_ERROR_INTERNAL = 4;
+}
+
+message SdkClientApproveRequest {
+  bytes pubkey = 1; // 32-byte ed25519 public key
+}
+
+message SdkClientRevokeRequest {
+  int32 client_id = 1;
+}
+
+message SdkClientEntry {
+  int32 id = 1;
+  bytes pubkey = 2;
+  int32 created_at = 3;
+}
+
+message SdkClientList {
+  repeated SdkClientEntry clients = 1;
+}
+
+message SdkClientApproveResponse {
+  oneof result {
+    SdkClientEntry client = 1;
+    SdkClientError error = 2;
+  }
+}
+
+message SdkClientRevokeResponse {
+  oneof result {
+    google.protobuf.Empty ok = 1;
+    SdkClientError error = 2;
+  }
+}
+
+message SdkClientListResponse {
+  oneof result {
+    SdkClientList clients = 1;
+    SdkClientError error = 2;
+  }
+}
 
 message AuthChallengeRequest {
   bytes pubkey = 1;
   optional string bootstrap_token = 2;
+  KeyType key_type = 3;
 }
 
 message AuthChallenge {
-  bytes pubkey = 1;
   int32 nonce = 2;
+  reserved 1;
 }
 
 message AuthChallengeSolution {
   bytes signature = 1;
 }
 
-message AuthOk {}
+enum AuthResult {
+  AUTH_RESULT_UNSPECIFIED = 0;
+  AUTH_RESULT_SUCCESS = 1;
+  AUTH_RESULT_INVALID_KEY = 2;
+  AUTH_RESULT_INVALID_SIGNATURE = 3;
+  AUTH_RESULT_BOOTSTRAP_REQUIRED = 4;
+  AUTH_RESULT_TOKEN_INVALID = 5;
+  AUTH_RESULT_INTERNAL = 6;
+}
 
 message UnsealStart {
   bytes client_pubkey = 1;
@@ -34,6 +99,12 @@ message UnsealEncryptedKey {
   bytes associated_data = 3;
 }
+
+message BootstrapEncryptedKey {
+  bytes nonce = 1;
+  bytes ciphertext = 2;
+  bytes associated_data = 3;
+}
+
 enum UnsealResult {
   UNSEAL_RESULT_UNSPECIFIED = 0;
   UNSEAL_RESULT_SUCCESS = 1;
@@ -41,6 +112,13 @@ enum UnsealResult {
   UNSEAL_RESULT_UNBOOTSTRAPPED = 3;
 }
+
+enum BootstrapResult {
+  BOOTSTRAP_RESULT_UNSPECIFIED = 0;
+  BOOTSTRAP_RESULT_SUCCESS = 1;
+  BOOTSTRAP_RESULT_ALREADY_BOOTSTRAPPED = 2;
+  BOOTSTRAP_RESULT_INVALID_KEY = 3;
+}
+
 enum VaultState {
   VAULT_STATE_UNSPECIFIED = 0;
   VAULT_STATE_UNBOOTSTRAPPED = 1;
@@ -49,7 +127,18 @@ enum VaultState {
   VAULT_STATE_ERROR = 4;
 }
+
+message SdkClientConnectionRequest {
+  bytes pubkey = 1;
+}
+
+message SdkClientConnectionResponse {
+  bool approved = 1;
+}
+
+message SdkClientConnectionCancel {}
+
 message UserAgentRequest {
+  int32 id = 16;
   oneof payload {
     AuthChallengeRequest auth_challenge_request = 1;
     AuthChallengeSolution auth_challenge_solution = 2;
@@ -58,16 +147,33 @@ message UserAgentRequest {
     google.protobuf.Empty query_vault_state = 5;
     google.protobuf.Empty evm_wallet_create = 6;
     google.protobuf.Empty evm_wallet_list = 7;
+    arbiter.evm.EvmGrantCreateRequest evm_grant_create = 8;
+    arbiter.evm.EvmGrantDeleteRequest evm_grant_delete = 9;
arbiter.evm.EvmGrantListRequest evm_grant_list = 10;
|
||||||
|
SdkClientConnectionResponse sdk_client_connection_response = 11;
|
||||||
|
SdkClientApproveRequest sdk_client_approve = 12;
|
||||||
|
SdkClientRevokeRequest sdk_client_revoke = 13;
|
||||||
|
google.protobuf.Empty sdk_client_list = 14;
|
||||||
|
BootstrapEncryptedKey bootstrap_encrypted_key = 15;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
message UserAgentResponse {
|
message UserAgentResponse {
|
||||||
|
optional int32 id = 16;
|
||||||
oneof payload {
|
oneof payload {
|
||||||
AuthChallenge auth_challenge = 1;
|
AuthChallenge auth_challenge = 1;
|
||||||
AuthOk auth_ok = 2;
|
AuthResult auth_result = 2;
|
||||||
UnsealStartResponse unseal_start_response = 3;
|
UnsealStartResponse unseal_start_response = 3;
|
||||||
UnsealResult unseal_result = 4;
|
UnsealResult unseal_result = 4;
|
||||||
VaultState vault_state = 5;
|
VaultState vault_state = 5;
|
||||||
arbiter.evm.WalletCreateResponse evm_wallet_create = 6;
|
arbiter.evm.WalletCreateResponse evm_wallet_create = 6;
|
||||||
arbiter.evm.WalletListResponse evm_wallet_list = 7;
|
arbiter.evm.WalletListResponse evm_wallet_list = 7;
|
||||||
|
arbiter.evm.EvmGrantCreateResponse evm_grant_create = 8;
|
||||||
|
arbiter.evm.EvmGrantDeleteResponse evm_grant_delete = 9;
|
||||||
|
arbiter.evm.EvmGrantListResponse evm_grant_list = 10;
|
||||||
|
SdkClientConnectionResponse sdk_client_connection_response = 11;
|
||||||
|
SdkClientApproveResponse sdk_client_approve_response = 12;
|
||||||
|
SdkClientRevokeResponse sdk_client_revoke_response = 13;
|
||||||
|
SdkClientListResponse sdk_client_list_response = 14;
|
||||||
|
BootstrapResult bootstrap_result = 15;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
BIN scripts/__pycache__/gen_erc20_registry.cpython-314.pyc Normal file
Binary file not shown.
13 server/.cargo/audit.toml Normal file
@@ -0,0 +1,13 @@
[advisories]
# RUSTSEC-2023-0071: Marvin Attack timing side-channel in rsa crate.
# No fixed version is available upstream.
# RSA support is required for Windows Hello / KeyCredentialManager
# (https://learn.microsoft.com/en-us/uwp/api/windows.security.credentials.keycredentialmanager.requestcreateasync),
# which only issues RSA-2048 keys.
# Mitigations in place:
# - Signing uses BlindedSigningKey (PSS+SHA-256), which applies blinding to
#   protect the private key from timing recovery during signing.
# - RSA decryption is never performed; we only verify public-key signatures.
# - The attack requires local, high-resolution timing access against the
#   signing process, which is not exposed in our threat model.
ignore = ["RUSTSEC-2023-0071"]
590 server/Cargo.lock generated
File diff suppressed because it is too large Load Diff
@@ -1,30 +1,32 @@
 [workspace]
-members = [
-    "crates/*",
-]
+members = ["crates/*"]
 resolver = "3"
+
+[workspace.lints.clippy]
+disallowed-methods = "deny"
+
 [workspace.dependencies]
-tonic = { version = "0.14.3", features = [
+tonic = { version = "0.14.5", features = [
     "deflate",
     "gzip",
     "tls-connect-info",
     "zstd",
 ] }
 tracing = "0.1.44"
-tokio = { version = "1.49.0", features = ["full"] }
+tokio = { version = "1.50.0", features = ["full"] }
 ed25519-dalek = { version = "3.0.0-pre.6", features = ["rand_core"] }
-chrono = { version = "0.4.43", features = ["serde"] }
+chrono = { version = "0.4.44", features = ["serde"] }
 rand = "0.10.0"
-rustls = "0.23.36"
+rustls = { version = "0.23.37", features = ["aws-lc-rs"] }
 smlang = "0.8.0"
 miette = { version = "7.6.0", features = ["fancy", "serde"] }
 thiserror = "2.0.18"
 async-trait = "0.1.89"
-futures = "0.3.31"
+futures = "0.3.32"
 tokio-stream = { version = "0.1.18", features = ["full"] }
 kameo = "0.19.2"
+prost-types = { version = "0.14.3", features = ["chrono"] }
 x25519-dalek = { version = "2.0.1", features = ["getrandom"] }
 rstest = "0.26.1"
 rustls-pki-types = "1.14.0"
@@ -35,3 +37,8 @@ rcgen = { version = "0.14.7", features = [
     "x509-parser",
     "zeroize",
 ], default-features = false }
+k256 = { version = "0.13.4", features = ["ecdsa", "pkcs8"] }
+rsa = { version = "0.9", features = ["sha2"] }
+sha2 = "0.10"
+spki = "0.7"
+terrors = { version = "0.5", git = "https://github.com/CleverWild/terrors" }
9 server/clippy.toml Normal file
@@ -0,0 +1,9 @@
disallowed-methods = [
    # RSA decryption is forbidden: the rsa crate has RUSTSEC-2023-0071 (Marvin Attack).
    # We only use RSA for Windows Hello (KeyCredentialManager) public-key verification — decryption
    # is never required and must not be introduced.
    { path = "rsa::RsaPrivateKey::decrypt", reason = "RSA decryption is forbidden (RUSTSEC-2023-0071 Marvin Attack). Only PSS signing/verification is permitted." },
    { path = "rsa::RsaPrivateKey::decrypt_blinded", reason = "RSA decryption is forbidden (RUSTSEC-2023-0071 Marvin Attack). Only PSS signing/verification is permitted." },
    { path = "rsa::traits::Decryptor::decrypt", reason = "RSA decryption is forbidden (RUSTSEC-2023-0071 Marvin Attack). This blocks decrypt() on rsa::{pkcs1v15,oaep}::DecryptingKey." },
    { path = "rsa::traits::RandomizedDecryptor::decrypt_with_rng", reason = "RSA decryption is forbidden (RUSTSEC-2023-0071 Marvin Attack). This blocks decrypt_with_rng() on rsa::{pkcs1v15,oaep}::DecryptingKey." },
]
@@ -5,4 +5,23 @@ edition = "2024"
 repository = "https://git.markettakers.org/MarketTakers/arbiter"
 license = "Apache-2.0"
+
+[lints]
+workspace = true
+
+[features]
+evm = ["dep:alloy"]
+
 [dependencies]
+arbiter-proto.path = "../arbiter-proto"
+alloy = { workspace = true, optional = true }
+tonic.workspace = true
+tonic.features = ["tls-aws-lc"]
+tokio.workspace = true
+tokio-stream.workspace = true
+ed25519-dalek.workspace = true
+thiserror.workspace = true
+http = "1.4.0"
+rustls-webpki = { version = "0.103.10", features = ["aws-lc-rs"] }
+async-trait.workspace = true
+rand.workspace = true
+terrors.workspace = true
103 server/crates/arbiter-client/src/auth.rs Normal file
@@ -0,0 +1,103 @@
use arbiter_proto::{
    format_challenge,
    proto::client::{
        AuthChallengeRequest, AuthChallengeSolution, AuthResult, ClientRequest,
        client_request::Payload as ClientRequestPayload,
        client_response::Payload as ClientResponsePayload,
    },
};
use ed25519_dalek::Signer as _;
use terrors::OneOf;

use crate::{
    errors::{
        ConnectError, MissingAuthChallengeError, UnexpectedAuthResponseError, map_auth_code_error,
    },
    transport::{ClientTransport, next_request_id},
};

async fn send_auth_challenge_request(
    transport: &mut ClientTransport,
    key: &ed25519_dalek::SigningKey,
) -> std::result::Result<(), ConnectError> {
    transport
        .send(ClientRequest {
            request_id: next_request_id(),
            payload: Some(ClientRequestPayload::AuthChallengeRequest(
                AuthChallengeRequest {
                    pubkey: key.verifying_key().to_bytes().to_vec(),
                },
            )),
        })
        .await
        .map_err(|_| OneOf::new(UnexpectedAuthResponseError))
}

async fn receive_auth_challenge(
    transport: &mut ClientTransport,
) -> std::result::Result<arbiter_proto::proto::client::AuthChallenge, ConnectError> {
    let response = transport
        .recv()
        .await
        .map_err(|_| OneOf::new(MissingAuthChallengeError))?;

    let payload = response
        .payload
        .ok_or_else(|| OneOf::new(MissingAuthChallengeError))?;
    match payload {
        ClientResponsePayload::AuthChallenge(challenge) => Ok(challenge),
        ClientResponsePayload::AuthResult(result) => Err(map_auth_code_error(result)),
        _ => Err(OneOf::new(UnexpectedAuthResponseError)),
    }
}

async fn send_auth_challenge_solution(
    transport: &mut ClientTransport,
    key: &ed25519_dalek::SigningKey,
    challenge: arbiter_proto::proto::client::AuthChallenge,
) -> std::result::Result<(), ConnectError> {
    let challenge_payload = format_challenge(challenge.nonce, &challenge.pubkey);
    let signature = key.sign(&challenge_payload).to_bytes().to_vec();

    transport
        .send(ClientRequest {
            request_id: next_request_id(),
            payload: Some(ClientRequestPayload::AuthChallengeSolution(
                AuthChallengeSolution { signature },
            )),
        })
        .await
        .map_err(|_| OneOf::new(UnexpectedAuthResponseError))
}

async fn receive_auth_confirmation(
    transport: &mut ClientTransport,
) -> std::result::Result<(), ConnectError> {
    let response = transport
        .recv()
        .await
        .map_err(|_| OneOf::new(UnexpectedAuthResponseError))?;

    let payload = response
        .payload
        .ok_or_else(|| OneOf::new(UnexpectedAuthResponseError))?;
    match payload {
        ClientResponsePayload::AuthResult(result)
            if AuthResult::try_from(result).ok() == Some(AuthResult::Success) =>
        {
            Ok(())
        }
        ClientResponsePayload::AuthResult(result) => Err(map_auth_code_error(result)),
        _ => Err(OneOf::new(UnexpectedAuthResponseError)),
    }
}

pub(crate) async fn authenticate(
    transport: &mut ClientTransport,
    key: &ed25519_dalek::SigningKey,
) -> std::result::Result<(), ConnectError> {
    send_auth_challenge_request(transport, key).await?;
    let challenge = receive_auth_challenge(transport).await?;
    send_auth_challenge_solution(transport, key, challenge).await?;
    receive_auth_confirmation(transport).await
}
82 server/crates/arbiter-client/src/client.rs Normal file
@@ -0,0 +1,82 @@
use arbiter_proto::{proto::arbiter_service_client::ArbiterServiceClient, url::ArbiterUrl};
use std::sync::Arc;
use terrors::{Broaden as _, OneOf};
use tokio::sync::{Mutex, mpsc};
use tokio_stream::wrappers::ReceiverStream;
use tonic::transport::ClientTlsConfig;

use crate::{
    auth::authenticate,
    errors::ConnectError,
    storage::{FileSigningKeyStorage, SigningKeyStorage},
    transport::{BUFFER_LENGTH, ClientTransport},
};

#[cfg(feature = "evm")]
use crate::errors::{ClientConnectionClosedError, ClientError};

#[cfg(feature = "evm")]
use crate::wallets::evm::ArbiterEvmWallet;

pub struct ArbiterClient {
    #[allow(dead_code)]
    transport: Arc<Mutex<ClientTransport>>,
}

impl ArbiterClient {
    pub async fn connect(url: ArbiterUrl) -> Result<Self, ConnectError> {
        let storage = FileSigningKeyStorage::from_default_location().broaden()?;
        Self::connect_with_storage(url, &storage).await
    }

    pub async fn connect_with_storage<S: SigningKeyStorage>(
        url: ArbiterUrl,
        storage: &S,
    ) -> Result<Self, ConnectError> {
        let key = storage.load_or_create().broaden()?;
        Self::connect_with_key(url, key).await
    }

    pub async fn connect_with_key(
        url: ArbiterUrl,
        key: ed25519_dalek::SigningKey,
    ) -> Result<Self, ConnectError> {
        let anchor = webpki::anchor_from_trusted_cert(&url.ca_cert)
            .map_err(OneOf::new)?
            .to_owned();
        let tls = ClientTlsConfig::new().trust_anchor(anchor);

        let channel = tonic::transport::Channel::from_shared(format!("{}:{}", url.host, url.port))
            .map_err(OneOf::new)?
            .tls_config(tls)
            .map_err(OneOf::new)?
            .connect()
            .await
            .map_err(OneOf::new)?;

        let mut client = ArbiterServiceClient::new(channel);
        let (tx, rx) = mpsc::channel(BUFFER_LENGTH);
        let response_stream = client
            .client(ReceiverStream::new(rx))
            .await
            .map_err(OneOf::new)?
            .into_inner();

        let mut transport = ClientTransport {
            sender: tx,
            receiver: response_stream,
        };

        authenticate(&mut transport, &key).await?;

        Ok(Self {
            transport: Arc::new(Mutex::new(transport)),
        })
    }

    #[cfg(feature = "evm")]
    pub async fn evm_wallets(&self) -> Result<Vec<ArbiterEvmWallet>, ClientError> {
        let _ = &self.transport;
        Err(OneOf::new(ClientConnectionClosedError))
    }
}
127 server/crates/arbiter-client/src/errors.rs Normal file
@@ -0,0 +1,127 @@
use terrors::OneOf;
use thiserror::Error;

#[cfg(feature = "evm")]
use alloy::{primitives::ChainId, signers::Error as AlloySignerError};

pub type StorageError = OneOf<(std::io::Error, InvalidKeyLengthError)>;

pub type ConnectError = OneOf<(
    tonic::transport::Error,
    http::uri::InvalidUri,
    webpki::Error,
    tonic::Status,
    MissingAuthChallengeError,
    ApprovalDeniedError,
    NoUserAgentsOnlineError,
    UnexpectedAuthResponseError,
    std::io::Error,
    InvalidKeyLengthError,
)>;

pub type ClientError = OneOf<(tonic::Status, ClientConnectionClosedError)>;

pub(crate) type ClientTransportError =
    OneOf<(TransportChannelClosedError, TransportConnectionClosedError)>;

#[cfg(feature = "evm")]
pub(crate) type EvmWalletError = OneOf<(
    EvmChainIdMismatchError,
    EvmHashSigningUnsupportedError,
    EvmTransactionSigningUnsupportedError,
)>;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("Invalid signing key length in storage: expected {expected} bytes, got {actual} bytes")]
pub struct InvalidKeyLengthError {
    pub expected: usize,
    pub actual: usize,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("Auth challenge was not returned by server")]
pub struct MissingAuthChallengeError;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("Client approval denied by User Agent")]
pub struct ApprovalDeniedError;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("No User Agents online to approve client")]
pub struct NoUserAgentsOnlineError;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("Unexpected auth response payload")]
pub struct UnexpectedAuthResponseError;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("Connection closed by server")]
pub struct ClientConnectionClosedError;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("Transport channel closed")]
pub struct TransportChannelClosedError;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("Connection closed by server")]
pub struct TransportConnectionClosedError;

#[cfg(feature = "evm")]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("Transaction chain id mismatch: signer {signer}, tx {tx}")]
pub struct EvmChainIdMismatchError {
    pub signer: ChainId,
    pub tx: ChainId,
}

#[cfg(feature = "evm")]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("hash-only signing is not supported for ArbiterEvmWallet; use transaction signing")]
pub struct EvmHashSigningUnsupportedError;

#[cfg(feature = "evm")]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]
#[error("transaction signing is not supported by current arbiter.client protocol")]
pub struct EvmTransactionSigningUnsupportedError;

pub(crate) fn map_auth_code_error(code: i32) -> ConnectError {
    use arbiter_proto::proto::client::AuthResult;

    match AuthResult::try_from(code).unwrap_or(AuthResult::Unspecified) {
        AuthResult::ApprovalDenied => OneOf::new(ApprovalDeniedError),
        AuthResult::NoUserAgentsOnline => OneOf::new(NoUserAgentsOnlineError),
        AuthResult::Unspecified
        | AuthResult::Success
        | AuthResult::InvalidKey
        | AuthResult::InvalidSignature
        | AuthResult::Internal => OneOf::new(UnexpectedAuthResponseError),
    }
}

#[cfg(feature = "evm")]
impl From<EvmChainIdMismatchError> for AlloySignerError {
    fn from(value: EvmChainIdMismatchError) -> Self {
        AlloySignerError::TransactionChainIdMismatch {
            signer: value.signer,
            tx: value.tx,
        }
    }
}

#[cfg(feature = "evm")]
impl From<EvmHashSigningUnsupportedError> for AlloySignerError {
    fn from(_value: EvmHashSigningUnsupportedError) -> Self {
        AlloySignerError::other(
            "hash-only signing is not supported for ArbiterEvmWallet; use transaction signing",
        )
    }
}

#[cfg(feature = "evm")]
impl From<EvmTransactionSigningUnsupportedError> for AlloySignerError {
    fn from(_value: EvmTransactionSigningUnsupportedError) -> Self {
        AlloySignerError::other(
            "transaction signing is not supported by current arbiter.client protocol",
        )
    }
}
@@ -1,14 +1,13 @@
-pub fn add(left: u64, right: u64) -> u64 {
-    left + right
-}
+mod auth;
+mod client;
+mod errors;
+mod storage;
+mod transport;
+pub mod wallets;
 
-#[cfg(test)]
-mod tests {
-    use super::*;
-
-    #[test]
-    fn it_works() {
-        let result = add(2, 2);
-        assert_eq!(result, 4);
-    }
-}
+pub use client::ArbiterClient;
+pub use errors::{ClientError, ConnectError, StorageError};
+pub use storage::{FileSigningKeyStorage, SigningKeyStorage};
+
+#[cfg(feature = "evm")]
+pub use wallets::evm::ArbiterEvmWallet;
130 server/crates/arbiter-client/src/storage.rs Normal file
@@ -0,0 +1,130 @@
use arbiter_proto::home_path;
use std::path::{Path, PathBuf};
use terrors::OneOf;

use crate::errors::{InvalidKeyLengthError, StorageError};

pub trait SigningKeyStorage {
    fn load_or_create(&self) -> std::result::Result<ed25519_dalek::SigningKey, StorageError>;
}

#[derive(Debug, Clone)]
pub struct FileSigningKeyStorage {
    path: PathBuf,
}

impl FileSigningKeyStorage {
    pub const DEFAULT_FILE_NAME: &str = "sdk_client_ed25519.key";

    pub fn new(path: impl Into<PathBuf>) -> Self {
        Self { path: path.into() }
    }

    pub fn from_default_location() -> std::result::Result<Self, StorageError> {
        Ok(Self::new(
            home_path()
                .map_err(OneOf::new)?
                .join(Self::DEFAULT_FILE_NAME),
        ))
    }

    fn read_key(path: &Path) -> std::result::Result<ed25519_dalek::SigningKey, StorageError> {
        let bytes = std::fs::read(path).map_err(OneOf::new)?;
        let raw: [u8; 32] = bytes.try_into().map_err(|v: Vec<u8>| {
            OneOf::new(InvalidKeyLengthError {
                expected: 32,
                actual: v.len(),
            })
        })?;
        Ok(ed25519_dalek::SigningKey::from_bytes(&raw))
    }
}

impl SigningKeyStorage for FileSigningKeyStorage {
    fn load_or_create(&self) -> std::result::Result<ed25519_dalek::SigningKey, StorageError> {
        if let Some(parent) = self.path.parent() {
            std::fs::create_dir_all(parent).map_err(OneOf::new)?;
        }

        if self.path.exists() {
            return Self::read_key(&self.path);
        }

        let key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
        let raw_key = key.to_bytes();

        // Use create_new to prevent accidental overwrite if another process creates the key first.
        match std::fs::OpenOptions::new()
            .create_new(true)
            .write(true)
            .open(&self.path)
        {
            Ok(mut file) => {
                use std::io::Write as _;
                file.write_all(&raw_key).map_err(OneOf::new)?;
                Ok(key)
            }
            Err(err) if err.kind() == std::io::ErrorKind::AlreadyExists => {
                Self::read_key(&self.path)
            }
            Err(err) => Err(OneOf::new(err)),
        }
    }
}

#[cfg(test)]
mod tests {
    use super::{FileSigningKeyStorage, SigningKeyStorage};
    use crate::errors::InvalidKeyLengthError;

    fn unique_temp_key_path() -> std::path::PathBuf {
        let nanos = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .expect("clock should be after unix epoch")
            .as_nanos();
        std::env::temp_dir().join(format!(
            "arbiter-client-key-{}-{}.bin",
            std::process::id(),
            nanos
        ))
    }

    #[test]
    fn file_storage_creates_and_reuses_key() {
        let path = unique_temp_key_path();
        let storage = FileSigningKeyStorage::new(path.clone());

        let key_a = storage
            .load_or_create()
            .expect("first load_or_create should create key");
        let key_b = storage
            .load_or_create()
            .expect("second load_or_create should read same key");

        assert_eq!(key_a.to_bytes(), key_b.to_bytes());
        assert!(path.exists());

        std::fs::remove_file(path).expect("temp key file should be removable");
    }

    #[test]
    fn file_storage_rejects_invalid_key_length() {
        let path = unique_temp_key_path();
        std::fs::write(&path, [42u8; 31]).expect("should write invalid key file");
        let storage = FileSigningKeyStorage::new(path.clone());

        let err = storage
            .load_or_create()
            .expect_err("storage should reject non-32-byte key file");

        match err.narrow::<InvalidKeyLengthError, _>() {
            Ok(invalid_len) => {
                assert_eq!(invalid_len.expected, 32);
                assert_eq!(invalid_len.actual, 31);
            }
            Err(other) => panic!("unexpected io error: {other:?}"),
        }

        std::fs::remove_file(path).expect("temp key file should be removable");
    }
}
42 server/crates/arbiter-client/src/transport.rs Normal file
@@ -0,0 +1,42 @@
use arbiter_proto::proto::client::{ClientRequest, ClientResponse};
use std::sync::atomic::{AtomicI32, Ordering};
use terrors::OneOf;
use tokio::sync::mpsc;

use crate::errors::{
    ClientTransportError, TransportChannelClosedError, TransportConnectionClosedError,
};

pub(crate) const BUFFER_LENGTH: usize = 16;
static NEXT_REQUEST_ID: AtomicI32 = AtomicI32::new(1);

pub(crate) fn next_request_id() -> i32 {
    NEXT_REQUEST_ID.fetch_add(1, Ordering::Relaxed)
}

pub(crate) struct ClientTransport {
    pub(crate) sender: mpsc::Sender<ClientRequest>,
    pub(crate) receiver: tonic::Streaming<ClientResponse>,
}

impl ClientTransport {
    pub(crate) async fn send(
        &mut self,
        request: ClientRequest,
    ) -> std::result::Result<(), ClientTransportError> {
        self.sender
            .send(request)
            .await
            .map_err(|_| OneOf::new(TransportChannelClosedError))
    }

    pub(crate) async fn recv(
        &mut self,
    ) -> std::result::Result<ClientResponse, ClientTransportError> {
        match self.receiver.message().await {
            Ok(Some(resp)) => Ok(resp),
            Ok(None) => Err(OneOf::new(TransportConnectionClosedError)),
            Err(_) => Err(OneOf::new(TransportConnectionClosedError)),
        }
    }
}
97 server/crates/arbiter-client/src/wallets/evm.rs Normal file
@@ -0,0 +1,97 @@
use alloy::{
    consensus::SignableTransaction,
    network::TxSigner,
    primitives::{Address, B256, ChainId, Signature},
    signers::{Result, Signer},
};
use async_trait::async_trait;
use std::sync::Arc;
use terrors::OneOf;
use tokio::sync::Mutex;

use crate::{
    errors::{
        EvmChainIdMismatchError, EvmHashSigningUnsupportedError,
        EvmTransactionSigningUnsupportedError, EvmWalletError,
    },
    transport::ClientTransport,
};

pub struct ArbiterEvmWallet {
    transport: Arc<Mutex<ClientTransport>>,
    address: Address,
    chain_id: Option<ChainId>,
}

impl ArbiterEvmWallet {
    #[allow(dead_code)]
    pub(crate) fn new(transport: Arc<Mutex<ClientTransport>>, address: Address) -> Self {
        Self {
            transport,
            address,
            chain_id: None,
        }
    }

    pub fn address(&self) -> Address {
        self.address
    }

    pub fn with_chain_id(mut self, chain_id: ChainId) -> Self {
        self.chain_id = Some(chain_id);
        self
    }

    fn validate_chain_id(
        &self,
        tx: &mut dyn SignableTransaction<Signature>,
    ) -> std::result::Result<(), EvmWalletError> {
        if let Some(chain_id) = self.chain_id
            && !tx.set_chain_id_checked(chain_id)
        {
            return Err(OneOf::new(EvmChainIdMismatchError {
                signer: chain_id,
                tx: tx.chain_id().unwrap(),
            }));
        }

        Ok(())
    }
}

#[async_trait]
impl Signer for ArbiterEvmWallet {
    async fn sign_hash(&self, _hash: &B256) -> Result<Signature> {
        Err(EvmWalletError::new(EvmHashSigningUnsupportedError).into())
    }

    fn address(&self) -> Address {
        self.address
    }

    fn chain_id(&self) -> Option<ChainId> {
        self.chain_id
    }

    fn set_chain_id(&mut self, chain_id: Option<ChainId>) {
        self.chain_id = chain_id;
    }
}

#[async_trait]
impl TxSigner<Signature> for ArbiterEvmWallet {
    fn address(&self) -> Address {
        self.address
    }

    async fn sign_transaction(
        &self,
        tx: &mut dyn SignableTransaction<Signature>,
    ) -> Result<Signature> {
        let _transport = self.transport.lock().await;
        self.validate_chain_id(tx)
            .map_err(OneOf::into::<alloy::signers::Error>)?;

        Err(EvmWalletError::new(EvmTransactionSigningUnsupportedError).into())
    }
}
  2  server/crates/arbiter-client/src/wallets/mod.rs  Normal file
@@ -0,0 +1,2 @@
#[cfg(feature = "evm")]
pub mod evm;
@@ -9,7 +9,8 @@ license = "Apache-2.0"
 tonic.workspace = true
 tokio.workspace = true
 futures.workspace = true
-tonic-prost = "0.14.3"
+hex = "0.4.3"
+tonic-prost = "0.14.5"
 prost = "0.14.3"
 kameo.workspace = true
 url = "2.5.8"
@@ -17,11 +18,14 @@ miette.workspace = true
 thiserror.workspace = true
 rustls-pki-types.workspace = true
 base64 = "0.22.1"
+prost-types.workspace = true
 tracing.workspace = true
 async-trait.workspace = true
+tokio-stream.workspace = true

 [build-dependencies]
-tonic-prost-build = "0.14.3"
+tonic-prost-build = "0.14.5"
+protoc-bin-vendored = "3"

 [dev-dependencies]
 rstest.workspace = true
@@ -30,5 +34,3 @@ rcgen.workspace = true

 [package.metadata.cargo-shear]
 ignored = ["tonic-prost", "prost", "kameo"]
-
-
@@ -1,23 +1,32 @@
+use std::path::PathBuf;
 use tonic_prost_build::configure;

 static PROTOBUF_DIR: &str = "../../../protobufs";

 fn main() -> Result<(), Box<dyn std::error::Error>> {
-    println!("cargo::rerun-if-changed={PROTOBUF_DIR}");
+    let manifest_dir = PathBuf::from(std::env::var("CARGO_MANIFEST_DIR")?);
+    let protobuf_dir = manifest_dir.join(PROTOBUF_DIR);
+    let protoc_include = protoc_bin_vendored::include_path()?;
+    let protoc_path = protoc_bin_vendored::protoc_bin_path()?;
+
+    unsafe {
+        std::env::set_var("PROTOC", &protoc_path);
+        std::env::set_var("PROTOC_INCLUDE", &protoc_include);
+    }
+
+    println!("cargo::rerun-if-changed={}", protobuf_dir.display());
+
     configure()
         .message_attribute(".", "#[derive(::kameo::Reply)]")
+        .compile_well_known_types(true)
         .compile_protos(
             &[
-                format!("{}/arbiter.proto", PROTOBUF_DIR),
-                format!("{}/user_agent.proto", PROTOBUF_DIR),
-                format!("{}/client.proto", PROTOBUF_DIR),
-                format!("{}/evm.proto", PROTOBUF_DIR),
+                protobuf_dir.join("arbiter.proto"),
+                protobuf_dir.join("user_agent.proto"),
+                protobuf_dir.join("client.proto"),
+                protobuf_dir.join("evm.proto"),
             ],
-            &[PROTOBUF_DIR.to_string()],
-        )
-        .unwrap();
+            &[protobuf_dir],
+        )?;
+
     Ok(())
 }
@@ -3,6 +3,12 @@ pub mod url;

 use base64::{Engine, prelude::BASE64_STANDARD};

+pub mod google {
+    pub mod protobuf {
+        tonic::include_proto!("google.protobuf");
+    }
+}
+
 pub mod proto {
     tonic::include_proto!("arbiter");
@@ -1,78 +1,59 @@
-//! Transport-facing abstractions for protocol/session code.
+//! Transport-facing abstractions shared by protocol/session code.
 //!
-//! This module separates three concerns:
+//! This module defines a small set of transport traits that actors and other
+//! protocol code can depend on without knowing anything about the concrete
+//! transport underneath.
 //!
-//! - protocol/session logic wants a small duplex interface ([`Bi`])
-//! - transport adapters push concrete stream items to an underlying IO layer
-//! - transport boundaries translate between protocol-facing and transport-facing
-//!   item types via direction-specific converters
+//! The abstraction is split into:
+//! - [`Sender`] for outbound delivery
+//! - [`Receiver`] for inbound delivery
+//! - [`Bi`] as the combined duplex form (`Sender + Receiver`)
 //!
-//! [`Bi`] is intentionally minimal and transport-agnostic:
-//! - [`Bi::recv`] yields inbound protocol messages
-//! - [`Bi::send`] accepts outbound protocol/domain items
+//! This split lets code depend only on the half it actually needs. For
+//! example, some actor/session code only sends out-of-band messages, while
+//! auth/state-machine code may need full duplex access.
+//!
+//! [`Bi`] remains intentionally minimal and transport-agnostic:
+//! - [`Receiver::recv`] yields inbound messages
+//! - [`Sender::send`] accepts outbound messages
+//!
+//! Transport-specific adapters, including protobuf or gRPC bridges, live in the
+//! crates that own those boundaries rather than in `arbiter-proto`.
+//!
+//! [`Bi`] deliberately does not model request/response correlation. Some
+//! transports may carry multiplexed request/response traffic, some may emit
+//! out-of-band messages, and some may be one-message-at-a-time state machines.
+//! Correlation concerns such as request IDs, pending response maps, and
+//! out-of-band routing belong in the adapter or connection layer built on top
+//! of [`Bi`], not in this abstraction itself.
 //!
 //! # Generic Ordering Rule
 //!
-//! This module uses a single convention consistently: when a type or trait is
-//! parameterized by protocol message directions, the generic parameters are
-//! declared as `Inbound` first, then `Outbound`.
+//! This module consistently uses `Inbound` first and `Outbound` second in
+//! generic parameter lists.
 //!
-//! For [`Bi`], that means `Bi<Inbound, Outbound>`:
+//! For [`Receiver`], [`Sender`], and [`Bi`], this means:
+//! - `Receiver<Inbound>`
+//! - `Sender<Outbound>`
+//! - `Bi<Inbound, Outbound>`
+//!
+//! Concretely, for [`Bi`]:
 //! - `recv() -> Option<Inbound>`
 //! - `send(Outbound)`
 //!
-//! For adapter types that are parameterized by direction-specific converters,
-//! inbound-related converter parameters are declared before outbound-related
-//! converter parameters.
+//! [`expect_message`] is a small helper for linear protocol steps: it reads one
+//! inbound message from a transport and extracts a typed value from it, failing
+//! if the channel closes or the message shape is not what the caller expected.
 //!
-//! [`RecvConverter`] and [`SendConverter`] are infallible conversion traits used
-//! by adapters to map between protocol-facing and transport-facing item types.
-//! The traits themselves are not result-aware; adapters decide how transport
-//! errors are handled before (or instead of) conversion.
-//!
-//! [`grpc::GrpcAdapter`] combines:
-//! - a tonic inbound stream
-//! - a Tokio sender for outbound transport items
-//! - a [`RecvConverter`] for the receive path
-//! - a [`SendConverter`] for the send path
-//!
-//! [`DummyTransport`] is a no-op implementation useful for tests and local actor
-//! execution where no real network stream exists.
-//!
-//! # Component Interaction
-//!
-//! ```text
-//! inbound (network -> protocol)
-//! ============================
-//!
-//! tonic::Streaming<RecvTransport>
-//!   -> grpc::GrpcAdapter::recv()
-//!        |
-//!        +--> on `Ok(item)`: RecvConverter::convert(RecvTransport) -> Inbound
-//!        +--> on `Err(status)`: log error and close stream (`None`)
-//!   -> Bi::recv()
-//!   -> protocol/session actor
-//!
-//! outbound (protocol -> network)
-//! ==============================
-//!
-//! protocol/session actor
-//!   -> Bi::send(Outbound)
-//!   -> grpc::GrpcAdapter::send()
-//!        |
-//!        +--> SendConverter::convert(Outbound) -> SendTransport
-//!   -> Tokio mpsc::Sender<SendTransport>
-//!   -> tonic response stream
-//! ```
+//! [`DummyTransport`] is a no-op implementation useful for tests and local
+//! actor execution where no real stream exists.
 //!
 //! # Design Notes
 //!
-//! - `send()` returns [`Error`] only for transport delivery failures (for
-//!   example, when the outbound channel is closed).
-//! - [`grpc::GrpcAdapter`] logs tonic receive errors and treats them as stream
-//!   closure (`None`).
-//! - When protocol-facing and transport-facing types are identical, use
-//!   [`IdentityRecvConverter`] / [`IdentitySendConverter`].
+//! - [`Bi::send`] returns [`Error`] only for transport delivery failures, such
+//!   as a closed outbound channel.
+//! - [`Bi::recv`] returns `None` when the underlying transport closes.
+//! - Message translation is intentionally out of scope for this module.

 use std::marker::PhantomData;

@@ -83,175 +64,54 @@ use async_trait::async_trait;
 pub enum Error {
     #[error("Transport channel is closed")]
     ChannelClosed,
+    #[error("Unexpected message received")]
+    UnexpectedMessage,
+}
+
+/// Receives one message from `transport` and extracts a value from it using
+/// `extractor`. Returns [`Error::ChannelClosed`] if the transport closes and
+/// [`Error::UnexpectedMessage`] if `extractor` returns `None`.
+pub async fn expect_message<T, Inbound, Outbound, Target, F>(
+    transport: &mut T,
+    extractor: F,
+) -> Result<Target, Error>
+where
+    T: Bi<Inbound, Outbound> + ?Sized,
+    F: FnOnce(Inbound) -> Option<Target>,
+{
+    let msg = transport.recv().await.ok_or(Error::ChannelClosed)?;
+    extractor(msg).ok_or(Error::UnexpectedMessage)
+}
+
+#[async_trait]
+pub trait Sender<Outbound>: Send + Sync {
+    async fn send(&mut self, item: Outbound) -> Result<(), Error>;
+}
+
+#[async_trait]
+pub trait Receiver<Inbound>: Send + Sync {
+    async fn recv(&mut self) -> Option<Inbound>;
 }

 /// Minimal bidirectional transport abstraction used by protocol code.
 ///
-/// `Bi<Inbound, Outbound>` models a duplex channel with:
+/// `Bi<Inbound, Outbound>` is the combined duplex form of [`Sender`] and
+/// [`Receiver`].
+///
+/// It models a channel with:
 /// - inbound items of type `Inbound` read via [`Bi::recv`]
 /// - outbound items of type `Outbound` written via [`Bi::send`]
-#[async_trait]
-pub trait Bi<Inbound, Outbound>: Send + Sync + 'static {
-    async fn send(&mut self, item: Outbound) -> Result<(), Error>;
-
-    async fn recv(&mut self) -> Option<Inbound>;
-}
-
-/// Converts transport-facing inbound items into protocol-facing inbound items.
-pub trait RecvConverter: Send + Sync + 'static {
-    type Input;
-    type Output;
-
-    fn convert(&self, item: Self::Input) -> Self::Output;
-}
-
-/// Converts protocol/domain outbound items into transport-facing outbound items.
-pub trait SendConverter: Send + Sync + 'static {
-    type Input;
-    type Output;
-
-    fn convert(&self, item: Self::Input) -> Self::Output;
-}
-
-/// A [`RecvConverter`] that forwards values unchanged.
-pub struct IdentityRecvConverter<T> {
-    _marker: PhantomData<T>,
-}
-
-impl<T> IdentityRecvConverter<T> {
-    pub fn new() -> Self {
-        Self {
-            _marker: PhantomData,
-        }
-    }
-}
-
-impl<T> Default for IdentityRecvConverter<T> {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
-impl<T> RecvConverter for IdentityRecvConverter<T>
-where
-    T: Send + Sync + 'static,
-{
-    type Input = T;
-    type Output = T;
-
-    fn convert(&self, item: Self::Input) -> Self::Output {
-        item
-    }
-}
-
-/// A [`SendConverter`] that forwards values unchanged.
-pub struct IdentitySendConverter<T> {
-    _marker: PhantomData<T>,
-}
-
-impl<T> IdentitySendConverter<T> {
-    pub fn new() -> Self {
-        Self {
-            _marker: PhantomData,
-        }
-    }
-}
-
-impl<T> Default for IdentitySendConverter<T> {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
-impl<T> SendConverter for IdentitySendConverter<T>
-where
-    T: Send + Sync + 'static,
-{
-    type Input = T;
-    type Output = T;
-
-    fn convert(&self, item: Self::Input) -> Self::Output {
-        item
-    }
-}
-
-/// gRPC-specific transport adapters and helpers.
-pub mod grpc {
-    use async_trait::async_trait;
-    use futures::StreamExt;
-    use tokio::sync::mpsc;
-    use tonic::Streaming;
-
-    use super::{Bi, Error, RecvConverter, SendConverter};
-
-    /// [`Bi`] adapter backed by a tonic gRPC bidirectional stream.
-    ///
-    /// Tonic receive errors are logged and treated as stream closure (`None`).
-    /// The receive converter is only invoked for successful inbound transport
-    /// items.
-    pub struct GrpcAdapter<InboundConverter, OutboundConverter>
-    where
-        InboundConverter: RecvConverter,
-        OutboundConverter: SendConverter,
-    {
-        sender: mpsc::Sender<OutboundConverter::Output>,
-        receiver: Streaming<InboundConverter::Input>,
-        inbound_converter: InboundConverter,
-        outbound_converter: OutboundConverter,
-    }
-
-    impl<InboundTransport, Inbound, InboundConverter, OutboundConverter>
-        GrpcAdapter<InboundConverter, OutboundConverter>
-    where
-        InboundConverter: RecvConverter<Input = InboundTransport, Output = Inbound>,
-        OutboundConverter: SendConverter,
-    {
-        pub fn new(
-            sender: mpsc::Sender<OutboundConverter::Output>,
-            receiver: Streaming<InboundTransport>,
-            inbound_converter: InboundConverter,
-            outbound_converter: OutboundConverter,
-        ) -> Self {
-            Self {
-                sender,
-                receiver,
-                inbound_converter,
-                outbound_converter,
-            }
-        }
-    }
-
-    #[async_trait]
-    impl<InboundConverter, OutboundConverter> Bi<InboundConverter::Output, OutboundConverter::Input>
-        for GrpcAdapter<InboundConverter, OutboundConverter>
-    where
-        InboundConverter: RecvConverter,
-        OutboundConverter: SendConverter,
-        OutboundConverter::Input: Send + 'static,
-        OutboundConverter::Output: Send + 'static,
-    {
-        #[tracing::instrument(level = "trace", skip(self, item))]
-        async fn send(&mut self, item: OutboundConverter::Input) -> Result<(), Error> {
-            let outbound = self.outbound_converter.convert(item);
-            self.sender
-                .send(outbound)
-                .await
-                .map_err(|_| Error::ChannelClosed)
-        }
-
-        #[tracing::instrument(level = "trace", skip(self))]
-        async fn recv(&mut self) -> Option<InboundConverter::Output> {
-            match self.receiver.next().await {
-                Some(Ok(item)) => Some(self.inbound_converter.convert(item)),
-                Some(Err(error)) => {
-                    tracing::error!(error = ?error, "grpc transport recv failed; closing stream");
-                    None
-                }
-                None => None,
-            }
-        }
-    }
-}
+/// It does not imply request/response sequencing, one-at-a-time exchange, or
+/// any built-in correlation mechanism between inbound and outbound items.
+pub trait Bi<Inbound, Outbound>: Sender<Outbound> + Receiver<Inbound> + Send + Sync {}
+
+pub trait SplittableBi<Inbound, Outbound>: Bi<Inbound, Outbound> {
+    type Sender: Sender<Outbound>;
+    type Receiver: Receiver<Inbound>;
+
+    fn split(self) -> (Self::Sender, Self::Receiver);
+    fn from_parts(sender: Self::Sender, receiver: Self::Receiver) -> Self;
+}

 /// No-op [`Bi`] transport for tests and manual actor usage.
@@ -262,22 +122,16 @@ pub struct DummyTransport<Inbound, Outbound> {
     _marker: PhantomData<(Inbound, Outbound)>,
 }

-impl<Inbound, Outbound> DummyTransport<Inbound, Outbound> {
-    pub fn new() -> Self {
+impl<Inbound, Outbound> Default for DummyTransport<Inbound, Outbound> {
+    fn default() -> Self {
         Self {
             _marker: PhantomData,
         }
     }
 }

-impl<Inbound, Outbound> Default for DummyTransport<Inbound, Outbound> {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
 #[async_trait]
-impl<Inbound, Outbound> Bi<Inbound, Outbound> for DummyTransport<Inbound, Outbound>
+impl<Inbound, Outbound> Sender<Outbound> for DummyTransport<Inbound, Outbound>
 where
     Inbound: Send + Sync + 'static,
     Outbound: Send + Sync + 'static,
@@ -285,9 +139,25 @@ where
     async fn send(&mut self, _item: Outbound) -> Result<(), Error> {
         Ok(())
     }
+}
+
+#[async_trait]
+impl<Inbound, Outbound> Receiver<Inbound> for DummyTransport<Inbound, Outbound>
+where
+    Inbound: Send + Sync + 'static,
+    Outbound: Send + Sync + 'static,
+{
     async fn recv(&mut self) -> Option<Inbound> {
         std::future::pending::<()>().await;
         None
     }
 }
+
+impl<Inbound, Outbound> Bi<Inbound, Outbound> for DummyTransport<Inbound, Outbound>
+where
+    Inbound: Send + Sync + 'static,
+    Outbound: Send + Sync + 'static,
+{
+}
+
+pub mod grpc;
  106  server/crates/arbiter-proto/src/transport/grpc.rs  Normal file
@@ -0,0 +1,106 @@
use async_trait::async_trait;
use futures::StreamExt;
use tokio::sync::mpsc;
use tokio_stream::wrappers::ReceiverStream;

use super::{Bi, Receiver, Sender};

pub struct GrpcSender<Outbound> {
    tx: mpsc::Sender<Result<Outbound, tonic::Status>>,
}

#[async_trait]
impl<Outbound> Sender<Result<Outbound, tonic::Status>> for GrpcSender<Outbound>
where
    Outbound: Send + Sync + 'static,
{
    async fn send(&mut self, item: Result<Outbound, tonic::Status>) -> Result<(), super::Error> {
        self.tx
            .send(item)
            .await
            .map_err(|_| super::Error::ChannelClosed)
    }
}

pub struct GrpcReceiver<Inbound> {
    rx: tonic::Streaming<Inbound>,
}

#[async_trait]
impl<Inbound> Receiver<Result<Inbound, tonic::Status>> for GrpcReceiver<Inbound>
where
    Inbound: Send + Sync + 'static,
{
    async fn recv(&mut self) -> Option<Result<Inbound, tonic::Status>> {
        self.rx.next().await
    }
}

pub struct GrpcBi<Inbound, Outbound> {
    sender: GrpcSender<Outbound>,
    receiver: GrpcReceiver<Inbound>,
}

impl<Inbound, Outbound> GrpcBi<Inbound, Outbound>
where
    Inbound: Send + Sync + 'static,
    Outbound: Send + Sync + 'static,
{
    pub fn from_bi_stream(
        receiver: tonic::Streaming<Inbound>,
    ) -> (Self, ReceiverStream<Result<Outbound, tonic::Status>>) {
        let (tx, rx) = mpsc::channel(10);
        let sender = GrpcSender { tx };
        let receiver = GrpcReceiver { rx: receiver };
        let bi = GrpcBi { sender, receiver };
        (bi, ReceiverStream::new(rx))
    }
}

#[async_trait]
impl<Inbound, Outbound> Sender<Result<Outbound, tonic::Status>> for GrpcBi<Inbound, Outbound>
where
    Inbound: Send + Sync + 'static,
    Outbound: Send + Sync + 'static,
{
    async fn send(&mut self, item: Result<Outbound, tonic::Status>) -> Result<(), super::Error> {
        self.sender.send(item).await
    }
}

#[async_trait]
impl<Inbound, Outbound> Receiver<Result<Inbound, tonic::Status>> for GrpcBi<Inbound, Outbound>
where
    Inbound: Send + Sync + 'static,
    Outbound: Send + Sync + 'static,
{
    async fn recv(&mut self) -> Option<Result<Inbound, tonic::Status>> {
        self.receiver.recv().await
    }
}

impl<Inbound, Outbound> Bi<Result<Inbound, tonic::Status>, Result<Outbound, tonic::Status>>
    for GrpcBi<Inbound, Outbound>
where
    Inbound: Send + Sync + 'static,
    Outbound: Send + Sync + 'static,
{
}

impl<Inbound, Outbound>
    super::SplittableBi<Result<Inbound, tonic::Status>, Result<Outbound, tonic::Status>>
    for GrpcBi<Inbound, Outbound>
where
    Inbound: Send + Sync + 'static,
    Outbound: Send + Sync + 'static,
{
    type Sender = GrpcSender<Outbound>;
    type Receiver = GrpcReceiver<Inbound>;

    fn split(self) -> (Self::Sender, Self::Receiver) {
        (self.sender, self.receiver)
    }

    fn from_parts(sender: Self::Sender, receiver: Self::Receiver) -> Self {
        GrpcBi { sender, receiver }
    }
}
@@ -20,7 +20,7 @@ impl Display for ArbiterUrl {
         "{ARBITER_URL_SCHEME}://{}:{}?{CERT_QUERY_KEY}={}",
         self.host,
         self.port,
-        BASE64_URL_SAFE.encode(self.ca_cert.to_vec())
+        BASE64_URL_SAFE.encode(&self.ca_cert)
     );
     if let Some(token) = &self.bootstrap_token {
         base.push_str(&format!("&{BOOTSTRAP_TOKEN_QUERY_KEY}={}", token));
@@ -5,9 +5,12 @@ edition = "2024"
 repository = "https://git.markettakers.org/MarketTakers/arbiter"
 license = "Apache-2.0"

+[lints]
+workspace = true
+
 [dependencies]
-diesel = { version = "2.3.6", features = ["chrono", "returning_clauses_for_sqlite_3_35", "serde_json", "time", "uuid"] }
-diesel-async = { version = "0.7.4", features = [
+diesel = { version = "2.3.7", features = ["chrono", "returning_clauses_for_sqlite_3_35", "serde_json", "time", "uuid"] }
+diesel-async = { version = "0.8.0", features = [
     "bb8",
     "migrations",
     "sqlite",
@@ -24,6 +27,7 @@ rustls.workspace = true
 smlang.workspace = true
 miette.workspace = true
 thiserror.workspace = true
+fatality = "0.1.1"
 diesel_migrations = { version = "2.3.1", features = ["sqlite"] }
 async-trait.workspace = true
 secrecy = "0.10.3"
@@ -40,10 +44,14 @@ x25519-dalek.workspace = true
 chacha20poly1305 = { version = "0.10.1", features = ["std"] }
 argon2 = { version = "0.5.3", features = ["zeroize"] }
 restructed = "0.2.2"
-strum = { version = "0.27.2", features = ["derive"] }
+strum = { version = "0.28.0", features = ["derive"] }
 pem = "3.0.6"
-k256 = "0.13.4"
+k256.workspace = true
+rsa.workspace = true
+sha2.workspace = true
+spki.workspace = true
 alloy.workspace = true
+prost-types.workspace = true
 arbiter-tokens-registry.path = "../arbiter-tokens-registry"

 [dev-dependencies]
@@ -46,6 +46,7 @@ create table if not exists useragent_client (
     id integer not null primary key,
     nonce integer not null default(1), -- used for auth challenge
     public_key blob not null,
+    key_type integer not null default(1), -- 1=Ed25519, 2=ECDSA(secp256k1)
     created_at integer not null default(unixepoch ('now')),
     updated_at integer not null default(unixepoch ('now'))
 ) STRICT;
@@ -156,3 +157,5 @@ create table if not exists evm_ether_transfer_grant_target (

 create unique index if not exists uniq_ether_transfer_target on evm_ether_transfer_grant_target(grant_id, address);
+CREATE UNIQUE INDEX program_client_public_key_unique
+    ON program_client (public_key);
@@ -3,12 +3,7 @@ use diesel::QueryDsl;
 use diesel_async::RunQueryDsl;
 use kameo::{Actor, messages};
 use miette::Diagnostic;
-use rand::{
-    RngExt,
-    distr::{Alphanumeric},
-    make_rng,
-    rngs::StdRng,
-};
+use rand::{RngExt, distr::Alphanumeric, make_rng, rngs::StdRng};
 use thiserror::Error;

 use crate::db::{self, DatabasePool, schema};
@@ -61,7 +56,6 @@ impl Bootstrapper {

     drop(conn);

-
     let token = if row_count == 0 {
         let token = generate_token().await?;
         Some(token)
server/crates/arbiter-server/src/actors/client/auth.rs (new file, 249 lines)
@@ -0,0 +1,249 @@
+use arbiter_proto::{
+    format_challenge,
+    transport::{Bi, expect_message},
+};
+use diesel::{
+    ExpressionMethods as _, OptionalExtension as _, QueryDsl as _, dsl::insert_into, update,
+};
+use diesel_async::RunQueryDsl as _;
+use ed25519_dalek::{Signature, VerifyingKey};
+use kameo::error::SendError;
+use tracing::error;
+
+use crate::{
+    actors::{
+        client::ClientConnection,
+        router::{self, RequestClientApproval},
+    },
+    db::{self, schema::program_client},
+};
+
+#[derive(thiserror::Error, Debug, Clone, PartialEq, Eq)]
+pub enum Error {
+    #[error("Database pool unavailable")]
+    DatabasePoolUnavailable,
+    #[error("Database operation failed")]
+    DatabaseOperationFailed,
+    #[error("Invalid challenge solution")]
+    InvalidChallengeSolution,
+    #[error("Client approval request failed")]
+    ApproveError(#[from] ApproveError),
+    #[error("Transport error")]
+    Transport,
+}
+
+#[derive(thiserror::Error, Debug, Clone, PartialEq, Eq)]
+pub enum ApproveError {
+    #[error("Internal error")]
+    Internal,
+    #[error("Client connection denied by user agents")]
+    Denied,
+    #[error("Upstream error: {0}")]
+    Upstream(router::ApprovalError),
+}
+
+#[derive(Debug, Clone)]
+pub enum Inbound {
+    AuthChallengeRequest { pubkey: VerifyingKey },
+    AuthChallengeSolution { signature: Signature },
+}
+
+#[derive(Debug, Clone)]
+pub enum Outbound {
+    AuthChallenge { pubkey: VerifyingKey, nonce: i32 },
+    AuthSuccess,
+}
+
+/// Atomically reads and increments the nonce for a known client.
+/// Returns `None` if the pubkey is not registered.
+async fn get_nonce(db: &db::DatabasePool, pubkey: &VerifyingKey) -> Result<Option<i32>, Error> {
+    let pubkey_bytes = pubkey.as_bytes().to_vec();
+
+    let mut conn = db.get().await.map_err(|e| {
+        error!(error = ?e, "Database pool error");
+        Error::DatabasePoolUnavailable
+    })?;
+
+    conn.exclusive_transaction(|conn| {
+        let pubkey_bytes = pubkey_bytes.clone();
+        Box::pin(async move {
+            let Some((client_id, current_nonce)) = program_client::table
+                .filter(program_client::public_key.eq(&pubkey_bytes))
+                .select((program_client::id, program_client::nonce))
+                .first::<(i32, i32)>(conn)
+                .await
+                .optional()?
+            else {
+                return Result::<_, diesel::result::Error>::Ok(None);
+            };
+
+            update(program_client::table)
+                .filter(program_client::public_key.eq(&pubkey_bytes))
+                .set(program_client::nonce.eq(current_nonce + 1))
+                .execute(conn)
+                .await?;
+
+            let _ = client_id;
+            Ok(Some(current_nonce))
+        })
+    })
+    .await
+    .map_err(|e| {
+        error!(error = ?e, "Database error");
+        Error::DatabaseOperationFailed
+    })
+}
+
+async fn approve_new_client(
+    actors: &crate::actors::GlobalActors,
+    pubkey: VerifyingKey,
+) -> Result<(), Error> {
+    let result = actors
+        .router
+        .ask(RequestClientApproval {
+            client_pubkey: pubkey,
+        })
+        .await;
+
+    match result {
+        Ok(true) => Ok(()),
+        Ok(false) => Err(Error::ApproveError(ApproveError::Denied)),
+        Err(SendError::HandlerError(e)) => {
+            error!(error = ?e, "Approval upstream error");
+            Err(Error::ApproveError(ApproveError::Upstream(e)))
+        }
+        Err(e) => {
+            error!(error = ?e, "Approval request to router failed");
+            Err(Error::ApproveError(ApproveError::Internal))
+        }
+    }
+}
+
+enum InsertClientResult {
+    Inserted,
+    AlreadyExists,
+}
+
+async fn insert_client(
+    db: &db::DatabasePool,
+    pubkey: &VerifyingKey,
+) -> Result<InsertClientResult, Error> {
+    let now = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)
+        .unwrap_or_default()
+        .as_secs() as i32;
+
+    let mut conn = db.get().await.map_err(|e| {
+        error!(error = ?e, "Database pool error");
+        Error::DatabasePoolUnavailable
+    })?;
+
+    match insert_into(program_client::table)
+        .values((
+            program_client::public_key.eq(pubkey.as_bytes().to_vec()),
+            program_client::nonce.eq(1), // pre-incremented; challenge uses 0
+            program_client::created_at.eq(now),
+            program_client::updated_at.eq(now),
+        ))
+        .execute(&mut conn)
+        .await
+    {
+        Ok(_) => {}
+        Err(diesel::result::Error::DatabaseError(
+            diesel::result::DatabaseErrorKind::UniqueViolation,
+            _,
+        )) => return Ok(InsertClientResult::AlreadyExists),
+        Err(e) => {
+            error!(error = ?e, "Failed to insert new client");
+            return Err(Error::DatabaseOperationFailed);
+        }
+    }
+
+    let client_id = program_client::table
+        .filter(program_client::public_key.eq(pubkey.as_bytes().to_vec()))
+        .order(program_client::id.desc())
+        .select(program_client::id)
+        .first::<i32>(&mut conn)
+        .await
+        .map_err(|e| {
+            error!(error = ?e, "Failed to load inserted client id");
+            Error::DatabaseOperationFailed
+        })?;
+
+    let _ = client_id;
+    Ok(InsertClientResult::Inserted)
+}
+
+async fn challenge_client<T>(
+    transport: &mut T,
+    pubkey: VerifyingKey,
+    nonce: i32,
+) -> Result<(), Error>
+where
+    T: Bi<Inbound, Result<Outbound, Error>> + ?Sized,
+{
+    transport
+        .send(Ok(Outbound::AuthChallenge { pubkey, nonce }))
+        .await
+        .map_err(|e| {
+            error!(error = ?e, "Failed to send auth challenge");
+            Error::Transport
+        })?;
+
+    let signature = expect_message(transport, |req: Inbound| match req {
+        Inbound::AuthChallengeSolution { signature } => Some(signature),
+        _ => None,
+    })
+    .await
+    .map_err(|e| {
+        error!(error = ?e, "Failed to receive challenge solution");
+        Error::Transport
+    })?;
+
+    let formatted = format_challenge(nonce, pubkey.as_bytes());
+
+    pubkey.verify_strict(&formatted, &signature).map_err(|_| {
+        error!("Challenge solution verification failed");
+        Error::InvalidChallengeSolution
+    })?;
+
+    Ok(())
+}
+
+pub async fn authenticate<T>(
+    props: &mut ClientConnection,
+    transport: &mut T,
+) -> Result<VerifyingKey, Error>
+where
+    T: Bi<Inbound, Result<Outbound, Error>> + Send + ?Sized,
+{
+    let Some(Inbound::AuthChallengeRequest { pubkey }) = transport.recv().await
+    else {
+        return Err(Error::Transport);
+    };
+
+    let nonce = match get_nonce(&props.db, &pubkey).await? {
+        Some(nonce) => nonce,
+        None => {
+            approve_new_client(&props.actors, pubkey).await?;
+            match insert_client(&props.db, &pubkey).await? {
+                InsertClientResult::Inserted => 0,
+                InsertClientResult::AlreadyExists => match get_nonce(&props.db, &pubkey).await? {
+                    Some(nonce) => nonce,
+                    None => return Err(Error::DatabaseOperationFailed),
+                },
+            }
+        }
+    };
+
+    challenge_client(transport, pubkey, nonce).await?;
+
+    transport
+        .send(Ok(Outbound::AuthSuccess))
+        .await
+        .map_err(|e| {
+            error!(error = ?e, "Failed to send auth success");
+            Error::Transport
+        })?;
+
+    Ok(pubkey)
+}
@@ -1,101 +0,0 @@
-use arbiter_proto::proto::client::{
-    AuthChallengeRequest, AuthChallengeSolution, ClientRequest,
-    client_request::Payload as ClientRequestPayload,
-};
-use ed25519_dalek::VerifyingKey;
-use tracing::error;
-
-use crate::actors::client::{
-    ClientConnection,
-    auth::state::{AuthContext, AuthStateMachine},
-    session::ClientSession,
-};
-
-#[derive(thiserror::Error, Debug, Clone, PartialEq, Eq)]
-pub enum Error {
-    #[error("Unexpected message payload")]
-    UnexpectedMessagePayload,
-    #[error("Invalid client public key length")]
-    InvalidClientPubkeyLength,
-    #[error("Invalid client public key encoding")]
-    InvalidAuthPubkeyEncoding,
-    #[error("Database pool unavailable")]
-    DatabasePoolUnavailable,
-    #[error("Database operation failed")]
-    DatabaseOperationFailed,
-    #[error("Public key not registered")]
-    PublicKeyNotRegistered,
-    #[error("Invalid signature length")]
-    InvalidSignatureLength,
-    #[error("Invalid challenge solution")]
-    InvalidChallengeSolution,
-    #[error("Transport error")]
-    Transport,
-}
-
-mod state;
-use state::*;
-
-fn parse_auth_event(payload: ClientRequestPayload) -> Result<AuthEvents, Error> {
-    match payload {
-        ClientRequestPayload::AuthChallengeRequest(AuthChallengeRequest { pubkey }) => {
-            let pubkey_bytes = pubkey.as_array().ok_or(Error::InvalidClientPubkeyLength)?;
-            let pubkey = VerifyingKey::from_bytes(pubkey_bytes)
-                .map_err(|_| Error::InvalidAuthPubkeyEncoding)?;
-            Ok(AuthEvents::AuthRequest(ChallengeRequest {
-                pubkey: pubkey.into(),
-            }))
-        }
-        ClientRequestPayload::AuthChallengeSolution(AuthChallengeSolution { signature }) => {
-            Ok(AuthEvents::ReceivedSolution(ChallengeSolution {
-                solution: signature,
-            }))
-        }
-    }
-}
-
-pub async fn authenticate(props: &mut ClientConnection) -> Result<VerifyingKey, Error> {
-    let mut state = AuthStateMachine::new(AuthContext::new(props));
-
-    loop {
-        let transport = state.context_mut().conn.transport.as_mut();
-        let Some(ClientRequest {
-            payload: Some(payload),
-        }) = transport.recv().await
-        else {
-            return Err(Error::Transport);
-        };
-
-        let event = parse_auth_event(payload)?;
-
-        match state.process_event(event).await {
-            Ok(AuthStates::AuthOk(key)) => return Ok(key.clone()),
-            Err(AuthError::ActionFailed(err)) => {
-                error!(?err, "State machine action failed");
-                return Err(err);
-            }
-            Err(AuthError::GuardFailed(err)) => {
-                error!(?err, "State machine guard failed");
-                return Err(err);
-            }
-            Err(AuthError::InvalidEvent) => {
-                error!("Invalid event for current state");
-                return Err(Error::InvalidChallengeSolution);
-            }
-            Err(AuthError::TransitionsFailed) => {
-                error!("Invalid state transition");
-                return Err(Error::InvalidChallengeSolution);
-            }
-
-            _ => (),
-        }
-    }
-}
-
-pub async fn authenticate_and_create(
-    mut props: ClientConnection,
-) -> Result<ClientSession, Error> {
-    let key = authenticate(&mut props).await?;
-    let session = ClientSession::new(props, key);
-    Ok(session)
-}
@@ -1,136 +0,0 @@
-use arbiter_proto::proto::client::{
-    AuthChallenge, ClientResponse,
-    client_response::Payload as ClientResponsePayload,
-};
-use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, update};
-use diesel_async::RunQueryDsl;
-use ed25519_dalek::VerifyingKey;
-use tracing::error;
-
-use super::Error;
-use crate::{actors::client::ClientConnection, db::schema};
-
-pub struct ChallengeRequest {
-    pub pubkey: VerifyingKey,
-}
-
-pub struct ChallengeContext {
-    pub challenge: AuthChallenge,
-    pub key: VerifyingKey,
-}
-
-pub struct ChallengeSolution {
-    pub solution: Vec<u8>,
-}
-
-smlang::statemachine!(
-    name: Auth,
-    custom_error: true,
-    transitions: {
-        *Init + AuthRequest(ChallengeRequest) / async prepare_challenge = SentChallenge(ChallengeContext),
-        SentChallenge(ChallengeContext) + ReceivedSolution(ChallengeSolution) [async verify_solution] / provide_key = AuthOk(VerifyingKey),
-    }
-);
-
-async fn create_nonce(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Result<i32, Error> {
-    let mut db_conn = db.get().await.map_err(|e| {
-        error!(error = ?e, "Database pool error");
-        Error::DatabasePoolUnavailable
-    })?;
-    db_conn
-        .exclusive_transaction(|conn| {
-            Box::pin(async move {
-                let current_nonce = schema::program_client::table
-                    .filter(schema::program_client::public_key.eq(pubkey_bytes.to_vec()))
-                    .select(schema::program_client::nonce)
-                    .first::<i32>(conn)
-                    .await?;
-
-                update(schema::program_client::table)
-                    .filter(schema::program_client::public_key.eq(pubkey_bytes.to_vec()))
-                    .set(schema::program_client::nonce.eq(current_nonce + 1))
-                    .execute(conn)
-                    .await?;
-
-                Result::<_, diesel::result::Error>::Ok(current_nonce)
-            })
-        })
-        .await
-        .optional()
-        .map_err(|e| {
-            error!(error = ?e, "Database error");
-            Error::DatabaseOperationFailed
-        })?
-        .ok_or_else(|| {
-            error!(?pubkey_bytes, "Public key not found in database");
-            Error::PublicKeyNotRegistered
-        })
-}
-
-pub struct AuthContext<'a> {
-    pub(super) conn: &'a mut ClientConnection,
-}
-
-impl<'a> AuthContext<'a> {
-    pub fn new(conn: &'a mut ClientConnection) -> Self {
-        Self { conn }
-    }
-}
-
-impl AuthStateMachineContext for AuthContext<'_> {
-    type Error = Error;
-
-    async fn verify_solution(
-        &self,
-        ChallengeContext { challenge, key }: &ChallengeContext,
-        ChallengeSolution { solution }: &ChallengeSolution,
-    ) -> Result<bool, Self::Error> {
-        let formatted_challenge =
-            arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
-
-        let signature = solution.as_slice().try_into().map_err(|_| {
-            error!(?solution, "Invalid signature length");
-            Error::InvalidChallengeSolution
-        })?;
-
-        let valid = key.verify_strict(&formatted_challenge, &signature).is_ok();
-
-        Ok(valid)
-    }
-
-    async fn prepare_challenge(
-        &mut self,
-        ChallengeRequest { pubkey }: ChallengeRequest,
-    ) -> Result<ChallengeContext, Self::Error> {
-        let nonce = create_nonce(&self.conn.db, pubkey.as_bytes()).await?;
-
-        let challenge = AuthChallenge {
-            pubkey: pubkey.as_bytes().to_vec(),
-            nonce,
-        };
-
-        self.conn
-            .transport
-            .send(Ok(ClientResponse {
-                payload: Some(ClientResponsePayload::AuthChallenge(challenge.clone())),
-            }))
-            .await
-            .map_err(|e| {
-                error!(?e, "Failed to send auth challenge");
-                Error::Transport
-            })?;
-
-        Ok(ChallengeContext {
-            challenge,
-            key: pubkey,
-        })
-    }
-
-    fn provide_key(
-        &mut self,
-        state_data: &ChallengeContext,
-        _: ChallengeSolution,
-    ) -> Result<VerifyingKey, Self::Error> {
-        Ok(state_data.key)
-    }
-}
@@ -1,7 +1,4 @@
-use arbiter_proto::{
-    proto::client::{ClientRequest, ClientResponse},
-    transport::Bi,
-};
+use arbiter_proto::transport::Bi;
 use kameo::actor::Spawn;
 use tracing::{error, info};
 
@@ -10,48 +7,31 @@ use crate::{
     db,
 };
 
-#[derive(Debug, Clone, PartialEq, Eq, thiserror::Error)]
-pub enum ClientError {
-    #[error("Expected message with payload")]
-    MissingRequestPayload,
-    #[error("Unexpected request payload")]
-    UnexpectedRequestPayload,
-    #[error("State machine error")]
-    StateTransitionFailed,
-    #[error("Connection registration failed")]
-    ConnectionRegistrationFailed,
-    #[error(transparent)]
-    Auth(#[from] auth::Error),
-}
-
-pub type Transport = Box<dyn Bi<ClientRequest, Result<ClientResponse, ClientError>> + Send>;
-
 pub struct ClientConnection {
     pub(crate) db: db::DatabasePool,
-    pub(crate) transport: Transport,
     pub(crate) actors: GlobalActors,
 }
 
 impl ClientConnection {
-    pub fn new(db: db::DatabasePool, transport: Transport, actors: GlobalActors) -> Self {
-        Self {
-            db,
-            transport,
-            actors,
-        }
+    pub fn new(db: db::DatabasePool, actors: GlobalActors) -> Self {
+        Self { db, actors }
     }
 }
 
 pub mod auth;
 pub mod session;
 
-pub async fn connect_client(props: ClientConnection) {
-    match auth::authenticate_and_create(props).await {
-        Ok(session) => {
-            ClientSession::spawn(session);
+pub async fn connect_client<T>(mut props: ClientConnection, transport: &mut T)
+where
+    T: Bi<auth::Inbound, Result<auth::Outbound, auth::Error>> + Send + ?Sized,
+{
+    match auth::authenticate(&mut props, transport).await {
+        Ok(_pubkey) => {
+            ClientSession::spawn(ClientSession::new(props));
             info!("Client authenticated, session started");
         }
         Err(err) => {
+            let _ = transport.send(Err(err.clone())).await;
             error!(?err, "Authentication failed, closing connection");
        }
    }
}
@@ -1,41 +1,45 @@
-use arbiter_proto::proto::client::{ClientRequest, ClientResponse};
-use ed25519_dalek::VerifyingKey;
-use kameo::Actor;
-use tokio::select;
-use tracing::{error, info};
+use kameo::{Actor, messages};
+use tracing::error;
 
-use crate::{actors::{
-    GlobalActors, client::{ClientError, ClientConnection}, router::RegisterClient
-}, db};
+use crate::{
+    actors::{
+        GlobalActors, client::ClientConnection, keyholder::KeyHolderState, router::RegisterClient,
+    },
+    db,
+};
 
 pub struct ClientSession {
     props: ClientConnection,
-    key: VerifyingKey,
 }
 
 impl ClientSession {
-    pub(crate) fn new(props: ClientConnection, key: VerifyingKey) -> Self {
-        Self { props, key }
-    }
-
-    pub async fn process_transport_inbound(&mut self, req: ClientRequest) -> Output {
-        let msg = req.payload.ok_or_else(|| {
-            error!(actor = "client", "Received message with no payload");
-            ClientError::MissingRequestPayload
-        })?;
-
-        match msg {
-            _ => Err(ClientError::UnexpectedRequestPayload),
-        }
+    pub(crate) fn new(props: ClientConnection) -> Self {
+        Self { props }
     }
 }
 
-type Output = Result<ClientResponse, ClientError>;
+#[messages]
+impl ClientSession {
+    #[message]
+    pub(crate) async fn handle_query_vault_state(&mut self) -> Result<KeyHolderState, Error> {
+        use crate::actors::keyholder::GetState;
+
+        let vault_state = match self.props.actors.key_holder.ask(GetState {}).await {
+            Ok(state) => state,
+            Err(err) => {
+                error!(?err, actor = "client", "keyholder.query.failed");
+                return Err(Error::Internal);
+            }
+        };
+
+        Ok(vault_state)
+    }
+}
 
 impl Actor for ClientSession {
     type Args = Self;
 
-    type Error = ClientError;
+    type Error = Error;
 
     async fn on_start(
         args: Self::Args,
@@ -46,53 +50,22 @@ impl Actor for ClientSession {
             .router
             .ask(RegisterClient { actor: this })
             .await
-            .map_err(|_| ClientError::ConnectionRegistrationFailed)?;
+            .map_err(|_| Error::ConnectionRegistrationFailed)?;
         Ok(args)
     }
 
-    async fn next(
-        &mut self,
-        _actor_ref: kameo::prelude::WeakActorRef<Self>,
-        mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
-    ) -> Option<kameo::mailbox::Signal<Self>> {
-        loop {
-            select! {
-                signal = mailbox_rx.recv() => {
-                    return signal;
-                }
-                msg = self.props.transport.recv() => {
-                    match msg {
-                        Some(request) => {
-                            match self.process_transport_inbound(request).await {
-                                Ok(resp) => {
-                                    if self.props.transport.send(Ok(resp)).await.is_err() {
-                                        error!(actor = "client", reason = "channel closed", "send.failed");
-                                        return Some(kameo::mailbox::Signal::Stop);
-                                    }
-                                }
-                                Err(err) => {
-                                    let _ = self.props.transport.send(Err(err)).await;
-                                    return Some(kameo::mailbox::Signal::Stop);
-                                }
-                            }
-                        }
-                        None => {
-                            info!(actor = "client", "transport.closed");
-                            return Some(kameo::mailbox::Signal::Stop);
-                        }
-                    }
-                }
-            }
-        }
-    }
 }
 
 impl ClientSession {
     pub fn new_test(db: db::DatabasePool, actors: GlobalActors) -> Self {
-        use arbiter_proto::transport::DummyTransport;
-        let transport: super::Transport = Box::new(DummyTransport::new());
-        let props = ClientConnection::new(db, transport, actors);
-        let key = VerifyingKey::from_bytes(&[0u8; 32]).unwrap();
-        Self { props, key }
+        let props = ClientConnection::new(db, actors);
+        Self { props }
    }
 }
 
+#[derive(Debug, thiserror::Error)]
+pub enum Error {
+    #[error("Connection registration failed")]
+    ConnectionRegistrationFailed,
+    #[error("Internal error")]
+    Internal,
+}
@@ -1,21 +1,26 @@
-use alloy::{consensus::TxEip1559, network::TxSigner, primitives::Address, signers::Signature};
-use diesel::{ExpressionMethods, OptionalExtension as _, QueryDsl, SelectableHelper as _, dsl::insert_into};
+use alloy::{consensus::TxEip1559, primitives::Address, signers::Signature};
+use diesel::{
+    ExpressionMethods, OptionalExtension as _, QueryDsl, SelectableHelper as _, dsl::insert_into,
+};
 use diesel_async::RunQueryDsl;
 use kameo::{Actor, actor::ActorRef, messages};
-use memsafe::MemSafe;
 use rand::{SeedableRng, rng, rngs::StdRng};
 
 use crate::{
     actors::keyholder::{CreateNew, Decrypt, KeyHolder},
-    db::{self, DatabasePool, models::{self, EvmBasicGrant, SqliteTimestamp}, schema},
+    db::{
+        self, DatabasePool,
+        models::{self, SqliteTimestamp},
+        schema,
+    },
     evm::{
-        self, RunKind,
+        self, ListGrantsError, RunKind,
         policies::{
-            FullGrant, SharedGrantSettings, SpecificGrant, SpecificMeaning,
-            ether_transfer::EtherTransfer,
-            token_transfers::TokenTransfer,
+            FullGrant, Grant, SharedGrantSettings, SpecificGrant, SpecificMeaning,
+            ether_transfer::EtherTransfer, token_transfers::TokenTransfer,
        },
    },
+    safe_cell::{SafeCell, SafeCellHandle as _},
 };
 
 pub use crate::evm::safe_signer;
@@ -88,7 +93,12 @@ impl EvmActor {
         // todo: audit
         let rng = StdRng::from_rng(&mut rng());
         let engine = evm::Engine::new(db.clone());
-        Self { keyholder, db, rng, engine }
+        Self {
+            keyholder,
+            db,
+            rng,
+            engine,
+        }
     }
 }
 
@@ -98,11 +108,7 @@ impl EvmActor {
     pub async fn generate(&mut self) -> Result<Address, Error> {
         let (mut key_cell, address) = safe_signer::generate(&mut self.rng);
 
-        // Move raw key bytes into a Vec<u8> MemSafe for KeyHolder
-        let plaintext = {
-            let reader = key_cell.read().expect("MemSafe read");
-            MemSafe::new(reader.to_vec()).expect("MemSafe allocation")
-        };
+        let plaintext = key_cell.read_inline(|reader| SafeCell::new(reader.to_vec()));
 
         let aead_id: i32 = self
             .keyholder
@@ -149,12 +155,24 @@ impl EvmActor {
         match grant {
             SpecificGrant::EtherTransfer(settings) => {
                 self.engine
-                    .create_grant::<EtherTransfer>(client_id, FullGrant { basic, specific: settings })
+                    .create_grant::<EtherTransfer>(
+                        client_id,
+                        FullGrant {
+                            basic,
+                            specific: settings,
+                        },
+                    )
                    .await
            }
             SpecificGrant::TokenTransfer(settings) => {
                 self.engine
-                    .create_grant::<TokenTransfer>(client_id, FullGrant { basic, specific: settings })
+                    .create_grant::<TokenTransfer>(
+                        client_id,
+                        FullGrant {
+                            basic,
+                            specific: settings,
+                        },
+                    )
                    .await
            }
        }
@@ -172,19 +190,12 @@ impl EvmActor {
     }
 
     #[message]
-    pub async fn useragent_list_grants(
-        &mut self,
-        wallet_id: Option<i32>,
-    ) -> Result<Vec<EvmBasicGrant>, Error> {
-        let mut conn = self.db.get().await?;
-        let mut query = schema::evm_basic_grant::table
-            .select(EvmBasicGrant::as_select())
-            .filter(schema::evm_basic_grant::revoked_at.is_null())
-            .into_boxed();
-        if let Some(wid) = wallet_id {
-            query = query.filter(schema::evm_basic_grant::wallet_id.eq(wid));
+    pub async fn useragent_list_grants(&mut self) -> Result<Vec<Grant<SpecificGrant>>, Error> {
+        match self.engine.list_all_grants().await {
+            Ok(grants) => Ok(grants),
+            Err(ListGrantsError::Database(db)) => Err(Error::Database(db)),
+            Err(ListGrantsError::Pool(pool)) => Err(Error::DatabasePool(pool)),
         }
-        Ok(query.load(&mut conn).await?)
     }
 
     #[message]
@@ -204,8 +215,14 @@ impl EvmActor {
             .ok_or(SignTransactionError::WalletNotFound)?;
         drop(conn);
 
-        let meaning = self.engine
-            .evaluate_transaction(wallet.id, client_id, transaction.clone(), RunKind::Execution)
+        let meaning = self
+            .engine
+            .evaluate_transaction(
+                wallet.id,
+                client_id,
+                transaction.clone(),
+                RunKind::Execution,
+            )
            .await?;
 
         Ok(meaning)
@@ -228,16 +245,23 @@ impl EvmActor {
             .ok_or(SignTransactionError::WalletNotFound)?;
         drop(conn);
 
-        let raw_key: MemSafe<Vec<u8>> = self
+        let raw_key: SafeCell<Vec<u8>> = self
             .keyholder
-            .ask(Decrypt { aead_id: wallet.aead_encrypted_id })
+            .ask(Decrypt {
+                aead_id: wallet.aead_encrypted_id,
+            })
             .await
             .map_err(|_| SignTransactionError::KeyholderSend)?;
 
-        let signer = safe_signer::SafeSigner::from_memsafe(raw_key)?;
+        let signer = safe_signer::SafeSigner::from_cell(raw_key)?;
 
         self.engine
-            .evaluate_transaction(wallet.id, client_id, transaction.clone(), RunKind::Execution)
+            .evaluate_transaction(
+                wallet.id,
+                client_id,
+                transaction.clone(),
+                RunKind::Execution,
+            )
            .await?;
 
         use alloy::network::TxSignerSync as _;
|||||||
@@ -5,12 +5,13 @@ use chacha20poly1305::{
     AeadInPlace, Key, KeyInit as _, XChaCha20Poly1305, XNonce,
     aead::{AeadMut, Error, Payload},
 };
-use memsafe::MemSafe;
 use rand::{
     Rng as _, SeedableRng,
     rngs::{StdRng, SysRng},
 };

+use crate::safe_cell::{SafeCell, SafeCellHandle as _};
+
 pub const ROOT_KEY_TAG: &[u8] = "arbiter/seal/v1".as_bytes();
 pub const TAG: &[u8] = "arbiter/private-key/v1".as_bytes();

@@ -47,40 +48,37 @@ impl<'a> TryFrom<&'a [u8]> for Nonce {
     }
 }

-pub struct KeyCell(pub MemSafe<Key>);
-impl From<MemSafe<Key>> for KeyCell {
-    fn from(value: MemSafe<Key>) -> Self {
+pub struct KeyCell(pub SafeCell<Key>);
+impl From<SafeCell<Key>> for KeyCell {
+    fn from(value: SafeCell<Key>) -> Self {
         Self(value)
     }
 }
-impl TryFrom<MemSafe<Vec<u8>>> for KeyCell {
+impl TryFrom<SafeCell<Vec<u8>>> for KeyCell {
     type Error = ();

-    fn try_from(mut value: MemSafe<Vec<u8>>) -> Result<Self, Self::Error> {
-        let value = value.read().unwrap();
+    fn try_from(mut value: SafeCell<Vec<u8>>) -> Result<Self, Self::Error> {
+        let value = value.read();
         if value.len() != size_of::<Key>() {
             return Err(());
         }
-        let mut cell = MemSafe::new(Key::default()).unwrap();
-        {
-            let mut cell_write = cell.write().unwrap();
-            let cell_slice: &mut [u8] = cell_write.as_mut();
-            cell_slice.copy_from_slice(&value);
-        }
+        let cell = SafeCell::new_inline(|cell_write: &mut Key| {
+            cell_write.copy_from_slice(&value);
+        });
         Ok(Self(cell))
     }
 }

 impl KeyCell {
     pub fn new_secure_random() -> Self {
-        let mut key = MemSafe::new(Key::default()).unwrap();
-        {
-            let mut key_buffer = key.write().unwrap();
-            let key_buffer: &mut [u8] = key_buffer.as_mut();
+        let key = SafeCell::new_inline(|key_buffer: &mut Key| {
+            #[allow(
+                clippy::unwrap_used,
+                reason = "Rng failure is unrecoverable and should panic"
+            )]
             let mut rng = StdRng::try_from_rng(&mut SysRng).unwrap();
             rng.fill_bytes(key_buffer);
-        }
+        });

         key.into()
     }

@@ -91,7 +89,7 @@ impl KeyCell {
         associated_data: &[u8],
         mut buffer: impl AsMut<Vec<u8>>,
     ) -> Result<(), Error> {
-        let key_reader = self.0.read().unwrap();
+        let key_reader = self.0.read();
         let key_ref = key_reader.deref();
         let cipher = XChaCha20Poly1305::new(key_ref);
         let nonce = XNonce::from_slice(nonce.0.as_ref());

@@ -102,13 +100,13 @@ impl KeyCell {
         &mut self,
         nonce: &Nonce,
         associated_data: &[u8],
-        buffer: &mut MemSafe<Vec<u8>>,
+        buffer: &mut SafeCell<Vec<u8>>,
     ) -> Result<(), Error> {
-        let key_reader = self.0.read().unwrap();
+        let key_reader = self.0.read();
         let key_ref = key_reader.deref();
         let cipher = XChaCha20Poly1305::new(key_ref);
         let nonce = XNonce::from_slice(nonce.0.as_ref());
-        let mut buffer = buffer.write().unwrap();
+        let mut buffer = buffer.write();
         let buffer: &mut Vec<u8> = buffer.as_mut();
         cipher.decrypt_in_place(nonce, associated_data, buffer)
     }

@@ -119,7 +117,7 @@ impl KeyCell {
         associated_data: &[u8],
         plaintext: impl AsRef<[u8]>,
     ) -> Result<Vec<u8>, Error> {
-        let key_reader = self.0.read().unwrap();
+        let key_reader = self.0.read();
         let key_ref = key_reader.deref();
         let mut cipher = XChaCha20Poly1305::new(key_ref);
         let nonce = XNonce::from_slice(nonce.0.as_ref());

@@ -139,6 +137,10 @@ pub type Salt = [u8; ArgonSalt::RECOMMENDED_LENGTH];

 pub fn generate_salt() -> Salt {
     let mut salt = Salt::default();
+    #[allow(
+        clippy::unwrap_used,
+        reason = "Rng failure is unrecoverable and should panic"
+    )]
     let mut rng = StdRng::try_from_rng(&mut SysRng).unwrap();
     rng.fill_bytes(&mut salt);
     salt

@@ -146,19 +148,23 @@ pub fn generate_salt() -> Salt {

 /// User password might be of different length, have not enough entropy, etc...
 /// Derive a fixed-length key from the password using Argon2id, which is designed for password hashing and key derivation.
-pub fn derive_seal_key(mut password: MemSafe<Vec<u8>>, salt: &Salt) -> KeyCell {
+pub fn derive_seal_key(mut password: SafeCell<Vec<u8>>, salt: &Salt) -> KeyCell {
+    #[allow(clippy::unwrap_used)]
     let params = argon2::Params::new(262_144, 3, 4, None).unwrap();
     let hasher = Argon2::new(Algorithm::Argon2id, argon2::Version::V0x13, params);
-    let mut key = MemSafe::new(Key::default()).unwrap();
-    {
-        let password_source = password.read().unwrap();
-        let mut key_buffer = key.write().unwrap();
+    let mut key = SafeCell::new(Key::default());
+    password.read_inline(|password_source| {
+        let mut key_buffer = key.write();
         let key_buffer: &mut [u8] = key_buffer.as_mut();

+        #[allow(
+            clippy::unwrap_used,
+            reason = "Better fail completely than return a weak key"
+        )]
         hasher
             .hash_password_into(password_source.deref(), salt, key_buffer)
             .unwrap();
-    }
+    });

     key.into()
 }

@@ -166,20 +172,20 @@ pub fn derive_seal_key(mut password: MemSafe<Vec<u8>>, salt: &Salt) -> KeyCell {
 #[cfg(test)]
 mod tests {
     use super::*;
-    use memsafe::MemSafe;
+    use crate::safe_cell::SafeCell;

     #[test]
     pub fn derive_seal_key_deterministic() {
         static PASSWORD: &[u8] = b"password";
-        let password = MemSafe::new(PASSWORD.to_vec()).unwrap();
-        let password2 = MemSafe::new(PASSWORD.to_vec()).unwrap();
+        let password = SafeCell::new(PASSWORD.to_vec());
+        let password2 = SafeCell::new(PASSWORD.to_vec());
         let salt = generate_salt();

         let mut key1 = derive_seal_key(password, &salt);
         let mut key2 = derive_seal_key(password2, &salt);

-        let key1_reader = key1.0.read().unwrap();
-        let key2_reader = key2.0.read().unwrap();
+        let key1_reader = key1.0.read();
+        let key2_reader = key2.0.read();

         assert_eq!(key1_reader.deref(), key2_reader.deref());
     }

@@ -187,11 +193,11 @@ mod tests {
     #[test]
     pub fn successful_derive() {
         static PASSWORD: &[u8] = b"password";
-        let password = MemSafe::new(PASSWORD.to_vec()).unwrap();
+        let password = SafeCell::new(PASSWORD.to_vec());
         let salt = generate_salt();

         let mut key = derive_seal_key(password, &salt);
-        let key_reader = key.0.read().unwrap();
+        let key_reader = key.0.read();
         let key_ref = key_reader.deref();

         assert_ne!(key_ref.as_slice(), &[0u8; 32][..]);

@@ -200,7 +206,7 @@ mod tests {
     #[test]
     pub fn encrypt_decrypt() {
         static PASSWORD: &[u8] = b"password";
-        let password = MemSafe::new(PASSWORD.to_vec()).unwrap();
+        let password = SafeCell::new(PASSWORD.to_vec());
         let salt = generate_salt();

         let mut key = derive_seal_key(password, &salt);

@@ -212,12 +218,12 @@ mod tests {
             .unwrap();
         assert_ne!(buffer, b"secret data");

-        let mut buffer = MemSafe::new(buffer).unwrap();
+        let mut buffer = SafeCell::new(buffer);

         key.decrypt_in_place(&nonce, associated_data, &mut buffer)
             .unwrap();

-        let buffer = buffer.read().unwrap();
+        let buffer = buffer.read();
         assert_eq!(*buffer, b"secret data");
     }
@@ -5,21 +5,24 @@ use diesel::{
 };
 use diesel_async::{AsyncConnection, RunQueryDsl};
 use kameo::{Actor, Reply, messages};
-use memsafe::MemSafe;
 use strum::{EnumDiscriminants, IntoDiscriminant};
 use tracing::{error, info};

-use crate::db::{
-    self,
-    models::{self, RootKeyHistory},
-    schema::{self},
+use crate::safe_cell::SafeCell;
+use crate::{
+    db::{
+        self,
+        models::{self, RootKeyHistory},
+        schema::{self},
+    },
+    safe_cell::SafeCellHandle as _,
 };
 use encryption::v1::{self, KeyCell, Nonce};

 pub mod encryption;

 #[derive(Default, EnumDiscriminants)]
-#[strum_discriminants(derive(Reply), vis(pub))]
+#[strum_discriminants(derive(Reply), vis(pub), name(KeyHolderState))]
 enum State {
     #[default]
     Unbootstrapped,

@@ -136,7 +139,7 @@ impl KeyHolder {
     }

     #[message]
-    pub async fn bootstrap(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
+    pub async fn bootstrap(&mut self, seal_key_raw: SafeCell<Vec<u8>>) -> Result<(), Error> {
         if !matches!(self.state, State::Unbootstrapped) {
             return Err(Error::AlreadyBootstrapped);
         }

@@ -148,16 +151,15 @@ impl KeyHolder {
         let root_key_nonce = v1::Nonce::default();
         let data_encryption_nonce = v1::Nonce::default();

-        let root_key_ciphertext: Vec<u8> = {
-            let root_key_reader = root_key.0.read().unwrap();
-            let root_key_reader = root_key_reader.as_slice();
+        let root_key_ciphertext: Vec<u8> = root_key.0.read_inline(|reader| {
+            let root_key_reader = reader.as_slice();
             seal_key
                 .encrypt(&root_key_nonce, v1::ROOT_KEY_TAG, root_key_reader)
                 .map_err(|err| {
                     error!(?err, "Fatal bootstrap error");
                     Error::Encryption(err)
-                })?
-        };
+                })
+        })?;

         let mut conn = self.db.get().await?;

@@ -199,7 +201,7 @@ impl KeyHolder {
     }

     #[message]
-    pub async fn try_unseal(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
+    pub async fn try_unseal(&mut self, seal_key_raw: SafeCell<Vec<u8>>) -> Result<(), Error> {
         let State::Sealed {
             root_key_history_id,
         } = &self.state

@@ -225,7 +227,7 @@ impl KeyHolder {
         })?;
         let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);

-        let mut root_key = MemSafe::new(current_key.ciphertext.clone()).unwrap();
+        let mut root_key = SafeCell::new(current_key.ciphertext.clone());

         let nonce = v1::Nonce::try_from(current_key.root_key_encryption_nonce.as_slice()).map_err(
             |_| {

@@ -256,7 +258,7 @@ impl KeyHolder {

     // Decrypts the `aead_encrypted` entry with the given ID and returns the plaintext
     #[message]
-    pub async fn decrypt(&mut self, aead_id: i32) -> Result<MemSafe<Vec<u8>>, Error> {
+    pub async fn decrypt(&mut self, aead_id: i32) -> Result<SafeCell<Vec<u8>>, Error> {
         let State::Unsealed { root_key, .. } = &mut self.state else {
             return Err(Error::NotBootstrapped);
         };

@@ -279,14 +281,14 @@ impl KeyHolder {
             );
             Error::BrokenDatabase
         })?;
-        let mut output = MemSafe::new(row.ciphertext).unwrap();
+        let mut output = SafeCell::new(row.ciphertext);
         root_key.decrypt_in_place(&nonce, v1::TAG, &mut output)?;
         Ok(output)
     }

     // Creates new `aead_encrypted` entry in the database and returns it's ID
     #[message]
-    pub async fn create_new(&mut self, mut plaintext: MemSafe<Vec<u8>>) -> Result<i32, Error> {
+    pub async fn create_new(&mut self, mut plaintext: SafeCell<Vec<u8>>) -> Result<i32, Error> {
         let State::Unsealed {
             root_key,
             root_key_history_id,

@@ -299,7 +301,7 @@ impl KeyHolder {
         // Borrow checker note: &mut borrow a few lines above is disjoint from this field
         let nonce = Self::get_new_nonce(&self.db, *root_key_history_id).await?;

-        let mut ciphertext_buffer = plaintext.write().unwrap();
+        let mut ciphertext_buffer = plaintext.write();
         let ciphertext_buffer: &mut Vec<u8> = ciphertext_buffer.as_mut();
         root_key.encrypt_in_place(&nonce, v1::TAG, &mut *ciphertext_buffer)?;

@@ -313,7 +315,7 @@ impl KeyHolder {
                 current_nonce: nonce.to_vec(),
                 schema_version: 1,
                 associated_root_key_id: *root_key_history_id,
-                created_at: Utc::now().into()
+                created_at: Utc::now().into(),
             })
             .returning(schema::aead_encrypted::id)
             .get_result(&mut conn)

@@ -323,7 +325,7 @@ impl KeyHolder {
     }

     #[message]
-    pub fn get_state(&self) -> StateDiscriminants {
+    pub fn get_state(&self) -> KeyHolderState {
         self.state.discriminant()
     }

@@ -348,15 +350,17 @@ mod tests {
     use diesel::SelectableHelper;

     use diesel_async::RunQueryDsl;
-    use memsafe::MemSafe;

-    use crate::db::{self};
+    use crate::{
+        db::{self},
+        safe_cell::SafeCell,
+    };

     use super::*;

     async fn bootstrapped_actor(db: &db::DatabasePool) -> KeyHolder {
         let mut actor = KeyHolder::new(db.clone()).await.unwrap();
-        let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
+        let seal_key = SafeCell::new(b"test-seal-key".to_vec());
         actor.bootstrap(seal_key).await.unwrap();
         actor
     }

@@ -391,7 +395,7 @@ mod tests {
         assert_eq!(root_row.data_encryption_nonce, n2.to_vec());

         let id = actor
-            .create_new(MemSafe::new(b"post-interleave".to_vec()).unwrap())
+            .create_new(SafeCell::new(b"post-interleave".to_vec()))
             .await
             .unwrap();
         let row: models::AeadEncrypted = schema::aead_encrypted::table
@@ -1,17 +1,20 @@
-use std::{
-    collections::{HashMap},
-    ops::ControlFlow,
-};
+use std::{collections::HashMap, ops::ControlFlow};

+use ed25519_dalek::VerifyingKey;
 use kameo::{
     Actor,
     actor::{ActorId, ActorRef},
     messages,
     prelude::{ActorStopReason, Context, WeakActorRef},
+    reply::DelegatedReply,
 };
-use tracing::info;
+use tokio::{sync::watch, task::JoinSet};
+use tracing::{info, warn};

-use crate::actors::{client::session::ClientSession, user_agent::session::UserAgentSession};
+use crate::actors::{
+    client::session::ClientSession,
+    user_agent::session::{RequestNewClientApproval, UserAgentSession},
+};

 #[derive(Default)]
 pub struct MessageRouter {

@@ -53,6 +56,73 @@ impl Actor for MessageRouter {
     }
 }

+#[derive(Debug, thiserror::Error, Clone, PartialEq, Eq, Hash)]
+pub enum ApprovalError {
+    #[error("No user agents connected")]
+    NoUserAgentsConnected,
+}
+
+async fn request_client_approval(
+    user_agents: &[WeakActorRef<UserAgentSession>],
+    client_pubkey: VerifyingKey,
+) -> Result<bool, ApprovalError> {
+    if user_agents.is_empty() {
+        return Err(ApprovalError::NoUserAgentsConnected);
+    }
+
+    let mut pool = JoinSet::new();
+    let (cancel_tx, cancel_rx) = watch::channel(());
+
+    for weak_ref in user_agents {
+        match weak_ref.upgrade() {
+            Some(agent) => {
+                let cancel_rx = cancel_rx.clone();
+                pool.spawn(async move {
+                    agent
+                        .ask(RequestNewClientApproval {
+                            client_pubkey,
+                            cancel_flag: cancel_rx.clone(),
+                        })
+                        .await
+                });
+            }
+            None => {
+                warn!(
+                    id = weak_ref.id().to_string(),
+                    actor = "MessageRouter",
+                    event = "useragent.disconnected_before_approval"
+                );
+            }
+        }
+    }
+
+    while let Some(result) = pool.join_next().await {
+        match result {
+            Ok(Ok(approved)) => {
+                // cancel other pending requests
+                let _ = cancel_tx.send(());
+                return Ok(approved);
+            }
+            Ok(Err(err)) => {
+                warn!(
+                    ?err,
+                    actor = "MessageRouter",
+                    event = "useragent.approval_error"
+                );
+            }
+            Err(err) => {
+                warn!(
+                    ?err,
+                    actor = "MessageRouter",
+                    event = "useragent.approval_task_failed"
+                );
+            }
+        }
+    }
+
+    Err(ApprovalError::NoUserAgentsConnected)
+}
+
 #[messages]
 impl MessageRouter {
     #[message(ctx)]

@@ -76,4 +146,28 @@ impl MessageRouter {
         ctx.actor_ref().link(&actor).await;
         self.clients.insert(actor.id(), actor);
     }
+
+    #[message(ctx)]
+    pub async fn request_client_approval(
+        &mut self,
+        client_pubkey: VerifyingKey,
+        ctx: &mut Context<Self, DelegatedReply<Result<bool, ApprovalError>>>,
+    ) -> DelegatedReply<Result<bool, ApprovalError>> {
+        let (reply, Some(reply_sender)) = ctx.reply_sender() else {
+            unreachable!("Expected `request_client_approval` to have callback channel");
+        };
+
+        let weak_refs = self
+            .user_agents
+            .values()
+            .map(|agent| agent.downgrade())
+            .collect::<Vec<_>>();
+
+        tokio::task::spawn(async move {
+            let result = request_client_approval(&weak_refs, client_pubkey).await;
+            reply_sender.send(result);
+        });
+
+        reply
+    }
 }
@@ -1,92 +1,82 @@
-use arbiter_proto::proto::user_agent::{
-    AuthChallengeRequest, AuthChallengeSolution, UserAgentRequest,
-    user_agent_request::Payload as UserAgentRequestPayload,
-};
-use ed25519_dalek::VerifyingKey;
+use arbiter_proto::transport::Bi;
 use tracing::error;

 use crate::actors::user_agent::{
-    UserAgentConnection,
-    auth::state::{AuthContext, AuthStateMachine}, session::UserAgentSession,
+    AuthPublicKey, UserAgentConnection,
+    auth::state::{AuthContext, AuthStateMachine},
 };

-#[derive(thiserror::Error, Debug, PartialEq)]
-pub enum Error {
-    #[error("Unexpected message payload")]
-    UnexpectedMessagePayload,
-    #[error("Invalid client public key length")]
-    InvalidClientPubkeyLength,
-    #[error("Invalid client public key encoding")]
-    InvalidAuthPubkeyEncoding,
-    #[error("Database pool unavailable")]
-    DatabasePoolUnavailable,
-    #[error("Database operation failed")]
-    DatabaseOperationFailed,
-    #[error("Public key not registered")]
-    PublicKeyNotRegistered,
-    #[error("Transport error")]
-    Transport,
-    #[error("Invalid bootstrap token")]
-    InvalidBootstrapToken,
-    #[error("Bootstrapper actor unreachable")]
-    BootstrapperActorUnreachable,
-    #[error("Invalid challenge solution")]
-    InvalidChallengeSolution,
-}
-
 mod state;
 use state::*;

-fn parse_auth_event(payload: UserAgentRequestPayload) -> Result<AuthEvents, Error> {
-    match payload {
-        UserAgentRequestPayload::AuthChallengeRequest(AuthChallengeRequest {
-            pubkey,
-            bootstrap_token: None,
-        }) => {
-            let pubkey_bytes = pubkey.as_array().ok_or(Error::InvalidClientPubkeyLength)?;
-            let pubkey = VerifyingKey::from_bytes(pubkey_bytes)
-                .map_err(|_| Error::InvalidAuthPubkeyEncoding)?;
-            Ok(AuthEvents::AuthRequest(ChallengeRequest {
-                pubkey: pubkey.into(),
-            }))
+#[derive(Debug, Clone)]
+pub enum Inbound {
+    AuthChallengeRequest {
+        pubkey: AuthPublicKey,
+        bootstrap_token: Option<String>,
+    },
+    AuthChallengeSolution {
+        signature: Vec<u8>,
+    },
+}
+
+#[derive(Debug)]
+pub enum Error {
+    UnregisteredPublicKey,
+    InvalidChallengeSolution,
+    InvalidBootstrapToken,
+    Internal { details: String },
+    Transport,
+}
+
+impl Error {
+    fn internal(details: impl Into<String>) -> Self {
+        Self::Internal {
+            details: details.into(),
         }
-        UserAgentRequestPayload::AuthChallengeRequest(AuthChallengeRequest {
-            pubkey,
-            bootstrap_token: Some(token),
-        }) => {
-            let pubkey_bytes = pubkey.as_array().ok_or(Error::InvalidClientPubkeyLength)?;
-            let pubkey = VerifyingKey::from_bytes(pubkey_bytes)
-                .map_err(|_| Error::InvalidAuthPubkeyEncoding)?;
-            Ok(AuthEvents::BootstrapAuthRequest(BootstrapAuthRequest {
-                pubkey: pubkey.into(),
-                token,
-            }))
-        }
-        UserAgentRequestPayload::AuthChallengeSolution(AuthChallengeSolution { signature }) => {
-            Ok(AuthEvents::ReceivedSolution(ChallengeSolution {
-                solution: signature,
-            }))
-        }
-        _ => Err(Error::UnexpectedMessagePayload),
     }
 }

-pub async fn authenticate(props: &mut UserAgentConnection) -> Result<VerifyingKey, Error> {
-    let mut state = AuthStateMachine::new(AuthContext::new(props));
+#[derive(Debug, Clone)]
+pub enum Outbound {
+    AuthChallenge { nonce: i32 },
+    AuthSuccess,
+}
+
+fn parse_auth_event(payload: Inbound) -> AuthEvents {
+    match payload {
+        Inbound::AuthChallengeRequest {
+            pubkey,
+            bootstrap_token: None,
+        } => AuthEvents::AuthRequest(ChallengeRequest { pubkey }),
+        Inbound::AuthChallengeRequest {
+            pubkey,
+            bootstrap_token: Some(token),
+        } => AuthEvents::BootstrapAuthRequest(BootstrapAuthRequest { pubkey, token }),
+        Inbound::AuthChallengeSolution { signature } => {
+            AuthEvents::ReceivedSolution(ChallengeSolution {
+                solution: signature,
+            })
+        }
+    }
+}
+
+pub async fn authenticate<T>(
+    props: &mut UserAgentConnection,
+    transport: T,
+) -> Result<AuthPublicKey, Error>
+where
+    T: Bi<Inbound, Result<Outbound, Error>> + Send,
+{
+    let mut state = AuthStateMachine::new(AuthContext::new(props, transport));

     loop {
-        // This is needed because `state` now holds mutable reference to `ConnectionProps`, so we can't directly access `props` here
-        let transport = state.context_mut().conn.transport.as_mut();
-        let Some(UserAgentRequest {
-            payload: Some(payload),
-        }) = transport.recv().await
-        else {
+        // `state` holds a mutable reference to `props` so we can't access it directly here
+        let Some(payload) = state.context_mut().transport.recv().await else {
             return Err(Error::Transport);
         };

-        let event = parse_auth_event(payload)?;
-
-        match state.process_event(event).await {
+        match state.process_event(parse_auth_event(payload)).await {
             Ok(AuthStates::AuthOk(key)) => return Ok(key.clone()),
             Err(AuthError::ActionFailed(err)) => {
                 error!(?err, "State machine action failed");

@@ -109,10 +99,3 @@ pub async fn authenticate(props: &mut UserAgentConnection) -> Result<VerifyingKey, Error> {
             }
         }
     }
 }
-
-pub async fn authenticate_and_create(mut props: UserAgentConnection) -> Result<UserAgentSession, Error> {
-    let key = authenticate(&mut props).await?;
-    let session = UserAgentSession::new(props, key.clone());
-    Ok(session)
-}
@@ -1,30 +1,29 @@
|
|||||||
use arbiter_proto::proto::user_agent::{
|
use arbiter_proto::transport::Bi;
|
||||||
AuthChallenge, UserAgentResponse,
|
|
||||||
user_agent_response::Payload as UserAgentResponsePayload,
|
|
||||||
};
|
|
||||||
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, update};
|
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, update};
|
||||||
use diesel_async::RunQueryDsl;
|
use diesel_async::RunQueryDsl;
|
||||||
use ed25519_dalek::VerifyingKey;
|
|
||||||
use tracing::error;
|
use tracing::error;
|
||||||
|
|
||||||
use super::Error;
|
use super::Error;
|
||||||
use crate::{
|
use crate::{
|
||||||
actors::{bootstrap::ConsumeToken, user_agent::UserAgentConnection},
|
actors::{
|
||||||
|
bootstrap::ConsumeToken,
|
||||||
|
user_agent::{AuthPublicKey, UserAgentConnection, auth::Outbound},
|
||||||
|
},
|
||||||
db::schema,
|
db::schema,
|
||||||
};
|
};
|
||||||
|
|
||||||
pub struct ChallengeRequest {
|
 pub struct ChallengeRequest {
-    pub pubkey: VerifyingKey,
+    pub pubkey: AuthPublicKey,
 }
 
 pub struct BootstrapAuthRequest {
-    pub pubkey: VerifyingKey,
+    pub pubkey: AuthPublicKey,
     pub token: String,
 }
 
 pub struct ChallengeContext {
-    pub challenge: AuthChallenge,
-    pub key: VerifyingKey,
+    pub challenge_nonce: i32,
+    pub key: AuthPublicKey,
 }
 
 pub struct ChallengeSolution {
@@ -36,15 +35,15 @@ smlang::statemachine!(
     custom_error: true,
     transitions: {
         *Init + AuthRequest(ChallengeRequest) / async prepare_challenge = SentChallenge(ChallengeContext),
-        Init + BootstrapAuthRequest(BootstrapAuthRequest) [async verify_bootstrap_token] / provide_key_bootstrap = AuthOk(VerifyingKey),
-        SentChallenge(ChallengeContext) + ReceivedSolution(ChallengeSolution) [async verify_solution] / provide_key = AuthOk(VerifyingKey),
+        Init + BootstrapAuthRequest(BootstrapAuthRequest) / async verify_bootstrap_token = AuthOk(AuthPublicKey),
+        SentChallenge(ChallengeContext) + ReceivedSolution(ChallengeSolution) / async verify_solution = AuthOk(AuthPublicKey),
     }
 );
 
 async fn create_nonce(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Result<i32, Error> {
     let mut db_conn = db.get().await.map_err(|e| {
         error!(error = ?e, "Database pool error");
-        Error::DatabasePoolUnavailable
+        Error::internal("Database unavailable")
     })?;
     db_conn
         .exclusive_transaction(|conn| {
@@ -68,82 +67,64 @@ async fn create_nonce(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Resu
         .optional()
         .map_err(|e| {
             error!(error = ?e, "Database error");
-            Error::DatabaseOperationFailed
+            Error::internal("Database operation failed")
         })?
         .ok_or_else(|| {
             error!(?pubkey_bytes, "Public key not found in database");
-            Error::PublicKeyNotRegistered
+            Error::UnregisteredPublicKey
         })
 }
 
-async fn register_key(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Result<(), Error> {
+async fn register_key(db: &crate::db::DatabasePool, pubkey: &AuthPublicKey) -> Result<(), Error> {
+    let pubkey_bytes = pubkey.to_stored_bytes();
+    let key_type = pubkey.key_type();
     let mut conn = db.get().await.map_err(|e| {
         error!(error = ?e, "Database pool error");
-        Error::DatabasePoolUnavailable
+        Error::internal("Database unavailable")
     })?;
 
     diesel::insert_into(schema::useragent_client::table)
         .values((
-            schema::useragent_client::public_key.eq(pubkey_bytes.to_vec()),
+            schema::useragent_client::public_key.eq(pubkey_bytes),
             schema::useragent_client::nonce.eq(1),
+            schema::useragent_client::key_type.eq(key_type),
         ))
         .execute(&mut conn)
         .await
         .map_err(|e| {
             error!(error = ?e, "Database error");
-            Error::DatabaseOperationFailed
+            Error::internal("Database operation failed")
         })?;
 
     Ok(())
 }
 
-pub struct AuthContext<'a> {
+pub struct AuthContext<'a, T> {
     pub(super) conn: &'a mut UserAgentConnection,
+    pub(super) transport: T,
 }
 
-impl<'a> AuthContext<'a> {
-    pub fn new(conn: &'a mut UserAgentConnection) -> Self {
-        Self { conn }
+impl<'a, T> AuthContext<'a, T> {
+    pub fn new(conn: &'a mut UserAgentConnection, transport: T) -> Self {
+        Self { conn, transport }
     }
 }
 
-impl AuthStateMachineContext for AuthContext<'_> {
+impl<T> AuthStateMachineContext for AuthContext<'_, T>
+where
+    T: Bi<super::Inbound, Result<super::Outbound, Error>> + Send,
+{
     type Error = Error;
 
-    async fn verify_solution(
-        &self,
-        ChallengeContext { challenge, key }: &ChallengeContext,
-        ChallengeSolution { solution }: &ChallengeSolution,
-    ) -> Result<bool, Self::Error> {
-        let formatted_challenge =
-            arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
-
-        let signature = solution.as_slice().try_into().map_err(|_| {
-            error!(?solution, "Invalid signature length");
-            Error::InvalidChallengeSolution
-        })?;
-
-        let valid = key.verify_strict(&formatted_challenge, &signature).is_ok();
-
-        Ok(valid)
-    }
-
     async fn prepare_challenge(
         &mut self,
         ChallengeRequest { pubkey }: ChallengeRequest,
     ) -> Result<ChallengeContext, Self::Error> {
-        let nonce = create_nonce(&self.conn.db, pubkey.as_bytes()).await?;
+        let stored_bytes = pubkey.to_stored_bytes();
+        let nonce = create_nonce(&self.conn.db, &stored_bytes).await?;
 
-        let challenge = AuthChallenge {
-            pubkey: pubkey.as_bytes().to_vec(),
-            nonce,
-        };
-
-        self.conn
-            .transport
-            .send(Ok(UserAgentResponse {
-                payload: Some(UserAgentResponsePayload::AuthChallenge(challenge.clone())),
-            }))
+        self.transport
+            .send(Ok(Outbound::AuthChallenge { nonce }))
             .await
             .map_err(|e| {
                 error!(?e, "Failed to send auth challenge");
@@ -151,7 +132,7 @@ impl AuthStateMachineContext for AuthContext<'_> {
             })?;
 
         Ok(ChallengeContext {
-            challenge,
+            challenge_nonce: nonce,
             key: pubkey,
         })
     }
@@ -159,9 +140,9 @@ impl AuthStateMachineContext for AuthContext<'_> {
     #[allow(missing_docs)]
     #[allow(clippy::result_unit_err)]
     async fn verify_bootstrap_token(
-        &self,
-        BootstrapAuthRequest { pubkey, token }: &BootstrapAuthRequest,
-    ) -> Result<bool, Self::Error> {
+        &mut self,
+        BootstrapAuthRequest { pubkey, token }: BootstrapAuthRequest,
+    ) -> Result<AuthPublicKey, Self::Error> {
         let token_ok: bool = self
             .conn
             .actors
@@ -171,32 +152,71 @@ impl AuthStateMachineContext for AuthContext<'_> {
             })
             .await
             .map_err(|e| {
-                error!(?pubkey, "Failed to consume bootstrap token: {e}");
-                Error::BootstrapperActorUnreachable
+                error!(?e, "Failed to consume bootstrap token");
+                Error::internal("Failed to consume bootstrap token")
             })?;
 
         if !token_ok {
-            error!(?pubkey, "Invalid bootstrap token provided");
+            error!("Invalid bootstrap token provided");
             return Err(Error::InvalidBootstrapToken);
         }
 
-        register_key(&self.conn.db, pubkey.as_bytes()).await?;
+        register_key(&self.conn.db, &pubkey).await?;
 
-        Ok(true)
+        self.transport
+            .send(Ok(Outbound::AuthSuccess))
+            .await
+            .map_err(|_| Error::Transport)?;
+
+        Ok(pubkey)
     }
 
-    fn provide_key_bootstrap(
+    #[allow(missing_docs)]
+    #[allow(clippy::unused_unit)]
+    async fn verify_solution(
         &mut self,
-        event_data: BootstrapAuthRequest,
-    ) -> Result<VerifyingKey, Self::Error> {
-        Ok(event_data.pubkey)
+        ChallengeContext {
+            challenge_nonce,
+            key,
+        }: &ChallengeContext,
+        ChallengeSolution { solution }: ChallengeSolution,
+    ) -> Result<AuthPublicKey, Self::Error> {
+        let formatted = arbiter_proto::format_challenge(*challenge_nonce, &key.to_stored_bytes());
+
+        let valid = match key {
+            AuthPublicKey::Ed25519(vk) => {
+                let sig = solution.as_slice().try_into().map_err(|_| {
+                    error!(?solution, "Invalid Ed25519 signature length");
+                    Error::InvalidChallengeSolution
+                })?;
+                vk.verify_strict(&formatted, &sig).is_ok()
+            }
+            AuthPublicKey::EcdsaSecp256k1(vk) => {
+                use k256::ecdsa::signature::Verifier as _;
+                let sig = k256::ecdsa::Signature::try_from(solution.as_slice()).map_err(|_| {
+                    error!(?solution, "Invalid ECDSA signature bytes");
+                    Error::InvalidChallengeSolution
+                })?;
+                vk.verify(&formatted, &sig).is_ok()
+            }
+            AuthPublicKey::Rsa(pk) => {
+                use rsa::signature::Verifier as _;
+                let verifying_key = rsa::pss::VerifyingKey::<sha2::Sha256>::new(pk.clone());
+                let sig = rsa::pss::Signature::try_from(solution.as_slice()).map_err(|_| {
+                    error!(?solution, "Invalid RSA signature bytes");
+                    Error::InvalidChallengeSolution
+                })?;
+                verifying_key.verify(&formatted, &sig).is_ok()
+            }
+        };
+
+        if valid {
+            self.transport
+                .send(Ok(Outbound::AuthSuccess))
+                .await
+                .map_err(|_| Error::Transport)?;
         }
 
-    fn provide_key(
-        &mut self,
-        state_data: &ChallengeContext,
-        _: ChallengeSolution,
-    ) -> Result<VerifyingKey, Self::Error> {
-        Ok(state_data.key)
+        Ok(key.clone())
     }
 }
@@ -1,65 +1,94 @@
-use arbiter_proto::{
-    proto::user_agent::{UserAgentRequest, UserAgentResponse},
-    transport::Bi,
-};
-use kameo::actor::Spawn as _;
-use tracing::{error, info};
-
 use crate::{
-    actors::{GlobalActors, user_agent::session::UserAgentSession},
-    db::{self},
+    actors::GlobalActors,
+    db::{self, models::KeyType},
 };
 
-#[derive(Debug, thiserror::Error, PartialEq)]
-pub enum UserAgentError {
-    #[error("Expected message with payload")]
-    MissingRequestPayload,
-    #[error("Unexpected request payload")]
-    UnexpectedRequestPayload,
-    #[error("Invalid state for unseal encrypted key")]
-    InvalidStateForUnsealEncryptedKey,
-    #[error("client_pubkey must be 32 bytes")]
-    InvalidClientPubkeyLength,
-    #[error("State machine error")]
-    StateTransitionFailed,
-    #[error("Vault is not available")]
-    KeyHolderActorUnreachable,
-    #[error(transparent)]
-    Auth(#[from] auth::Error),
-    #[error("Failed registering connection")]
-    ConnectionRegistrationFailed,
+/// Abstraction over Ed25519 / ECDSA-secp256k1 / RSA public keys used during the auth handshake.
+#[derive(Clone, Debug)]
+pub enum AuthPublicKey {
+    Ed25519(ed25519_dalek::VerifyingKey),
+    /// Compressed SEC1 public key; signature bytes are raw 64-byte (r||s).
+    EcdsaSecp256k1(k256::ecdsa::VerifyingKey),
+    /// RSA-2048+ public key (Windows Hello / KeyCredentialManager); signature bytes are PSS+SHA-256.
+    Rsa(rsa::RsaPublicKey),
 }
 
-pub type Transport =
-    Box<dyn Bi<UserAgentRequest, Result<UserAgentResponse, UserAgentError>> + Send>;
+impl AuthPublicKey {
+    /// Canonical bytes stored in DB and echoed back in the challenge.
+    /// Ed25519: raw 32 bytes. ECDSA: SEC1 compressed 33 bytes. RSA: DER-encoded SPKI.
+    pub fn to_stored_bytes(&self) -> Vec<u8> {
+        match self {
+            AuthPublicKey::Ed25519(k) => k.to_bytes().to_vec(),
+            // SEC1 compressed (33 bytes) is the natural compact format for secp256k1
+            AuthPublicKey::EcdsaSecp256k1(k) => k.to_encoded_point(true).as_bytes().to_vec(),
+            AuthPublicKey::Rsa(k) => {
+                use rsa::pkcs8::EncodePublicKey as _;
+                #[allow(clippy::expect_used)]
+                k.to_public_key_der()
+                    .expect("rsa SPKI encoding is infallible")
+                    .to_vec()
+            }
+        }
+    }
+
+    pub fn key_type(&self) -> KeyType {
+        match self {
+            AuthPublicKey::Ed25519(_) => KeyType::Ed25519,
+            AuthPublicKey::EcdsaSecp256k1(_) => KeyType::EcdsaSecp256k1,
+            AuthPublicKey::Rsa(_) => KeyType::Rsa,
+        }
+    }
+}
+
+impl TryFrom<(KeyType, Vec<u8>)> for AuthPublicKey {
+    type Error = &'static str;
+
+    fn try_from(value: (KeyType, Vec<u8>)) -> Result<Self, Self::Error> {
+        let (key_type, bytes) = value;
+        match key_type {
+            KeyType::Ed25519 => {
+                let bytes: [u8; 32] = bytes.try_into().map_err(|_| "invalid Ed25519 key length")?;
+                let key = ed25519_dalek::VerifyingKey::from_bytes(&bytes)
+                    .map_err(|_e| "invalid Ed25519 key")?;
+                Ok(AuthPublicKey::Ed25519(key))
+            }
+            KeyType::EcdsaSecp256k1 => {
+                let point =
+                    k256::EncodedPoint::from_bytes(&bytes).map_err(|_e| "invalid ECDSA key")?;
+                let key = k256::ecdsa::VerifyingKey::from_encoded_point(&point)
+                    .map_err(|_e| "invalid ECDSA key")?;
+                Ok(AuthPublicKey::EcdsaSecp256k1(key))
+            }
+            KeyType::Rsa => {
+                use rsa::pkcs8::DecodePublicKey as _;
+                let key = rsa::RsaPublicKey::from_public_key_der(&bytes)
+                    .map_err(|_e| "invalid RSA key")?;
+                Ok(AuthPublicKey::Rsa(key))
+            }
+        }
+    }
+}
+
+// Messages, sent by user agent to connection client without having a request
+#[derive(Debug)]
+pub enum OutOfBand {
+    ClientConnectionRequest { pubkey: ed25519_dalek::VerifyingKey },
+    ClientConnectionCancel,
+}
 
 pub struct UserAgentConnection {
-    db: db::DatabasePool,
-    actors: GlobalActors,
-    transport: Transport,
+    pub(crate) db: db::DatabasePool,
+    pub(crate) actors: GlobalActors,
 }
 
 impl UserAgentConnection {
-    pub fn new(db: db::DatabasePool, actors: GlobalActors, transport: Transport) -> Self {
-        Self {
-            db,
-            actors,
-            transport,
-        }
+    pub fn new(db: db::DatabasePool, actors: GlobalActors) -> Self {
+        Self { db, actors }
     }
 }
 
 pub mod auth;
 pub mod session;
 
-pub async fn connect_user_agent(props: UserAgentConnection) {
-    match auth::authenticate_and_create(props).await {
-        Ok(session) => {
-            UserAgentSession::spawn(session);
-            info!("User authenticated, session started");
-        }
-        Err(err) => {
-            error!(?err, "Authentication failed, closing connection");
-        }
-    }
-}
+pub use auth::authenticate;
+pub use session::UserAgentSession;
@@ -1,254 +1,116 @@
-use std::{ops::DerefMut, sync::Mutex};
+use std::borrow::Cow;
 
-use arbiter_proto::proto::{
-    evm as evm_proto,
-    user_agent::{
-        UnsealEncryptedKey, UnsealResult, UnsealStart, UnsealStartResponse, UserAgentRequest,
-        UserAgentResponse, user_agent_request::Payload as UserAgentRequestPayload,
-        user_agent_response::Payload as UserAgentResponsePayload,
-    },
-};
-use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
+use arbiter_proto::transport::Sender;
+use async_trait::async_trait;
 use ed25519_dalek::VerifyingKey;
-use kameo::{
-    Actor,
-    error::SendError,
-};
-use memsafe::MemSafe;
-use tokio::select;
-use tracing::{error, info};
-use x25519_dalek::{EphemeralSecret, PublicKey};
+use kameo::{Actor, messages};
+use thiserror::Error;
+use tokio::sync::watch;
+use tracing::error;
 
 use crate::actors::{
-    evm::{Generate, ListWallets},
-    keyholder::{self, TryUnseal},
     router::RegisterUserAgent,
-    user_agent::{UserAgentConnection, UserAgentError},
+    user_agent::{OutOfBand, UserAgentConnection},
 };
 
 mod state;
-use state::{DummyContext, UnsealContext, UserAgentEvents, UserAgentStateMachine, UserAgentStates};
+use state::{DummyContext, UserAgentEvents, UserAgentStateMachine};
 
+#[derive(Debug, Error)]
+pub enum Error {
+    #[error("State transition failed")]
+    State,
+
+    #[error("Internal error: {message}")]
+    Internal { message: Cow<'static, str> },
+}
+
+impl Error {
+    pub fn internal(message: impl Into<Cow<'static, str>>) -> Self {
+        Self::Internal {
+            message: message.into(),
+        }
+    }
+}
+
 pub struct UserAgentSession {
     props: UserAgentConnection,
-    key: VerifyingKey,
     state: UserAgentStateMachine<DummyContext>,
+    #[allow(dead_code, reason = "The session keeps ownership of the outbound transport even before the state-machine flow starts using it directly")]
+    sender: Box<dyn Sender<OutOfBand>>,
 }
 
+mod connection;
+pub(crate) use connection::{
+    BootstrapError, HandleBootstrapEncryptedKey, HandleEvmWalletCreate, HandleEvmWalletList,
+    HandleGrantCreate, HandleGrantDelete, HandleGrantList, HandleQueryVaultState,
+};
+pub use connection::{HandleUnsealEncryptedKey, HandleUnsealRequest, UnsealError};
+
 impl UserAgentSession {
-    pub(crate) fn new(props: UserAgentConnection, key: VerifyingKey) -> Self {
+    pub(crate) fn new(props: UserAgentConnection, sender: Box<dyn Sender<OutOfBand>>) -> Self {
         Self {
             props,
-            key,
             state: UserAgentStateMachine::new(DummyContext),
+            sender,
         }
     }
 
-    fn transition(&mut self, event: UserAgentEvents) -> Result<(), UserAgentError> {
+    pub fn new_test(db: crate::db::DatabasePool, actors: crate::actors::GlobalActors) -> Self {
+        struct DummySender;
+
+        #[async_trait]
+        impl Sender<OutOfBand> for DummySender {
+            async fn send(
+                &mut self,
+                _item: OutOfBand,
+            ) -> Result<(), arbiter_proto::transport::Error> {
+                Ok(())
+            }
+        }
+
+        Self::new(UserAgentConnection::new(db, actors), Box::new(DummySender))
+    }
+
+    fn transition(&mut self, event: UserAgentEvents) -> Result<(), Error> {
         self.state.process_event(event).map_err(|e| {
             error!(?e, "State transition failed");
-            UserAgentError::StateTransitionFailed
+            Error::State
         })?;
         Ok(())
     }
 
-    pub async fn process_transport_inbound(&mut self, req: UserAgentRequest) -> Output {
-        let msg = req.payload.ok_or_else(|| {
-            error!(actor = "useragent", "Received message with no payload");
-            UserAgentError::MissingRequestPayload
-        })?;
-
-        match msg {
-            UserAgentRequestPayload::UnsealStart(unseal_start) => {
-                self.handle_unseal_request(unseal_start).await
-            }
-            UserAgentRequestPayload::UnsealEncryptedKey(unseal_encrypted_key) => {
-                self.handle_unseal_encrypted_key(unseal_encrypted_key).await
-            }
-            UserAgentRequestPayload::EvmWalletCreate(_) => self.handle_evm_wallet_create().await,
-            UserAgentRequestPayload::EvmWalletList(_) => self.handle_evm_wallet_list().await,
-            _ => Err(UserAgentError::UnexpectedRequestPayload),
-        }
-    }
-}
-
-type Output = Result<UserAgentResponse, UserAgentError>;
-
-fn response(payload: UserAgentResponsePayload) -> UserAgentResponse {
-    UserAgentResponse {
-        payload: Some(payload),
-    }
 }
 
+#[messages]
 impl UserAgentSession {
-    async fn handle_unseal_request(&mut self, req: UnsealStart) -> Output {
-        let secret = EphemeralSecret::random();
-        let public_key = PublicKey::from(&secret);
-
-        let client_pubkey_bytes: [u8; 32] = req
-            .client_pubkey
-            .try_into()
-            .map_err(|_| UserAgentError::InvalidClientPubkeyLength)?;
-
-        let client_public_key = PublicKey::from(client_pubkey_bytes);
-
-        self.transition(UserAgentEvents::UnsealRequest(UnsealContext {
-            secret: Mutex::new(Some(secret)),
-            client_public_key,
-        }))?;
-
-        Ok(response(UserAgentResponsePayload::UnsealStartResponse(
-            UnsealStartResponse {
-                server_pubkey: public_key.as_bytes().to_vec(),
-            },
-        )))
-    }
-
-    async fn handle_unseal_encrypted_key(&mut self, req: UnsealEncryptedKey) -> Output {
-        let UserAgentStates::WaitingForUnsealKey(unseal_context) = self.state.state() else {
-            error!("Received unseal encrypted key in invalid state");
-            return Err(UserAgentError::InvalidStateForUnsealEncryptedKey);
-        };
-        let ephemeral_secret = {
-            let mut secret_lock = unseal_context.secret.lock().unwrap();
-            let secret = secret_lock.take();
-            match secret {
-                Some(secret) => secret,
-                None => {
-                    drop(secret_lock);
-                    error!("Ephemeral secret already taken");
-                    self.transition(UserAgentEvents::ReceivedInvalidKey)?;
-                    return Ok(response(UserAgentResponsePayload::UnsealResult(
-                        UnsealResult::InvalidKey.into(),
-                    )));
-                }
-            }
-        };
-
-        let nonce = XNonce::from_slice(&req.nonce);
-
-        let shared_secret = ephemeral_secret.diffie_hellman(&unseal_context.client_public_key);
-        let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());
-
-        let mut seal_key_buffer = MemSafe::new(req.ciphertext.clone()).unwrap();
-
-        let decryption_result = {
-            let mut write_handle = seal_key_buffer.write().unwrap();
-            let write_handle = write_handle.deref_mut();
-            cipher.decrypt_in_place(nonce, &req.associated_data, write_handle)
-        };
-
-        match decryption_result {
-            Ok(_) => {
-                match self
-                    .props
-                    .actors
-                    .key_holder
-                    .ask(TryUnseal {
-                        seal_key_raw: seal_key_buffer,
+    #[message]
+    pub async fn request_new_client_approval(
+        &mut self,
+        client_pubkey: VerifyingKey,
+        mut cancel_flag: watch::Receiver<()>,
+    ) -> Result<bool, ()> {
+        if self
+            .sender
+            .send(OutOfBand::ClientConnectionRequest {
+                pubkey: client_pubkey,
             })
             .await
+            .is_err()
         {
-            Ok(_) => {
-                info!("Successfully unsealed key with client-provided key");
-                self.transition(UserAgentEvents::ReceivedValidKey)?;
-                Ok(response(UserAgentResponsePayload::UnsealResult(
-                    UnsealResult::Success.into(),
-                )))
-            }
-            Err(SendError::HandlerError(keyholder::Error::InvalidKey)) => {
-                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
-                Ok(response(UserAgentResponsePayload::UnsealResult(
-                    UnsealResult::InvalidKey.into(),
-                )))
-            }
-            Err(SendError::HandlerError(err)) => {
-                error!(?err, "Keyholder failed to unseal key");
-                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
-                Ok(response(UserAgentResponsePayload::UnsealResult(
-                    UnsealResult::InvalidKey.into(),
-                )))
-            }
-            Err(err) => {
-                error!(?err, "Failed to send unseal request to keyholder");
-                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
-                Err(UserAgentError::KeyHolderActorUnreachable)
-            }
-            }
-            Err(err) => {
-                error!(?err, "Failed to decrypt unseal key");
-                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
-                Ok(response(UserAgentResponsePayload::UnsealResult(
-                    UnsealResult::InvalidKey.into(),
-                )))
-            }
-        }
-    }
-}
-
-impl UserAgentSession {
-    async fn handle_evm_wallet_create(&mut self) -> Output {
-        use evm_proto::wallet_create_response::Result as CreateResult;
-
-        let result = match self.props.actors.evm.ask(Generate {}).await {
-            Ok(address) => CreateResult::Wallet(evm_proto::WalletEntry {
-                address: address.as_slice().to_vec(),
-            }),
-            Err(err) => CreateResult::Error(map_evm_error("wallet create", err).into()),
-        };
-
-        Ok(response(UserAgentResponsePayload::EvmWalletCreate(
-            evm_proto::WalletCreateResponse {
-                result: Some(result),
-            },
-        )))
+            return Err(());
     }
 
-    async fn handle_evm_wallet_list(&mut self) -> Output {
-        use evm_proto::wallet_list_response::Result as ListResult;
-
-        let result = match self.props.actors.evm.ask(ListWallets {}).await {
-            Ok(wallets) => ListResult::Wallets(evm_proto::WalletList {
-                wallets: wallets
-                    .into_iter()
-                    .map(|addr| evm_proto::WalletEntry {
-                        address: addr.as_slice().to_vec(),
-                    })
-                    .collect(),
-            }),
-            Err(err) => ListResult::Error(map_evm_error("wallet list", err).into()),
-        };
-
-        Ok(response(UserAgentResponsePayload::EvmWalletList(
-            evm_proto::WalletListResponse {
-                result: Some(result),
-            },
-        )))
-    }
-}
-
-fn map_evm_error<M>(op: &str, err: SendError<M, crate::actors::evm::Error>) -> evm_proto::EvmError {
-    use crate::actors::{evm::Error as EvmError, keyholder::Error as KhError};
-    match err {
-        SendError::HandlerError(EvmError::Keyholder(KhError::NotBootstrapped)) => {
-            evm_proto::EvmError::VaultSealed
-        }
-        SendError::HandlerError(err) => {
-            error!(?err, "EVM {op} failed");
-            evm_proto::EvmError::Internal
-        }
-        _ => {
-            error!("EVM actor unreachable during {op}");
-            evm_proto::EvmError::Internal
-        }
+        let _ = cancel_flag.changed().await;
+
+        let _ = self.sender.send(OutOfBand::ClientConnectionCancel).await;
+        Ok(false)
     }
 }
 
 impl Actor for UserAgentSession {
     type Args = Self;
 
-    type Error = UserAgentError;
+    type Error = Error;
 
     async fn on_start(
         args: Self::Args,
@@ -263,58 +125,8 @@ impl Actor for UserAgentSession {
         .await
         .map_err(|err| {
             error!(?err, "Failed to register user agent connection with router");
-            UserAgentError::ConnectionRegistrationFailed
+            Error::internal("Failed to register user agent connection with router")
         })?;
         Ok(args)
     }
 
-    async fn next(
-        &mut self,
-        _actor_ref: kameo::prelude::WeakActorRef<Self>,
-        mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
-    ) -> Option<kameo::mailbox::Signal<Self>> {
-        loop {
-            select! {
-                signal = mailbox_rx.recv() => {
-                    return signal;
-                }
-                msg = self.props.transport.recv() => {
-                    match msg {
-                        Some(request) => {
-                            match self.process_transport_inbound(request).await {
-                                Ok(response) => {
-                                    if self.props.transport.send(Ok(response)).await.is_err() {
-                                        error!(actor = "useragent", reason = "channel closed", "send.failed");
-                                        return Some(kameo::mailbox::Signal::Stop);
-                                    }
-                                }
-                                Err(err) => {
-                                    let _ = self.props.transport.send(Err(err)).await;
-                                    return Some(kameo::mailbox::Signal::Stop);
-                                }
-                            }
-                        }
-                        None => {
-                            info!(actor = "useragent", "transport.closed");
-                            return Some(kameo::mailbox::Signal::Stop);
-                        }
-                    }
-                }
-            }
-        }
-    }
-}
-
-impl UserAgentSession {
-    pub fn new_test(db: crate::db::DatabasePool, actors: crate::actors::GlobalActors) -> Self {
-        use arbiter_proto::transport::DummyTransport;
-        let transport: super::Transport = Box::new(DummyTransport::new());
-        let props = UserAgentConnection::new(db, actors, transport);
-        let key = VerifyingKey::from_bytes(&[0u8; 32]).unwrap();
-        Self {
-            props,
-            key,
-            state: UserAgentStateMachine::new(DummyContext),
-        }
-    }
-}
 }
@@ -0,0 +1,354 @@
+use std::sync::Mutex;
+
+use alloy::primitives::Address;
+use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
+use kameo::error::SendError;
+use kameo::messages;
+use tracing::{error, info};
+use x25519_dalek::{EphemeralSecret, PublicKey};
+
+use crate::actors::keyholder::KeyHolderState;
+use crate::actors::user_agent::session::Error;
+use crate::evm::policies::{Grant, SpecificGrant};
+use crate::safe_cell::SafeCell;
+use crate::{
+    actors::{
+        evm::{
+            Generate, ListWallets, UseragentCreateGrant, UseragentDeleteGrant, UseragentListGrants,
+        },
+        keyholder::{self, Bootstrap, TryUnseal},
+        user_agent::session::{
+            UserAgentSession,
+            state::{UnsealContext, UserAgentEvents, UserAgentStates},
+        },
+    },
+    safe_cell::SafeCellHandle as _,
+};
+
+impl UserAgentSession {
+    fn take_unseal_secret(&mut self) -> Result<(EphemeralSecret, PublicKey), Error> {
+        let UserAgentStates::WaitingForUnsealKey(unseal_context) = self.state.state() else {
+            error!("Received encrypted key in invalid state");
+            return Err(Error::internal("Invalid state for unseal encrypted key"));
+        };
+
+        let ephemeral_secret = {
+            #[allow(
+                clippy::unwrap_used,
+                reason = "Mutex poison is unrecoverable and should panic"
+            )]
+            let mut secret_lock = unseal_context.secret.lock().unwrap();
+            let secret = secret_lock.take();
+            match secret {
+                Some(secret) => secret,
+                None => {
+                    drop(secret_lock);
+                    error!("Ephemeral secret already taken");
+                    return Err(Error::internal("Ephemeral secret already taken"));
+                }
+            }
+        };
+
+        Ok((ephemeral_secret, unseal_context.client_public_key))
+    }
+
+    fn decrypt_client_key_material(
+        ephemeral_secret: EphemeralSecret,
+        client_public_key: PublicKey,
+        nonce: &[u8],
+        ciphertext: &[u8],
+        associated_data: &[u8],
+    ) -> Result<SafeCell<Vec<u8>>, ()> {
+        let nonce = XNonce::from_slice(nonce);
+
+        let shared_secret = ephemeral_secret.diffie_hellman(&client_public_key);
+        let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());
+
+        let mut key_buffer = SafeCell::new(ciphertext.to_vec());
+
+        let decryption_result = key_buffer.write_inline(|write_handle| {
+            cipher.decrypt_in_place(nonce, associated_data, write_handle)
+        });
+
+        match decryption_result {
+            Ok(_) => Ok(key_buffer),
+            Err(err) => {
+                error!(?err, "Failed to decrypt encrypted key material");
+                Err(())
+            }
+        }
+    }
+}
|
|
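The `take_unseal_secret` helper above moves the ephemeral secret out of a `Mutex<Option<_>>` with `Option::take`, so the key material can only ever be consumed once. A minimal std-only sketch of that one-shot pattern (the `OneShot` type here is illustrative, not part of the codebase):

```rust
use std::sync::Mutex;

// One-shot container: the value lives in a `Mutex<Option<T>>`, and
// `Option::take` moves it out, leaving `None` behind so a second
// attempt fails instead of silently reusing secret material.
struct OneShot<T> {
    slot: Mutex<Option<T>>,
}

impl<T> OneShot<T> {
    fn new(value: T) -> Self {
        Self {
            slot: Mutex::new(Some(value)),
        }
    }

    // First call yields the value; every later call yields an error.
    fn take(&self) -> Result<T, &'static str> {
        let mut guard = self.slot.lock().unwrap();
        guard.take().ok_or("already taken")
    }
}

fn main() {
    let secret = OneShot::new([0x42u8; 32]);
    assert!(secret.take().is_ok());
    assert!(secret.take().is_err());
    println!("second take rejected");
}
```

The same shape appears in `UnsealContext { secret: Mutex::new(Some(secret)), .. }` below: the handler that receives the encrypted key is the single consumer.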
pub struct UnsealStartResponse {
    pub server_pubkey: PublicKey,
}

#[derive(Debug, Error)]
pub enum UnsealError {
    #[error("Invalid key provided for unsealing")]
    InvalidKey,
    #[error("Internal error during unsealing process")]
    General(#[from] super::Error),
}

#[derive(Debug, Error)]
pub enum BootstrapError {
    #[error("Invalid key provided for bootstrapping")]
    InvalidKey,
    #[error("Vault is already bootstrapped")]
    AlreadyBootstrapped,

    #[error("Internal error during bootstrapping process")]
    General(#[from] super::Error),
}
#[messages]
impl UserAgentSession {
    #[message]
    pub async fn handle_unseal_request(
        &mut self,
        client_pubkey: x25519_dalek::PublicKey,
    ) -> Result<UnsealStartResponse, Error> {
        let secret = EphemeralSecret::random();
        let public_key = PublicKey::from(&secret);

        self.transition(UserAgentEvents::UnsealRequest(UnsealContext {
            secret: Mutex::new(Some(secret)),
            client_public_key: client_pubkey,
        }))?;

        Ok(UnsealStartResponse {
            server_pubkey: public_key,
        })
    }

    #[message]
    pub async fn handle_unseal_encrypted_key(
        &mut self,
        nonce: Vec<u8>,
        ciphertext: Vec<u8>,
        associated_data: Vec<u8>,
    ) -> Result<(), UnsealError> {
        let (ephemeral_secret, client_public_key) = match self.take_unseal_secret() {
            Ok(values) => values,
            Err(Error::State) => {
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                return Err(UnsealError::InvalidKey);
            }
            Err(_err) => {
                return Err(Error::internal("Failed to take unseal secret").into());
            }
        };

        let seal_key_buffer = match Self::decrypt_client_key_material(
            ephemeral_secret,
            client_public_key,
            &nonce,
            &ciphertext,
            &associated_data,
        ) {
            Ok(buffer) => buffer,
            Err(()) => {
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                return Err(UnsealError::InvalidKey);
            }
        };

        match self
            .props
            .actors
            .key_holder
            .ask(TryUnseal {
                seal_key_raw: seal_key_buffer,
            })
            .await
        {
            Ok(_) => {
                info!("Successfully unsealed key with client-provided key");
                self.transition(UserAgentEvents::ReceivedValidKey)?;
                Ok(())
            }
            Err(SendError::HandlerError(keyholder::Error::InvalidKey)) => {
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                Err(UnsealError::InvalidKey)
            }
            Err(SendError::HandlerError(err)) => {
                error!(?err, "Keyholder failed to unseal key");
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                Err(UnsealError::InvalidKey)
            }
            Err(err) => {
                error!(?err, "Failed to send unseal request to keyholder");
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                Err(Error::internal("Vault actor error").into())
            }
        }
    }

    #[message]
    pub(crate) async fn handle_bootstrap_encrypted_key(
        &mut self,
        nonce: Vec<u8>,
        ciphertext: Vec<u8>,
        associated_data: Vec<u8>,
    ) -> Result<(), BootstrapError> {
        let (ephemeral_secret, client_public_key) = match self.take_unseal_secret() {
            Ok(values) => values,
            Err(Error::State) => {
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                return Err(BootstrapError::InvalidKey);
            }
            Err(err) => return Err(err.into()),
        };

        let seal_key_buffer = match Self::decrypt_client_key_material(
            ephemeral_secret,
            client_public_key,
            &nonce,
            &ciphertext,
            &associated_data,
        ) {
            Ok(buffer) => buffer,
            Err(()) => {
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                return Err(BootstrapError::InvalidKey);
            }
        };

        match self
            .props
            .actors
            .key_holder
            .ask(Bootstrap {
                seal_key_raw: seal_key_buffer,
            })
            .await
        {
            Ok(_) => {
                info!("Successfully bootstrapped vault with client-provided key");
                self.transition(UserAgentEvents::ReceivedValidKey)?;
                Ok(())
            }
            Err(SendError::HandlerError(keyholder::Error::AlreadyBootstrapped)) => {
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                Err(BootstrapError::AlreadyBootstrapped)
            }
            Err(SendError::HandlerError(err)) => {
                error!(?err, "Keyholder failed to bootstrap vault");
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                Err(BootstrapError::InvalidKey)
            }
            Err(err) => {
                error!(?err, "Failed to send bootstrap request to keyholder");
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                Err(BootstrapError::General(Error::internal(
                    "Vault actor error",
                )))
            }
        }
    }
}
#[messages]
impl UserAgentSession {
    #[message]
    pub(crate) async fn handle_query_vault_state(&mut self) -> Result<KeyHolderState, Error> {
        use crate::actors::keyholder::GetState;

        let vault_state = match self.props.actors.key_holder.ask(GetState {}).await {
            Ok(state) => state,
            Err(err) => {
                error!(?err, actor = "useragent", "keyholder.query.failed");
                return Err(Error::internal("Vault is in broken state"));
            }
        };

        Ok(vault_state)
    }
}
#[messages]
impl UserAgentSession {
    #[message]
    pub(crate) async fn handle_evm_wallet_create(&mut self) -> Result<Address, Error> {
        match self.props.actors.evm.ask(Generate {}).await {
            Ok(address) => Ok(address),
            Err(SendError::HandlerError(err)) => Err(Error::internal(format!(
                "EVM wallet generation failed: {err}"
            ))),
            Err(err) => {
                error!(?err, "EVM actor unreachable during wallet create");
                Err(Error::internal("EVM actor unreachable"))
            }
        }
    }

    #[message]
    pub(crate) async fn handle_evm_wallet_list(&mut self) -> Result<Vec<Address>, Error> {
        match self.props.actors.evm.ask(ListWallets {}).await {
            Ok(wallets) => Ok(wallets),
            Err(err) => {
                error!(?err, "EVM wallet list failed");
                Err(Error::internal("Failed to list EVM wallets"))
            }
        }
    }
}
#[messages]
impl UserAgentSession {
    #[message]
    pub(crate) async fn handle_grant_list(&mut self) -> Result<Vec<Grant<SpecificGrant>>, Error> {
        match self.props.actors.evm.ask(UseragentListGrants {}).await {
            Ok(grants) => Ok(grants),
            Err(err) => {
                error!(?err, "EVM grant list failed");
                Err(Error::internal("Failed to list EVM grants"))
            }
        }
    }

    #[message]
    pub(crate) async fn handle_grant_create(
        &mut self,
        client_id: i32,
        basic: crate::evm::policies::SharedGrantSettings,
        grant: crate::evm::policies::SpecificGrant,
    ) -> Result<i32, Error> {
        match self
            .props
            .actors
            .evm
            .ask(UseragentCreateGrant {
                client_id,
                basic,
                grant,
            })
            .await
        {
            Ok(grant_id) => Ok(grant_id),
            Err(err) => {
                error!(?err, "EVM grant create failed");
                Err(Error::internal("Failed to create EVM grant"))
            }
        }
    }

    #[message]
    pub(crate) async fn handle_grant_delete(&mut self, grant_id: i32) -> Result<(), Error> {
        match self
            .props
            .actors
            .evm
            .ask(UseragentDeleteGrant { grant_id })
            .await
        {
            Ok(()) => Ok(()),
            Err(err) => {
                error!(?err, "EVM grant delete failed");
                Err(Error::internal("Failed to delete EVM grant"))
            }
        }
    }
}
@@ -8,7 +8,7 @@ use rcgen::{
     BasicConstraints, Certificate, CertificateParams, CertifiedIssuer, DistinguishedName, DnType,
     IsCa, Issuer, KeyPair, KeyUsagePurpose,
 };
-use rustls::pki_types::{pem::PemObject};
+use rustls::pki_types::pem::PemObject;
 use thiserror::Error;
 use tonic::transport::CertificateDer;

@@ -59,10 +59,7 @@ pub enum InitError {
 pub type PemCert = String;

 pub fn encode_cert_to_pem(cert: &CertificateDer) -> PemCert {
-    pem::encode_config(
-        &Pem::new("CERTIFICATE", cert.to_vec()),
-        ENCODE_CONFIG,
-    )
+    pem::encode_config(&Pem::new("CERTIFICATE", cert.to_vec()), ENCODE_CONFIG)
 }

 #[allow(unused)]
@@ -94,6 +91,10 @@ impl TlsCa {

         let cert_key_pem = certified_issuer.key().serialize_pem();

+        #[allow(
+            clippy::unwrap_used,
+            reason = "Broken cert couldn't bootstrap server anyway"
+        )]
         let issuer = Issuer::from_ca_cert_pem(
             &certified_issuer.pem(),
             KeyPair::from_pem(cert_key_pem.as_ref()).unwrap(),
@@ -44,6 +44,14 @@ pub enum DatabaseSetupError {
     Pool(#[from] PoolInitError),
 }

+#[derive(Error, Debug)]
+pub enum DatabaseError {
+    #[error("Database connection error")]
+    Pool(#[from] PoolError),
+    #[error("Database query error")]
+    Connection(#[from] diesel::result::Error),
+}
+
 #[tracing::instrument(level = "info")]
 fn database_path() -> Result<std::path::PathBuf, DatabaseSetupError> {
     let arbiter_home = arbiter_proto::home_path().map_err(DatabaseSetupError::HomeDir)?;
@@ -92,6 +100,7 @@ fn initialize_database(url: &str) -> Result<(), DatabaseSetupError> {
 #[tracing::instrument(level = "info")]
 pub async fn create_pool(url: Option<&str>) -> Result<DatabasePool, DatabaseSetupError> {
     let database_url = url.map(String::from).unwrap_or(
+        #[allow(clippy::expect_used)]
         database_path()?
             .to_str()
             .expect("database path is not valid UTF-8")
@@ -135,11 +144,13 @@ pub async fn create_test_pool() -> DatabasePool {
     let tempfile_name = Alphanumeric.sample_string(&mut rand::rng(), 16);

     let file = std::env::temp_dir().join(tempfile_name);
-    let url = format!(
-        "{}?mode=rwc",
-        file.to_str().expect("temp file path is not valid UTF-8")
-    );
+    #[allow(clippy::expect_used)]
+    let url = file
+        .to_str()
+        .expect("temp file path is not valid UTF-8")
+        .to_string();

+    #[allow(clippy::expect_used)]
     create_pool(Some(&url))
         .await
         .expect("Failed to create test database pool")
@@ -2,15 +2,16 @@
 #![allow(clippy::all)]

 use crate::db::schema::{
-    self, aead_encrypted, arbiter_settings, evm_basic_grant, evm_ether_transfer_grant, evm_ether_transfer_grant_target, evm_ether_transfer_limit, evm_token_transfer_grant, evm_token_transfer_log, evm_token_transfer_volume_limit, evm_transaction_log, evm_wallet, root_key_history, tls_history
+    self, aead_encrypted, arbiter_settings, evm_basic_grant, evm_ether_transfer_grant,
+    evm_ether_transfer_grant_target, evm_ether_transfer_limit, evm_token_transfer_grant,
+    evm_token_transfer_log, evm_token_transfer_volume_limit, evm_transaction_log, evm_wallet,
+    root_key_history, tls_history,
 };
 use chrono::{DateTime, Utc};
 use diesel::{prelude::*, sqlite::Sqlite};
 use restructed::Models;

 pub mod types {
-    use std::os::unix;
-
     use chrono::{DateTime, Utc};
     use diesel::{
         deserialize::{FromSql, FromSqlRow},
@@ -21,7 +22,7 @@ pub mod types {
     };

     #[derive(Debug, FromSqlRow, AsExpression)]
-    #[sql_type = "Integer"]
+    #[diesel(sql_type = Integer)]
     #[repr(transparent)] // hint compiler to optimize the wrapper struct away
     pub struct SqliteTimestamp(pub DateTime<Utc>);
     impl SqliteTimestamp {
@@ -35,9 +36,9 @@ pub mod types {
             SqliteTimestamp(dt)
         }
     }
-    impl Into<chrono::DateTime<Utc>> for SqliteTimestamp {
-        fn into(self) -> chrono::DateTime<Utc> {
-            self.0
+    impl From<SqliteTimestamp> for chrono::DateTime<Utc> {
+        fn from(ts: SqliteTimestamp) -> Self {
+            ts.0
         }
     }

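The hunk above swaps a manual `Into` impl for `impl From<SqliteTimestamp> for DateTime<Utc>`, which is the form the standard library and clippy's `from_over_into` lint prefer: `From` automatically provides `Into` through the blanket impl. A std-only sketch with a hypothetical `Seconds` newtype standing in for `SqliteTimestamp`:

```rust
// `Seconds` is an illustrative newtype, not part of the codebase.
// Implementing `From<Wrapper> for Inner` (rather than a hand-written
// `Into` impl) gives both conversion forms: `Inner::from(w)` directly,
// and `w.into()` via the blanket `impl<T, U: From<T>> Into<U> for T`.
struct Seconds(u64);

impl From<Seconds> for u64 {
    fn from(s: Seconds) -> Self {
        s.0
    }
}

fn main() {
    let raw: u64 = Seconds(90).into(); // provided by the blanket impl
    assert_eq!(raw, 90);
    assert_eq!(u64::from(Seconds(7)), 7);
    println!("ok");
}
```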
@@ -56,7 +57,7 @@ pub mod types {
         fn from_sql(
             mut bytes: <Sqlite as diesel::backend::Backend>::RawValue<'_>,
         ) -> diesel::deserialize::Result<Self> {
-            let Some(SqliteType::Integer) = bytes.value_type() else {
+            let Some(SqliteType::Long) = bytes.value_type() else {
                 return Err(format!(
                     "Expected Integer type for SqliteTimestamp, got {:?}",
                     bytes.value_type()
@@ -64,13 +65,47 @@ pub mod types {
                 .into());
             };

-            let unix_timestamp = bytes.read_integer();
-            let datetime = DateTime::from_timestamp(unix_timestamp as i64, 0)
-                .ok_or("Timestamp is out of bounds")?;
+            let unix_timestamp = bytes.read_long();
+            let datetime =
+                DateTime::from_timestamp(unix_timestamp, 0).ok_or("Timestamp is out of bounds")?;

             Ok(SqliteTimestamp(datetime))
         }
     }

+    /// Key algorithm stored in the `useragent_client.key_type` column.
+    /// Values must stay stable — they are persisted in the database.
+    #[derive(Debug, Clone, Copy, PartialEq, Eq, FromSqlRow, AsExpression, strum::FromRepr)]
+    #[diesel(sql_type = Integer)]
+    #[repr(i32)]
+    pub enum KeyType {
+        Ed25519 = 1,
+        EcdsaSecp256k1 = 2,
+        Rsa = 3,
+    }
+
+    impl ToSql<Integer, Sqlite> for KeyType {
+        fn to_sql<'b>(
+            &'b self,
+            out: &mut diesel::serialize::Output<'b, '_, Sqlite>,
+        ) -> diesel::serialize::Result {
+            out.set_value(*self as i32);
+            Ok(IsNull::No)
+        }
+    }
+
+    impl FromSql<Integer, Sqlite> for KeyType {
+        fn from_sql(
+            mut bytes: <Sqlite as diesel::backend::Backend>::RawValue<'_>,
+        ) -> diesel::deserialize::Result<Self> {
+            let Some(SqliteType::Long) = bytes.value_type() else {
+                return Err("Expected Integer for KeyType".into());
+            };
+            let discriminant = bytes.read_long();
+            KeyType::from_repr(discriminant as i32)
+                .ok_or_else(|| format!("Unknown KeyType discriminant: {discriminant}").into())
+        }
+    }
 }
 pub use types::*;

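The new `KeyType` column type round-trips an enum through its `i32` discriminant: `*self as i32` on the way into SQLite, `from_repr` on the way out, rejecting unknown values. `strum::FromRepr` derives that reverse mapping; a std-only sketch with the mapping written out by hand shows the invariant the code depends on:

```rust
// Hand-written version of what `strum::FromRepr` derives for `KeyType`:
// every stored i32 maps back to exactly one variant, or is rejected.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(i32)]
enum KeyType {
    Ed25519 = 1,
    EcdsaSecp256k1 = 2,
    Rsa = 3,
}

impl KeyType {
    fn from_repr(discriminant: i32) -> Option<Self> {
        match discriminant {
            1 => Some(Self::Ed25519),
            2 => Some(Self::EcdsaSecp256k1),
            3 => Some(Self::Rsa),
            _ => None,
        }
    }
}

fn main() {
    // Round trip: variant -> i32 -> variant.
    for kt in [KeyType::Ed25519, KeyType::EcdsaSecp256k1, KeyType::Rsa] {
        assert_eq!(KeyType::from_repr(kt as i32), Some(kt));
    }
    // Unknown discriminants (e.g. written by a newer schema) are rejected.
    assert_eq!(KeyType::from_repr(0), None);
    println!("round trip ok");
}
```

Because the discriminants are persisted, renumbering a variant would silently reinterpret existing rows, which is why the doc comment pins the values.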
@@ -150,7 +185,7 @@ pub struct EvmWallet {
     pub created_at: SqliteTimestamp,
 }

-#[derive(Queryable, Debug)]
+#[derive(Queryable, Debug, Insertable, Selectable)]
 #[diesel(table_name = schema::program_client, check_for_backend(Sqlite))]
 pub struct ProgramClient {
     pub id: i32,
@@ -168,6 +203,7 @@ pub struct UseragentClient {
     pub public_key: Vec<u8>,
     pub created_at: SqliteTimestamp,
     pub updated_at: SqliteTimestamp,
+    pub key_type: KeyType,
 }

 #[derive(Models, Queryable, Debug, Insertable, Selectable)]
@@ -253,7 +289,6 @@ pub struct EvmEtherTransferGrantTarget {
     pub address: Vec<u8>,
 }

-
 #[derive(Models, Queryable, Debug, Insertable, Selectable)]
 #[diesel(table_name = evm_token_transfer_grant, check_for_backend(Sqlite))]
 #[view(
@@ -153,6 +153,7 @@ diesel::table! {
         public_key -> Binary,
         created_at -> Integer,
         updated_at -> Integer,
+        key_type -> Integer,
     }
 }

@@ -1,23 +1,24 @@
 pub mod abi;
 pub mod safe_signer;

-use alloy::{consensus::TxEip1559, primitives::TxKind, signers::Signature};
+use alloy::{
+    consensus::TxEip1559,
+    primitives::{TxKind, U256},
+};
 use chrono::Utc;
-use diesel::{QueryResult, insert_into};
+use diesel::{ExpressionMethods as _, QueryDsl, QueryResult, insert_into, sqlite::Sqlite};
 use diesel_async::{AsyncConnection, RunQueryDsl};

 use crate::{
     db::{
         self,
-        models::{
-            EvmBasicGrant, EvmTransactionLog, NewEvmBasicGrant, NewEvmTransactionLog,
-            SqliteTimestamp,
-        },
+        models::{EvmBasicGrant, NewEvmBasicGrant, NewEvmTransactionLog, SqliteTimestamp},
         schema::{self, evm_transaction_log},
     },
     evm::policies::{
-        EvalContext, EvalViolation, FullGrant, Policy, SpecificMeaning,
-        ether_transfer::EtherTransfer, token_transfers::TokenTransfer,
+        DatabaseID, EvalContext, EvalViolation, FullGrant, Grant, Policy, SharedGrantSettings,
+        SpecificGrant, SpecificMeaning, ether_transfer::EtherTransfer,
+        token_transfers::TokenTransfer,
     },
 };

@@ -54,7 +55,6 @@ pub enum VetError {
     Evaluated(SpecificMeaning, #[source] PolicyError),
 }

-
 #[derive(Debug, thiserror::Error, miette::Diagnostic)]
 pub enum SignError {
     #[error("Database connection pool error")]
@@ -87,6 +87,17 @@ pub enum CreationError {
     Database(#[from] diesel::result::Error),
 }

+#[derive(Debug, thiserror::Error, miette::Diagnostic)]
+pub enum ListGrantsError {
+    #[error("Database connection pool error")]
+    #[diagnostic(code(arbiter_server::evm::list_grants_error::pool))]
+    Pool(#[from] db::PoolError),
+
+    #[error("Database returned error")]
+    #[diagnostic(code(arbiter_server::evm::list_grants_error::database))]
+    Database(#[from] diesel::result::Error),
+}
+
 /// Controls whether a transaction should be executed or only validated
 #[derive(Debug, Clone, Copy, PartialEq, Eq)]
 pub enum RunKind {
@@ -96,6 +107,52 @@ pub enum RunKind {
     CheckOnly,
 }

+async fn check_shared_constraints(
+    context: &EvalContext,
+    shared: &SharedGrantSettings,
+    shared_grant_id: DatabaseID,
+    conn: &mut impl AsyncConnection<Backend = Sqlite>,
+) -> QueryResult<Vec<EvalViolation>> {
+    let mut violations = Vec::new();
+    let now = Utc::now();
+
+    // Validity window
+    if shared.valid_from.is_some_and(|t| now < t) || shared.valid_until.is_some_and(|t| now > t) {
+        violations.push(EvalViolation::InvalidTime);
+    }
+
+    // Gas fee caps
+    let fee_exceeded = shared
+        .max_gas_fee_per_gas
+        .is_some_and(|cap| U256::from(context.max_fee_per_gas) > cap);
+    let priority_exceeded = shared
+        .max_priority_fee_per_gas
+        .is_some_and(|cap| U256::from(context.max_priority_fee_per_gas) > cap);
+    if fee_exceeded || priority_exceeded {
+        violations.push(EvalViolation::GasLimitExceeded {
+            max_gas_fee_per_gas: shared.max_gas_fee_per_gas,
+            max_priority_fee_per_gas: shared.max_priority_fee_per_gas,
+        });
+    }
+
+    // Transaction count rate limit
+    if let Some(rate_limit) = &shared.rate_limit {
+        let window_start = SqliteTimestamp(now - rate_limit.window);
+        let count: i64 = evm_transaction_log::table
+            .filter(evm_transaction_log::grant_id.eq(shared_grant_id))
+            .filter(evm_transaction_log::signed_at.ge(window_start))
+            .count()
+            .get_result(conn)
+            .await?;
+
+        if count >= rate_limit.count as i64 {
+            violations.push(EvalViolation::RateLimitExceeded);
+        }
+    }
+
+    Ok(violations)
+}
+
 // Supporting only EIP-1559 transactions for now, but we can easily extend this to support legacy transactions if needed
 pub struct Engine {
     db: db::DatabasePool,
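The validity-window rule in the new `check_shared_constraints` collects a violation when `now` falls before an optional `valid_from` or after an optional `valid_until`, with `None` meaning unbounded on that side. A std-only sketch of that predicate, using plain `u64` seconds in place of `chrono::DateTime<Utc>` (the `outside_window` helper is illustrative):

```rust
// A grant is outside its window if it is not yet valid or already expired;
// `None` on either bound means that side is unconstrained. This mirrors
// `shared.valid_from.is_some_and(|t| now < t) || shared.valid_until.is_some_and(|t| now > t)`.
fn outside_window(now: u64, valid_from: Option<u64>, valid_until: Option<u64>) -> bool {
    valid_from.is_some_and(|t| now < t) || valid_until.is_some_and(|t| now > t)
}

fn main() {
    // Unbounded grant is always inside the window.
    assert!(!outside_window(100, None, None));
    // Not yet valid.
    assert!(outside_window(100, Some(200), None));
    // Already expired.
    assert!(outside_window(100, None, Some(50)));
    // Inside a bounded window (boundaries are inclusive).
    assert!(!outside_window(100, Some(50), Some(200)));
    println!("window checks ok");
}
```

Using `Option::is_some_and` keeps the unbounded case implicit: a missing bound simply contributes `false` to the violation check.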
@@ -114,7 +171,11 @@ impl Engine {
             .await?
             .ok_or(PolicyError::NoMatchingGrant)?;

-        let violations = P::evaluate(&context, meaning, &grant, &mut conn).await?;
+        let mut violations =
+            check_shared_constraints(&context, &grant.shared, grant.shared_grant_id, &mut conn)
+                .await?;
+        violations.extend(P::evaluate(&context, meaning, &grant, &mut conn).await?);
+
         if !violations.is_empty() {
             return Err(PolicyError::Violations(violations));
         } else if run_kind == RunKind::Execution {
@@ -166,7 +227,7 @@ impl Engine {
             .values(&NewEvmBasicGrant {
                 wallet_id: full_grant.basic.wallet_id,
                 chain_id: full_grant.basic.chain as i32,
-                client_id: client_id,
+                client_id,
                 valid_from: full_grant.basic.valid_from.map(SqliteTimestamp),
                 valid_until: full_grant.basic.valid_until.map(SqliteTimestamp),
                 max_gas_fee_per_gas: full_grant
@@ -201,6 +262,37 @@ impl Engine {
         Ok(id)
     }

+    pub async fn list_all_grants(&self) -> Result<Vec<Grant<SpecificGrant>>, ListGrantsError> {
+        let mut conn = self.db.get().await?;
+
+        let mut grants: Vec<Grant<SpecificGrant>> = Vec::new();
+
+        grants.extend(
+            EtherTransfer::find_all_grants(&mut conn)
+                .await?
+                .into_iter()
+                .map(|g| Grant {
+                    id: g.id,
+                    shared_grant_id: g.shared_grant_id,
+                    shared: g.shared,
+                    settings: SpecificGrant::EtherTransfer(g.settings),
+                }),
+        );
+        grants.extend(
+            TokenTransfer::find_all_grants(&mut conn)
+                .await?
+                .into_iter()
+                .map(|g| Grant {
+                    id: g.id,
+                    shared_grant_id: g.shared_grant_id,
+                    shared: g.shared,
+                    settings: SpecificGrant::TokenTransfer(g.settings),
+                }),
+        );
+
+        Ok(grants)
+    }
+
     pub async fn evaluate_transaction(
         &self,
         wallet_id: i32,
@@ -215,9 +307,11 @@ impl Engine {
             wallet_id,
             client_id,
             chain: transaction.chain_id,
-            to: to,
+            to,
             value: transaction.value,
             calldata: transaction.input.clone(),
+            max_fee_per_gas: transaction.max_fee_per_gas,
+            max_priority_fee_per_gas: transaction.max_priority_fee_per_gas,
         };

         if let Some(meaning) = EtherTransfer::analyze(&context) {
@@ -17,6 +17,7 @@ use crate::{
 pub mod ether_transfer;
 pub mod token_transfers;

+#[derive(Debug, Clone)]
 pub struct EvalContext {
     // Which wallet is this transaction for
     pub client_id: i32,
@@ -27,6 +28,10 @@ pub struct EvalContext {
     pub to: Address,
     pub value: U256,
     pub calldata: Bytes,
+
+    // Gas pricing (EIP-1559)
+    pub max_fee_per_gas: u128,
+    pub max_priority_fee_per_gas: u128,
 }

 #[derive(Debug, Error, Diagnostic)]
@@ -61,6 +66,7 @@ pub enum EvalViolation {

 pub type DatabaseID = i32;

+#[derive(Debug)]
 pub struct Grant<PolicySettings> {
     pub id: DatabaseID,
     pub shared_grant_id: DatabaseID, // ID of the basic grant for shared-logic checks like rate limits and validity periods
@@ -97,6 +103,11 @@ pub trait Policy: Sized {
         conn: &mut impl AsyncConnection<Backend = Sqlite>,
     ) -> impl Future<Output = QueryResult<Option<Grant<Self::Settings>>>> + Send;

+    // Return all non-revoked grants, eagerly loading policy-specific settings
+    fn find_all_grants(
+        conn: &mut impl AsyncConnection<Backend = Sqlite>,
+    ) -> impl Future<Output = QueryResult<Vec<Grant<Self::Settings>>>> + Send;
+
     // Records, updates or deletes rate limits
     // In other words, records grant-specific things after transaction is executed
     fn record_transaction(
@@ -135,6 +146,7 @@ pub struct VolumeRateLimit {
 #[derive(Clone, Debug, PartialEq, Eq, Hash)]
 pub struct SharedGrantSettings {
     pub wallet_id: i32,
+    pub client_id: i32,
     pub chain: ChainId,

     pub valid_from: Option<DateTime<Utc>>,
@@ -150,6 +162,7 @@ impl SharedGrantSettings {
|
|||||||
fn try_from_model(model: EvmBasicGrant) -> QueryResult<Self> {
|
fn try_from_model(model: EvmBasicGrant) -> QueryResult<Self> {
|
||||||
Ok(Self {
|
Ok(Self {
|
||||||
wallet_id: model.wallet_id,
|
wallet_id: model.wallet_id,
|
||||||
|
client_id: model.client_id,
|
||||||
chain: model.chain_id as u64, // safe because chain_id is stored as i32 but is guaranteed to be a valid ChainId by the API when creating grants
|
chain: model.chain_id as u64, // safe because chain_id is stored as i32 but is guaranteed to be a valid ChainId by the API when creating grants
|
||||||
valid_from: model.valid_from.map(Into::into),
|
valid_from: model.valid_from.map(Into::into),
|
||||||
valid_until: model.valid_until.map(Into::into),
|
valid_until: model.valid_until.map(Into::into),
|
||||||
@@ -187,6 +200,7 @@ impl SharedGrantSettings {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
pub enum SpecificGrant {
|
pub enum SpecificGrant {
|
||||||
EtherTransfer(ether_transfer::Settings),
|
EtherTransfer(ether_transfer::Settings),
|
||||||
TokenTransfer(token_transfers::Settings),
|
TokenTransfer(token_transfers::Settings),
|
||||||
@@ -1,8 +1,9 @@
+use std::collections::HashMap;
 use std::fmt::Display;
 
 use alloy::primitives::{Address, U256};
 use chrono::{DateTime, Duration, Utc};
-use diesel::dsl::insert_into;
+use diesel::dsl::{auto_type, insert_into};
 use diesel::sqlite::Sqlite;
 use diesel::{ExpressionMethods, JoinOnDsl, prelude::*};
 use diesel_async::{AsyncConnection, RunQueryDsl};
@@ -11,7 +12,7 @@ use crate::db::models::{
     EvmBasicGrant, EvmEtherTransferGrant, EvmEtherTransferGrantTarget, EvmEtherTransferLimit,
     NewEvmEtherTransferLimit, SqliteTimestamp,
 };
-use crate::db::schema::{evm_ether_transfer_limit, evm_transaction_log};
+use crate::db::schema::{evm_basic_grant, evm_ether_transfer_limit, evm_transaction_log};
 use crate::evm::policies::{
     Grant, SharedGrantSettings, SpecificGrant, SpecificMeaning, VolumeRateLimit,
 };
@@ -23,6 +24,13 @@ use crate::{
     evm::{policies::Policy, utils},
 };
 
+#[auto_type]
+fn grant_join() -> _ {
+    evm_ether_transfer_grant::table.inner_join(
+        evm_basic_grant::table.on(evm_ether_transfer_grant::basic_grant_id.eq(evm_basic_grant::id)),
+    )
+}
+
 use super::{DatabaseID, EvalContext, EvalViolation};
 
 // Plain ether transfer
@@ -33,29 +41,25 @@ pub struct Meaning {
 }
 impl Display for Meaning {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(
-            f,
-            "Ether transfer of {} to {}",
-            self.value,
-            self.to.to_string()
-        )
+        write!(f, "Ether transfer of {} to {}", self.value, self.to)
     }
 }
-impl Into<SpecificMeaning> for Meaning {
-    fn into(self) -> SpecificMeaning {
-        SpecificMeaning::EtherTransfer(self)
+impl From<Meaning> for SpecificMeaning {
+    fn from(val: Meaning) -> SpecificMeaning {
+        SpecificMeaning::EtherTransfer(val)
     }
 }
 
 // A grant for ether transfers, which can be scoped to specific target addresses and volume limits
+#[derive(Debug, Clone)]
 pub struct Settings {
-    target: Vec<Address>,
-    limit: VolumeRateLimit,
+    pub target: Vec<Address>,
+    pub limit: VolumeRateLimit,
 }
 
-impl Into<SpecificGrant> for Settings {
-    fn into(self) -> SpecificGrant {
-        SpecificGrant::EtherTransfer(self)
+impl From<Settings> for SpecificGrant {
+    fn from(val: Settings) -> SpecificGrant {
+        SpecificGrant::EtherTransfer(val)
     }
 }
 
@@ -183,27 +187,21 @@ impl Policy for EtherTransfer {
         context: &EvalContext,
         conn: &mut impl AsyncConnection<Backend = Sqlite>,
     ) -> diesel::result::QueryResult<Option<Grant<Self::Settings>>> {
-        use crate::db::schema::{
-            evm_basic_grant, evm_ether_transfer_grant, evm_ether_transfer_grant_target,
-        };
-
         let target_bytes = context.to.to_vec();
 
         // Find a grant where:
         // 1. The basic grant's wallet_id and client_id match the context
         // 2. Any of the grant's targets match the context's `to` address
         let grant: Option<(EvmBasicGrant, EvmEtherTransferGrant)> = evm_ether_transfer_grant::table
-            .inner_join(
-                evm_basic_grant::table
-                    .on(evm_ether_transfer_grant::basic_grant_id.eq(evm_basic_grant::id)),
-            )
-            .inner_join(
-                evm_ether_transfer_grant_target::table
-                    .on(evm_ether_transfer_grant::id.eq(evm_ether_transfer_grant_target::grant_id)),
-            )
-            .filter(evm_basic_grant::wallet_id.eq(context.wallet_id))
-            .filter(evm_basic_grant::client_id.eq(context.client_id))
-            .filter(evm_ether_transfer_grant_target::address.eq(&target_bytes))
+            .inner_join(evm_basic_grant::table)
+            .inner_join(evm_ether_transfer_grant_target::table)
+            .filter(
+                evm_basic_grant::wallet_id
+                    .eq(context.wallet_id)
+                    .and(evm_basic_grant::client_id.eq(context.client_id))
+                    .and(evm_basic_grant::revoked_at.is_null())
+                    .and(evm_ether_transfer_grant_target::address.eq(&target_bytes)),
+            )
             .select((
                 EvmBasicGrant::as_select(),
                 EvmEtherTransferGrant::as_select(),
@@ -266,4 +264,85 @@ impl Policy for EtherTransfer {
 
         Ok(())
     }
+
+    async fn find_all_grants(
+        conn: &mut impl AsyncConnection<Backend = Sqlite>,
+    ) -> QueryResult<Vec<Grant<Self::Settings>>> {
+        let grants: Vec<(EvmBasicGrant, EvmEtherTransferGrant)> = grant_join()
+            .filter(evm_basic_grant::revoked_at.is_null())
+            .select((
+                EvmBasicGrant::as_select(),
+                EvmEtherTransferGrant::as_select(),
+            ))
+            .load(conn)
+            .await?;
+
+        if grants.is_empty() {
+            return Ok(Vec::new());
+        }
+
+        let grant_ids: Vec<i32> = grants.iter().map(|(_, g)| g.id).collect();
+        let limit_ids: Vec<i32> = grants.iter().map(|(_, g)| g.limit_id).collect();
+
+        let all_targets: Vec<EvmEtherTransferGrantTarget> = evm_ether_transfer_grant_target::table
+            .filter(evm_ether_transfer_grant_target::grant_id.eq_any(&grant_ids))
+            .select(EvmEtherTransferGrantTarget::as_select())
+            .load(conn)
+            .await?;
+
+        let all_limits: Vec<EvmEtherTransferLimit> = evm_ether_transfer_limit::table
+            .filter(evm_ether_transfer_limit::id.eq_any(&limit_ids))
+            .select(EvmEtherTransferLimit::as_select())
+            .load(conn)
+            .await?;
+
+        let mut targets_by_grant: HashMap<i32, Vec<EvmEtherTransferGrantTarget>> = HashMap::new();
+        for target in all_targets {
+            targets_by_grant
+                .entry(target.grant_id)
+                .or_default()
+                .push(target);
+        }
+
+        let limits_by_id: HashMap<i32, EvmEtherTransferLimit> =
+            all_limits.into_iter().map(|l| (l.id, l)).collect();
+
+        grants
+            .into_iter()
+            .map(|(basic, specific)| {
+                let targets: Vec<Address> = targets_by_grant
+                    .get(&specific.id)
+                    .map(|v| v.as_slice())
+                    .unwrap_or_default()
+                    .iter()
+                    .filter_map(|t| {
+                        let arr: [u8; 20] = t.address.clone().try_into().ok()?;
+                        Some(Address::from(arr))
+                    })
+                    .collect();
+
+                let limit = limits_by_id
+                    .get(&specific.limit_id)
+                    .ok_or(diesel::result::Error::NotFound)?;
+
+                Ok(Grant {
+                    id: specific.id,
+                    shared_grant_id: specific.basic_grant_id,
+                    shared: SharedGrantSettings::try_from_model(basic)?,
+                    settings: Settings {
+                        target: targets,
+                        limit: VolumeRateLimit {
+                            max_volume: utils::try_bytes_to_u256(&limit.max_volume).map_err(
+                                |e| diesel::result::Error::DeserializationError(Box::new(e)),
+                            )?,
+                            window: Duration::seconds(limit.window_secs as i64),
+                        },
+                    },
+                })
+            })
+            .collect()
+    }
 }
+
+#[cfg(test)]
+mod tests;
@@ -0,0 +1,386 @@
+use alloy::primitives::{Address, Bytes, U256, address};
+use chrono::{Duration, Utc};
+use diesel::{SelectableHelper, insert_into};
+use diesel_async::RunQueryDsl;
+
+use crate::db::{
+    self, DatabaseConnection,
+    models::{EvmBasicGrant, NewEvmBasicGrant, NewEvmTransactionLog, SqliteTimestamp},
+    schema::{evm_basic_grant, evm_transaction_log},
+};
+use crate::evm::{
+    policies::{EvalContext, EvalViolation, Grant, Policy, SharedGrantSettings, VolumeRateLimit},
+    utils,
+};
+
+use super::{EtherTransfer, Settings};
+
+const WALLET_ID: i32 = 1;
+const CLIENT_ID: i32 = 2;
+const CHAIN_ID: u64 = 1;
+
+const ALLOWED: Address = address!("1111111111111111111111111111111111111111");
+const OTHER: Address = address!("2222222222222222222222222222222222222222");
+
+fn ctx(to: Address, value: U256) -> EvalContext {
+    EvalContext {
+        wallet_id: WALLET_ID,
+        client_id: CLIENT_ID,
+        chain: CHAIN_ID,
+        to,
+        value,
+        calldata: Bytes::new(),
+        max_fee_per_gas: 0,
+        max_priority_fee_per_gas: 0,
+    }
+}
+
+async fn insert_basic(conn: &mut DatabaseConnection, revoked: bool) -> EvmBasicGrant {
+    insert_into(evm_basic_grant::table)
+        .values(NewEvmBasicGrant {
+            wallet_id: WALLET_ID,
+            client_id: CLIENT_ID,
+            chain_id: CHAIN_ID as i32,
+            valid_from: None,
+            valid_until: None,
+            max_gas_fee_per_gas: None,
+            max_priority_fee_per_gas: None,
+            rate_limit_count: None,
+            rate_limit_window_secs: None,
+            revoked_at: revoked.then(|| SqliteTimestamp(Utc::now())),
+        })
+        .returning(EvmBasicGrant::as_select())
+        .get_result(conn)
+        .await
+        .unwrap()
+}
+
+fn make_settings(targets: Vec<Address>, max_volume: u64) -> Settings {
+    Settings {
+        target: targets,
+        limit: VolumeRateLimit {
+            max_volume: U256::from(max_volume),
+            window: Duration::hours(1),
+        },
+    }
+}
+
+fn shared() -> SharedGrantSettings {
+    SharedGrantSettings {
+        wallet_id: WALLET_ID,
+        chain: CHAIN_ID,
+        valid_from: None,
+        valid_until: None,
+        max_gas_fee_per_gas: None,
+        max_priority_fee_per_gas: None,
+        rate_limit: None,
+        client_id: CLIENT_ID,
+    }
+}
+
+// ── analyze ─────────────────────────────────────────────────────────────
+
+#[test]
+fn analyze_matches_empty_calldata() {
+    let m = EtherTransfer::analyze(&ctx(ALLOWED, U256::from(1_000u64))).unwrap();
+    assert_eq!(m.to, ALLOWED);
+    assert_eq!(m.value, U256::from(1_000u64));
+}
+
+#[test]
+fn analyze_rejects_nonempty_calldata() {
+    let context = EvalContext {
+        calldata: Bytes::from(vec![0xde, 0xad, 0xbe, 0xef]),
+        ..ctx(ALLOWED, U256::from(1u64))
+    };
+    assert!(EtherTransfer::analyze(&context).is_none());
+}
+
+// ── evaluate ────────────────────────────────────────────────────────────
+
+#[tokio::test]
+async fn evaluate_passes_for_allowed_target() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let grant = Grant {
+        id: 999,
+        shared_grant_id: 999,
+        shared: shared(),
+        settings: make_settings(vec![ALLOWED], 1_000_000),
+    };
+    let context = ctx(ALLOWED, U256::from(100u64));
+    let m = EtherTransfer::analyze(&context).unwrap();
+    let v = EtherTransfer::evaluate(&context, &m, &grant, &mut *conn)
+        .await
+        .unwrap();
+    assert!(v.is_empty());
+}
+
+#[tokio::test]
+async fn evaluate_rejects_disallowed_target() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let grant = Grant {
+        id: 999,
+        shared_grant_id: 999,
+        shared: shared(),
+        settings: make_settings(vec![ALLOWED], 1_000_000),
+    };
+    let context = ctx(OTHER, U256::from(100u64));
+    let m = EtherTransfer::analyze(&context).unwrap();
+    let v = EtherTransfer::evaluate(&context, &m, &grant, &mut *conn)
+        .await
+        .unwrap();
+    assert!(
+        v.iter()
+            .any(|e| matches!(e, EvalViolation::InvalidTarget { .. }))
+    );
+}
+
+#[tokio::test]
+async fn evaluate_passes_when_volume_within_limit() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let basic = insert_basic(&mut conn, false).await;
+    let settings = make_settings(vec![ALLOWED], 1_000);
+    let grant_id = EtherTransfer::create_grant(&basic, &settings, &mut *conn)
+        .await
+        .unwrap();
+
+    insert_into(evm_transaction_log::table)
+        .values(NewEvmTransactionLog {
+            grant_id,
+            client_id: CLIENT_ID,
+            wallet_id: WALLET_ID,
+            chain_id: CHAIN_ID as i32,
+            eth_value: utils::u256_to_bytes(U256::from(500u64)).to_vec(),
+            signed_at: SqliteTimestamp(Utc::now()),
+        })
+        .execute(&mut *conn)
+        .await
+        .unwrap();
+
+    let grant = Grant {
+        id: grant_id,
+        shared_grant_id: basic.id,
+        shared: shared(),
+        settings,
+    };
+    let context = ctx(ALLOWED, U256::from(100u64));
+    let m = EtherTransfer::analyze(&context).unwrap();
+    let v = EtherTransfer::evaluate(&context, &m, &grant, &mut *conn)
+        .await
+        .unwrap();
+    assert!(
+        !v.iter()
+            .any(|e| matches!(e, EvalViolation::VolumetricLimitExceeded))
+    );
+}
+
+#[tokio::test]
+async fn evaluate_rejects_volume_over_limit() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let basic = insert_basic(&mut conn, false).await;
+    let settings = make_settings(vec![ALLOWED], 1_000);
+    let grant_id = EtherTransfer::create_grant(&basic, &settings, &mut *conn)
+        .await
+        .unwrap();
+
+    insert_into(evm_transaction_log::table)
+        .values(NewEvmTransactionLog {
+            grant_id,
+            client_id: CLIENT_ID,
+            wallet_id: WALLET_ID,
+            chain_id: CHAIN_ID as i32,
+            eth_value: utils::u256_to_bytes(U256::from(1_001u64)).to_vec(),
+            signed_at: SqliteTimestamp(Utc::now()),
+        })
+        .execute(&mut *conn)
+        .await
+        .unwrap();
+
+    let grant = Grant {
+        id: grant_id,
+        shared_grant_id: basic.id,
+        shared: shared(),
+        settings,
+    };
+    let context = ctx(ALLOWED, U256::from(100u64));
+    let m = EtherTransfer::analyze(&context).unwrap();
+    let v = EtherTransfer::evaluate(&context, &m, &grant, &mut *conn)
+        .await
+        .unwrap();
+    assert!(
+        v.iter()
+            .any(|e| matches!(e, EvalViolation::VolumetricLimitExceeded))
+    );
+}
+
+#[tokio::test]
+async fn evaluate_passes_at_exactly_volume_limit() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let basic = insert_basic(&mut conn, false).await;
+    let settings = make_settings(vec![ALLOWED], 1_000);
+    let grant_id = EtherTransfer::create_grant(&basic, &settings, &mut *conn)
+        .await
+        .unwrap();
+
+    // Exactly at the limit — the check is `>`, so this should not violate
+    insert_into(evm_transaction_log::table)
+        .values(NewEvmTransactionLog {
+            grant_id,
+            client_id: CLIENT_ID,
+            wallet_id: WALLET_ID,
+            chain_id: CHAIN_ID as i32,
+            eth_value: utils::u256_to_bytes(U256::from(1_000u64)).to_vec(),
+            signed_at: SqliteTimestamp(Utc::now()),
+        })
+        .execute(&mut *conn)
+        .await
+        .unwrap();
+
+    let grant = Grant {
+        id: grant_id,
+        shared_grant_id: basic.id,
+        shared: shared(),
+        settings,
+    };
+    let context = ctx(ALLOWED, U256::from(100u64));
+    let m = EtherTransfer::analyze(&context).unwrap();
+    let v = EtherTransfer::evaluate(&context, &m, &grant, &mut *conn)
+        .await
+        .unwrap();
+    assert!(
+        !v.iter()
+            .any(|e| matches!(e, EvalViolation::VolumetricLimitExceeded))
+    );
+}
+
+// ── try_find_grant ───────────────────────────────────────────────────────
+
+#[tokio::test]
+async fn try_find_grant_roundtrip() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let basic = insert_basic(&mut conn, false).await;
+    let settings = make_settings(vec![ALLOWED], 1_000_000);
+    EtherTransfer::create_grant(&basic, &settings, &mut *conn)
+        .await
+        .unwrap();
+
+    let found = EtherTransfer::try_find_grant(&ctx(ALLOWED, U256::from(1u64)), &mut *conn)
+        .await
+        .unwrap();
+
+    assert!(found.is_some());
+    let g = found.unwrap();
+    assert_eq!(g.settings.target, vec![ALLOWED]);
+    assert_eq!(g.settings.limit.max_volume, U256::from(1_000_000u64));
+}
+
+#[tokio::test]
+async fn try_find_grant_revoked_returns_none() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let basic = insert_basic(&mut conn, true).await;
+    let settings = make_settings(vec![ALLOWED], 1_000_000);
+    EtherTransfer::create_grant(&basic, &settings, &mut *conn)
+        .await
+        .unwrap();
+
+    let found = EtherTransfer::try_find_grant(&ctx(ALLOWED, U256::from(1u64)), &mut *conn)
+        .await
+        .unwrap();
+    assert!(found.is_none());
+}
+
+#[tokio::test]
+async fn try_find_grant_wrong_target_returns_none() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let basic = insert_basic(&mut conn, false).await;
+    let settings = make_settings(vec![ALLOWED], 1_000_000);
+    EtherTransfer::create_grant(&basic, &settings, &mut *conn)
+        .await
+        .unwrap();
+
+    let found = EtherTransfer::try_find_grant(&ctx(OTHER, U256::from(1u64)), &mut *conn)
+        .await
+        .unwrap();
+    assert!(found.is_none());
+}
+
+// ── find_all_grants ──────────────────────────────────────────────────────
+
+#[tokio::test]
+async fn find_all_grants_empty_db() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+    let all = EtherTransfer::find_all_grants(&mut *conn).await.unwrap();
+    assert!(all.is_empty());
+}
+
+#[tokio::test]
+async fn find_all_grants_excludes_revoked() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let settings = make_settings(vec![ALLOWED], 1_000_000);
+    let active = insert_basic(&mut conn, false).await;
+    EtherTransfer::create_grant(&active, &settings, &mut *conn)
+        .await
+        .unwrap();
+    let revoked = insert_basic(&mut conn, true).await;
+    EtherTransfer::create_grant(&revoked, &settings, &mut *conn)
+        .await
+        .unwrap();
+
+    let all = EtherTransfer::find_all_grants(&mut *conn).await.unwrap();
+    assert_eq!(all.len(), 1);
+    assert_eq!(all[0].settings.target, vec![ALLOWED]);
+}
+
+#[tokio::test]
+async fn find_all_grants_multiple_targets() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let basic = insert_basic(&mut conn, false).await;
+    let settings = make_settings(vec![ALLOWED, OTHER], 1_000_000);
+    EtherTransfer::create_grant(&basic, &settings, &mut *conn)
+        .await
+        .unwrap();
+
+    let all = EtherTransfer::find_all_grants(&mut *conn).await.unwrap();
+    assert_eq!(all.len(), 1);
+    assert_eq!(all[0].settings.target.len(), 2);
+    assert_eq!(all[0].settings.limit.max_volume, U256::from(1_000_000u64));
+}
+
+#[tokio::test]
+async fn find_all_grants_multiple_grants() {
+    let db = db::create_test_pool().await;
+    let mut conn = db.get().await.unwrap();
+
+    let basic1 = insert_basic(&mut conn, false).await;
+    EtherTransfer::create_grant(&basic1, &make_settings(vec![ALLOWED], 500), &mut *conn)
+        .await
+        .unwrap();
+    let basic2 = insert_basic(&mut conn, false).await;
+    EtherTransfer::create_grant(&basic2, &make_settings(vec![OTHER], 1_000), &mut *conn)
+        .await
+        .unwrap();
+
+    let all = EtherTransfer::find_all_grants(&mut *conn).await.unwrap();
+    assert_eq!(all.len(), 2);
+}
@@ -1,10 +1,12 @@
|
|||||||
|
use std::collections::HashMap;
|
||||||
|
|
||||||
use alloy::{
|
use alloy::{
|
||||||
primitives::{Address, U256},
|
primitives::{Address, U256},
|
||||||
sol_types::SolCall,
|
sol_types::SolCall,
|
||||||
};
|
};
|
||||||
use arbiter_tokens_registry::evm::nonfungible::{self, TokenInfo};
|
use arbiter_tokens_registry::evm::nonfungible::{self, TokenInfo};
|
||||||
use chrono::{DateTime, Duration, Utc};
|
use chrono::{DateTime, Duration, Utc};
|
||||||
use diesel::dsl::insert_into;
|
use diesel::dsl::{auto_type, insert_into};
|
||||||
use diesel::sqlite::Sqlite;
|
use diesel::sqlite::Sqlite;
|
||||||
use diesel::{ExpressionMethods, prelude::*};
|
use diesel::{ExpressionMethods, prelude::*};
|
||||||
use diesel_async::{AsyncConnection, RunQueryDsl};
|
use diesel_async::{AsyncConnection, RunQueryDsl};
|
||||||
@@ -14,16 +16,26 @@ use crate::db::models::{
|
|||||||
NewEvmTokenTransferLog, NewEvmTokenTransferVolumeLimit, SqliteTimestamp,
|
NewEvmTokenTransferLog, NewEvmTokenTransferVolumeLimit, SqliteTimestamp,
|
||||||
};
|
};
|
||||||
use crate::db::schema::{
|
use crate::db::schema::{
|
||||||
evm_token_transfer_grant, evm_token_transfer_log, evm_token_transfer_volume_limit,
|
evm_basic_grant, evm_token_transfer_grant, evm_token_transfer_log,
|
||||||
|
evm_token_transfer_volume_limit,
|
||||||
};
|
};
|
||||||
use crate::evm::{
|
use crate::evm::{
|
||||||
abi::IERC20::transferCall,
|
abi::IERC20::transferCall,
|
||||||
policies::{Grant, Policy, SharedGrantSettings, SpecificGrant, SpecificMeaning, VolumeRateLimit},
|
policies::{
|
||||||
|
Grant, Policy, SharedGrantSettings, SpecificGrant, SpecificMeaning, VolumeRateLimit,
|
||||||
|
},
|
||||||
utils,
|
utils,
|
||||||
};
|
};
|
||||||
|
|
||||||
use super::{DatabaseID, EvalContext, EvalViolation};
|
use super::{DatabaseID, EvalContext, EvalViolation};
|
||||||
|
|
||||||
|
#[auto_type]
|
||||||
|
fn grant_join() -> _ {
|
||||||
|
evm_token_transfer_grant::table.inner_join(
|
||||||
|
evm_basic_grant::table.on(evm_token_transfer_grant::basic_grant_id.eq(evm_basic_grant::id)),
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
|
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
|
||||||
pub struct Meaning {
|
pub struct Meaning {
|
||||||
token: &'static TokenInfo,
|
token: &'static TokenInfo,
|
||||||
@@ -39,21 +51,22 @@ impl std::fmt::Display for Meaning {
|
|||||||
)
|
)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
impl Into<SpecificMeaning> for Meaning {
|
impl From<Meaning> for SpecificMeaning {
|
||||||
fn into(self) -> SpecificMeaning {
|
fn from(val: Meaning) -> SpecificMeaning {
|
||||||
SpecificMeaning::TokenTransfer(self)
|
SpecificMeaning::TokenTransfer(val)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// A grant for token transfers, which can be scoped to specific target addresses and volume limits
|
// A grant for token transfers, which can be scoped to specific target addresses and volume limits
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
pub struct Settings {
|
pub struct Settings {
|
||||||
token_contract: Address,
|
pub token_contract: Address,
|
||||||
target: Option<Address>,
|
pub target: Option<Address>,
|
||||||
volume_limits: Vec<VolumeRateLimit>,
|
pub volume_limits: Vec<VolumeRateLimit>,
|
||||||
}
|
}
|
||||||
impl Into<SpecificGrant> for Settings {
|
impl From<Settings> for SpecificGrant {
|
||||||
fn into(self) -> SpecificGrant {
|
fn from(val: Settings) -> SpecificGrant {
|
||||||
SpecificGrant::TokenTransfer(self)
|
SpecificGrant::TokenTransfer(val)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -144,11 +157,11 @@ impl Policy for TokenTransfer {
|
|||||||
return Ok(violations);
|
return Ok(violations);
|
||||||
}
|
}
|
||||||
|
|
||||||
if let Some(allowed) = grant.settings.target {
|
if let Some(allowed) = grant.settings.target
|
||||||
if allowed != meaning.to {
|
&& allowed != meaning.to
|
||||||
|
{
|
||||||
violations.push(EvalViolation::InvalidTarget { target: meaning.to });
|
violations.push(EvalViolation::InvalidTarget { target: meaning.to });
|
||||||
}
|
}
|
||||||
}
|
|
||||||
|
|
||||||
let rate_violations = check_volume_rate_limits(grant, db).await?;
|
let rate_violations = check_volume_rate_limits(grant, db).await?;
|
||||||
violations.extend(rate_violations);
|
violations.extend(rate_violations);
|
||||||
@@ -192,15 +205,10 @@ impl Policy for TokenTransfer {
|
|||||||
context: &EvalContext,
|
context: &EvalContext,
|
||||||
conn: &mut impl AsyncConnection<Backend = Sqlite>,
|
conn: &mut impl AsyncConnection<Backend = Sqlite>,
|
||||||
) -> QueryResult<Option<Grant<Self::Settings>>> {
|
) -> QueryResult<Option<Grant<Self::Settings>>> {
|
||||||
use crate::db::schema::{evm_basic_grant, evm_token_transfer_grant};
|
|
||||||
|
|
||||||
let token_contract_bytes = context.to.to_vec();
|
let token_contract_bytes = context.to.to_vec();
|
||||||
|
|
||||||
let grant: Option<(EvmBasicGrant, EvmTokenTransferGrant)> = evm_token_transfer_grant::table
|
let grant: Option<(EvmBasicGrant, EvmTokenTransferGrant)> = grant_join()
|
||||||
.inner_join(
|
.filter(evm_basic_grant::revoked_at.is_null())
|
||||||
evm_basic_grant::table
|
|
||||||
.on(evm_token_transfer_grant::basic_grant_id.eq(evm_basic_grant::id)),
|
|
||||||
)
|
|
||||||
.filter(evm_basic_grant::wallet_id.eq(context.wallet_id))
|
.filter(evm_basic_grant::wallet_id.eq(context.wallet_id))
|
||||||
.filter(evm_basic_grant::client_id.eq(context.client_id))
|
.filter(evm_basic_grant::client_id.eq(context.client_id))
|
||||||
.filter(evm_token_transfer_grant::token_contract.eq(&token_contract_bytes))
|
.filter(evm_token_transfer_grant::token_contract.eq(&token_contract_bytes))
|
||||||
@@ -288,4 +296,91 @@ impl Policy for TokenTransfer {
        Ok(())
    }

    async fn find_all_grants(
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> QueryResult<Vec<Grant<Self::Settings>>> {
        let grants: Vec<(EvmBasicGrant, EvmTokenTransferGrant)> = grant_join()
            .filter(evm_basic_grant::revoked_at.is_null())
            .select((
                EvmBasicGrant::as_select(),
                EvmTokenTransferGrant::as_select(),
            ))
            .load(conn)
            .await?;

        if grants.is_empty() {
            return Ok(Vec::new());
        }

        let grant_ids: Vec<i32> = grants.iter().map(|(_, g)| g.id).collect();

        let all_volume_limits: Vec<EvmTokenTransferVolumeLimit> =
            evm_token_transfer_volume_limit::table
                .filter(evm_token_transfer_volume_limit::grant_id.eq_any(&grant_ids))
                .select(EvmTokenTransferVolumeLimit::as_select())
                .load(conn)
                .await?;

        let mut limits_by_grant: HashMap<i32, Vec<EvmTokenTransferVolumeLimit>> = HashMap::new();
        for limit in all_volume_limits {
            limits_by_grant
                .entry(limit.grant_id)
                .or_default()
                .push(limit);
        }

        grants
            .into_iter()
            .map(|(basic, specific)| {
                let volume_limits: Vec<VolumeRateLimit> = limits_by_grant
                    .get(&specific.id)
                    .map(|v| v.as_slice())
                    .unwrap_or_default()
                    .iter()
                    .map(|row| {
                        Ok(VolumeRateLimit {
                            max_volume: utils::try_bytes_to_u256(&row.max_volume).map_err(|e| {
                                diesel::result::Error::DeserializationError(Box::new(e))
                            })?,
                            window: Duration::seconds(row.window_secs as i64),
                        })
                    })
                    .collect::<QueryResult<Vec<_>>>()?;

                let token_contract: [u8; 20] =
                    specific.token_contract.clone().try_into().map_err(|_| {
                        diesel::result::Error::DeserializationError(
                            "Invalid token contract address length".into(),
                        )
                    })?;

                let target: Option<Address> = match &specific.receiver {
                    None => None,
                    Some(bytes) => {
                        let arr: [u8; 20] = bytes.clone().try_into().map_err(|_| {
                            diesel::result::Error::DeserializationError(
                                "Invalid receiver address length".into(),
                            )
                        })?;
                        Some(Address::from(arr))
                    }
                };

                Ok(Grant {
                    id: specific.id,
                    shared_grant_id: specific.basic_grant_id,
                    shared: SharedGrantSettings::try_from_model(basic)?,
                    settings: Settings {
                        token_contract: Address::from(token_contract),
                        target,
                        volume_limits,
                    },
                })
            })
            .collect()
    }
}

#[cfg(test)]
mod tests;
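The `find_all_grants` hunk avoids an N+1 query by loading all volume-limit rows in one `eq_any` query and grouping them by parent id with a `HashMap`. A minimal stdlib-only sketch of that grouping step (the `VolumeLimit` struct here is a hypothetical stand-in for the Diesel model, not the real type):

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the EvmTokenTransferVolumeLimit model.
#[derive(Debug, Clone)]
struct VolumeLimit {
    grant_id: i32,
    max_volume: u64,
}

// Group child rows by parent id in one pass, mirroring `limits_by_grant` above.
fn group_by_grant(rows: Vec<VolumeLimit>) -> HashMap<i32, Vec<VolumeLimit>> {
    let mut map: HashMap<i32, Vec<VolumeLimit>> = HashMap::new();
    for row in rows {
        map.entry(row.grant_id).or_default().push(row);
    }
    map
}

fn main() {
    let rows = vec![
        VolumeLimit { grant_id: 1, max_volume: 100 },
        VolumeLimit { grant_id: 2, max_volume: 200 },
        VolumeLimit { grant_id: 1, max_volume: 300 },
    ];
    let grouped = group_by_grant(rows);
    assert_eq!(grouped[&1].len(), 2);
    assert_eq!(grouped[&2].len(), 1);
    println!("{}", grouped.len()); // prints 2 (distinct grant ids)
}
```

Each grant then looks up its own limits with a cheap `HashMap::get`, so the number of database round-trips stays constant regardless of how many grants exist.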
@@ -0,0 +1,463 @@
use alloy::primitives::{Address, Bytes, U256, address};
use alloy::sol_types::SolCall;
use chrono::{Duration, Utc};
use diesel::{SelectableHelper, insert_into};
use diesel_async::RunQueryDsl;

use crate::db::{
    self, DatabaseConnection,
    models::{EvmBasicGrant, NewEvmBasicGrant, SqliteTimestamp},
    schema::evm_basic_grant,
};
use crate::evm::{
    abi::IERC20::transferCall,
    policies::{EvalContext, EvalViolation, Grant, Policy, SharedGrantSettings, VolumeRateLimit},
    utils,
};

use super::{Settings, TokenTransfer};

// DAI on Ethereum mainnet — present in the static token registry
const CHAIN_ID: u64 = 1;
const DAI: Address = address!("6B175474E89094C44Da98b954EedeAC495271d0F");

const WALLET_ID: i32 = 1;
const CLIENT_ID: i32 = 2;

const RECIPIENT: Address = address!("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa");
const OTHER: Address = address!("bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb");
const UNKNOWN_TOKEN: Address = address!("cccccccccccccccccccccccccccccccccccccccc");

/// Encode `transfer(to, value)` raw params (no 4-byte selector).
/// `abi_decode_raw_validate` expects exactly this format.
fn transfer_calldata(to: Address, value: U256) -> Bytes {
    let mut raw = Vec::new();
    transferCall { to, value }.abi_encode_raw(&mut raw);
    Bytes::from(raw)
}

fn ctx(to: Address, calldata: Bytes) -> EvalContext {
    EvalContext {
        wallet_id: WALLET_ID,
        client_id: CLIENT_ID,
        chain: CHAIN_ID,
        to,
        value: U256::ZERO,
        calldata,
        max_fee_per_gas: 0,
        max_priority_fee_per_gas: 0,
    }
}

async fn insert_basic(conn: &mut DatabaseConnection, revoked: bool) -> EvmBasicGrant {
    insert_into(evm_basic_grant::table)
        .values(NewEvmBasicGrant {
            wallet_id: WALLET_ID,
            client_id: CLIENT_ID,
            chain_id: CHAIN_ID as i32,
            valid_from: None,
            valid_until: None,
            max_gas_fee_per_gas: None,
            max_priority_fee_per_gas: None,
            rate_limit_count: None,
            rate_limit_window_secs: None,
            revoked_at: revoked.then(|| SqliteTimestamp(Utc::now())),
        })
        .returning(EvmBasicGrant::as_select())
        .get_result(conn)
        .await
        .unwrap()
}

fn make_settings(target: Option<Address>, max_volume: Option<u64>) -> Settings {
    Settings {
        token_contract: DAI,
        target,
        volume_limits: max_volume
            .map(|v| {
                vec![VolumeRateLimit {
                    max_volume: U256::from(v),
                    window: Duration::hours(1),
                }]
            })
            .unwrap_or_default(),
    }
}

fn shared() -> SharedGrantSettings {
    SharedGrantSettings {
        wallet_id: WALLET_ID,
        chain: CHAIN_ID,
        valid_from: None,
        valid_until: None,
        max_gas_fee_per_gas: None,
        max_priority_fee_per_gas: None,
        rate_limit: None,
        client_id: CLIENT_ID,
    }
}

// ── analyze ─────────────────────────────────────────────────────────────

#[test]
fn analyze_known_token_valid_calldata() {
    let calldata = transfer_calldata(RECIPIENT, U256::from(100u64));
    let m = TokenTransfer::analyze(&ctx(DAI, calldata)).unwrap();
    assert_eq!(m.to, RECIPIENT);
    assert_eq!(m.value, U256::from(100u64));
}

#[test]
fn analyze_unknown_token_returns_none() {
    let calldata = transfer_calldata(RECIPIENT, U256::from(100u64));
    assert!(TokenTransfer::analyze(&ctx(UNKNOWN_TOKEN, calldata)).is_none());
}

#[test]
fn analyze_invalid_calldata_returns_none() {
    let calldata = Bytes::from(vec![0xde, 0xad, 0xbe, 0xef]);
    assert!(TokenTransfer::analyze(&ctx(DAI, calldata)).is_none());
}

#[test]
fn analyze_empty_calldata_returns_none() {
    assert!(TokenTransfer::analyze(&ctx(DAI, Bytes::new())).is_none());
}

// ── evaluate ────────────────────────────────────────────────────────────

#[tokio::test]
async fn evaluate_rejects_nonzero_eth_value() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let grant = Grant {
        id: 999,
        shared_grant_id: 999,
        shared: shared(),
        settings: make_settings(None, None),
    };
    let calldata = transfer_calldata(RECIPIENT, U256::from(100u64));
    let mut context = ctx(DAI, calldata);
    context.value = U256::from(1u64); // ETH attached to an ERC-20 call

    let m = TokenTransfer::analyze(&EvalContext {
        value: U256::ZERO,
        ..context.clone()
    })
    .unwrap();
    let v = TokenTransfer::evaluate(&context, &m, &grant, &mut *conn)
        .await
        .unwrap();
    assert!(
        v.iter()
            .any(|e| matches!(e, EvalViolation::InvalidTransactionType))
    );
}

#[tokio::test]
async fn evaluate_passes_any_recipient_when_no_restriction() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let grant = Grant {
        id: 999,
        shared_grant_id: 999,
        shared: shared(),
        settings: make_settings(None, None),
    };
    let calldata = transfer_calldata(RECIPIENT, U256::from(100u64));
    let context = ctx(DAI, calldata);
    let m = TokenTransfer::analyze(&context).unwrap();
    let v = TokenTransfer::evaluate(&context, &m, &grant, &mut *conn)
        .await
        .unwrap();
    assert!(v.is_empty());
}

#[tokio::test]
async fn evaluate_passes_matching_restricted_recipient() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let grant = Grant {
        id: 999,
        shared_grant_id: 999,
        shared: shared(),
        settings: make_settings(Some(RECIPIENT), None),
    };
    let calldata = transfer_calldata(RECIPIENT, U256::from(100u64));
    let context = ctx(DAI, calldata);
    let m = TokenTransfer::analyze(&context).unwrap();
    let v = TokenTransfer::evaluate(&context, &m, &grant, &mut *conn)
        .await
        .unwrap();
    assert!(v.is_empty());
}

#[tokio::test]
async fn evaluate_rejects_wrong_restricted_recipient() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let grant = Grant {
        id: 999,
        shared_grant_id: 999,
        shared: shared(),
        settings: make_settings(Some(RECIPIENT), None),
    };
    let calldata = transfer_calldata(OTHER, U256::from(100u64));
    let context = ctx(DAI, calldata);
    let m = TokenTransfer::analyze(&context).unwrap();
    let v = TokenTransfer::evaluate(&context, &m, &grant, &mut *conn)
        .await
        .unwrap();
    assert!(
        v.iter()
            .any(|e| matches!(e, EvalViolation::InvalidTarget { .. }))
    );
}

#[tokio::test]
async fn evaluate_passes_volume_within_limit() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let basic = insert_basic(&mut conn, false).await;
    let settings = make_settings(None, Some(1_000));
    let grant_id = TokenTransfer::create_grant(&basic, &settings, &mut *conn)
        .await
        .unwrap();

    // Record a past transfer of 500 (within 1000 limit)
    use crate::db::{models::NewEvmTokenTransferLog, schema::evm_token_transfer_log};
    insert_into(evm_token_transfer_log::table)
        .values(NewEvmTokenTransferLog {
            grant_id,
            log_id: 0,
            chain_id: CHAIN_ID as i32,
            token_contract: DAI.to_vec(),
            recipient_address: RECIPIENT.to_vec(),
            value: utils::u256_to_bytes(U256::from(500u64)).to_vec(),
        })
        .execute(&mut *conn)
        .await
        .unwrap();

    let grant = Grant {
        id: grant_id,
        shared_grant_id: basic.id,
        shared: shared(),
        settings,
    };
    let calldata = transfer_calldata(RECIPIENT, U256::from(100u64));
    let context = ctx(DAI, calldata);
    let m = TokenTransfer::analyze(&context).unwrap();
    let v = TokenTransfer::evaluate(&context, &m, &grant, &mut *conn)
        .await
        .unwrap();
    assert!(
        !v.iter()
            .any(|e| matches!(e, EvalViolation::VolumetricLimitExceeded))
    );
}

#[tokio::test]
async fn evaluate_rejects_volume_over_limit() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let basic = insert_basic(&mut conn, false).await;
    let settings = make_settings(None, Some(1_000));
    let grant_id = TokenTransfer::create_grant(&basic, &settings, &mut *conn)
        .await
        .unwrap();

    use crate::db::{models::NewEvmTokenTransferLog, schema::evm_token_transfer_log};
    insert_into(evm_token_transfer_log::table)
        .values(NewEvmTokenTransferLog {
            grant_id,
            log_id: 0,
            chain_id: CHAIN_ID as i32,
            token_contract: DAI.to_vec(),
            recipient_address: RECIPIENT.to_vec(),
            value: utils::u256_to_bytes(U256::from(1_001u64)).to_vec(),
        })
        .execute(&mut *conn)
        .await
        .unwrap();

    let grant = Grant {
        id: grant_id,
        shared_grant_id: basic.id,
        shared: shared(),
        settings,
    };
    let calldata = transfer_calldata(RECIPIENT, U256::from(100u64));
    let context = ctx(DAI, calldata);
    let m = TokenTransfer::analyze(&context).unwrap();
    let v = TokenTransfer::evaluate(&context, &m, &grant, &mut *conn)
        .await
        .unwrap();
    assert!(
        v.iter()
            .any(|e| matches!(e, EvalViolation::VolumetricLimitExceeded))
    );
}

#[tokio::test]
async fn evaluate_no_volume_limits_always_passes() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let grant = Grant {
        id: 999,
        shared_grant_id: 999,
        shared: shared(),
        settings: make_settings(None, None), // no volume limits
    };
    let calldata = transfer_calldata(RECIPIENT, U256::from(u64::MAX));
    let context = ctx(DAI, calldata);
    let m = TokenTransfer::analyze(&context).unwrap();
    let v = TokenTransfer::evaluate(&context, &m, &grant, &mut *conn)
        .await
        .unwrap();
    assert!(
        !v.iter()
            .any(|e| matches!(e, EvalViolation::VolumetricLimitExceeded))
    );
}

// ── try_find_grant ───────────────────────────────────────────────────────

#[tokio::test]
async fn try_find_grant_roundtrip() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let basic = insert_basic(&mut conn, false).await;
    let settings = make_settings(Some(RECIPIENT), Some(5_000));
    TokenTransfer::create_grant(&basic, &settings, &mut *conn)
        .await
        .unwrap();

    let calldata = transfer_calldata(RECIPIENT, U256::from(100u64));
    let found = TokenTransfer::try_find_grant(&ctx(DAI, calldata), &mut *conn)
        .await
        .unwrap();

    assert!(found.is_some());
    let g = found.unwrap();
    assert_eq!(g.settings.token_contract, DAI);
    assert_eq!(g.settings.target, Some(RECIPIENT));
    assert_eq!(g.settings.volume_limits.len(), 1);
    assert_eq!(g.settings.volume_limits[0].max_volume, U256::from(5_000u64));
}

#[tokio::test]
async fn try_find_grant_revoked_returns_none() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let basic = insert_basic(&mut conn, true).await;
    let settings = make_settings(None, None);
    TokenTransfer::create_grant(&basic, &settings, &mut *conn)
        .await
        .unwrap();

    let calldata = transfer_calldata(RECIPIENT, U256::from(1u64));
    let found = TokenTransfer::try_find_grant(&ctx(DAI, calldata), &mut *conn)
        .await
        .unwrap();
    assert!(found.is_none());
}

#[tokio::test]
async fn try_find_grant_unknown_token_returns_none() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let basic = insert_basic(&mut conn, false).await;
    let settings = make_settings(None, None);
    TokenTransfer::create_grant(&basic, &settings, &mut *conn)
        .await
        .unwrap();

    // Query with a different token contract
    let calldata = transfer_calldata(RECIPIENT, U256::from(1u64));
    let found = TokenTransfer::try_find_grant(&ctx(UNKNOWN_TOKEN, calldata), &mut *conn)
        .await
        .unwrap();
    assert!(found.is_none());
}

// ── find_all_grants ──────────────────────────────────────────────────────

#[tokio::test]
async fn find_all_grants_empty_db() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();
    let all = TokenTransfer::find_all_grants(&mut *conn).await.unwrap();
    assert!(all.is_empty());
}

#[tokio::test]
async fn find_all_grants_excludes_revoked() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let settings = make_settings(None, Some(1_000));
    let active = insert_basic(&mut conn, false).await;
    TokenTransfer::create_grant(&active, &settings, &mut *conn)
        .await
        .unwrap();
    let revoked = insert_basic(&mut conn, true).await;
    TokenTransfer::create_grant(&revoked, &settings, &mut *conn)
        .await
        .unwrap();

    let all = TokenTransfer::find_all_grants(&mut *conn).await.unwrap();
    assert_eq!(all.len(), 1);
}

#[tokio::test]
async fn find_all_grants_loads_volume_limits() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let basic = insert_basic(&mut conn, false).await;
    let settings = make_settings(None, Some(9_999));
    TokenTransfer::create_grant(&basic, &settings, &mut *conn)
        .await
        .unwrap();

    let all = TokenTransfer::find_all_grants(&mut *conn).await.unwrap();
    assert_eq!(all.len(), 1);
    assert_eq!(all[0].settings.volume_limits.len(), 1);
    assert_eq!(
        all[0].settings.volume_limits[0].max_volume,
        U256::from(9_999u64)
    );
}

#[tokio::test]
async fn find_all_grants_multiple_grants_batch_loaded() {
    let db = db::create_test_pool().await;
    let mut conn = db.get().await.unwrap();

    let b1 = insert_basic(&mut conn, false).await;
    TokenTransfer::create_grant(&b1, &make_settings(None, Some(1_000)), &mut *conn)
        .await
        .unwrap();
    let b2 = insert_basic(&mut conn, false).await;
    TokenTransfer::create_grant(
        &b2,
        &make_settings(Some(RECIPIENT), Some(2_000)),
        &mut *conn,
    )
    .await
    .unwrap();

    let all = TokenTransfer::find_all_grants(&mut *conn).await.unwrap();
    assert_eq!(all.len(), 2);
}
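The `transfer_calldata` helper above encodes only the raw ABI parameters of `transfer(to, value)`, without the 4-byte function selector. A stdlib-only sketch of what that byte layout looks like (this is an illustration of the ABI word layout, not the alloy encoder itself; `u128` stands in for the full 256-bit value type):

```rust
// Raw ABI params for transfer(to, value): two 32-byte words, no 4-byte selector.
fn encode_transfer_raw(to: [u8; 20], value: u128) -> Vec<u8> {
    let mut out = Vec::with_capacity(64);
    // word 1: the 20-byte address left-padded with zeros to 32 bytes
    out.extend_from_slice(&[0u8; 12]);
    out.extend_from_slice(&to);
    // word 2: the value as a big-endian 32-byte word (here zero-extended u128)
    out.extend_from_slice(&[0u8; 16]);
    out.extend_from_slice(&value.to_be_bytes());
    out
}

fn main() {
    let raw = encode_transfer_raw([0xaa; 20], 100);
    assert_eq!(raw.len(), 64); // exactly two ABI words
    assert_eq!(raw[12], 0xaa); // address bytes start after 12 zero bytes
    assert_eq!(raw[63], 100); // value sits in the last byte
    println!("{}", raw.len()); // prints 64
}
```

This is why the tests call `abi_encode_raw` rather than `abi_encode`: the policy's decoder (`abi_decode_raw_validate`) consumes exactly these 64 bytes of parameters, with the selector handled separately.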
@@ -1,14 +1,14 @@
 use std::sync::Mutex;

+use crate::safe_cell::{SafeCell, SafeCellHandle as _};
 use alloy::{
     consensus::SignableTransaction,
     network::{TxSigner, TxSignerSync},
-    primitives::{Address, ChainId, Signature, B256},
+    primitives::{Address, B256, ChainId, Signature},
     signers::{Error, Result, Signer, SignerSync, utils::secret_key_to_address},
 };
 use async_trait::async_trait;
-use k256::ecdsa::{self, signature::hazmat::PrehashSigner, RecoveryId, SigningKey};
-use memsafe::MemSafe;
+use k256::ecdsa::{self, RecoveryId, SigningKey, signature::hazmat::PrehashSigner};

 /// An Ethereum signer that stores its secp256k1 secret key inside a
 /// hardware-protected [`MemSafe`] cell.
@@ -20,7 +20,7 @@ use memsafe::MemSafe;
 /// Because [`MemSafe::read`] requires `&mut self` while the [`Signer`] trait
 /// requires `&self`, the cell is wrapped in a [`Mutex`].
 pub struct SafeSigner {
-    key: Mutex<MemSafe<SigningKey>>,
+    key: Mutex<SafeCell<SigningKey>>,
     address: Address,
     chain_id: Option<ChainId>,
 }
@@ -42,14 +42,13 @@ impl std::fmt::Debug for SafeSigner {
     /// rejection, but we retry to be correct).
     ///
     /// Returns the protected key bytes and the derived Ethereum address.
-    pub fn generate(rng: &mut impl rand::Rng) -> (MemSafe<[u8; 32]>, Address) {
+    pub fn generate(rng: &mut impl rand::Rng) -> (SafeCell<[u8; 32]>, Address) {
         loop {
-            let mut cell = MemSafe::new([0u8; 32]).expect("MemSafe allocation");
-            {
-                let mut w = cell.write().expect("MemSafe write");
-                rng.fill_bytes(w.as_mut());
-            }
-            let reader = cell.read().expect("MemSafe read");
+            let mut cell = SafeCell::new_inline(|w: &mut [u8; 32]| {
+                rng.fill_bytes(w);
+            });
+
+            let reader = cell.read();
             if let Ok(sk) = SigningKey::from_slice(reader.as_ref()) {
                 let address = secret_key_to_address(&sk);
                 drop(reader);
@@ -64,8 +63,8 @@ impl SafeSigner {
     /// The key bytes are read from protected memory, parsed as a secp256k1
     /// scalar, and immediately moved into a new [`MemSafe`] cell. The raw
     /// bytes are never exposed outside this function.
-    pub fn from_memsafe(mut cell: MemSafe<Vec<u8>>) -> Result<Self> {
-        let reader = cell.read().map_err(Error::other)?;
+    pub fn from_cell(mut cell: SafeCell<Vec<u8>>) -> Result<Self> {
+        let reader = cell.read();
         let sk = SigningKey::from_slice(reader.as_slice()).map_err(Error::other)?;
         drop(reader);
         Self::new(sk)
@@ -75,7 +74,7 @@ impl SafeSigner {
     /// memory region.
     pub fn new(key: SigningKey) -> Result<Self> {
         let address = secret_key_to_address(&key);
-        let cell = MemSafe::new(key).map_err(Error::other)?;
+        let cell = SafeCell::new(key);
         Ok(Self {
             key: Mutex::new(cell),
             address,
@@ -84,25 +83,25 @@ impl SafeSigner {
     }

     fn sign_hash_inner(&self, hash: &B256) -> Result<Signature> {
+        #[allow(clippy::expect_used)]
         let mut cell = self.key.lock().expect("SafeSigner mutex poisoned");
-        let reader = cell.read().map_err(Error::other)?;
+        let reader = cell.read();
         let sig: (ecdsa::Signature, RecoveryId) = reader.sign_prehash(hash.as_ref())?;
         Ok(sig.into())
     }

-    fn sign_tx_inner(
-        &self,
-        tx: &mut dyn SignableTransaction<Signature>,
-    ) -> Result<Signature> {
-        if let Some(chain_id) = self.chain_id {
-            if !tx.set_chain_id_checked(chain_id) {
+    fn sign_tx_inner(&self, tx: &mut dyn SignableTransaction<Signature>) -> Result<Signature> {
+        if let Some(chain_id) = self.chain_id
+            && !tx.set_chain_id_checked(chain_id)
+        {
             return Err(Error::TransactionChainIdMismatch {
                 signer: chain_id,
-                tx: tx.chain_id().unwrap(),
+                #[allow(clippy::expect_used)]
+                tx: tx.chain_id().expect("Chain ID is guaranteed to be set"),
             });
-            }
         }
-        self.sign_hash_inner(&tx.signature_hash()).map_err(Error::other)
+        self.sign_hash_inner(&tx.signature_hash())
+            .map_err(Error::other)
     }
 }
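The `SafeSigner` doc comment explains the key constraint this diff works around: the protected cell's `read` needs `&mut self`, while the `Signer` trait only hands out `&self`, so the cell is wrapped in a `Mutex`. A stdlib-only sketch of that pattern, with a hypothetical `Cell` type standing in for `MemSafe`/`SafeCell`:

```rust
use std::sync::Mutex;

// Hypothetical protected cell: reading requires &mut self, like MemSafe/SafeCell.
struct Cell<T>(T);
impl<T> Cell<T> {
    fn read(&mut self) -> &T {
        &self.0
    }
}

// A trait that, like alloy's Signer, only gives us &self.
trait Signer {
    fn sign(&self, msg: u8) -> u8;
}

struct S {
    key: Mutex<Cell<u8>>,
}

impl Signer for S {
    fn sign(&self, msg: u8) -> u8 {
        // The Mutex provides interior mutability: &self -> &mut Cell.
        let mut cell = self.key.lock().expect("poisoned");
        msg ^ *cell.read() // toy "signature": XOR with the key
    }
}

fn main() {
    let s = S { key: Mutex::new(Cell(0b1010)) };
    println!("{}", s.sign(0b0110)); // prints 12 (0b1100)
}
```

The lock is held only for the duration of one signing operation, which is why `sign_hash_inner` takes it locally rather than caching a guard.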
server/crates/arbiter-server/src/grpc/client.rs (new file, 137 lines)
@@ -0,0 +1,137 @@
use arbiter_proto::{
    proto::client::{
        ClientRequest, ClientResponse, VaultState as ProtoVaultState,
        client_request::Payload as ClientRequestPayload,
        client_response::Payload as ClientResponsePayload,
    },
    transport::{Receiver, Sender, grpc::GrpcBi},
};
use kameo::{
    actor::{ActorRef, Spawn as _},
    error::SendError,
};
use tonic::Status;
use tracing::{info, warn};

use crate::{
    actors::{
        client::{
            self, ClientConnection,
            session::{ClientSession, Error, HandleQueryVaultState},
        },
        keyholder::KeyHolderState,
    },
    grpc::request_tracker::RequestTracker,
    utils::defer,
};

mod auth;

async fn dispatch_loop(
    mut bi: GrpcBi<ClientRequest, ClientResponse>,
    actor: ActorRef<ClientSession>,
    mut request_tracker: RequestTracker,
) {
    loop {
        let Some(conn) = bi.recv().await else {
            return;
        };

        if dispatch_conn_message(&mut bi, &actor, &mut request_tracker, conn)
            .await
            .is_err()
        {
            return;
        }
    }
}

async fn dispatch_conn_message(
    bi: &mut GrpcBi<ClientRequest, ClientResponse>,
    actor: &ActorRef<ClientSession>,
    request_tracker: &mut RequestTracker,
    conn: Result<ClientRequest, Status>,
) -> Result<(), ()> {
    let conn = match conn {
        Ok(conn) => conn,
        Err(err) => {
            warn!(error = ?err, "Failed to receive client request");
            return Err(());
        }
    };

    let request_id = match request_tracker.request(conn.request_id) {
        Ok(request_id) => request_id,
        Err(err) => {
            let _ = bi.send(Err(err)).await;
            return Err(());
        }
    };
    let Some(payload) = conn.payload else {
        let _ = bi
            .send(Err(Status::invalid_argument(
                "Missing client request payload",
            )))
            .await;
        return Err(());
    };

    let payload = match payload {
        ClientRequestPayload::QueryVaultState(_) => ClientResponsePayload::VaultState(
            match actor.ask(HandleQueryVaultState {}).await {
                Ok(KeyHolderState::Unbootstrapped) => ProtoVaultState::Unbootstrapped,
                Ok(KeyHolderState::Sealed) => ProtoVaultState::Sealed,
                Ok(KeyHolderState::Unsealed) => ProtoVaultState::Unsealed,
                Err(SendError::HandlerError(Error::Internal)) => ProtoVaultState::Error,
                Err(err) => {
                    warn!(error = ?err, "Failed to query vault state");
                    ProtoVaultState::Error
                }
            }
            .into(),
        ),
        payload => {
            warn!(?payload, "Unsupported post-auth client request");
            let _ = bi
                .send(Err(Status::invalid_argument("Unsupported client request")))
                .await;
            return Err(());
        }
    };

    bi.send(Ok(ClientResponse {
        request_id: Some(request_id),
        payload: Some(payload),
    }))
    .await
    .map_err(|_| ())
}

pub async fn start(conn: ClientConnection, mut bi: GrpcBi<ClientRequest, ClientResponse>) {
    let mut conn = conn;
    let mut request_tracker = RequestTracker::default();
    let mut response_id = None;

    match auth::start(&mut conn, &mut bi, &mut request_tracker, &mut response_id).await {
        Ok(_) => {
            let actor =
                client::session::ClientSession::spawn(client::session::ClientSession::new(conn));
            let actor_for_cleanup = actor.clone();
            let _ = defer(move || {
                actor_for_cleanup.kill();
            });

            info!("Client authenticated successfully");
            dispatch_loop(bi, actor, request_tracker).await;
        }
        Err(e) => {
            let mut transport = auth::AuthTransportAdapter::new(
                &mut bi,
                &mut request_tracker,
                &mut response_id,
            );
            let _ = transport.send(Err(e.clone())).await;
            warn!(error = ?e, "Authentication failed");
        }
    }
}
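`dispatch_loop` above reads requests off the bi-directional stream, maps each one to a response, and tears the session down on the first transport error, signalled by the `Result<(), ()>` return of the per-message handler. A stdlib-only sketch of that shape, using `mpsc` channels as a hypothetical stand-in for the gRPC stream:

```rust
use std::sync::mpsc;

// Minimal dispatch loop: turn each inbound request into a response,
// stop on the first transport error or when the peer hangs up.
fn dispatch_loop(rx: mpsc::Receiver<Result<u32, String>>, tx: mpsc::Sender<u32>) {
    for msg in rx {
        let Ok(req) = msg else {
            return; // transport error: tear down the session
        };
        if tx.send(req + 1).is_err() {
            return; // response side closed
        }
    }
}

fn main() {
    let (req_tx, req_rx) = mpsc::channel();
    let (resp_tx, resp_rx) = mpsc::channel();
    req_tx.send(Ok(1)).unwrap();
    req_tx.send(Ok(41)).unwrap();
    drop(req_tx); // no more requests: loop exits cleanly
    dispatch_loop(req_rx, resp_tx);
    let responses: Vec<u32> = resp_rx.iter().collect();
    assert_eq!(responses, vec![2, 42]);
    println!("{}", responses.len()); // prints 2
}
```

Keeping the per-message handling in a separate function (like `dispatch_conn_message`) lets every early-exit path use `return Err(())`, while the loop itself stays a two-branch skeleton.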
server/crates/arbiter-server/src/grpc/client/auth.rs (new file, 173 lines)
@@ -0,0 +1,173 @@
use arbiter_proto::{
    proto::client::{
        AuthChallenge as ProtoAuthChallenge, AuthChallengeRequest as ProtoAuthChallengeRequest,
        AuthChallengeSolution as ProtoAuthChallengeSolution, AuthResult as ProtoAuthResult,
        ClientRequest, ClientResponse, client_request::Payload as ClientRequestPayload,
        client_response::Payload as ClientResponsePayload,
    },
    transport::{Bi, Error as TransportError, Receiver, Sender, grpc::GrpcBi},
};
use async_trait::async_trait;
use tonic::Status;
use tracing::warn;

use crate::{
    actors::client::{self, ClientConnection, auth},
    grpc::request_tracker::RequestTracker,
};

pub struct AuthTransportAdapter<'a> {
    bi: &'a mut GrpcBi<ClientRequest, ClientResponse>,
    request_tracker: &'a mut RequestTracker,
    response_id: &'a mut Option<i32>,
}

impl<'a> AuthTransportAdapter<'a> {
    pub fn new(
        bi: &'a mut GrpcBi<ClientRequest, ClientResponse>,
        request_tracker: &'a mut RequestTracker,
        response_id: &'a mut Option<i32>,
    ) -> Self {
        Self {
            bi,
            request_tracker,
            response_id,
        }
    }

    fn response_to_proto(response: auth::Outbound) -> ClientResponsePayload {
        match response {
            auth::Outbound::AuthChallenge { pubkey, nonce } => {
                ClientResponsePayload::AuthChallenge(ProtoAuthChallenge {
                    pubkey: pubkey.to_bytes().to_vec(),
                    nonce,
                })
            }
            auth::Outbound::AuthSuccess => {
                ClientResponsePayload::AuthResult(ProtoAuthResult::Success.into())
            }
        }
    }

    fn error_to_proto(error: auth::Error) -> ClientResponsePayload {
        ClientResponsePayload::AuthResult(
            match error {
                auth::Error::InvalidChallengeSolution => ProtoAuthResult::InvalidSignature,
                auth::Error::ApproveError(auth::ApproveError::Denied) => {
                    ProtoAuthResult::ApprovalDenied
                }
                auth::Error::ApproveError(auth::ApproveError::Upstream(
                    crate::actors::router::ApprovalError::NoUserAgentsConnected,
                )) => ProtoAuthResult::NoUserAgentsOnline,
                auth::Error::ApproveError(auth::ApproveError::Internal)
                | auth::Error::DatabasePoolUnavailable
                | auth::Error::DatabaseOperationFailed
                | auth::Error::Transport => ProtoAuthResult::Internal,
            }
            .into(),
        )
    }

    async fn send_client_response(
        &mut self,
        payload: ClientResponsePayload,
    ) -> Result<(), TransportError> {
        let request_id = self.response_id.take();

        self.bi
            .send(Ok(ClientResponse {
                request_id,
                payload: Some(payload),
            }))
            .await
    }

    async fn send_auth_result(&mut self, result: ProtoAuthResult) -> Result<(), TransportError> {
        self.send_client_response(ClientResponsePayload::AuthResult(result.into()))
            .await
    }
}

#[async_trait]
impl Sender<Result<auth::Outbound, auth::Error>> for AuthTransportAdapter<'_> {
    async fn send(
        &mut self,
        item: Result<auth::Outbound, auth::Error>,
    ) -> Result<(), TransportError> {
        let payload = match item {
            Ok(message) => AuthTransportAdapter::response_to_proto(message),
            Err(err) => AuthTransportAdapter::error_to_proto(err),
        };

        self.send_client_response(payload).await
    }
}

#[async_trait]
impl Receiver<auth::Inbound> for AuthTransportAdapter<'_> {
|
||||||
|
async fn recv(&mut self) -> Option<auth::Inbound> {
|
||||||
|
let request = match self.bi.recv().await? {
|
||||||
|
Ok(request) => request,
|
||||||
|
Err(error) => {
|
||||||
|
warn!(error = ?error, "grpc client recv failed; closing stream");
|
||||||
|
return None;
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
let request_id = match self.request_tracker.request(request.request_id) {
|
||||||
|
Ok(request_id) => request_id,
|
||||||
|
Err(error) => {
|
||||||
|
let _ = self.bi.send(Err(error)).await;
|
||||||
|
return None;
|
||||||
|
}
|
||||||
|
};
|
||||||
|
*self.response_id = Some(request_id);
|
||||||
|
|
||||||
|
let payload = request.payload?;
|
||||||
|
|
||||||
|
match payload {
|
||||||
|
ClientRequestPayload::AuthChallengeRequest(ProtoAuthChallengeRequest { pubkey }) => {
|
||||||
|
let Ok(pubkey) = <[u8; 32]>::try_from(pubkey) else {
|
||||||
|
let _ = self.send_auth_result(ProtoAuthResult::InvalidKey).await;
|
||||||
|
return None;
|
||||||
|
};
|
||||||
|
let Ok(pubkey) = ed25519_dalek::VerifyingKey::from_bytes(&pubkey) else {
|
||||||
|
let _ = self.send_auth_result(ProtoAuthResult::InvalidKey).await;
|
||||||
|
return None;
|
||||||
|
};
|
||||||
|
Some(auth::Inbound::AuthChallengeRequest { pubkey })
|
||||||
|
}
|
||||||
|
ClientRequestPayload::AuthChallengeSolution(ProtoAuthChallengeSolution {
|
||||||
|
signature,
|
||||||
|
}) => {
|
||||||
|
let Ok(signature) = ed25519_dalek::Signature::try_from(signature.as_slice()) else {
|
||||||
|
let _ = self
|
||||||
|
.send_auth_result(ProtoAuthResult::InvalidSignature)
|
||||||
|
.await;
|
||||||
|
return None;
|
||||||
|
};
|
||||||
|
Some(auth::Inbound::AuthChallengeSolution { signature })
|
||||||
|
}
|
||||||
|
_ => {
|
||||||
|
let _ = self
|
||||||
|
.bi
|
||||||
|
.send(Err(Status::invalid_argument("Unsupported client auth request")))
|
||||||
|
.await;
|
||||||
|
None
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
impl Bi<auth::Inbound, Result<auth::Outbound, auth::Error>> for AuthTransportAdapter<'_> {}
|
||||||
|
|
||||||
|
pub async fn start(
|
||||||
|
conn: &mut ClientConnection,
|
||||||
|
bi: &mut GrpcBi<ClientRequest, ClientResponse>,
|
||||||
|
request_tracker: &mut RequestTracker,
|
||||||
|
response_id: &mut Option<i32>,
|
||||||
|
) -> Result<(), auth::Error> {
|
||||||
|
let mut transport = AuthTransportAdapter::new(bi, request_tracker, response_id);
|
||||||
|
client::auth::authenticate(conn, &mut transport).await?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
62  server/crates/arbiter-server/src/grpc/mod.rs  Normal file
@@ -0,0 +1,62 @@
use arbiter_proto::{
    proto::{
        client::{ClientRequest, ClientResponse},
        user_agent::{UserAgentRequest, UserAgentResponse},
    },
    transport::grpc::GrpcBi,
};
use tokio_stream::wrappers::ReceiverStream;
use tonic::{Request, Response, Status, async_trait};
use tracing::info;

use crate::{
    actors::{client::ClientConnection, user_agent::UserAgentConnection},
    grpc::user_agent::start,
};

pub mod client;
mod request_tracker;
pub mod user_agent;

#[async_trait]
impl arbiter_proto::proto::arbiter_service_server::ArbiterService for super::Server {
    type UserAgentStream = ReceiverStream<Result<UserAgentResponse, Status>>;
    type ClientStream = ReceiverStream<Result<ClientResponse, Status>>;

    #[tracing::instrument(level = "debug", skip(self))]
    async fn client(
        &self,
        request: Request<tonic::Streaming<ClientRequest>>,
    ) -> Result<Response<Self::ClientStream>, Status> {
        let req_stream = request.into_inner();
        let (bi, rx) = GrpcBi::from_bi_stream(req_stream);
        let props = ClientConnection::new(self.context.db.clone(), self.context.actors.clone());
        tokio::spawn(client::start(props, bi));

        info!(event = "connection established", "grpc.client");

        Ok(Response::new(rx))
    }

    #[tracing::instrument(level = "debug", skip(self))]
    async fn user_agent(
        &self,
        request: Request<tonic::Streaming<UserAgentRequest>>,
    ) -> Result<Response<Self::UserAgentStream>, Status> {
        let req_stream = request.into_inner();

        let (bi, rx) = GrpcBi::from_bi_stream(req_stream);

        tokio::spawn(start(
            UserAgentConnection {
                db: self.context.db.clone(),
                actors: self.context.actors.clone(),
            },
            bi,
        ));

        info!(event = "connection established", "grpc.user_agent");

        Ok(Response::new(rx))
    }
}
20  server/crates/arbiter-server/src/grpc/request_tracker.rs  Normal file
@@ -0,0 +1,20 @@
use tonic::Status;

#[derive(Default)]
pub struct RequestTracker {
    next_request_id: i32,
}

impl RequestTracker {
    pub fn request(&mut self, id: i32) -> Result<i32, Status> {
        if id < self.next_request_id {
            return Err(Status::invalid_argument("Duplicate request id"));
        }

        self.next_request_id = id
            .checked_add(1)
            .ok_or_else(|| Status::invalid_argument("Invalid request id"))?;

        Ok(id)
    }
}
604  server/crates/arbiter-server/src/grpc/user_agent.rs  Normal file
@@ -0,0 +1,604 @@
use tokio::sync::mpsc;

use arbiter_proto::{
    google::protobuf::{Empty as ProtoEmpty, Timestamp as ProtoTimestamp},
    proto::{
        evm::{
            EtherTransferSettings as ProtoEtherTransferSettings, EvmError as ProtoEvmError,
            EvmGrantCreateRequest, EvmGrantCreateResponse, EvmGrantDeleteRequest,
            EvmGrantDeleteResponse, EvmGrantList, EvmGrantListResponse, GrantEntry,
            SharedSettings as ProtoSharedSettings, SpecificGrant as ProtoSpecificGrant,
            TokenTransferSettings as ProtoTokenTransferSettings,
            TransactionRateLimit as ProtoTransactionRateLimit,
            VolumeRateLimit as ProtoVolumeRateLimit, WalletCreateResponse, WalletEntry, WalletList,
            WalletListResponse, evm_grant_create_response::Result as EvmGrantCreateResult,
            evm_grant_delete_response::Result as EvmGrantDeleteResult,
            evm_grant_list_response::Result as EvmGrantListResult,
            specific_grant::Grant as ProtoSpecificGrantType,
            wallet_create_response::Result as WalletCreateResult,
            wallet_list_response::Result as WalletListResult,
        },
        user_agent::{
            BootstrapEncryptedKey as ProtoBootstrapEncryptedKey,
            BootstrapResult as ProtoBootstrapResult,
            SdkClientConnectionResponse as ProtoSdkClientConnectionResponse,
            UnsealEncryptedKey as ProtoUnsealEncryptedKey, UnsealResult as ProtoUnsealResult,
            UnsealStart, UserAgentRequest, UserAgentResponse, VaultState as ProtoVaultState,
            user_agent_request::Payload as UserAgentRequestPayload,
            user_agent_response::Payload as UserAgentResponsePayload,
        },
    },
    transport::{Error as TransportError, Receiver, Sender, grpc::GrpcBi},
};
use async_trait::async_trait;
use chrono::{TimeZone, Utc};
use kameo::{
    actor::{ActorRef, Spawn as _},
    error::SendError,
};
use tonic::Status;
use tracing::{info, warn};

use crate::{
    actors::{
        keyholder::KeyHolderState,
        user_agent::{
            OutOfBand, UserAgentConnection, UserAgentSession,
            session::{
                BootstrapError, Error, HandleBootstrapEncryptedKey, HandleEvmWalletCreate,
                HandleEvmWalletList, HandleGrantCreate, HandleGrantDelete, HandleGrantList,
                HandleQueryVaultState, HandleUnsealEncryptedKey, HandleUnsealRequest, UnsealError,
            },
        },
    },
    evm::policies::{
        Grant, SharedGrantSettings, SpecificGrant, TransactionRateLimit, VolumeRateLimit,
        ether_transfer, token_transfers,
    },
    grpc::request_tracker::RequestTracker,
    utils::defer,
};
use alloy::primitives::{Address, U256};
mod auth;

pub struct OutOfBandAdapter(mpsc::Sender<OutOfBand>);

#[async_trait]
impl Sender<OutOfBand> for OutOfBandAdapter {
    async fn send(&mut self, item: OutOfBand) -> Result<(), TransportError> {
        self.0.send(item).await.map_err(|e| {
            warn!(error = ?e, "Failed to send out-of-band message");
            TransportError::ChannelClosed
        })
    }
}
async fn dispatch_loop(
    mut bi: GrpcBi<UserAgentRequest, UserAgentResponse>,
    actor: ActorRef<UserAgentSession>,
    mut receiver: mpsc::Receiver<OutOfBand>,
    mut request_tracker: RequestTracker,
) {
    loop {
        tokio::select! {
            oob = receiver.recv() => {
                let Some(oob) = oob else {
                    return;
                };

                if send_out_of_band(&mut bi, oob).await.is_err() {
                    return;
                }
            }

            conn = bi.recv() => {
                let Some(conn) = conn else {
                    return;
                };

                if dispatch_conn_message(&mut bi, &actor, &mut request_tracker, conn)
                    .await
                    .is_err()
                {
                    return;
                }
            }
        }
    }
}
async fn dispatch_conn_message(
    bi: &mut GrpcBi<UserAgentRequest, UserAgentResponse>,
    actor: &ActorRef<UserAgentSession>,
    request_tracker: &mut RequestTracker,
    conn: Result<UserAgentRequest, Status>,
) -> Result<(), ()> {
    let conn = match conn {
        Ok(conn) => conn,
        Err(err) => {
            warn!(error = ?err, "Failed to receive user agent request");
            return Err(());
        }
    };

    let request_id = match request_tracker.request(conn.id) {
        Ok(request_id) => request_id,
        Err(err) => {
            let _ = bi.send(Err(err)).await;
            return Err(());
        }
    };

    let Some(payload) = conn.payload else {
        let _ = bi
            .send(Err(Status::invalid_argument(
                "Missing user-agent request payload",
            )))
            .await;
        return Err(());
    };

    let payload = match payload {
        UserAgentRequestPayload::UnsealStart(UnsealStart { client_pubkey }) => {
            let client_pubkey = match <[u8; 32]>::try_from(client_pubkey) {
                Ok(bytes) => x25519_dalek::PublicKey::from(bytes),
                Err(_) => {
                    let _ = bi
                        .send(Err(Status::invalid_argument("Invalid X25519 public key")))
                        .await;
                    return Err(());
                }
            };

            match actor.ask(HandleUnsealRequest { client_pubkey }).await {
                Ok(response) => UserAgentResponsePayload::UnsealStartResponse(
                    arbiter_proto::proto::user_agent::UnsealStartResponse {
                        server_pubkey: response.server_pubkey.as_bytes().to_vec(),
                    },
                ),
                Err(err) => {
                    warn!(error = ?err, "Failed to handle unseal start request");
                    let _ = bi
                        .send(Err(Status::internal("Failed to start unseal flow")))
                        .await;
                    return Err(());
                }
            }
        }
        UserAgentRequestPayload::UnsealEncryptedKey(ProtoUnsealEncryptedKey {
            nonce,
            ciphertext,
            associated_data,
        }) => UserAgentResponsePayload::UnsealResult(
            match actor
                .ask(HandleUnsealEncryptedKey {
                    nonce,
                    ciphertext,
                    associated_data,
                })
                .await
            {
                Ok(()) => ProtoUnsealResult::Success,
                Err(SendError::HandlerError(UnsealError::InvalidKey)) => {
                    ProtoUnsealResult::InvalidKey
                }
                Err(err) => {
                    warn!(error = ?err, "Failed to handle unseal request");
                    let _ = bi
                        .send(Err(Status::internal("Failed to unseal vault")))
                        .await;
                    return Err(());
                }
            }
            .into(),
        ),
        UserAgentRequestPayload::BootstrapEncryptedKey(ProtoBootstrapEncryptedKey {
            nonce,
            ciphertext,
            associated_data,
        }) => UserAgentResponsePayload::BootstrapResult(
            match actor
                .ask(HandleBootstrapEncryptedKey {
                    nonce,
                    ciphertext,
                    associated_data,
                })
                .await
            {
                Ok(()) => ProtoBootstrapResult::Success,
                Err(SendError::HandlerError(BootstrapError::InvalidKey)) => {
                    ProtoBootstrapResult::InvalidKey
                }
                Err(SendError::HandlerError(BootstrapError::AlreadyBootstrapped)) => {
                    ProtoBootstrapResult::AlreadyBootstrapped
                }
                Err(err) => {
                    warn!(error = ?err, "Failed to handle bootstrap request");
                    let _ = bi
                        .send(Err(Status::internal("Failed to bootstrap vault")))
                        .await;
                    return Err(());
                }
            }
            .into(),
        ),
        UserAgentRequestPayload::QueryVaultState(_) => UserAgentResponsePayload::VaultState(
            match actor.ask(HandleQueryVaultState {}).await {
                Ok(KeyHolderState::Unbootstrapped) => ProtoVaultState::Unbootstrapped,
                Ok(KeyHolderState::Sealed) => ProtoVaultState::Sealed,
                Ok(KeyHolderState::Unsealed) => ProtoVaultState::Unsealed,
                Err(err) => {
                    warn!(error = ?err, "Failed to query vault state");
                    ProtoVaultState::Error
                }
            }
            .into(),
        ),
        UserAgentRequestPayload::EvmWalletCreate(_) => UserAgentResponsePayload::EvmWalletCreate(
            EvmGrantOrWallet::wallet_create_response(actor.ask(HandleEvmWalletCreate {}).await),
        ),
        UserAgentRequestPayload::EvmWalletList(_) => UserAgentResponsePayload::EvmWalletList(
            EvmGrantOrWallet::wallet_list_response(actor.ask(HandleEvmWalletList {}).await),
        ),
        UserAgentRequestPayload::EvmGrantList(_) => UserAgentResponsePayload::EvmGrantList(
            EvmGrantOrWallet::grant_list_response(actor.ask(HandleGrantList {}).await),
        ),
        UserAgentRequestPayload::EvmGrantCreate(EvmGrantCreateRequest {
            client_id,
            shared,
            specific,
        }) => {
            let (basic, grant) = match parse_grant_request(shared, specific) {
                Ok(values) => values,
                Err(status) => {
                    let _ = bi.send(Err(status)).await;
                    return Err(());
                }
            };

            UserAgentResponsePayload::EvmGrantCreate(EvmGrantOrWallet::grant_create_response(
                actor
                    .ask(HandleGrantCreate {
                        client_id,
                        basic,
                        grant,
                    })
                    .await,
            ))
        }
        UserAgentRequestPayload::EvmGrantDelete(EvmGrantDeleteRequest { grant_id }) => {
            UserAgentResponsePayload::EvmGrantDelete(EvmGrantOrWallet::grant_delete_response(
                actor.ask(HandleGrantDelete { grant_id }).await,
            ))
        }
        payload => {
            warn!(?payload, "Unsupported post-auth user agent request");
            let _ = bi
                .send(Err(Status::invalid_argument(
                    "Unsupported user-agent request",
                )))
                .await;
            return Err(());
        }
    };

    bi.send(Ok(UserAgentResponse {
        id: Some(request_id),
        payload: Some(payload),
    }))
    .await
    .map_err(|_| ())
}
async fn send_out_of_band(
    bi: &mut GrpcBi<UserAgentRequest, UserAgentResponse>,
    oob: OutOfBand,
) -> Result<(), ()> {
    let payload = match oob {
        // The current protobuf response payload carries only an approval boolean.
        // Keep emitting this shape until a dedicated out-of-band request/cancel payload
        // is reintroduced in the protocol definition.
        OutOfBand::ClientConnectionRequest { pubkey: _ } => {
            UserAgentResponsePayload::SdkClientConnectionResponse(
                ProtoSdkClientConnectionResponse { approved: false },
            )
        }
        OutOfBand::ClientConnectionCancel => UserAgentResponsePayload::SdkClientConnectionResponse(
            ProtoSdkClientConnectionResponse { approved: false },
        ),
    };

    bi.send(Ok(UserAgentResponse {
        id: None,
        payload: Some(payload),
    }))
    .await
    .map_err(|_| ())
}
fn parse_grant_request(
    shared: Option<ProtoSharedSettings>,
    specific: Option<ProtoSpecificGrant>,
) -> Result<(SharedGrantSettings, SpecificGrant), Status> {
    let shared = shared.ok_or_else(|| Status::invalid_argument("Missing shared grant settings"))?;
    let specific =
        specific.ok_or_else(|| Status::invalid_argument("Missing specific grant settings"))?;

    Ok((
        shared_settings_from_proto(shared)?,
        specific_grant_from_proto(specific)?,
    ))
}

fn shared_settings_from_proto(shared: ProtoSharedSettings) -> Result<SharedGrantSettings, Status> {
    Ok(SharedGrantSettings {
        wallet_id: shared.wallet_id,
        client_id: 0,
        chain: shared.chain_id,
        valid_from: shared.valid_from.map(proto_timestamp_to_utc).transpose()?,
        valid_until: shared.valid_until.map(proto_timestamp_to_utc).transpose()?,
        max_gas_fee_per_gas: shared
            .max_gas_fee_per_gas
            .as_deref()
            .map(u256_from_proto_bytes)
            .transpose()?,
        max_priority_fee_per_gas: shared
            .max_priority_fee_per_gas
            .as_deref()
            .map(u256_from_proto_bytes)
            .transpose()?,
        rate_limit: shared.rate_limit.map(|limit| TransactionRateLimit {
            count: limit.count,
            window: chrono::Duration::seconds(limit.window_secs),
        }),
    })
}

fn specific_grant_from_proto(specific: ProtoSpecificGrant) -> Result<SpecificGrant, Status> {
    match specific.grant {
        Some(ProtoSpecificGrantType::EtherTransfer(ProtoEtherTransferSettings {
            targets,
            limit,
        })) => Ok(SpecificGrant::EtherTransfer(ether_transfer::Settings {
            target: targets
                .into_iter()
                .map(address_from_bytes)
                .collect::<Result<_, _>>()?,
            limit: volume_rate_limit_from_proto(limit.ok_or_else(|| {
                Status::invalid_argument("Missing ether transfer volume rate limit")
            })?)?,
        })),
        Some(ProtoSpecificGrantType::TokenTransfer(ProtoTokenTransferSettings {
            token_contract,
            target,
            volume_limits,
        })) => Ok(SpecificGrant::TokenTransfer(token_transfers::Settings {
            token_contract: address_from_bytes(token_contract)?,
            target: target.map(address_from_bytes).transpose()?,
            volume_limits: volume_limits
                .into_iter()
                .map(volume_rate_limit_from_proto)
                .collect::<Result<_, _>>()?,
        })),
        None => Err(Status::invalid_argument("Missing specific grant kind")),
    }
}

fn volume_rate_limit_from_proto(limit: ProtoVolumeRateLimit) -> Result<VolumeRateLimit, Status> {
    Ok(VolumeRateLimit {
        max_volume: u256_from_proto_bytes(&limit.max_volume)?,
        window: chrono::Duration::seconds(limit.window_secs),
    })
}

fn address_from_bytes(bytes: Vec<u8>) -> Result<Address, Status> {
    if bytes.len() != 20 {
        return Err(Status::invalid_argument("Invalid EVM address"));
    }

    Ok(Address::from_slice(&bytes))
}

fn u256_from_proto_bytes(bytes: &[u8]) -> Result<U256, Status> {
    if bytes.len() > 32 {
        return Err(Status::invalid_argument("Invalid U256 byte length"));
    }

    Ok(U256::from_be_slice(bytes))
}

fn proto_timestamp_to_utc(timestamp: ProtoTimestamp) -> Result<chrono::DateTime<Utc>, Status> {
    Utc.timestamp_opt(timestamp.seconds, timestamp.nanos as u32)
        .single()
        .ok_or_else(|| Status::invalid_argument("Invalid timestamp"))
}

fn shared_settings_to_proto(shared: SharedGrantSettings) -> ProtoSharedSettings {
    ProtoSharedSettings {
        wallet_id: shared.wallet_id,
        chain_id: shared.chain,
        valid_from: shared.valid_from.map(|time| ProtoTimestamp {
            seconds: time.timestamp(),
            nanos: time.timestamp_subsec_nanos() as i32,
        }),
        valid_until: shared.valid_until.map(|time| ProtoTimestamp {
            seconds: time.timestamp(),
            nanos: time.timestamp_subsec_nanos() as i32,
        }),
        max_gas_fee_per_gas: shared
            .max_gas_fee_per_gas
            .map(|value| value.to_be_bytes::<32>().to_vec()),
        max_priority_fee_per_gas: shared
            .max_priority_fee_per_gas
            .map(|value| value.to_be_bytes::<32>().to_vec()),
        rate_limit: shared.rate_limit.map(|limit| ProtoTransactionRateLimit {
            count: limit.count,
            window_secs: limit.window.num_seconds(),
        }),
    }
}

fn specific_grant_to_proto(grant: SpecificGrant) -> ProtoSpecificGrant {
    let grant = match grant {
        SpecificGrant::EtherTransfer(settings) => {
            ProtoSpecificGrantType::EtherTransfer(ProtoEtherTransferSettings {
                targets: settings
                    .target
                    .into_iter()
                    .map(|address| address.to_vec())
                    .collect(),
                limit: Some(ProtoVolumeRateLimit {
                    max_volume: settings.limit.max_volume.to_be_bytes::<32>().to_vec(),
                    window_secs: settings.limit.window.num_seconds(),
                }),
            })
        }
        SpecificGrant::TokenTransfer(settings) => {
            ProtoSpecificGrantType::TokenTransfer(ProtoTokenTransferSettings {
                token_contract: settings.token_contract.to_vec(),
                target: settings.target.map(|address| address.to_vec()),
                volume_limits: settings
                    .volume_limits
                    .into_iter()
                    .map(|limit| ProtoVolumeRateLimit {
                        max_volume: limit.max_volume.to_be_bytes::<32>().to_vec(),
                        window_secs: limit.window.num_seconds(),
                    })
                    .collect(),
            })
        }
    };

    ProtoSpecificGrant { grant: Some(grant) }
}
struct EvmGrantOrWallet;

impl EvmGrantOrWallet {
    fn wallet_create_response<M>(
        result: Result<Address, SendError<M, Error>>,
    ) -> WalletCreateResponse {
        let result = match result {
            Ok(wallet) => WalletCreateResult::Wallet(WalletEntry {
                address: wallet.to_vec(),
            }),
            Err(err) => {
                warn!(error = ?err, "Failed to create EVM wallet");
                WalletCreateResult::Error(ProtoEvmError::Internal.into())
            }
        };

        WalletCreateResponse {
            result: Some(result),
        }
    }

    fn wallet_list_response<M>(
        result: Result<Vec<Address>, SendError<M, Error>>,
    ) -> WalletListResponse {
        let result = match result {
            Ok(wallets) => WalletListResult::Wallets(WalletList {
                wallets: wallets
                    .into_iter()
                    .map(|wallet| WalletEntry {
                        address: wallet.to_vec(),
                    })
                    .collect(),
            }),
            Err(err) => {
                warn!(error = ?err, "Failed to list EVM wallets");
                WalletListResult::Error(ProtoEvmError::Internal.into())
            }
        };

        WalletListResponse {
            result: Some(result),
        }
    }

    fn grant_create_response<M>(
        result: Result<i32, SendError<M, Error>>,
    ) -> EvmGrantCreateResponse {
        let result = match result {
            Ok(grant_id) => EvmGrantCreateResult::GrantId(grant_id),
            Err(err) => {
                warn!(error = ?err, "Failed to create EVM grant");
                EvmGrantCreateResult::Error(ProtoEvmError::Internal.into())
            }
        };

        EvmGrantCreateResponse {
            result: Some(result),
        }
    }

    fn grant_delete_response<M>(result: Result<(), SendError<M, Error>>) -> EvmGrantDeleteResponse {
        let result = match result {
            Ok(()) => EvmGrantDeleteResult::Ok(ProtoEmpty {}),
            Err(err) => {
                warn!(error = ?err, "Failed to delete EVM grant");
                EvmGrantDeleteResult::Error(ProtoEvmError::Internal.into())
            }
        };

        EvmGrantDeleteResponse {
            result: Some(result),
        }
    }

    fn grant_list_response<M>(
        result: Result<Vec<Grant<SpecificGrant>>, SendError<M, Error>>,
    ) -> EvmGrantListResponse {
        let result = match result {
            Ok(grants) => EvmGrantListResult::Grants(EvmGrantList {
                grants: grants
                    .into_iter()
                    .map(|grant| GrantEntry {
                        id: grant.id,
                        client_id: grant.shared.client_id,
                        shared: Some(shared_settings_to_proto(grant.shared)),
                        specific: Some(specific_grant_to_proto(grant.settings)),
                    })
                    .collect(),
            }),
            Err(err) => {
                warn!(error = ?err, "Failed to list EVM grants");
                EvmGrantListResult::Error(ProtoEvmError::Internal.into())
            }
        };

        EvmGrantListResponse {
            result: Some(result),
        }
    }
}
pub async fn start(
    mut conn: UserAgentConnection,
    mut bi: GrpcBi<UserAgentRequest, UserAgentResponse>,
) {
    let mut request_tracker = RequestTracker::default();
    let mut response_id = None;

    let pubkey = match auth::start(&mut conn, &mut bi, &mut request_tracker, &mut response_id).await
    {
        Ok(pubkey) => pubkey,
        Err(e) => {
            warn!(error = ?e, "Authentication failed");
            return;
        }
    };

    let (oob_sender, oob_receiver) = mpsc::channel(16);
    let oob_adapter = OutOfBandAdapter(oob_sender);

    let actor = UserAgentSession::spawn(UserAgentSession::new(conn, Box::new(oob_adapter)));
    let actor_for_cleanup = actor.clone();

    // Bind the guard to a named variable: `let _ = ...` would drop it
    // immediately and kill the actor before the dispatch loop runs.
    let _guard = defer(move || {
        actor_for_cleanup.kill();
    });

    info!(?pubkey, "User authenticated successfully");
    dispatch_loop(bi, actor, oob_receiver, request_tracker).await;
}
180  server/crates/arbiter-server/src/grpc/user_agent/auth.rs  Normal file
@@ -0,0 +1,180 @@
|
|||||||
|
use arbiter_proto::{
    proto::user_agent::{
        AuthChallenge as ProtoAuthChallenge, AuthChallengeRequest as ProtoAuthChallengeRequest,
        AuthChallengeSolution as ProtoAuthChallengeSolution, AuthResult as ProtoAuthResult,
        KeyType as ProtoKeyType, UserAgentRequest, UserAgentResponse,
        user_agent_request::Payload as UserAgentRequestPayload,
        user_agent_response::Payload as UserAgentResponsePayload,
    },
    transport::{Bi, Error as TransportError, Receiver, Sender, grpc::GrpcBi},
};
use async_trait::async_trait;
use tonic::Status;
use tracing::warn;

use crate::{
    actors::user_agent::{AuthPublicKey, UserAgentConnection, auth},
    db::models::KeyType,
    grpc::request_tracker::RequestTracker,
};

pub struct AuthTransportAdapter<'a> {
    bi: &'a mut GrpcBi<UserAgentRequest, UserAgentResponse>,
    request_tracker: &'a mut RequestTracker,
    response_id: &'a mut Option<i32>,
}

impl<'a> AuthTransportAdapter<'a> {
    pub fn new(
        bi: &'a mut GrpcBi<UserAgentRequest, UserAgentResponse>,
        request_tracker: &'a mut RequestTracker,
        response_id: &'a mut Option<i32>,
    ) -> Self {
        Self {
            bi,
            request_tracker,
            response_id,
        }
    }

    async fn send_user_agent_response(
        &mut self,
        payload: UserAgentResponsePayload,
    ) -> Result<(), TransportError> {
        let id = self.response_id.take();

        self.bi
            .send(Ok(UserAgentResponse {
                id,
                payload: Some(payload),
            }))
            .await
    }
}

#[async_trait]
impl Sender<Result<auth::Outbound, auth::Error>> for AuthTransportAdapter<'_> {
    async fn send(
        &mut self,
        item: Result<auth::Outbound, auth::Error>,
    ) -> Result<(), TransportError> {
        use auth::{Error, Outbound};
        let payload = match item {
            Ok(Outbound::AuthChallenge { nonce }) => {
                UserAgentResponsePayload::AuthChallenge(ProtoAuthChallenge { nonce })
            }
            Ok(Outbound::AuthSuccess) => {
                UserAgentResponsePayload::AuthResult(ProtoAuthResult::Success.into())
            }
            Err(Error::UnregisteredPublicKey) => {
                UserAgentResponsePayload::AuthResult(ProtoAuthResult::InvalidKey.into())
            }
            Err(Error::InvalidChallengeSolution) => {
                UserAgentResponsePayload::AuthResult(ProtoAuthResult::InvalidSignature.into())
            }
            Err(Error::InvalidBootstrapToken) => {
                UserAgentResponsePayload::AuthResult(ProtoAuthResult::TokenInvalid.into())
            }
            Err(Error::Internal { details }) => {
                return self.bi.send(Err(Status::internal(details))).await;
            }
            Err(Error::Transport) => {
                return self.bi.send(Err(Status::unavailable("transport error"))).await;
            }
        };

        self.send_user_agent_response(payload).await
    }
}

#[async_trait]
impl Receiver<auth::Inbound> for AuthTransportAdapter<'_> {
    async fn recv(&mut self) -> Option<auth::Inbound> {
        let request = match self.bi.recv().await? {
            Ok(request) => request,
            Err(error) => {
                warn!(error = ?error, "Failed to receive user agent auth request");
                return None;
            }
        };

        let request_id = match self.request_tracker.request(request.id) {
            Ok(request_id) => request_id,
            Err(error) => {
                let _ = self.bi.send(Err(error)).await;
                return None;
            }
        };
        *self.response_id = Some(request_id);

        let Some(payload) = request.payload else {
            warn!(
                event = "received request with empty payload",
                "grpc.useragent.auth_adapter"
            );
            return None;
        };

        match payload {
            UserAgentRequestPayload::AuthChallengeRequest(ProtoAuthChallengeRequest {
                pubkey,
                bootstrap_token,
                key_type,
            }) => {
                let Ok(key_type) = ProtoKeyType::try_from(key_type) else {
                    warn!(
                        event = "received request with invalid key type",
                        "grpc.useragent.auth_adapter"
                    );
                    return None;
                };
                let key_type = match key_type {
                    ProtoKeyType::Ed25519 => KeyType::Ed25519,
                    ProtoKeyType::EcdsaSecp256k1 => KeyType::EcdsaSecp256k1,
                    ProtoKeyType::Rsa => KeyType::Rsa,
                    ProtoKeyType::Unspecified => {
                        warn!(
                            event = "received request with unspecified key type",
                            "grpc.useragent.auth_adapter"
                        );
                        return None;
                    }
                };
                let Ok(pubkey) = AuthPublicKey::try_from((key_type, pubkey)) else {
                    warn!(
                        event = "received request with invalid public key",
                        "grpc.useragent.auth_adapter"
                    );
                    return None;
                };

                Some(auth::Inbound::AuthChallengeRequest {
                    pubkey,
                    bootstrap_token,
                })
            }
            UserAgentRequestPayload::AuthChallengeSolution(ProtoAuthChallengeSolution {
                signature,
            }) => Some(auth::Inbound::AuthChallengeSolution { signature }),
            _ => {
                let _ = self
                    .bi
                    .send(Err(Status::invalid_argument(
                        "Unsupported user-agent auth request",
                    )))
                    .await;
                None
            }
        }
    }
}

impl Bi<auth::Inbound, Result<auth::Outbound, auth::Error>> for AuthTransportAdapter<'_> {}

pub async fn start(
    conn: &mut UserAgentConnection,
    bi: &mut GrpcBi<UserAgentRequest, UserAgentResponse>,
    request_tracker: &mut RequestTracker,
    response_id: &mut Option<i32>,
) -> Result<AuthPublicKey, auth::Error> {
    let transport = AuthTransportAdapter::new(bi, request_tracker, response_id);
    auth::authenticate(conn, transport).await
}
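The `Sender` impl in this adapter applies one consistent rule: recoverable authentication failures are encoded as an `AuthResult` payload on the response stream, while internal and transport failures abort the stream with a status error. A standalone sketch of that mapping rule, using hypothetical `DomainOutcome`/`WirePayload` stand-ins rather than the real proto types:

```rust
// Hypothetical stand-ins for the domain and wire types; the real code maps
// auth::Outbound / auth::Error into UserAgentResponsePayload or tonic::Status.
#[derive(Debug, PartialEq)]
enum DomainOutcome {
    Challenge { nonce: Vec<u8> },
    Success,
    UnregisteredKey,
    Internal(String),
}

#[derive(Debug, PartialEq)]
enum WirePayload {
    AuthChallenge(Vec<u8>),
    AuthResult(&'static str),
}

// Ok(_) means "send this payload on the stream"; Err(_) means "abort the
// stream with an error status" (what `self.bi.send(Err(..))` does above).
fn to_wire(outcome: DomainOutcome) -> Result<WirePayload, String> {
    match outcome {
        DomainOutcome::Challenge { nonce } => Ok(WirePayload::AuthChallenge(nonce)),
        DomainOutcome::Success => Ok(WirePayload::AuthResult("success")),
        DomainOutcome::UnregisteredKey => Ok(WirePayload::AuthResult("invalid_key")),
        // Internal errors are never encoded as payloads the peer can retry on.
        DomainOutcome::Internal(details) => Err(details),
    }
}

fn main() {
    assert_eq!(to_wire(DomainOutcome::Success), Ok(WirePayload::AuthResult("success")));
    assert!(to_wire(DomainOutcome::Internal("db down".into())).is_err());
}
```

The split keeps protocol-visible outcomes (which the client is expected to handle) separate from terminal failures, so adding a new recoverable error only touches the payload arm of the match.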
@@ -1,134 +1,13 @@
 #![forbid(unsafe_code)]
-use arbiter_proto::{
-    proto::{
-        client::{ClientRequest, ClientResponse},
-        user_agent::{UserAgentRequest, UserAgentResponse},
-    },
-    transport::{IdentityRecvConverter, SendConverter, grpc},
-};
-use async_trait::async_trait;
-use tokio_stream::wrappers::ReceiverStream;
-
-use tokio::sync::mpsc;
-use tonic::{Request, Response, Status};
-use tracing::info;
-
-use crate::{
-    actors::{
-        client::{self, ClientError, ClientConnection as ClientConnectionProps, connect_client},
-        user_agent::{self, UserAgentConnection, UserAgentError, connect_user_agent},
-    },
-    context::ServerContext,
-};
+use crate::context::ServerContext;
 
 pub mod actors;
 pub mod context;
 pub mod db;
 pub mod evm;
+pub mod grpc;
+pub mod safe_cell;
+pub mod utils;
-
-const DEFAULT_CHANNEL_SIZE: usize = 1000;
-
-struct UserAgentGrpcSender;
-
-impl SendConverter for UserAgentGrpcSender {
-    type Input = Result<UserAgentResponse, UserAgentError>;
-    type Output = Result<UserAgentResponse, Status>;
-
-    fn convert(&self, item: Self::Input) -> Self::Output {
-        match item {
-            Ok(message) => Ok(message),
-            Err(err) => Err(user_agent_error_status(err)),
-        }
-    }
-}
-
-struct ClientGrpcSender;
-
-impl SendConverter for ClientGrpcSender {
-    type Input = Result<ClientResponse, ClientError>;
-    type Output = Result<ClientResponse, Status>;
-
-    fn convert(&self, item: Self::Input) -> Self::Output {
-        match item {
-            Ok(message) => Ok(message),
-            Err(err) => Err(client_error_status(err)),
-        }
-    }
-}
-
-fn client_error_status(value: ClientError) -> Status {
-    match value {
-        ClientError::MissingRequestPayload | ClientError::UnexpectedRequestPayload => {
-            Status::invalid_argument("Expected message with payload")
-        }
-        ClientError::StateTransitionFailed => Status::internal("State machine error"),
-        ClientError::Auth(ref err) => client_auth_error_status(err),
-        ClientError::ConnectionRegistrationFailed => {
-            Status::internal("Connection registration failed")
-        }
-    }
-}
-
-fn client_auth_error_status(value: &client::auth::Error) -> Status {
-    use client::auth::Error;
-    match value {
-        Error::UnexpectedMessagePayload | Error::InvalidClientPubkeyLength => {
-            Status::invalid_argument(value.to_string())
-        }
-        Error::InvalidAuthPubkeyEncoding => {
-            Status::invalid_argument("Failed to convert pubkey to VerifyingKey")
-        }
-        Error::InvalidSignatureLength => Status::invalid_argument("Invalid signature length"),
-        Error::PublicKeyNotRegistered | Error::InvalidChallengeSolution => {
-            Status::unauthenticated(value.to_string())
-        }
-        Error::Transport => Status::internal("Transport error"),
-        Error::DatabasePoolUnavailable => Status::internal("Database pool error"),
-        Error::DatabaseOperationFailed => Status::internal("Database error"),
-    }
-}
-
-fn user_agent_error_status(value: UserAgentError) -> Status {
-    match value {
-        UserAgentError::MissingRequestPayload | UserAgentError::UnexpectedRequestPayload => {
-            Status::invalid_argument("Expected message with payload")
-        }
-        UserAgentError::InvalidStateForUnsealEncryptedKey => {
-            Status::failed_precondition("Invalid state for unseal encrypted key")
-        }
-        UserAgentError::InvalidClientPubkeyLength => {
-            Status::invalid_argument("client_pubkey must be 32 bytes")
-        }
-        UserAgentError::StateTransitionFailed => Status::internal("State machine error"),
-        UserAgentError::KeyHolderActorUnreachable => Status::internal("Vault is not available"),
-        UserAgentError::Auth(ref err) => auth_error_status(err),
-        UserAgentError::ConnectionRegistrationFailed => {
-            Status::internal("Failed registering connection")
-        }
-    }
-}
-
-fn auth_error_status(value: &user_agent::auth::Error) -> Status {
-    use user_agent::auth::Error;
-    match value {
-        Error::UnexpectedMessagePayload | Error::InvalidClientPubkeyLength => {
-            Status::invalid_argument(value.to_string())
-        }
-        Error::InvalidAuthPubkeyEncoding => {
-            Status::invalid_argument("Failed to convert pubkey to VerifyingKey")
-        }
-        Error::PublicKeyNotRegistered | Error::InvalidChallengeSolution => {
-            Status::unauthenticated(value.to_string())
-        }
-        Error::InvalidBootstrapToken => Status::invalid_argument("Invalid bootstrap token"),
-        Error::Transport => Status::internal("Transport error"),
-        Error::BootstrapperActorUnreachable => {
-            Status::internal("Bootstrap token consumption failed")
-        }
-        Error::DatabasePoolUnavailable => Status::internal("Database pool error"),
-        Error::DatabaseOperationFailed => Status::internal("Database error"),
-    }
-}
-
 pub struct Server {
     context: ServerContext,
@@ -139,61 +18,3 @@ impl Server {
         Self { context }
     }
 }
-
-#[async_trait]
-impl arbiter_proto::proto::arbiter_service_server::ArbiterService for Server {
-    type UserAgentStream = ReceiverStream<Result<UserAgentResponse, Status>>;
-    type ClientStream = ReceiverStream<Result<ClientResponse, Status>>;
-
-    #[tracing::instrument(level = "debug", skip(self))]
-    async fn client(
-        &self,
-        request: Request<tonic::Streaming<ClientRequest>>,
-    ) -> Result<Response<Self::ClientStream>, Status> {
-        let req_stream = request.into_inner();
-        let (tx, rx) = mpsc::channel(DEFAULT_CHANNEL_SIZE);
-
-        let transport = grpc::GrpcAdapter::new(
-            tx,
-            req_stream,
-            IdentityRecvConverter::<ClientRequest>::new(),
-            ClientGrpcSender,
-        );
-        let props = ClientConnectionProps::new(
-            self.context.db.clone(),
-            Box::new(transport),
-            self.context.actors.clone(),
-        );
-        tokio::spawn(connect_client(props));
-
-        info!(event = "connection established", "grpc.client");
-
-        Ok(Response::new(ReceiverStream::new(rx)))
-    }
-
-    #[tracing::instrument(level = "debug", skip(self))]
-    async fn user_agent(
-        &self,
-        request: Request<tonic::Streaming<UserAgentRequest>>,
-    ) -> Result<Response<Self::UserAgentStream>, Status> {
-        let req_stream = request.into_inner();
-        let (tx, rx) = mpsc::channel(DEFAULT_CHANNEL_SIZE);
-
-        let transport = grpc::GrpcAdapter::new(
-            tx,
-            req_stream,
-            IdentityRecvConverter::<UserAgentRequest>::new(),
-            UserAgentGrpcSender,
-        );
-        let props = UserAgentConnection::new(
-            self.context.db.clone(),
-            self.context.actors.clone(),
-            Box::new(transport),
-        );
-        tokio::spawn(connect_user_agent(props));
-
-        info!(event = "connection established", "grpc.user_agent");
-
-        Ok(Response::new(ReceiverStream::new(rx)))
-    }
-}
@@ -3,6 +3,7 @@ use std::net::SocketAddr;
 use arbiter_proto::{proto::arbiter_service_server::ArbiterServiceServer, url::ArbiterUrl};
 use arbiter_server::{Server, actors::bootstrap::GetToken, context::ServerContext, db};
 use miette::miette;
+use rustls::crypto::aws_lc_rs;
 use tonic::transport::{Identity, ServerTlsConfig};
 use tracing::info;
 
@@ -10,6 +11,8 @@ const PORT: u16 = 50051;
 
 #[tokio::main]
 async fn main() -> miette::Result<()> {
+    aws_lc_rs::default_provider().install_default().unwrap();
+
     tracing_subscriber::fmt()
         .with_env_filter(
             tracing_subscriber::EnvFilter::try_from_default_env()
server/crates/arbiter-server/src/safe_cell.rs (new file, 111 lines)
@@ -0,0 +1,111 @@
use std::ops::{Deref, DerefMut};
use std::{any::type_name, fmt};

use memsafe::MemSafe;

pub trait SafeCellHandle<T> {
    type CellRead<'a>: Deref<Target = T>
    where
        Self: 'a,
        T: 'a;
    type CellWrite<'a>: Deref<Target = T> + DerefMut<Target = T>
    where
        Self: 'a,
        T: 'a;

    fn new(value: T) -> Self
    where
        Self: Sized;

    fn read(&mut self) -> Self::CellRead<'_>;
    fn write(&mut self) -> Self::CellWrite<'_>;

    fn new_inline<F>(f: F) -> Self
    where
        Self: Sized,
        T: Default,
        F: for<'a> FnOnce(&'a mut T),
    {
        let mut cell = Self::new(T::default());
        {
            let mut handle = cell.write();
            f(handle.deref_mut());
        }
        cell
    }

    #[inline(always)]
    fn read_inline<F, R>(&mut self, f: F) -> R
    where
        F: FnOnce(&T) -> R,
    {
        f(&*self.read())
    }

    #[inline(always)]
    fn write_inline<F, R>(&mut self, f: F) -> R
    where
        F: FnOnce(&mut T) -> R,
    {
        f(&mut *self.write())
    }
}

pub struct MemSafeCell<T>(MemSafe<T>);

impl<T> fmt::Debug for MemSafeCell<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("MemSafeCell")
            .field("inner", &format_args!("<protected {}>", type_name::<T>()))
            .finish()
    }
}

impl<T> SafeCellHandle<T> for MemSafeCell<T> {
    type CellRead<'a>
        = memsafe::MemSafeRead<'a, T>
    where
        Self: 'a,
        T: 'a;
    type CellWrite<'a>
        = memsafe::MemSafeWrite<'a, T>
    where
        Self: 'a,
        T: 'a;

    fn new(value: T) -> Self {
        match MemSafe::new(value) {
            Ok(inner) => Self(inner),
            Err(err) => {
                // If protected memory cannot be allocated, process integrity is compromised.
                abort_memory_breach("safe cell allocation", &err)
            }
        }
    }

    #[inline(always)]
    fn read(&mut self) -> Self::CellRead<'_> {
        match self.0.read() {
            Ok(inner) => inner,
            Err(err) => abort_memory_breach("safe cell read", &err),
        }
    }

    #[inline(always)]
    fn write(&mut self) -> Self::CellWrite<'_> {
        match self.0.write() {
            Ok(inner) => inner,
            Err(err) => {
                // If protected memory becomes unwritable here, treat it as a fatal memory breach.
                abort_memory_breach("safe cell write", &err)
            }
        }
    }
}

fn abort_memory_breach(action: &str, err: &memsafe::error::MemoryError) -> ! {
    eprintln!("fatal {action}: {err}");
    std::process::abort();
}

pub type SafeCell<T> = MemSafeCell<T>;
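`SafeCellHandle` uses generic associated types so each backing store can pick its own guard types for `read`/`write`. A minimal sketch of a second implementor, a hypothetical `PlainCell` backed by plain references with no memory protection, showing that the GAT bounds are satisfied by ordinary `&T`/`&mut T`:

```rust
use std::ops::{Deref, DerefMut};

// Trimmed copy of the trait above: read/write guards plus one inline helper.
trait SafeCellHandle<T> {
    type CellRead<'a>: Deref<Target = T>
    where
        Self: 'a,
        T: 'a;
    type CellWrite<'a>: Deref<Target = T> + DerefMut<Target = T>
    where
        Self: 'a,
        T: 'a;

    fn new(value: T) -> Self
    where
        Self: Sized;
    fn read(&mut self) -> Self::CellRead<'_>;
    fn write(&mut self) -> Self::CellWrite<'_>;

    fn write_inline<F, R>(&mut self, f: F) -> R
    where
        F: FnOnce(&mut T) -> R,
    {
        f(&mut *self.write())
    }
}

// Hypothetical toy backing with no protection: plain references are the guards.
struct PlainCell<T>(T);

impl<T> SafeCellHandle<T> for PlainCell<T> {
    type CellRead<'a>
        = &'a T
    where
        Self: 'a,
        T: 'a;
    type CellWrite<'a>
        = &'a mut T
    where
        Self: 'a,
        T: 'a;

    fn new(value: T) -> Self {
        Self(value)
    }
    fn read(&mut self) -> Self::CellRead<'_> {
        &self.0
    }
    fn write(&mut self) -> Self::CellWrite<'_> {
        &mut self.0
    }
}

fn demo() -> Vec<u8> {
    let mut cell = PlainCell::new(vec![1u8, 2, 3]);
    cell.write_inline(|v| v.push(4));
    cell.read().clone()
}

fn main() {
    assert_eq!(demo(), [1, 2, 3, 4]);
}
```

Callers written against `SafeCellHandle` (like the tests below that use `SafeCell::new` and `decrypted.read()`) are unaffected by which backing is chosen; only `MemSafeCell` adds the mlock-style protection and the abort-on-breach policy.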
server/crates/arbiter-server/src/utils.rs (new file, 16 lines)
@@ -0,0 +1,16 @@
struct DeferClosure<F: FnOnce()> {
    f: Option<F>,
}

impl<F: FnOnce()> Drop for DeferClosure<F> {
    fn drop(&mut self) {
        if let Some(f) = self.f.take() {
            f();
        }
    }
}

/// Runs `f` when the returned guard is dropped, similar to Go's defer statement.
/// Bind the result to a named variable; `let _ = defer(...)` drops the guard
/// (and runs `f`) immediately.
pub fn defer<F: FnOnce()>(f: F) -> impl Drop + Sized {
    DeferClosure { f: Some(f) }
}
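The guard only defers work while it stays alive, so binding matters: `_` is the one pattern in Rust that does not bind, dropping the value at the end of the statement. A self-contained sketch (the `run_order` harness is hypothetical, added just to observe ordering):

```rust
// Self-contained copy of the DeferClosure guard for illustration.
struct DeferClosure<F: FnOnce()> {
    f: Option<F>,
}

impl<F: FnOnce()> Drop for DeferClosure<F> {
    fn drop(&mut self) {
        if let Some(f) = self.f.take() {
            f();
        }
    }
}

fn defer<F: FnOnce()>(f: F) -> impl Drop {
    DeferClosure { f: Some(f) }
}

// Records the order in which the work and the deferred cleanup run.
fn run_order() -> Vec<&'static str> {
    use std::{cell::RefCell, rc::Rc};

    let order = Rc::new(RefCell::new(Vec::new()));
    {
        let o = Rc::clone(&order);
        // Named binding: the guard lives until the end of this block.
        let _guard = defer(move || o.borrow_mut().push("cleanup"));
        order.borrow_mut().push("work");
    } // `_guard` dropped here, so "cleanup" runs after "work".
    Rc::try_unwrap(order).unwrap().into_inner()
}

fn main() {
    assert_eq!(run_order(), ["work", "cleanup"]);
    // Pitfall: `let _ = defer(...)` would drop the guard, and therefore run
    // the closure, immediately, because `_` does not bind the value.
}
```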
@@ -1,12 +1,7 @@
-use arbiter_proto::proto::client::{
-    AuthChallengeRequest, AuthChallengeSolution, ClientRequest,
-    client_request::Payload as ClientRequestPayload,
-    client_response::Payload as ClientResponsePayload,
-};
-use arbiter_proto::transport::Bi;
+use arbiter_proto::transport::{Receiver, Sender};
 use arbiter_server::actors::GlobalActors;
 use arbiter_server::{
-    actors::client::{ClientConnection, connect_client},
+    actors::client::{ClientConnection, auth, connect_client},
     db::{self, schema},
 };
 use diesel::{ExpressionMethods as _, insert_into};
@@ -22,19 +17,17 @@ pub async fn test_unregistered_pubkey_rejected() {
 
     let (server_transport, mut test_transport) = ChannelTransport::new();
     let actors = GlobalActors::spawn(db.clone()).await.unwrap();
-    let props = ClientConnection::new(db.clone(), Box::new(server_transport), actors);
-    let task = tokio::spawn(connect_client(props));
+    let props = ClientConnection::new(db.clone(), actors);
+    let task = tokio::spawn(async move {
+        let mut server_transport = server_transport;
+        connect_client(props, &mut server_transport).await;
+    });
 
     let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
-    let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
 
     test_transport
-        .send(ClientRequest {
-            payload: Some(ClientRequestPayload::AuthChallengeRequest(
-                AuthChallengeRequest {
-                    pubkey: pubkey_bytes,
-                },
-            )),
+        .send(auth::Inbound::AuthChallengeRequest {
+            pubkey: new_key.verifying_key(),
         })
         .await
         .unwrap();
@@ -63,17 +56,16 @@ pub async fn test_challenge_auth() {
     let (server_transport, mut test_transport) = ChannelTransport::new();
     let actors = GlobalActors::spawn(db.clone()).await.unwrap();
 
-    let props = ClientConnection::new(db.clone(), Box::new(server_transport), actors);
-    let task = tokio::spawn(connect_client(props));
+    let props = ClientConnection::new(db.clone(), actors);
+    let task = tokio::spawn(async move {
+        let mut server_transport = server_transport;
+        connect_client(props, &mut server_transport).await;
+    });
 
     // Send challenge request
     test_transport
-        .send(ClientRequest {
-            payload: Some(ClientRequestPayload::AuthChallengeRequest(
-                AuthChallengeRequest {
-                    pubkey: pubkey_bytes,
-                },
-            )),
+        .send(auth::Inbound::AuthChallengeRequest {
+            pubkey: new_key.verifying_key(),
         })
         .await
         .unwrap();
@@ -84,28 +76,33 @@ pub async fn test_challenge_auth() {
         .await
         .expect("should receive challenge");
     let challenge = match response {
-        Ok(resp) => match resp.payload {
-            Some(ClientResponsePayload::AuthChallenge(c)) => c,
+        Ok(resp) => match resp {
+            auth::Outbound::AuthChallenge { pubkey, nonce } => (pubkey, nonce),
             other => panic!("Expected AuthChallenge, got {other:?}"),
         },
         Err(err) => panic!("Expected Ok response, got Err({err:?})"),
     };
 
     // Sign the challenge and send solution
-    let formatted_challenge = arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
+    let formatted_challenge = arbiter_proto::format_challenge(challenge.1, challenge.0.as_bytes());
     let signature = new_key.sign(&formatted_challenge);
 
     test_transport
-        .send(ClientRequest {
-            payload: Some(ClientRequestPayload::AuthChallengeSolution(
-                AuthChallengeSolution {
-                    signature: signature.to_bytes().to_vec(),
-                },
-            )),
-        })
+        .send(auth::Inbound::AuthChallengeSolution { signature })
         .await
         .unwrap();
 
+    let response = test_transport
+        .recv()
+        .await
+        .expect("should receive auth success");
+    match response {
+        Ok(auth::Outbound::AuthSuccess) => {}
+        Ok(other) => panic!("Expected AuthSuccess, got {other:?}"),
+        Err(err) => panic!("Expected Ok response, got Err({err:?})"),
+    }
+
     // Auth completes, session spawned
     task.await.unwrap();
 }
@@ -1,20 +1,19 @@
-use arbiter_proto::transport::{Bi, Error};
+use arbiter_proto::transport::{Bi, Error, Receiver, Sender};
 use arbiter_server::{
     actors::keyholder::KeyHolder,
     db::{self, schema},
+    safe_cell::{SafeCell, SafeCellHandle as _},
 };
 use async_trait::async_trait;
 use diesel::QueryDsl;
 use diesel_async::RunQueryDsl;
-use memsafe::MemSafe;
 use tokio::sync::mpsc;
 
 
 #[allow(dead_code)]
 pub async fn bootstrapped_keyholder(db: &db::DatabasePool) -> KeyHolder {
     let mut actor = KeyHolder::new(db.clone()).await.unwrap();
     actor
-        .bootstrap(MemSafe::new(b"test-seal-key".to_vec()).unwrap())
+        .bootstrap(SafeCell::new(b"test-seal-key".to_vec()))
         .await
         .unwrap();
     actor
@@ -31,13 +30,14 @@ pub async fn root_key_history_id(db: &db::DatabasePool) -> i32 {
     id.expect("root_key_id should be set after bootstrap")
 }
 
+#[allow(dead_code)]
 pub struct ChannelTransport<T, Y> {
     receiver: mpsc::Receiver<T>,
     sender: mpsc::Sender<Y>,
 }
 
 impl<T, Y> ChannelTransport<T, Y> {
+    #[allow(dead_code)]
     pub fn new() -> (Self, ChannelTransport<Y, T>) {
         let (tx1, rx1) = mpsc::channel(10);
         let (tx2, rx2) = mpsc::channel(10);
@@ -54,13 +54,11 @@ impl<T, Y> ChannelTransport<T, Y> {
     }
 }
 
-
-
 #[async_trait]
-impl<T, Y> Bi<T, Y> for ChannelTransport<T, Y>
+impl<T, Y> Sender<Y> for ChannelTransport<T, Y>
 where
-    T: Send + 'static,
-    Y: Send + 'static,
+    T: Send + Sync + 'static,
+    Y: Send + Sync + 'static,
 {
     async fn send(&mut self, item: Y) -> Result<(), Error> {
         self.sender
@@ -68,8 +66,22 @@ where
         .await
         .map_err(|_| Error::ChannelClosed)
     }
+}
+
+#[async_trait]
+impl<T, Y> Receiver<T> for ChannelTransport<T, Y>
+where
+    T: Send + Sync + 'static,
+    Y: Send + Sync + 'static,
+{
     async fn recv(&mut self) -> Option<T> {
         self.receiver.recv().await
     }
 }
+
+impl<T, Y> Bi<T, Y> for ChannelTransport<T, Y>
+where
+    T: Send + Sync + 'static,
+    Y: Send + Sync + 'static,
+{
+}
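The diff above splits the monolithic `Bi` impl into separate `Sender` and `Receiver` impls, with `Bi` left as an empty marker combining the two. A minimal synchronous sketch of the same pattern (using `std::sync::mpsc` and plain traits as hypothetical stand-ins for the async `arbiter_proto::transport` traits):

```rust
use std::sync::mpsc;

// Hypothetical stand-ins for the async Sender/Receiver transport traits.
trait TxHalf<Y> {
    fn send(&mut self, item: Y) -> Result<(), ()>;
}
trait RxHalf<T> {
    fn recv(&mut self) -> Option<T>;
}
// Empty marker trait combining both halves, mirroring `Bi<T, Y>`.
trait BiHalf<T, Y>: TxHalf<Y> + RxHalf<T> {}

struct ChannelTransport<T, Y> {
    receiver: mpsc::Receiver<T>,
    sender: mpsc::Sender<Y>,
}

impl<T, Y> ChannelTransport<T, Y> {
    // Build a connected pair: what one side sends, the other receives.
    fn new() -> (Self, ChannelTransport<Y, T>) {
        let (tx1, rx1) = mpsc::channel();
        let (tx2, rx2) = mpsc::channel();
        (
            ChannelTransport { receiver: rx1, sender: tx2 },
            ChannelTransport { receiver: rx2, sender: tx1 },
        )
    }
}

impl<T, Y> TxHalf<Y> for ChannelTransport<T, Y> {
    fn send(&mut self, item: Y) -> Result<(), ()> {
        self.sender.send(item).map_err(|_| ())
    }
}
impl<T, Y> RxHalf<T> for ChannelTransport<T, Y> {
    fn recv(&mut self) -> Option<T> {
        self.receiver.recv().ok()
    }
}
impl<T, Y> BiHalf<T, Y> for ChannelTransport<T, Y> {}

fn round_trip() -> (Option<u32>, Option<String>) {
    let (mut a, mut b) = ChannelTransport::<String, u32>::new();
    a.send(7).unwrap();
    let at_b = b.recv();
    b.send("ack".to_string()).unwrap();
    (at_b, a.recv())
}

fn main() {
    assert_eq!(round_trip(), (Some(7), Some("ack".to_string())));
}
```

Splitting the traits lets code such as the auth flow borrow only the half it needs (e.g. a receive loop takes `impl RxHalf<T>`), while the empty `BiHalf` bound still expresses "full-duplex transport" without duplicating methods.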
@@ -3,11 +3,11 @@ use std::collections::{HashMap, HashSet};
 use arbiter_server::{
     actors::keyholder::{CreateNew, Error, KeyHolder},
     db::{self, models, schema},
+    safe_cell::{SafeCell, SafeCellHandle as _},
 };
 use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::sql_query};
 use diesel_async::RunQueryDsl;
 use kameo::actor::{ActorRef, Spawn as _};
-use memsafe::MemSafe;
 use tokio::task::JoinSet;
 
 use crate::common;
@@ -24,7 +24,7 @@ async fn write_concurrently(
         let plaintext = format!("{prefix}-{i}").into_bytes();
         let id = actor
             .ask(CreateNew {
-                plaintext: MemSafe::new(plaintext.clone()).unwrap(),
+                plaintext: SafeCell::new(plaintext.clone()),
             })
             .await
             .unwrap();
@@ -118,7 +118,7 @@ async fn insert_failure_does_not_create_partial_row() {
     drop(conn);
 
     let err = actor
-        .create_new(MemSafe::new(b"should fail".to_vec()).unwrap())
+        .create_new(SafeCell::new(b"should fail".to_vec()))
         .await
         .unwrap_err();
     assert!(matches!(err, Error::DatabaseTransaction(_)));
@@ -162,12 +162,12 @@ async fn decrypt_roundtrip_after_high_concurrency() {
 
     let mut decryptor = KeyHolder::new(db.clone()).await.unwrap();
     decryptor
-        .try_unseal(MemSafe::new(b"test-seal-key".to_vec()).unwrap())
+        .try_unseal(SafeCell::new(b"test-seal-key".to_vec()))
        .await
         .unwrap();
 
     for (id, plaintext) in expected {
         let mut decrypted = decryptor.decrypt(id).await.unwrap();
-        assert_eq!(*decrypted.read().unwrap(), plaintext);
+        assert_eq!(*decrypted.read(), plaintext);
     }
 }
@@ -1,10 +1,10 @@
 use arbiter_server::{
     actors::keyholder::{Error, KeyHolder},
     db::{self, models, schema},
+    safe_cell::{SafeCell, SafeCellHandle as _},
 };
 use diesel::{QueryDsl, SelectableHelper};
 use diesel_async::RunQueryDsl;
-use memsafe::MemSafe;
 
 use crate::common;
 
@@ -14,7 +14,7 @@ async fn test_bootstrap() {
     let db = db::create_test_pool().await;
     let mut actor = KeyHolder::new(db.clone()).await.unwrap();
 
-    let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
+    let seal_key = SafeCell::new(b"test-seal-key".to_vec());
     actor.bootstrap(seal_key).await.unwrap();
 
     let mut conn = db.get().await.unwrap();
@@ -43,7 +43,7 @@ async fn test_bootstrap_rejects_double() {
     let db = db::create_test_pool().await;
     let mut actor = common::bootstrapped_keyholder(&db).await;
 
-    let seal_key2 = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
+    let seal_key2 = SafeCell::new(b"test-seal-key".to_vec());
     let err = actor.bootstrap(seal_key2).await.unwrap_err();
     assert!(matches!(err, Error::AlreadyBootstrapped));
 }
@@ -55,7 +55,7 @@ async fn test_create_new_before_bootstrap_fails() {
     let mut actor = KeyHolder::new(db).await.unwrap();
 
     let err = actor
-        .create_new(MemSafe::new(b"data".to_vec()).unwrap())
+        .create_new(SafeCell::new(b"data".to_vec()))
         .await
         .unwrap_err();
     assert!(matches!(err, Error::NotBootstrapped));
@@ -91,17 +91,17 @@ async fn test_unseal_correct_password() {
 
     let plaintext = b"survive a restart";
     let aead_id = actor
-        .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
+        .create_new(SafeCell::new(plaintext.to_vec()))
         .await
         .unwrap();
     drop(actor);
 
     let mut actor = KeyHolder::new(db.clone()).await.unwrap();
-    let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
+    let seal_key = SafeCell::new(b"test-seal-key".to_vec());
     actor.try_unseal(seal_key).await.unwrap();
 
     let mut decrypted = actor.decrypt(aead_id).await.unwrap();
||||||
assert_eq!(*decrypted.read().unwrap(), plaintext);
|
assert_eq!(*decrypted.read(), plaintext);
|
||||||
}
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
@@ -112,20 +112,20 @@ async fn test_unseal_wrong_then_correct_password() {
|
|||||||
|
|
||||||
let plaintext = b"important data";
|
let plaintext = b"important data";
|
||||||
let aead_id = actor
|
let aead_id = actor
|
||||||
.create_new(MemSafe::new(plaintext.to_vec()).unwrap())
|
.create_new(SafeCell::new(plaintext.to_vec()))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
drop(actor);
|
drop(actor);
|
||||||
|
|
||||||
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
|
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
|
||||||
|
|
||||||
let bad_key = MemSafe::new(b"wrong-password".to_vec()).unwrap();
|
let bad_key = SafeCell::new(b"wrong-password".to_vec());
|
||||||
let err = actor.try_unseal(bad_key).await.unwrap_err();
|
let err = actor.try_unseal(bad_key).await.unwrap_err();
|
||||||
assert!(matches!(err, Error::InvalidKey));
|
assert!(matches!(err, Error::InvalidKey));
|
||||||
|
|
||||||
let good_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
|
let good_key = SafeCell::new(b"test-seal-key".to_vec());
|
||||||
actor.try_unseal(good_key).await.unwrap();
|
actor.try_unseal(good_key).await.unwrap();
|
||||||
|
|
||||||
let mut decrypted = actor.decrypt(aead_id).await.unwrap();
|
let mut decrypted = actor.decrypt(aead_id).await.unwrap();
|
||||||
assert_eq!(*decrypted.read().unwrap(), plaintext);
|
assert_eq!(*decrypted.read(), plaintext);
|
||||||
}
|
}
|
||||||

@@ -3,10 +3,10 @@ use std::collections::HashSet;
 use arbiter_server::{
     actors::keyholder::{Error, encryption::v1},
     db::{self, models, schema},
+    safe_cell::{SafeCell, SafeCellHandle as _},
 };
 use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::update};
 use diesel_async::RunQueryDsl;
-use memsafe::MemSafe;
 
 use crate::common;
 
@@ -18,12 +18,12 @@ async fn test_create_decrypt_roundtrip() {
 
     let plaintext = b"hello arbiter";
     let aead_id = actor
-        .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
+        .create_new(SafeCell::new(plaintext.to_vec()))
         .await
         .unwrap();
 
     let mut decrypted = actor.decrypt(aead_id).await.unwrap();
-    assert_eq!(*decrypted.read().unwrap(), plaintext);
+    assert_eq!(*decrypted.read(), plaintext);
 }
 
 #[tokio::test]
@@ -44,11 +44,11 @@ async fn test_ciphertext_differs_across_entries() {
 
     let plaintext = b"same content";
     let id1 = actor
-        .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
+        .create_new(SafeCell::new(plaintext.to_vec()))
         .await
         .unwrap();
     let id2 = actor
-        .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
+        .create_new(SafeCell::new(plaintext.to_vec()))
         .await
         .unwrap();
 
@@ -70,8 +70,8 @@ async fn test_ciphertext_differs_across_entries() {
 
     let mut d1 = actor.decrypt(id1).await.unwrap();
     let mut d2 = actor.decrypt(id2).await.unwrap();
-    assert_eq!(*d1.read().unwrap(), plaintext);
-    assert_eq!(*d2.read().unwrap(), plaintext);
+    assert_eq!(*d1.read(), plaintext);
+    assert_eq!(*d2.read(), plaintext);
 }
 
 #[tokio::test]
@@ -83,7 +83,7 @@ async fn test_nonce_never_reused() {
     let n = 5;
     for i in 0..n {
         actor
-            .create_new(MemSafe::new(format!("secret {i}").into_bytes()).unwrap())
+            .create_new(SafeCell::new(format!("secret {i}").into_bytes()))
            .await
            .unwrap();
     }
@@ -137,7 +137,7 @@ async fn broken_db_nonce_format_fails_closed() {
     drop(conn);
 
     let err = actor
-        .create_new(MemSafe::new(b"must fail".to_vec()).unwrap())
+        .create_new(SafeCell::new(b"must fail".to_vec()))
         .await
         .unwrap_err();
     assert!(matches!(err, Error::BrokenDatabase));
@@ -145,7 +145,7 @@ async fn broken_db_nonce_format_fails_closed() {
     let db = db::create_test_pool().await;
     let mut actor = common::bootstrapped_keyholder(&db).await;
     let id = actor
-        .create_new(MemSafe::new(b"decrypt target".to_vec()).unwrap())
+        .create_new(SafeCell::new(b"decrypt target".to_vec()))
         .await
         .unwrap();
     let mut conn = db.get().await.unwrap();

@@ -1,14 +1,9 @@
-use arbiter_proto::proto::user_agent::{
-    AuthChallengeRequest, AuthChallengeSolution, UserAgentRequest,
-    user_agent_request::Payload as UserAgentRequestPayload,
-    user_agent_response::Payload as UserAgentResponsePayload,
-};
-use arbiter_proto::transport::Bi;
+use arbiter_proto::transport::{Receiver, Sender};
 use arbiter_server::{
     actors::{
         GlobalActors,
         bootstrap::GetToken,
-        user_agent::{UserAgentConnection, connect_user_agent},
+        user_agent::{AuthPublicKey, UserAgentConnection, auth},
     },
     db::{self, schema},
 };
@@ -26,25 +21,31 @@ pub async fn test_bootstrap_token_auth() {
     let token = actors.bootstrapper.ask(GetToken).await.unwrap().unwrap();
 
     let (server_transport, mut test_transport) = ChannelTransport::new();
-    let props = UserAgentConnection::new(db.clone(), actors, Box::new(server_transport));
-    let task = tokio::spawn(connect_user_agent(props));
+    let db_for_task = db.clone();
+    let task = tokio::spawn(async move {
+        let mut props = UserAgentConnection::new(db_for_task, actors);
+        auth::authenticate(&mut props, server_transport).await
+    });
 
     let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
-    let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
 
     test_transport
-        .send(UserAgentRequest {
-            payload: Some(UserAgentRequestPayload::AuthChallengeRequest(
-                AuthChallengeRequest {
-                    pubkey: pubkey_bytes,
-                    bootstrap_token: Some(token),
-                },
-            )),
+        .send(auth::Inbound::AuthChallengeRequest {
+            pubkey: AuthPublicKey::Ed25519(new_key.verifying_key()),
+            bootstrap_token: Some(token),
         })
         .await
         .unwrap();
 
-    task.await.unwrap();
+    let response = test_transport
+        .recv()
+        .await
+        .expect("should receive auth result");
+    match response {
+        Ok(auth::Outbound::AuthSuccess) => {}
+        other => panic!("Expected AuthSuccess, got {other:?}"),
+    }
+
+    task.await.unwrap().unwrap();
 
     let mut conn = db.get().await.unwrap();
     let stored_pubkey: Vec<u8> = schema::useragent_client::table
@@ -62,26 +63,25 @@ pub async fn test_bootstrap_invalid_token_auth() {
     let actors = GlobalActors::spawn(db.clone()).await.unwrap();
 
     let (server_transport, mut test_transport) = ChannelTransport::new();
-    let props = UserAgentConnection::new(db.clone(), actors, Box::new(server_transport));
-    let task = tokio::spawn(connect_user_agent(props));
+    let db_for_task = db.clone();
+    let task = tokio::spawn(async move {
+        let mut props = UserAgentConnection::new(db_for_task, actors);
+        auth::authenticate(&mut props, server_transport).await
+    });
 
     let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
-    let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
 
     test_transport
-        .send(UserAgentRequest {
-            payload: Some(UserAgentRequestPayload::AuthChallengeRequest(
-                AuthChallengeRequest {
-                    pubkey: pubkey_bytes,
-                    bootstrap_token: Some("invalid_token".to_string()),
-                },
-            )),
+        .send(auth::Inbound::AuthChallengeRequest {
+            pubkey: AuthPublicKey::Ed25519(new_key.verifying_key()),
+            bootstrap_token: Some("invalid_token".to_string()),
         })
         .await
         .unwrap();
 
-    // Auth fails, connect_user_agent returns, transport drops
-    task.await.unwrap();
+    assert!(matches!(
+        task.await.unwrap(),
+        Err(auth::Error::InvalidBootstrapToken)
+    ));
 
     // Verify no key was registered
     let mut conn = db.get().await.unwrap();
@@ -102,28 +102,31 @@ pub async fn test_challenge_auth() {
     let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
     let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
 
+    // Pre-register key with key_type
     {
         let mut conn = db.get().await.unwrap();
         insert_into(schema::useragent_client::table)
-            .values(schema::useragent_client::public_key.eq(pubkey_bytes.clone()))
+            .values((
+                schema::useragent_client::public_key.eq(pubkey_bytes.clone()),
+                schema::useragent_client::key_type.eq(1i32),
+            ))
             .execute(&mut conn)
             .await
             .unwrap();
     }
 
     let (server_transport, mut test_transport) = ChannelTransport::new();
-    let props = UserAgentConnection::new(db.clone(), actors, Box::new(server_transport));
-    let task = tokio::spawn(connect_user_agent(props));
+    let db_for_task = db.clone();
+    let task = tokio::spawn(async move {
+        let mut props = UserAgentConnection::new(db_for_task, actors);
+        auth::authenticate(&mut props, server_transport).await
+    });
 
     // Send challenge request
     test_transport
-        .send(UserAgentRequest {
-            payload: Some(UserAgentRequestPayload::AuthChallengeRequest(
-                AuthChallengeRequest {
-                    pubkey: pubkey_bytes,
-                    bootstrap_token: None,
-                },
-            )),
+        .send(auth::Inbound::AuthChallengeRequest {
+            pubkey: AuthPublicKey::Ed25519(new_key.verifying_key()),
+            bootstrap_token: None,
        })
        .await
        .unwrap();
@@ -134,28 +137,31 @@ pub async fn test_challenge_auth() {
         .await
         .expect("should receive challenge");
     let challenge = match response {
-        Ok(resp) => match resp.payload {
-            Some(UserAgentResponsePayload::AuthChallenge(c)) => c,
+        Ok(resp) => match resp {
+            auth::Outbound::AuthChallenge { nonce } => nonce,
             other => panic!("Expected AuthChallenge, got {other:?}"),
         },
         Err(err) => panic!("Expected Ok response, got Err({err:?})"),
     };
 
-    // Sign the challenge and send solution
-    let formatted_challenge = arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
+    let formatted_challenge = arbiter_proto::format_challenge(challenge, &pubkey_bytes);
     let signature = new_key.sign(&formatted_challenge);
 
     test_transport
-        .send(UserAgentRequest {
-            payload: Some(UserAgentRequestPayload::AuthChallengeSolution(
-                AuthChallengeSolution {
-                    signature: signature.to_bytes().to_vec(),
-                },
-            )),
+        .send(auth::Inbound::AuthChallengeSolution {
+            signature: signature.to_bytes().to_vec(),
         })
         .await
         .unwrap();
 
-    // Auth completes, session spawned
-    task.await.unwrap();
+    let response = test_transport
+        .recv()
+        .await
+        .expect("should receive auth result");
+    match response {
+        Ok(auth::Outbound::AuthSuccess) => {}
+        other => panic!("Expected AuthSuccess, got {other:?}"),
+    }
+
+    task.await.unwrap().unwrap();
 }

@@ -1,63 +1,55 @@
-use arbiter_proto::proto::user_agent::{
-    UnsealEncryptedKey, UnsealResult, UnsealStart, UserAgentRequest,
-    user_agent_request::Payload as UserAgentRequestPayload,
-    user_agent_response::Payload as UserAgentResponsePayload,
-};
 use arbiter_server::{
     actors::{
         GlobalActors,
         keyholder::{Bootstrap, Seal},
-        user_agent::session::UserAgentSession,
+        user_agent::session::{
+            HandleUnsealEncryptedKey, HandleUnsealRequest, UnsealError, UserAgentSession,
+        },
     },
     db,
+    safe_cell::{SafeCell, SafeCellHandle as _},
 };
 use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
-use memsafe::MemSafe;
+use kameo::actor::Spawn as _;
 use x25519_dalek::{EphemeralSecret, PublicKey};
 
 async fn setup_sealed_user_agent(
     seal_key: &[u8],
-) -> (db::DatabasePool, UserAgentSession) {
+) -> (db::DatabasePool, kameo::actor::ActorRef<UserAgentSession>) {
     let db = db::create_test_pool().await;
     let actors = GlobalActors::spawn(db.clone()).await.unwrap();
 
     actors
         .key_holder
         .ask(Bootstrap {
-            seal_key_raw: MemSafe::new(seal_key.to_vec()).unwrap(),
+            seal_key_raw: SafeCell::new(seal_key.to_vec()),
         })
         .await
         .unwrap();
     actors.key_holder.ask(Seal).await.unwrap();
 
-    let session = UserAgentSession::new_test(db.clone(), actors);
+    let session = UserAgentSession::spawn(UserAgentSession::new_test(db.clone(), actors));
 
     (db, session)
 }
 
 async fn client_dh_encrypt(
-    user_agent: &mut UserAgentSession,
+    user_agent: &kameo::actor::ActorRef<UserAgentSession>,
     key_to_send: &[u8],
-) -> UnsealEncryptedKey {
+) -> HandleUnsealEncryptedKey {
     let client_secret = EphemeralSecret::random();
     let client_public = PublicKey::from(&client_secret);
 
     let response = user_agent
-        .process_transport_inbound(UserAgentRequest {
-            payload: Some(UserAgentRequestPayload::UnsealStart(UnsealStart {
-                client_pubkey: client_public.as_bytes().to_vec(),
-            })),
+        .ask(HandleUnsealRequest {
+            client_pubkey: client_public,
         })
         .await
         .unwrap();
 
-    let server_pubkey = match response.payload.unwrap() {
-        UserAgentResponsePayload::UnsealStartResponse(resp) => resp.server_pubkey,
-        other => panic!("Expected UnsealStartResponse, got {other:?}"),
-    };
-    let server_public = PublicKey::from(<[u8; 32]>::try_from(server_pubkey.as_slice()).unwrap());
+    let server_pubkey = response.server_pubkey;
 
-    let shared_secret = client_secret.diffie_hellman(&server_public);
+    let shared_secret = client_secret.diffie_hellman(&server_pubkey);
     let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());
     let nonce = XNonce::from([0u8; 24]);
     let associated_data = b"unseal";
@@ -66,119 +58,94 @@ async fn client_dh_encrypt(
         .encrypt_in_place(&nonce, associated_data, &mut ciphertext)
         .unwrap();
 
-    UnsealEncryptedKey {
+    HandleUnsealEncryptedKey {
         nonce: nonce.to_vec(),
         ciphertext,
         associated_data: associated_data.to_vec(),
     }
 }
 
-fn unseal_key_request(req: UnsealEncryptedKey) -> UserAgentRequest {
-    UserAgentRequest {
-        payload: Some(UserAgentRequestPayload::UnsealEncryptedKey(req)),
-    }
-}
-
 #[tokio::test]
 #[test_log::test]
 pub async fn test_unseal_success() {
     let seal_key = b"test-seal-key";
-    let (_db, mut user_agent) = setup_sealed_user_agent(seal_key).await;
+    let (_db, user_agent) = setup_sealed_user_agent(seal_key).await;
 
-    let encrypted_key = client_dh_encrypt(&mut user_agent, seal_key).await;
+    let encrypted_key = client_dh_encrypt(&user_agent, seal_key).await;
 
-    let response = user_agent
-        .process_transport_inbound(unseal_key_request(encrypted_key))
-        .await
-        .unwrap();
-
-    assert_eq!(
-        response.payload.unwrap(),
-        UserAgentResponsePayload::UnsealResult(UnsealResult::Success.into()),
-    );
+    let response = user_agent.ask(encrypted_key).await;
+    assert!(matches!(response, Ok(())));
 }
 
 #[tokio::test]
 #[test_log::test]
 pub async fn test_unseal_wrong_seal_key() {
-    let (_db, mut user_agent) = setup_sealed_user_agent(b"correct-key").await;
+    let (_db, user_agent) = setup_sealed_user_agent(b"correct-key").await;
 
-    let encrypted_key = client_dh_encrypt(&mut user_agent, b"wrong-key").await;
+    let encrypted_key = client_dh_encrypt(&user_agent, b"wrong-key").await;
 
-    let response = user_agent
-        .process_transport_inbound(unseal_key_request(encrypted_key))
-        .await
-        .unwrap();
-
-    assert_eq!(
-        response.payload.unwrap(),
-        UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
-    );
+    let response = user_agent.ask(encrypted_key).await;
+    assert!(matches!(
+        response,
+        Err(kameo::error::SendError::HandlerError(
+            UnsealError::InvalidKey
+        ))
+    ));
 }
 
 #[tokio::test]
 #[test_log::test]
 pub async fn test_unseal_corrupted_ciphertext() {
-    let (_db, mut user_agent) = setup_sealed_user_agent(b"test-key").await;
+    let (_db, user_agent) = setup_sealed_user_agent(b"test-key").await;
 
     let client_secret = EphemeralSecret::random();
     let client_public = PublicKey::from(&client_secret);
 
     user_agent
-        .process_transport_inbound(UserAgentRequest {
-            payload: Some(UserAgentRequestPayload::UnsealStart(UnsealStart {
-                client_pubkey: client_public.as_bytes().to_vec(),
-            })),
+        .ask(HandleUnsealRequest {
+            client_pubkey: client_public,
         })
         .await
         .unwrap();
 
     let response = user_agent
-        .process_transport_inbound(unseal_key_request(UnsealEncryptedKey {
+        .ask(HandleUnsealEncryptedKey {
             nonce: vec![0u8; 24],
             ciphertext: vec![0u8; 32],
             associated_data: vec![],
-        }))
-        .await
-        .unwrap();
+        })
+        .await;
 
-    assert_eq!(
-        response.payload.unwrap(),
-        UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
-    );
+    assert!(matches!(
+        response,
+        Err(kameo::error::SendError::HandlerError(
+            UnsealError::InvalidKey
+        ))
+    ));
 }
 
 #[tokio::test]
 #[test_log::test]
 pub async fn test_unseal_retry_after_invalid_key() {
     let seal_key = b"real-seal-key";
-    let (_db, mut user_agent) = setup_sealed_user_agent(seal_key).await;
+    let (_db, user_agent) = setup_sealed_user_agent(seal_key).await;
 
     {
-        let encrypted_key = client_dh_encrypt(&mut user_agent, b"wrong-key").await;
+        let encrypted_key = client_dh_encrypt(&user_agent, b"wrong-key").await;
 
-        let response = user_agent
-            .process_transport_inbound(unseal_key_request(encrypted_key))
-            .await
-            .unwrap();
-
-        assert_eq!(
-            response.payload.unwrap(),
-            UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
-        );
+        let response = user_agent.ask(encrypted_key).await;
+        assert!(matches!(
+            response,
+            Err(kameo::error::SendError::HandlerError(
+                UnsealError::InvalidKey
+            ))
+        ));
     }
 
     {
-        let encrypted_key = client_dh_encrypt(&mut user_agent, seal_key).await;
+        let encrypted_key = client_dh_encrypt(&user_agent, seal_key).await;
 
-        let response = user_agent
-            .process_transport_inbound(unseal_key_request(encrypted_key))
-            .await
-            .unwrap();
-
-        assert_eq!(
-            response.payload.unwrap(),
-            UserAgentResponsePayload::UnsealResult(UnsealResult::Success.into()),
-        );
+        let response = user_agent.ask(encrypted_key).await;
+        assert!(matches!(response, Ok(())));
     }
 }

@@ -1,21 +0,0 @@
-[package]
-name = "arbiter-useragent"
-version = "0.1.0"
-edition = "2024"
-license = "Apache-2.0"
-
-[dependencies]
-arbiter-proto.path = "../arbiter-proto"
-kameo.workspace = true
-tokio = {workspace = true, features = ["net"]}
-tonic.workspace = true
-tonic.features = ["tls-aws-lc"]
-tracing.workspace = true
-ed25519-dalek.workspace = true
-smlang.workspace = true
-x25519-dalek.workspace = true
-thiserror.workspace = true
-tokio-stream.workspace = true
-http = "1.4.0"
-rustls-webpki = { version = "0.103.9", features = ["aws-lc-rs"] }
-async-trait.workspace = true
@@ -1,72 +0,0 @@
-use arbiter_proto::{
-    proto::{
-        user_agent::{UserAgentRequest, UserAgentResponse},
-        arbiter_service_client::ArbiterServiceClient,
-    },
-    transport::{IdentityRecvConverter, IdentitySendConverter, grpc},
-    url::ArbiterUrl,
-};
-use ed25519_dalek::SigningKey;
-use kameo::actor::{ActorRef, Spawn};
-
-use tokio::sync::mpsc;
-use tokio_stream::wrappers::ReceiverStream;
-
-use tonic::transport::ClientTlsConfig;
-
-
-#[derive(Debug, thiserror::Error)]
-pub enum ConnectError {
-    #[error("Could establish connection")]
-    Connection(#[from] tonic::transport::Error),
-
-    #[error("Invalid server URI")]
-    InvalidUri(#[from] http::uri::InvalidUri),
-
-    #[error("Invalid CA certificate")]
-    InvalidCaCert(#[from] webpki::Error),
-
-    #[error("gRPC error")]
-    Grpc(#[from] tonic::Status),
-}
-
-use super::UserAgentActor;
-
-pub type UserAgentGrpc = ActorRef<
-    UserAgentActor<
-        grpc::GrpcAdapter<
-            IdentityRecvConverter<UserAgentResponse>,
-            IdentitySendConverter<UserAgentRequest>,
-        >,
-    >,
->;
-pub async fn connect_grpc(
-    url: ArbiterUrl,
-    key: SigningKey,
-) -> Result<UserAgentGrpc, ConnectError> {
-    let bootstrap_token = url.bootstrap_token.clone();
-    let anchor = webpki::anchor_from_trusted_cert(&url.ca_cert)?.to_owned();
-    let tls = ClientTlsConfig::new().trust_anchor(anchor);
-
-    // TODO: if `host` is localhost, we need to verify server's process authenticity
-    let channel = tonic::transport::Channel::from_shared(format!("{}:{}", url.host, url.port))?
-        .tls_config(tls)?
-        .connect()
-        .await?;
-
-    let mut client = ArbiterServiceClient::new(channel);
-    let (tx, rx) = mpsc::channel(16);
-    let bistream = client.user_agent(ReceiverStream::new(rx)).await?;
-    let bistream = bistream.into_inner();
-
-    let adapter = grpc::GrpcAdapter::new(
-        tx,
-        bistream,
-        IdentityRecvConverter::new(),
-        IdentitySendConverter::new(),
-    );
-
-    let actor = UserAgentActor::spawn(UserAgentActor::new(key, bootstrap_token, adapter));
-
-    Ok(actor)
-}
@@ -1,195 +0,0 @@
use arbiter_proto::{
    format_challenge,
    proto::user_agent::{
        AuthChallengeRequest, AuthChallengeSolution, AuthOk,
        UserAgentRequest, UserAgentResponse,
        user_agent_request::Payload as UserAgentRequestPayload,
        user_agent_response::Payload as UserAgentResponsePayload,
    },
    transport::Bi,
};
use ed25519_dalek::{Signer, SigningKey};
use kameo::{Actor, actor::ActorRef};
use smlang::statemachine;
use tokio::select;
use tracing::{error, info};

statemachine! {
    name: UserAgent,
    custom_error: false,
    transitions: {
        *Init + SentAuthChallengeRequest = WaitingForServerAuth,
        WaitingForServerAuth + ReceivedAuthChallenge = WaitingForAuthOk,
        WaitingForServerAuth + ReceivedAuthOk = Authenticated,
        WaitingForAuthOk + ReceivedAuthOk = Authenticated,
    }
}

pub struct DummyContext;
impl UserAgentStateMachineContext for DummyContext {}

#[derive(Debug, thiserror::Error)]
pub enum InboundError {
    #[error("Invalid user agent response")]
    InvalidResponse,
    #[error("Expected response payload")]
    MissingResponsePayload,
    #[error("Unexpected response payload")]
    UnexpectedResponsePayload,
    #[error("Invalid state for auth challenge")]
    InvalidStateForAuthChallenge,
    #[error("Invalid state for auth ok")]
    InvalidStateForAuthOk,
    #[error("State machine error")]
    StateTransitionFailed,
    #[error("Transport send failed")]
    TransportSendFailed,
}

pub struct UserAgentActor<Transport>
where
    Transport: Bi<UserAgentResponse, UserAgentRequest>,
{
    key: SigningKey,
    bootstrap_token: Option<String>,
    state: UserAgentStateMachine<DummyContext>,
    transport: Transport,
}

impl<Transport> UserAgentActor<Transport>
where
    Transport: Bi<UserAgentResponse, UserAgentRequest>,
{
    pub fn new(key: SigningKey, bootstrap_token: Option<String>, transport: Transport) -> Self {
        Self {
            key,
            bootstrap_token,
            state: UserAgentStateMachine::new(DummyContext),
            transport,
        }
    }

    fn transition(&mut self, event: UserAgentEvents) -> Result<(), InboundError> {
        self.state.process_event(event).map_err(|e| {
            error!(?e, "useragent state transition failed");
            InboundError::StateTransitionFailed
        })?;
        Ok(())
    }

    async fn send_auth_challenge_request(&mut self) -> Result<(), InboundError> {
        let req = AuthChallengeRequest {
            pubkey: self.key.verifying_key().to_bytes().to_vec(),
            bootstrap_token: self.bootstrap_token.take(),
        };

        self.transition(UserAgentEvents::SentAuthChallengeRequest)?;

        self.transport
            .send(UserAgentRequest {
                payload: Some(UserAgentRequestPayload::AuthChallengeRequest(req)),
            })
            .await
            .map_err(|_| InboundError::TransportSendFailed)?;

        info!(actor = "useragent", "auth.request.sent");
        Ok(())
    }

    async fn handle_auth_challenge(
        &mut self,
        challenge: arbiter_proto::proto::user_agent::AuthChallenge,
    ) -> Result<(), InboundError> {
        self.transition(UserAgentEvents::ReceivedAuthChallenge)?;

        let formatted = format_challenge(challenge.nonce, &challenge.pubkey);
        let signature = self.key.sign(&formatted);
        let solution = AuthChallengeSolution {
            signature: signature.to_bytes().to_vec(),
        };

        self.transport
            .send(UserAgentRequest {
                payload: Some(UserAgentRequestPayload::AuthChallengeSolution(solution)),
            })
            .await
            .map_err(|_| InboundError::TransportSendFailed)?;

        info!(actor = "useragent", "auth.solution.sent");
        Ok(())
    }

    fn handle_auth_ok(&mut self, _ok: AuthOk) -> Result<(), InboundError> {
        self.transition(UserAgentEvents::ReceivedAuthOk)?;
        info!(actor = "useragent", "auth.ok");
        Ok(())
    }

    pub async fn process_inbound_transport(
        &mut self,
        inbound: UserAgentResponse,
    ) -> Result<(), InboundError> {
        let payload = inbound
            .payload
            .ok_or(InboundError::MissingResponsePayload)?;

        match payload {
            UserAgentResponsePayload::AuthChallenge(challenge) => {
                self.handle_auth_challenge(challenge).await
            }
            UserAgentResponsePayload::AuthOk(ok) => self.handle_auth_ok(ok),
            _ => Err(InboundError::UnexpectedResponsePayload),
        }
    }
}

impl<Transport> Actor for UserAgentActor<Transport>
where
    Transport: Bi<UserAgentResponse, UserAgentRequest>,
{
    type Args = Self;

    type Error = ();

    async fn on_start(
        mut args: Self::Args,
        _actor_ref: ActorRef<Self>,
    ) -> Result<Self, Self::Error> {
        if let Err(err) = args.send_auth_challenge_request().await {
            error!(?err, actor = "useragent", "auth.start.failed");
            return Err(());
        }
        Ok(args)
    }

    async fn next(
        &mut self,
        _actor_ref: kameo::prelude::WeakActorRef<Self>,
        mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
    ) -> Option<kameo::mailbox::Signal<Self>> {
        loop {
            select! {
                signal = mailbox_rx.recv() => {
                    return signal;
                }
                inbound = self.transport.recv() => {
                    match inbound {
                        Some(inbound) => {
                            if let Err(err) = self.process_inbound_transport(inbound).await {
                                error!(?err, actor = "useragent", "transport.inbound.failed");
                                return Some(kameo::mailbox::Signal::Stop);
                            }
                        }
                        None => {
                            info!(actor = "useragent", "transport.closed");
                            return Some(kameo::mailbox::Signal::Stop);
                        }
                    }
                }
            }
        }
    }
}

mod grpc;
pub use grpc::{connect_grpc, ConnectError};
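The `statemachine!` block above declares a four-state auth handshake. As an illustrative sketch (hypothetical names, not the smlang macro's actual expansion), the same transition table can be written as a plain enum and match:

```rust
// Sketch of the auth handshake state machine declared by the
// `statemachine!` block above. Names here are illustrative; the real
// State/Event types are generated by the smlang macro.
#[derive(Debug, PartialEq, Clone, Copy)]
enum State {
    Init,
    WaitingForServerAuth,
    WaitingForAuthOk,
    Authenticated,
}

#[derive(Debug, PartialEq, Clone, Copy)]
enum Event {
    SentAuthChallengeRequest,
    ReceivedAuthChallenge,
    ReceivedAuthOk,
}

// Returns the next state, or Err on an invalid transition — the analogue
// of `process_event` returning InboundError::StateTransitionFailed.
fn step(state: State, event: Event) -> Result<State, (State, Event)> {
    use Event::*;
    use State::*;
    match (state, event) {
        (Init, SentAuthChallengeRequest) => Ok(WaitingForServerAuth),
        (WaitingForServerAuth, ReceivedAuthChallenge) => Ok(WaitingForAuthOk),
        // The server may skip the challenge and acknowledge directly.
        (WaitingForServerAuth, ReceivedAuthOk) => Ok(Authenticated),
        (WaitingForAuthOk, ReceivedAuthOk) => Ok(Authenticated),
        (s, e) => Err((s, e)),
    }
}

fn main() {
    let mut s = State::Init;
    for e in [
        Event::SentAuthChallengeRequest,
        Event::ReceivedAuthChallenge,
        Event::ReceivedAuthOk,
    ] {
        s = step(s, e).expect("valid transition");
    }
    assert_eq!(s, State::Authenticated);
    println!("{s:?}");
}
```

Note that both `WaitingForServerAuth + ReceivedAuthOk` and `WaitingForAuthOk + ReceivedAuthOk` lead to `Authenticated`, which is why `process_inbound_transport` can accept an `AuthOk` whether or not a challenge was issued.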
@@ -1,141 +0,0 @@
use arbiter_proto::{
    format_challenge,
    proto::user_agent::{
        AuthChallenge, AuthOk,
        UserAgentRequest, UserAgentResponse,
        user_agent_request::Payload as UserAgentRequestPayload,
        user_agent_response::Payload as UserAgentResponsePayload,
    },
    transport::Bi,
};
use arbiter_useragent::UserAgentActor;
use ed25519_dalek::SigningKey;
use kameo::actor::Spawn;
use tokio::sync::mpsc;
use tokio::time::{Duration, timeout};
use async_trait::async_trait;

struct TestTransport {
    inbound_rx: mpsc::Receiver<UserAgentResponse>,
    outbound_tx: mpsc::Sender<UserAgentRequest>,
}

#[async_trait]
impl Bi<UserAgentResponse, UserAgentRequest> for TestTransport {
    async fn send(&mut self, item: UserAgentRequest) -> Result<(), arbiter_proto::transport::Error> {
        self.outbound_tx
            .send(item)
            .await
            .map_err(|_| arbiter_proto::transport::Error::ChannelClosed)
    }

    async fn recv(&mut self) -> Option<UserAgentResponse> {
        self.inbound_rx.recv().await
    }
}

fn make_transport() -> (
    TestTransport,
    mpsc::Sender<UserAgentResponse>,
    mpsc::Receiver<UserAgentRequest>,
) {
    let (inbound_tx, inbound_rx) = mpsc::channel(8);
    let (outbound_tx, outbound_rx) = mpsc::channel(8);
    (
        TestTransport {
            inbound_rx,
            outbound_tx,
        },
        inbound_tx,
        outbound_rx,
    )
}

fn test_key() -> SigningKey {
    SigningKey::from_bytes(&[7u8; 32])
}

#[tokio::test]
async fn sends_auth_request_on_start_with_bootstrap_token() {
    let key = test_key();
    let pubkey = key.verifying_key().to_bytes().to_vec();
    let bootstrap_token = Some("bootstrap-123".to_string());
    let (transport, inbound_tx, mut outbound_rx) = make_transport();

    let actor = UserAgentActor::spawn(UserAgentActor::new(key, bootstrap_token.clone(), transport));

    let outbound = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for auth request")
        .expect("channel closed before auth request");

    let UserAgentRequest {
        payload: Some(UserAgentRequestPayload::AuthChallengeRequest(req)),
    } = outbound
    else {
        panic!("expected auth challenge request");
    };

    assert_eq!(req.pubkey, pubkey);
    assert_eq!(req.bootstrap_token, bootstrap_token);

    drop(inbound_tx);
    drop(actor);
}

#[tokio::test]
async fn challenge_flow_sends_solution_from_transport_inbound() {
    let key = test_key();
    let verify_key = key.verifying_key();
    let (transport, inbound_tx, mut outbound_rx) = make_transport();

    let actor = UserAgentActor::spawn(UserAgentActor::new(key, None, transport));

    let _initial_auth_request = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for initial auth request")
        .expect("missing initial auth request");

    let challenge = AuthChallenge {
        pubkey: verify_key.to_bytes().to_vec(),
        nonce: 42,
    };
    inbound_tx
        .send(UserAgentResponse {
            payload: Some(UserAgentResponsePayload::AuthChallenge(challenge.clone())),
        })
        .await
        .unwrap();

    let outbound = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for challenge solution")
        .expect("missing challenge solution");

    let UserAgentRequest {
        payload: Some(UserAgentRequestPayload::AuthChallengeSolution(solution)),
    } = outbound
    else {
        panic!("expected auth challenge solution");
    };

    let formatted = format_challenge(challenge.nonce, &challenge.pubkey);
    let sig: ed25519_dalek::Signature = solution
        .signature
        .as_slice()
        .try_into()
        .expect("signature bytes length");
    verify_key
        .verify_strict(&formatted, &sig)
        .expect("solution signature should verify");

    inbound_tx
        .send(UserAgentResponse {
            payload: Some(UserAgentResponsePayload::AuthOk(AuthOk {})),
        })
        .await
        .unwrap();

    drop(inbound_tx);
    drop(actor);
}
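The `TestTransport` above is just two channel halves crossed over: the test holds `inbound_tx`/`outbound_rx`, the transport holds the other ends. A minimal sketch of that same wiring with synchronous std channels (not the async `Bi` impl, and using plain `String` messages rather than the proto types):

```rust
use std::sync::mpsc;

// Sketch of the TestTransport wiring: two channels crossed over, so the
// test can inject inbound messages and observe outbound ones.
struct Transport {
    inbound_rx: mpsc::Receiver<String>,
    outbound_tx: mpsc::Sender<String>,
}

fn make_transport() -> (Transport, mpsc::Sender<String>, mpsc::Receiver<String>) {
    let (inbound_tx, inbound_rx) = mpsc::channel();
    let (outbound_tx, outbound_rx) = mpsc::channel();
    (Transport { inbound_rx, outbound_tx }, inbound_tx, outbound_rx)
}

fn main() {
    let (t, inbound_tx, outbound_rx) = make_transport();

    // The actor side sends a request; the test observes it on outbound_rx.
    t.outbound_tx.send("auth_challenge_request".to_string()).unwrap();
    assert_eq!(outbound_rx.recv().unwrap(), "auth_challenge_request");

    // The test injects a response; the actor side reads it from inbound_rx.
    inbound_tx.send("auth_ok".to_string()).unwrap();
    assert_eq!(t.inbound_rx.recv().unwrap(), "auth_ok");

    println!("ok");
}
```

Dropping `inbound_tx` closes the inbound channel, which is what drives the actor's `transport.closed` shutdown path in the tests above.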
0	server/rules/.gitkeep	Normal file
10	server/rules/safecell/new-inline.yaml	Normal file
@@ -0,0 +1,10 @@
id: safecell-new-inline
language: Rust
rule:
  pattern: $CELL.write_inline(|$W| $BODY);
  follows:
    pattern: let mut $CELL = SafeCell::new($INIT);
fix:
  template: let mut $CELL = SafeCell::new_inline(|$W| $BODY);
  expandStart:
    pattern: let mut $CELL = SafeCell::new($INIT)
17	server/rules/safecell/read-inline.yaml	Normal file
@@ -0,0 +1,17 @@
id: safecell-read-inline
language: Rust
rule:
  pattern:
    context: |
      {
        let $READ = $CELL.read();
        $$$BODY
      }
    selector: block
  inside:
    kind: block
fix:
  template: |
    $CELL.read_inline(|$READ| {
      $$$BODY
    });
13	server/rules/safecell/write-inline.yaml	Normal file
@@ -0,0 +1,13 @@
id: safecell-write-inline
language: Rust
rule:
  pattern: |
    {
      let mut $WRITE = $CELL.write();
      $$$BODY
    }
fix:
  template: |
    $CELL.write_inline(|$WRITE| {
      $$$BODY
    });
2	server/sgconfig.yml	Normal file
@@ -0,0 +1,2 @@
ruleDirs:
  - ./rules
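The three ast-grep rules above rewrite explicit `read()`/`write()` guard bindings into the closure-taking `*_inline` forms. The before/after shapes can be illustrated with a minimal stand-in (this `SafeCell` is a hypothetical `RefCell`-backed mock written for the example, not the project's actual type):

```rust
use std::cell::RefCell;

// Hypothetical stand-in for the project's SafeCell — just enough API to
// show the shapes the safecell-* rules match and produce.
struct SafeCell<T>(RefCell<T>);

impl<T> SafeCell<T> {
    fn new(v: T) -> Self {
        SafeCell(RefCell::new(v))
    }
    // The form the safecell-read-inline fix emits.
    fn read_inline<R>(&self, f: impl FnOnce(&T) -> R) -> R {
        f(&self.0.borrow())
    }
    // The form the safecell-write-inline fix emits.
    fn write_inline<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        f(&mut self.0.borrow_mut())
    }
}

fn main() {
    let cell = SafeCell::new(1u32);

    // Before (what the rules match): an explicit guard held in a block.
    // {
    //     let mut w = cell.write();
    //     *w += 1;
    // }

    // After (what the fixes emit): the guard's lifetime scoped by a closure.
    cell.write_inline(|w| *w += 1);
    let doubled = cell.read_inline(|r| r * 2);
    assert_eq!(doubled, 4);
    println!("{doubled}");
}
```

Scoping the guard inside a closure makes it impossible to hold a read or write borrow past the call, which is presumably why the rules migrate call sites wholesale.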
@@ -1,6 +1,41 @@
# cargo-vet audits file

[[audits.alloy-primitives]]
who = "CleverWild <cleverwilddev@gmail.com>"
criteria = "safe-to-deploy"
version = "1.5.7"

[[audits.console]]
who = "CleverWild <cleverwilddev@gmail.com>"
criteria = "safe-to-deploy"
version = "0.15.11"

[[audits.encode_unicode]]
who = "CleverWild <cleverwilddev@gmail.com>"
criteria = "safe-to-deploy"
version = "0.3.6"

[[audits.futures-timer]]
who = "CleverWild <cleverwilddev@gmail.com>"
criteria = "safe-to-run"
version = "3.0.3"

[[audits.insta]]
who = "CleverWild <cleverwilddev@gmail.com>"
criteria = "safe-to-run"
version = "1.46.3"

[[audits.pin-project]]
who = "CleverWild <cleverwilddev@gmail.com>"
criteria = "safe-to-deploy"
version = "0.2.16"

[[audits.protoc-bin-vendored]]
who = "CleverWild <cleverwilddev@gmail.com>"
criteria = "safe-to-deploy"
version = "3.2.0"

[[audits.similar]]
who = "hdbg <httpdebugger@protonmail.com>"
criteria = "safe-to-deploy"
@@ -16,11 +51,214 @@ who = "hdbg <httpdebugger@protonmail.com>"
criteria = "safe-to-deploy"
delta = "0.2.18 -> 0.2.19"

[[audits.wasm-bindgen]]
who = "CleverWild <cleverwilddev@gmail.com>"
criteria = "safe-to-deploy"
delta = "0.2.100 -> 0.2.114"

[[trusted.addr2line]]
criteria = "safe-to-deploy"
user-id = 4415 # Philip Craig (philipc)
start = "2019-05-01"
end = "2027-03-14"

[[trusted.aho-corasick]]
criteria = "safe-to-deploy"
user-id = 189 # Andrew Gallant (BurntSushi)
start = "2019-03-28"
end = "2027-03-14"

[[trusted.anyhow]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-10-05"
end = "2027-03-14"

[[trusted.async-stream]]
criteria = "safe-to-deploy"
user-id = 10 # Carl Lerche (carllerche)
start = "2019-06-07"
end = "2027-03-14"

[[trusted.async-stream]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2021-04-21"
end = "2027-03-14"

[[trusted.async-stream-impl]]
criteria = "safe-to-deploy"
user-id = 10 # Carl Lerche (carllerche)
start = "2019-08-13"
end = "2027-03-14"

[[trusted.async-stream-impl]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2021-04-21"
end = "2027-03-14"

[[trusted.async-trait]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-07-23"
end = "2027-03-14"

[[trusted.auto_impl]]
criteria = "safe-to-deploy"
user-id = 3204 # Ashley Mannix (KodrAus)
start = "2022-06-01"
end = "2027-03-14"

[[trusted.aws-lc-rs]]
criteria = "safe-to-deploy"
user-id = 156764 # Justin W Smith (justsmth)
start = "2023-04-11"
end = "2027-03-14"

[[trusted.aws-lc-sys]]
criteria = "safe-to-deploy"
user-id = 156764 # Justin W Smith (justsmth)
start = "2022-11-09"
end = "2027-03-14"

[[trusted.backtrace]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2025-05-06"
end = "2027-03-14"

[[trusted.bitflags]]
criteria = "safe-to-deploy"
user-id = 3204 # Ashley Mannix (KodrAus)
start = "2019-05-02"
end = "2027-03-14"

[[trusted.bytes]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2019-11-27"
end = "2027-03-14"

[[trusted.bytes]]
criteria = "safe-to-deploy"
user-id = 6741 # Alice Ryhl (Darksonn)
start = "2021-01-11"
end = "2027-03-14"

[[trusted.cc]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2022-10-29"
end = "2027-03-14"

[[trusted.cmake]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2022-10-29"
end = "2027-03-14"

[[trusted.crossbeam-utils]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2020-10-12"
end = "2027-03-14"

[[trusted.derive_more]]
criteria = "safe-to-deploy"
user-id = 3797 # Jelte Fennema-Nio (JelteF)
start = "2019-05-25"
end = "2027-03-14"

[[trusted.derive_more-impl]]
criteria = "safe-to-deploy"
user-id = 3797 # Jelte Fennema-Nio (JelteF)
start = "2023-07-23"
end = "2027-03-14"

[[trusted.dyn-clone]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-12-23"
end = "2027-03-14"

[[trusted.ff]]
criteria = "safe-to-deploy"
user-id = 6289 # Jack Grigg (str4d)
start = "2021-08-11"
end = "2027-03-14"

[[trusted.find-msvc-tools]]
criteria = "safe-to-deploy"
user-id = 539 # Josh Stone (cuviper)
start = "2025-08-29"
end = "2027-03-14"

[[trusted.flate2]]
criteria = "safe-to-deploy"
user-id = 980 # Sebastian Thiel (Byron)
start = "2023-08-15"
end = "2027-03-14"

[[trusted.futures]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2020-10-05"
end = "2027-03-14"

[[trusted.futures-channel]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2020-10-05"
end = "2027-03-14"

[[trusted.futures-core]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2020-10-05"
end = "2027-03-14"

[[trusted.futures-executor]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2020-10-05"
end = "2027-03-14"

[[trusted.futures-io]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2020-10-05"
end = "2027-03-14"

[[trusted.futures-macro]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2020-10-05"
end = "2027-03-14"

[[trusted.futures-sink]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2020-10-05"
end = "2027-03-14"

[[trusted.futures-task]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2019-07-29"
end = "2027-03-14"

[[trusted.futures-util]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2020-10-05"
end = "2027-03-14"

[[trusted.group]]
criteria = "safe-to-deploy"
user-id = 1244 # ebfull
start = "2019-10-08"
end = "2027-03-14"

[[trusted.h2]]
criteria = "safe-to-deploy"
@@ -28,36 +266,372 @@ user-id = 359 # Sean McArthur (seanmonstar)
start = "2019-03-13"
end = "2027-02-14"

[[trusted.hashbrown]]
criteria = "safe-to-deploy"
user-id = 2915 # Amanieu d'Antras (Amanieu)
start = "2019-04-02"
end = "2027-03-14"

[[trusted.hashbrown]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2025-04-30"
end = "2027-02-14"

[[trusted.http]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2019-04-05"
end = "2027-03-14"

[[trusted.http-body-util]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2022-10-25"
end = "2027-03-14"

[[trusted.httparse]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2019-07-03"
end = "2027-03-14"

[[trusted.hyper]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2019-03-01"
end = "2027-03-14"

[[trusted.hyper-util]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2022-01-15"
end = "2027-02-14"

[[trusted.id-arena]]
criteria = "safe-to-deploy"
user-id = 696 # Nick Fitzgerald (fitzgen)
start = "2026-01-14"
end = "2027-03-14"

[[trusted.indexmap]]
criteria = "safe-to-deploy"
user-id = 539 # Josh Stone (cuviper)
start = "2020-01-15"
end = "2027-03-14"

[[trusted.itoa]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-05-02"
end = "2027-03-14"

[[trusted.jobserver]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2024-07-23"
end = "2027-03-14"

[[trusted.js-sys]]
criteria = "safe-to-deploy"
user-id = 1 # Alex Crichton (alexcrichton)
start = "2019-03-04"
end = "2027-03-14"

[[trusted.libc]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2024-08-15"
end = "2027-02-16"

[[trusted.libm]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2024-10-26"
end = "2027-03-14"

[[trusted.linux-raw-sys]]
criteria = "safe-to-deploy"
user-id = 6825 # Dan Gohman (sunfishcode)
start = "2021-06-12"
end = "2027-03-14"

[[trusted.lock_api]]
criteria = "safe-to-deploy"
user-id = 2915 # Amanieu d'Antras (Amanieu)
start = "2019-05-04"
end = "2027-03-14"

[[trusted.log]]
criteria = "safe-to-deploy"
user-id = 3204 # Ashley Mannix (KodrAus)
start = "2019-07-10"
end = "2027-03-14"

[[trusted.macro-string]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2025-02-02"
end = "2027-03-14"

[[trusted.memchr]]
criteria = "safe-to-deploy"
user-id = 189 # Andrew Gallant (BurntSushi)
start = "2019-07-07"
end = "2027-03-14"

[[trusted.mime]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2019-09-09"
end = "2027-03-14"

[[trusted.mio]]
criteria = "safe-to-deploy"
user-id = 6025 # Thomas de Zeeuw (Thomasdezeeuw)
start = "2019-12-17"
end = "2027-03-14"

[[trusted.num-bigint]]
criteria = "safe-to-deploy"
user-id = 539 # Josh Stone (cuviper)
start = "2019-09-04"
end = "2027-03-14"

[[trusted.num_cpus]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2019-06-10"
end = "2027-03-14"

[[trusted.object]]
criteria = "safe-to-deploy"
user-id = 4415 # Philip Craig (philipc)
start = "2019-04-26"
end = "2027-03-14"

[[trusted.parking_lot]]
criteria = "safe-to-deploy"
user-id = 2915 # Amanieu d'Antras (Amanieu)
start = "2019-05-04"
end = "2027-03-14"

[[trusted.parking_lot_core]]
criteria = "safe-to-deploy"
user-id = 2915 # Amanieu d'Antras (Amanieu)
start = "2019-05-04"
end = "2027-03-14"

[[trusted.paste]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-03-19"
end = "2027-03-14"

[[trusted.pin-project]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2019-03-02"
end = "2027-03-14"

[[trusted.pin-project-internal]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2019-08-11"
end = "2027-03-14"

[[trusted.pin-project-lite]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2019-10-22"
end = "2027-03-14"

[[trusted.portable-atomic]]
criteria = "safe-to-deploy"
user-id = 33035 # Taiki Endo (taiki-e)
start = "2022-02-24"
end = "2027-03-14"

[[trusted.prettyplease]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2022-01-04"
end = "2027-03-14"

[[trusted.proc-macro2]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-04-23"
end = "2027-03-14"

[[trusted.prost]]
criteria = "safe-to-deploy"
user-id = 3959 # Lucio Franco (LucioFranco)
start = "2021-07-08"
end = "2027-03-14"

[[trusted.prost-build]]
criteria = "safe-to-deploy"
user-id = 3959 # Lucio Franco (LucioFranco)
start = "2021-07-08"
end = "2027-03-14"

[[trusted.prost-derive]]
criteria = "safe-to-deploy"
user-id = 3959 # Lucio Franco (LucioFranco)
start = "2021-07-08"
end = "2027-03-14"

[[trusted.prost-types]]
criteria = "safe-to-deploy"
user-id = 3959 # Lucio Franco (LucioFranco)
start = "2021-07-08"
end = "2027-03-14"

[[trusted.protoc-bin-vendored-linux-aarch_64]]
criteria = "safe-to-deploy"
user-id = 220 # Stepan Koltsov (stepancheg)
start = "2022-02-07"
end = "2027-03-14"

[[trusted.protoc-bin-vendored-linux-ppcle_64]]
criteria = "safe-to-deploy"
user-id = 220 # Stepan Koltsov (stepancheg)
start = "2022-02-07"
end = "2027-03-14"

[[trusted.protoc-bin-vendored-linux-s390_64]]
criteria = "safe-to-deploy"
user-id = 220 # Stepan Koltsov (stepancheg)
start = "2025-07-21"
end = "2027-03-14"

[[trusted.protoc-bin-vendored-linux-x86_32]]
criteria = "safe-to-deploy"
user-id = 220 # Stepan Koltsov (stepancheg)
start = "2022-02-07"
end = "2027-03-14"

[[trusted.protoc-bin-vendored-linux-x86_64]]
criteria = "safe-to-deploy"
user-id = 220 # Stepan Koltsov (stepancheg)
start = "2022-02-07"
end = "2027-03-14"

[[trusted.protoc-bin-vendored-macos-aarch_64]]
criteria = "safe-to-deploy"
user-id = 220 # Stepan Koltsov (stepancheg)
start = "2024-09-30"
end = "2027-03-14"

[[trusted.protoc-bin-vendored-macos-x86_64]]
criteria = "safe-to-deploy"
user-id = 220 # Stepan Koltsov (stepancheg)
start = "2022-02-07"
end = "2027-03-14"

[[trusted.protoc-bin-vendored-win32]]
criteria = "safe-to-deploy"
user-id = 220 # Stepan Koltsov (stepancheg)
start = "2022-02-07"
end = "2027-03-14"

[[trusted.pulldown-cmark-to-cmark]]
criteria = "safe-to-deploy"
user-id = 980 # Sebastian Thiel (Byron)
start = "2019-07-03"
end = "2027-03-14"

[[trusted.quote]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-04-09"
end = "2027-03-14"

[[trusted.ref-cast]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-05-05"
end = "2027-03-14"

[[trusted.ref-cast-impl]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-05-05"
end = "2027-03-14"

[[trusted.regex]]
criteria = "safe-to-deploy"
user-id = 189 # Andrew Gallant (BurntSushi)
start = "2019-02-27"
end = "2027-03-14"

[[trusted.regex-automata]]
criteria = "safe-to-deploy"
user-id = 189 # Andrew Gallant (BurntSushi)
start = "2019-02-25"
end = "2027-03-14"

[[trusted.regex-syntax]]
criteria = "safe-to-deploy"
user-id = 189 # Andrew Gallant (BurntSushi)
start = "2019-03-30"
end = "2027-03-14"

[[trusted.reqwest]]
criteria = "safe-to-deploy"
user-id = 359 # Sean McArthur (seanmonstar)
start = "2019-03-04"
end = "2027-03-14"

[[trusted.rustc-demangle]]
criteria = "safe-to-deploy"
user-id = 55123 # rust-lang-owner
start = "2023-03-23"
end = "2027-03-14"

[[trusted.rustix]]
criteria = "safe-to-deploy"
user-id = 6825 # Dan Gohman (sunfishcode)
start = "2021-10-29"
end = "2027-02-14"

[[trusted.ryu]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-05-02"
end = "2027-03-14"

[[trusted.scopeguard]]
criteria = "safe-to-deploy"
user-id = 2915 # Amanieu d'Antras (Amanieu)
start = "2020-02-16"
end = "2027-03-14"

[[trusted.semver]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2021-05-25"
end = "2027-03-14"

[[trusted.serde_json]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
start = "2019-02-28"
end = "2027-02-14"

[[trusted.slab]]
criteria = "safe-to-deploy"
user-id = 6741 # Alice Ryhl (Darksonn)
start = "2021-10-13"
end = "2027-03-14"

[[trusted.socket2]]
criteria = "safe-to-deploy"
user-id = 6025 # Thomas de Zeeuw (Thomasdezeeuw)
start = "2020-09-09"
end = "2027-03-14"

[[trusted.syn]]
criteria = "safe-to-deploy"
user-id = 3618 # David Tolnay (dtolnay)
|
user-id = 3618 # David Tolnay (dtolnay)
|
||||||
@@ -70,26 +644,350 @@ user-id = 2915 # Amanieu d'Antras (Amanieu)
|
|||||||
start = "2019-09-07"
|
start = "2019-09-07"
|
||||||
end = "2027-02-16"
|
end = "2027-02-16"
|
||||||
|
|
||||||
|
[[trusted.time]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 15682 # Jacob Pratt (jhpratt)
|
||||||
|
start = "2019-12-19"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tinystr]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1139 # Manish Goregaokar (Manishearth)
|
||||||
|
start = "2021-01-14"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tokio]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 6741 # Alice Ryhl (Darksonn)
|
||||||
|
start = "2020-12-25"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tokio-macros]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 6741 # Alice Ryhl (Darksonn)
|
||||||
|
start = "2020-10-26"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tokio-stream]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 6741 # Alice Ryhl (Darksonn)
|
||||||
|
start = "2021-01-04"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tokio-util]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 6741 # Alice Ryhl (Darksonn)
|
||||||
|
start = "2021-01-12"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
[[trusted.toml]]
|
[[trusted.toml]]
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
user-id = 6743 # Ed Page (epage)
|
user-id = 6743 # Ed Page (epage)
|
||||||
start = "2022-12-14"
|
start = "2022-12-14"
|
||||||
end = "2027-02-16"
|
end = "2027-02-16"
|
||||||
|
|
||||||
|
[[trusted.toml_datetime]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 6743 # Ed Page (epage)
|
||||||
|
start = "2022-10-21"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.toml_edit]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 6743 # Ed Page (epage)
|
||||||
|
start = "2021-09-13"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
[[trusted.toml_parser]]
|
[[trusted.toml_parser]]
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
user-id = 6743 # Ed Page (epage)
|
user-id = 6743 # Ed Page (epage)
|
||||||
start = "2025-07-08"
|
start = "2025-07-08"
|
||||||
end = "2027-02-16"
|
end = "2027-02-16"
|
||||||
|
|
||||||
|
[[trusted.tonic]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 3959 # Lucio Franco (LucioFranco)
|
||||||
|
start = "2019-10-02"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
[[trusted.tonic-build]]
|
[[trusted.tonic-build]]
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
user-id = 10
|
user-id = 10 # Carl Lerche (carllerche)
|
||||||
start = "2019-09-10"
|
start = "2019-09-10"
|
||||||
end = "2027-02-16"
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tonic-build]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 3959 # Lucio Franco (LucioFranco)
|
||||||
|
start = "2019-10-02"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tonic-prost]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 3959 # Lucio Franco (LucioFranco)
|
||||||
|
start = "2025-07-28"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tonic-prost-build]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 3959 # Lucio Franco (LucioFranco)
|
||||||
|
start = "2025-07-28"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tower]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 359 # Sean McArthur (seanmonstar)
|
||||||
|
start = "2024-09-09"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tower-http]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 359 # Sean McArthur (seanmonstar)
|
||||||
|
start = "2024-09-23"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tower-layer]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 10 # Carl Lerche (carllerche)
|
||||||
|
start = "2019-04-27"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tower-layer]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 3959 # Lucio Franco (LucioFranco)
|
||||||
|
start = "2019-09-11"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tower-service]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 3959 # Lucio Franco (LucioFranco)
|
||||||
|
start = "2019-08-20"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.tracing-subscriber]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 10 # Carl Lerche (carllerche)
|
||||||
|
start = "2025-08-29"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.ucd-trie]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 189 # Andrew Gallant (BurntSushi)
|
||||||
|
start = "2019-07-21"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.unicase]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 359 # Sean McArthur (seanmonstar)
|
||||||
|
start = "2019-03-05"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.unicode-ident]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 3618 # David Tolnay (dtolnay)
|
||||||
|
start = "2021-10-02"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.url]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1139 # Manish Goregaokar (Manishearth)
|
||||||
|
start = "2021-02-18"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.uuid]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 3204 # Ashley Mannix (KodrAus)
|
||||||
|
start = "2019-10-18"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.valuable]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 10 # Carl Lerche (carllerche)
|
||||||
|
start = "2022-01-03"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.wait-timeout]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1 # Alex Crichton (alexcrichton)
|
||||||
|
start = "2025-02-03"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.wasi]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1 # Alex Crichton (alexcrichton)
|
||||||
|
start = "2020-06-03"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.wasi]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 6825 # Dan Gohman (sunfishcode)
|
||||||
|
start = "2019-07-22"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.wasm-bindgen]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1 # Alex Crichton (alexcrichton)
|
||||||
|
start = "2019-03-04"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.wasm-bindgen-futures]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1 # Alex Crichton (alexcrichton)
|
||||||
|
start = "2019-03-04"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.wasm-bindgen-macro]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1 # Alex Crichton (alexcrichton)
|
||||||
|
start = "2019-03-04"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.wasm-bindgen-macro-support]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1 # Alex Crichton (alexcrichton)
|
||||||
|
start = "2019-03-04"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.wasm-bindgen-shared]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1 # Alex Crichton (alexcrichton)
|
||||||
|
start = "2019-03-04"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.web-sys]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1 # Alex Crichton (alexcrichton)
|
||||||
|
start = "2019-03-04"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows-core]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2021-11-15"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows-implement]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2022-01-27"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows-interface]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2022-02-18"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows-result]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2024-02-02"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows-strings]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2024-02-02"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
[[trusted.windows-sys]]
|
[[trusted.windows-sys]]
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
user-id = 64539 # Kenny Kerr (kennykerr)
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
start = "2021-11-15"
|
start = "2021-11-15"
|
||||||
end = "2027-02-16"
|
end = "2027-02-16"
|
||||||
|
|
||||||
|
[[trusted.windows-targets]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2022-09-09"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows_aarch64_gnullvm]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2022-09-01"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows_aarch64_msvc]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2021-11-05"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows_i686_gnu]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2021-10-28"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows_i686_gnullvm]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2024-04-02"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows_i686_msvc]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2021-10-27"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows_x86_64_gnu]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2021-10-28"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows_x86_64_gnullvm]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2022-09-01"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.windows_x86_64_msvc]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 64539 # Kenny Kerr (kennykerr)
|
||||||
|
start = "2021-10-27"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.winnow]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 6743 # Ed Page (epage)
|
||||||
|
start = "2023-02-22"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.yoke]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1139 # Manish Goregaokar (Manishearth)
|
||||||
|
start = "2021-05-01"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.zerocopy]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 7178 # Joshua Liebow-Feeser (joshlf)
|
||||||
|
start = "2019-02-28"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.zerocopy-derive]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 7178 # Joshua Liebow-Feeser (joshlf)
|
||||||
|
start = "2019-02-28"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.zerotrie]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1139 # Manish Goregaokar (Manishearth)
|
||||||
|
start = "2023-10-03"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.zerovec]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 1139 # Manish Goregaokar (Manishearth)
|
||||||
|
start = "2021-04-19"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|
||||||
|
[[trusted.zmij]]
|
||||||
|
criteria = "safe-to-deploy"
|
||||||
|
user-id = 3618 # David Tolnay (dtolnay)
|
||||||
|
start = "2025-12-18"
|
||||||
|
end = "2027-03-14"
|
||||||
|
|||||||
@@ -4,30 +4,27 @@
|
|||||||
[cargo-vet]
|
[cargo-vet]
|
||||||
version = "0.10"
|
version = "0.10"
|
||||||
|
|
||||||
|
[imports.OpenDevicePartnership]
|
||||||
|
url = "https://raw.githubusercontent.com/OpenDevicePartnership/rust-crate-audits/refs/heads/main/audits.toml"
|
||||||
|
|
||||||
[imports.bytecode-alliance]
|
[imports.bytecode-alliance]
|
||||||
url = "https://raw.githubusercontent.com/bytecodealliance/wasmtime/main/supply-chain/audits.toml"
|
url = "https://raw.githubusercontent.com/bytecodealliance/wasmtime/main/supply-chain/audits.toml"
|
||||||
|
|
||||||
|
[imports.embark-studios]
|
||||||
|
url = "https://raw.githubusercontent.com/EmbarkStudios/rust-ecosystem/main/audits.toml"
|
||||||
|
|
||||||
[imports.google]
|
[imports.google]
|
||||||
url = "https://raw.githubusercontent.com/google/supply-chain/main/audits.toml"
|
url = "https://raw.githubusercontent.com/google/supply-chain/main/audits.toml"
|
||||||
|
|
||||||
|
[imports.isrg]
|
||||||
|
url = "https://raw.githubusercontent.com/divviup/libprio-rs/main/supply-chain/audits.toml"
|
||||||
|
|
||||||
[imports.mozilla]
|
[imports.mozilla]
|
||||||
url = "https://raw.githubusercontent.com/mozilla/supply-chain/main/audits.toml"
|
url = "https://raw.githubusercontent.com/mozilla/supply-chain/main/audits.toml"
|
||||||
|
|
||||||
[imports.zcash]
|
[imports.zcash]
|
||||||
url = "https://raw.githubusercontent.com/zcash/rust-ecosystem/main/supply-chain/audits.toml"
|
url = "https://raw.githubusercontent.com/zcash/rust-ecosystem/main/supply-chain/audits.toml"
|
||||||
|
|
||||||
[[exemptions.addr2line]]
|
|
||||||
version = "0.25.1"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.aho-corasick]]
|
|
||||||
version = "1.1.4"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.anyhow]]
|
|
||||||
version = "1.0.101"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.asn1-rs]]
|
[[exemptions.asn1-rs]]
|
||||||
version = "0.7.1"
|
version = "0.7.1"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -40,18 +37,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.2.0"
|
version = "0.2.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.async-trait]]
|
|
||||||
version = "0.1.89"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.aws-lc-rs]]
|
|
||||||
version = "1.15.4"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.aws-lc-sys]]
|
|
||||||
version = "0.37.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.axum]]
|
[[exemptions.axum]]
|
||||||
version = "0.8.8"
|
version = "0.8.8"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -60,10 +45,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.5.6"
|
version = "0.5.6"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.backtrace]]
|
|
||||||
version = "0.3.76"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.backtrace-ext]]
|
[[exemptions.backtrace-ext]]
|
||||||
version = "0.2.1"
|
version = "0.2.1"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -72,26 +53,14 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.9.1"
|
version = "0.9.1"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.bitflags]]
|
|
||||||
version = "2.10.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.block-buffer]]
|
[[exemptions.block-buffer]]
|
||||||
version = "0.11.0"
|
version = "0.11.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.bytes]]
|
|
||||||
version = "1.11.1"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.cc]]
|
[[exemptions.cc]]
|
||||||
version = "1.2.55"
|
version = "1.2.55"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.cfg-if]]
|
|
||||||
version = "1.0.4"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.chacha20]]
|
[[exemptions.chacha20]]
|
||||||
version = "0.10.0"
|
version = "0.10.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -100,26 +69,14 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.4.43"
|
version = "0.4.43"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.cmake]]
|
|
||||||
version = "0.1.57"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.cpufeatures]]
|
[[exemptions.cpufeatures]]
|
||||||
version = "0.2.17"
|
version = "0.2.17"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.cpufeatures]]
|
|
||||||
version = "0.3.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.crc32fast]]
|
[[exemptions.crc32fast]]
|
||||||
version = "1.5.0"
|
version = "1.5.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.crossbeam-utils]]
|
|
||||||
version = "0.8.21"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.crypto-common]]
|
[[exemptions.crypto-common]]
|
||||||
version = "0.2.0"
|
version = "0.2.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -156,10 +113,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "10.0.0"
|
version = "10.0.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.deranged]]
|
|
||||||
version = "0.5.5"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.diesel]]
|
[[exemptions.diesel]]
|
||||||
version = "2.3.6"
|
version = "2.3.6"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -192,10 +145,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.2.0"
|
version = "0.2.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.dyn-clone]]
|
|
||||||
version = "1.0.20"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.ed25519]]
|
[[exemptions.ed25519]]
|
||||||
version = "3.0.0-rc.4"
|
version = "3.0.0-rc.4"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -204,10 +153,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "3.0.0-pre.6"
|
version = "3.0.0-pre.6"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.fiat-crypto]]
|
|
||||||
version = "0.3.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.find-msvc-tools]]
|
[[exemptions.find-msvc-tools]]
|
||||||
version = "0.1.9"
|
version = "0.1.9"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -216,22 +161,10 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.5.7"
|
version = "0.5.7"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.flate2]]
|
|
||||||
version = "1.1.9"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.fs_extra]]
|
[[exemptions.fs_extra]]
|
||||||
version = "1.3.0"
|
version = "1.3.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.futures-task]]
|
|
||||||
version = "0.3.31"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.futures-util]]
|
|
||||||
version = "0.3.31"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.getrandom]]
|
[[exemptions.getrandom]]
|
||||||
version = "0.2.17"
|
version = "0.2.17"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -244,30 +177,10 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.4.1"
|
version = "0.4.1"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.hashbrown]]
|
|
||||||
version = "0.14.5"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.http]]
|
|
||||||
version = "1.4.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.http-body-util]]
|
|
||||||
version = "0.1.3"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.httparse]]
|
|
||||||
version = "1.10.1"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.hybrid-array]]
|
[[exemptions.hybrid-array]]
|
||||||
version = "0.4.7"
|
version = "0.4.7"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.hyper]]
|
|
||||||
version = "1.8.1"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.hyper-timeout]]
|
[[exemptions.hyper-timeout]]
|
||||||
version = "0.5.2"
|
version = "0.5.2"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -276,18 +189,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.1.65"
|
version = "0.1.65"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.id-arena]]
|
|
||||||
version = "2.3.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.ident_case]]
|
|
||||||
version = "1.0.1"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.indexmap]]
|
|
||||||
version = "2.13.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.is_ci]]
|
[[exemptions.is_ci]]
|
||||||
version = "1.2.0"
|
version = "1.2.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -296,14 +197,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.14.0"
|
version = "0.14.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.itoa]]
|
|
||||||
version = "1.0.17"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.jobserver]]
|
|
||||||
version = "0.1.34"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.js-sys]]
|
[[exemptions.js-sys]]
|
||||||
version = "0.3.85"
|
version = "0.3.85"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -320,26 +213,10 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.35.0"
|
version = "0.35.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.linux-raw-sys]]
|
|
||||||
version = "0.11.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.lock_api]]
|
|
||||||
version = "0.4.14"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.log]]
|
|
||||||
version = "0.4.29"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.matchit]]
|
[[exemptions.matchit]]
|
||||||
version = "0.8.4"
|
version = "0.8.4"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.memchr]]
|
|
||||||
version = "2.8.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.memsafe]]
|
[[exemptions.memsafe]]
|
||||||
version = "0.4.0"
|
version = "0.4.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -360,34 +237,14 @@ criteria = "safe-to-deploy"
|
|||||||
version = "2.3.0"
|
version = "2.3.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.mime]]
|
|
||||||
version = "0.3.17"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.minimal-lexical]]
|
[[exemptions.minimal-lexical]]
|
||||||
version = "0.2.1"
|
version = "0.2.1"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.mio]]
|
|
||||||
version = "1.1.1"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.multimap]]
|
[[exemptions.multimap]]
|
||||||
version = "0.10.1"
|
version = "0.10.1"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.num-bigint]]
|
|
||||||
version = "0.4.6"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.num-conv]]
|
|
||||||
version = "0.2.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.object]]
|
|
||||||
version = "0.37.3"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.oid-registry]]
|
[[exemptions.oid-registry]]
|
||||||
version = "0.8.1"
|
version = "0.8.1"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -400,14 +257,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "4.2.3"
|
version = "4.2.3"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.parking_lot]]
|
|
||||||
version = "0.12.5"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.parking_lot_core]]
|
|
||||||
version = "0.9.12"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.pem]]
|
[[exemptions.pem]]
|
||||||
version = "3.0.6"
|
version = "3.0.6"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -424,58 +273,14 @@ criteria = "safe-to-deploy"
|
|||||||
version = "1.1.10"
|
version = "1.1.10"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.portable-atomic]]
|
|
||||||
version = "1.13.1"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.prettyplease]]
|
|
||||||
version = "0.2.37"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.proc-macro2]]
|
|
||||||
version = "1.0.106"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.prost]]
|
|
||||||
version = "0.14.3"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.prost-build]]
|
|
||||||
version = "0.14.3"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.prost-derive]]
|
|
||||||
version = "0.14.3"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.prost-types]]
|
|
||||||
version = "0.14.3"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.pulldown-cmark]]
|
[[exemptions.pulldown-cmark]]
|
||||||
version = "0.13.0"
|
version = "0.13.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.pulldown-cmark-to-cmark]]
|
|
||||||
version = "22.0.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.quote]]
|
|
||||||
version = "1.0.44"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.r-efi]]
|
[[exemptions.r-efi]]
|
||||||
version = "5.3.0"
|
version = "5.3.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.rand]]
|
|
||||||
version = "0.10.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.rand_core]]
|
|
||||||
version = "0.10.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.rcgen]]
|
[[exemptions.rcgen]]
|
||||||
version = "0.14.7"
|
version = "0.14.7"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -484,18 +289,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.5.18"
|
version = "0.5.18"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.regex]]
|
|
||||||
version = "1.12.3"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.regex-automata]]
|
|
||||||
version = "0.4.14"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.regex-syntax]]
|
|
||||||
version = "0.8.9"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.ring]]
|
[[exemptions.ring]]
|
||||||
version = "0.17.14"
|
version = "0.17.14"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -504,10 +297,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.1.0"
|
version = "0.1.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.rustc-demangle]]
|
|
||||||
version = "0.1.27"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.rusticata-macros]]
|
[[exemptions.rusticata-macros]]
|
||||||
version = "4.1.0"
|
version = "4.1.0"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -528,10 +317,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "0.1.4"
|
version = "0.1.4"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.scopeguard]]
|
|
||||||
version = "1.2.0"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.secrecy]]
|
[[exemptions.secrecy]]
|
||||||
version = "0.10.3"
|
version = "0.10.3"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -540,18 +325,6 @@ criteria = "safe-to-deploy"
|
|||||||
version = "1.0.27"
|
version = "1.0.27"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
|
|
||||||
[[exemptions.serde]]
|
|
||||||
version = "1.0.228"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.serde_core]]
|
|
||||||
version = "1.0.228"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.serde_derive]]
|
|
||||||
version = "1.0.228"
|
|
||||||
criteria = "safe-to-deploy"
|
|
||||||
|
|
||||||
[[exemptions.sha2]]
|
[[exemptions.sha2]]
|
||||||
version = "0.11.0-rc.5"
|
version = "0.11.0-rc.5"
|
||||||
criteria = "safe-to-deploy"
|
criteria = "safe-to-deploy"
|
||||||
@@ -568,10 +341,6 @@ criteria = "safe-to-deploy"
 version = "0.3.8"
 criteria = "safe-to-deploy"
 
-[[exemptions.slab]]
-version = "0.4.12"
-criteria = "safe-to-deploy"
-
 [[exemptions.smlang]]
 version = "0.8.0"
 criteria = "safe-to-deploy"
@@ -580,10 +349,6 @@ criteria = "safe-to-deploy"
 version = "0.8.0"
 criteria = "safe-to-deploy"
 
-[[exemptions.socket2]]
-version = "0.6.2"
-criteria = "safe-to-deploy"
-
 [[exemptions.sqlite-wasm-rs]]
 version = "0.5.2"
 criteria = "safe-to-deploy"
@@ -592,10 +357,6 @@ criteria = "safe-to-deploy"
 version = "0.1.0"
 criteria = "safe-to-deploy"
 
-[[exemptions.subtle]]
-version = "2.6.1"
-criteria = "safe-to-deploy"
-
 [[exemptions.supports-color]]
 version = "3.0.2"
 criteria = "safe-to-deploy"
@@ -620,74 +381,10 @@ criteria = "safe-to-deploy"
 version = "0.4.3"
 criteria = "safe-to-deploy"
 
-[[exemptions.thiserror]]
-version = "2.0.18"
-criteria = "safe-to-deploy"
-
-[[exemptions.thiserror-impl]]
-version = "2.0.18"
-criteria = "safe-to-deploy"
-
-[[exemptions.time]]
-version = "0.3.47"
-criteria = "safe-to-deploy"
-
-[[exemptions.time-core]]
-version = "0.1.8"
-criteria = "safe-to-deploy"
-
-[[exemptions.time-macros]]
-version = "0.2.27"
-criteria = "safe-to-deploy"
-
-[[exemptions.tokio]]
-version = "1.49.0"
-criteria = "safe-to-deploy"
-
-[[exemptions.tokio-macros]]
-version = "2.6.0"
-criteria = "safe-to-deploy"
-
 [[exemptions.tokio-rustls]]
 version = "0.26.4"
 criteria = "safe-to-deploy"
 
-[[exemptions.tokio-stream]]
-version = "0.1.18"
-criteria = "safe-to-deploy"
-
-[[exemptions.tokio-util]]
-version = "0.7.18"
-criteria = "safe-to-deploy"
-
-[[exemptions.tonic]]
-version = "0.14.3"
-criteria = "safe-to-deploy"
-
-[[exemptions.tonic-build]]
-version = "0.14.3"
-criteria = "safe-to-deploy"
-
-[[exemptions.tonic-prost]]
-version = "0.14.4"
-criteria = "safe-to-deploy"
-
-[[exemptions.tonic-prost-build]]
-version = "0.14.3"
-criteria = "safe-to-deploy"
-
-[[exemptions.tower]]
-version = "0.5.3"
-criteria = "safe-to-deploy"
-
-[[exemptions.tower-layer]]
-version = "0.3.3"
-criteria = "safe-to-deploy"
-
-[[exemptions.tower-service]]
-version = "0.3.3"
-criteria = "safe-to-deploy"
-
 [[exemptions.tracing]]
 version = "0.1.44"
 criteria = "safe-to-deploy"
@@ -708,34 +405,10 @@ criteria = "safe-to-run"
 version = "1.19.0"
 criteria = "safe-to-deploy"
 
-[[exemptions.unicase]]
-version = "2.9.0"
-criteria = "safe-to-deploy"
-
-[[exemptions.unicode-ident]]
-version = "1.0.23"
-criteria = "safe-to-deploy"
-
-[[exemptions.untrusted]]
-version = "0.7.1"
-criteria = "safe-to-deploy"
-
 [[exemptions.untrusted]]
 version = "0.9.0"
 criteria = "safe-to-deploy"
 
-[[exemptions.uuid]]
-version = "1.20.0"
-criteria = "safe-to-deploy"
-
-[[exemptions.wasi]]
-version = "0.11.1+wasi-snapshot-preview1"
-criteria = "safe-to-deploy"
-
-[[exemptions.wasm-bindgen]]
-version = "0.2.108"
-criteria = "safe-to-deploy"
-
 [[exemptions.wasm-bindgen-macro]]
 version = "0.2.108"
 criteria = "safe-to-deploy"
@@ -760,102 +433,6 @@ criteria = "safe-to-deploy"
 version = "0.4.0"
 criteria = "safe-to-deploy"
 
-[[exemptions.windows-core]]
-version = "0.62.2"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows-implement]]
-version = "0.60.2"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows-interface]]
-version = "0.59.3"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows-result]]
-version = "0.4.1"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows-strings]]
-version = "0.5.1"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows-targets]]
-version = "0.52.6"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows-targets]]
-version = "0.53.5"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_aarch64_gnullvm]]
-version = "0.52.6"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_aarch64_gnullvm]]
-version = "0.53.1"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_aarch64_msvc]]
-version = "0.52.6"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_aarch64_msvc]]
-version = "0.53.1"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_i686_gnu]]
-version = "0.52.6"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_i686_gnu]]
-version = "0.53.1"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_i686_gnullvm]]
-version = "0.52.6"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_i686_gnullvm]]
-version = "0.53.1"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_i686_msvc]]
-version = "0.52.6"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_i686_msvc]]
-version = "0.53.1"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_x86_64_gnu]]
-version = "0.52.6"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_x86_64_gnu]]
-version = "0.53.1"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_x86_64_gnullvm]]
-version = "0.52.6"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_x86_64_gnullvm]]
-version = "0.53.1"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_x86_64_msvc]]
-version = "0.52.6"
-criteria = "safe-to-deploy"
-
-[[exemptions.windows_x86_64_msvc]]
-version = "0.53.1"
-criteria = "safe-to-deploy"
-
-[[exemptions.winnow]]
-version = "0.7.14"
-criteria = "safe-to-deploy"
-
 [[exemptions.x509-parser]]
 version = "0.18.1"
 criteria = "safe-to-deploy"
@@ -864,10 +441,6 @@ criteria = "safe-to-deploy"
 version = "0.5.2"
 criteria = "safe-to-deploy"
 
-[[exemptions.zmij]]
-version = "1.0.20"
-criteria = "safe-to-deploy"
-
 [[exemptions.zstd]]
 version = "0.13.3"
 criteria = "safe-to-deploy"
File diff suppressed because it is too large.
@@ -1,4 +1,4 @@
-# app
+# useragent
 
 A new Flutter project.
 
useragent/android/.gitignore (new file, vendored, +14)
@@ -0,0 +1,14 @@
+gradle-wrapper.jar
+/.gradle
+/captures/
+/gradlew
+/gradlew.bat
+/local.properties
+GeneratedPluginRegistrant.java
+.cxx/
+
+# Remember to never publicly share your keystore.
+# See https://flutter.dev/to/reference-keystore
+key.properties
+**/*.keystore
+**/*.jks
useragent/android/app/build.gradle.kts (new file, +44)
@@ -0,0 +1,44 @@
+plugins {
+    id("com.android.application")
+    id("kotlin-android")
+    // The Flutter Gradle Plugin must be applied after the Android and Kotlin Gradle plugins.
+    id("dev.flutter.flutter-gradle-plugin")
+}
+
+android {
+    namespace = "com.example.useragent"
+    compileSdk = flutter.compileSdkVersion
+    ndkVersion = flutter.ndkVersion
+
+    compileOptions {
+        sourceCompatibility = JavaVersion.VERSION_17
+        targetCompatibility = JavaVersion.VERSION_17
+    }
+
+    kotlinOptions {
+        jvmTarget = JavaVersion.VERSION_17.toString()
+    }
+
+    defaultConfig {
+        // TODO: Specify your own unique Application ID (https://developer.android.com/studio/build/application-id.html).
+        applicationId = "com.example.useragent"
+        // You can update the following values to match your application needs.
+        // For more information, see: https://flutter.dev/to/review-gradle-config.
+        minSdk = flutter.minSdkVersion
+        targetSdk = flutter.targetSdkVersion
+        versionCode = flutter.versionCode
+        versionName = flutter.versionName
+    }
+
+    buildTypes {
+        release {
+            // TODO: Add your own signing config for the release build.
+            // Signing with the debug keys for now, so `flutter run --release` works.
+            signingConfig = signingConfigs.getByName("debug")
+        }
+    }
+}
+
+flutter {
+    source = "../.."
+}
useragent/android/app/src/debug/AndroidManifest.xml (new file, +7)
@@ -0,0 +1,7 @@
+<manifest xmlns:android="http://schemas.android.com/apk/res/android">
+    <!-- The INTERNET permission is required for development. Specifically,
+         the Flutter tool needs it to communicate with the running application
+         to allow setting breakpoints, to provide hot reload, etc.
+    -->
+    <uses-permission android:name="android.permission.INTERNET"/>
+</manifest>
Some files were not shown because too many files have changed in this diff.