Compare commits
2 commits: 21b9d698fa ... push-yyxvk

| Author | SHA1 | Date |
|---|---|---|
| | 1b4369b1cb | |
| | 7bd37b3c4a | |
4  .gitignore  (vendored)
@@ -1,3 +1 @@
-target/
-scripts/__pycache__/
-.DS_Store
+target/
@@ -27,82 +27,6 @@ This document covers concrete technology choices and dependencies. For the archi

---

## EVM Policy Engine

### Overview

The EVM engine classifies incoming transactions, enforces grant constraints, and records executions. It is the sole path through which a wallet key is used for signing.

The central abstraction is the `Policy` trait. Each implementation handles one semantic transaction category and owns its own database tables for grant storage and transaction logging.

### Transaction Evaluation Flow

`Engine::evaluate_transaction` runs the following steps in order:

1. **Classify** — Each registered policy's `analyze(context)` inspects the transaction fields (`chain`, `to`, `value`, `calldata`). The first one returning `Some(meaning)` wins. If none match, the transaction is rejected as `UnsupportedTransactionType`.
2. **Find grant** — `Policy::try_find_grant` queries for a non-revoked grant covering this wallet, client, chain, and target address.
3. **Check shared constraints** — `check_shared_constraints` runs in the engine before any policy-specific logic. It enforces the validity window, gas fee caps, and transaction count rate limit (see below).
4. **Evaluate** — `Policy::evaluate` checks the decoded meaning against the grant's policy-specific constraints and returns any violations.
5. **Record** — If `RunKind::Execution` and there are no violations, the engine writes to `evm_transaction_log` and calls `Policy::record_transaction` for any policy-specific logging (e.g., token transfer volume).
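The five steps above can be sketched as follows. Every type and signature here is a simplified stand-in for illustration; the real `Engine`, `Policy`, context, and grant types are not shown in this document.

```rust
// Sketch of the five-step evaluation flow with stand-in types.

#[derive(Debug, PartialEq)]
pub enum Verdict {
    Approved,
    UnsupportedTransactionType,
    NoMatchingGrant,
    Violations(Vec<String>),
}

pub struct Tx {
    pub to: [u8; 20],
    pub calldata: Vec<u8>,
}

pub trait MiniPolicy {
    /// Step 1: pure classification; `None` means "not my category".
    fn analyze(&self, tx: &Tx) -> Option<&'static str>;
    /// Step 2: look up a non-revoked grant covering this transaction.
    fn try_find_grant(&self, tx: &Tx) -> Option<u64>;
    /// Step 4: check the classified meaning against the grant.
    fn evaluate(&self, tx: &Tx, grant_id: u64) -> Vec<String>;
}

pub struct EtherOnly {
    pub allowlist: Vec<[u8; 20]>,
}

impl MiniPolicy for EtherOnly {
    fn analyze(&self, tx: &Tx) -> Option<&'static str> {
        // Plain ETH transfer: empty calldata.
        tx.calldata.is_empty().then_some("ether_transfer")
    }
    fn try_find_grant(&self, _tx: &Tx) -> Option<u64> {
        Some(1) // pretend a matching grant row exists
    }
    fn evaluate(&self, tx: &Tx, _grant_id: u64) -> Vec<String> {
        if self.allowlist.contains(&tx.to) {
            Vec::new()
        } else {
            vec!["recipient not in allowlist".to_string()]
        }
    }
}

pub fn evaluate_transaction(policies: &[&dyn MiniPolicy], tx: &Tx) -> Verdict {
    // Step 1: first policy that classifies the transaction wins.
    let Some(policy) = policies.iter().find(|p| p.analyze(tx).is_some()) else {
        return Verdict::UnsupportedTransactionType;
    };
    // Step 2: a grant must cover this wallet/client/chain/target.
    let Some(grant_id) = policy.try_find_grant(tx) else {
        return Verdict::NoMatchingGrant;
    };
    // Step 3 (shared constraints) would run here, before policy logic.
    // Step 4: policy-specific evaluation.
    let violations = policy.evaluate(tx, grant_id);
    if violations.is_empty() {
        Verdict::Approved // Step 5 (recording) would follow on execution
    } else {
        Verdict::Violations(violations)
    }
}

fn main() {
    let policy = EtherOnly { allowlist: vec![[0xAA; 20]] };
    let ok = Tx { to: [0xAA; 20], calldata: Vec::new() };
    let bad = Tx { to: [0xBB; 20], calldata: Vec::new() };
    assert_eq!(evaluate_transaction(&[&policy], &ok), Verdict::Approved);
    assert!(matches!(
        evaluate_transaction(&[&policy], &bad),
        Verdict::Violations(_)
    ));
    println!("ok");
}
```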
### Policy Trait

| Method | Purpose |
|---|---|
| `analyze` | Pure — classifies a transaction into a typed `Meaning`, or `None` if this policy doesn't apply |
| `evaluate` | Checks the `Meaning` against a `Grant`; returns a list of `EvalViolation`s |
| `create_grant` | Inserts policy-specific rows; returns the specific grant ID |
| `try_find_grant` | Finds a matching non-revoked grant for the given `EvalContext` |
| `find_all_grants` | Returns all non-revoked grants (used for listing) |
| `record_transaction` | Persists policy-specific data after execution |

`analyze` and `evaluate` are intentionally separate: classification is pure and cheap, while evaluation may involve DB queries (e.g., fetching past transfer volume).
### Registered Policies

**EtherTransfer** — plain ETH transfers (empty calldata)

- Grant requires: allowlist of recipient addresses + one volumetric rate limit (max ETH over a time window)
- Violations: recipient not in allowlist, cumulative ETH volume exceeded

**TokenTransfer** — ERC-20 `transfer(address,uint256)` calls

- Recognised by ABI-decoding the `transfer(address,uint256)` selector against a static registry of known token contracts (`arbiter_tokens_registry`)
- Grant requires: token contract address, optional recipient restriction, zero or more volumetric rate limits
- Violations: recipient mismatch, any volumetric limit exceeded
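The ERC-20 case can be illustrated by the calldata layout check alone. This is a hand-rolled sketch, not the engine's real decoder (which ABI-decodes against the token registry); only the `transfer(address,uint256)` selector `0xa9059cbb` and the fixed two-word argument layout are standard facts.

```rust
// Recognise an ERC-20 transfer(address,uint256) call from raw calldata.
// 4-byte selector followed by two 32-byte ABI words.

const TRANSFER_SELECTOR: [u8; 4] = [0xa9, 0x05, 0x9c, 0xbb]; // transfer(address,uint256)

pub struct TokenTransferMeaning {
    pub to: [u8; 20],
    pub value: [u8; 32], // U256 as big-endian bytes
}

pub fn analyze_token_transfer(calldata: &[u8]) -> Option<TokenTransferMeaning> {
    if calldata.len() != 4 + 32 + 32 || calldata[..4] != TRANSFER_SELECTOR {
        return None;
    }
    // The address argument is right-aligned in its 32-byte word.
    let mut to = [0u8; 20];
    to.copy_from_slice(&calldata[16..36]);
    let mut value = [0u8; 32];
    value.copy_from_slice(&calldata[36..68]);
    Some(TokenTransferMeaning { to, value })
}

fn main() {
    let mut calldata = Vec::new();
    calldata.extend_from_slice(&TRANSFER_SELECTOR);
    let mut word1 = [0u8; 32];
    word1[12..].copy_from_slice(&[0xAB; 20]); // recipient, left-padded
    calldata.extend_from_slice(&word1);
    let mut word2 = [0u8; 32];
    word2[31] = 42; // amount = 42
    calldata.extend_from_slice(&word2);

    let meaning = analyze_token_transfer(&calldata).expect("should classify");
    assert_eq!(meaning.to, [0xAB; 20]);
    assert_eq!(meaning.value[31], 42);
    // Plain ETH transfers (empty calldata) never match this policy.
    assert!(analyze_token_transfer(&[]).is_none());
    println!("ok");
}
```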
### Grant Model

Every grant has two layers:

- **Shared (`evm_basic_grant`)** — wallet, chain, validity period, gas fee caps, transaction count rate limit. One row per grant regardless of type.
- **Specific** — policy-owned tables (`evm_ether_transfer_grant`, `evm_token_transfer_grant`, etc.) holding type-specific configuration.

`find_all_grants` uses a `#[diesel::auto_type]` base join between the specific and shared tables, then batch-loads related rows (targets, volume limits) in two additional queries to avoid N+1.

The engine exposes `list_all_grants`, which collects across all policy types into `Vec<Grant<SpecificGrant>>` via a blanket `From<Grant<S>> for Grant<SpecificGrant>` conversion.
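The batch-loading idea behind `find_all_grants` can be sketched without Diesel: load the grants once, fetch every related row in a single second query, then group in memory. The row structs below are hypothetical stand-ins, not the real schema types.

```rust
// Avoid N+1: one parent query, one child query, in-memory grouping.
use std::collections::HashMap;

#[derive(Debug)]
pub struct GrantRow {
    pub id: i32,
}

#[derive(Debug, Clone, PartialEq)]
pub struct VolumeLimitRow {
    pub grant_id: i32,
    pub max_volume: u128,
}

pub fn attach_limits(
    grants: &[GrantRow],
    limits: Vec<VolumeLimitRow>, // imagine: SELECT ... WHERE grant_id IN (...)
) -> HashMap<i32, Vec<VolumeLimitRow>> {
    // Seed a bucket per grant so grants with no limits still appear.
    let mut by_grant: HashMap<i32, Vec<VolumeLimitRow>> =
        grants.iter().map(|g| (g.id, Vec::new())).collect();
    for limit in limits {
        if let Some(bucket) = by_grant.get_mut(&limit.grant_id) {
            bucket.push(limit);
        }
    }
    by_grant
}

fn main() {
    let grants = vec![GrantRow { id: 1 }, GrantRow { id: 2 }];
    let limits = vec![
        VolumeLimitRow { grant_id: 1, max_volume: 100 },
        VolumeLimitRow { grant_id: 1, max_volume: 10 },
    ];
    let grouped = attach_limits(&grants, limits);
    assert_eq!(grouped[&1].len(), 2);
    assert_eq!(grouped[&2].len(), 0);
    println!("ok");
}
```

Two round-trips total, regardless of how many grants exist, instead of one child query per grant.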
### Shared Constraints (enforced by the engine)

These are checked centrally in `check_shared_constraints` before policy evaluation:

| Constraint | Fields | Behaviour |
|---|---|---|
| Validity window | `valid_from`, `valid_until` | Emits `InvalidTime` if the current time is outside the range |
| Gas fee cap | `max_gas_fee_per_gas`, `max_priority_fee_per_gas` | Emits `GasLimitExceeded` if either cap is breached |
| Tx count rate limit | `rate_limit` (`count` + `window`) | Counts rows in `evm_transaction_log` within the window; emits `RateLimitExceeded` if at or above the limit |
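The three checks in the table can be sketched as below. Field names follow the table; the types are simplified (the real code uses DB timestamps, U256 fees, and a query against `evm_transaction_log` instead of the closure).

```rust
// Simplified stand-in for check_shared_constraints.

#[derive(Debug, PartialEq)]
pub enum SharedViolation {
    InvalidTime,
    GasLimitExceeded,
    RateLimitExceeded,
}

pub struct SharedGrant {
    pub valid_from: Option<i64>, // unix seconds
    pub valid_until: Option<i64>,
    pub max_gas_fee_per_gas: Option<u128>,
    pub max_priority_fee_per_gas: Option<u128>,
    pub rate_limit: Option<(u32, i64)>, // (count, window_secs)
}

pub fn check_shared_constraints(
    grant: &SharedGrant,
    now: i64,
    tx_max_fee: u128,
    tx_priority_fee: u128,
    // stand-in for counting log rows within the window
    txs_in_window: impl Fn(i64) -> u32,
) -> Vec<SharedViolation> {
    let mut violations = Vec::new();
    // Validity window: outside [valid_from, valid_until] is a violation.
    if grant.valid_from.is_some_and(|from| now < from)
        || grant.valid_until.is_some_and(|until| now > until)
    {
        violations.push(SharedViolation::InvalidTime);
    }
    // Gas fee caps: either cap breached is a violation.
    if grant.max_gas_fee_per_gas.is_some_and(|cap| tx_max_fee > cap)
        || grant.max_priority_fee_per_gas.is_some_and(|cap| tx_priority_fee > cap)
    {
        violations.push(SharedViolation::GasLimitExceeded);
    }
    // Count rate limit: at or above the limit is a violation.
    if let Some((count, window)) = grant.rate_limit {
        if txs_in_window(window) >= count {
            violations.push(SharedViolation::RateLimitExceeded);
        }
    }
    violations
}

fn main() {
    let grant = SharedGrant {
        valid_from: Some(100),
        valid_until: Some(200),
        max_gas_fee_per_gas: Some(50),
        max_priority_fee_per_gas: None,
        rate_limit: Some((2, 3600)),
    };
    // In-window, under the cap, one prior tx out of two allowed: clean.
    assert!(check_shared_constraints(&grant, 150, 40, 1, |_| 1).is_empty());
    // Expired, over the cap, and at the rate limit: all three fire.
    let v = check_shared_constraints(&grant, 300, 60, 1, |_| 2);
    assert_eq!(v.len(), 3);
    println!("ok");
}
```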
---

### Known Limitations

- **Only EIP-1559 transactions are supported.** Legacy and EIP-2930 types are rejected outright.
- **No opaque-calldata (unknown contract) grant type.** The architecture describes a category for unrecognised contracts, but no policy implements it yet. Any transaction that is not a plain ETH transfer or a known ERC-20 transfer is unconditionally rejected.
- **Token registry is static.** Tokens are recognised only if they appear in the hard-coded `arbiter_tokens_registry` crate. There is no mechanism to register additional contracts at runtime.
- **Nonce management is not implemented.** The architecture lists nonce deduplication as a core responsibility, but no nonce tracking or enforcement exists yet.

---

## Memory Protection

The unsealed root key must be held in a hardened memory cell resistant to dumps, page swaps, and hibernation.
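The hardened cell is not specified further here. As a minimal std-only sketch of one of its properties, key bytes can be wiped on drop via volatile writes so the compiler cannot elide the zeroing. Real protection against swaps and hibernation additionally needs OS calls (e.g. `mlock`/`VirtualLock`) and disabling core dumps, none of which are shown; the `SecretKey` type is hypothetical.

```rust
// Zeroize-on-drop wrapper: one property of a hardened memory cell.
use std::ptr;

pub struct SecretKey {
    bytes: Vec<u8>,
}

impl SecretKey {
    pub fn new(bytes: Vec<u8>) -> Self {
        SecretKey { bytes }
    }

    /// Expose the key only through a scoped borrow, never by value.
    pub fn with_key<R>(&self, f: impl FnOnce(&[u8]) -> R) -> R {
        f(&self.bytes)
    }
}

impl Drop for SecretKey {
    fn drop(&mut self) {
        for b in self.bytes.iter_mut() {
            // Volatile write: not optimised away as a dead store.
            unsafe { ptr::write_volatile(b, 0) };
        }
    }
}

fn main() {
    let key = SecretKey::new(vec![0x42; 32]);
    let checksum: u32 = key.with_key(|k| k.iter().map(|&b| b as u32).sum());
    assert_eq!(checksum, 0x42u32 * 32);
    drop(key); // bytes are zeroed before the allocation is freed
    println!("ok");
}
```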
178  app/.dart_tool/package_config.json  (Normal file)
@@ -0,0 +1,178 @@
{
  "configVersion": 2,
  "packages": [
    { "name": "async", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/async-2.13.0", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "boolean_selector", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/boolean_selector-2.1.2", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "characters", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/characters-1.4.0", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "clock", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/clock-1.1.2", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "collection", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/collection-1.19.1", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "cupertino_icons", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/cupertino_icons-1.0.8", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "fake_async", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/fake_async-1.3.3", "packageUri": "lib/", "languageVersion": "3.3" },
    { "name": "flutter", "rootUri": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable/packages/flutter", "packageUri": "lib/", "languageVersion": "3.8" },
    { "name": "flutter_lints", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/flutter_lints-6.0.0", "packageUri": "lib/", "languageVersion": "3.8" },
    { "name": "flutter_test", "rootUri": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable/packages/flutter_test", "packageUri": "lib/", "languageVersion": "3.8" },
    { "name": "leak_tracker", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/leak_tracker-11.0.2", "packageUri": "lib/", "languageVersion": "3.2" },
    { "name": "leak_tracker_flutter_testing", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/leak_tracker_flutter_testing-3.0.10", "packageUri": "lib/", "languageVersion": "3.2" },
    { "name": "leak_tracker_testing", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/leak_tracker_testing-3.0.2", "packageUri": "lib/", "languageVersion": "3.2" },
    { "name": "lints", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/lints-6.1.0", "packageUri": "lib/", "languageVersion": "3.8" },
    { "name": "matcher", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/matcher-0.12.17", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "material_color_utilities", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/material_color_utilities-0.11.1", "packageUri": "lib/", "languageVersion": "2.17" },
    { "name": "meta", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/meta-1.17.0", "packageUri": "lib/", "languageVersion": "3.5" },
    { "name": "path", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/path-1.9.1", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "sky_engine", "rootUri": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable/bin/cache/pkg/sky_engine", "packageUri": "lib/", "languageVersion": "3.8" },
    { "name": "source_span", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/source_span-1.10.2", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "stack_trace", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/stack_trace-1.12.1", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "stream_channel", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/stream_channel-2.1.4", "packageUri": "lib/", "languageVersion": "3.3" },
    { "name": "string_scanner", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/string_scanner-1.4.1", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "term_glyph", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/term_glyph-1.2.2", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "test_api", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/test_api-0.7.7", "packageUri": "lib/", "languageVersion": "3.5" },
    { "name": "vector_math", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/vector_math-2.2.0", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "vm_service", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/vm_service-15.0.2", "packageUri": "lib/", "languageVersion": "3.5" },
    { "name": "app", "rootUri": "../", "packageUri": "lib/", "languageVersion": "3.10" }
  ],
  "generator": "pub",
  "generatorVersion": "3.10.8",
  "flutterRoot": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable",
  "flutterVersion": "3.38.9",
  "pubCache": "file:///Users/kaska/.pub-cache"
}
230  app/.dart_tool/package_graph.json  (Normal file)
@@ -0,0 +1,230 @@
{
  "roots": ["app"],
  "packages": [
    { "name": "app", "version": "1.0.0+1", "dependencies": ["cupertino_icons", "flutter"], "devDependencies": ["flutter_lints", "flutter_test"] },
    { "name": "flutter_lints", "version": "6.0.0", "dependencies": ["lints"] },
    { "name": "flutter_test", "version": "0.0.0", "dependencies": ["clock", "collection", "fake_async", "flutter", "leak_tracker_flutter_testing", "matcher", "meta", "path", "stack_trace", "stream_channel", "test_api", "vector_math"] },
    { "name": "cupertino_icons", "version": "1.0.8", "dependencies": [] },
    { "name": "flutter", "version": "0.0.0", "dependencies": ["characters", "collection", "material_color_utilities", "meta", "sky_engine", "vector_math"] },
    { "name": "lints", "version": "6.1.0", "dependencies": [] },
    { "name": "stream_channel", "version": "2.1.4", "dependencies": ["async"] },
    { "name": "meta", "version": "1.17.0", "dependencies": [] },
    { "name": "collection", "version": "1.19.1", "dependencies": [] },
    { "name": "leak_tracker_flutter_testing", "version": "3.0.10", "dependencies": ["flutter", "leak_tracker", "leak_tracker_testing", "matcher", "meta"] },
    { "name": "vector_math", "version": "2.2.0", "dependencies": [] },
    { "name": "stack_trace", "version": "1.12.1", "dependencies": ["path"] },
    { "name": "clock", "version": "1.1.2", "dependencies": [] },
    { "name": "fake_async", "version": "1.3.3", "dependencies": ["clock", "collection"] },
    { "name": "path", "version": "1.9.1", "dependencies": [] },
    { "name": "matcher", "version": "0.12.17", "dependencies": ["async", "meta", "stack_trace", "term_glyph", "test_api"] },
    { "name": "test_api", "version": "0.7.7", "dependencies": ["async", "boolean_selector", "collection", "meta", "source_span", "stack_trace", "stream_channel", "string_scanner", "term_glyph"] },
    { "name": "sky_engine", "version": "0.0.0", "dependencies": [] },
    { "name": "material_color_utilities", "version": "0.11.1", "dependencies": ["collection"] },
    { "name": "characters", "version": "1.4.0", "dependencies": [] },
    { "name": "async", "version": "2.13.0", "dependencies": ["collection", "meta"] },
    { "name": "leak_tracker_testing", "version": "3.0.2", "dependencies": ["leak_tracker", "matcher", "meta"] },
    { "name": "leak_tracker", "version": "11.0.2", "dependencies": ["clock", "collection", "meta", "path", "vm_service"] },
    { "name": "term_glyph", "version": "1.2.2", "dependencies": [] },
    { "name": "string_scanner", "version": "1.4.1", "dependencies": ["source_span"] },
    { "name": "source_span", "version": "1.10.2", "dependencies": ["collection", "path", "term_glyph"] },
    { "name": "boolean_selector", "version": "2.1.2", "dependencies": ["source_span", "string_scanner"] },
    { "name": "vm_service", "version": "15.0.2", "dependencies": [] }
  ],
  "configVersion": 1
}
1  app/.dart_tool/version  (Normal file)
@@ -0,0 +1 @@
3.38.9
11  app/macos/Flutter/ephemeral/Flutter-Generated.xcconfig  (Normal file)
@@ -0,0 +1,11 @@
// This is a generated file; do not edit or check into version control.
FLUTTER_ROOT=/Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable
FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/app
COCOAPODS_PARALLEL_CODE_SIGN=true
FLUTTER_BUILD_DIR=build
FLUTTER_BUILD_NAME=1.0.0
FLUTTER_BUILD_NUMBER=1
DART_OBFUSCATION=false
TRACK_WIDGET_CREATION=true
TREE_SHAKE_ICONS=false
PACKAGE_CONFIG=.dart_tool/package_config.json
12  app/macos/Flutter/ephemeral/flutter_export_environment.sh  (Executable file)
@@ -0,0 +1,12 @@
#!/bin/sh
# This is a generated file; do not edit or check into version control.
export "FLUTTER_ROOT=/Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable"
export "FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/app"
export "COCOAPODS_PARALLEL_CODE_SIGN=true"
export "FLUTTER_BUILD_DIR=build"
export "FLUTTER_BUILD_NAME=1.0.0"
export "FLUTTER_BUILD_NUMBER=1"
export "DART_OBFUSCATION=false"
export "TRACK_WIDGET_CREATION=true"
export "TREE_SHAKE_ICONS=false"
export "PACKAGE_CONFIG=.dart_tool/package_config.json"
@@ -55,15 +55,6 @@ backend = "aqua:protocolbuffers/protobuf/protoc"
 "platforms.macos-x64" = { checksum = "sha256:312f04713946921cc0187ef34df80241ddca1bab6f564c636885fd2cc90d3f88", url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-osx-x86_64.zip"}
 "platforms.windows-x64" = { checksum = "sha256:1ebd7c87baffb9f1c47169b640872bf5fb1e4408079c691af527be9561d8f6f7", url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-win64.zip"}

-[[tools.python]]
-version = "3.14.3"
-backend = "core:python"
-"platforms.linux-arm64" = { checksum = "sha256:be0f4dc2932f762292b27d46ea7d3e8e66ddf3969a5eb0254a229015ed402625", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-aarch64-unknown-linux-gnu-install_only_stripped.tar.gz"}
-"platforms.linux-x64" = { checksum = "sha256:0a73413f89efd417871876c9accaab28a9d1e3cd6358fbfff171a38ec99302f0", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-unknown-linux-gnu-install_only_stripped.tar.gz"}
-"platforms.macos-arm64" = { checksum = "sha256:4703cdf18b26798fde7b49b6b66149674c25f97127be6a10dbcf29309bdcdcdb", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-aarch64-apple-darwin-install_only_stripped.tar.gz"}
-"platforms.macos-x64" = { checksum = "sha256:76f1cc26e3d262eae8ca546a93e8bded10cf0323613f7e246fea2e10a8115eb7", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-apple-darwin-install_only_stripped.tar.gz"}
-"platforms.windows-x64" = { checksum = "sha256:950c5f21a015c1bdd1337f233456df2470fab71e4d794407d27a84cb8b9909a0", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-pc-windows-msvc-install_only_stripped.tar.gz"}
-
 [[tools.rust]]
 version = "1.93.0"
 backend = "core:rust"
@@ -9,4 +9,3 @@ protoc = "29.6"
 "cargo:cargo-nextest" = "0.9.126"
 "cargo:cargo-shear" = "latest"
 "cargo:cargo-insta" = "1.46.3"
-python = "3.14.3"
@@ -2,6 +2,7 @@ syntax = "proto3";

 package arbiter;

+import "auth.proto";
 import "client.proto";
 import "user_agent.proto";

@@ -11,6 +12,6 @@ message ServerInfo {
 }

 service ArbiterService {
-  rpc Client(stream arbiter.client.ClientRequest) returns (stream arbiter.client.ClientResponse);
-  rpc UserAgent(stream arbiter.user_agent.UserAgentRequest) returns (stream arbiter.user_agent.UserAgentResponse);
+  rpc Client(stream ClientRequest) returns (stream ClientResponse);
+  rpc UserAgent(stream UserAgentRequest) returns (stream UserAgentResponse);
 }
35  protobufs/auth.proto  (Normal file)
@@ -0,0 +1,35 @@
syntax = "proto3";

package arbiter.auth;

import "google/protobuf/timestamp.proto";

message AuthChallengeRequest {
  bytes pubkey = 1;
  optional string bootstrap_token = 2;
}

message AuthChallenge {
  bytes pubkey = 1;
  int32 nonce = 2;
}

message AuthChallengeSolution {
  bytes signature = 1;
}

message AuthOk {}

message ClientMessage {
  oneof payload {
    AuthChallengeRequest auth_challenge_request = 1;
    AuthChallengeSolution auth_challenge_solution = 2;
  }
}

message ServerMessage {
  oneof payload {
    AuthChallenge auth_challenge = 1;
    AuthOk auth_ok = 2;
  }
}
@@ -1,38 +1,17 @@
 syntax = "proto3";

-package arbiter.client;
+package arbiter;

-import "evm.proto";
-
-message AuthChallengeRequest {
-  bytes pubkey = 1;
-}
-
-message AuthChallenge {
-  bytes pubkey = 1;
-  int32 nonce = 2;
-}
-
-message AuthChallengeSolution {
-  bytes signature = 1;
-}
-
-message AuthOk {}
+import "auth.proto";

 message ClientRequest {
   oneof payload {
-    AuthChallengeRequest auth_challenge_request = 1;
-    AuthChallengeSolution auth_challenge_solution = 2;
-    arbiter.evm.EvmSignTransactionRequest evm_sign_transaction = 3;
-    arbiter.evm.EvmAnalyzeTransactionRequest evm_analyze_transaction = 4;
+    arbiter.auth.ClientMessage auth_message = 1;
   }
 }

 message ClientResponse {
   oneof payload {
-    AuthChallenge auth_challenge = 1;
-    AuthOk auth_ok = 2;
-    arbiter.evm.EvmSignTransactionResponse evm_sign_transaction = 3;
-    arbiter.evm.EvmAnalyzeTransactionResponse evm_analyze_transaction = 4;
+    arbiter.auth.ServerMessage auth_message = 1;
   }
 }
@@ -1,216 +0,0 @@
syntax = "proto3";

package arbiter.evm;

import "google/protobuf/empty.proto";
import "google/protobuf/timestamp.proto";

enum EvmError {
  EVM_ERROR_UNSPECIFIED = 0;
  EVM_ERROR_VAULT_SEALED = 1;
  EVM_ERROR_INTERNAL = 2;
}

message WalletEntry {
  bytes address = 1; // 20-byte Ethereum address
}

message WalletList {
  repeated WalletEntry wallets = 1;
}

message WalletCreateResponse {
  oneof result {
    WalletEntry wallet = 1;
    EvmError error = 2;
  }
}

message WalletListResponse {
  oneof result {
    WalletList wallets = 1;
    EvmError error = 2;
  }
}

// --- Grant types ---

message TransactionRateLimit {
  uint32 count = 1;
  int64 window_secs = 2;
}

message VolumeRateLimit {
  bytes max_volume = 1; // U256 as big-endian bytes
  int64 window_secs = 2;
}

message SharedSettings {
  int32 wallet_id = 1;
  uint64 chain_id = 2;
  optional google.protobuf.Timestamp valid_from = 3;
  optional google.protobuf.Timestamp valid_until = 4;
  optional bytes max_gas_fee_per_gas = 5; // U256 as big-endian bytes
  optional bytes max_priority_fee_per_gas = 6; // U256 as big-endian bytes
  optional TransactionRateLimit rate_limit = 7;
}

message EtherTransferSettings {
  repeated bytes targets = 1; // list of 20-byte Ethereum addresses
  VolumeRateLimit limit = 2;
}

message TokenTransferSettings {
  bytes token_contract = 1; // 20-byte Ethereum address
  optional bytes target = 2; // 20-byte Ethereum address; absent means any recipient allowed
  repeated VolumeRateLimit volume_limits = 3;
}

message SpecificGrant {
  oneof grant {
    EtherTransferSettings ether_transfer = 1;
    TokenTransferSettings token_transfer = 2;
  }
}

message EtherTransferMeaning {
  bytes to = 1; // 20-byte Ethereum address
  bytes value = 2; // U256 as big-endian bytes
}

message TokenInfo {
  string symbol = 1;
  bytes address = 2; // 20-byte Ethereum address
  uint64 chain_id = 3;
}

// Mirror of token_transfers::Meaning
message TokenTransferMeaning {
  TokenInfo token = 1;
  bytes to = 2; // 20-byte Ethereum address
  bytes value = 3; // U256 as big-endian bytes
}

// Mirror of policies::SpecificMeaning
message SpecificMeaning {
  oneof meaning {
    EtherTransferMeaning ether_transfer = 1;
    TokenTransferMeaning token_transfer = 2;
  }
}

// --- Eval error types ---

message GasLimitExceededViolation {
  optional bytes max_gas_fee_per_gas = 1; // U256 as big-endian bytes
  optional bytes max_priority_fee_per_gas = 2; // U256 as big-endian bytes
}

message EvalViolation {
  oneof kind {
    bytes invalid_target = 1; // 20-byte Ethereum address
    GasLimitExceededViolation gas_limit_exceeded = 2;
    google.protobuf.Empty rate_limit_exceeded = 3;
    google.protobuf.Empty volumetric_limit_exceeded = 4;
    google.protobuf.Empty invalid_time = 5;
    google.protobuf.Empty invalid_transaction_type = 6;
  }
}

// Transaction was classified but no grant covers it
message NoMatchingGrantError {
  SpecificMeaning meaning = 1;
}

// Transaction was classified and a grant was found, but constraints were violated
message PolicyViolationsError {
  SpecificMeaning meaning = 1;
  repeated EvalViolation violations = 2;
}

// Top-level error returned when transaction evaluation fails
message TransactionEvalError {
  oneof kind {
    google.protobuf.Empty contract_creation_not_supported = 1;
    google.protobuf.Empty unsupported_transaction_type = 2;
    NoMatchingGrantError no_matching_grant = 3;
    PolicyViolationsError policy_violations = 4;
  }
}

// --- UserAgent grant management ---

message EvmGrantCreateRequest {
  int32 client_id = 1;
  SharedSettings shared = 2;
  SpecificGrant specific = 3;
}

message EvmGrantCreateResponse {
  oneof result {
    int32 grant_id = 1;
    EvmError error = 2;
  }
}

message EvmGrantDeleteRequest {
  int32 grant_id = 1;
}

message EvmGrantDeleteResponse {
  oneof result {
    google.protobuf.Empty ok = 1;
    EvmError error = 2;
  }
}

// Basic grant info returned in grant listings
message GrantEntry {
  int32 id = 1;
  int32 client_id = 2;
  SharedSettings shared = 3;
  SpecificGrant specific = 4;
}

message EvmGrantListRequest {
  optional int32 wallet_id = 1;
}

message EvmGrantListResponse {
  oneof result {
    EvmGrantList grants = 1;
    EvmError error = 2;
  }
}

message EvmGrantList {
  repeated GrantEntry grants = 1;
}

// --- Client transaction operations ---

message EvmSignTransactionRequest {
  bytes wallet_address = 1; // 20-byte Ethereum address
  bytes rlp_transaction = 2; // RLP-encoded EIP-1559 transaction (unsigned)
}

// oneof because signing and evaluation happen atomically — a signing failure
// is always either an eval error or an internal error, never a partial success
message EvmSignTransactionResponse {
  oneof result {
    bytes signature = 1; // 65-byte signature: r[32] || s[32] || v[1]
    TransactionEvalError eval_error = 2;
    EvmError error = 3;
  }
}

message EvmAnalyzeTransactionRequest {
  bytes wallet_address = 1; // 20-byte Ethereum address
  bytes rlp_transaction = 2; // RLP-encoded EIP-1559 transaction
}

message EvmAnalyzeTransactionResponse {
  oneof result {
    SpecificMeaning meaning = 1;
    TransactionEvalError eval_error = 2;
    EvmError error = 3;
  }
}
@@ -1,25 +1,9 @@
 syntax = "proto3";

-package arbiter.user_agent;
+package arbiter;

+import "auth.proto";
 import "google/protobuf/empty.proto";
-import "evm.proto";

-message AuthChallengeRequest {
-  bytes pubkey = 1;
-  optional string bootstrap_token = 2;
-}
-
-message AuthChallenge {
-  bytes pubkey = 1;
-  int32 nonce = 2;
-}
-
-message AuthChallengeSolution {
-  bytes signature = 1;
-}
-
-message AuthOk {}
-
 message UnsealStart {
   bytes client_pubkey = 1;
@@ -51,29 +35,17 @@ enum VaultState {

 message UserAgentRequest {
   oneof payload {
-    AuthChallengeRequest auth_challenge_request = 1;
-    AuthChallengeSolution auth_challenge_solution = 2;
-    UnsealStart unseal_start = 3;
-    UnsealEncryptedKey unseal_encrypted_key = 4;
-    google.protobuf.Empty query_vault_state = 5;
-    google.protobuf.Empty evm_wallet_create = 6;
-    google.protobuf.Empty evm_wallet_list = 7;
-    arbiter.evm.EvmGrantCreateRequest evm_grant_create = 8;
-    arbiter.evm.EvmGrantDeleteRequest evm_grant_delete = 9;
-    arbiter.evm.EvmGrantListRequest evm_grant_list = 10;
+    arbiter.auth.ClientMessage auth_message = 1;
+    UnsealStart unseal_start = 2;
+    UnsealEncryptedKey unseal_encrypted_key = 3;
+    google.protobuf.Empty query_vault_state = 4;
   }
 }
 message UserAgentResponse {
   oneof payload {
-    AuthChallenge auth_challenge = 1;
-    AuthOk auth_ok = 2;
-    UnsealStartResponse unseal_start_response = 3;
-    UnsealResult unseal_result = 4;
-    VaultState vault_state = 5;
-    arbiter.evm.WalletCreateResponse evm_wallet_create = 6;
-    arbiter.evm.WalletListResponse evm_wallet_list = 7;
-    arbiter.evm.EvmGrantCreateResponse evm_grant_create = 8;
-    arbiter.evm.EvmGrantDeleteResponse evm_grant_delete = 9;
-    arbiter.evm.EvmGrantListResponse evm_grant_list = 10;
+    arbiter.auth.ServerMessage auth_message = 1;
+    UnsealStartResponse unseal_start_response = 2;
+    UnsealResult unseal_result = 3;
+    VaultState vault_state = 4;
   }
 }
@@ -1,150 +0,0 @@
#!/usr/bin/env python3
"""
Fetch the Uniswap default token list and emit Rust `TokenInfo` statics.

Usage:
    python3 gen_erc20_registry.py                     # fetch from IPFS
    python3 gen_erc20_registry.py tokens.json         # local file
    python3 gen_erc20_registry.py tokens.json out.rs  # custom output file
"""

import json
import re
import sys
import unicodedata
import urllib.request

UNISWAP_URL = "https://ipfs.io/ipns/tokens.uniswap.org"

SOLANA_CHAIN_ID = 501000101
IDENTIFIER_RE = re.compile(r"[^A-Za-z0-9]+")


def load_tokens(source=None):
    if source:
        with open(source) as f:
            return json.load(f)
    req = urllib.request.Request(
        UNISWAP_URL,
        headers={"Accept": "application/json", "User-Agent": "gen_tokens/1.0"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())


def escape(s: str) -> str:
    return s.replace("\\", "\\\\").replace('"', '\\"')


def to_screaming_case(name: str) -> str:
    normalized = unicodedata.normalize("NFKD", name or "")
    ascii_name = normalized.encode("ascii", "ignore").decode("ascii")
    snake = IDENTIFIER_RE.sub("_", ascii_name).strip("_").upper()
    if not snake:
        snake = "TOKEN"
    if snake[0].isdigit():
        snake = f"TOKEN_{snake}"
    return snake


def static_name_for_token(token: dict, used_names: set[str]) -> str:
    base = to_screaming_case(token.get("name", ""))
    if base not in used_names:
        used_names.add(base)
        return base

    address = token["address"]
    suffix = f"{token['chainId']}_{address[2:].upper()[-8:]}"
    candidate = f"{base}_{suffix}"

    i = 2
    while candidate in used_names:
        candidate = f"{base}_{suffix}_{i}"
        i += 1

    used_names.add(candidate)
    return candidate


def main():
    source = sys.argv[1] if len(sys.argv) > 1 else None
    output = sys.argv[2] if len(sys.argv) > 2 else "generated_tokens.rs"
    data = load_tokens(source)
    tokens = data["tokens"]

    # Deduplicate by (chainId, address)
    seen = set()
    unique = []
    for t in tokens:
        key = (t["chainId"], t["address"].lower())
        if key not in seen:
            seen.add(key)
            unique.append(t)

    unique.sort(key=lambda t: (t["chainId"], t.get("symbol", "").upper()))
    evm_tokens = [t for t in unique if t["chainId"] != SOLANA_CHAIN_ID]

    ver = data["version"]
    lines = []
    w = lines.append

    w(
        f"// Auto-generated from Uniswap token list v{ver['major']}.{ver['minor']}.{ver['patch']}"
    )
    w(f"// {len(evm_tokens)} tokens")
    w("// DO NOT EDIT - regenerate with gen_erc20_registry.py")
    w("")

    used_static_names = set()
    token_statics = []
    for t in evm_tokens:
        static_name = static_name_for_token(t, used_static_names)
        token_statics.append((static_name, t))

    for static_name, t in token_statics:
        addr = t["address"]
        name = escape(t.get("name", ""))
        symbol = escape(t.get("symbol", ""))
        decimals = t.get("decimals", 18)
        logo = t.get("logoURI")
        chain = t["chainId"]

        logo_val = f'Some("{escape(logo)}")' if logo else "None"

        w(f"pub static {static_name}: TokenInfo = TokenInfo {{")
        w(f'    name: "{name}",')
        w(f'    symbol: "{symbol}",')
        w(f"    decimals: {decimals},")
        w(f'    contract: address!("{addr}"),')
        w(f"    chain: {chain},")
        w(f"    logo_uri: {logo_val},")
        w("};")
        w("")

    w("pub static TOKENS: &[&TokenInfo] = &[")
    for static_name, _ in token_statics:
        w(f"    &{static_name},")
    w("];")
    w("")
    w("pub fn get_token(")
    w("    chain_id: alloy::primitives::ChainId,")
    w("    address: alloy::primitives::Address,")
    w(") -> Option<&'static TokenInfo> {")
    w("    match (chain_id, address) {")
    for static_name, t in token_statics:
        w(
            f'        ({t["chainId"]}, addr) if addr == address!("{t["address"]}") => Some(&{static_name}),'
        )
    w("        _ => None,")
    w("    }")
    w("}")
    w("")

    with open(output, "w") as f:
        f.write("\n".join(lines))

    print(f"Wrote {len(token_statics)} tokens to {output}")


if __name__ == "__main__":
    main()
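The identifier mangling in `to_screaming_case` is the part of this deleted script most likely to surprise; the snippet below reproduces that function verbatim from the file above so its edge cases (empty names, leading digits) can be checked in isolation:

```python
import re
import unicodedata

IDENTIFIER_RE = re.compile(r"[^A-Za-z0-9]+")

def to_screaming_case(name: str) -> str:
    # Same normalization pipeline as the deleted generator script:
    # NFKD-normalize, drop non-ASCII, collapse non-alphanumerics to "_",
    # uppercase, and guard against empty or digit-leading identifiers.
    normalized = unicodedata.normalize("NFKD", name or "")
    ascii_name = normalized.encode("ascii", "ignore").decode("ascii")
    snake = IDENTIFIER_RE.sub("_", ascii_name).strip("_").upper()
    if not snake:
        snake = "TOKEN"
    if snake[0].isdigit():
        snake = f"TOKEN_{snake}"
    return snake

to_screaming_case("USD Coin")  # "USD_COIN"
to_screaming_case("1inch")     # "TOKEN_1INCH"
to_screaming_case("")          # "TOKEN"
```

Collisions between tokens that mangle to the same name are then disambiguated by `static_name_for_token` with a chain-id/address suffix.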
2571 server/Cargo.lock (generated)
File diff suppressed because it is too large
@@ -1,17 +1,15 @@
[workspace]
members = [
    "crates/*",
    "crates/arbiter-client",
    "crates/arbiter-proto",
    "crates/arbiter-server",
    "crates/arbiter-useragent",
]
resolver = "3"


[workspace.dependencies]
tonic = { version = "0.14.3", features = [
    "deflate",
    "gzip",
    "tls-connect-info",
    "zstd",
] }
tonic = { version = "0.14.3", features = ["deflate", "gzip", "tls-connect-info", "zstd"] }
tracing = "0.1.44"
tokio = { version = "1.49.0", features = ["full"] }
ed25519-dalek = { version = "3.0.0-pre.6", features = ["rand_core"] }

@@ -28,7 +26,6 @@ kameo = "0.19.2"
x25519-dalek = { version = "2.0.1", features = ["getrandom"] }
rstest = "0.26.1"
rustls-pki-types = "1.14.0"
alloy = "1.7.3"
rcgen = { version = "0.14.7", features = [
    "aws_lc_rs",
    "pem",
||||
@@ -18,7 +18,7 @@ thiserror.workspace = true
rustls-pki-types.workspace = true
base64 = "0.22.1"
tracing.workspace = true
async-trait.workspace = true


[build-dependencies]
tonic-prost-build = "0.14.3"

@@ -11,9 +11,7 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
        .compile_protos(
            &[
                format!("{}/arbiter.proto", PROTOBUF_DIR),
                format!("{}/user_agent.proto", PROTOBUF_DIR),
                format!("{}/client.proto", PROTOBUF_DIR),
                format!("{}/evm.proto", PROTOBUF_DIR),
                format!("{}/auth.proto", PROTOBUF_DIR),
            ],
            &[PROTOBUF_DIR.to_string()],
        )

@@ -3,19 +3,13 @@ pub mod url;

use base64::{Engine, prelude::BASE64_STANDARD};

use crate::proto::auth::AuthChallenge;

pub mod proto {
    tonic::include_proto!("arbiter");

    pub mod user_agent {
        tonic::include_proto!("arbiter.user_agent");
    }

    pub mod client {
        tonic::include_proto!("arbiter.client");
    }

    pub mod evm {
        tonic::include_proto!("arbiter.evm");
    pub mod auth {
        tonic::include_proto!("arbiter.auth");
    }
}

@@ -34,7 +28,7 @@ pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {
    Ok(arbiter_home)
}

pub fn format_challenge(nonce: i32, pubkey: &[u8]) -> Vec<u8> {
    let concat_form = format!("{}:{}", nonce, BASE64_STANDARD.encode(pubkey));
    concat_form.into_bytes()
pub fn format_challenge(challenge: &AuthChallenge) -> Vec<u8> {
    let concat_form = format!("{}:{}", challenge.nonce, BASE64_STANDARD.encode(&challenge.pubkey));
    concat_form.into_bytes()
}
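Both versions of `format_challenge` produce the same wire format: the decimal nonce and the base64-encoded pubkey joined by a colon. A minimal Python sketch of that encoding (hypothetical helper, for illustration only, not part of the codebase):

```python
import base64

def format_challenge(nonce: int, pubkey: bytes) -> bytes:
    # "<nonce>:<base64(pubkey)>" -- same byte layout the Rust helper emits
    return f"{nonce}:{base64.b64encode(pubkey).decode()}".encode()

format_challenge(7, b"\x01\x02")  # b"7:AQI="
```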

@@ -1,293 +1,371 @@
//! Transport-facing abstractions for protocol/session code.
//! Transport abstraction layer for bridging gRPC bidirectional streaming with kameo actors.
//!
//! This module separates three concerns:
//! This module provides a clean separation between the gRPC transport layer and business logic
//! by modeling the connection as two linked kameo actors:
//!
//! - protocol/session logic wants a small duplex interface ([`Bi`])
//! - transport adapters push concrete stream items to an underlying IO layer
//! - transport boundaries translate between protocol-facing and transport-facing
//!   item types via direction-specific converters
//! - A **transport actor** ([`GrpcTransportActor`]) that owns the gRPC stream and channel,
//!   forwarding inbound messages to the business actor and outbound messages to the client.
//! - A **business logic actor** that receives inbound messages from the transport actor and
//!   sends outbound messages back through the transport actor.
//!
//! [`Bi`] is intentionally minimal and transport-agnostic:
//! - [`Bi::recv`] yields inbound protocol messages
//! - [`Bi::send`] accepts outbound protocol/domain items
//! The [`wire()`] function sets up bidirectional linking between the two actors, ensuring
//! that if either actor dies, the other is notified and can shut down gracefully.
//!
//! # Generic Ordering Rule
//! # Terminology
//!
//! This module uses a single convention consistently: when a type or trait is
//! parameterized by protocol message directions, the generic parameters are
//! declared as `Inbound` first, then `Outbound`.
//! - **InboundMessage**: a message received by the transport actor from the channel/socket
//!   and forwarded to the business actor.
//! - **OutboundMessage**: a message produced by the business actor and sent to the transport
//!   actor to be forwarded to the channel/socket.
//!
//! For [`Bi`], that means `Bi<Inbound, Outbound>`:
//! - `recv() -> Option<Inbound>`
//! - `send(Outbound)`
//!
//! For adapter types that are parameterized by direction-specific converters,
//! inbound-related converter parameters are declared before outbound-related
//! converter parameters.
//!
//! [`RecvConverter`] and [`SendConverter`] are infallible conversion traits used
//! by adapters to map between protocol-facing and transport-facing item types.
//! The traits themselves are not result-aware; adapters decide how transport
//! errors are handled before (or instead of) conversion.
//!
//! [`grpc::GrpcAdapter`] combines:
//! - a tonic inbound stream
//! - a Tokio sender for outbound transport items
//! - a [`RecvConverter`] for the receive path
//! - a [`SendConverter`] for the send path
//!
//! [`DummyTransport`] is a no-op implementation useful for tests and local actor
//! execution where no real network stream exists.
//!
//! # Component Interaction
//! # Architecture
//!
//! ```text
//! inbound (network -> protocol)
//! ============================
//!
//! tonic::Streaming<RecvTransport>
//!   -> grpc::GrpcAdapter::recv()
//!        |
//!        +--> on `Ok(item)`: RecvConverter::convert(RecvTransport) -> Inbound
//!        +--> on `Err(status)`: log error and close stream (`None`)
//!   -> Bi::recv()
//!   -> protocol/session actor
//!
//! outbound (protocol -> network)
//! ==============================
//!
//! protocol/session actor
//!   -> Bi::send(Outbound)
//!   -> grpc::GrpcAdapter::send()
//!        |
//!        +--> SendConverter::convert(Outbound) -> SendTransport
//!   -> Tokio mpsc::Sender<SendTransport>
//!   -> tonic response stream
//! gRPC Stream ──InboundMessage──▶ GrpcTransportActor ──tell(InboundMessage)──▶ BusinessActor
//!                                        ▲                                          │
//!                                        └────tell(Result<OutboundMessage, _>)──────┘
//!                                        │
//!                                   mpsc::Sender ──▶ Client
//! ```
//!
//! # Design Notes
//! # Example
//!
//! - `send()` returns [`Error`] only for transport delivery failures (for
//!   example, when the outbound channel is closed).
//! - [`grpc::GrpcAdapter`] logs tonic receive errors and treats them as stream
//!   closure (`None`).
//! - When protocol-facing and transport-facing types are identical, use
//!   [`IdentityRecvConverter`] / [`IdentitySendConverter`].
//! ```rust,ignore
//! let (tx, rx) = mpsc::channel(1000);
//! let context = server_context.clone();
//!
//! wire(
//!     |transport_ref| MyBusinessActor::new(context, transport_ref),
//!     |business_recipient, business_id| GrpcTransportActor {
//!         sender: tx,
//!         receiver: grpc_stream,
//!         business_logic_actor: business_recipient,
//!         business_logic_actor_id: business_id,
//!     },
//! ).await;
//!
//! Ok(Response::new(ReceiverStream::new(rx)))
//! ```

use std::marker::PhantomData;
use futures::{Stream, StreamExt};
use kameo::{
    Actor,
    actor::{ActorRef, PreparedActor, Recipient, Spawn, WeakActorRef},
    mailbox::Signal,
    prelude::Message,
};
use tokio::{
    select,
    sync::mpsc::{self, error::SendError},
};
use tonic::{Status, Streaming};
use tracing::{debug, error};

use async_trait::async_trait;

/// Errors returned by transport adapters implementing [`Bi`].
#[derive(thiserror::Error, Debug)]
pub enum Error {
    #[error("Transport channel is closed")]
    ChannelClosed,
}

/// Minimal bidirectional transport abstraction used by protocol code.
/// A bidirectional stream abstraction for sans-io testing.
///
/// `Bi<Inbound, Outbound>` models a duplex channel with:
/// - inbound items of type `Inbound` read via [`Bi::recv`]
/// - outbound items of type `Outbound` written via [`Bi::send`]
#[async_trait]
pub trait Bi<Inbound, Outbound>: Send + Sync + 'static {
    async fn send(&mut self, item: Outbound) -> Result<(), Error>;

    async fn recv(&mut self) -> Option<Inbound>;
/// Combines a [`Stream`] of incoming messages with the ability to [`send`](Bi::send)
/// outgoing responses. This trait allows business logic to be tested without a real
/// gRPC connection by swapping in an in-memory implementation.
///
/// # Type Parameters
/// - `T`: `InboundMessage` received from the channel/socket (e.g., `UserAgentRequest`)
/// - `U`: `OutboundMessage` sent to the channel/socket (e.g., `UserAgentResponse`)
pub trait Bi<T, U>: Stream<Item = Result<T, Status>> + Send + Sync + 'static {
    type Error;
    fn send(
        &mut self,
        item: Result<U, Status>,
    ) -> impl std::future::Future<Output = Result<(), Self::Error>> + Send;
}

/// Converts transport-facing inbound items into protocol-facing inbound items.
pub trait RecvConverter: Send + Sync + 'static {
    type Input;
    type Output;

    fn convert(&self, item: Self::Input) -> Self::Output;
/// Concrete [`Bi`] implementation backed by a tonic gRPC [`Streaming`] and an [`mpsc::Sender`].
///
/// This is the production implementation used in gRPC service handlers. The `request_stream`
/// receives messages from the client, and `response_sender` sends responses back.
pub struct BiStream<T, U> {
    pub request_stream: Streaming<T>,
    pub response_sender: mpsc::Sender<Result<U, Status>>,
}

/// Converts protocol/domain outbound items into transport-facing outbound items.
pub trait SendConverter: Send + Sync + 'static {
    type Input;
    type Output;
impl<T, U> Stream for BiStream<T, U>
where
    T: Send + 'static,
    U: Send + 'static,
{
    type Item = Result<T, Status>;

    fn convert(&self, item: Self::Input) -> Self::Output;
    fn poll_next(
        mut self: std::pin::Pin<&mut Self>,
        cx: &mut std::task::Context<'_>,
    ) -> std::task::Poll<Option<Self::Item>> {
        self.request_stream.poll_next_unpin(cx)
    }
}

/// A [`RecvConverter`] that forwards values unchanged.
pub struct IdentityRecvConverter<T> {
    _marker: PhantomData<T>,
impl<T, U> Bi<T, U> for BiStream<T, U>
where
    T: Send + 'static,
    U: Send + 'static,
{
    type Error = SendError<Result<U, Status>>;

    async fn send(&mut self, item: Result<U, Status>) -> Result<(), Self::Error> {
        self.response_sender.send(item).await
    }
}

impl<T> IdentityRecvConverter<T> {
    pub fn new() -> Self {
/// Marker trait for transport actors that can receive outbound messages of type `T`.
///
/// Implement this on your transport actor to indicate it can handle outbound messages
/// produced by the business actor. Requires the actor to implement [`Message<Result<T, E>>`]
/// so business logic can forward responses via [`tell()`](ActorRef::tell).
///
/// # Example
///
/// ```rust,ignore
/// #[derive(Actor)]
/// struct MyTransportActor { /* ... */ }
///
/// impl Message<Result<MyResponse, MyError>> for MyTransportActor {
///     type Reply = ();
///     async fn handle(&mut self, msg: Result<MyResponse, MyError>, _ctx: &mut Context<Self, Self::Reply>) -> Self::Reply {
///         // forward outbound message to channel/socket
///     }
/// }
///
/// impl TransportActor<MyResponse, MyError> for MyTransportActor {}
/// ```
pub trait TransportActor<Outbound: Send + 'static, DomainError: Send + 'static>:
    Actor + Send + Message<Result<Outbound, DomainError>>
{
}

/// A kameo actor that bridges a gRPC bidirectional stream with a business logic actor.
///
/// This actor owns the gRPC [`Streaming`] receiver and an [`mpsc::Sender`] for responses.
/// It multiplexes between its own mailbox (for outbound messages from the business actor)
/// and the gRPC stream (for inbound client messages) using [`tokio::select!`].
///
/// # Message Flow
///
/// - **Inbound**: Messages from the gRPC stream are forwarded to `business_logic_actor`
///   via [`tell()`](Recipient::tell).
/// - **Outbound**: The business actor sends `Result<Outbound, DomainError>` messages to this
///   actor, which forwards them through the `sender` channel to the gRPC response stream.
///
/// # Lifecycle
///
/// - If the business logic actor dies (detected via actor linking), this actor stops,
///   which closes the gRPC stream.
/// - If the gRPC stream closes or errors, this actor stops, which (via linking) notifies
///   the business actor.
/// - Error responses (`Err(DomainError)`) are forwarded to the client and then the actor stops,
///   closing the connection.
///
/// # Type Parameters
/// - `Outbound`: `OutboundMessage` sent to the client (e.g., `UserAgentResponse`)
/// - `Inbound`: `InboundMessage` received from the client (e.g., `UserAgentRequest`)
/// - `E`: The domain error type, must implement `Into<tonic::Status>` for gRPC conversion
pub struct GrpcTransportActor<Outbound, Inbound, DomainError>
where
    Outbound: Send + 'static,
    Inbound: Send + 'static,
    DomainError: Into<tonic::Status> + Send + 'static,
{
    sender: mpsc::Sender<Result<Outbound, tonic::Status>>,
    receiver: tonic::Streaming<Inbound>,
    business_logic_actor: Recipient<Inbound>,
    _error: std::marker::PhantomData<DomainError>,
}

impl<Outbound, Inbound, DomainError> GrpcTransportActor<Outbound, Inbound, DomainError>
where
    Outbound: Send + 'static,
    Inbound: Send + 'static,
    DomainError: Into<tonic::Status> + Send + 'static,
{
    pub fn new(
        sender: mpsc::Sender<Result<Outbound, tonic::Status>>,
        receiver: tonic::Streaming<Inbound>,
        business_logic_actor: Recipient<Inbound>,
    ) -> Self {
        Self {
            _marker: PhantomData,
            sender,
            receiver,
            business_logic_actor,
            _error: std::marker::PhantomData,
        }
    }
}

impl<T> Default for IdentityRecvConverter<T> {
    fn default() -> Self {
        Self::new()
    }
}

impl<T> RecvConverter for IdentityRecvConverter<T>
impl<Outbound, Inbound, E> Actor for GrpcTransportActor<Outbound, Inbound, E>
where
    T: Send + Sync + 'static,
    Outbound: Send + 'static,
    Inbound: Send + 'static,
    E: Into<tonic::Status> + Send + 'static,
{
    type Input = T;
    type Output = T;
    type Args = Self;

    fn convert(&self, item: Self::Input) -> Self::Output {
        item
    }
}
    type Error = ();

/// A [`SendConverter`] that forwards values unchanged.
pub struct IdentitySendConverter<T> {
    _marker: PhantomData<T>,
}

impl<T> IdentitySendConverter<T> {
    pub fn new() -> Self {
        Self {
            _marker: PhantomData,
        }
    }
}

impl<T> Default for IdentitySendConverter<T> {
    fn default() -> Self {
        Self::new()
    }
}

impl<T> SendConverter for IdentitySendConverter<T>
where
    T: Send + Sync + 'static,
{
    type Input = T;
    type Output = T;

    fn convert(&self, item: Self::Input) -> Self::Output {
        item
    }
}

/// gRPC-specific transport adapters and helpers.
pub mod grpc {
    use async_trait::async_trait;
    use futures::StreamExt;
    use tokio::sync::mpsc;
    use tonic::Streaming;

    use super::{Bi, Error, RecvConverter, SendConverter};

    /// [`Bi`] adapter backed by a tonic gRPC bidirectional stream.
    ///
    /// Tonic receive errors are logged and treated as stream closure (`None`).
    /// The receive converter is only invoked for successful inbound transport
    /// items.
    pub struct GrpcAdapter<InboundConverter, OutboundConverter>
    where
        InboundConverter: RecvConverter,
        OutboundConverter: SendConverter,
    {
        sender: mpsc::Sender<OutboundConverter::Output>,
        receiver: Streaming<InboundConverter::Input>,
        inbound_converter: InboundConverter,
        outbound_converter: OutboundConverter,
    async fn on_start(args: Self::Args, _: ActorRef<Self>) -> Result<Self, Self::Error> {
        Ok(args)
    }

    impl<InboundTransport, Inbound, InboundConverter, OutboundConverter>
        GrpcAdapter<InboundConverter, OutboundConverter>
    where
        InboundConverter: RecvConverter<Input = InboundTransport, Output = Inbound>,
        OutboundConverter: SendConverter,
    {
        pub fn new(
            sender: mpsc::Sender<OutboundConverter::Output>,
            receiver: Streaming<InboundTransport>,
            inbound_converter: InboundConverter,
            outbound_converter: OutboundConverter,
        ) -> Self {
            Self {
                sender,
                receiver,
                inbound_converter,
                outbound_converter,
    fn on_link_died(
        &mut self,
        _: WeakActorRef<Self>,
        id: kameo::prelude::ActorId,
        _: kameo::prelude::ActorStopReason,
    ) -> impl Future<
        Output = Result<std::ops::ControlFlow<kameo::prelude::ActorStopReason>, Self::Error>,
    > + Send {
        async move {
            if id == self.business_logic_actor.id() {
                error!("Business logic actor died, stopping GrpcTransportActor");
                Ok(std::ops::ControlFlow::Break(
                    kameo::prelude::ActorStopReason::Normal,
                ))
            } else {
                debug!(
                    "Linked actor {} died, but it's not the business logic actor, ignoring",
                    id
                );
                Ok(std::ops::ControlFlow::Continue(()))
            }
        }
    }

    #[async_trait]
    impl<InboundConverter, OutboundConverter> Bi<InboundConverter::Output, OutboundConverter::Input>
        for GrpcAdapter<InboundConverter, OutboundConverter>
    where
        InboundConverter: RecvConverter,
        OutboundConverter: SendConverter,
        OutboundConverter::Input: Send + 'static,
        OutboundConverter::Output: Send + 'static,
    {
        #[tracing::instrument(level = "trace", skip(self, item))]
        async fn send(&mut self, item: OutboundConverter::Input) -> Result<(), Error> {
            let outbound = self.outbound_converter.convert(item);
            self.sender
                .send(outbound)
                .await
                .map_err(|_| Error::ChannelClosed)
        }

        #[tracing::instrument(level = "trace", skip(self))]
        async fn recv(&mut self) -> Option<InboundConverter::Output> {
            match self.receiver.next().await {
                Some(Ok(item)) => Some(self.inbound_converter.convert(item)),
                Some(Err(error)) => {
                    tracing::error!(error = ?error, "grpc transport recv failed; closing stream");
                    None
    async fn next(
        &mut self,
        _: WeakActorRef<Self>,
        mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
    ) -> Option<kameo::mailbox::Signal<Self>> {
        select! {
            msg = mailbox_rx.recv() => {
                msg
            }
            recv_msg = self.receiver.next() => {
                match recv_msg {
                    Some(Ok(msg)) => {
                        match self.business_logic_actor.tell(msg).await {
                            Ok(_) => None,
                            Err(e) => {
                                // TODO: this would probably require better error handling - or resending if backpressure is the issue
                                error!("Failed to send message to business logic actor: {}", e);
                                Some(Signal::Stop)
                            }
                        }
                    }
                    Some(Err(e)) => {
                        error!("Received error from stream: {}, stopping GrpcTransportActor", e);
                        Some(Signal::Stop)
                    }
                    None => {
                        error!("Receiver channel closed, stopping GrpcTransportActor");
                        Some(Signal::Stop)
                    }
                }
                None => None,
            }
        }
    }
}

/// No-op [`Bi`] transport for tests and manual actor usage.
///
/// `send` drops all items and succeeds. [`Bi::recv`] never resolves and therefore
/// does not busy-wait or spuriously close the stream.
pub struct DummyTransport<Inbound, Outbound> {
    _marker: PhantomData<(Inbound, Outbound)>,
}
impl<Outbound, Inbound, E> Message<Result<Outbound, E>> for GrpcTransportActor<Outbound, Inbound, E>
where
    Outbound: Send + 'static,
    Inbound: Send + 'static,
    E: Into<tonic::Status> + Send + 'static,
{
    type Reply = ();

impl<Inbound, Outbound> DummyTransport<Inbound, Outbound> {
    pub fn new() -> Self {
        Self {
            _marker: PhantomData,
    async fn handle(
        &mut self,
        msg: Result<Outbound, E>,
        ctx: &mut kameo::prelude::Context<Self, Self::Reply>,
    ) -> Self::Reply {
        let is_err = msg.is_err();
        let grpc_msg = msg.map_err(Into::into);
        match self.sender.send(grpc_msg).await {
            Ok(_) => {
                if is_err {
                    ctx.stop();
                }
            }
            Err(e) => {
                error!("Failed to send message: {}", e);
                ctx.stop();
            }
        }
    }
}

impl<Inbound, Outbound> Default for DummyTransport<Inbound, Outbound> {
    fn default() -> Self {
        Self::new()
    }
}

#[async_trait]
impl<Inbound, Outbound> Bi<Inbound, Outbound> for DummyTransport<Inbound, Outbound>
impl<Outbound, Inbound, E> TransportActor<Outbound, E> for GrpcTransportActor<Outbound, Inbound, E>
where
    Inbound: Send + Sync + 'static,
    Outbound: Send + Sync + 'static,
    Outbound: Send + 'static,
    Inbound: Send + 'static,
    E: Into<tonic::Status> + Send + 'static,
{
    async fn send(&mut self, _item: Outbound) -> Result<(), Error> {
        Ok(())
    }

    async fn recv(&mut self) -> Option<Inbound> {
        std::future::pending::<()>().await;
        None
    }
}

/// Wires together a transport actor and a business logic actor with bidirectional linking.
///
/// This function handles the chicken-and-egg problem of two actors that need references
/// to each other at construction time. It uses kameo's [`PreparedActor`] to obtain
/// [`ActorRef`]s before spawning, then links both actors so that if either dies,
/// the other is notified via [`on_link_died`](Actor::on_link_died).
///
/// The business actor receives a type-erased [`Recipient<Result<Outbound, DomainError>>`] instead of an
/// `ActorRef<Transport>`, keeping it decoupled from the concrete transport implementation.
///
/// # Type Parameters
/// - `Transport`: The transport actor type (e.g., [`GrpcTransportActor`])
/// - `Inbound`: `InboundMessage` received by the business actor from the transport
/// - `Outbound`: `OutboundMessage` sent by the business actor back to the transport
/// - `Business`: The business logic actor
/// - `BusinessCtor`: Closure that receives a prepared business actor and transport recipient,
///   spawns the business actor, and returns its [`ActorRef`]
/// - `TransportCtor`: Closure that receives a prepared transport actor, a recipient for
///   inbound messages, and the business actor id, then spawns the transport actor
///
/// # Returns
/// A tuple of `(transport_ref, business_ref)`: actor references for both spawned actors.
pub async fn wire<
    Transport,
    Inbound,
    Outbound,
    DomainError,
    Business,
    BusinessCtor,
    TransportCtor,
>(
    business_ctor: BusinessCtor,
    transport_ctor: TransportCtor,
) -> (ActorRef<Transport>, ActorRef<Business>)
where
    Transport: TransportActor<Outbound, DomainError>,
    Inbound: Send + 'static,
    Outbound: Send + 'static,
    DomainError: Send + 'static,
    Business: Actor + Message<Inbound> + Send + 'static,
    BusinessCtor: FnOnce(PreparedActor<Business>, Recipient<Result<Outbound, DomainError>>),
    TransportCtor: FnOnce(PreparedActor<Transport>, Recipient<Inbound>),
{
    let prepared_business: PreparedActor<Business> = Spawn::prepare();
    let prepared_transport: PreparedActor<Transport> = Spawn::prepare();

    let business_ref = prepared_business.actor_ref().clone();
    let transport_ref = prepared_transport.actor_ref().clone();

    transport_ref.link(&business_ref).await;
    business_ref.link(&transport_ref).await;

    let recipient = transport_ref.clone().recipient();
    business_ctor(prepared_business, recipient);
    let business_recipient = business_ref.clone().recipient();
    transport_ctor(prepared_transport, business_recipient);

    (transport_ref, business_ref)
}

@@ -42,9 +42,6 @@ argon2 = { version = "0.5.3", features = ["zeroize"] }
restructed = "0.2.2"
strum = { version = "0.27.2", features = ["derive"] }
pem = "3.0.6"
k256 = "0.13.4"
alloy.workspace = true
arbiter-tokens-registry.path = "../arbiter-tokens-registry"

[dev-dependencies]
insta = "1.46.3"

@@ -56,103 +56,4 @@ create table if not exists program_client (
    public_key blob not null,
    created_at integer not null default(unixepoch ('now')),
    updated_at integer not null default(unixepoch ('now'))
) STRICT;

create table if not exists evm_wallet (
    id integer not null primary key,
    address blob not null, -- 20-byte Ethereum address
    aead_encrypted_id integer not null references aead_encrypted (id) on delete RESTRICT,
    created_at integer not null default(unixepoch ('now'))
) STRICT;

create unique index if not exists uniq_evm_wallet_address on evm_wallet (address);
create unique index if not exists uniq_evm_wallet_aead on evm_wallet (aead_encrypted_id);

create table if not exists evm_ether_transfer_limit (
    id integer not null primary key,
    window_secs integer not null, -- window duration in seconds
    max_volume blob not null -- big-endian 32-byte U256
) STRICT;

-- Shared grant properties: client scope, timeframe, fee caps, and rate limit
create table if not exists evm_basic_grant (
    id integer not null primary key,
    wallet_id integer not null references evm_wallet(id) on delete restrict,
    client_id integer not null references program_client(id) on delete restrict,
    chain_id integer not null, -- EIP-155 chain ID
    valid_from integer, -- unix timestamp (seconds), null = no lower bound
    valid_until integer, -- unix timestamp (seconds), null = no upper bound
    max_gas_fee_per_gas blob, -- big-endian 32-byte U256, null = unlimited
    max_priority_fee_per_gas blob, -- big-endian 32-byte U256, null = unlimited
    rate_limit_count integer, -- max transactions in window, null = unlimited
    rate_limit_window_secs integer, -- window duration in seconds, null = unlimited
    revoked_at integer, -- unix timestamp when revoked, null = still active
    created_at integer not null default(unixepoch('now'))
) STRICT;
|
||||
|
||||
-- Shared transaction log for all EVM grants, used for rate limit tracking and auditing
|
||||
create table if not exists evm_transaction_log (
|
||||
id integer not null primary key,
|
||||
grant_id integer not null references evm_basic_grant(id) on delete restrict,
|
||||
client_id integer not null references program_client(id) on delete restrict,
|
||||
wallet_id integer not null references evm_wallet(id) on delete restrict,
|
||||
chain_id integer not null,
|
||||
eth_value blob not null, -- always present on any EVM tx
|
||||
signed_at integer not null default(unixepoch('now'))
|
||||
) STRICT;
|
||||
|
||||
create index if not exists idx_evm_basic_grant_wallet_chain on evm_basic_grant(client_id, wallet_id, chain_id);
|
||||
|
||||
-- ===============================
|
||||
-- ERC20 token transfer grant
|
||||
-- ===============================
|
||||
create table if not exists evm_token_transfer_grant (
|
||||
id integer not null primary key,
|
||||
basic_grant_id integer not null unique references evm_basic_grant(id) on delete cascade,
|
||||
token_contract blob not null, -- 20-byte ERC20 contract address
|
||||
receiver blob -- 20-byte recipient address or null if every recipient allowed
|
||||
) STRICT;
|
||||
|
||||
-- Per-window volume limits for token transfer grants
|
||||
create table if not exists evm_token_transfer_volume_limit (
|
||||
id integer not null primary key,
|
||||
grant_id integer not null references evm_token_transfer_grant(id) on delete cascade,
|
||||
window_secs integer not null, -- window duration in seconds
|
||||
max_volume blob not null -- big-endian 32-byte U256
|
||||
) STRICT;
|
||||
|
||||
-- Log table for token transfer grant usage
|
||||
create table if not exists evm_token_transfer_log (
|
||||
id integer not null primary key,
|
||||
grant_id integer not null references evm_token_transfer_grant(id) on delete restrict,
|
||||
log_id integer not null references evm_transaction_log(id) on delete restrict,
|
||||
chain_id integer not null, -- EIP-155 chain ID
|
||||
token_contract blob not null, -- 20-byte ERC20 contract address
|
||||
recipient_address blob not null, -- 20-byte recipient address
|
||||
value blob not null, -- big-endian 32-byte U256
|
||||
created_at integer not null default(unixepoch('now'))
|
||||
) STRICT;
|
||||
|
||||
create index if not exists idx_token_transfer_log_grant on evm_token_transfer_log(grant_id);
|
||||
create index if not exists idx_token_transfer_log_log_id on evm_token_transfer_log(log_id);
|
||||
create index if not exists idx_token_transfer_log_chain on evm_token_transfer_log(chain_id);
|
||||
|
||||
|
||||
-- ===============================
|
||||
-- Ether transfer grant (uses base log)
|
||||
-- ===============================
|
||||
create table if not exists evm_ether_transfer_grant (
|
||||
id integer not null primary key,
|
||||
basic_grant_id integer not null unique references evm_basic_grant(id) on delete cascade,
|
||||
limit_id integer not null references evm_ether_transfer_limit(id) on delete restrict
|
||||
) STRICT;
|
||||
|
||||
-- Specific recipient addresses for an ether transfer grant
|
||||
create table if not exists evm_ether_transfer_grant_target (
|
||||
id integer not null primary key,
|
||||
grant_id integer not null references evm_ether_transfer_grant(id) on delete cascade,
|
||||
address blob not null -- 20-byte recipient address
|
||||
) STRICT;
|
||||
|
||||
create unique index if not exists uniq_ether_transfer_target on evm_ether_transfer_grant_target(grant_id, address);
|
||||
|
||||
) STRICT;
|
||||
BIN server/crates/arbiter-server/src/.DS_Store vendored Normal file
Binary file not shown.
12 server/crates/arbiter-server/src/actors/client.rs Normal file
@@ -0,0 +1,12 @@
use arbiter_proto::{
    proto::{ClientRequest, ClientResponse},
    transport::Bi,
};

use crate::ServerContext;

pub(crate) async fn handle_client(
    _context: ServerContext,
    _bistream: impl Bi<ClientRequest, ClientResponse>,
) {
}
@@ -1,101 +0,0 @@
use arbiter_proto::proto::client::{
    AuthChallengeRequest, AuthChallengeSolution, ClientRequest,
    client_request::Payload as ClientRequestPayload,
};
use ed25519_dalek::VerifyingKey;
use tracing::error;

use crate::actors::client::{
    ClientConnection,
    auth::state::{AuthContext, AuthStateMachine},
    session::ClientSession,
};

#[derive(thiserror::Error, Debug, Clone, PartialEq, Eq)]
pub enum Error {
    #[error("Unexpected message payload")]
    UnexpectedMessagePayload,
    #[error("Invalid client public key length")]
    InvalidClientPubkeyLength,
    #[error("Invalid client public key encoding")]
    InvalidAuthPubkeyEncoding,
    #[error("Database pool unavailable")]
    DatabasePoolUnavailable,
    #[error("Database operation failed")]
    DatabaseOperationFailed,
    #[error("Public key not registered")]
    PublicKeyNotRegistered,
    #[error("Invalid signature length")]
    InvalidSignatureLength,
    #[error("Invalid challenge solution")]
    InvalidChallengeSolution,
    #[error("Transport error")]
    Transport,
}

mod state;
use state::*;

fn parse_auth_event(payload: ClientRequestPayload) -> Result<AuthEvents, Error> {
    match payload {
        ClientRequestPayload::AuthChallengeRequest(AuthChallengeRequest { pubkey }) => {
            let pubkey_bytes = pubkey.as_array().ok_or(Error::InvalidClientPubkeyLength)?;
            let pubkey = VerifyingKey::from_bytes(pubkey_bytes)
                .map_err(|_| Error::InvalidAuthPubkeyEncoding)?;
            Ok(AuthEvents::AuthRequest(ChallengeRequest {
                pubkey: pubkey.into(),
            }))
        }
        ClientRequestPayload::AuthChallengeSolution(AuthChallengeSolution { signature }) => {
            Ok(AuthEvents::ReceivedSolution(ChallengeSolution {
                solution: signature,
            }))
        }
    }
}

pub async fn authenticate(props: &mut ClientConnection) -> Result<VerifyingKey, Error> {
    let mut state = AuthStateMachine::new(AuthContext::new(props));

    loop {
        let transport = state.context_mut().conn.transport.as_mut();
        let Some(ClientRequest {
            payload: Some(payload),
        }) = transport.recv().await
        else {
            return Err(Error::Transport);
        };

        let event = parse_auth_event(payload)?;

        match state.process_event(event).await {
            Ok(AuthStates::AuthOk(key)) => return Ok(key.clone()),
            Err(AuthError::ActionFailed(err)) => {
                error!(?err, "State machine action failed");
                return Err(err);
            }
            Err(AuthError::GuardFailed(err)) => {
                error!(?err, "State machine guard failed");
                return Err(err);
            }
            Err(AuthError::InvalidEvent) => {
                error!("Invalid event for current state");
                return Err(Error::InvalidChallengeSolution);
            }
            Err(AuthError::TransitionsFailed) => {
                error!("Invalid state transition");
                return Err(Error::InvalidChallengeSolution);
            }
            _ => (),
        }
    }
}

pub async fn authenticate_and_create(
    mut props: ClientConnection,
) -> Result<ClientSession, Error> {
    let key = authenticate(&mut props).await?;
    let session = ClientSession::new(props, key);
    Ok(session)
}
@@ -1,136 +0,0 @@
use arbiter_proto::proto::client::{
    AuthChallenge, ClientResponse,
    client_response::Payload as ClientResponsePayload,
};
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, update};
use diesel_async::RunQueryDsl;
use ed25519_dalek::VerifyingKey;
use tracing::error;

use super::Error;
use crate::{actors::client::ClientConnection, db::schema};

pub struct ChallengeRequest {
    pub pubkey: VerifyingKey,
}

pub struct ChallengeContext {
    pub challenge: AuthChallenge,
    pub key: VerifyingKey,
}

pub struct ChallengeSolution {
    pub solution: Vec<u8>,
}

smlang::statemachine!(
    name: Auth,
    custom_error: true,
    transitions: {
        *Init + AuthRequest(ChallengeRequest) / async prepare_challenge = SentChallenge(ChallengeContext),
        SentChallenge(ChallengeContext) + ReceivedSolution(ChallengeSolution) [async verify_solution] / provide_key = AuthOk(VerifyingKey),
    }
);

async fn create_nonce(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Result<i32, Error> {
    let mut db_conn = db.get().await.map_err(|e| {
        error!(error = ?e, "Database pool error");
        Error::DatabasePoolUnavailable
    })?;
    db_conn
        .exclusive_transaction(|conn| {
            Box::pin(async move {
                let current_nonce = schema::program_client::table
                    .filter(schema::program_client::public_key.eq(pubkey_bytes.to_vec()))
                    .select(schema::program_client::nonce)
                    .first::<i32>(conn)
                    .await?;

                update(schema::program_client::table)
                    .filter(schema::program_client::public_key.eq(pubkey_bytes.to_vec()))
                    .set(schema::program_client::nonce.eq(current_nonce + 1))
                    .execute(conn)
                    .await?;

                Result::<_, diesel::result::Error>::Ok(current_nonce)
            })
        })
        .await
        .optional()
        .map_err(|e| {
            error!(error = ?e, "Database error");
            Error::DatabaseOperationFailed
        })?
        .ok_or_else(|| {
            error!(?pubkey_bytes, "Public key not found in database");
            Error::PublicKeyNotRegistered
        })
}

pub struct AuthContext<'a> {
    pub(super) conn: &'a mut ClientConnection,
}

impl<'a> AuthContext<'a> {
    pub fn new(conn: &'a mut ClientConnection) -> Self {
        Self { conn }
    }
}

impl AuthStateMachineContext for AuthContext<'_> {
    type Error = Error;

    async fn verify_solution(
        &self,
        ChallengeContext { challenge, key }: &ChallengeContext,
        ChallengeSolution { solution }: &ChallengeSolution,
    ) -> Result<bool, Self::Error> {
        let formatted_challenge =
            arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);

        let signature = solution.as_slice().try_into().map_err(|_| {
            error!(?solution, "Invalid signature length");
            Error::InvalidChallengeSolution
        })?;

        let valid = key.verify_strict(&formatted_challenge, &signature).is_ok();

        Ok(valid)
    }

    async fn prepare_challenge(
        &mut self,
        ChallengeRequest { pubkey }: ChallengeRequest,
    ) -> Result<ChallengeContext, Self::Error> {
        let nonce = create_nonce(&self.conn.db, pubkey.as_bytes()).await?;

        let challenge = AuthChallenge {
            pubkey: pubkey.as_bytes().to_vec(),
            nonce,
        };

        self.conn
            .transport
            .send(Ok(ClientResponse {
                payload: Some(ClientResponsePayload::AuthChallenge(challenge.clone())),
            }))
            .await
            .map_err(|e| {
                error!(?e, "Failed to send auth challenge");
                Error::Transport
            })?;

        Ok(ChallengeContext {
            challenge,
            key: pubkey,
        })
    }

    fn provide_key(
        &mut self,
        state_data: &ChallengeContext,
        _: ChallengeSolution,
    ) -> Result<VerifyingKey, Self::Error> {
        Ok(state_data.key)
    }
}
@@ -1,58 +0,0 @@
use arbiter_proto::{
    proto::client::{ClientRequest, ClientResponse},
    transport::Bi,
};
use kameo::actor::Spawn;
use tracing::{error, info};

use crate::{
    actors::{GlobalActors, client::session::ClientSession},
    db,
};

#[derive(Debug, Clone, PartialEq, Eq, thiserror::Error)]
pub enum ClientError {
    #[error("Expected message with payload")]
    MissingRequestPayload,
    #[error("Unexpected request payload")]
    UnexpectedRequestPayload,
    #[error("State machine error")]
    StateTransitionFailed,
    #[error("Connection registration failed")]
    ConnectionRegistrationFailed,
    #[error(transparent)]
    Auth(#[from] auth::Error),
}

pub type Transport = Box<dyn Bi<ClientRequest, Result<ClientResponse, ClientError>> + Send>;

pub struct ClientConnection {
    pub(crate) db: db::DatabasePool,
    pub(crate) transport: Transport,
    pub(crate) actors: GlobalActors,
}

impl ClientConnection {
    pub fn new(db: db::DatabasePool, transport: Transport, actors: GlobalActors) -> Self {
        Self {
            db,
            transport,
            actors,
        }
    }
}

pub mod auth;
pub mod session;

pub async fn connect_client(props: ClientConnection) {
    match auth::authenticate_and_create(props).await {
        Ok(session) => {
            ClientSession::spawn(session);
            info!("Client authenticated, session started");
        }
        Err(err) => {
            error!(?err, "Authentication failed, closing connection");
        }
    }
}
@@ -1,98 +0,0 @@
use arbiter_proto::proto::client::{ClientRequest, ClientResponse};
use ed25519_dalek::VerifyingKey;
use kameo::Actor;
use tokio::select;
use tracing::{error, info};

use crate::{
    actors::{
        GlobalActors,
        client::{ClientConnection, ClientError},
        router::RegisterClient,
    },
    db,
};

pub struct ClientSession {
    props: ClientConnection,
    key: VerifyingKey,
}

impl ClientSession {
    pub(crate) fn new(props: ClientConnection, key: VerifyingKey) -> Self {
        Self { props, key }
    }

    pub async fn process_transport_inbound(&mut self, req: ClientRequest) -> Output {
        let msg = req.payload.ok_or_else(|| {
            error!(actor = "client", "Received message with no payload");
            ClientError::MissingRequestPayload
        })?;

        match msg {
            _ => Err(ClientError::UnexpectedRequestPayload),
        }
    }
}

type Output = Result<ClientResponse, ClientError>;

impl Actor for ClientSession {
    type Args = Self;

    type Error = ClientError;

    async fn on_start(
        args: Self::Args,
        this: kameo::prelude::ActorRef<Self>,
    ) -> Result<Self, Self::Error> {
        args.props
            .actors
            .router
            .ask(RegisterClient { actor: this })
            .await
            .map_err(|_| ClientError::ConnectionRegistrationFailed)?;
        Ok(args)
    }

    async fn next(
        &mut self,
        _actor_ref: kameo::prelude::WeakActorRef<Self>,
        mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
    ) -> Option<kameo::mailbox::Signal<Self>> {
        loop {
            select! {
                signal = mailbox_rx.recv() => {
                    return signal;
                }
                msg = self.props.transport.recv() => {
                    match msg {
                        Some(request) => {
                            match self.process_transport_inbound(request).await {
                                Ok(resp) => {
                                    if self.props.transport.send(Ok(resp)).await.is_err() {
                                        error!(actor = "client", reason = "channel closed", "send.failed");
                                        return Some(kameo::mailbox::Signal::Stop);
                                    }
                                }
                                Err(err) => {
                                    let _ = self.props.transport.send(Err(err)).await;
                                    return Some(kameo::mailbox::Signal::Stop);
                                }
                            }
                        }
                        None => {
                            info!(actor = "client", "transport.closed");
                            return Some(kameo::mailbox::Signal::Stop);
                        }
                    }
                }
            }
        }
    }
}

impl ClientSession {
    pub fn new_test(db: db::DatabasePool, actors: GlobalActors) -> Self {
        use arbiter_proto::transport::DummyTransport;
        let transport: super::Transport = Box::new(DummyTransport::new());
        let props = ClientConnection::new(db, transport, actors);
        let key = VerifyingKey::from_bytes(&[0u8; 32]).unwrap();
        Self { props, key }
    }
}
@@ -1,246 +0,0 @@
use alloy::{consensus::TxEip1559, network::TxSigner, primitives::Address, signers::Signature};
use diesel::{
    ExpressionMethods, OptionalExtension as _, QueryDsl, SelectableHelper as _, dsl::insert_into,
};
use diesel_async::RunQueryDsl;
use kameo::{Actor, actor::ActorRef, messages};
use memsafe::MemSafe;
use rand::{SeedableRng, rng, rngs::StdRng};

use crate::{
    actors::keyholder::{CreateNew, Decrypt, KeyHolder},
    db::{self, DatabasePool, models::{self, EvmBasicGrant, SqliteTimestamp}, schema},
    evm::{
        self, RunKind,
        policies::{
            FullGrant, SharedGrantSettings, SpecificGrant, SpecificMeaning,
            ether_transfer::EtherTransfer,
            token_transfers::TokenTransfer,
        },
    },
};

pub use crate::evm::safe_signer;

#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum SignTransactionError {
    #[error("Wallet not found")]
    #[diagnostic(code(arbiter::evm::sign::wallet_not_found))]
    WalletNotFound,

    #[error("Database error: {0}")]
    #[diagnostic(code(arbiter::evm::sign::database))]
    Database(#[from] diesel::result::Error),

    #[error("Database pool error: {0}")]
    #[diagnostic(code(arbiter::evm::sign::pool))]
    Pool(#[from] db::PoolError),

    #[error("Keyholder error: {0}")]
    #[diagnostic(code(arbiter::evm::sign::keyholder))]
    Keyholder(#[from] crate::actors::keyholder::Error),

    #[error("Keyholder mailbox error")]
    #[diagnostic(code(arbiter::evm::sign::keyholder_send))]
    KeyholderSend,

    #[error("Signing error: {0}")]
    #[diagnostic(code(arbiter::evm::sign::signing))]
    Signing(#[from] alloy::signers::Error),

    #[error("Policy error: {0}")]
    #[diagnostic(code(arbiter::evm::sign::vet))]
    Vet(#[from] evm::VetError),
}

#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum Error {
    #[error("Keyholder error: {0}")]
    #[diagnostic(code(arbiter::evm::keyholder))]
    Keyholder(#[from] crate::actors::keyholder::Error),

    #[error("Keyholder mailbox error")]
    #[diagnostic(code(arbiter::evm::keyholder_send))]
    KeyholderSend,

    #[error("Database error: {0}")]
    #[diagnostic(code(arbiter::evm::database))]
    Database(#[from] diesel::result::Error),

    #[error("Database pool error: {0}")]
    #[diagnostic(code(arbiter::evm::database_pool))]
    DatabasePool(#[from] db::PoolError),

    #[error("Grant creation error: {0}")]
    #[diagnostic(code(arbiter::evm::creation))]
    Creation(#[from] evm::CreationError),
}

#[derive(Actor)]
pub struct EvmActor {
    pub keyholder: ActorRef<KeyHolder>,
    pub db: DatabasePool,
    pub rng: StdRng,
    pub engine: evm::Engine,
}

impl EvmActor {
    pub fn new(keyholder: ActorRef<KeyHolder>, db: DatabasePool) -> Self {
        // Is it safe to seed the RNG from the system once?
        // TODO: audit
        let rng = StdRng::from_rng(&mut rng());
        let engine = evm::Engine::new(db.clone());
        Self { keyholder, db, rng, engine }
    }
}

#[messages]
impl EvmActor {
    #[message]
    pub async fn generate(&mut self) -> Result<Address, Error> {
        let (mut key_cell, address) = safe_signer::generate(&mut self.rng);

        // Move raw key bytes into a Vec<u8> MemSafe for KeyHolder
        let plaintext = {
            let reader = key_cell.read().expect("MemSafe read");
            MemSafe::new(reader.to_vec()).expect("MemSafe allocation")
        };

        let aead_id: i32 = self
            .keyholder
            .ask(CreateNew { plaintext })
            .await
            .map_err(|_| Error::KeyholderSend)?;

        let mut conn = self.db.get().await?;
        insert_into(schema::evm_wallet::table)
            .values(&models::NewEvmWallet {
                address: address.as_slice().to_vec(),
                aead_encrypted_id: aead_id,
            })
            .execute(&mut conn)
            .await?;

        Ok(address)
    }

    #[message]
    pub async fn list_wallets(&self) -> Result<Vec<Address>, Error> {
        let mut conn = self.db.get().await?;
        let rows: Vec<models::EvmWallet> = schema::evm_wallet::table
            .select(models::EvmWallet::as_select())
            .load(&mut conn)
            .await?;

        Ok(rows
            .into_iter()
            .map(|w| Address::from_slice(&w.address))
            .collect())
    }
}

#[messages]
impl EvmActor {
    #[message]
    pub async fn useragent_create_grant(
        &mut self,
        client_id: i32,
        basic: SharedGrantSettings,
        grant: SpecificGrant,
    ) -> Result<i32, evm::CreationError> {
        match grant {
            SpecificGrant::EtherTransfer(settings) => {
                self.engine
                    .create_grant::<EtherTransfer>(client_id, FullGrant { basic, specific: settings })
                    .await
            }
            SpecificGrant::TokenTransfer(settings) => {
                self.engine
                    .create_grant::<TokenTransfer>(client_id, FullGrant { basic, specific: settings })
                    .await
            }
        }
    }

    #[message]
    pub async fn useragent_delete_grant(&mut self, grant_id: i32) -> Result<(), Error> {
        let mut conn = self.db.get().await?;
        diesel::update(schema::evm_basic_grant::table)
            .filter(schema::evm_basic_grant::id.eq(grant_id))
            .set(schema::evm_basic_grant::revoked_at.eq(SqliteTimestamp::now()))
            .execute(&mut conn)
            .await?;
        Ok(())
    }

    #[message]
    pub async fn useragent_list_grants(
        &mut self,
        wallet_id: Option<i32>,
    ) -> Result<Vec<EvmBasicGrant>, Error> {
        let mut conn = self.db.get().await?;
        let mut query = schema::evm_basic_grant::table
            .select(EvmBasicGrant::as_select())
            .filter(schema::evm_basic_grant::revoked_at.is_null())
            .into_boxed();
        if let Some(wid) = wallet_id {
            query = query.filter(schema::evm_basic_grant::wallet_id.eq(wid));
        }
        Ok(query.load(&mut conn).await?)
    }

    #[message]
    pub async fn shared_analyze_transaction(
        &mut self,
        client_id: i32,
        wallet_address: Address,
        transaction: TxEip1559,
    ) -> Result<SpecificMeaning, SignTransactionError> {
        let mut conn = self.db.get().await?;
        let wallet = schema::evm_wallet::table
            .select(models::EvmWallet::as_select())
            .filter(schema::evm_wallet::address.eq(wallet_address.as_slice()))
            .first(&mut conn)
            .await
            .optional()?
            .ok_or(SignTransactionError::WalletNotFound)?;
        drop(conn);

        let meaning = self
            .engine
            .evaluate_transaction(wallet.id, client_id, transaction.clone(), RunKind::Execution)
            .await?;

        Ok(meaning)
    }

    #[message]
    pub async fn client_sign_transaction(
        &mut self,
        client_id: i32,
        wallet_address: Address,
        mut transaction: TxEip1559,
    ) -> Result<Signature, SignTransactionError> {
        let mut conn = self.db.get().await?;
        let wallet = schema::evm_wallet::table
            .select(models::EvmWallet::as_select())
            .filter(schema::evm_wallet::address.eq(wallet_address.as_slice()))
            .first(&mut conn)
            .await
            .optional()?
            .ok_or(SignTransactionError::WalletNotFound)?;
        drop(conn);

        let raw_key: MemSafe<Vec<u8>> = self
            .keyholder
            .ask(Decrypt { aead_id: wallet.aead_encrypted_id })
            .await
            .map_err(|_| SignTransactionError::KeyholderSend)?;

        let signer = safe_signer::SafeSigner::from_memsafe(raw_key)?;

        self.engine
            .evaluate_transaction(wallet.id, client_id, transaction.clone(), RunKind::Execution)
            .await?;

        use alloy::network::TxSignerSync as _;
        Ok(signer.sign_transaction_sync(&mut transaction)?)
    }
}
@@ -1,4 +1,3 @@
use chrono::Utc;
use diesel::{
    ExpressionMethods as _, OptionalExtension, QueryDsl, SelectableHelper,
    dsl::{insert_into, update},
@@ -313,7 +312,7 @@ impl KeyHolder {
            current_nonce: nonce.to_vec(),
            schema_version: 1,
            associated_root_key_id: *root_key_history_id,
            created_at: Utc::now().into()
            created_at: chrono::Utc::now().timestamp() as i32,
        })
        .returning(schema::aead_encrypted::id)
        .get_result(&mut conn)
@@ -3,15 +3,13 @@ use miette::Diagnostic;
use thiserror::Error;

use crate::{
    actors::{bootstrap::Bootstrapper, evm::EvmActor, keyholder::KeyHolder, router::MessageRouter},
    actors::{bootstrap::Bootstrapper, keyholder::KeyHolder},
    db,
};

pub mod bootstrap;
pub mod client;
mod evm;
pub mod keyholder;
pub mod router;
pub mod user_agent;

#[derive(Error, Debug, Diagnostic)]
@@ -30,18 +28,13 @@ pub enum SpawnError {
pub struct GlobalActors {
    pub key_holder: ActorRef<KeyHolder>,
    pub bootstrapper: ActorRef<Bootstrapper>,
    pub router: ActorRef<MessageRouter>,
    pub evm: ActorRef<EvmActor>,
}

impl GlobalActors {
    pub async fn spawn(db: db::DatabasePool) -> Result<Self, SpawnError> {
        let key_holder = KeyHolder::spawn(KeyHolder::new(db.clone()).await?);
        Ok(Self {
            bootstrapper: Bootstrapper::spawn(Bootstrapper::new(&db).await?),
            evm: EvmActor::spawn(EvmActor::new(key_holder.clone(), db)),
            key_holder,
            router: MessageRouter::spawn(MessageRouter::default()),
            key_holder: KeyHolder::spawn(KeyHolder::new(db.clone()).await?),
        })
    }
}
@@ -1,79 +0,0 @@
use std::{collections::HashMap, ops::ControlFlow};

use kameo::{
    Actor,
    actor::{ActorId, ActorRef},
    messages,
    prelude::{ActorStopReason, Context, WeakActorRef},
};
use tracing::info;

use crate::actors::{client::session::ClientSession, user_agent::session::UserAgentSession};

#[derive(Default)]
pub struct MessageRouter {
    pub user_agents: HashMap<ActorId, ActorRef<UserAgentSession>>,
    pub clients: HashMap<ActorId, ActorRef<ClientSession>>,
}

impl Actor for MessageRouter {
    type Args = Self;

    type Error = ();

    async fn on_start(args: Self::Args, _: ActorRef<Self>) -> Result<Self, Self::Error> {
        Ok(args)
    }

    async fn on_link_died(
        &mut self,
        _: WeakActorRef<Self>,
        id: ActorId,
        _: ActorStopReason,
    ) -> Result<ControlFlow<ActorStopReason>, Self::Error> {
        if self.user_agents.remove(&id).is_some() {
            info!(
                ?id,
                actor = "MessageRouter",
                event = "useragent.disconnected"
            );
        } else if self.clients.remove(&id).is_some() {
            info!(?id, actor = "MessageRouter", event = "client.disconnected");
        } else {
            info!(
                ?id,
                actor = "MessageRouter",
                event = "unknown.actor.disconnected"
            );
        }
        Ok(ControlFlow::Continue(()))
    }
}

#[messages]
impl MessageRouter {
    #[message(ctx)]
    pub async fn register_user_agent(
        &mut self,
        actor: ActorRef<UserAgentSession>,
        ctx: &mut Context<Self, ()>,
    ) {
        info!(id = %actor.id(), actor = "MessageRouter", event = "useragent.connected");
        ctx.actor_ref().link(&actor).await;
        self.user_agents.insert(actor.id(), actor);
    }

    #[message(ctx)]
    pub async fn register_client(
        &mut self,
        actor: ActorRef<ClientSession>,
        ctx: &mut Context<Self, ()>,
    ) {
        info!(id = %actor.id(), actor = "MessageRouter", event = "client.connected");
        ctx.actor_ref().link(&actor).await;
        self.clients.insert(actor.id(), actor);
    }
}
@@ -1,118 +0,0 @@
use arbiter_proto::proto::user_agent::{
    AuthChallengeRequest, AuthChallengeSolution, UserAgentRequest,
    user_agent_request::Payload as UserAgentRequestPayload,
};
use ed25519_dalek::VerifyingKey;
use tracing::error;

use crate::actors::user_agent::{
    UserAgentConnection,
    auth::state::{AuthContext, AuthStateMachine},
    session::UserAgentSession,
};

#[derive(thiserror::Error, Debug, PartialEq)]
pub enum Error {
    #[error("Unexpected message payload")]
    UnexpectedMessagePayload,
    #[error("Invalid client public key length")]
    InvalidClientPubkeyLength,
    #[error("Invalid client public key encoding")]
    InvalidAuthPubkeyEncoding,
    #[error("Database pool unavailable")]
    DatabasePoolUnavailable,
    #[error("Database operation failed")]
    DatabaseOperationFailed,
    #[error("Public key not registered")]
    PublicKeyNotRegistered,
    #[error("Transport error")]
    Transport,
    #[error("Invalid bootstrap token")]
    InvalidBootstrapToken,
    #[error("Bootstrapper actor unreachable")]
    BootstrapperActorUnreachable,
    #[error("Invalid challenge solution")]
    InvalidChallengeSolution,
}

mod state;
use state::*;

fn parse_auth_event(payload: UserAgentRequestPayload) -> Result<AuthEvents, Error> {
    match payload {
        UserAgentRequestPayload::AuthChallengeRequest(AuthChallengeRequest {
            pubkey,
            bootstrap_token: None,
        }) => {
            let pubkey_bytes = pubkey.as_array().ok_or(Error::InvalidClientPubkeyLength)?;
            let pubkey = VerifyingKey::from_bytes(pubkey_bytes)
                .map_err(|_| Error::InvalidAuthPubkeyEncoding)?;
            Ok(AuthEvents::AuthRequest(ChallengeRequest {
                pubkey: pubkey.into(),
            }))
        }
        UserAgentRequestPayload::AuthChallengeRequest(AuthChallengeRequest {
            pubkey,
            bootstrap_token: Some(token),
        }) => {
            let pubkey_bytes = pubkey.as_array().ok_or(Error::InvalidClientPubkeyLength)?;
            let pubkey = VerifyingKey::from_bytes(pubkey_bytes)
                .map_err(|_| Error::InvalidAuthPubkeyEncoding)?;
            Ok(AuthEvents::BootstrapAuthRequest(BootstrapAuthRequest {
                pubkey: pubkey.into(),
                token,
            }))
        }
        UserAgentRequestPayload::AuthChallengeSolution(AuthChallengeSolution { signature }) => {
            Ok(AuthEvents::ReceivedSolution(ChallengeSolution {
                solution: signature,
            }))
        }
        _ => Err(Error::UnexpectedMessagePayload),
    }
}

pub async fn authenticate(props: &mut UserAgentConnection) -> Result<VerifyingKey, Error> {
    let mut state = AuthStateMachine::new(AuthContext::new(props));

    loop {
        // Needed because `state` now holds a mutable reference to `ConnectionProps`,
        // so we can't access `props` directly here.
        let transport = state.context_mut().conn.transport.as_mut();
        let Some(UserAgentRequest {
            payload: Some(payload),
        }) = transport.recv().await
        else {
            return Err(Error::Transport);
        };

        let event = parse_auth_event(payload)?;

        match state.process_event(event).await {
            Ok(AuthStates::AuthOk(key)) => return Ok(key.clone()),
            Err(AuthError::ActionFailed(err)) => {
                error!(?err, "State machine action failed");
                return Err(err);
            }
            Err(AuthError::GuardFailed(err)) => {
                error!(?err, "State machine guard failed");
                return Err(err);
            }
            Err(AuthError::InvalidEvent) => {
                error!("Invalid event for current state");
                return Err(Error::InvalidChallengeSolution);
            }
            Err(AuthError::TransitionsFailed) => {
                error!("Invalid state transition");
                return Err(Error::InvalidChallengeSolution);
            }
            _ => (),
        }
    }
}

pub async fn authenticate_and_create(
    mut props: UserAgentConnection,
) -> Result<UserAgentSession, Error> {
    let key = authenticate(&mut props).await?;
    let session = UserAgentSession::new(props, key.clone());
    Ok(session)
}
@@ -1,202 +0,0 @@
use arbiter_proto::proto::user_agent::{
    AuthChallenge, UserAgentResponse,
    user_agent_response::Payload as UserAgentResponsePayload,
};
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, update};
use diesel_async::RunQueryDsl;
use ed25519_dalek::VerifyingKey;
use tracing::error;

use super::Error;
use crate::{
    actors::{bootstrap::ConsumeToken, user_agent::UserAgentConnection},
    db::schema,
};

pub struct ChallengeRequest {
    pub pubkey: VerifyingKey,
}

pub struct BootstrapAuthRequest {
    pub pubkey: VerifyingKey,
    pub token: String,
}

pub struct ChallengeContext {
    pub challenge: AuthChallenge,
    pub key: VerifyingKey,
}

pub struct ChallengeSolution {
    pub solution: Vec<u8>,
}

smlang::statemachine!(
    name: Auth,
    custom_error: true,
    transitions: {
        *Init + AuthRequest(ChallengeRequest) / async prepare_challenge = SentChallenge(ChallengeContext),
        Init + BootstrapAuthRequest(BootstrapAuthRequest) [async verify_bootstrap_token] / provide_key_bootstrap = AuthOk(VerifyingKey),
        SentChallenge(ChallengeContext) + ReceivedSolution(ChallengeSolution) [async verify_solution] / provide_key = AuthOk(VerifyingKey),
    }
);

async fn create_nonce(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Result<i32, Error> {
    let mut db_conn = db.get().await.map_err(|e| {
        error!(error = ?e, "Database pool error");
        Error::DatabasePoolUnavailable
    })?;
    db_conn
        .exclusive_transaction(|conn| {
            Box::pin(async move {
                let current_nonce = schema::useragent_client::table
                    .filter(schema::useragent_client::public_key.eq(pubkey_bytes.to_vec()))
                    .select(schema::useragent_client::nonce)
                    .first::<i32>(conn)
                    .await?;

                update(schema::useragent_client::table)
                    .filter(schema::useragent_client::public_key.eq(pubkey_bytes.to_vec()))
                    .set(schema::useragent_client::nonce.eq(current_nonce + 1))
                    .execute(conn)
                    .await?;

                Result::<_, diesel::result::Error>::Ok(current_nonce)
            })
        })
        .await
        .optional()
        .map_err(|e| {
            error!(error = ?e, "Database error");
            Error::DatabaseOperationFailed
        })?
        .ok_or_else(|| {
            error!(?pubkey_bytes, "Public key not found in database");
            Error::PublicKeyNotRegistered
        })
}

async fn register_key(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Result<(), Error> {
    let mut conn = db.get().await.map_err(|e| {
        error!(error = ?e, "Database pool error");
        Error::DatabasePoolUnavailable
    })?;

    diesel::insert_into(schema::useragent_client::table)
        .values((
            schema::useragent_client::public_key.eq(pubkey_bytes.to_vec()),
            schema::useragent_client::nonce.eq(1),
        ))
        .execute(&mut conn)
        .await
        .map_err(|e| {
            error!(error = ?e, "Database error");
            Error::DatabaseOperationFailed
        })?;

    Ok(())
}

pub struct AuthContext<'a> {
    pub(super) conn: &'a mut UserAgentConnection,
}

impl<'a> AuthContext<'a> {
    pub fn new(conn: &'a mut UserAgentConnection) -> Self {
        Self { conn }
    }
}

impl AuthStateMachineContext for AuthContext<'_> {
    type Error = Error;

    async fn verify_solution(
        &self,
        ChallengeContext { challenge, key }: &ChallengeContext,
        ChallengeSolution { solution }: &ChallengeSolution,
    ) -> Result<bool, Self::Error> {
        let formatted_challenge =
            arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);

        let signature = solution.as_slice().try_into().map_err(|_| {
            error!(?solution, "Invalid signature length");
            Error::InvalidChallengeSolution
        })?;

        let valid = key.verify_strict(&formatted_challenge, &signature).is_ok();

        Ok(valid)
    }

    async fn prepare_challenge(
        &mut self,
        ChallengeRequest { pubkey }: ChallengeRequest,
    ) -> Result<ChallengeContext, Self::Error> {
        let nonce = create_nonce(&self.conn.db, pubkey.as_bytes()).await?;

        let challenge = AuthChallenge {
            pubkey: pubkey.as_bytes().to_vec(),
            nonce,
        };

        self.conn
            .transport
            .send(Ok(UserAgentResponse {
                payload: Some(UserAgentResponsePayload::AuthChallenge(challenge.clone())),
            }))
            .await
            .map_err(|e| {
                error!(?e, "Failed to send auth challenge");
                Error::Transport
            })?;

        Ok(ChallengeContext {
            challenge,
            key: pubkey,
        })
    }

    #[allow(missing_docs)]
    #[allow(clippy::result_unit_err)]
    async fn verify_bootstrap_token(
        &self,
        BootstrapAuthRequest { pubkey, token }: &BootstrapAuthRequest,
    ) -> Result<bool, Self::Error> {
        let token_ok: bool = self
            .conn
            .actors
            .bootstrapper
            .ask(ConsumeToken {
                token: token.clone(),
            })
            .await
            .map_err(|e| {
                error!(?pubkey, "Failed to consume bootstrap token: {e}");
                Error::BootstrapperActorUnreachable
            })?;

        if !token_ok {
            error!(?pubkey, "Invalid bootstrap token provided");
            return Err(Error::InvalidBootstrapToken);
        }

        register_key(&self.conn.db, pubkey.as_bytes()).await?;

        Ok(true)
    }

    fn provide_key_bootstrap(
        &mut self,
        event_data: BootstrapAuthRequest,
    ) -> Result<VerifyingKey, Self::Error> {
        Ok(event_data.pubkey)
    }

    fn provide_key(
        &mut self,
        state_data: &ChallengeContext,
        _: ChallengeSolution,
    ) -> Result<VerifyingKey, Self::Error> {
        Ok(state_data.key)
    }
}
server/crates/arbiter-server/src/actors/user_agent/error.rs (new file, 57 lines)
@@ -0,0 +1,57 @@
use tonic::Status;

use crate::db;

#[derive(Debug, thiserror::Error)]
pub enum UserAgentError {
    #[error("Missing payload in request")]
    MissingPayload,

    #[error("Invalid bootstrap token")]
    InvalidBootstrapToken,

    #[error("Public key not registered")]
    PubkeyNotRegistered,

    #[error("Invalid public key format")]
    InvalidPubkey,

    #[error("Invalid signature length")]
    InvalidSignatureLength,

    #[error("Invalid challenge solution")]
    InvalidChallengeSolution,

    #[error("Invalid state for operation")]
    InvalidState,

    #[error("Actor unavailable")]
    ActorUnavailable,

    #[error("Database error")]
    Database(#[from] diesel::result::Error),

    #[error("Database pool error")]
    DatabasePool(#[from] db::PoolError),
}

impl From<UserAgentError> for Status {
    fn from(err: UserAgentError) -> Self {
        match err {
            UserAgentError::MissingPayload
            | UserAgentError::InvalidBootstrapToken
            | UserAgentError::InvalidPubkey
            | UserAgentError::InvalidSignatureLength => Status::invalid_argument(err.to_string()),

            UserAgentError::PubkeyNotRegistered | UserAgentError::InvalidChallengeSolution => {
                Status::unauthenticated(err.to_string())
            }

            UserAgentError::InvalidState => Status::failed_precondition(err.to_string()),

            UserAgentError::ActorUnavailable
            | UserAgentError::Database(_)
            | UserAgentError::DatabasePool(_) => Status::internal(err.to_string()),
        }
    }
}
@@ -1,65 +1,401 @@
use arbiter_proto::{
    proto::user_agent::{UserAgentRequest, UserAgentResponse},
    transport::Bi,
use std::{ops::DerefMut, sync::Mutex};

use arbiter_proto::proto::{
    UnsealEncryptedKey, UnsealResult, UnsealStart, UnsealStartResponse, UserAgentRequest,
    UserAgentResponse,
    auth::{
        self, AuthChallengeRequest, AuthOk, ClientMessage as ClientAuthMessage,
        ServerMessage as AuthServerMessage,
        client_message::Payload as ClientAuthPayload,
        server_message::Payload as ServerAuthPayload,
    },
    user_agent_request::Payload as UserAgentRequestPayload,
    user_agent_response::Payload as UserAgentResponsePayload,
};
use kameo::actor::Spawn as _;
use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, dsl::update};
use diesel_async::RunQueryDsl;
use ed25519_dalek::VerifyingKey;
use kameo::{Actor, actor::Recipient, error::SendError, messages, prelude::Message};
use memsafe::MemSafe;
use tracing::{error, info};
use x25519_dalek::{EphemeralSecret, PublicKey};

use crate::{
    actors::{GlobalActors, user_agent::session::UserAgentSession},
    db::{self},
    ServerContext,
    actors::{
        GlobalActors,
        bootstrap::ConsumeToken,
        keyholder::{self, TryUnseal},
        user_agent::state::{
            ChallengeContext, DummyContext, UnsealContext, UserAgentEvents, UserAgentStateMachine,
            UserAgentStates,
        },
    },
    db::{self, schema},
};

#[derive(Debug, thiserror::Error, PartialEq)]
pub enum UserAgentError {
    #[error("Expected message with payload")]
    MissingRequestPayload,
    #[error("Unexpected request payload")]
    UnexpectedRequestPayload,
    #[error("Invalid state for unseal encrypted key")]
    InvalidStateForUnsealEncryptedKey,
    #[error("client_pubkey must be 32 bytes")]
    InvalidClientPubkeyLength,
    #[error("State machine error")]
    StateTransitionFailed,
    #[error("Vault is not available")]
    KeyHolderActorUnreachable,
    #[error(transparent)]
    Auth(#[from] auth::Error),
    #[error("Failed registering connection")]
    ConnectionRegistrationFailed,
}
mod error;
mod state;

pub type Transport =
    Box<dyn Bi<UserAgentRequest, Result<UserAgentResponse, UserAgentError>> + Send>;
pub use error::UserAgentError;

pub struct UserAgentConnection {
#[derive(Actor)]
pub struct UserAgentActor {
    db: db::DatabasePool,
    actors: GlobalActors,
    transport: Transport,
    state: UserAgentStateMachine<DummyContext>,
    transport: Recipient<Result<UserAgentResponse, UserAgentError>>,
}

impl UserAgentConnection {
    pub fn new(db: db::DatabasePool, actors: GlobalActors, transport: Transport) -> Self {
impl UserAgentActor {
    pub(crate) fn new(
        context: ServerContext,
        transport: Recipient<Result<UserAgentResponse, UserAgentError>>,
    ) -> Self {
        Self {
            db,
            actors,
            db: context.db.clone(),
            actors: context.actors.clone(),
            state: UserAgentStateMachine::new(DummyContext),
            transport,
        }
    }

    pub fn new_manual(
        db: db::DatabasePool,
        actors: GlobalActors,
        transport: Recipient<Result<UserAgentResponse, UserAgentError>>,
    ) -> Self {
        Self {
            db,
            actors,
            state: UserAgentStateMachine::new(DummyContext),
            transport,
        }
    }

    async fn process_request(&mut self, req: UserAgentRequest) -> Output {
        let msg = req.payload.ok_or_else(|| {
            error!(actor = "useragent", "Received message with no payload");
            UserAgentError::MissingPayload
        })?;

        match msg {
            UserAgentRequestPayload::AuthMessage(ClientAuthMessage {
                payload: Some(ClientAuthPayload::AuthChallengeRequest(req)),
            }) => self.handle_auth_challenge_request(req).await,
            UserAgentRequestPayload::AuthMessage(ClientAuthMessage {
                payload: Some(ClientAuthPayload::AuthChallengeSolution(solution)),
            }) => self.handle_auth_challenge_solution(solution).await,
            UserAgentRequestPayload::UnsealStart(unseal_start) => {
                self.handle_unseal_request(unseal_start).await
            }
            UserAgentRequestPayload::UnsealEncryptedKey(unseal_encrypted_key) => {
                self.handle_unseal_encrypted_key(unseal_encrypted_key).await
            }
            _ => Err(UserAgentError::MissingPayload),
        }
    }

    fn transition(&mut self, event: UserAgentEvents) -> Result<(), UserAgentError> {
        self.state.process_event(event).map_err(|e| {
            error!(?e, "State transition failed");
            UserAgentError::InvalidState
        })?;
        Ok(())
    }

    async fn auth_with_bootstrap_token(
        &mut self,
        pubkey: ed25519_dalek::VerifyingKey,
        token: String,
    ) -> Output {
        let token_ok: bool = self
            .actors
            .bootstrapper
            .ask(ConsumeToken { token })
            .await
            .map_err(|e| {
                error!(?pubkey, "Failed to consume bootstrap token: {e}");
                UserAgentError::ActorUnavailable
            })?;

        if !token_ok {
            error!(?pubkey, "Invalid bootstrap token provided");
            return Err(UserAgentError::InvalidBootstrapToken);
        }

        {
            let mut conn = self.db.get().await?;

            diesel::insert_into(schema::useragent_client::table)
                .values((
                    schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
                    schema::useragent_client::nonce.eq(1),
                ))
                .execute(&mut conn)
                .await?;
        }

        self.transition(UserAgentEvents::ReceivedBootstrapToken)?;

        Ok(auth_response(ServerAuthPayload::AuthOk(AuthOk {})))
    }

    async fn auth_with_challenge(&mut self, pubkey: VerifyingKey, pubkey_bytes: Vec<u8>) -> Output {
        let nonce: Option<i32> = {
            let mut db_conn = self.db.get().await?;
            db_conn
                .exclusive_transaction(|conn| {
                    Box::pin(async move {
                        let current_nonce = schema::useragent_client::table
                            .filter(
                                schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
                            )
                            .select(schema::useragent_client::nonce)
                            .first::<i32>(conn)
                            .await?;

                        update(schema::useragent_client::table)
                            .filter(
                                schema::useragent_client::public_key.eq(pubkey.as_bytes().to_vec()),
                            )
                            .set(schema::useragent_client::nonce.eq(current_nonce + 1))
                            .execute(conn)
                            .await?;

                        Result::<_, diesel::result::Error>::Ok(current_nonce)
                    })
                })
                .await
                .optional()?
        };

        let Some(nonce) = nonce else {
            error!(?pubkey, "Public key not found in database");
            return Err(UserAgentError::PubkeyNotRegistered);
        };

        let challenge = auth::AuthChallenge {
            pubkey: pubkey_bytes,
            nonce,
        };

        self.transition(UserAgentEvents::SentChallenge(ChallengeContext {
            challenge: challenge.clone(),
            key: pubkey,
        }))?;

        info!(
            ?pubkey,
            ?challenge,
            "Sent authentication challenge to client"
        );

        Ok(auth_response(ServerAuthPayload::AuthChallenge(challenge)))
    }

    fn verify_challenge_solution(
        &self,
        solution: &auth::AuthChallengeSolution,
    ) -> Result<(bool, &ChallengeContext), UserAgentError> {
        let UserAgentStates::WaitingForChallengeSolution(challenge_context) = self.state.state()
        else {
            error!("Received challenge solution in invalid state");
            return Err(UserAgentError::InvalidState);
        };
        let formatted_challenge = arbiter_proto::format_challenge(&challenge_context.challenge);

        let signature = solution.signature.as_slice().try_into().map_err(|_| {
            error!(?solution, "Invalid signature length");
            UserAgentError::InvalidSignatureLength
        })?;

        let valid = challenge_context
            .key
            .verify_strict(&formatted_challenge, &signature)
            .is_ok();

        Ok((valid, challenge_context))
    }
}

pub mod auth;
pub mod session;
type Output = Result<UserAgentResponse, UserAgentError>;

pub async fn connect_user_agent(props: UserAgentConnection) {
    match auth::authenticate_and_create(props).await {
        Ok(session) => {
            UserAgentSession::spawn(session);
            info!("User authenticated, session started");
fn auth_response(payload: ServerAuthPayload) -> UserAgentResponse {
    UserAgentResponse {
        payload: Some(UserAgentResponsePayload::AuthMessage(AuthServerMessage {
            payload: Some(payload),
        })),
    }
}

fn unseal_response(payload: UserAgentResponsePayload) -> UserAgentResponse {
    UserAgentResponse {
        payload: Some(payload),
    }
}

#[messages]
impl UserAgentActor {
    #[message]
    pub async fn handle_unseal_request(&mut self, req: UnsealStart) -> Output {
        let secret = EphemeralSecret::random();
        let public_key = PublicKey::from(&secret);

        let client_pubkey_bytes: [u8; 32] = req
            .client_pubkey
            .try_into()
            .map_err(|_| UserAgentError::InvalidPubkey)?;

        let client_public_key = PublicKey::from(client_pubkey_bytes);

        self.transition(UserAgentEvents::UnsealRequest(UnsealContext {
            secret: Mutex::new(Some(secret)),
            client_public_key,
        }))?;

        Ok(unseal_response(
            UserAgentResponsePayload::UnsealStartResponse(UnsealStartResponse {
                server_pubkey: public_key.as_bytes().to_vec(),
            }),
        ))
    }

    #[message]
    pub async fn handle_unseal_encrypted_key(&mut self, req: UnsealEncryptedKey) -> Output {
        let UserAgentStates::WaitingForUnsealKey(unseal_context) = self.state.state() else {
            error!("Received unseal encrypted key in invalid state");
            return Err(UserAgentError::InvalidState);
        };
        let ephemeral_secret = {
            let mut secret_lock = unseal_context.secret.lock().unwrap();
            let secret = secret_lock.take();
            match secret {
                Some(secret) => secret,
                None => {
                    drop(secret_lock);
                    error!("Ephemeral secret already taken");
                    self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                    return Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
                        UnsealResult::InvalidKey.into(),
                    )));
                }
            }
        };

        let nonce = XNonce::from_slice(&req.nonce);

        let shared_secret = ephemeral_secret.diffie_hellman(&unseal_context.client_public_key);
        let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());

        let mut seal_key_buffer = MemSafe::new(req.ciphertext.clone()).unwrap();

        let decryption_result = {
            let mut write_handle = seal_key_buffer.write().unwrap();
            let write_handle = write_handle.deref_mut();
            cipher.decrypt_in_place(nonce, &req.associated_data, write_handle)
        };

        match decryption_result {
            Ok(_) => {
                match self
                    .actors
                    .key_holder
                    .ask(TryUnseal {
                        seal_key_raw: seal_key_buffer,
                    })
                    .await
                {
                    Ok(_) => {
                        info!("Successfully unsealed key with client-provided key");
                        self.transition(UserAgentEvents::ReceivedValidKey)?;
                        Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
                            UnsealResult::Success.into(),
                        )))
                    }
                    Err(SendError::HandlerError(keyholder::Error::InvalidKey)) => {
                        self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                        Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
                            UnsealResult::InvalidKey.into(),
                        )))
                    }
                    Err(SendError::HandlerError(err)) => {
                        error!(?err, "Keyholder failed to unseal key");
                        self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                        Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
                            UnsealResult::InvalidKey.into(),
                        )))
                    }
                    Err(err) => {
                        error!(?err, "Failed to send unseal request to keyholder");
                        self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                        Err(UserAgentError::ActorUnavailable)
                    }
                }
            }
            Err(err) => {
                error!(?err, "Failed to decrypt unseal key");
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
                    UnsealResult::InvalidKey.into(),
                )))
            }
        }
        Err(err) => {
            error!(?err, "Authentication failed, closing connection");
        }

    #[message]
    pub async fn handle_auth_challenge_request(&mut self, req: AuthChallengeRequest) -> Output {
        let pubkey = req
            .pubkey
            .as_array()
            .ok_or(UserAgentError::InvalidPubkey)?;
        let pubkey = VerifyingKey::from_bytes(pubkey).map_err(|_err| {
            error!(?pubkey, "Failed to convert to VerifyingKey");
            UserAgentError::InvalidPubkey
        })?;

        self.transition(UserAgentEvents::AuthRequest)?;

        match req.bootstrap_token {
            Some(token) => self.auth_with_bootstrap_token(pubkey, token).await,
            None => self.auth_with_challenge(pubkey, req.pubkey).await,
        }
    }

    #[message]
    pub async fn handle_auth_challenge_solution(
        &mut self,
        solution: auth::AuthChallengeSolution,
    ) -> Output {
        let (valid, challenge_context) = self.verify_challenge_solution(&solution)?;

        if valid {
            info!(
                ?challenge_context,
                "Client provided valid solution to authentication challenge"
            );
            self.transition(UserAgentEvents::ReceivedGoodSolution)?;
            Ok(auth_response(ServerAuthPayload::AuthOk(AuthOk {})))
        } else {
            error!("Client provided invalid solution to authentication challenge");
            self.transition(UserAgentEvents::ReceivedBadSolution)?;
            Err(UserAgentError::InvalidChallengeSolution)
        }
    }
}

impl Message<UserAgentRequest> for UserAgentActor {
    type Reply = ();

    async fn handle(
        &mut self,
        msg: UserAgentRequest,
        _ctx: &mut kameo::prelude::Context<Self, Self::Reply>,
    ) -> Self::Reply {
        let result = self.process_request(msg).await;
        if let Err(e) = self.transport.tell(result).await {
            error!(actor = "useragent", "Failed to send response to transport: {}", e);
        }
    }
}
@@ -1,320 +0,0 @@
use std::{ops::DerefMut, sync::Mutex};

use arbiter_proto::proto::{
    evm as evm_proto,
    user_agent::{
        UnsealEncryptedKey, UnsealResult, UnsealStart, UnsealStartResponse, UserAgentRequest,
        UserAgentResponse, user_agent_request::Payload as UserAgentRequestPayload,
        user_agent_response::Payload as UserAgentResponsePayload,
    },
};
use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
use ed25519_dalek::VerifyingKey;
use kameo::{
    Actor,
    error::SendError,
};
use memsafe::MemSafe;
use tokio::select;
use tracing::{error, info};
use x25519_dalek::{EphemeralSecret, PublicKey};

use crate::actors::{
    evm::{Generate, ListWallets},
    keyholder::{self, TryUnseal},
    router::RegisterUserAgent,
    user_agent::{UserAgentConnection, UserAgentError},
};

mod state;
use state::{DummyContext, UnsealContext, UserAgentEvents, UserAgentStateMachine, UserAgentStates};

pub struct UserAgentSession {
    props: UserAgentConnection,
    key: VerifyingKey,
    state: UserAgentStateMachine<DummyContext>,
}

impl UserAgentSession {
    pub(crate) fn new(props: UserAgentConnection, key: VerifyingKey) -> Self {
        Self {
            props,
            key,
            state: UserAgentStateMachine::new(DummyContext),
        }
    }

    fn transition(&mut self, event: UserAgentEvents) -> Result<(), UserAgentError> {
        self.state.process_event(event).map_err(|e| {
            error!(?e, "State transition failed");
            UserAgentError::StateTransitionFailed
        })?;
        Ok(())
    }

    pub async fn process_transport_inbound(&mut self, req: UserAgentRequest) -> Output {
        let msg = req.payload.ok_or_else(|| {
            error!(actor = "useragent", "Received message with no payload");
            UserAgentError::MissingRequestPayload
        })?;

        match msg {
            UserAgentRequestPayload::UnsealStart(unseal_start) => {
                self.handle_unseal_request(unseal_start).await
            }
            UserAgentRequestPayload::UnsealEncryptedKey(unseal_encrypted_key) => {
                self.handle_unseal_encrypted_key(unseal_encrypted_key).await
            }
            UserAgentRequestPayload::EvmWalletCreate(_) => self.handle_evm_wallet_create().await,
            UserAgentRequestPayload::EvmWalletList(_) => self.handle_evm_wallet_list().await,
            _ => Err(UserAgentError::UnexpectedRequestPayload),
        }
    }
}

type Output = Result<UserAgentResponse, UserAgentError>;

fn response(payload: UserAgentResponsePayload) -> UserAgentResponse {
    UserAgentResponse {
        payload: Some(payload),
    }
}

impl UserAgentSession {
    async fn handle_unseal_request(&mut self, req: UnsealStart) -> Output {
        let secret = EphemeralSecret::random();
        let public_key = PublicKey::from(&secret);

        let client_pubkey_bytes: [u8; 32] = req
            .client_pubkey
            .try_into()
            .map_err(|_| UserAgentError::InvalidClientPubkeyLength)?;

        let client_public_key = PublicKey::from(client_pubkey_bytes);

        self.transition(UserAgentEvents::UnsealRequest(UnsealContext {
            secret: Mutex::new(Some(secret)),
            client_public_key,
        }))?;

        Ok(response(UserAgentResponsePayload::UnsealStartResponse(
            UnsealStartResponse {
                server_pubkey: public_key.as_bytes().to_vec(),
            },
        )))
    }

    async fn handle_unseal_encrypted_key(&mut self, req: UnsealEncryptedKey) -> Output {
        let UserAgentStates::WaitingForUnsealKey(unseal_context) = self.state.state() else {
            error!("Received unseal encrypted key in invalid state");
            return Err(UserAgentError::InvalidStateForUnsealEncryptedKey);
        };
        let ephemeral_secret = {
            let mut secret_lock = unseal_context.secret.lock().unwrap();
            let secret = secret_lock.take();
            match secret {
                Some(secret) => secret,
                None => {
                    drop(secret_lock);
                    error!("Ephemeral secret already taken");
                    self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                    return Ok(response(UserAgentResponsePayload::UnsealResult(
                        UnsealResult::InvalidKey.into(),
                    )));
                }
            }
        };

        let nonce = XNonce::from_slice(&req.nonce);

        let shared_secret = ephemeral_secret.diffie_hellman(&unseal_context.client_public_key);
        let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());

        let mut seal_key_buffer = MemSafe::new(req.ciphertext.clone()).unwrap();

        let decryption_result = {
            let mut write_handle = seal_key_buffer.write().unwrap();
            let write_handle = write_handle.deref_mut();
            cipher.decrypt_in_place(nonce, &req.associated_data, write_handle)
        };

        match decryption_result {
            Ok(_) => {
                match self
                    .props
                    .actors
                    .key_holder
                    .ask(TryUnseal {
                        seal_key_raw: seal_key_buffer,
                    })
                    .await
                {
                    Ok(_) => {
                        info!("Successfully unsealed key with client-provided key");
                        self.transition(UserAgentEvents::ReceivedValidKey)?;
                        Ok(response(UserAgentResponsePayload::UnsealResult(
                            UnsealResult::Success.into(),
                        )))
                    }
                    Err(SendError::HandlerError(keyholder::Error::InvalidKey)) => {
                        self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                        Ok(response(UserAgentResponsePayload::UnsealResult(
                            UnsealResult::InvalidKey.into(),
                        )))
                    }
                    Err(SendError::HandlerError(err)) => {
                        error!(?err, "Keyholder failed to unseal key");
                        self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                        Ok(response(UserAgentResponsePayload::UnsealResult(
                            UnsealResult::InvalidKey.into(),
                        )))
                    }
                    Err(err) => {
                        error!(?err, "Failed to send unseal request to keyholder");
                        self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                        Err(UserAgentError::KeyHolderActorUnreachable)
                    }
                }
            }
            Err(err) => {
                error!(?err, "Failed to decrypt unseal key");
                self.transition(UserAgentEvents::ReceivedInvalidKey)?;
                Ok(response(UserAgentResponsePayload::UnsealResult(
                    UnsealResult::InvalidKey.into(),
                )))
            }
        }
    }
}

impl UserAgentSession {
    async fn handle_evm_wallet_create(&mut self) -> Output {
        use evm_proto::wallet_create_response::Result as CreateResult;

        let result = match self.props.actors.evm.ask(Generate {}).await {
            Ok(address) => CreateResult::Wallet(evm_proto::WalletEntry {
                address: address.as_slice().to_vec(),
            }),
            Err(err) => CreateResult::Error(map_evm_error("wallet create", err).into()),
        };

        Ok(response(UserAgentResponsePayload::EvmWalletCreate(
            evm_proto::WalletCreateResponse {
                result: Some(result),
            },
        )))
    }

    async fn handle_evm_wallet_list(&mut self) -> Output {
        use evm_proto::wallet_list_response::Result as ListResult;

        let result = match self.props.actors.evm.ask(ListWallets {}).await {
            Ok(wallets) => ListResult::Wallets(evm_proto::WalletList {
                wallets: wallets
                    .into_iter()
                    .map(|addr| evm_proto::WalletEntry {
                        address: addr.as_slice().to_vec(),
                    })
                    .collect(),
            }),
            Err(err) => ListResult::Error(map_evm_error("wallet list", err).into()),
        };

        Ok(response(UserAgentResponsePayload::EvmWalletList(
            evm_proto::WalletListResponse {
                result: Some(result),
            },
        )))
    }
}

fn map_evm_error<M>(op: &str, err: SendError<M, crate::actors::evm::Error>) -> evm_proto::EvmError {
    use crate::actors::{evm::Error as EvmError, keyholder::Error as KhError};
    match err {
        SendError::HandlerError(EvmError::Keyholder(KhError::NotBootstrapped)) => {
|
||||
evm_proto::EvmError::VaultSealed
|
||||
}
|
||||
SendError::HandlerError(err) => {
|
||||
error!(?err, "EVM {op} failed");
|
||||
evm_proto::EvmError::Internal
|
||||
}
|
||||
_ => {
|
||||
error!("EVM actor unreachable during {op}");
|
||||
evm_proto::EvmError::Internal
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Actor for UserAgentSession {
|
||||
type Args = Self;
|
||||
|
||||
type Error = UserAgentError;
|
||||
|
||||
async fn on_start(
|
||||
args: Self::Args,
|
||||
this: kameo::prelude::ActorRef<Self>,
|
||||
) -> Result<Self, Self::Error> {
|
||||
args.props
|
||||
.actors
|
||||
.router
|
||||
.ask(RegisterUserAgent {
|
||||
actor: this.clone(),
|
||||
})
|
||||
.await
|
||||
.map_err(|err| {
|
||||
error!(?err, "Failed to register user agent connection with router");
|
||||
UserAgentError::ConnectionRegistrationFailed
|
||||
})?;
|
||||
Ok(args)
|
||||
}
|
||||
|
||||
async fn next(
|
||||
&mut self,
|
||||
_actor_ref: kameo::prelude::WeakActorRef<Self>,
|
||||
mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
|
||||
) -> Option<kameo::mailbox::Signal<Self>> {
|
||||
loop {
|
||||
select! {
|
||||
signal = mailbox_rx.recv() => {
|
||||
return signal;
|
||||
}
|
||||
msg = self.props.transport.recv() => {
|
||||
match msg {
|
||||
Some(request) => {
|
||||
match self.process_transport_inbound(request).await {
|
||||
Ok(response) => {
|
||||
if self.props.transport.send(Ok(response)).await.is_err() {
|
||||
error!(actor = "useragent", reason = "channel closed", "send.failed");
|
||||
return Some(kameo::mailbox::Signal::Stop);
|
||||
}
|
||||
}
|
||||
Err(err) => {
|
||||
let _ = self.props.transport.send(Err(err)).await;
|
||||
return Some(kameo::mailbox::Signal::Stop);
|
||||
}
|
||||
}
|
||||
}
|
||||
None => {
|
||||
info!(actor = "useragent", "transport.closed");
|
||||
return Some(kameo::mailbox::Signal::Stop);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl UserAgentSession {
|
||||
pub fn new_test(db: crate::db::DatabasePool, actors: crate::actors::GlobalActors) -> Self {
|
||||
use arbiter_proto::transport::DummyTransport;
|
||||
let transport: super::Transport = Box::new(DummyTransport::new());
|
||||
let props = UserAgentConnection::new(db, actors, transport);
|
||||
let key = VerifyingKey::from_bytes(&[0u8; 32]).unwrap();
|
||||
Self {
|
||||
props,
|
||||
key,
|
||||
state: UserAgentStateMachine::new(DummyContext),
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,27 +0,0 @@
use std::sync::Mutex;

use x25519_dalek::{EphemeralSecret, PublicKey};

pub struct UnsealContext {
    pub client_public_key: PublicKey,
    pub secret: Mutex<Option<EphemeralSecret>>,
}

smlang::statemachine!(
    name: UserAgent,
    custom_error: false,
    transitions: {
        *Idle + UnsealRequest(UnsealContext) / generate_temp_keypair = WaitingForUnsealKey(UnsealContext),
        WaitingForUnsealKey(UnsealContext) + ReceivedValidKey = Unsealed,
        WaitingForUnsealKey(UnsealContext) + ReceivedInvalidKey = Idle,
    }
);

pub struct DummyContext;
impl UserAgentStateMachineContext for DummyContext {
    #[allow(missing_docs)]
    #[allow(clippy::unused_unit)]
    fn generate_temp_keypair(&mut self, event_data: UnsealContext) -> Result<UnsealContext, ()> {
        Ok(event_data)
    }
}
51
server/crates/arbiter-server/src/actors/user_agent/state.rs
Normal file
@@ -0,0 +1,51 @@
use std::sync::Mutex;

use arbiter_proto::proto::auth::AuthChallenge;
use ed25519_dalek::VerifyingKey;
use x25519_dalek::{EphemeralSecret, PublicKey};

/// Context for state machine with validated key and sent challenge
/// Challenge is then transformed to bytes using shared function and verified
#[derive(Clone, Debug)]
pub struct ChallengeContext {
    pub challenge: AuthChallenge,
    pub key: VerifyingKey,
}

pub struct UnsealContext {
    pub client_public_key: PublicKey,
    pub secret: Mutex<Option<EphemeralSecret>>,
}

smlang::statemachine!(
    name: UserAgent,
    custom_error: false,
    transitions: {
        *Init + AuthRequest = ReceivedAuthRequest,
        ReceivedAuthRequest + ReceivedBootstrapToken = Idle,

        ReceivedAuthRequest + SentChallenge(ChallengeContext) / move_challenge = WaitingForChallengeSolution(ChallengeContext),

        WaitingForChallengeSolution(ChallengeContext) + ReceivedGoodSolution = Idle,
        WaitingForChallengeSolution(ChallengeContext) + ReceivedBadSolution = AuthError, // block further transitions, but connection should close anyway

        Idle + UnsealRequest(UnsealContext) / generate_temp_keypair = WaitingForUnsealKey(UnsealContext),
        WaitingForUnsealKey(UnsealContext) + ReceivedValidKey = Unsealed,
        WaitingForUnsealKey(UnsealContext) + ReceivedInvalidKey = Idle,
    }
);
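The transition table above can be read as a plain state/event function. This is a minimal std-only sketch of that table, assuming the usual smlang semantics (an event in a state with no matching transition is rejected); the context payloads carried by `SentChallenge` and `UnsealRequest` are elided.

```rust
// Sketch of the auth/unseal transition table, modeled as a plain enum match.
// State and event names mirror the smlang! definition above.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum State {
    Init,
    ReceivedAuthRequest,
    Idle,
    WaitingForChallengeSolution,
    AuthError,
    WaitingForUnsealKey,
    Unsealed,
}

#[derive(Debug, Clone, Copy)]
enum Event {
    AuthRequest,
    ReceivedBootstrapToken,
    SentChallenge,
    ReceivedGoodSolution,
    ReceivedBadSolution,
    UnsealRequest,
    ReceivedValidKey,
    ReceivedInvalidKey,
}

fn step(state: State, event: Event) -> Option<State> {
    use {Event::*, State::*};
    match (state, event) {
        (Init, AuthRequest) => Some(ReceivedAuthRequest),
        (ReceivedAuthRequest, ReceivedBootstrapToken) => Some(Idle),
        (ReceivedAuthRequest, SentChallenge) => Some(WaitingForChallengeSolution),
        (WaitingForChallengeSolution, ReceivedGoodSolution) => Some(Idle),
        (WaitingForChallengeSolution, ReceivedBadSolution) => Some(AuthError),
        (Idle, UnsealRequest) => Some(WaitingForUnsealKey),
        (WaitingForUnsealKey, ReceivedValidKey) => Some(Unsealed),
        (WaitingForUnsealKey, ReceivedInvalidKey) => Some(Idle),
        _ => None, // every other (state, event) pair is rejected
    }
}
```

Note that `AuthError` is a terminal state: no transition leaves it, which matches the comment that further transitions are blocked once a bad solution arrives.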

pub struct DummyContext;
impl UserAgentStateMachineContext for DummyContext {
    #[allow(missing_docs)]
    #[allow(clippy::unused_unit)]
    fn generate_temp_keypair(&mut self, event_data: UnsealContext) -> Result<UnsealContext, ()> {
        Ok(event_data)
    }

    #[allow(missing_docs)]
    #[allow(clippy::unused_unit)]
    fn move_challenge(&mut self, event_data: ChallengeContext) -> Result<ChallengeContext, ()> {
        Ok(event_data)
    }
}
@@ -1,78 +1,14 @@
#![allow(unused)]
#![allow(clippy::all)]

use crate::db::schema::{
    self, aead_encrypted, arbiter_settings, evm_basic_grant, evm_ether_transfer_grant, evm_ether_transfer_grant_target, evm_ether_transfer_limit, evm_token_transfer_grant, evm_token_transfer_log, evm_token_transfer_volume_limit, evm_transaction_log, evm_wallet, root_key_history, tls_history
};
use chrono::{DateTime, Utc};
use crate::db::schema::{self, aead_encrypted, arbiter_settings, root_key_history, tls_history};
use diesel::{prelude::*, sqlite::Sqlite};
use restructed::Models;

pub mod types {
    use std::os::unix;

    use chrono::{DateTime, Utc};
    use diesel::{
        deserialize::{FromSql, FromSqlRow},
        expression::AsExpression,
        serialize::{IsNull, ToSql},
        sql_types::Integer,
        sqlite::{Sqlite, SqliteType},
    };

    #[derive(Debug, FromSqlRow, AsExpression)]
    #[sql_type = "Integer"]
    #[repr(transparent)] // hint compiler to optimize the wrapper struct away
    pub struct SqliteTimestamp(pub DateTime<Utc>);
    impl SqliteTimestamp {
        pub fn now() -> Self {
            SqliteTimestamp(Utc::now())
        }
    }

    impl From<chrono::DateTime<Utc>> for SqliteTimestamp {
        fn from(dt: chrono::DateTime<Utc>) -> Self {
            SqliteTimestamp(dt)
        }
    }
    impl Into<chrono::DateTime<Utc>> for SqliteTimestamp {
        fn into(self) -> chrono::DateTime<Utc> {
            self.0
        }
    }

    impl ToSql<Integer, Sqlite> for SqliteTimestamp {
        fn to_sql<'b>(
            &'b self,
            out: &mut diesel::serialize::Output<'b, '_, Sqlite>,
        ) -> diesel::serialize::Result {
            let unix_timestamp = self.0.timestamp() as i32;
            out.set_value(unix_timestamp);
            Ok(IsNull::No)
        }
    }

    impl FromSql<Integer, Sqlite> for SqliteTimestamp {
        fn from_sql(
            mut bytes: <Sqlite as diesel::backend::Backend>::RawValue<'_>,
        ) -> diesel::deserialize::Result<Self> {
            let Some(SqliteType::Integer) = bytes.value_type() else {
                return Err(format!(
                    "Expected Integer type for SqliteTimestamp, got {:?}",
                    bytes.value_type()
                )
                .into());
            };

            let unix_timestamp = bytes.read_integer();
            let datetime = DateTime::from_timestamp(unix_timestamp as i64, 0)
                .ok_or("Timestamp is out of bounds")?;

            Ok(SqliteTimestamp(datetime))
        }
    }
    pub struct SqliteTimestamp(DateTime<Utc>);
}
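`ToSql` above stores the timestamp as whole seconds narrowed to an `i32` column, and `FromSql` widens it back. A std-only sketch of that narrowing (no diesel or chrono; `to_column`/`from_column` are hypothetical names) shows the round-trip is lossless only for seconds that fit in an `i32`, i.e. dates up to 2038-01-19:

```rust
// Sketch of the i64-seconds -> i32-column narrowing performed by ToSql.
fn to_column(unix_secs: i64) -> i32 {
    unix_secs as i32 // `as` silently wraps for values outside i32's range
}

// Widening back, as FromSql does before DateTime::from_timestamp.
fn from_column(col: i32) -> i64 {
    col as i64 // lossless for any stored column value
}
```

In-range values round-trip exactly; a value past `i32::MAX` seconds wraps on the way in and cannot be recovered on the way out.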
pub use types::*;

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[view(
@@ -89,7 +25,7 @@ pub struct AeadEncrypted {
    pub current_nonce: Vec<u8>,
    pub schema_version: i32,
    pub associated_root_key_id: i32, // references root_key_history.id
    pub created_at: SqliteTimestamp,
    pub created_at: i32,
}

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
@@ -122,9 +58,9 @@ pub struct TlsHistory {
    pub id: i32,
    pub cert: String,
    pub cert_key: String, // PEM Encoded private key
    pub ca_cert: String, // PEM Encoded certificate for cert signing
    pub ca_key: String, // PEM Encoded public key for cert signing
    pub created_at: SqliteTimestamp,
    pub ca_cert: String, // PEM Encoded certificate for cert signing
    pub ca_key: String, // PEM Encoded public key for cert signing
    pub created_at: i32,
}

#[derive(Queryable, Debug, Insertable, Selectable)]
@@ -132,173 +68,25 @@ pub struct TlsHistory {
pub struct ArbiterSettings {
    pub id: i32,
    pub root_key_id: Option<i32>, // references root_key_history.id
    pub tls_id: Option<i32>, // references tls_history.id
}

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = evm_wallet, check_for_backend(Sqlite))]
#[view(
    NewEvmWallet,
    derive(Insertable),
    omit(id, created_at),
    attributes_with = "deriveless"
)]
pub struct EvmWallet {
    pub id: i32,
    pub address: Vec<u8>,
    pub aead_encrypted_id: i32,
    pub created_at: SqliteTimestamp,
    pub tls_id: Option<i32>, // references tls_history.id
}

#[derive(Queryable, Debug)]
#[diesel(table_name = schema::program_client, check_for_backend(Sqlite))]
pub struct ProgramClient {
    pub id: i32,
    pub nonce: i32,
    pub public_key: Vec<u8>,
    pub created_at: SqliteTimestamp,
    pub updated_at: SqliteTimestamp,
    pub nonce: i32,
    pub created_at: i32,
    pub updated_at: i32,
}

#[derive(Queryable, Debug)]
#[diesel(table_name = schema::useragent_client, check_for_backend(Sqlite))]
pub struct UseragentClient {
    pub id: i32,
    pub nonce: i32,
    pub public_key: Vec<u8>,
    pub created_at: SqliteTimestamp,
    pub updated_at: SqliteTimestamp,
}

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = evm_ether_transfer_limit, check_for_backend(Sqlite))]
#[view(
    NewEvmEtherTransferLimit,
    derive(Insertable),
    omit(id, created_at),
    attributes_with = "deriveless"
)]
pub struct EvmEtherTransferLimit {
    pub id: i32,
    pub window_secs: i32,
    pub max_volume: Vec<u8>,
}

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = evm_basic_grant, check_for_backend(Sqlite))]
#[view(
    NewEvmBasicGrant,
    derive(Insertable),
    omit(id, created_at),
    attributes_with = "deriveless"
)]
pub struct EvmBasicGrant {
    pub id: i32,
    pub wallet_id: i32, // references evm_wallet.id
    pub client_id: i32, // references program_client.id
    pub chain_id: i32,
    pub valid_from: Option<SqliteTimestamp>,
    pub valid_until: Option<SqliteTimestamp>,
    pub max_gas_fee_per_gas: Option<Vec<u8>>,
    pub max_priority_fee_per_gas: Option<Vec<u8>>,
    pub rate_limit_count: Option<i32>,
    pub rate_limit_window_secs: Option<i32>,
    pub revoked_at: Option<SqliteTimestamp>,
    pub created_at: SqliteTimestamp,
}

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = evm_transaction_log, check_for_backend(Sqlite))]
#[view(
    NewEvmTransactionLog,
    derive(Insertable),
    omit(id),
    attributes_with = "deriveless"
)]
pub struct EvmTransactionLog {
    pub id: i32,
    pub grant_id: i32,
    pub client_id: i32,
    pub wallet_id: i32,
    pub chain_id: i32,
    pub eth_value: Vec<u8>,
    pub signed_at: SqliteTimestamp,
}

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = evm_ether_transfer_grant, check_for_backend(Sqlite))]
#[view(
    NewEvmEtherTransferGrant,
    derive(Insertable),
    omit(id),
    attributes_with = "deriveless"
)]
pub struct EvmEtherTransferGrant {
    pub id: i32,
    pub basic_grant_id: i32,
    pub limit_id: i32, // references evm_ether_transfer_limit.id
}

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = evm_ether_transfer_grant_target, check_for_backend(Sqlite))]
#[view(
    NewEvmEtherTransferGrantTarget,
    derive(Insertable),
    omit(id),
    attributes_with = "deriveless"
)]
pub struct EvmEtherTransferGrantTarget {
    pub id: i32,
    pub grant_id: i32,
    pub address: Vec<u8>,
}

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = evm_token_transfer_grant, check_for_backend(Sqlite))]
#[view(
    NewEvmTokenTransferGrant,
    derive(Insertable),
    omit(id),
    attributes_with = "deriveless"
)]
pub struct EvmTokenTransferGrant {
    pub id: i32,
    pub basic_grant_id: i32,
    pub token_contract: Vec<u8>,
    pub receiver: Option<Vec<u8>>,
}

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = evm_token_transfer_volume_limit, check_for_backend(Sqlite))]
#[view(
    NewEvmTokenTransferVolumeLimit,
    derive(Insertable),
    omit(id),
    attributes_with = "deriveless"
)]
pub struct EvmTokenTransferVolumeLimit {
    pub id: i32,
    pub grant_id: i32,
    pub window_secs: i32,
    pub max_volume: Vec<u8>,
}

#[derive(Models, Queryable, Debug, Insertable, Selectable)]
#[diesel(table_name = evm_token_transfer_log, check_for_backend(Sqlite))]
#[view(
    NewEvmTokenTransferLog,
    derive(Insertable),
    omit(id, created_at),
    attributes_with = "deriveless"
)]
pub struct EvmTokenTransferLog {
    pub id: i32,
    pub grant_id: i32,
    pub log_id: i32,
    pub chain_id: i32,
    pub token_contract: Vec<u8>,
    pub recipient_address: Vec<u8>,
    pub value: Vec<u8>,
    pub created_at: SqliteTimestamp,
    pub nonce: i32,
    pub created_at: i32,
    pub updated_at: i32,
}
@@ -20,99 +20,6 @@ diesel::table! {
    }
}

diesel::table! {
    evm_basic_grant (id) {
        id -> Integer,
        wallet_id -> Integer,
        client_id -> Integer,
        chain_id -> Integer,
        valid_from -> Nullable<Integer>,
        valid_until -> Nullable<Integer>,
        max_gas_fee_per_gas -> Nullable<Binary>,
        max_priority_fee_per_gas -> Nullable<Binary>,
        rate_limit_count -> Nullable<Integer>,
        rate_limit_window_secs -> Nullable<Integer>,
        revoked_at -> Nullable<Integer>,
        created_at -> Integer,
    }
}

diesel::table! {
    evm_ether_transfer_grant (id) {
        id -> Integer,
        basic_grant_id -> Integer,
        limit_id -> Integer,
    }
}

diesel::table! {
    evm_ether_transfer_grant_target (id) {
        id -> Integer,
        grant_id -> Integer,
        address -> Binary,
    }
}

diesel::table! {
    evm_ether_transfer_limit (id) {
        id -> Integer,
        window_secs -> Integer,
        max_volume -> Binary,
    }
}

diesel::table! {
    evm_token_transfer_grant (id) {
        id -> Integer,
        basic_grant_id -> Integer,
        token_contract -> Binary,
        receiver -> Nullable<Binary>,
    }
}

diesel::table! {
    evm_token_transfer_log (id) {
        id -> Integer,
        grant_id -> Integer,
        log_id -> Integer,
        chain_id -> Integer,
        token_contract -> Binary,
        recipient_address -> Binary,
        value -> Binary,
        created_at -> Integer,
    }
}

diesel::table! {
    evm_token_transfer_volume_limit (id) {
        id -> Integer,
        grant_id -> Integer,
        window_secs -> Integer,
        max_volume -> Binary,
    }
}

diesel::table! {
    evm_transaction_log (id) {
        id -> Integer,
        grant_id -> Integer,
        client_id -> Integer,
        wallet_id -> Integer,
        chain_id -> Integer,
        eth_value -> Binary,
        signed_at -> Integer,
    }
}

diesel::table! {
    evm_wallet (id) {
        id -> Integer,
        address -> Binary,
        aead_encrypted_id -> Integer,
        created_at -> Integer,
    }
}

diesel::table! {
    program_client (id) {
        id -> Integer,
@@ -159,29 +66,10 @@ diesel::table! {
diesel::joinable!(aead_encrypted -> root_key_history (associated_root_key_id));
diesel::joinable!(arbiter_settings -> root_key_history (root_key_id));
diesel::joinable!(arbiter_settings -> tls_history (tls_id));
diesel::joinable!(evm_basic_grant -> evm_wallet (wallet_id));
diesel::joinable!(evm_basic_grant -> program_client (client_id));
diesel::joinable!(evm_ether_transfer_grant -> evm_basic_grant (basic_grant_id));
diesel::joinable!(evm_ether_transfer_grant -> evm_ether_transfer_limit (limit_id));
diesel::joinable!(evm_ether_transfer_grant_target -> evm_ether_transfer_grant (grant_id));
diesel::joinable!(evm_token_transfer_grant -> evm_basic_grant (basic_grant_id));
diesel::joinable!(evm_token_transfer_log -> evm_token_transfer_grant (grant_id));
diesel::joinable!(evm_token_transfer_log -> evm_transaction_log (log_id));
diesel::joinable!(evm_token_transfer_volume_limit -> evm_token_transfer_grant (grant_id));
diesel::joinable!(evm_wallet -> aead_encrypted (aead_encrypted_id));

diesel::allow_tables_to_appear_in_same_query!(
    aead_encrypted,
    arbiter_settings,
    evm_basic_grant,
    evm_ether_transfer_grant,
    evm_ether_transfer_grant_target,
    evm_ether_transfer_limit,
    evm_token_transfer_grant,
    evm_token_transfer_log,
    evm_token_transfer_volume_limit,
    evm_transaction_log,
    evm_wallet,
    program_client,
    root_key_history,
    tls_history,
@@ -1,84 +0,0 @@
use alloy::sol;

sol! {
    interface IERC20 {
        event Transfer(address indexed from, address indexed to, uint256 value);
        event Approval(address indexed owner, address indexed spender, uint256 value);

        function totalSupply() external view returns (uint256);
        function balanceOf(address account) external view returns (uint256);
        function transfer(address to, uint256 value) external returns (bool);
        function allowance(address owner, address spender) external view returns (uint256);
        function approve(address spender, uint256 value) external returns (bool);
        function transferFrom(address from, address to, uint256 value) external returns (bool);
    }
}

sol! {
    /// ERC-721: Non-Fungible Token Standard.
    #[derive(Debug)]
    interface IERC721 {
        event Transfer(address indexed from, address indexed to, uint256 indexed tokenId);
        event Approval(address indexed owner, address indexed approved, uint256 indexed tokenId);
        event ApprovalForAll(address indexed owner, address indexed operator, bool approved);

        function balanceOf(address owner) external view returns (uint256 balance);
        function ownerOf(uint256 tokenId) external view returns (address owner);
        function safeTransferFrom(address from, address to, uint256 tokenId) external;
        function safeTransferFrom(address from, address to, uint256 tokenId, bytes calldata data) external;
        function transferFrom(address from, address to, uint256 tokenId) external;
        function approve(address to, uint256 tokenId) external;
        function setApprovalForAll(address operator, bool approved) external;
        function getApproved(uint256 tokenId) external view returns (address operator);
        function isApprovedForAll(address owner, address operator) external view returns (bool);
    }
}

sol! {
    /// Wrapped Ether — the only functions beyond ERC-20 that matter.
    #[derive(Debug)]
    interface IWETH {
        function deposit() external payable;
        function withdraw(uint256 wad) external;
    }
}

sol! {
    /// Permit2 — Uniswap's canonical token approval manager.
    /// Replaces per-contract ERC-20 approve() with a single approval hub.
    #[derive(Debug)]
    interface IPermit2 {
        struct TokenPermissions {
            address token;
            uint256 amount;
        }

        struct PermitSingle {
            TokenPermissions details;
            address spender;
            uint256 sigDeadline;
        }

        struct PermitBatch {
            TokenPermissions[] details;
            address spender;
            uint256 sigDeadline;
        }

        struct AllowanceTransferDetails {
            address from;
            address to;
            uint160 amount;
            address token;
        }

        function approve(address token, address spender, uint160 amount, uint48 expiration) external;
        function permit(address owner, PermitSingle calldata permitSingle, bytes calldata signature) external;
        function permit(address owner, PermitBatch calldata permitBatch, bytes calldata signature) external;
        function transferFrom(address from, address to, uint160 amount, address token) external;
        function transferFrom(AllowanceTransferDetails[] calldata transferDetails) external;

        function allowance(address user, address token, address spender)
            external view returns (uint160 amount, uint48 expiration, uint48 nonce);
    }
}
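These `sol!` interfaces exist so the engine can decode calldata and classify transactions. As a std-only illustration of the underlying ABI layout (no alloy), ERC-20 `transfer(address,uint256)` calldata is the 4-byte selector `0xa9059cbb` followed by two 32-byte words: the recipient left-padded to 32 bytes, then the amount big-endian. `decode_transfer` is a hypothetical helper, and for brevity it reads only the low 16 bytes of the amount word:

```rust
// Sketch of ERC-20 transfer(address,uint256) calldata decoding.
fn decode_transfer(calldata: &[u8]) -> Option<([u8; 20], u128)> {
    // keccak256("transfer(address,uint256)")[..4] == a9 05 9c bb
    const SELECTOR: [u8; 4] = [0xa9, 0x05, 0x9c, 0xbb];
    if calldata.len() != 4 + 32 + 32 || calldata[..4] != SELECTOR {
        return None;
    }
    // First word (bytes 4..36): address occupies the last 20 bytes.
    let mut to = [0u8; 20];
    to.copy_from_slice(&calldata[16..36]);
    // Second word (bytes 36..68): amount, big-endian; read the low 16 bytes.
    let mut amount_bytes = [0u8; 16];
    amount_bytes.copy_from_slice(&calldata[52..68]);
    Some((to, u128::from_be_bytes(amount_bytes)))
}
```

Anything that fails the length or selector check falls through to the next classifier, which mirrors how a policy's `analyze` returns `None` for calldata it does not recognize.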
@@ -1,331 +0,0 @@
pub mod abi;
pub mod safe_signer;

use alloy::{consensus::TxEip1559, primitives::{TxKind, U256}};
use chrono::Utc;
use diesel::{QueryResult, insert_into, sqlite::Sqlite};
use diesel_async::{AsyncConnection, RunQueryDsl};

use crate::{
    db::{
        self,
        models::{
            EvmBasicGrant, NewEvmBasicGrant, NewEvmTransactionLog,
            SqliteTimestamp,
        },
        schema::{self, evm_transaction_log},
    },
    evm::policies::{
        DatabaseID, EvalContext, EvalViolation, FullGrant, Grant, Policy, SharedGrantSettings,
        SpecificGrant, SpecificMeaning,
        ether_transfer::EtherTransfer, token_transfers::TokenTransfer,
    },
};

pub mod policies;
mod utils;

/// Errors that can only occur once the transaction meaning is known (during policy evaluation)
#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum PolicyError {
    #[error("Database connection pool error")]
    #[diagnostic(code(arbiter_server::evm::policy_error::pool))]
    Pool(#[from] db::PoolError),
    #[error("Database returned error")]
    #[diagnostic(code(arbiter_server::evm::policy_error::database))]
    Database(#[from] diesel::result::Error),
    #[error("Transaction violates policy: {0:?}")]
    #[diagnostic(code(arbiter_server::evm::policy_error::violation))]
    Violations(Vec<EvalViolation>),
    #[error("No matching grant found")]
    #[diagnostic(code(arbiter_server::evm::policy_error::no_matching_grant))]
    NoMatchingGrant,
}

#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum VetError {
    #[error("Contract creation transactions are not supported")]
    #[diagnostic(code(arbiter_server::evm::vet_error::contract_creation_unsupported))]
    ContractCreationNotSupported,
    #[error("Engine can't classify this transaction")]
    #[diagnostic(code(arbiter_server::evm::vet_error::unsupported))]
    UnsupportedTransactionType,
    #[error("Policy evaluation failed: {1}")]
    #[diagnostic(code(arbiter_server::evm::vet_error::evaluated))]
    Evaluated(SpecificMeaning, #[source] PolicyError),
}

#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum SignError {
    #[error("Database connection pool error")]
    #[diagnostic(code(arbiter_server::evm::database_error))]
    Pool(#[from] db::PoolError),
    #[error("Database returned error")]
    #[diagnostic(code(arbiter_server::evm::database_error))]
    Database(#[from] diesel::result::Error),
}

#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum AnalyzeError {
    #[error("Engine doesn't support granting permissions for contract creation")]
    #[diagnostic(code(arbiter_server::evm::analyze_error::contract_creation_not_supported))]
    ContractCreationNotSupported,

    #[error("Unsupported transaction type")]
    #[diagnostic(code(arbiter_server::evm::analyze_error::unsupported_transaction_type))]
    UnsupportedTransactionType,
}

#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum CreationError {
    #[error("Database connection pool error")]
    #[diagnostic(code(arbiter_server::evm::creation_error::database_error))]
    Pool(#[from] db::PoolError),

    #[error("Database returned error")]
    #[diagnostic(code(arbiter_server::evm::creation_error::database_error))]
    Database(#[from] diesel::result::Error),
}

#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum ListGrantsError {
    #[error("Database connection pool error")]
    #[diagnostic(code(arbiter_server::evm::list_grants_error::pool))]
    Pool(#[from] db::PoolError),

    #[error("Database returned error")]
    #[diagnostic(code(arbiter_server::evm::list_grants_error::database))]
    Database(#[from] diesel::result::Error),
}

/// Controls whether a transaction should be executed or only validated
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum RunKind {
    /// Validate and record the transaction
    Execution,
    /// Validate only, do not record
    CheckOnly,
}

async fn check_shared_constraints(
    context: &EvalContext,
    shared: &SharedGrantSettings,
    shared_grant_id: DatabaseID,
    conn: &mut impl AsyncConnection<Backend = Sqlite>,
) -> QueryResult<Vec<EvalViolation>> {
    let mut violations = Vec::new();
    let now = Utc::now();

    // Validity window
    if shared.valid_from.map_or(false, |t| now < t)
        || shared.valid_until.map_or(false, |t| now > t)
    {
        violations.push(EvalViolation::InvalidTime);
    }

    // Gas fee caps
    let fee_exceeded = shared
        .max_gas_fee_per_gas
        .map_or(false, |cap| U256::from(context.max_fee_per_gas) > cap);
    let priority_exceeded = shared
        .max_priority_fee_per_gas
        .map_or(false, |cap| U256::from(context.max_priority_fee_per_gas) > cap);
    if fee_exceeded || priority_exceeded {
        violations.push(EvalViolation::GasLimitExceeded {
            max_gas_fee_per_gas: shared.max_gas_fee_per_gas,
            max_priority_fee_per_gas: shared.max_priority_fee_per_gas,
        });
    }

    // Transaction count rate limit
    if let Some(rate_limit) = &shared.rate_limit {
        let window_start = SqliteTimestamp(now - rate_limit.window);
        let count: i64 = evm_transaction_log::table
            .filter(evm_transaction_log::grant_id.eq(shared_grant_id))
            .filter(evm_transaction_log::signed_at.ge(window_start))
            .count()
            .get_result(conn)
            .await?;

        if count >= rate_limit.count as i64 {
            violations.push(EvalViolation::RateLimitExceeded);
        }
    }

    Ok(violations)
}
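Stripped of the database query, the three shared checks reduce to pure predicates. This is a std-only sketch with unix-second timestamps in place of chrono and `u64` fees in place of `U256`; the field names mirror `SharedGrantSettings`, but the simplified types and the `check` helper are assumptions for illustration:

```rust
// Std-only sketch of the shared grant constraints: each limit is Option-al,
// and an absent limit (None) never produces a violation.
#[derive(Debug, PartialEq)]
enum Violation {
    InvalidTime,
    GasLimitExceeded,
    RateLimitExceeded,
}

struct Shared {
    valid_from: Option<i64>,
    valid_until: Option<i64>,
    max_gas_fee_per_gas: Option<u64>,
    rate_limit_count: Option<u64>,
}

fn check(now: i64, max_fee: u64, prior_tx_in_window: u64, s: &Shared) -> Vec<Violation> {
    let mut v = Vec::new();
    // Validity window: reject before valid_from or after valid_until.
    if s.valid_from.map_or(false, |t| now < t) || s.valid_until.map_or(false, |t| now > t) {
        v.push(Violation::InvalidTime);
    }
    // Gas cap: only enforced when a cap is configured.
    if s.max_gas_fee_per_gas.map_or(false, |cap| max_fee > cap) {
        v.push(Violation::GasLimitExceeded);
    }
    // Rate limit: transactions already signed in the window (the role of the
    // evm_transaction_log count query) against the configured maximum.
    if s.rate_limit_count.map_or(false, |limit| prior_tx_in_window >= limit) {
        v.push(Violation::RateLimitExceeded);
    }
    v
}
```

All violations are collected rather than short-circuiting on the first failure, matching how the engine reports the full list back to the caller.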
|
||||
|
||||
// Supporting only EIP-1559 transactions for now, but we can easily extend this to support legacy transactions if needed
|
||||
pub struct Engine {
|
||||
db: db::DatabasePool,
|
||||
}
|
||||
|
||||
impl Engine {
    async fn vet_transaction<P: Policy>(
        &self,
        context: EvalContext,
        meaning: &P::Meaning,
        run_kind: RunKind,
    ) -> Result<(), PolicyError> {
        let mut conn = self.db.get().await?;

        let grant = P::try_find_grant(&context, &mut conn)
            .await?
            .ok_or(PolicyError::NoMatchingGrant)?;

        let mut violations =
            check_shared_constraints(&context, &grant.shared, grant.shared_grant_id, &mut conn)
                .await?;
        violations.extend(P::evaluate(&context, meaning, &grant, &mut conn).await?);

        if !violations.is_empty() {
            return Err(PolicyError::Violations(violations));
        } else if run_kind == RunKind::Execution {
            conn.transaction(|conn| {
                Box::pin(async move {
                    let log_id: i32 = insert_into(evm_transaction_log::table)
                        .values(&NewEvmTransactionLog {
                            grant_id: grant.shared_grant_id,
                            client_id: context.client_id,
                            wallet_id: context.wallet_id,
                            chain_id: context.chain as i32,
                            eth_value: utils::u256_to_bytes(context.value).to_vec(),
                            signed_at: Utc::now().into(),
                        })
                        .returning(evm_transaction_log::id)
                        .get_result(conn)
                        .await?;

                    P::record_transaction(&context, meaning, log_id, &grant, conn).await?;

                    QueryResult::Ok(())
                })
            })
            .await?;
        }

        Ok(())
    }
}

impl Engine {
    pub fn new(db: db::DatabasePool) -> Self {
        Self { db }
    }

    pub async fn create_grant<P: Policy>(
        &self,
        client_id: i32,
        full_grant: FullGrant<P::Settings>,
    ) -> Result<i32, CreationError> {
        let mut conn = self.db.get().await?;

        let id = conn
            .transaction(|conn| {
                Box::pin(async move {
                    use schema::evm_basic_grant;

                    let basic_grant: EvmBasicGrant = insert_into(evm_basic_grant::table)
                        .values(&NewEvmBasicGrant {
                            wallet_id: full_grant.basic.wallet_id,
                            chain_id: full_grant.basic.chain as i32,
                            client_id,
                            valid_from: full_grant.basic.valid_from.map(SqliteTimestamp),
                            valid_until: full_grant.basic.valid_until.map(SqliteTimestamp),
                            max_gas_fee_per_gas: full_grant
                                .basic
                                .max_gas_fee_per_gas
                                .map(|fee| utils::u256_to_bytes(fee).to_vec()),
                            max_priority_fee_per_gas: full_grant
                                .basic
                                .max_priority_fee_per_gas
                                .map(|fee| utils::u256_to_bytes(fee).to_vec()),
                            rate_limit_count: full_grant
                                .basic
                                .rate_limit
                                .as_ref()
                                .map(|rl| rl.count as i32),
                            rate_limit_window_secs: full_grant
                                .basic
                                .rate_limit
                                .as_ref()
                                .map(|rl| rl.window.num_seconds() as i32),
                            revoked_at: None,
                        })
                        .returning(evm_basic_grant::all_columns)
                        .get_result(conn)
                        .await?;

                    P::create_grant(&basic_grant, &full_grant.specific, conn).await
                })
            })
            .await?;

        Ok(id)
    }

    pub async fn list_all_grants(&self) -> Result<Vec<Grant<SpecificGrant>>, ListGrantsError> {
        let mut conn = self.db.get().await?;

        let mut grants: Vec<Grant<SpecificGrant>> = Vec::new();

        grants.extend(
            EtherTransfer::find_all_grants(&mut conn)
                .await?
                .into_iter()
                .map(Grant::from),
        );
        grants.extend(
            TokenTransfer::find_all_grants(&mut conn)
                .await?
                .into_iter()
                .map(Grant::from),
        );

        Ok(grants)
    }

    pub async fn evaluate_transaction(
        &self,
        wallet_id: i32,
        client_id: i32,
        transaction: TxEip1559,
        run_kind: RunKind,
    ) -> Result<SpecificMeaning, VetError> {
        let TxKind::Call(to) = transaction.to else {
            return Err(VetError::ContractCreationNotSupported);
        };
        let context = policies::EvalContext {
            wallet_id,
            client_id,
            chain: transaction.chain_id,
            to,
            value: transaction.value,
            calldata: transaction.input.clone(),
            max_fee_per_gas: transaction.max_fee_per_gas,
            max_priority_fee_per_gas: transaction.max_priority_fee_per_gas,
        };

        if let Some(meaning) = EtherTransfer::analyze(&context) {
            return match self
                .vet_transaction::<EtherTransfer>(context, &meaning, run_kind)
                .await
            {
                Ok(()) => Ok(meaning.into()),
                Err(e) => Err(VetError::Evaluated(meaning.into(), e)),
            };
        }
        if let Some(meaning) = TokenTransfer::analyze(&context) {
            return match self
                .vet_transaction::<TokenTransfer>(context, &meaning, run_kind)
                .await
            {
                Ok(()) => Ok(meaning.into()),
                Err(e) => Err(VetError::Evaluated(meaning.into(), e)),
            };
        }

        Err(VetError::UnsupportedTransactionType)
    }
}

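The first-match-wins classification in `evaluate_transaction` can be sketched without the engine: each analyzer inspects the transaction and the first `Some` result decides which policy handles it. A hedged, self-contained sketch; names and the `Tx` shape are illustrative, not the crate's API. The ERC-20 `transfer` selector `0xa9059cbb` is a standard constant.

```rust
#[derive(Debug, PartialEq)]
enum Meaning {
    EtherTransfer { value: u128 },
    TokenTransfer,
    Unsupported,
}

struct Tx {
    value: u128,
    calldata: Vec<u8>,
}

// A call with empty calldata is a plain ether transfer
fn analyze_ether(tx: &Tx) -> Option<Meaning> {
    tx.calldata.is_empty().then(|| Meaning::EtherTransfer { value: tx.value })
}

// A call starting with the ERC-20 `transfer` selector is a token transfer
fn analyze_token(tx: &Tx) -> Option<Meaning> {
    tx.calldata.starts_with(&[0xa9, 0x05, 0x9c, 0xbb]).then_some(Meaning::TokenTransfer)
}

// Try each analyzer in order; the first match wins, otherwise Unsupported
fn classify(tx: &Tx) -> Meaning {
    analyze_ether(tx)
        .or_else(|| analyze_token(tx))
        .unwrap_or(Meaning::Unsupported)
}
```

Analyzer order matters only when classifications could overlap; here the empty-calldata check and the selector check are mutually exclusive.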
@@ -1,220 +0,0 @@
use std::fmt::Display;

use alloy::primitives::{Address, Bytes, ChainId, U256};
use chrono::{DateTime, Duration, Utc};
use diesel::{
    ExpressionMethods as _, QueryDsl, SelectableHelper, result::QueryResult, sqlite::Sqlite,
};
use diesel_async::{AsyncConnection, RunQueryDsl};
use miette::Diagnostic;
use thiserror::Error;

use crate::{
    db::models::{self, EvmBasicGrant},
    evm::utils,
};

pub mod ether_transfer;
pub mod token_transfers;

pub struct EvalContext {
    // Which wallet and client this transaction is for
    pub client_id: i32,
    pub wallet_id: i32,

    // The transaction data
    pub chain: ChainId,
    pub to: Address,
    pub value: U256,
    pub calldata: Bytes,

    // Gas pricing (EIP-1559)
    pub max_fee_per_gas: u128,
    pub max_priority_fee_per_gas: u128,
}

#[derive(Debug, Error, Diagnostic)]
pub enum EvalViolation {
    #[error("This grant doesn't allow transactions to the target address {target}")]
    #[diagnostic(code(arbiter_server::evm::eval_violation::invalid_target))]
    InvalidTarget { target: Address },

    #[error("Gas limit exceeded for this grant")]
    #[diagnostic(code(arbiter_server::evm::eval_violation::gas_limit_exceeded))]
    GasLimitExceeded {
        max_gas_fee_per_gas: Option<U256>,
        max_priority_fee_per_gas: Option<U256>,
    },

    #[error("Rate limit exceeded for this grant")]
    #[diagnostic(code(arbiter_server::evm::eval_violation::rate_limit_exceeded))]
    RateLimitExceeded,

    #[error("Transaction exceeds volumetric limits of the grant")]
    #[diagnostic(code(arbiter_server::evm::eval_violation::volumetric_limit_exceeded))]
    VolumetricLimitExceeded,

    #[error("Transaction is outside of the grant's validity period")]
    #[diagnostic(code(arbiter_server::evm::eval_violation::invalid_time))]
    InvalidTime,

    #[error("Transaction type is not allowed by this grant")]
    #[diagnostic(code(arbiter_server::evm::eval_violation::invalid_transaction_type))]
    InvalidTransactionType,
}

pub type DatabaseID = i32;

pub struct Grant<PolicySettings> {
    pub id: DatabaseID,
    // ID of the basic grant, used for shared-logic checks like rate limits and validity periods
    pub shared_grant_id: DatabaseID,
    pub shared: SharedGrantSettings,
    pub settings: PolicySettings,
}

pub trait Policy: Sized {
    type Settings: Send + Sync + 'static + Into<SpecificGrant>;
    type Meaning: Display + std::fmt::Debug + Send + Sync + 'static + Into<SpecificMeaning>;

    fn analyze(context: &EvalContext) -> Option<Self::Meaning>;

    // Evaluate whether a transaction with the given meaning complies with the provided grant,
    // and return any violations. An empty vector means the transaction is compliant.
    fn evaluate(
        context: &EvalContext,
        meaning: &Self::Meaning,
        grant: &Grant<Self::Settings>,
        db: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> impl Future<Output = QueryResult<Vec<EvalViolation>>> + Send;

    // Create a new grant in the database based on the provided grant details, and return its ID
    fn create_grant(
        basic: &models::EvmBasicGrant,
        grant: &Self::Settings,
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> impl std::future::Future<Output = QueryResult<DatabaseID>> + Send;

    // Try to find an existing grant that matches the transaction context, and return its details if found.
    // The returned grant also carries the ID of the basic grant for shared-logic checks like rate limits and validity periods.
    fn try_find_grant(
        context: &EvalContext,
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> impl Future<Output = QueryResult<Option<Grant<Self::Settings>>>> + Send;

    // Return all non-revoked grants, eagerly loading policy-specific settings
    fn find_all_grants(
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> impl Future<Output = QueryResult<Vec<Grant<Self::Settings>>>> + Send;

    // Record policy-specific bookkeeping (e.g. transfer volume) after a transaction is executed
    fn record_transaction(
        context: &EvalContext,
        meaning: &Self::Meaning,
        log_id: i32,
        grant: &Grant<Self::Settings>,
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> impl Future<Output = QueryResult<()>> + Send;
}

pub enum ReceiverTarget {
    Specific(Vec<Address>), // only allow transfers to these addresses
    Any,                    // allow transfers to any address
}

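How a `ReceiverTarget`-style allowlist is consulted can be shown in a few lines. A hedged, self-contained sketch under stated assumptions: `[u8; 20]` stands in for alloy's `Address`, and the `Target`/`allows` names are illustrative, not the crate's API.

```rust
// Specific restricts transfers to the listed addresses; Any allows every receiver
enum Target {
    Specific(Vec<[u8; 20]>),
    Any,
}

// Returns true if a transfer to `to` is permitted under this target setting
fn allows(target: &Target, to: &[u8; 20]) -> bool {
    match target {
        Target::Specific(list) => list.contains(to),
        Target::Any => true,
    }
}
```

The same shape appears in the policies below: the ether policy keeps a `Vec<Address>` allowlist, while the token policy uses `Option<Address>` with `None` meaning any receiver.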
// Classification of what a transaction does
#[derive(Debug)]
pub enum SpecificMeaning {
    EtherTransfer(ether_transfer::Meaning),
    TokenTransfer(token_transfers::Meaning),
}

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct TransactionRateLimit {
    pub count: u32,
    pub window: Duration,
}

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct VolumeRateLimit {
    pub max_volume: U256,
    pub window: Duration,
}

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct SharedGrantSettings {
    pub wallet_id: i32,
    pub chain: ChainId,

    pub valid_from: Option<DateTime<Utc>>,
    pub valid_until: Option<DateTime<Utc>>,

    pub max_gas_fee_per_gas: Option<U256>,
    pub max_priority_fee_per_gas: Option<U256>,

    pub rate_limit: Option<TransactionRateLimit>,
}

impl SharedGrantSettings {
    fn try_from_model(model: EvmBasicGrant) -> QueryResult<Self> {
        Ok(Self {
            wallet_id: model.wallet_id,
            // Safe: chain_id is stored as i32 but is guaranteed to be a valid ChainId by the API when creating grants
            chain: model.chain_id as u64,
            valid_from: model.valid_from.map(Into::into),
            valid_until: model.valid_until.map(Into::into),
            max_gas_fee_per_gas: model
                .max_gas_fee_per_gas
                .map(|b| utils::try_bytes_to_u256(&b))
                .transpose()?,
            max_priority_fee_per_gas: model
                .max_priority_fee_per_gas
                .map(|b| utils::try_bytes_to_u256(&b))
                .transpose()?,
            rate_limit: match (model.rate_limit_count, model.rate_limit_window_secs) {
                (Some(count), Some(window_secs)) => Some(TransactionRateLimit {
                    count: count as u32,
                    window: Duration::seconds(window_secs as i64),
                }),
                _ => None,
            },
        })
    }

    pub async fn query_by_id(
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
        id: i32,
    ) -> diesel::result::QueryResult<Self> {
        use crate::db::schema::evm_basic_grant;

        let basic_grant: EvmBasicGrant = evm_basic_grant::table
            .select(EvmBasicGrant::as_select())
            .filter(evm_basic_grant::id.eq(id))
            .first::<EvmBasicGrant>(conn)
            .await?;

        Self::try_from_model(basic_grant)
    }
}

pub enum SpecificGrant {
    EtherTransfer(ether_transfer::Settings),
    TokenTransfer(token_transfers::Settings),
}

/// Blanket conversion from a typed `Grant<S>` into `Grant<SpecificGrant>`.
/// Lets the engine collect across all policies into one `Vec<Grant<SpecificGrant>>`.
impl<S: Into<SpecificGrant>> From<Grant<S>> for Grant<SpecificGrant> {
    fn from(g: Grant<S>) -> Self {
        Grant {
            id: g.id,
            shared_grant_id: g.shared_grant_id,
            shared: g.shared,
            settings: g.settings.into(),
        }
    }
}

pub struct FullGrant<PolicyGrant> {
    pub basic: SharedGrantSettings,
    pub specific: PolicyGrant,
}

@@ -1,338 +0,0 @@
use std::collections::HashMap;
use std::fmt::Display;

use alloy::primitives::{Address, U256};
use chrono::{DateTime, Duration, Utc};
use diesel::dsl::insert_into;
use diesel::sqlite::Sqlite;
use diesel::{ExpressionMethods, JoinOnDsl, prelude::*};
use diesel_async::{AsyncConnection, RunQueryDsl};

use crate::db::models::{
    EvmBasicGrant, EvmEtherTransferGrant, EvmEtherTransferGrantTarget, EvmEtherTransferLimit,
    NewEvmEtherTransferLimit, SqliteTimestamp,
};
use crate::db::schema::{evm_basic_grant, evm_ether_transfer_limit, evm_transaction_log};
use crate::evm::policies::{
    Grant, SharedGrantSettings, SpecificGrant, SpecificMeaning, VolumeRateLimit,
};
use crate::{
    db::{
        models::{self, NewEvmEtherTransferGrant, NewEvmEtherTransferGrantTarget},
        schema::{evm_ether_transfer_grant, evm_ether_transfer_grant_target},
    },
    evm::{policies::Policy, utils},
};

use super::{DatabaseID, EvalContext, EvalViolation};

#[diesel::auto_type]
fn grant_join() -> _ {
    evm_ether_transfer_grant::table.inner_join(
        evm_basic_grant::table
            .on(evm_ether_transfer_grant::basic_grant_id.eq(evm_basic_grant::id)),
    )
}

// Plain ether transfer
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Meaning {
    to: Address,
    value: U256,
}
impl Display for Meaning {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "Ether transfer of {} to {}", self.value, self.to)
    }
}
impl From<Meaning> for SpecificMeaning {
    fn from(meaning: Meaning) -> Self {
        SpecificMeaning::EtherTransfer(meaning)
    }
}

// A grant for ether transfers, which can be scoped to specific target addresses and volume limits
pub struct Settings {
    target: Vec<Address>,
    limit: VolumeRateLimit,
}

impl From<Settings> for SpecificGrant {
    fn from(settings: Settings) -> Self {
        SpecificGrant::EtherTransfer(settings)
    }
}

async fn query_relevant_past_transaction(
    grant_id: i32,
    longest_window: Duration,
    db: &mut impl AsyncConnection<Backend = Sqlite>,
) -> QueryResult<Vec<(U256, DateTime<Utc>)>> {
    let past_transactions: Vec<(Vec<u8>, SqliteTimestamp)> = evm_transaction_log::table
        .filter(evm_transaction_log::grant_id.eq(grant_id))
        .filter(
            evm_transaction_log::signed_at.ge(SqliteTimestamp(chrono::Utc::now() - longest_window)),
        )
        .select((
            evm_transaction_log::eth_value,
            evm_transaction_log::signed_at,
        ))
        .load(db)
        .await?;
    let past_transactions: Vec<(U256, DateTime<Utc>)> = past_transactions
        .into_iter()
        .filter_map(|(value_bytes, timestamp)| {
            let value = utils::bytes_to_u256(&value_bytes)?;
            Some((value, timestamp.0))
        })
        .collect();
    Ok(past_transactions)
}

async fn check_rate_limits(
    grant: &Grant<Settings>,
    db: &mut impl AsyncConnection<Backend = Sqlite>,
) -> QueryResult<Vec<EvalViolation>> {
    let mut violations = Vec::new();
    let window = grant.settings.limit.window;

    // evm_transaction_log rows are keyed by the basic (shared) grant ID, not the policy-specific one
    let past_transactions =
        query_relevant_past_transaction(grant.shared_grant_id, window, db).await?;

    let window_start = chrono::Utc::now() - grant.settings.limit.window;
    let cumulative_volume: U256 = past_transactions
        .iter()
        .filter(|(_, timestamp)| timestamp >= &window_start)
        .fold(U256::default(), |acc, (value, _)| acc + *value);

    if cumulative_volume > grant.settings.limit.max_volume {
        violations.push(EvalViolation::VolumetricLimitExceeded);
    }

    Ok(violations)
}

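The sliding-window volume check above reduces to: sum past transfer values whose timestamps fall inside the window, then compare against the cap. A hedged, self-contained sketch, not the crate's API: `u128` replaces `U256` and plain second timestamps replace `chrono` datetimes.

```rust
// Sum the values of past transfers signed at or after `now - window`
fn cumulative_volume(past: &[(u128, u64)], now_secs: u64, window_secs: u64) -> u128 {
    let window_start = now_secs.saturating_sub(window_secs);
    past.iter()
        .filter(|(_, t)| *t >= window_start)
        .map(|(v, _)| *v)
        .sum()
}

// A strict `>` comparison, mirroring `cumulative_volume > max_volume` above
fn exceeds_volume_limit(
    past: &[(u128, u64)],
    now_secs: u64,
    window_secs: u64,
    max_volume: u128,
) -> bool {
    cumulative_volume(past, now_secs, window_secs) > max_volume
}
```

As in the real check, only already-recorded transfers count toward the total; the value of the transfer being vetted is not added before comparing.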
pub struct EtherTransfer;
impl Policy for EtherTransfer {
    type Settings = Settings;

    type Meaning = Meaning;

    fn analyze(context: &EvalContext) -> Option<Self::Meaning> {
        if !context.calldata.is_empty() {
            return None;
        }

        Some(Meaning {
            to: context.to,
            value: context.value,
        })
    }

    async fn evaluate(
        _: &EvalContext,
        meaning: &Self::Meaning,
        grant: &Grant<Self::Settings>,
        db: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> QueryResult<Vec<EvalViolation>> {
        let mut violations = Vec::new();

        // Check if the target address is within the grant's allowed targets
        if !grant.settings.target.contains(&meaning.to) {
            violations.push(EvalViolation::InvalidTarget { target: meaning.to });
        }

        let rate_violations = check_rate_limits(grant, db).await?;
        violations.extend(rate_violations);

        Ok(violations)
    }

    async fn create_grant(
        basic: &models::EvmBasicGrant,
        grant: &Self::Settings,
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> diesel::result::QueryResult<DatabaseID> {
        let limit_id: i32 = insert_into(evm_ether_transfer_limit::table)
            .values(NewEvmEtherTransferLimit {
                window_secs: grant.limit.window.num_seconds() as i32,
                max_volume: utils::u256_to_bytes(grant.limit.max_volume).to_vec(),
            })
            .returning(evm_ether_transfer_limit::id)
            .get_result(conn)
            .await?;

        let grant_id: i32 = insert_into(evm_ether_transfer_grant::table)
            .values(&NewEvmEtherTransferGrant {
                basic_grant_id: basic.id,
                limit_id,
            })
            .returning(evm_ether_transfer_grant::id)
            .get_result(conn)
            .await?;

        for target in &grant.target {
            insert_into(evm_ether_transfer_grant_target::table)
                .values(NewEvmEtherTransferGrantTarget {
                    grant_id,
                    address: target.to_vec(),
                })
                .execute(conn)
                .await?;
        }

        Ok(grant_id)
    }

    async fn try_find_grant(
        context: &EvalContext,
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> diesel::result::QueryResult<Option<Grant<Self::Settings>>> {
        let target_bytes = context.to.to_vec();

        // Find a grant where:
        // 1. The basic grant's wallet_id and client_id match the context
        // 2. Any of the grant's targets match the context's `to` address
        let grant: Option<(EvmBasicGrant, EvmEtherTransferGrant)> = grant_join()
            .filter(evm_basic_grant::wallet_id.eq(context.wallet_id))
            .filter(evm_basic_grant::client_id.eq(context.client_id))
            .filter(evm_ether_transfer_grant_target::address.eq(&target_bytes))
            .filter(evm_basic_grant::revoked_at.is_null())
            .select((
                EvmBasicGrant::as_select(),
                EvmEtherTransferGrant::as_select(),
            ))
            .first(conn)
            .await
            .optional()?;

        let Some((basic_grant, grant)) = grant else {
            return Ok(None);
        };

        let target_rows: Vec<EvmEtherTransferGrantTarget> = evm_ether_transfer_grant_target::table
            .select(EvmEtherTransferGrantTarget::as_select())
            .filter(evm_ether_transfer_grant_target::grant_id.eq(grant.id))
            .load(conn)
            .await?;

        let limit: EvmEtherTransferLimit = evm_ether_transfer_limit::table
            .filter(evm_ether_transfer_limit::id.eq(grant.limit_id))
            .select(EvmEtherTransferLimit::as_select())
            .first::<EvmEtherTransferLimit>(conn)
            .await?;

        // Convert bytes back to Address
        let targets: Vec<Address> = target_rows
            .into_iter()
            .filter_map(|target| {
                // TODO: Handle invalid addresses more gracefully
                let arr: [u8; 20] = target.address.try_into().ok()?;
                Some(Address::from(arr))
            })
            .collect();

        let settings = Settings {
            target: targets,
            limit: VolumeRateLimit {
                max_volume: utils::try_bytes_to_u256(&limit.max_volume)
                    .map_err(|err| diesel::result::Error::DeserializationError(Box::new(err)))?,
                window: chrono::Duration::seconds(limit.window_secs as i64),
            },
        };

        Ok(Some(Grant {
            id: grant.id,
            shared_grant_id: grant.basic_grant_id,
            shared: SharedGrantSettings::try_from_model(basic_grant)?,
            settings,
        }))
    }

    async fn record_transaction(
        _context: &EvalContext,
        _: &Self::Meaning,
        _log_id: i32,
        _grant: &Grant<Self::Settings>,
        _conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> diesel::result::QueryResult<()> {
        // Basic log is sufficient
        Ok(())
    }

    async fn find_all_grants(
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> QueryResult<Vec<Grant<Self::Settings>>> {
        let grants: Vec<(EvmBasicGrant, EvmEtherTransferGrant)> = grant_join()
            .filter(evm_basic_grant::revoked_at.is_null())
            .select((EvmBasicGrant::as_select(), EvmEtherTransferGrant::as_select()))
            .load(conn)
            .await?;

        if grants.is_empty() {
            return Ok(Vec::new());
        }

        let grant_ids: Vec<i32> = grants.iter().map(|(_, g)| g.id).collect();
        let limit_ids: Vec<i32> = grants.iter().map(|(_, g)| g.limit_id).collect();

        let all_targets: Vec<EvmEtherTransferGrantTarget> = evm_ether_transfer_grant_target::table
            .filter(evm_ether_transfer_grant_target::grant_id.eq_any(&grant_ids))
            .select(EvmEtherTransferGrantTarget::as_select())
            .load(conn)
            .await?;

        let all_limits: Vec<EvmEtherTransferLimit> = evm_ether_transfer_limit::table
            .filter(evm_ether_transfer_limit::id.eq_any(&limit_ids))
            .select(EvmEtherTransferLimit::as_select())
            .load(conn)
            .await?;

        let mut targets_by_grant: HashMap<i32, Vec<EvmEtherTransferGrantTarget>> = HashMap::new();
        for target in all_targets {
            targets_by_grant.entry(target.grant_id).or_default().push(target);
        }

        let limits_by_id: HashMap<i32, EvmEtherTransferLimit> =
            all_limits.into_iter().map(|l| (l.id, l)).collect();

        grants
            .into_iter()
            .map(|(basic, specific)| {
                let targets: Vec<Address> = targets_by_grant
                    .get(&specific.id)
                    .map(|v| v.as_slice())
                    .unwrap_or_default()
                    .iter()
                    .filter_map(|t| {
                        let arr: [u8; 20] = t.address.clone().try_into().ok()?;
                        Some(Address::from(arr))
                    })
                    .collect();

                let limit = limits_by_id
                    .get(&specific.limit_id)
                    .ok_or(diesel::result::Error::NotFound)?;

                Ok(Grant {
                    id: specific.id,
                    shared_grant_id: specific.basic_grant_id,
                    shared: SharedGrantSettings::try_from_model(basic)?,
                    settings: Settings {
                        target: targets,
                        limit: VolumeRateLimit {
                            max_volume: utils::try_bytes_to_u256(&limit.max_volume)
                                .map_err(|e| diesel::result::Error::DeserializationError(Box::new(e)))?,
                            window: Duration::seconds(limit.window_secs as i64),
                        },
                    },
                })
            })
            .collect()
    }
}

@@ -1,382 +0,0 @@
use std::collections::HashMap;

use alloy::{
    primitives::{Address, U256},
    sol_types::SolCall,
};
use arbiter_tokens_registry::evm::nonfungible::{self, TokenInfo};
use chrono::{DateTime, Duration, Utc};
use diesel::dsl::insert_into;
use diesel::sqlite::Sqlite;
use diesel::{ExpressionMethods, prelude::*};
use diesel_async::{AsyncConnection, RunQueryDsl};

use crate::db::models::{
    EvmBasicGrant, EvmTokenTransferGrant, EvmTokenTransferVolumeLimit, NewEvmTokenTransferGrant,
    NewEvmTokenTransferLog, NewEvmTokenTransferVolumeLimit, SqliteTimestamp,
};
use crate::db::schema::{
    evm_basic_grant, evm_token_transfer_grant, evm_token_transfer_log,
    evm_token_transfer_volume_limit,
};
use crate::evm::{
    abi::IERC20::transferCall,
    policies::{
        Grant, Policy, SharedGrantSettings, SpecificGrant, SpecificMeaning, VolumeRateLimit,
    },
    utils,
};

use super::{DatabaseID, EvalContext, EvalViolation};

#[diesel::auto_type]
fn grant_join() -> _ {
    evm_token_transfer_grant::table.inner_join(
        evm_basic_grant::table.on(evm_token_transfer_grant::basic_grant_id.eq(evm_basic_grant::id)),
    )
}

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Meaning {
    token: &'static TokenInfo,
    to: Address,
    value: U256,
}
impl std::fmt::Display for Meaning {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(
            f,
            "Transfer of {} {} to {}",
            self.value, self.token.symbol, self.to
        )
    }
}
impl From<Meaning> for SpecificMeaning {
    fn from(meaning: Meaning) -> Self {
        SpecificMeaning::TokenTransfer(meaning)
    }
}

// A grant for token transfers, which can be scoped to a specific target address and volume limits
pub struct Settings {
    token_contract: Address,
    target: Option<Address>,
    volume_limits: Vec<VolumeRateLimit>,
}
impl From<Settings> for SpecificGrant {
    fn from(settings: Settings) -> Self {
        SpecificGrant::TokenTransfer(settings)
    }
}

async fn query_relevant_past_transfers(
    grant_id: i32,
    longest_window: Duration,
    db: &mut impl AsyncConnection<Backend = Sqlite>,
) -> QueryResult<Vec<(U256, DateTime<Utc>)>> {
    let past_logs: Vec<(Vec<u8>, SqliteTimestamp)> = evm_token_transfer_log::table
        .filter(evm_token_transfer_log::grant_id.eq(grant_id))
        .filter(
            evm_token_transfer_log::created_at
                .ge(SqliteTimestamp(chrono::Utc::now() - longest_window)),
        )
        .select((
            evm_token_transfer_log::value,
            evm_token_transfer_log::created_at,
        ))
        .load(db)
        .await?;

    let past_transfers: Vec<(U256, DateTime<Utc>)> = past_logs
        .into_iter()
        .filter_map(|(value_bytes, timestamp)| {
            let value = utils::bytes_to_u256(&value_bytes)?;
            Some((value, timestamp.0))
        })
        .collect();

    Ok(past_transfers)
}

async fn check_volume_rate_limits(
    grant: &Grant<Settings>,
    db: &mut impl AsyncConnection<Backend = Sqlite>,
) -> QueryResult<Vec<EvalViolation>> {
    let mut violations = Vec::new();

    let Some(longest_window) = grant.settings.volume_limits.iter().map(|l| l.window).max() else {
        return Ok(violations);
    };

    let past_transfers = query_relevant_past_transfers(grant.id, longest_window, db).await?;

    for limit in &grant.settings.volume_limits {
        let window_start = chrono::Utc::now() - limit.window;
        let cumulative_volume: U256 = past_transfers
            .iter()
            .filter(|(_, timestamp)| timestamp >= &window_start)
            .fold(U256::default(), |acc, (value, _)| acc + *value);

        if cumulative_volume > limit.max_volume {
            violations.push(EvalViolation::VolumetricLimitExceeded);
            break;
        }
    }

    Ok(violations)
}

pub struct TokenTransfer;
impl Policy for TokenTransfer {
    type Settings = Settings;
    type Meaning = Meaning;

    fn analyze(context: &EvalContext) -> Option<Self::Meaning> {
        let token = nonfungible::get_token(context.chain, context.to)?;
        let decoded = transferCall::abi_decode_raw_validate(&context.calldata).ok()?;

        Some(Meaning {
            token,
            to: decoded.to,
            value: decoded.value,
        })
    }

    async fn evaluate(
        context: &EvalContext,
        meaning: &Self::Meaning,
        grant: &Grant<Self::Settings>,
        db: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> QueryResult<Vec<EvalViolation>> {
        let mut violations = Vec::new();

        // An ERC-20 transfer shouldn't carry any ETH value
        if !context.value.is_zero() {
            violations.push(EvalViolation::InvalidTransactionType);
            return Ok(violations);
        }

        if let Some(allowed) = grant.settings.target {
            if allowed != meaning.to {
                violations.push(EvalViolation::InvalidTarget { target: meaning.to });
            }
        }

        let rate_violations = check_volume_rate_limits(grant, db).await?;
        violations.extend(rate_violations);

        Ok(violations)
    }

    async fn create_grant(
        basic: &EvmBasicGrant,
        grant: &Self::Settings,
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> QueryResult<DatabaseID> {
        // Store the specific receiver as bytes (None means any receiver is allowed)
        let receiver: Option<Vec<u8>> = grant.target.map(|addr| addr.to_vec());

        let grant_id: i32 = insert_into(evm_token_transfer_grant::table)
            .values(NewEvmTokenTransferGrant {
                basic_grant_id: basic.id,
                token_contract: grant.token_contract.to_vec(),
                receiver,
            })
            .returning(evm_token_transfer_grant::id)
            .get_result(conn)
            .await?;

        for limit in &grant.volume_limits {
            insert_into(evm_token_transfer_volume_limit::table)
                .values(NewEvmTokenTransferVolumeLimit {
                    grant_id,
                    window_secs: limit.window.num_seconds() as i32,
                    max_volume: utils::u256_to_bytes(limit.max_volume).to_vec(),
                })
                .execute(conn)
                .await?;
        }

        Ok(grant_id)
    }

    async fn try_find_grant(
        context: &EvalContext,
        conn: &mut impl AsyncConnection<Backend = Sqlite>,
    ) -> QueryResult<Option<Grant<Self::Settings>>> {
        let token_contract_bytes = context.to.to_vec();

        let grant: Option<(EvmBasicGrant, EvmTokenTransferGrant)> = grant_join()
            .filter(evm_basic_grant::revoked_at.is_null())
            .filter(evm_basic_grant::wallet_id.eq(context.wallet_id))
            .filter(evm_basic_grant::client_id.eq(context.client_id))
            .filter(evm_token_transfer_grant::token_contract.eq(&token_contract_bytes))
            .select((
                EvmBasicGrant::as_select(),
                EvmTokenTransferGrant::as_select(),
            ))
            .first(conn)
            .await
            .optional()?;

        let Some((basic_grant, token_grant)) = grant else {
            return Ok(None);
        };

        let volume_limits_db: Vec<EvmTokenTransferVolumeLimit> =
            evm_token_transfer_volume_limit::table
                .filter(evm_token_transfer_volume_limit::grant_id.eq(token_grant.id))
                .select(EvmTokenTransferVolumeLimit::as_select())
                .load(conn)
                .await?;

        let volume_limits: Vec<VolumeRateLimit> = volume_limits_db
            .into_iter()
            .map(|row| {
                Ok(VolumeRateLimit {
                    max_volume: utils::try_bytes_to_u256(&row.max_volume).map_err(|err| {
                        diesel::result::Error::DeserializationError(Box::new(err))
                    })?,
                    window: Duration::seconds(row.window_secs as i64),
                })
            })
            .collect::<QueryResult<Vec<_>>>()?;

        let token_contract: [u8; 20] = token_grant.token_contract.try_into().map_err(|_| {
            diesel::result::Error::DeserializationError(
                "Invalid token contract address length".into(),
            )
        })?;

        let target: Option<Address> = match token_grant.receiver {
            None => None,
            Some(bytes) => {
                let arr: [u8; 20] = bytes.try_into().map_err(|_| {
                    diesel::result::Error::DeserializationError(
                        "Invalid receiver address length".into(),
                    )
                })?;
                Some(Address::from(arr))
            }
        };

        let settings = Settings {
            token_contract: Address::from(token_contract),
            target,
            volume_limits,
        };

        Ok(Some(Grant {
            id: token_grant.id,
            shared_grant_id: token_grant.basic_grant_id,
            shared: SharedGrantSettings::try_from_model(basic_grant)?,
            settings,
        }))
    }

async fn record_transaction(
|
||||
context: &EvalContext,
|
||||
meaning: &Self::Meaning,
|
||||
log_id: i32,
|
||||
grant: &Grant<Self::Settings>,
|
||||
conn: &mut impl AsyncConnection<Backend = Sqlite>,
|
||||
) -> QueryResult<()> {
|
||||
insert_into(evm_token_transfer_log::table)
|
||||
.values(NewEvmTokenTransferLog {
|
||||
grant_id: grant.id,
|
||||
log_id,
|
||||
chain_id: context.chain as i32,
|
||||
token_contract: context.to.to_vec(),
|
||||
recipient_address: meaning.to.to_vec(),
|
||||
value: utils::u256_to_bytes(meaning.value).to_vec(),
|
||||
})
|
||||
.execute(conn)
|
||||
.await?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn find_all_grants(
|
||||
conn: &mut impl AsyncConnection<Backend = Sqlite>,
|
||||
) -> QueryResult<Vec<Grant<Self::Settings>>> {
|
||||
let grants: Vec<(EvmBasicGrant, EvmTokenTransferGrant)> = grant_join()
|
||||
.filter(evm_basic_grant::revoked_at.is_null())
|
||||
.select((
|
||||
EvmBasicGrant::as_select(),
|
||||
EvmTokenTransferGrant::as_select(),
|
||||
))
|
||||
.load(conn)
|
||||
.await?;
|
||||
|
||||
if grants.is_empty() {
|
||||
return Ok(Vec::new());
|
||||
}
|
||||
|
||||
let grant_ids: Vec<i32> = grants.iter().map(|(_, g)| g.id).collect();
|
||||
|
||||
let all_volume_limits: Vec<EvmTokenTransferVolumeLimit> =
|
||||
evm_token_transfer_volume_limit::table
|
||||
.filter(evm_token_transfer_volume_limit::grant_id.eq_any(&grant_ids))
|
||||
.select(EvmTokenTransferVolumeLimit::as_select())
|
||||
.load(conn)
|
||||
.await?;
|
||||
|
||||
let mut limits_by_grant: HashMap<i32, Vec<EvmTokenTransferVolumeLimit>> = HashMap::new();
|
||||
for limit in all_volume_limits {
|
||||
limits_by_grant
|
||||
.entry(limit.grant_id)
|
||||
.or_default()
|
||||
.push(limit);
|
||||
}
|
||||
|
||||
grants
|
||||
.into_iter()
|
||||
.map(|(basic, specific)| {
|
||||
let volume_limits: Vec<VolumeRateLimit> = limits_by_grant
|
||||
.get(&specific.id)
|
||||
.map(|v| v.as_slice())
|
||||
.unwrap_or_default()
|
||||
.iter()
|
||||
.map(|row| {
|
||||
Ok(VolumeRateLimit {
|
||||
max_volume: utils::try_bytes_to_u256(&row.max_volume).map_err(|e| {
|
||||
diesel::result::Error::DeserializationError(Box::new(e))
|
||||
})?,
|
||||
window: Duration::seconds(row.window_secs as i64),
|
||||
})
|
||||
})
|
||||
.collect::<QueryResult<Vec<_>>>()?;
|
||||
|
||||
let token_contract: [u8; 20] =
|
||||
specific.token_contract.clone().try_into().map_err(|_| {
|
||||
diesel::result::Error::DeserializationError(
|
||||
"Invalid token contract address length".into(),
|
||||
)
|
||||
})?;
|
||||
|
||||
let target: Option<Address> = match &specific.receiver {
|
||||
None => None,
|
||||
Some(bytes) => {
|
||||
let arr: [u8; 20] = bytes.clone().try_into().map_err(|_| {
|
||||
diesel::result::Error::DeserializationError(
|
||||
"Invalid receiver address length".into(),
|
||||
)
|
||||
})?;
|
||||
Some(Address::from(arr))
|
||||
}
|
||||
};
|
||||
|
||||
Ok(Grant {
|
||||
id: specific.id,
|
||||
shared_grant_id: specific.basic_grant_id,
|
||||
shared: SharedGrantSettings::try_from_model(basic)?,
|
||||
settings: Settings {
|
||||
token_contract: Address::from(token_contract),
|
||||
target,
|
||||
volume_limits,
|
||||
},
|
||||
})
|
||||
})
|
||||
.collect()
|
||||
}
|
||||
}
|
||||
@@ -1,196 +0,0 @@
use std::sync::Mutex;

use alloy::{
    consensus::SignableTransaction,
    network::{TxSigner, TxSignerSync},
    primitives::{Address, ChainId, Signature, B256},
    signers::{Error, Result, Signer, SignerSync, utils::secret_key_to_address},
};
use async_trait::async_trait;
use k256::ecdsa::{self, signature::hazmat::PrehashSigner, RecoveryId, SigningKey};
use memsafe::MemSafe;

/// An Ethereum signer that stores its secp256k1 secret key inside a
/// hardware-protected [`MemSafe`] cell.
///
/// The underlying memory page is kept non-readable/non-writable at rest.
/// Access is temporarily elevated only for the duration of each signing
/// operation, then immediately revoked.
///
/// Because [`MemSafe::read`] requires `&mut self` while the [`Signer`] trait
/// requires `&self`, the cell is wrapped in a [`Mutex`].
pub struct SafeSigner {
    key: Mutex<MemSafe<SigningKey>>,
    address: Address,
    chain_id: Option<ChainId>,
}

impl std::fmt::Debug for SafeSigner {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_struct("SafeSigner")
            .field("address", &self.address)
            .field("chain_id", &self.chain_id)
            .finish()
    }
}

/// Generates a secp256k1 secret key directly inside a [`MemSafe`] cell.
///
/// Random bytes are written in-place into protected memory, then validated
/// as a legal scalar on the secp256k1 curve (the scalar must be in
/// `[1, n)` where `n` is the curve order — roughly 1-in-2^128 chance of
/// rejection, but we retry to be correct).
///
/// Returns the protected key bytes and the derived Ethereum address.
pub fn generate(rng: &mut impl rand::Rng) -> (MemSafe<[u8; 32]>, Address) {
    loop {
        let mut cell = MemSafe::new([0u8; 32]).expect("MemSafe allocation");
        {
            let mut w = cell.write().expect("MemSafe write");
            rng.fill_bytes(w.as_mut());
        }
        let reader = cell.read().expect("MemSafe read");
        if let Ok(sk) = SigningKey::from_slice(reader.as_ref()) {
            let address = secret_key_to_address(&sk);
            drop(reader);
            return (cell, address);
        }
    }
}

impl SafeSigner {
    /// Reconstructs a `SafeSigner` from key material held in a [`MemSafe`] buffer.
    ///
    /// The key bytes are read from protected memory, parsed as a secp256k1
    /// scalar, and immediately moved into a new [`MemSafe`] cell. The raw
    /// bytes are never exposed outside this function.
    pub fn from_memsafe(mut cell: MemSafe<Vec<u8>>) -> Result<Self> {
        let reader = cell.read().map_err(Error::other)?;
        let sk = SigningKey::from_slice(reader.as_slice()).map_err(Error::other)?;
        drop(reader);
        Self::new(sk)
    }

    /// Creates a new `SafeSigner` by moving the signing key into a protected
    /// memory region.
    pub fn new(key: SigningKey) -> Result<Self> {
        let address = secret_key_to_address(&key);
        let cell = MemSafe::new(key).map_err(Error::other)?;
        Ok(Self {
            key: Mutex::new(cell),
            address,
            chain_id: None,
        })
    }

    fn sign_hash_inner(&self, hash: &B256) -> Result<Signature> {
        let mut cell = self.key.lock().expect("SafeSigner mutex poisoned");
        let reader = cell.read().map_err(Error::other)?;
        let sig: (ecdsa::Signature, RecoveryId) = reader.sign_prehash(hash.as_ref())?;
        Ok(sig.into())
    }

    fn sign_tx_inner(
        &self,
        tx: &mut dyn SignableTransaction<Signature>,
    ) -> Result<Signature> {
        if let Some(chain_id) = self.chain_id {
            if !tx.set_chain_id_checked(chain_id) {
                return Err(Error::TransactionChainIdMismatch {
                    signer: chain_id,
                    tx: tx.chain_id().unwrap(),
                });
            }
        }
        self.sign_hash_inner(&tx.signature_hash()).map_err(Error::other)
    }
}

#[async_trait]
impl Signer for SafeSigner {
    #[inline]
    async fn sign_hash(&self, hash: &B256) -> Result<Signature> {
        self.sign_hash_inner(hash)
    }

    #[inline]
    fn address(&self) -> Address {
        self.address
    }

    #[inline]
    fn chain_id(&self) -> Option<ChainId> {
        self.chain_id
    }

    #[inline]
    fn set_chain_id(&mut self, chain_id: Option<ChainId>) {
        self.chain_id = chain_id;
    }
}

impl SignerSync for SafeSigner {
    #[inline]
    fn sign_hash_sync(&self, hash: &B256) -> Result<Signature> {
        self.sign_hash_inner(hash)
    }

    #[inline]
    fn chain_id_sync(&self) -> Option<ChainId> {
        self.chain_id
    }
}

#[async_trait]
impl TxSigner<Signature> for SafeSigner {
    fn address(&self) -> Address {
        self.address
    }

    async fn sign_transaction(
        &self,
        tx: &mut dyn SignableTransaction<Signature>,
    ) -> Result<Signature> {
        self.sign_tx_inner(tx)
    }
}

impl TxSignerSync<Signature> for SafeSigner {
    fn address(&self) -> Address {
        self.address
    }

    fn sign_transaction_sync(
        &self,
        tx: &mut dyn SignableTransaction<Signature>,
    ) -> Result<Signature> {
        self.sign_tx_inner(tx)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use alloy::signers::local::PrivateKeySigner;

    #[test]
    fn sign_and_recover() {
        let pk = PrivateKeySigner::random();
        let key = pk.into_credential();
        let signer = SafeSigner::new(key).unwrap();
        let message = b"hello arbiter";
        let sig = signer.sign_message_sync(message).unwrap();
        let recovered = sig.recover_address_from_msg(message).unwrap();
        assert_eq!(recovered, Signer::address(&signer));
    }

    #[test]
    fn chain_id_roundtrip() {
        let pk = PrivateKeySigner::random();
        let key = pk.into_credential();
        let mut signer = SafeSigner::new(key).unwrap();
        assert_eq!(Signer::chain_id(&signer), None);
        signer.set_chain_id(Some(1337));
        assert_eq!(Signer::chain_id(&signer), Some(1337));
    }
}
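The rejection-sampling loop in `generate` above can be sketched in isolation. This is a stdlib-only toy, not the real implementation: a hypothetical small group order and a deterministic draw sequence stand in for secp256k1 and a real RNG.

```rust
// Sketch of rejection sampling for key generation: draw a candidate,
// accept only if it is a valid scalar (non-zero and below the group
// order), otherwise retry. ORDER here is a made-up stand-in value.
const ORDER: u64 = 0xFFFF_FFFF_0000_0001; // hypothetical group order

fn is_valid_scalar(candidate: u64) -> bool {
    candidate != 0 && candidate < ORDER
}

fn generate_scalar(mut next_random: impl FnMut() -> u64) -> u64 {
    loop {
        let candidate = next_random();
        if is_valid_scalar(candidate) {
            return candidate; // accepted: uniform over [1, ORDER)
        }
        // rejected: candidate was 0 or >= ORDER, draw again
    }
}

fn main() {
    // A deterministic sequence that forces two rejections first.
    let mut seq = [0u64, u64::MAX, 12345].into_iter();
    let scalar = generate_scalar(move || seq.next().unwrap());
    assert_eq!(scalar, 12345);
    println!("accepted scalar: {scalar}");
}
```

The same shape appears in `generate`: the loop body fills protected memory with random bytes and lets `SigningKey::from_slice` act as the validity check.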
@@ -1,26 +0,0 @@
use alloy::primitives::U256;

#[derive(thiserror::Error, Debug)]
#[error("Expected {expected} bytes but got {actual} bytes")]
pub struct LengthError {
    pub expected: usize,
    pub actual: usize,
}

pub fn u256_to_bytes(value: U256) -> [u8; 32] {
    value.to_le_bytes()
}

pub fn bytes_to_u256(bytes: &[u8]) -> Option<U256> {
    let bytes: [u8; 32] = bytes.try_into().ok()?;
    Some(U256::from_le_bytes(bytes))
}

pub fn try_bytes_to_u256(bytes: &[u8]) -> diesel::result::QueryResult<U256> {
    let bytes: [u8; 32] = bytes.try_into().map_err(|_| {
        diesel::result::Error::DeserializationError(Box::new(LengthError {
            expected: 32,
            actual: bytes.len(),
        }))
    })?;
    Ok(U256::from_le_bytes(bytes))
}
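These helpers fix a 32-byte little-endian wire format with fallible decoding. The same encode/decode contract can be shown with stdlib types only; this sketch uses `u128` as a stand-in, since `U256` comes from the alloy crate.

```rust
// Sketch of the fixed-width little-endian round-trip contract behind
// `u256_to_bytes` / `bytes_to_u256`, with u128 standing in for U256.
fn u128_to_bytes(value: u128) -> [u8; 16] {
    value.to_le_bytes()
}

fn bytes_to_u128(bytes: &[u8]) -> Option<u128> {
    // Decoding is fallible: the slice must be exactly 16 bytes.
    let bytes: [u8; 16] = bytes.try_into().ok()?;
    Some(u128::from_le_bytes(bytes))
}

fn main() {
    let v: u128 = 0x0102_0304;
    let encoded = u128_to_bytes(v);
    assert_eq!(bytes_to_u128(&encoded), Some(v));
    // Wrong-length input is rejected rather than panicking.
    assert_eq!(bytes_to_u128(&encoded[..8]), None);
    println!("round-trip ok");
}
```

`try_bytes_to_u256` wraps the same length check, turning the failure into a diesel `DeserializationError` carrying `LengthError`.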
@@ -1,22 +1,19 @@
#![forbid(unsafe_code)]
use arbiter_proto::{
    proto::{
        client::{ClientRequest, ClientResponse},
        user_agent::{UserAgentRequest, UserAgentResponse},
    },
    transport::{IdentityRecvConverter, SendConverter, grpc},
    proto::{ClientRequest, ClientResponse, UserAgentRequest, UserAgentResponse},
    transport::{BiStream, GrpcTransportActor, wire},
};
use async_trait::async_trait;
use kameo::actor::PreparedActor;
use tokio_stream::wrappers::ReceiverStream;

use tokio::sync::mpsc;
use tonic::{Request, Response, Status};
use tracing::info;

use crate::{
    actors::{
        client::{self, ClientError, ClientConnection as ClientConnectionProps, connect_client},
        user_agent::{self, UserAgentConnection, UserAgentError, connect_user_agent},
        client::handle_client,
        user_agent::UserAgentActor,
    },
    context::ServerContext,
};
@@ -24,112 +21,9 @@ use crate::{
pub mod actors;
pub mod context;
pub mod db;
pub mod evm;

const DEFAULT_CHANNEL_SIZE: usize = 1000;

struct UserAgentGrpcSender;

impl SendConverter for UserAgentGrpcSender {
    type Input = Result<UserAgentResponse, UserAgentError>;
    type Output = Result<UserAgentResponse, Status>;

    fn convert(&self, item: Self::Input) -> Self::Output {
        match item {
            Ok(message) => Ok(message),
            Err(err) => Err(user_agent_error_status(err)),
        }
    }
}

struct ClientGrpcSender;

impl SendConverter for ClientGrpcSender {
    type Input = Result<ClientResponse, ClientError>;
    type Output = Result<ClientResponse, Status>;

    fn convert(&self, item: Self::Input) -> Self::Output {
        match item {
            Ok(message) => Ok(message),
            Err(err) => Err(client_error_status(err)),
        }
    }
}

fn client_error_status(value: ClientError) -> Status {
    match value {
        ClientError::MissingRequestPayload | ClientError::UnexpectedRequestPayload => {
            Status::invalid_argument("Expected message with payload")
        }
        ClientError::StateTransitionFailed => Status::internal("State machine error"),
        ClientError::Auth(ref err) => client_auth_error_status(err),
        ClientError::ConnectionRegistrationFailed => {
            Status::internal("Connection registration failed")
        }
    }
}

fn client_auth_error_status(value: &client::auth::Error) -> Status {
    use client::auth::Error;
    match value {
        Error::UnexpectedMessagePayload | Error::InvalidClientPubkeyLength => {
            Status::invalid_argument(value.to_string())
        }
        Error::InvalidAuthPubkeyEncoding => {
            Status::invalid_argument("Failed to convert pubkey to VerifyingKey")
        }
        Error::InvalidSignatureLength => Status::invalid_argument("Invalid signature length"),
        Error::PublicKeyNotRegistered | Error::InvalidChallengeSolution => {
            Status::unauthenticated(value.to_string())
        }
        Error::Transport => Status::internal("Transport error"),
        Error::DatabasePoolUnavailable => Status::internal("Database pool error"),
        Error::DatabaseOperationFailed => Status::internal("Database error"),
    }
}

fn user_agent_error_status(value: UserAgentError) -> Status {
    match value {
        UserAgentError::MissingRequestPayload | UserAgentError::UnexpectedRequestPayload => {
            Status::invalid_argument("Expected message with payload")
        }
        UserAgentError::InvalidStateForUnsealEncryptedKey => {
            Status::failed_precondition("Invalid state for unseal encrypted key")
        }
        UserAgentError::InvalidClientPubkeyLength => {
            Status::invalid_argument("client_pubkey must be 32 bytes")
        }
        UserAgentError::StateTransitionFailed => Status::internal("State machine error"),
        UserAgentError::KeyHolderActorUnreachable => Status::internal("Vault is not available"),
        UserAgentError::Auth(ref err) => auth_error_status(err),
        UserAgentError::ConnectionRegistrationFailed => {
            Status::internal("Failed registering connection")
        }
    }
}

fn auth_error_status(value: &user_agent::auth::Error) -> Status {
    use user_agent::auth::Error;
    match value {
        Error::UnexpectedMessagePayload | Error::InvalidClientPubkeyLength => {
            Status::invalid_argument(value.to_string())
        }
        Error::InvalidAuthPubkeyEncoding => {
            Status::invalid_argument("Failed to convert pubkey to VerifyingKey")
        }
        Error::PublicKeyNotRegistered | Error::InvalidChallengeSolution => {
            Status::unauthenticated(value.to_string())
        }
        Error::InvalidBootstrapToken => Status::invalid_argument("Invalid bootstrap token"),
        Error::Transport => Status::internal("Transport error"),
        Error::BootstrapperActorUnreachable => {
            Status::internal("Bootstrap token consumption failed")
        }
        Error::DatabasePoolUnavailable => Status::internal("Database pool error"),
        Error::DatabaseOperationFailed => Status::internal("Database error"),
    }
}

pub struct Server {
    context: ServerContext,
}
@@ -145,54 +39,44 @@ impl arbiter_proto::proto::arbiter_service_server::ArbiterService for Server {
    type UserAgentStream = ReceiverStream<Result<UserAgentResponse, Status>>;
    type ClientStream = ReceiverStream<Result<ClientResponse, Status>>;

    #[tracing::instrument(level = "debug", skip(self))]
    async fn client(
        &self,
        request: Request<tonic::Streaming<ClientRequest>>,
    ) -> Result<Response<Self::ClientStream>, Status> {
        let req_stream = request.into_inner();
        let (tx, rx) = mpsc::channel(DEFAULT_CHANNEL_SIZE);

        let transport = grpc::GrpcAdapter::new(
            tx,
            req_stream,
            IdentityRecvConverter::<ClientRequest>::new(),
            ClientGrpcSender,
        );
        let props = ClientConnectionProps::new(
            self.context.db.clone(),
            Box::new(transport),
            self.context.actors.clone(),
        );
        tokio::spawn(connect_client(props));

        info!(event = "connection established", "grpc.client");
        tokio::spawn(handle_client(
            self.context.clone(),
            BiStream {
                request_stream: req_stream,
                response_sender: tx,
            },
        ));

        Ok(Response::new(ReceiverStream::new(rx)))
    }

    #[tracing::instrument(level = "debug", skip(self))]
    async fn user_agent(
        &self,
        request: Request<tonic::Streaming<UserAgentRequest>>,
    ) -> Result<Response<Self::UserAgentStream>, Status> {
        let req_stream = request.into_inner();
        let (tx, rx) = mpsc::channel(DEFAULT_CHANNEL_SIZE);
        let context = self.context.clone();

        let transport = grpc::GrpcAdapter::new(
            tx,
            req_stream,
            IdentityRecvConverter::<UserAgentRequest>::new(),
            UserAgentGrpcSender,
        );
        let props = UserAgentConnection::new(
            self.context.db.clone(),
            self.context.actors.clone(),
            Box::new(transport),
        );
        tokio::spawn(connect_user_agent(props));

        info!(event = "connection established", "grpc.user_agent");
        wire(
            |prepared: PreparedActor<UserAgentActor>, recipient| {
                prepared.spawn(UserAgentActor::new(context, recipient));
            },
            |prepared: PreparedActor<GrpcTransportActor<_, _, _>>, business_recipient| {
                prepared.spawn(GrpcTransportActor::new(
                    tx,
                    req_stream,
                    business_recipient,
                ));
            },
        )
        .await;

        Ok(Response::new(ReceiverStream::new(rx)))
    }
@@ -1,4 +0,0 @@
mod common;

#[path = "client/auth.rs"]
mod auth;
@@ -1,111 +0,0 @@
use arbiter_proto::proto::client::{
    AuthChallengeRequest, AuthChallengeSolution, ClientRequest,
    client_request::Payload as ClientRequestPayload,
    client_response::Payload as ClientResponsePayload,
};
use arbiter_proto::transport::Bi;
use arbiter_server::actors::GlobalActors;
use arbiter_server::{
    actors::client::{ClientConnection, connect_client},
    db::{self, schema},
};
use diesel::{ExpressionMethods as _, insert_into};
use diesel_async::RunQueryDsl;
use ed25519_dalek::Signer as _;

use super::common::ChannelTransport;

#[tokio::test]
#[test_log::test]
pub async fn test_unregistered_pubkey_rejected() {
    let db = db::create_test_pool().await;

    let (server_transport, mut test_transport) = ChannelTransport::new();
    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
    let props = ClientConnection::new(db.clone(), Box::new(server_transport), actors);
    let task = tokio::spawn(connect_client(props));

    let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
    let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();

    test_transport
        .send(ClientRequest {
            payload: Some(ClientRequestPayload::AuthChallengeRequest(
                AuthChallengeRequest {
                    pubkey: pubkey_bytes,
                },
            )),
        })
        .await
        .unwrap();

    // Auth fails, connect_client returns, transport drops
    task.await.unwrap();
}

#[tokio::test]
#[test_log::test]
pub async fn test_challenge_auth() {
    let db = db::create_test_pool().await;

    let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
    let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();

    {
        let mut conn = db.get().await.unwrap();
        insert_into(schema::program_client::table)
            .values(schema::program_client::public_key.eq(pubkey_bytes.clone()))
            .execute(&mut conn)
            .await
            .unwrap();
    }

    let (server_transport, mut test_transport) = ChannelTransport::new();
    let actors = GlobalActors::spawn(db.clone()).await.unwrap();

    let props = ClientConnection::new(db.clone(), Box::new(server_transport), actors);
    let task = tokio::spawn(connect_client(props));

    // Send challenge request
    test_transport
        .send(ClientRequest {
            payload: Some(ClientRequestPayload::AuthChallengeRequest(
                AuthChallengeRequest {
                    pubkey: pubkey_bytes,
                },
            )),
        })
        .await
        .unwrap();

    // Read the challenge response
    let response = test_transport
        .recv()
        .await
        .expect("should receive challenge");
    let challenge = match response {
        Ok(resp) => match resp.payload {
            Some(ClientResponsePayload::AuthChallenge(c)) => c,
            other => panic!("Expected AuthChallenge, got {other:?}"),
        },
        Err(err) => panic!("Expected Ok response, got Err({err:?})"),
    };

    // Sign the challenge and send solution
    let formatted_challenge = arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
    let signature = new_key.sign(&formatted_challenge);

    test_transport
        .send(ClientRequest {
            payload: Some(ClientRequestPayload::AuthChallengeSolution(
                AuthChallengeSolution {
                    signature: signature.to_bytes().to_vec(),
                },
            )),
        })
        .await
        .unwrap();

    // Auth completes, session spawned
    task.await.unwrap();
}
@@ -1,14 +1,10 @@
use arbiter_proto::transport::{Bi, Error};
use arbiter_server::{
    actors::keyholder::KeyHolder,
    db::{self, schema},
};
use async_trait::async_trait;
use diesel::QueryDsl;
use diesel_async::RunQueryDsl;
use memsafe::MemSafe;
use tokio::sync::mpsc;

#[allow(dead_code)]
pub async fn bootstrapped_keyholder(db: &db::DatabasePool) -> KeyHolder {
@@ -30,46 +26,3 @@ pub async fn root_key_history_id(db: &db::DatabasePool) -> i32 {
        .unwrap();
    id.expect("root_key_id should be set after bootstrap")
}

pub struct ChannelTransport<T, Y> {
    receiver: mpsc::Receiver<T>,
    sender: mpsc::Sender<Y>,
}

impl<T, Y> ChannelTransport<T, Y> {
    pub fn new() -> (Self, ChannelTransport<Y, T>) {
        let (tx1, rx1) = mpsc::channel(10);
        let (tx2, rx2) = mpsc::channel(10);
        (
            Self {
                receiver: rx1,
                sender: tx2,
            },
            ChannelTransport {
                receiver: rx2,
                sender: tx1,
            },
        )
    }
}

#[async_trait]
impl<T, Y> Bi<T, Y> for ChannelTransport<T, Y>
where
    T: Send + 'static,
    Y: Send + 'static,
{
    async fn send(&mut self, item: Y) -> Result<(), Error> {
        self.sender
            .send(item)
            .await
            .map_err(|_| Error::ChannelClosed)
    }

    async fn recv(&mut self) -> Option<T> {
        self.receiver.recv().await
    }
}
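The cross-wiring in `ChannelTransport::new` above (each endpoint holds one channel's sender and the other channel's receiver) can be sketched with std channels alone. This is a hedged stand-in: `Duplex` is a hypothetical name, and std's synchronous `mpsc` replaces tokio's.

```rust
use std::sync::mpsc;

// Two mpsc channels cross-wired into a bidirectional pair: a send on
// one endpoint arrives at the other endpoint's receiver.
struct Duplex<T, Y> {
    receiver: mpsc::Receiver<T>,
    sender: mpsc::Sender<Y>,
}

impl<T, Y> Duplex<T, Y> {
    fn new() -> (Duplex<T, Y>, Duplex<Y, T>) {
        let (tx1, rx1) = mpsc::channel();
        let (tx2, rx2) = mpsc::channel();
        (
            Duplex { receiver: rx1, sender: tx2 },
            Duplex { receiver: rx2, sender: tx1 },
        )
    }
}

fn main() {
    let (a, b) = Duplex::<String, String>::new();
    a.sender.send("ping".to_string()).unwrap();
    assert_eq!(b.receiver.recv().unwrap(), "ping");
    b.sender.send("pong".to_string()).unwrap();
    assert_eq!(a.receiver.recv().unwrap(), "pong");
    println!("duplex ok");
}
```

Note the type parameters swap between the two halves, exactly as in `ChannelTransport<T, Y>` / `ChannelTransport<Y, T>`, so the server and test sides agree on message direction at compile time.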
@@ -1,5 +1,30 @@
mod common;

use arbiter_proto::proto::UserAgentResponse;
use arbiter_server::actors::user_agent::UserAgentError;
use kameo::{Actor, actor::Recipient, actor::Spawn, prelude::Message};

/// A no-op actor that discards any messages it receives.
#[derive(Actor)]
struct NullSink;

impl Message<Result<UserAgentResponse, UserAgentError>> for NullSink {
    type Reply = ();

    async fn handle(
        &mut self,
        _msg: Result<UserAgentResponse, UserAgentError>,
        _ctx: &mut kameo::prelude::Context<Self, Self::Reply>,
    ) -> Self::Reply {
    }
}

/// Creates a `Recipient` that silently discards all messages.
fn null_recipient() -> Recipient<Result<UserAgentResponse, UserAgentError>> {
    let actor_ref = NullSink::spawn(NullSink);
    actor_ref.recipient()
}

#[path = "user_agent/auth.rs"]
mod auth;
#[path = "user_agent/unseal.rs"]
@@ -1,50 +1,57 @@
use arbiter_proto::proto::user_agent::{
    AuthChallengeRequest, AuthChallengeSolution, UserAgentRequest,
    user_agent_request::Payload as UserAgentRequestPayload,
use arbiter_proto::proto::{
    UserAgentResponse,
    auth::{self, AuthChallengeRequest, AuthOk},
    user_agent_response::Payload as UserAgentResponsePayload,
};
use arbiter_proto::transport::Bi;
use arbiter_server::{
    actors::{
        GlobalActors,
        bootstrap::GetToken,
        user_agent::{UserAgentConnection, connect_user_agent},
        user_agent::{HandleAuthChallengeRequest, HandleAuthChallengeSolution, UserAgentActor},
    },
    db::{self, schema},
};
use diesel::{ExpressionMethods as _, QueryDsl, insert_into};
use diesel_async::RunQueryDsl;
use ed25519_dalek::Signer as _;

use super::common::ChannelTransport;
use kameo::actor::Spawn;

#[tokio::test]
#[test_log::test]
pub async fn test_bootstrap_token_auth() {
    let db = db::create_test_pool().await;
    let db = db::create_test_pool().await;

    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
    let token = actors.bootstrapper.ask(GetToken).await.unwrap().unwrap();

    let (server_transport, mut test_transport) = ChannelTransport::new();
    let props = UserAgentConnection::new(db.clone(), actors, Box::new(server_transport));
    let task = tokio::spawn(connect_user_agent(props));
    let user_agent =
        UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
    let user_agent_ref = UserAgentActor::spawn(user_agent);

    let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
    let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();

    test_transport
        .send(UserAgentRequest {
            payload: Some(UserAgentRequestPayload::AuthChallengeRequest(
                AuthChallengeRequest {
                    pubkey: pubkey_bytes,
                    bootstrap_token: Some(token),
                },
            )),
    let result = user_agent_ref
        .ask(HandleAuthChallengeRequest {
            req: AuthChallengeRequest {
                pubkey: pubkey_bytes,
                bootstrap_token: Some(token),
            },
        })
        .await
        .unwrap();
        .expect("Shouldn't fail to send message");

    task.await.unwrap();
    assert_eq!(
        result,
        UserAgentResponse {
            payload: Some(UserAgentResponsePayload::AuthMessage(
                arbiter_proto::proto::auth::ServerMessage {
                    payload: Some(arbiter_proto::proto::auth::server_message::Payload::AuthOk(
                        AuthOk {},
                    )),
                },
            )),
        }
    );

    let mut conn = db.get().await.unwrap();
    let stored_pubkey: Vec<u8> = schema::useragent_client::table
@@ -59,45 +66,49 @@ pub async fn test_bootstrap_token_auth() {
#[test_log::test]
pub async fn test_bootstrap_invalid_token_auth() {
    let db = db::create_test_pool().await;
    let actors = GlobalActors::spawn(db.clone()).await.unwrap();

    let (server_transport, mut test_transport) = ChannelTransport::new();
    let props = UserAgentConnection::new(db.clone(), actors, Box::new(server_transport));
    let task = tokio::spawn(connect_user_agent(props));
    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
    let user_agent =
        UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
    let user_agent_ref = UserAgentActor::spawn(user_agent);

    let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
    let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();

    test_transport
        .send(UserAgentRequest {
            payload: Some(UserAgentRequestPayload::AuthChallengeRequest(
                AuthChallengeRequest {
                    pubkey: pubkey_bytes,
                    bootstrap_token: Some("invalid_token".to_string()),
                },
            )),
    let result = user_agent_ref
        .ask(HandleAuthChallengeRequest {
            req: AuthChallengeRequest {
                pubkey: pubkey_bytes,
                bootstrap_token: Some("invalid_token".to_string()),
            },
        })
        .await
        .unwrap();
        .await;

    // Auth fails, connect_user_agent returns, transport drops
    task.await.unwrap();

    // Verify no key was registered
    let mut conn = db.get().await.unwrap();
    let count: i64 = schema::useragent_client::table
        .count()
        .get_result::<i64>(&mut conn)
        .await
        .unwrap();
    assert_eq!(count, 0);
    match result {
        Err(kameo::error::SendError::HandlerError(err)) => {
            assert!(
                matches!(err, arbiter_server::actors::user_agent::UserAgentError::InvalidBootstrapToken),
                "Expected InvalidBootstrapToken, got {err:?}"
            );
        }
        Err(other) => {
            panic!("Expected SendError::HandlerError, got {other:?}");
        }
        Ok(_) => {
            panic!("Expected error due to invalid bootstrap token, but got success");
        }
    }
}

#[tokio::test]
#[test_log::test]
pub async fn test_challenge_auth() {
    let db = db::create_test_pool().await;

    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
    let user_agent =
        UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
    let user_agent_ref = UserAgentActor::spawn(user_agent);

    let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
    let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
@@ -111,51 +122,50 @@ pub async fn test_challenge_auth() {
        .unwrap();
    }

    let (server_transport, mut test_transport) = ChannelTransport::new();
    let props = UserAgentConnection::new(db.clone(), actors, Box::new(server_transport));
    let task = tokio::spawn(connect_user_agent(props));

    // Send challenge request
    test_transport
        .send(UserAgentRequest {
            payload: Some(UserAgentRequestPayload::AuthChallengeRequest(
                AuthChallengeRequest {
                    pubkey: pubkey_bytes,
                    bootstrap_token: None,
                },
            )),
    let result = user_agent_ref
        .ask(HandleAuthChallengeRequest {
            req: AuthChallengeRequest {
                pubkey: pubkey_bytes,
                bootstrap_token: None,
            },
        })
        .await
        .unwrap();
        .expect("Shouldn't fail to send message");

    // Read the challenge response
    let response = test_transport
        .recv()
        .await
        .expect("should receive challenge");
    let challenge = match response {
        Ok(resp) => match resp.payload {
            Some(UserAgentResponsePayload::AuthChallenge(c)) => c,
            other => panic!("Expected AuthChallenge, got {other:?}"),
        },
        Err(err) => panic!("Expected Ok response, got Err({err:?})"),
    let UserAgentResponse {
        payload:
            Some(UserAgentResponsePayload::AuthMessage(arbiter_proto::proto::auth::ServerMessage {
                payload:
                    Some(arbiter_proto::proto::auth::server_message::Payload::AuthChallenge(challenge)),
            })),
    } = result
    else {
        panic!("Expected auth challenge response, got {result:?}");
    };

    // Sign the challenge and send solution
    let formatted_challenge = arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
    let formatted_challenge = arbiter_proto::format_challenge(&challenge);
    let signature = new_key.sign(&formatted_challenge);
    let serialized_signature = signature.to_bytes().to_vec();

    test_transport
        .send(UserAgentRequest {
            payload: Some(UserAgentRequestPayload::AuthChallengeSolution(
                AuthChallengeSolution {
                    signature: signature.to_bytes().to_vec(),
                },
            )),
    let result = user_agent_ref
        .ask(HandleAuthChallengeSolution {
            solution: auth::AuthChallengeSolution {
|
||||
signature: serialized_signature,
|
||||
},
|
||||
})
|
||||
.await
|
||||
.unwrap();
|
||||
.expect("Shouldn't fail to send message");
|
||||
|
||||
// Auth completes, session spawned
|
||||
task.await.unwrap();
|
||||
assert_eq!(
|
||||
result,
|
||||
UserAgentResponse {
|
||||
payload: Some(UserAgentResponsePayload::AuthMessage(
|
||||
arbiter_proto::proto::auth::ServerMessage {
|
||||
payload: Some(arbiter_proto::proto::auth::server_message::Payload::AuthOk(
|
||||
AuthOk {},
|
||||
)),
|
||||
},
|
||||
)),
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
@@ -1,26 +1,30 @@
use arbiter_proto::proto::user_agent::{
    UnsealEncryptedKey, UnsealResult, UnsealStart, UserAgentRequest,
    user_agent_request::Payload as UserAgentRequestPayload,
use arbiter_proto::proto::{
    UnsealEncryptedKey, UnsealResult, UnsealStart, auth::AuthChallengeRequest,
    user_agent_response::Payload as UserAgentResponsePayload,
};
use arbiter_server::{
    actors::{
        GlobalActors,
        bootstrap::GetToken,
        keyholder::{Bootstrap, Seal},
        user_agent::session::UserAgentSession,
        user_agent::{
            HandleAuthChallengeRequest, HandleUnsealEncryptedKey, HandleUnsealRequest,
            UserAgentActor,
        },
    },
    db,
};
use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
use kameo::actor::{ActorRef, Spawn};
use memsafe::MemSafe;
use x25519_dalek::{EphemeralSecret, PublicKey};

async fn setup_sealed_user_agent(
async fn setup_authenticated_user_agent(
    seal_key: &[u8],
) -> (db::DatabasePool, UserAgentSession) {
) -> (arbiter_server::db::DatabasePool, ActorRef<UserAgentActor>) {
    let db = db::create_test_pool().await;
    let actors = GlobalActors::spawn(db.clone()).await.unwrap();

    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
    actors
        .key_holder
        .ask(Bootstrap {
@@ -30,23 +34,37 @@ async fn setup_sealed_user_agent(
        .unwrap();
    actors.key_holder.ask(Seal).await.unwrap();

    let session = UserAgentSession::new_test(db.clone(), actors);
    let user_agent =
        UserAgentActor::new_manual(db.clone(), actors.clone(), super::null_recipient());
    let user_agent_ref = UserAgentActor::spawn(user_agent);

    (db, session)
    let token = actors.bootstrapper.ask(GetToken).await.unwrap().unwrap();
    let auth_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
    user_agent_ref
        .ask(HandleAuthChallengeRequest {
            req: AuthChallengeRequest {
                pubkey: auth_key.verifying_key().to_bytes().to_vec(),
                bootstrap_token: Some(token),
            },
        })
        .await
        .unwrap();

    (db, user_agent_ref)
}

async fn client_dh_encrypt(
    user_agent: &mut UserAgentSession,
    user_agent_ref: &ActorRef<UserAgentActor>,
    key_to_send: &[u8],
) -> UnsealEncryptedKey {
    let client_secret = EphemeralSecret::random();
    let client_public = PublicKey::from(&client_secret);

    let response = user_agent
        .process_transport_inbound(UserAgentRequest {
            payload: Some(UserAgentRequestPayload::UnsealStart(UnsealStart {
    let response = user_agent_ref
        .ask(HandleUnsealRequest {
            req: UnsealStart {
                client_pubkey: client_public.as_bytes().to_vec(),
            })),
            },
        })
        .await
        .unwrap();
@@ -73,22 +91,16 @@ async fn client_dh_encrypt(
    }
}

fn unseal_key_request(req: UnsealEncryptedKey) -> UserAgentRequest {
    UserAgentRequest {
        payload: Some(UserAgentRequestPayload::UnsealEncryptedKey(req)),
    }
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_success() {
    let seal_key = b"test-seal-key";
    let (_db, mut user_agent) = setup_sealed_user_agent(seal_key).await;
    let (_db, user_agent_ref) = setup_authenticated_user_agent(seal_key).await;

    let encrypted_key = client_dh_encrypt(&mut user_agent, seal_key).await;
    let encrypted_key = client_dh_encrypt(&user_agent_ref, seal_key).await;

    let response = user_agent
        .process_transport_inbound(unseal_key_request(encrypted_key))
    let response = user_agent_ref
        .ask(HandleUnsealEncryptedKey { req: encrypted_key })
        .await
        .unwrap();

@@ -101,12 +113,12 @@ pub async fn test_unseal_success() {
#[tokio::test]
#[test_log::test]
pub async fn test_unseal_wrong_seal_key() {
    let (_db, mut user_agent) = setup_sealed_user_agent(b"correct-key").await;
    let (_db, user_agent_ref) = setup_authenticated_user_agent(b"correct-key").await;

    let encrypted_key = client_dh_encrypt(&mut user_agent, b"wrong-key").await;
    let encrypted_key = client_dh_encrypt(&user_agent_ref, b"wrong-key").await;

    let response = user_agent
        .process_transport_inbound(unseal_key_request(encrypted_key))
    let response = user_agent_ref
        .ask(HandleUnsealEncryptedKey { req: encrypted_key })
        .await
        .unwrap();

@@ -119,26 +131,28 @@ pub async fn test_unseal_wrong_seal_key() {
#[tokio::test]
#[test_log::test]
pub async fn test_unseal_corrupted_ciphertext() {
    let (_db, mut user_agent) = setup_sealed_user_agent(b"test-key").await;
    let (_db, user_agent_ref) = setup_authenticated_user_agent(b"test-key").await;

    let client_secret = EphemeralSecret::random();
    let client_public = PublicKey::from(&client_secret);

    user_agent
        .process_transport_inbound(UserAgentRequest {
            payload: Some(UserAgentRequestPayload::UnsealStart(UnsealStart {
    user_agent_ref
        .ask(HandleUnsealRequest {
            req: UnsealStart {
                client_pubkey: client_public.as_bytes().to_vec(),
            })),
            },
        })
        .await
        .unwrap();

    let response = user_agent
        .process_transport_inbound(unseal_key_request(UnsealEncryptedKey {
            nonce: vec![0u8; 24],
            ciphertext: vec![0u8; 32],
            associated_data: vec![],
        }))
    let response = user_agent_ref
        .ask(HandleUnsealEncryptedKey {
            req: UnsealEncryptedKey {
                nonce: vec![0u8; 24],
                ciphertext: vec![0u8; 32],
                associated_data: vec![],
            },
        })
        .await
        .unwrap();

@@ -148,17 +162,49 @@ pub async fn test_unseal_corrupted_ciphertext() {
    );
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_start_without_auth_fails() {
    let db = db::create_test_pool().await;

    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
    let user_agent =
        UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
    let user_agent_ref = UserAgentActor::spawn(user_agent);

    let client_secret = EphemeralSecret::random();
    let client_public = PublicKey::from(&client_secret);

    let result = user_agent_ref
        .ask(HandleUnsealRequest {
            req: UnsealStart {
                client_pubkey: client_public.as_bytes().to_vec(),
            },
        })
        .await;

    match result {
        Err(kameo::error::SendError::HandlerError(err)) => {
            assert!(
                matches!(err, arbiter_server::actors::user_agent::UserAgentError::InvalidState),
                "Expected InvalidState, got {err:?}"
            );
        }
        other => panic!("Expected state machine error, got {other:?}"),
    }
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_retry_after_invalid_key() {
    let seal_key = b"real-seal-key";
    let (_db, mut user_agent) = setup_sealed_user_agent(seal_key).await;
    let (_db, user_agent_ref) = setup_authenticated_user_agent(seal_key).await;

    {
        let encrypted_key = client_dh_encrypt(&mut user_agent, b"wrong-key").await;
        let encrypted_key = client_dh_encrypt(&user_agent_ref, b"wrong-key").await;

        let response = user_agent
            .process_transport_inbound(unseal_key_request(encrypted_key))
        let response = user_agent_ref
            .ask(HandleUnsealEncryptedKey { req: encrypted_key })
            .await
            .unwrap();

@@ -169,10 +215,10 @@ pub async fn test_unseal_retry_after_invalid_key() {
    }

    {
        let encrypted_key = client_dh_encrypt(&mut user_agent, seal_key).await;
        let encrypted_key = client_dh_encrypt(&user_agent_ref, seal_key).await;

        let response = user_agent
            .process_transport_inbound(unseal_key_request(encrypted_key))
        let response = user_agent_ref
            .ask(HandleUnsealEncryptedKey { req: encrypted_key })
            .await
            .unwrap();

@@ -1,7 +0,0 @@
[package]
name = "arbiter-tokens-registry"
version = "0.1.0"
edition = "2024"

[dependencies]
alloy.workspace = true
@@ -1 +0,0 @@
pub mod nonfungible;
@@ -1,13 +0,0 @@
use alloy::primitives::{Address, ChainId, address};

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct TokenInfo {
    pub name: &'static str,
    pub symbol: &'static str,
    pub decimals: u32,
    pub contract: Address,
    pub chain: ChainId,
    pub logo_uri: Option<&'static str>,
}

include!("tokens.rs");
File diff suppressed because it is too large
@@ -1 +0,0 @@
pub mod evm;
@@ -9,13 +9,7 @@ arbiter-proto.path = "../arbiter-proto"
kameo.workspace = true
tokio = {workspace = true, features = ["net"]}
tonic.workspace = true
tonic.features = ["tls-aws-lc"]
tracing.workspace = true
ed25519-dalek.workspace = true
smlang.workspace = true
x25519-dalek.workspace = true
thiserror.workspace = true
tokio-stream.workspace = true
http = "1.4.0"
rustls-webpki = { version = "0.103.9", features = ["aws-lc-rs"] }
async-trait.workspace = true

@@ -1,72 +0,0 @@
use arbiter_proto::{
    proto::{
        user_agent::{UserAgentRequest, UserAgentResponse},
        arbiter_service_client::ArbiterServiceClient,
    },
    transport::{IdentityRecvConverter, IdentitySendConverter, grpc},
    url::ArbiterUrl,
};
use ed25519_dalek::SigningKey;
use kameo::actor::{ActorRef, Spawn};

use tokio::sync::mpsc;
use tokio_stream::wrappers::ReceiverStream;

use tonic::transport::ClientTlsConfig;

#[derive(Debug, thiserror::Error)]
pub enum ConnectError {
    #[error("Could establish connection")]
    Connection(#[from] tonic::transport::Error),

    #[error("Invalid server URI")]
    InvalidUri(#[from] http::uri::InvalidUri),

    #[error("Invalid CA certificate")]
    InvalidCaCert(#[from] webpki::Error),

    #[error("gRPC error")]
    Grpc(#[from] tonic::Status),
}

use super::UserAgentActor;

pub type UserAgentGrpc = ActorRef<
    UserAgentActor<
        grpc::GrpcAdapter<
            IdentityRecvConverter<UserAgentResponse>,
            IdentitySendConverter<UserAgentRequest>,
        >,
    >,
>;
pub async fn connect_grpc(
    url: ArbiterUrl,
    key: SigningKey,
) -> Result<UserAgentGrpc, ConnectError> {
    let bootstrap_token = url.bootstrap_token.clone();
    let anchor = webpki::anchor_from_trusted_cert(&url.ca_cert)?.to_owned();
    let tls = ClientTlsConfig::new().trust_anchor(anchor);

    // TODO: if `host` is localhost, we need to verify server's process authenticity
    let channel = tonic::transport::Channel::from_shared(format!("{}:{}", url.host, url.port))?
        .tls_config(tls)?
        .connect()
        .await?;

    let mut client = ArbiterServiceClient::new(channel);
    let (tx, rx) = mpsc::channel(16);
    let bistream = client.user_agent(ReceiverStream::new(rx)).await?;
    let bistream = bistream.into_inner();

    let adapter = grpc::GrpcAdapter::new(
        tx,
        bistream,
        IdentityRecvConverter::new(),
        IdentitySendConverter::new(),
    );

    let actor = UserAgentActor::spawn(UserAgentActor::new(key, bootstrap_token, adapter));

    Ok(actor)
}
@@ -1,195 +1,66 @@
use arbiter_proto::{
    format_challenge,
    proto::user_agent::{
        AuthChallengeRequest, AuthChallengeSolution, AuthOk,
        UserAgentRequest, UserAgentResponse,
        user_agent_request::Payload as UserAgentRequestPayload,
        user_agent_response::Payload as UserAgentResponsePayload,
    },
    transport::Bi,
use arbiter_proto::{proto::UserAgentRequest, transport::TransportActor};
use ed25519_dalek::SigningKey;
use kameo::{
    Actor, Reply,
    actor::{ActorRef, WeakActorRef},
    prelude::Message,
};
use ed25519_dalek::{Signer, SigningKey};
use kameo::{Actor, actor::ActorRef};
use smlang::statemachine;
use tokio::select;
use tracing::{error, info};
use tonic::transport::CertificateDer;
use tracing::{debug, error};

struct Storage {
    pub identity: SigningKey,
    pub server_ca_cert: CertificateDer<'static>,
}

#[derive(Debug)]
pub enum InitError {
    StorageError,
    Other(String),
}

statemachine! {
    name: UserAgent,
    name: UserAgentStateMachine,
    custom_error: false,
    transitions: {
        *Init + SentAuthChallengeRequest = WaitingForServerAuth,
        WaitingForServerAuth + ReceivedAuthChallenge = WaitingForAuthOk,
        WaitingForServerAuth + ReceivedAuthOk = Authenticated,
        WaitingForAuthOk + ReceivedAuthOk = Authenticated,
        *Init + SendAuthChallenge = WaitingForAuthSolution
    }
}

pub struct DummyContext;
impl UserAgentStateMachineContext for DummyContext {}

#[derive(Debug, thiserror::Error)]
pub enum InboundError {
    #[error("Invalid user agent response")]
    InvalidResponse,
    #[error("Expected response payload")]
    MissingResponsePayload,
    #[error("Unexpected response payload")]
    UnexpectedResponsePayload,
    #[error("Invalid state for auth challenge")]
    InvalidStateForAuthChallenge,
    #[error("Invalid state for auth ok")]
    InvalidStateForAuthOk,
    #[error("State machine error")]
    StateTransitionFailed,
    #[error("Transport send failed")]
    TransportSendFailed,
}

pub struct UserAgentActor<Transport>
where
    Transport: Bi<UserAgentResponse, UserAgentRequest>,
{
pub struct UserAgentActor<A: TransportActor<UserAgentRequest>> {
    key: SigningKey,
    bootstrap_token: Option<String>,
    state: UserAgentStateMachine<DummyContext>,
    transport: Transport,
    server_ca_cert: CertificateDer<'static>,
    sender: ActorRef<A>,
}

impl<Transport> UserAgentActor<Transport>
where
    Transport: Bi<UserAgentResponse, UserAgentRequest>,
{
    pub fn new(key: SigningKey, bootstrap_token: Option<String>, transport: Transport) -> Self {
        Self {
            key,
            bootstrap_token,
            state: UserAgentStateMachine::new(DummyContext),
            transport,
        }
    }

    fn transition(&mut self, event: UserAgentEvents) -> Result<(), InboundError> {
        self.state.process_event(event).map_err(|e| {
            error!(?e, "useragent state transition failed");
            InboundError::StateTransitionFailed
        })?;
        Ok(())
    }

    async fn send_auth_challenge_request(&mut self) -> Result<(), InboundError> {
        let req = AuthChallengeRequest {
            pubkey: self.key.verifying_key().to_bytes().to_vec(),
            bootstrap_token: self.bootstrap_token.take(),
        };

        self.transition(UserAgentEvents::SentAuthChallengeRequest)?;

        self.transport
            .send(UserAgentRequest {
                payload: Some(UserAgentRequestPayload::AuthChallengeRequest(req)),
            })
            .await
            .map_err(|_| InboundError::TransportSendFailed)?;

        info!(actor = "useragent", "auth.request.sent");
        Ok(())
    }

    async fn handle_auth_challenge(
        &mut self,
        challenge: arbiter_proto::proto::user_agent::AuthChallenge,
    ) -> Result<(), InboundError> {
        self.transition(UserAgentEvents::ReceivedAuthChallenge)?;

        let formatted = format_challenge(challenge.nonce, &challenge.pubkey);
        let signature = self.key.sign(&formatted);
        let solution = AuthChallengeSolution {
            signature: signature.to_bytes().to_vec(),
        };

        self.transport
            .send(UserAgentRequest {
                payload: Some(UserAgentRequestPayload::AuthChallengeSolution(solution)),
            })
            .await
            .map_err(|_| InboundError::TransportSendFailed)?;

        info!(actor = "useragent", "auth.solution.sent");
        Ok(())
    }

    fn handle_auth_ok(&mut self, _ok: AuthOk) -> Result<(), InboundError> {
        self.transition(UserAgentEvents::ReceivedAuthOk)?;
        info!(actor = "useragent", "auth.ok");
        Ok(())
    }

    pub async fn process_inbound_transport(
        &mut self,
        inbound: UserAgentResponse
    ) -> Result<(), InboundError> {
        let payload = inbound
            .payload
            .ok_or(InboundError::MissingResponsePayload)?;

        match payload {
            UserAgentResponsePayload::AuthChallenge(challenge) => {
                self.handle_auth_challenge(challenge).await
            }
            UserAgentResponsePayload::AuthOk(ok) => self.handle_auth_ok(ok),
            _ => Err(InboundError::UnexpectedResponsePayload),
        }
    }
}

impl<Transport> Actor for UserAgentActor<Transport>
where
    Transport: Bi<UserAgentResponse, UserAgentRequest>,
{
impl<A: TransportActor<UserAgentRequest>> Actor for UserAgentActor<A> {
    type Args = Self;

    type Error = ();
    type Error = InitError;

    async fn on_start(
        mut args: Self::Args,
        _actor_ref: ActorRef<Self>,
    ) -> Result<Self, Self::Error> {
        if let Err(err) = args.send_auth_challenge_request().await {
            error!(?err, actor = "useragent", "auth.start.failed");
            return Err(());
        }
        Ok(args)
    async fn on_start(args: Self::Args, actor_ref: ActorRef<Self>) -> Result<Self, Self::Error> {
        todo!()
    }

    async fn next(
    async fn on_link_died(
        &mut self,
        _actor_ref: kameo::prelude::WeakActorRef<Self>,
        mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
    ) -> Option<kameo::mailbox::Signal<Self>> {
        loop {
            select! {
                signal = mailbox_rx.recv() => {
                    return signal;
                }
                inbound = self.transport.recv() => {
                    match inbound {
                        Some(inbound) => {
                            if let Err(err) = self.process_inbound_transport(inbound).await {
                                error!(?err, actor = "useragent", "transport.inbound.failed");
                                return Some(kameo::mailbox::Signal::Stop);
                            }
                        }
                        None => {
                            info!(actor = "useragent", "transport.closed");
                            return Some(kameo::mailbox::Signal::Stop);
                        }
                    }
                }
            }
        _: WeakActorRef<Self>,
        id: kameo::prelude::ActorId,
        _: kameo::prelude::ActorStopReason,
    ) -> Result<std::ops::ControlFlow<kameo::prelude::ActorStopReason>, Self::Error> {
        if id == self.sender.id() {
            error!("Transport actor died, stopping UserAgentActor");
            Ok(std::ops::ControlFlow::Break(
                kameo::prelude::ActorStopReason::Normal,
            ))
        } else {
            debug!(
                "Linked actor {} died, but it's not the transport actor, ignoring",
                id
            );
            Ok(std::ops::ControlFlow::Continue(()))
        }
    }
}

mod grpc;
pub use grpc::{connect_grpc, ConnectError};

@@ -1,141 +0,0 @@
use arbiter_proto::{
    format_challenge,
    proto::user_agent::{
        AuthChallenge, AuthOk,
        UserAgentRequest, UserAgentResponse,
        user_agent_request::Payload as UserAgentRequestPayload,
        user_agent_response::Payload as UserAgentResponsePayload,
    },
    transport::Bi,
};
use arbiter_useragent::UserAgentActor;
use ed25519_dalek::SigningKey;
use kameo::actor::Spawn;
use tokio::sync::mpsc;
use tokio::time::{Duration, timeout};
use async_trait::async_trait;

struct TestTransport {
    inbound_rx: mpsc::Receiver<UserAgentResponse>,
    outbound_tx: mpsc::Sender<UserAgentRequest>,
}

#[async_trait]
impl Bi<UserAgentResponse, UserAgentRequest> for TestTransport {
    async fn send(&mut self, item: UserAgentRequest) -> Result<(), arbiter_proto::transport::Error> {
        self.outbound_tx
            .send(item)
            .await
            .map_err(|_| arbiter_proto::transport::Error::ChannelClosed)
    }

    async fn recv(&mut self) -> Option<UserAgentResponse> {
        self.inbound_rx.recv().await
    }
}

fn make_transport() -> (
    TestTransport,
    mpsc::Sender<UserAgentResponse>,
    mpsc::Receiver<UserAgentRequest>,
) {
    let (inbound_tx, inbound_rx) = mpsc::channel(8);
    let (outbound_tx, outbound_rx) = mpsc::channel(8);
    (
        TestTransport {
            inbound_rx,
            outbound_tx,
        },
        inbound_tx,
        outbound_rx,
    )
}

fn test_key() -> SigningKey {
    SigningKey::from_bytes(&[7u8; 32])
}

#[tokio::test]
async fn sends_auth_request_on_start_with_bootstrap_token() {
    let key = test_key();
    let pubkey = key.verifying_key().to_bytes().to_vec();
    let bootstrap_token = Some("bootstrap-123".to_string());
    let (transport, inbound_tx, mut outbound_rx) = make_transport();

    let actor = UserAgentActor::spawn(UserAgentActor::new(key, bootstrap_token.clone(), transport));

    let outbound = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for auth request")
        .expect("channel closed before auth request");

    let UserAgentRequest {
        payload: Some(UserAgentRequestPayload::AuthChallengeRequest(req)),
    } = outbound
    else {
        panic!("expected auth challenge request");
    };

    assert_eq!(req.pubkey, pubkey);
    assert_eq!(req.bootstrap_token, bootstrap_token);

    drop(inbound_tx);
    drop(actor);
}

#[tokio::test]
async fn challenge_flow_sends_solution_from_transport_inbound() {
    let key = test_key();
    let verify_key = key.verifying_key();
    let (transport, inbound_tx, mut outbound_rx) = make_transport();

    let actor = UserAgentActor::spawn(UserAgentActor::new(key, None, transport));

    let _initial_auth_request = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for initial auth request")
        .expect("missing initial auth request");

    let challenge = AuthChallenge {
        pubkey: verify_key.to_bytes().to_vec(),
        nonce: 42,
    };
    inbound_tx
        .send(UserAgentResponse {
            payload: Some(UserAgentResponsePayload::AuthChallenge(challenge.clone())),
        })
        .await
        .unwrap();

    let outbound = timeout(Duration::from_secs(1), outbound_rx.recv())
        .await
        .expect("timed out waiting for challenge solution")
        .expect("missing challenge solution");

    let UserAgentRequest {
        payload: Some(UserAgentRequestPayload::AuthChallengeSolution(solution)),
    } = outbound
    else {
        panic!("expected auth challenge solution");
    };

    let formatted = format_challenge(challenge.nonce, &challenge.pubkey);
    let sig: ed25519_dalek::Signature = solution
        .signature
        .as_slice()
        .try_into()
        .expect("signature bytes length");
    verify_key
        .verify_strict(&formatted, &sig)
        .expect("solution signature should verify");

    inbound_tx
        .send(UserAgentResponse {
            payload: Some(UserAgentResponsePayload::AuthOk(AuthOk {})),
        })
        .await
        .unwrap();

    drop(inbound_tx);
    drop(actor);
}