Compare commits

Comparing `enforcing-...` → `feat-lints` (9 commits)

Commits: 41b3fc5d39, f6a0c32b9d, 62dff3f810, 6e22f368c9, f3cf6a9438, a9f9fc2a9d, d22ab49e3d, a845181ef6, 0d424f3afc
AGENTS.md (+21)

@@ -100,6 +100,27 @@ diesel migration generate <name> --migration-dir crates/arbiter-server/migration
 diesel migration run --migration-dir crates/arbiter-server/migrations
 ```
+
+### Code Conventions
+
+**`#[must_use]` Attribute:**
+Apply the `#[must_use]` attribute to functions whose return value is critical and should not be accidentally ignored. This is commonly used for:
+
+- Methods that return `bool` indicating success/failure or validation state
+- Any function where ignoring the return value indicates a logic error
+
+Do not apply `#[must_use]` redundantly to items (types or functions) that are already annotated with `#[must_use]`.
+
+Example:
+
+```rust
+#[must_use]
+pub fn verify(&self, nonce: i32, context: &[u8], signature: &Signature) -> bool {
+    // verification logic
+}
+```
+
+This forces callers to either use the return value or explicitly ignore it with `let _ = ...;`, preventing silent failures.
 
 ## User Agent (Flutter + Rinf at `useragent/`)
 The Flutter app uses [Rinf](https://rinf.cunarist.org) to call Rust code. The Rust logic lives in `useragent/native/hub/` as a separate crate that uses `arbiter-useragent` for the gRPC client.
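The `#[must_use]` convention added above can be illustrated with a standalone sketch; the `verify` here is a toy stand-in, not the project's real signature-verification method:

```rust
// Hypothetical minimal illustration of #[must_use] on a bool-returning check.
#[must_use]
fn verify(expected: u8, actual: u8) -> bool {
    expected == actual
}

fn main() {
    // Using the result satisfies the lint:
    assert!(verify(1, 1), "matching inputs should verify");

    // A bare `verify(1, 2);` would trigger the unused_must_use warning,
    // so an intentional discard must be spelled out:
    let _ = verify(1, 2);
}
```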
CLAUDE.md (+21)

Same 21-line diff as AGENTS.md above.
@@ -67,18 +67,14 @@ The `program_client.nonce` column stores the **next usable nonce** — i.e. it i
 ## Cryptography
 
 ### Authentication
-- **Client protocol:** ed25519
+- **Client protocol:** ML-DSA
 
 ### User-Agent Authentication
 
 User-agent authentication supports multiple signature schemes because platform-provided "hardware-bound" keys do not expose a uniform algorithm across operating systems and hardware.
 
-- **Supported schemes:** RSA, Ed25519, ECDSA (secp256k1)
-- **Why:** the user agent authenticates with keys backed by platform facilities, and those facilities differ by platform
-- **Apple Silicon Secure Enclave / Secure Element:** ECDSA-only in practice
-- **Windows Hello / TPM 2.0:** currently RSA-backed in our integration
-
-This is why the user-agent auth protocol carries an explicit `KeyType`, while the SDK client protocol remains fixed to ed25519.
+- **Supported schemes:** ML-DSA
+- **Why:** the Secure Enclave (macOS) supports it natively; on other platforms we can emulate it while support rolls out
 
 ### Encryption at Rest
 - **Scheme:** Symmetric AEAD — currently **XChaCha20-Poly1305**
server/Cargo.lock (generated, 924 changes)

File diff suppressed because it is too large.
@@ -4,44 +4,169 @@ members = [
 ]
 
 resolver = "3"
 
-[workspace.lints.clippy]
-disallowed-methods = "deny"
-
 [workspace.dependencies]
-tonic = { version = "0.14.5", features = [
-    "deflate",
-    "gzip",
-    "tls-connect-info",
-    "zstd",
-] }
-tracing = "0.1.44"
-tokio = { version = "1.50.0", features = ["full"] }
-ed25519-dalek = { version = "3.0.0-pre.6", features = ["rand_core"] }
-chrono = { version = "0.4.44", features = ["serde"] }
-rand = "0.10.0"
-rustls = { version = "0.23.37", features = ["aws-lc-rs"] }
-smlang = "0.8.0"
-thiserror = "2.0.18"
-async-trait = "0.1.89"
-futures = "0.3.32"
-tokio-stream = { version = "0.1.18", features = ["full"] }
-kameo = "0.19.2"
-prost-types = { version = "0.14.3", features = ["chrono"] }
-x25519-dalek = { version = "2.0.1", features = ["getrandom"] }
-rstest = "0.26.1"
-rustls-pki-types = "1.14.0"
 alloy = "1.7.3"
-rcgen = { version = "0.14.7", features = [
-    "aws_lc_rs",
-    "pem",
-    "x509-parser",
-    "zeroize",
-], default-features = false }
+async-trait = "0.1.89"
+base64 = "0.22.1"
+chrono = { version = "0.4.44", features = ["serde"] }
+ed25519-dalek = { version = "3.0.0-pre.6", features = ["rand_core"] }
+futures = "0.3.32"
+hmac = "0.12.1"
 k256 = { version = "0.13.4", features = ["ecdsa", "pkcs8"] }
-rsa = { version = "0.9", features = ["sha2"] }
-sha2 = "0.10"
-spki = "0.7"
-prost = "0.14.3"
+kameo = { version = "0.20.0", git = "https://github.com/tqwewe/kameo" } # hold this until new patch version is released
 miette = { version = "7.6.0", features = ["fancy", "serde"] }
+ml-dsa = { version = "0.1.0-rc.8", features = ["zeroize"] }
 mutants = "0.0.4"
+prost = "0.14.3"
+prost-types = { version = "0.14.3", features = ["chrono"] }
+rand = "0.10.0"
+rcgen = { version = "0.14.7", features = [ "aws_lc_rs", "pem", "x509-parser", "zeroize" ], default-features = false }
+rsa = { version = "0.9", features = ["sha2"] }
+rstest = "0.26.1"
+rustls = { version = "0.23.37", features = ["aws-lc-rs", "logging", "prefer-post-quantum", "std"], default-features = false }
+rustls-pki-types = "1.14.0"
+sha2 = "0.10"
+smlang = "0.8.0"
+spki = "0.7"
+thiserror = "2.0.18"
+tokio = { version = "1.50.0", features = ["full"] }
+tokio-stream = { version = "0.1.18", features = ["full"] }
+tonic = { version = "0.14.5", features = [ "deflate", "gzip", "tls-connect-info", "zstd" ] }
+tracing = "0.1.44"
+x25519-dalek = { version = "2.0.1", features = ["getrandom"] }
+
+[workspace.lints.rust]
+missing_unsafe_on_extern = "deny"
+unsafe_attr_outside_unsafe = "deny"
+unsafe_op_in_unsafe_fn = "deny"
+unstable_features = "deny"
+
+deprecated_safe_2024 = "warn"
+ffi_unwind_calls = "warn"
+linker_messages = "warn"
+
+elided_lifetimes_in_paths = "warn"
+explicit_outlives_requirements = "warn"
+impl-trait-overcaptures = "warn"
+impl-trait-redundant-captures = "warn"
+redundant_lifetimes = "warn"
+single_use_lifetimes = "warn"
+unused_lifetimes = "warn"
+
+macro_use_extern_crate = "warn"
+redundant_imports = "warn"
+unused_import_braces = "warn"
+unused_macro_rules = "warn"
+unused_qualifications = "warn"
+
+unit_bindings = "warn"
+
+# missing_docs = "warn" # ENABLE BY THE FIRST MAJOR VERSION!!
+unnameable_types = "warn"
+variant_size_differences = "warn"
+
+[workspace.lints.clippy]
+derive_partial_eq_without_eq = "allow"
+future_not_send = "allow"
+inconsistent_struct_constructor = "allow"
+inline_always = "allow"
+missing_errors_doc = "allow"
+missing_fields_in_debug = "allow"
+missing_panics_doc = "allow"
+must_use_candidate = "allow"
+needless_pass_by_ref_mut = "allow"
+pub_underscore_fields = "allow"
+redundant_pub_crate = "allow"
+uninhabited_references = "allow" # safe with unsafe_code = "forbid" and standard uninhabited pattern (match *self {})
+
+# restriction lints
+alloc_instead_of_core = "warn"
+allow_attributes_without_reason = "warn"
+as_conversions = "warn"
+assertions_on_result_states = "warn"
+cfg_not_test = "warn"
+clone_on_ref_ptr = "warn"
+cognitive_complexity = "warn"
+create_dir = "warn"
+dbg_macro = "warn"
+decimal_literal_representation = "warn"
+default_union_representation = "warn"
+deref_by_slicing = "warn"
+disallowed_script_idents = "warn"
+doc_include_without_cfg = "warn"
+empty_drop = "warn"
+empty_enum_variants_with_brackets = "warn"
+empty_structs_with_brackets = "warn"
+error_impl_error = "warn"
+exit = "warn"
+filetype_is_file = "warn"
+float_arithmetic = "warn"
+float_cmp_const = "warn"
+fn_to_numeric_cast_any = "warn"
+get_unwrap = "warn"
+if_then_some_else_none = "warn"
+indexing_slicing = "warn"
+infinite_loop = "warn"
+inline_asm_x86_att_syntax = "warn"
+inline_asm_x86_intel_syntax = "warn"
+integer_division = "warn"
+large_include_file = "warn"
+lossy_float_literal = "warn"
+map_with_unused_argument_over_ranges = "warn"
+mem_forget = "warn"
+missing_assert_message = "warn"
+mixed_read_write_in_expression = "warn"
+modulo_arithmetic = "warn"
+multiple_unsafe_ops_per_block = "warn"
+mutex_atomic = "warn"
+mutex_integer = "warn"
+needless_raw_strings = "warn"
+non_ascii_literal = "warn"
+non_zero_suggestions = "warn"
+pathbuf_init_then_push = "warn"
+pointer_format = "warn"
+precedence_bits = "warn"
+pub_without_shorthand = "warn"
+rc_buffer = "warn"
+rc_mutex = "warn"
+redundant_test_prefix = "warn"
+redundant_type_annotations = "warn"
+ref_patterns = "warn"
+renamed_function_params = "warn"
+rest_pat_in_fully_bound_structs = "warn"
+return_and_then = "warn"
+semicolon_inside_block = "warn"
+str_to_string = "warn"
+string_add = "warn"
+string_lit_chars_any = "warn"
+string_slice = "warn"
+suspicious_xor_used_as_pow = "warn"
+try_err = "warn"
+undocumented_unsafe_blocks = "warn"
+uninlined_format_args = "warn"
+unnecessary_safety_comment = "warn"
+unnecessary_safety_doc = "warn"
+unnecessary_self_imports = "warn"
+unneeded_field_pattern = "warn"
+unused_result_ok = "warn"
+verbose_file_reads = "warn"
+
+# cargo lints
+negative_feature_names = "warn"
+redundant_feature_names = "warn"
+wildcard_dependencies = "warn"
+
+# ENABLE BY THE FIRST MAJOR VERSION!!
+# todo = "warn"
+# unimplemented = "warn"
+# panic = "warn"
+# panic_in_result_fn = "warn"
+#
+# cargo_common_metadata = "warn"
+# multiple_crate_versions = "warn" # a controversial option since it's really difficult to maintain
+
+disallowed_methods = "deny"
+
+nursery = { level = "warn", priority = -1 }
+pedantic = { level = "warn", priority = -1 }
@@ -6,6 +6,23 @@ disallowed-methods = [
     { path = "rsa::RsaPrivateKey::decrypt_blinded", reason = "RSA decryption is forbidden (RUSTSEC-2023-0071 Marvin Attack). Only PSS signing/verification is permitted." },
     { path = "rsa::traits::Decryptor::decrypt", reason = "RSA decryption is forbidden (RUSTSEC-2023-0071 Marvin Attack). This blocks decrypt() on rsa::{pkcs1v15,oaep}::DecryptingKey." },
     { path = "rsa::traits::RandomizedDecryptor::decrypt_with_rng", reason = "RSA decryption is forbidden (RUSTSEC-2023-0071 Marvin Attack). This blocks decrypt_with_rng() on rsa::{pkcs1v15,oaep}::DecryptingKey." },
 
-    { path = "arbiter_server::crypto::integrity::v1::lookup_verified_allow_unavailable", reason = "This function allows integrity checks to be bypassed when vault key material is unavailable, which can lead to silent security failures if used incorrectly. It should only be used in specific contexts where this behavior is acceptable, and its use should be carefully audited." },
 ]
+
+allow-indexing-slicing-in-tests = true
+allow-panic-in-tests = true
+check-inconsistent-struct-field-initializers = true
+suppress-restriction-lint-in-const = true
+allow-renamed-params-for = [
+    "core::convert::From",
+    "core::convert::TryFrom",
+    "core::str::FromStr",
+    "kameo::actor::Actor",
+]
+
+module-items-ordered-within-groupings = ["UPPER_SNAKE_CASE"]
+source-item-ordering = ["enum"]
+trait-assoc-item-kinds-order = [
+    "const",
+    "type",
+    "fn",
+] # community tested standard
@@ -13,12 +13,12 @@ evm = ["dep:alloy"]
 
 [dependencies]
 arbiter-proto.path = "../arbiter-proto"
+arbiter-crypto.path = "../arbiter-crypto"
 alloy = { workspace = true, optional = true }
 tonic.workspace = true
 tonic.features = ["tls-aws-lc"]
 tokio.workspace = true
 tokio-stream.workspace = true
-ed25519-dalek.workspace = true
 thiserror.workspace = true
 http = "1.4.0"
 rustls-webpki = { version = "0.103.10", features = ["aws-lc-rs"] }
@@ -1,5 +1,6 @@
+use arbiter_crypto::authn::{CLIENT_CONTEXT, SigningKey, format_challenge};
 use arbiter_proto::{
-    ClientMetadata, format_challenge,
+    ClientMetadata,
     proto::{
         client::{
             ClientRequest,
@@ -14,7 +15,6 @@ use arbiter_proto::{
         shared::ClientInfo as ProtoClientInfo,
     },
 };
-use ed25519_dalek::Signer as _;
 
 use crate::{
     storage::StorageError,
@@ -23,20 +23,20 @@ use crate::{
 
 #[derive(Debug, thiserror::Error)]
 pub enum AuthError {
-    #[error("Auth challenge was not returned by server")]
-    MissingAuthChallenge,
-
     #[error("Client approval denied by User Agent")]
     ApprovalDenied,
 
+    #[error("Auth challenge was not returned by server")]
+    MissingAuthChallenge,
+
     #[error("No User Agents online to approve client")]
     NoUserAgentsOnline,
 
-    #[error("Unexpected auth response payload")]
-    UnexpectedAuthResponse,
-
     #[error("Signing key storage error")]
     Storage(#[from] StorageError),
+
+    #[error("Unexpected auth response payload")]
+    UnexpectedAuthResponse,
 }
 
 fn map_auth_result(code: i32) -> AuthError {
@@ -54,14 +54,14 @@ fn map_auth_result(code: i32) -> AuthError {
 async fn send_auth_challenge_request(
     transport: &mut ClientTransport,
     metadata: ClientMetadata,
-    key: &ed25519_dalek::SigningKey,
-) -> std::result::Result<(), AuthError> {
+    key: &SigningKey,
+) -> Result<(), AuthError> {
     transport
         .send(ClientRequest {
             request_id: next_request_id(),
             payload: Some(ClientRequestPayload::Auth(proto_auth::Request {
                 payload: Some(AuthRequestPayload::ChallengeRequest(AuthChallengeRequest {
-                    pubkey: key.verifying_key().to_bytes().to_vec(),
+                    pubkey: key.public_key().to_bytes(),
                     client_info: Some(ProtoClientInfo {
                         name: metadata.name,
                         description: metadata.description,
@@ -76,7 +76,7 @@ async fn send_auth_challenge_request(
 
 async fn receive_auth_challenge(
     transport: &mut ClientTransport,
-) -> std::result::Result<AuthChallenge, AuthError> {
+) -> Result<AuthChallenge, AuthError> {
     let response = transport
         .recv()
         .await
@@ -95,11 +95,14 @@ async fn receive_auth_challenge(
 
 async fn send_auth_challenge_solution(
     transport: &mut ClientTransport,
-    key: &ed25519_dalek::SigningKey,
+    key: &SigningKey,
     challenge: AuthChallenge,
-) -> std::result::Result<(), AuthError> {
+) -> Result<(), AuthError> {
     let challenge_payload = format_challenge(challenge.nonce, &challenge.pubkey);
-    let signature = key.sign(&challenge_payload).to_bytes().to_vec();
+    let signature = key
+        .sign_message(&challenge_payload, CLIENT_CONTEXT)
+        .map_err(|_| AuthError::UnexpectedAuthResponse)?
+        .to_bytes();
 
     transport
         .send(ClientRequest {
@@ -114,9 +117,7 @@ async fn send_auth_challenge_solution(
         .map_err(|_| AuthError::UnexpectedAuthResponse)
 }
 
-async fn receive_auth_confirmation(
-    transport: &mut ClientTransport,
-) -> std::result::Result<(), AuthError> {
+async fn receive_auth_confirmation(transport: &mut ClientTransport) -> Result<(), AuthError> {
     let response = transport
         .recv()
         .await
@@ -137,11 +138,11 @@ async fn receive_auth_confirmation(
     }
 }
 
-pub(crate) async fn authenticate(
+pub async fn authenticate(
     transport: &mut ClientTransport,
     metadata: ClientMetadata,
-    key: &ed25519_dalek::SigningKey,
-) -> std::result::Result<(), AuthError> {
+    key: &SigningKey,
+) -> Result<(), AuthError> {
     send_auth_challenge_request(transport, metadata, key).await?;
     let challenge = receive_auth_challenge(transport).await?;
     send_auth_challenge_solution(transport, key, challenge).await?;
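The `authenticate` flow above is a four-step exchange: send a challenge request, receive the challenge, send a signed solution over `format_challenge(nonce, pubkey)`, then await confirmation. The solution step can be sketched standalone; the payload layout and checksum "signature" here are simplified assumptions, not the real `format_challenge` or ML-DSA scheme:

```rust
// Toy stand-ins for the challenge-solution step of the auth handshake.
struct Challenge {
    nonce: i32,
    pubkey: Vec<u8>,
}

fn format_challenge(nonce: i32, pubkey: &[u8]) -> Vec<u8> {
    // Assumed canonical layout: big-endian nonce followed by the pubkey.
    let mut payload = nonce.to_be_bytes().to_vec();
    payload.extend_from_slice(pubkey);
    payload
}

fn sign(payload: &[u8]) -> u64 {
    // Toy checksum standing in for key.sign_message(payload, CLIENT_CONTEXT).
    payload.iter().map(|&b| u64::from(b)).sum()
}

fn main() {
    // Client: receive a challenge, sign its canonical form.
    let challenge = Challenge { nonce: 7, pubkey: vec![1, 2, 3] };
    let payload = format_challenge(challenge.nonce, &challenge.pubkey);
    let signature = sign(&payload);

    // Server: recompute the same payload and check the signature matches.
    assert_eq!(signature, sign(&format_challenge(7, &[1, 2, 3])));
}
```

The important property is that both sides derive the signed bytes from the same canonical encoding, so the server never has to trust client-supplied payload bytes.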
@@ -29,16 +29,16 @@ async fn main() {
         }
     };
 
-    println!("{:#?}", url);
+    println!("{url:#?}");
 
     let metadata = ClientMetadata {
-        name: "arbiter-client test_connect".to_string(),
-        description: Some("Manual connection smoke test".to_string()),
-        version: Some(env!("CARGO_PKG_VERSION").to_string()),
+        name: "arbiter-client test_connect".to_owned(),
+        description: Some("Manual connection smoke test".to_owned()),
+        version: Some(env!("CARGO_PKG_VERSION").to_owned()),
     };
 
     match ArbiterClient::connect(url, metadata).await {
         Ok(_) => println!("Connected and authenticated successfully."),
-        Err(err) => eprintln!("Failed to connect: {:#?}", err),
+        Err(err) => eprintln!("Failed to connect: {err:#?}"),
     }
 }
@@ -1,3 +1,4 @@
+use arbiter_crypto::authn::SigningKey;
 use arbiter_proto::{
     ClientMetadata, proto::arbiter_service_client::ArbiterServiceClient, url::ArbiterUrl,
 };
@@ -17,33 +18,39 @@ use crate::{
 use crate::wallets::evm::ArbiterEvmWallet;
 
 #[derive(Debug, thiserror::Error)]
-pub enum Error {
-    #[error("gRPC error")]
-    Grpc(#[from] tonic::Status),
+pub enum ArbiterClientError {
+    #[error("Authentication error")]
+    Authentication(#[from] AuthError),
 
     #[error("Could not establish connection")]
     Connection(#[from] tonic::transport::Error),
 
-    #[error("Invalid server URI")]
-    InvalidUri(#[from] http::uri::InvalidUri),
+    #[error("gRPC error")]
+    Grpc(#[from] tonic::Status),
 
     #[error("Invalid CA certificate")]
     InvalidCaCert(#[from] webpki::Error),
 
-    #[error("Authentication error")]
-    Authentication(#[from] AuthError),
+    #[error("Invalid server URI")]
+    InvalidUri(#[from] http::uri::InvalidUri),
 
     #[error("Storage error")]
     Storage(#[from] StorageError),
 }
 
 pub struct ArbiterClient {
-    #[allow(dead_code)]
+    #[expect(
+        dead_code,
+        reason = "transport will be used in future methods for sending requests and receiving responses"
+    )]
     transport: Arc<Mutex<ClientTransport>>,
 }
 
 impl ArbiterClient {
-    pub async fn connect(url: ArbiterUrl, metadata: ClientMetadata) -> Result<Self, Error> {
+    pub async fn connect(
+        url: ArbiterUrl,
+        metadata: ClientMetadata,
+    ) -> Result<Self, ArbiterClientError> {
         let storage = FileSigningKeyStorage::from_default_location()?;
         Self::connect_with_storage(url, metadata, &storage).await
     }
@@ -52,7 +59,7 @@ impl ArbiterClient {
         url: ArbiterUrl,
         metadata: ClientMetadata,
         storage: &S,
-    ) -> Result<Self, Error> {
+    ) -> Result<Self, ArbiterClientError> {
         let key = storage.load_or_create()?;
         Self::connect_with_key(url, metadata, key).await
     }
@@ -60,8 +67,8 @@ impl ArbiterClient {
     pub async fn connect_with_key(
         url: ArbiterUrl,
         metadata: ClientMetadata,
-        key: ed25519_dalek::SigningKey,
-    ) -> Result<Self, Error> {
+        key: SigningKey,
+    ) -> Result<Self, ArbiterClientError> {
         let anchor = webpki::anchor_from_trusted_cert(&url.ca_cert)?.to_owned();
         let tls = ClientTlsConfig::new().trust_anchor(anchor);
 
@@ -88,7 +95,8 @@ impl ArbiterClient {
     }
 
     #[cfg(feature = "evm")]
-    pub async fn evm_wallets(&self) -> Result<Vec<ArbiterEvmWallet>, Error> {
+    #[expect(clippy::unused_async, reason = "false positive")]
+    pub async fn evm_wallets(&self) -> Result<Vec<ArbiterEvmWallet>, ArbiterClientError> {
         todo!("fetch EVM wallet list from server")
     }
 }
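The `#[allow(dead_code)]` → `#[expect(...)]` swap above is more than style: `expect` behaves like `allow` while the lint fires, but rustc raises `unfulfilled_lint_expectations` once it no longer does, so stale suppressions surface. A standalone sketch (the names here are hypothetical):

```rust
// If UnusedMarker is ever used, the expectation below becomes unfulfilled
// and rustc warns at the attribute site, unlike a silent #[allow(dead_code)].
#[expect(dead_code, reason = "placeholder for an API that is not wired up yet")]
struct UnusedMarker;

fn answer() -> i32 {
    41
}

fn main() {
    assert_eq!(answer(), 41);
}
```

The `reason = ...` field also satisfies the `allow_attributes_without_reason` restriction lint enabled in the workspace.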
@@ -5,7 +5,7 @@ mod transport;
 pub mod wallets;
 
 pub use auth::AuthError;
-pub use client::{ArbiterClient, Error};
+pub use client::{ArbiterClient, ArbiterClientError};
 pub use storage::{FileSigningKeyStorage, SigningKeyStorage, StorageError};
 
 #[cfg(feature = "evm")]
@@ -1,17 +1,18 @@
|
|||||||
|
use arbiter_crypto::authn::SigningKey;
|
||||||
use arbiter_proto::home_path;
|
use arbiter_proto::home_path;
|
||||||
use std::path::{Path, PathBuf};
|
use std::path::{Path, PathBuf};
|
||||||
|
|
||||||
#[derive(Debug, thiserror::Error)]
|
#[derive(Debug, thiserror::Error)]
|
||||||
pub enum StorageError {
|
pub enum StorageError {
|
||||||
#[error("I/O error")]
|
|
||||||
Io(#[from] std::io::Error),
|
|
||||||
|
|
||||||
#[error("Invalid signing key length in storage: expected {expected} bytes, got {actual} bytes")]
|
#[error("Invalid signing key length in storage: expected {expected} bytes, got {actual} bytes")]
|
||||||
InvalidKeyLength { expected: usize, actual: usize },
|
InvalidKeyLength { expected: usize, actual: usize },
|
||||||
|
|
||||||
|
#[error("I/O error")]
|
||||||
|
Io(#[from] std::io::Error),
|
||||||
}
|
}
|
||||||
|
|
||||||
 pub trait SigningKeyStorage {
-    fn load_or_create(&self) -> std::result::Result<ed25519_dalek::SigningKey, StorageError>;
+    fn load_or_create(&self) -> Result<SigningKey, StorageError>;
 }

 #[derive(Debug, Clone)]
@@ -20,17 +21,17 @@ pub struct FileSigningKeyStorage {
 }

 impl FileSigningKeyStorage {
-    pub const DEFAULT_FILE_NAME: &str = "sdk_client_ed25519.key";
+    pub const DEFAULT_FILE_NAME: &str = "sdk_client_ml_dsa.key";

     pub fn new(path: impl Into<PathBuf>) -> Self {
         Self { path: path.into() }
     }

-    pub fn from_default_location() -> std::result::Result<Self, StorageError> {
+    pub fn from_default_location() -> Result<Self, StorageError> {
         Ok(Self::new(home_path()?.join(Self::DEFAULT_FILE_NAME)))
     }

-    fn read_key(path: &Path) -> std::result::Result<ed25519_dalek::SigningKey, StorageError> {
+    fn read_key(path: &Path) -> Result<SigningKey, StorageError> {
         let bytes = std::fs::read(path)?;
         let raw: [u8; 32] =
             bytes
@@ -39,12 +40,12 @@ impl FileSigningKeyStorage {
                 expected: 32,
                 actual: v.len(),
             })?;
-        Ok(ed25519_dalek::SigningKey::from_bytes(&raw))
+        Ok(SigningKey::from_seed(raw))
     }
 }

 impl SigningKeyStorage for FileSigningKeyStorage {
-    fn load_or_create(&self) -> std::result::Result<ed25519_dalek::SigningKey, StorageError> {
+    fn load_or_create(&self) -> Result<SigningKey, StorageError> {
         if let Some(parent) = self.path.parent() {
             std::fs::create_dir_all(parent)?;
         }
@@ -53,8 +54,8 @@ impl SigningKeyStorage for FileSigningKeyStorage {
             return Self::read_key(&self.path);
         }

-        let key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
-        let raw_key = key.to_bytes();
+        let key = SigningKey::generate();
+        let raw_key = key.to_seed();

         // Use create_new to prevent accidental overwrite if another process creates the key first.
         match std::fs::OpenOptions::new()
@@ -103,7 +104,7 @@ mod tests {
             .load_or_create()
             .expect("second load_or_create should read same key");

-        assert_eq!(key_a.to_bytes(), key_b.to_bytes());
+        assert_eq!(key_a.to_seed(), key_b.to_seed());
         assert!(path.exists());

         std::fs::remove_file(path).expect("temp key file should be removable");
@@ -124,7 +125,7 @@ mod tests {
                 assert_eq!(expected, 32);
                 assert_eq!(actual, 31);
             }
-            other => panic!("unexpected error: {other:?}"),
+            other @ StorageError::Io(_) => panic!("unexpected error: {other:?}"),
         }

         std::fs::remove_file(path).expect("temp key file should be removable");
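The `create_new` comment in the hunk above relies on a standard first-writer-wins pattern: `OpenOptions::create_new(true)` fails with `ErrorKind::AlreadyExists` when the file already exists, so a process that loses the race can fall back to reading the key the winner wrote. A minimal std-only sketch of that pattern (the file name and payload here are placeholders, not the crate's actual key format):

```rust
use std::fs::OpenOptions;
use std::io::{ErrorKind, Read, Write};
use std::path::Path;

// Write `payload` only if `path` does not exist yet; otherwise read back
// whatever the winning writer stored. Mirrors the load_or_create flow.
fn write_once_or_read(path: &Path, payload: &[u8]) -> std::io::Result<Vec<u8>> {
    match OpenOptions::new().write(true).create_new(true).open(path) {
        Ok(mut file) => {
            file.write_all(payload)?;
            Ok(payload.to_vec())
        }
        // Another process created the file first: read its contents instead.
        Err(e) if e.kind() == ErrorKind::AlreadyExists => {
            let mut buf = Vec::new();
            std::fs::File::open(path)?.read_to_end(&mut buf)?;
            Ok(buf)
        }
        Err(e) => Err(e),
    }
}
```

The same guarantee would not hold with plain `create(true)`, which silently truncates or reuses an existing file.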
@@ -2,15 +2,15 @@ use arbiter_proto::proto::client::{ClientRequest, ClientResponse};
 use std::sync::atomic::{AtomicI32, Ordering};
 use tokio::sync::mpsc;

-pub(crate) const BUFFER_LENGTH: usize = 16;
+pub const BUFFER_LENGTH: usize = 16;
 static NEXT_REQUEST_ID: AtomicI32 = AtomicI32::new(1);

-pub(crate) fn next_request_id() -> i32 {
+pub fn next_request_id() -> i32 {
     NEXT_REQUEST_ID.fetch_add(1, Ordering::Relaxed)
 }

 #[derive(Debug, thiserror::Error)]
-pub(crate) enum ClientSignError {
+pub enum ClientSignError {
     #[error("Transport channel closed")]
     ChannelClosed,

@@ -18,7 +18,7 @@ pub(crate) enum ClientSignError {
     ConnectionClosed,
 }

-pub(crate) struct ClientTransport {
+pub struct ClientTransport {
     pub(crate) sender: mpsc::Sender<ClientRequest>,
     pub(crate) receiver: tonic::Streaming<ClientResponse>,
 }
@@ -27,18 +27,17 @@ impl ClientTransport {
     pub(crate) async fn send(
         &mut self,
         request: ClientRequest,
-    ) -> std::result::Result<(), ClientSignError> {
+    ) -> Result<(), ClientSignError> {
         self.sender
             .send(request)
             .await
             .map_err(|_| ClientSignError::ChannelClosed)
     }

-    pub(crate) async fn recv(&mut self) -> std::result::Result<ClientResponse, ClientSignError> {
+    pub(crate) async fn recv(&mut self) -> Result<ClientResponse, ClientSignError> {
         match self.receiver.message().await {
             Ok(Some(resp)) => Ok(resp),
-            Ok(None) => Err(ClientSignError::ConnectionClosed),
-            Err(_) => Err(ClientSignError::ConnectionClosed),
+            Ok(None) | Err(_) => Err(ClientSignError::ConnectionClosed),
         }
     }
 }
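`next_request_id` above pairs each outgoing `ClientRequest` with its response over a single bidirectional stream. The core mechanism is just a process-wide atomic counter; a std-only sketch (names are illustrative, matching the hunk but outside its crate):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

static NEXT_REQUEST_ID: AtomicI32 = AtomicI32::new(1);

// Relaxed ordering is enough here: ids only need to be unique, not
// ordered relative to other memory operations.
fn next_request_id() -> i32 {
    NEXT_REQUEST_ID.fetch_add(1, Ordering::Relaxed)
}
```

Because `fetch_add` is a single atomic read-modify-write, two tasks can never observe the same id, even without a lock.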
@@ -61,9 +61,9 @@ pub struct ArbiterEvmWallet {
 impl ArbiterEvmWallet {
     #[expect(
         dead_code,
-        reason = "constructor may be used in future extensions, e.g. to support wallet listing"
+        reason = "new will be used in future methods for creating wallets with different parameters"
     )]
-    pub(crate) fn new(transport: Arc<Mutex<ClientTransport>>, address: Address) -> Self {
+    pub(crate) const fn new(transport: Arc<Mutex<ClientTransport>>, address: Address) -> Self {
         Self {
             transport,
             address,
@@ -71,11 +71,12 @@ impl ArbiterEvmWallet {
         }
     }

-    pub fn address(&self) -> Address {
+    pub const fn address(&self) -> Address {
         self.address
     }

-    pub fn with_chain_id(mut self, chain_id: ChainId) -> Self {
+    #[must_use]
+    pub const fn with_chain_id(mut self, chain_id: ChainId) -> Self {
         self.chain_id = Some(chain_id);
         self
     }
@@ -150,6 +151,7 @@ impl TxSigner<Signature> for ArbiterEvmWallet {
             .recv()
             .await
             .map_err(|_| Error::other("failed to receive evm sign transaction response"))?;
+        drop(transport);

         if response.request_id != Some(request_id) {
             return Err(Error::other(
server/crates/arbiter-crypto/.gitignore (new file, vendored, 1 line)
@@ -0,0 +1 @@
+/target

server/crates/arbiter-crypto/Cargo.toml (new file, 21 lines)
@@ -0,0 +1,21 @@
+[package]
+name = "arbiter-crypto"
+version = "0.1.0"
+edition = "2024"
+
+[dependencies]
+ml-dsa = { workspace = true, optional = true }
+rand = { workspace = true, optional = true }
+base64 = { workspace = true, optional = true }
+memsafe = { version = "0.4.0", optional = true }
+hmac.workspace = true
+alloy.workspace = true
+chrono.workspace = true
+
+[lints]
+workspace = true
+
+[features]
+default = ["authn", "safecell"]
+authn = ["dep:ml-dsa", "dep:rand", "dep:base64"]
+safecell = ["dep:memsafe"]

server/crates/arbiter-crypto/src/authn/mod.rs (new file, 2 lines)
@@ -0,0 +1,2 @@
+pub mod v1;
+pub use v1::*;
server/crates/arbiter-crypto/src/authn/v1.rs (new file, 194 lines)
@@ -0,0 +1,194 @@
+use base64::{Engine as _, prelude::BASE64_STANDARD};
+use hmac::digest::Digest;
+use ml_dsa::{
+    EncodedVerifyingKey, Error, KeyGen, MlDsa87, Seed, Signature as MlDsaSignature,
+    SigningKey as MlDsaSigningKey, VerifyingKey as MlDsaVerifyingKey, signature::Keypair as _,
+};
+
+pub static CLIENT_CONTEXT: &[u8] = b"arbiter_client";
+pub static USERAGENT_CONTEXT: &[u8] = b"arbiter_user_agent";
+
+pub fn format_challenge(nonce: i32, pubkey: &[u8]) -> Vec<u8> {
+    let concat_form = format!("{}:{}", nonce, BASE64_STANDARD.encode(pubkey));
+    concat_form.into_bytes()
+}
+
+pub type KeyParams = MlDsa87;
+
+#[derive(Clone, Debug, PartialEq)]
+pub struct PublicKey(Box<MlDsaVerifyingKey<KeyParams>>);
+
+impl crate::hashing::Hashable for PublicKey {
+    fn hash<H: Digest>(&self, hasher: &mut H) {
+        hasher.update(self.to_bytes());
+    }
+}
+
+#[derive(Clone, Debug, PartialEq)]
+pub struct Signature(Box<MlDsaSignature<KeyParams>>);
+
+#[derive(Debug)]
+pub struct SigningKey(Box<MlDsaSigningKey<KeyParams>>);
+
+impl PublicKey {
+    pub fn to_bytes(&self) -> Vec<u8> {
+        self.0.encode().0.to_vec()
+    }
+
+    #[must_use]
+    pub fn verify(&self, nonce: i32, context: &[u8], signature: &Signature) -> bool {
+        self.0.verify_with_context(
+            &format_challenge(nonce, &self.to_bytes()),
+            context,
+            &signature.0,
+        )
+    }
+}
+
+impl Signature {
+    pub fn to_bytes(&self) -> Vec<u8> {
+        self.0.encode().0.to_vec()
+    }
+}
+
+impl SigningKey {
+    pub fn generate() -> Self {
+        Self(Box::new(KeyParams::key_gen(&mut rand::rng())))
+    }
+
+    pub fn from_seed(seed: [u8; 32]) -> Self {
+        Self(Box::new(KeyParams::from_seed(&Seed::from(seed))))
+    }
+
+    pub fn to_seed(&self) -> [u8; 32] {
+        self.0.to_seed().into()
+    }
+
+    pub fn public_key(&self) -> PublicKey {
+        self.0.verifying_key().into()
+    }
+
+    pub fn sign_message(&self, message: &[u8], context: &[u8]) -> Result<Signature, Error> {
+        self.0
+            .signing_key()
+            .sign_deterministic(message, context)
+            .map(Into::into)
+    }
+
+    pub fn sign_challenge(&self, nonce: i32, context: &[u8]) -> Result<Signature, Error> {
+        self.sign_message(
+            &format_challenge(nonce, &self.public_key().to_bytes()),
+            context,
+        )
+    }
+}
+
+impl From<MlDsaVerifyingKey<KeyParams>> for PublicKey {
+    fn from(value: MlDsaVerifyingKey<KeyParams>) -> Self {
+        Self(Box::new(value))
+    }
+}
+
+impl From<MlDsaSignature<KeyParams>> for Signature {
+    fn from(value: MlDsaSignature<KeyParams>) -> Self {
+        Self(Box::new(value))
+    }
+}
+
+impl From<MlDsaSigningKey<KeyParams>> for SigningKey {
+    fn from(value: MlDsaSigningKey<KeyParams>) -> Self {
+        Self(Box::new(value))
+    }
+}
+
+impl TryFrom<Vec<u8>> for PublicKey {
+    type Error = ();
+
+    fn try_from(value: Vec<u8>) -> Result<Self, Self::Error> {
+        Self::try_from(value.as_slice())
+    }
+}
+
+impl TryFrom<&'_ [u8]> for PublicKey {
+    type Error = ();
+
+    fn try_from(value: &[u8]) -> Result<Self, Self::Error> {
+        let encoded = EncodedVerifyingKey::<KeyParams>::try_from(value).map_err(|_| ())?;
+        Ok(Self(Box::new(MlDsaVerifyingKey::decode(&encoded))))
+    }
+}
+
+impl TryFrom<Vec<u8>> for Signature {
+    type Error = ();
+
+    fn try_from(value: Vec<u8>) -> Result<Self, Self::Error> {
+        Self::try_from(value.as_slice())
+    }
+}
+
+impl TryFrom<&'_ [u8]> for Signature {
+    type Error = ();
+
+    fn try_from(value: &[u8]) -> Result<Self, Self::Error> {
+        MlDsaSignature::try_from(value)
+            .map(|sig| Self(Box::new(sig)))
+            .map_err(|_| ())
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use ml_dsa::{KeyGen, MlDsa87, signature::Keypair as _};
+
+    use super::{CLIENT_CONTEXT, PublicKey, Signature, SigningKey, USERAGENT_CONTEXT};
+
+    #[test]
+    fn public_key_round_trip_decodes() {
+        let key = MlDsa87::key_gen(&mut rand::rng());
+        let encoded = PublicKey::from(key.verifying_key()).to_bytes();
+
+        let decoded = PublicKey::try_from(encoded.as_slice()).expect("public key should decode");
+
+        assert_eq!(decoded, PublicKey::from(key.verifying_key()));
+    }
+
+    #[test]
+    fn signature_round_trip_decodes() {
+        let key = SigningKey::generate();
+        let signature = key
+            .sign_message(b"challenge", CLIENT_CONTEXT)
+            .expect("signature should be created");
+
+        let decoded =
+            Signature::try_from(signature.to_bytes().as_slice()).expect("signature should decode");
+
+        assert_eq!(decoded, signature);
+    }
+
+    #[test]
+    fn challenge_verification_uses_context_and_canonical_key_bytes() {
+        let key = SigningKey::generate();
+        let public_key = key.public_key();
+        let nonce = 17;
+        let signature = key
+            .sign_challenge(nonce, CLIENT_CONTEXT)
+            .expect("signature should be created");
+
+        assert!(public_key.verify(nonce, CLIENT_CONTEXT, &signature));
+        assert!(!public_key.verify(nonce, USERAGENT_CONTEXT, &signature));
+    }
+
+    #[test]
+    fn signing_key_round_trip_seed_preserves_public_key_and_signing() {
+        let original = SigningKey::generate();
+        let restored = SigningKey::from_seed(original.to_seed());
+
+        assert_eq!(restored.public_key(), original.public_key());
+
+        let signature = restored
+            .sign_challenge(9, CLIENT_CONTEXT)
+            .expect("signature should be created");
+
+        assert!(restored.public_key().verify(9, CLIENT_CONTEXT, &signature));
+    }
+}
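`format_challenge` in the new file above binds the nonce to the signer's own encoded public key, so a signature over one key's challenge cannot be replayed against another key: the challenge is the UTF-8 string `{nonce}:{base64(pubkey)}`. A std-only sketch of the same shape, with a hand-rolled hex encoding standing in for the `base64` dependency (so the exact bytes differ from the crate's output):

```rust
// Hypothetical stand-in for format_challenge: the same "{nonce}:{encoded_key}"
// shape, but hex-encoded because base64 is an external crate.
fn format_challenge_hex(nonce: i32, pubkey: &[u8]) -> Vec<u8> {
    let encoded: String = pubkey.iter().map(|b| format!("{b:02x}")).collect();
    format!("{nonce}:{encoded}").into_bytes()
}
```

Because the key bytes are part of the signed message, verifying with a different key fails twice over: the signature check fails and the reconstructed challenge differs.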
@@ -1,21 +1,25 @@
-use hmac::digest::Digest;
+pub use hmac::digest::Digest;
 use std::collections::HashSet;

 /// Deterministically hash a value by feeding its fields into the hasher in a consistent order.
+#[diagnostic::on_unimplemented(
+    note = "for local types consider adding `#[derive(arbiter_macros::Hashable)]` to your `{Self}` type",
+    note = "for types from other crates check whether the crate offers a `Hashable` implementation"
+)]
 pub trait Hashable {
     fn hash<H: Digest>(&self, hasher: &mut H);
 }

 macro_rules! impl_numeric {
     ($($t:ty),*) => {
         $(
             impl Hashable for $t {
                 fn hash<H: Digest>(&self, hasher: &mut H) {
                     hasher.update(&self.to_be_bytes());
                 }
             }
         )*
     };
 }

 impl_numeric!(u8, u16, u32, u64, i8, i16, i32, i64);
@@ -45,7 +49,7 @@ impl<T: Hashable + PartialOrd> Hashable for Vec<T> {
     }
 }

-impl<T: Hashable + PartialOrd> Hashable for HashSet<T> {
+impl<T: Hashable + PartialOrd, S: std::hash::BuildHasher> Hashable for HashSet<T, S> {
     fn hash<H: Digest>(&self, hasher: &mut H) {
         let ref_sorted = {
             let mut sorted = self.iter().collect::<Vec<_>>();
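The `HashSet` impl above sorts the elements before feeding them to the digest, because set iteration order is unspecified and would otherwise make the hash nondeterministic across runs. The same idea, sketched with `std`'s `DefaultHasher` in place of an HMAC digest (illustrative only, not the crate's `Hashable` trait):

```rust
use std::collections::HashSet;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash a set deterministically by sorting a snapshot of its elements first,
// so the result is independent of iteration order.
fn stable_set_hash(set: &HashSet<u32>) -> u64 {
    let mut sorted: Vec<&u32> = set.iter().collect();
    sorted.sort();
    let mut hasher = DefaultHasher::new();
    for item in sorted {
        item.hash(&mut hasher);
    }
    hasher.finish()
}
```

Without the sort, two equal sets built in different insertion orders could feed the hasher in different orders and disagree.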
server/crates/arbiter-crypto/src/lib.rs (new file, 5 lines)
@@ -0,0 +1,5 @@
+#[cfg(feature = "authn")]
+pub mod authn;
+pub mod hashing;
+#[cfg(feature = "safecell")]
+pub mod safecell;
@@ -29,7 +29,7 @@ pub trait SafeCellHandle<T> {
         let mut cell = Self::new(T::default());
         {
             let mut handle = cell.write();
-            f(handle.deref_mut());
+            f(&mut *handle);
         }
         cell
     }
@@ -105,6 +105,11 @@ impl<T> SafeCellHandle<T> for MemSafeCell<T> {

 fn abort_memory_breach(action: &str, err: &memsafe::error::MemoryError) -> ! {
     eprintln!("fatal {action}: {err}");
+    // SAFETY: Intentionally cause a segmentation fault to prevent further execution in a compromised state.
+    unsafe {
+        let unsafe_pointer = std::ptr::null_mut::<u8>();
+        std::ptr::write_volatile(unsafe_pointer, 0);
+    }
     std::process::abort();
 }

server/crates/arbiter-macros/Cargo.toml (new file, 18 lines)
@@ -0,0 +1,18 @@
+[package]
+name = "arbiter-macros"
+version = "0.1.0"
+edition = "2024"
+
+[lib]
+proc-macro = true
+
+[dependencies]
+proc-macro2 = "1.0"
+quote = "1.0"
+syn = { version = "2.0", features = ["derive", "fold", "full", "visit-mut"] }
+
+[dev-dependencies]
+arbiter-crypto = { path = "../arbiter-crypto" }
+
+[lints]
+workspace = true
server/crates/arbiter-macros/src/hashable.rs (new file, 133 lines)
@@ -0,0 +1,133 @@
+use proc_macro2::{Span, TokenStream, TokenTree};
+use quote::quote;
+use syn::parse_quote;
+use syn::spanned::Spanned;
+use syn::{DataStruct, DeriveInput, Fields, Generics, Index};
+
+use crate::utils::{HASHABLE_TRAIT_PATH, HMAC_DIGEST_PATH};
+
+pub(crate) fn derive(input: &DeriveInput) -> TokenStream {
+    match &input.data {
+        syn::Data::Struct(struct_data) => hashable_struct(input, struct_data),
+        syn::Data::Enum(_) => {
+            syn::Error::new_spanned(input, "Hashable can currently be derived only for structs")
+                .to_compile_error()
+        }
+        syn::Data::Union(_) => {
+            syn::Error::new_spanned(input, "Hashable cannot be derived for unions")
+                .to_compile_error()
+        }
+    }
+}
+
+fn hashable_struct(input: &DeriveInput, struct_data: &DataStruct) -> TokenStream {
+    let ident = &input.ident;
+    let hashable_trait = HASHABLE_TRAIT_PATH.to_path();
+    let hmac_digest = HMAC_DIGEST_PATH.to_path();
+    let generics = add_hashable_bounds(input.generics.clone(), &hashable_trait);
+    let field_accesses = collect_field_accesses(struct_data);
+    let hash_calls = build_hash_calls(&field_accesses, &hashable_trait);
+
+    let (impl_generics, ty_generics, where_clause) = generics.split_for_impl();
+
+    quote! {
+        #[automatically_derived]
+        impl #impl_generics #hashable_trait for #ident #ty_generics #where_clause {
+            fn hash<H: #hmac_digest>(&self, hasher: &mut H) {
+                #(#hash_calls)*
+            }
+        }
+    }
+}
+
+fn add_hashable_bounds(mut generics: Generics, hashable_trait: &syn::Path) -> Generics {
+    for type_param in generics.type_params_mut() {
+        type_param.bounds.push(parse_quote!(#hashable_trait));
+    }
+
+    generics
+}
+
+struct FieldAccess {
+    access: TokenStream,
+    span: Span,
+}
+
+fn collect_field_accesses(struct_data: &DataStruct) -> Vec<FieldAccess> {
+    match &struct_data.fields {
+        Fields::Named(fields) => {
+            // Keep deterministic alphabetical order for named fields.
+            // Do not remove this sort, because it keeps hash output stable regardless of source order.
+            let mut named_fields = fields
+                .named
+                .iter()
+                .map(|field| {
+                    let name = field
+                        .ident
+                        .as_ref()
+                        .expect("Fields::Named(fields) must have names")
+                        .clone();
+                    (name.to_string(), name)
+                })
+                .collect::<Vec<_>>();
+
+            named_fields.sort_by(|a, b| a.0.cmp(&b.0));
+
+            named_fields
+                .into_iter()
+                .map(|(_, name)| FieldAccess {
+                    access: quote! { #name },
+                    span: name.span(),
+                })
+                .collect()
+        }
+        Fields::Unnamed(fields) => fields
+            .unnamed
+            .iter()
+            .enumerate()
+            .map(|(i, field)| FieldAccess {
+                access: {
+                    let index = Index::from(i);
+                    quote! { #index }
+                },
+                span: field.ty.span(),
+            })
+            .collect(),
+        Fields::Unit => Vec::new(),
+    }
+}
+
+fn build_hash_calls(
+    field_accesses: &[FieldAccess],
+    hashable_trait: &syn::Path,
+) -> Vec<TokenStream> {
+    field_accesses
+        .iter()
+        .map(|field| {
+            let access = &field.access;
+            let call = quote! {
+                #hashable_trait::hash(&self.#access, hasher);
+            };

+            respan(call, field.span)
+        })
+        .collect()
+}
+
+/// Recursively set span on all tokens, including interpolated ones.
+fn respan(tokens: TokenStream, span: Span) -> TokenStream {
+    tokens
+        .into_iter()
+        .map(|tt| match tt {
+            TokenTree::Group(g) => {
+                let mut new = proc_macro2::Group::new(g.delimiter(), respan(g.stream(), span));
+                new.set_span(span);
+                TokenTree::Group(new)
+            }
+            mut other => {
+                other.set_span(span);
+                other
+            }
+        })
+        .collect()
+}
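The derive above sorts named fields alphabetically before emitting hash calls, so for a hypothetical `struct Point { y: u32, x: u32 }` the generated impl hashes `x` before `y` regardless of declaration order. A hand-written sketch of that expansion against a simplified stand-in trait (std-only, not the real `arbiter_crypto::hashing::Hashable`):

```rust
// Simplified stand-in for the Hashable trait: collect bytes into a Vec
// instead of updating a Digest.
trait Hashable {
    fn hash_into(&self, out: &mut Vec<u8>);
}

impl Hashable for u32 {
    fn hash_into(&self, out: &mut Vec<u8>) {
        out.extend_from_slice(&self.to_be_bytes());
    }
}

// Hypothetical struct with fields deliberately declared out of alphabetical order.
struct Point {
    y: u32,
    x: u32,
}

// Roughly what #[derive(Hashable)] would expand to: one hash call per field,
// in alphabetical order (x before y), independent of declaration order.
impl Hashable for Point {
    fn hash_into(&self, out: &mut Vec<u8>) {
        Hashable::hash_into(&self.x, out);
        Hashable::hash_into(&self.y, out);
    }
}
```

Reordering the struct's fields therefore cannot change the hash, which is exactly the stability property the macro's comment insists on.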
server/crates/arbiter-macros/src/lib.rs (new file, 10 lines)
@@ -0,0 +1,10 @@
+use syn::{DeriveInput, parse_macro_input};
+
+mod hashable;
+mod utils;
+
+#[proc_macro_derive(Hashable)]
+pub fn derive_hashable(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
+    let input = parse_macro_input!(input as DeriveInput);
+    hashable::derive(&input).into()
+}
server/crates/arbiter-macros/src/utils.rs (new file, 24 lines)
@@ -0,0 +1,24 @@
+pub(crate) struct ToPath(pub &'static str);
+
+impl ToPath {
+    pub(crate) fn to_path(&self) -> syn::Path {
+        syn::parse_str(self.0).expect("Invalid path")
+    }
+}
+
+macro_rules! ensure_path {
+    ($path:path as $name:ident) => {
+        const _: () = {
+            #[cfg(test)]
+            #[expect(
+                unused_imports,
+                reason = "Ensures the path is valid and will cause a compile error if not"
+            )]
+            use $path as _;
+        };
+        pub(crate) const $name: ToPath = ToPath(stringify!($path));
+    };
+}
+
+ensure_path!(::arbiter_crypto::hashing::Hashable as HASHABLE_TRAIT_PATH);
+ensure_path!(::arbiter_crypto::hashing::Digest as HMAC_DIGEST_PATH);
@@ -17,7 +17,7 @@ url = "2.5.8"
 miette.workspace = true
 thiserror.workspace = true
 rustls-pki-types.workspace = true
-base64 = "0.22.1"
+base64.workspace = true
 prost-types.workspace = true
 tracing.workspace = true
 async-trait.workspace = true
@@ -1,8 +1,6 @@
 pub mod transport;
 pub mod url;

-use base64::{Engine, prelude::BASE64_STANDARD};
-
 pub mod proto {
     tonic::include_proto!("arbiter");

@@ -84,8 +82,3 @@ pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {

     Ok(arbiter_home)
 }
-
-pub fn format_challenge(nonce: i32, pubkey: &[u8]) -> Vec<u8> {
-    let concat_form = format!("{}:{}", nonce, BASE64_STANDARD.encode(pubkey));
-    concat_form.into_bytes()
-}
@@ -105,7 +105,7 @@ mod tests {

     #[rstest]

-    fn test_parsing_correctness(
+    fn parsing_correctness(
         #[values("127.0.0.1", "localhost", "192.168.1.1", "some.domain.com")] host: &str,

         #[values(None, Some("token123".to_string()))] bootstrap_token: Option<String>,
@@ -16,9 +16,9 @@ diesel-async = { version = "0.8.0", features = [
     "sqlite",
     "tokio",
 ] }
-ed25519-dalek.workspace = true
-ed25519-dalek.features = ["serde"]
 arbiter-proto.path = "../arbiter-proto"
+arbiter-crypto.path = "../arbiter-crypto"
+arbiter-macros.path = "../arbiter-macros"
 tracing.workspace = true
 tracing-subscriber = { version = "0.3", features = ["env-filter"] }
 tonic.workspace = true
@@ -37,21 +37,15 @@ dashmap = "6.1.0"
 rand.workspace = true
 rcgen.workspace = true
 chrono.workspace = true
-memsafe = "0.4.0"
 zeroize = { version = "1.8.2", features = ["std", "simd"] }
 kameo.workspace = true
-x25519-dalek.workspace = true
 chacha20poly1305 = { version = "0.10.1", features = ["std"] }
 argon2 = { version = "0.5.3", features = ["zeroize"] }
 restructed = "0.2.2"
 strum = { version = "0.28.0", features = ["derive"] }
 pem = "3.0.6"
-k256.workspace = true
-k256.features = ["serde"]
-rsa.workspace = true
-rsa.features = ["serde"]
 sha2.workspace = true
-hmac = "0.12"
+hmac.workspace = true
 spki.workspace = true
 alloy.workspace = true
 prost-types.workspace = true
@@ -61,6 +55,10 @@ anyhow = "1.0.102"
 serde_with = "3.18.0"
 mutants.workspace = true
 subtle = "2.6.1"
+ml-dsa.workspace = true
+ed25519-dalek.workspace = true
+x25519-dalek.workspace = true
+k256.workspace = true

 [dev-dependencies]
 insta = "1.46.3"
@@ -47,7 +47,7 @@ create table if not exists useragent_client (
     id integer not null primary key,
     nonce integer not null default(1), -- used for auth challenge
     public_key blob not null,
-    key_type integer not null default(1), -- 1=Ed25519, 2=ECDSA(secp256k1)
+    key_type integer not null default(1),
     created_at integer not null default(unixepoch ('now')),
     updated_at integer not null default(unixepoch ('now'))
 ) STRICT;
@@ -13,8 +13,8 @@ const TOKEN_LENGTH: usize = 64;
|
|||||||
pub async fn generate_token() -> Result<String, std::io::Error> {
|
pub async fn generate_token() -> Result<String, std::io::Error> {
|
||||||
let rng: StdRng = make_rng();
|
let rng: StdRng = make_rng();
|
||||||
|
|
||||||
let token: String = rng.sample_iter(Alphanumeric).take(TOKEN_LENGTH).fold(
|
```diff
     let token = rng.sample_iter(Alphanumeric).take(TOKEN_LENGTH).fold(
-        Default::default(),
+        String::default(),
         |mut accum, char| {
             accum += char.to_string().as_str();
             accum
@@ -27,15 +27,15 @@ pub async fn generate_token() -> Result<String, std::io::Error> {
 }
 
 #[derive(Error, Debug)]
-pub enum Error {
+pub enum BootstrappError {
     #[error("Database error: {0}")]
     Database(#[from] db::PoolError),
 
-    #[error("Database query error: {0}")]
-    Query(#[from] diesel::result::Error),
-
     #[error("I/O error: {0}")]
     Io(#[from] std::io::Error),
+
+    #[error("Database query error: {0}")]
+    Query(#[from] diesel::result::Error),
 }
 
 #[derive(Actor)]
@@ -44,7 +44,7 @@ pub struct Bootstrapper {
 }
 
 impl Bootstrapper {
-    pub async fn new(db: &DatabasePool) -> Result<Self, Error> {
+    pub async fn new(db: &DatabasePool) -> Result<Self, BootstrappError> {
         let row_count: i64 = {
             let mut conn = db.get().await?;
 
@@ -69,16 +69,13 @@ impl Bootstrapper {
 impl Bootstrapper {
     #[message]
     pub fn is_correct_token(&self, token: String) -> bool {
-        match &self.token {
-            Some(expected) => {
-                let expected_bytes = expected.as_bytes();
-                let token_bytes = token.as_bytes();
+        self.token.as_ref().is_some_and(|expected| {
+            let expected_bytes = expected.as_bytes();
+            let token_bytes = token.as_bytes();
 
-                let choice = expected_bytes.ct_eq(token_bytes);
-                bool::from(choice)
-            }
-            None => false,
-        }
+            let choice = expected_bytes.ct_eq(token_bytes);
+            bool::from(choice)
+        })
     }
 
     #[message]
```
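The token check above keeps the `ct_eq` constant-time comparison (from the `subtle` crate) while collapsing the `match`/`None => false` arms into `is_some_and`. A minimal std-only sketch of the same two ideas, with a hand-rolled stand-in for `ct_eq` (the real code should keep using `subtle`):

```rust
// Stand-in for subtle's ct_eq: accumulate XOR differences so the comparison
// time does not depend on where the first mismatch occurs (unlike `==`,
// which short-circuits).
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn is_correct_token(expected: Option<&str>, token: &str) -> bool {
    // `is_some_and` replaces the explicit `match`, as in the diff:
    // a missing token is simply never correct.
    expected.is_some_and(|e| ct_eq(e.as_bytes(), token.as_bytes()))
}

fn main() {
    assert!(is_correct_token(Some("abc123"), "abc123"));
    assert!(!is_correct_token(Some("abc123"), "abc124"));
    assert!(!is_correct_token(None, "anything"));
}
```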
```diff
@@ -1,5 +1,7 @@
+use arbiter_crypto::authn::{self, CLIENT_CONTEXT};
 use arbiter_proto::{
-    ClientMetadata, format_challenge,
+    ClientMetadata,
+    proto::client::auth::{AuthChallenge as ProtoAuthChallenge, AuthResult as ProtoAuthResult},
     transport::{Bi, expect_message},
 };
 use chrono::Utc;
@@ -8,7 +10,6 @@ use diesel::{
     dsl::insert_into, update,
 };
 use diesel_async::RunQueryDsl as _;
-use ed25519_dalek::{Signature, VerifyingKey};
 use kameo::{actor::ActorRef, error::SendError};
 use tracing::error;
 
@@ -18,7 +19,7 @@ use crate::{
         flow_coordinator::{self, RequestClientApproval},
         keyholder::KeyHolder,
     },
-    crypto::integrity::{self},
+    crypto::integrity::{self, AttestationStatus},
     db::{
         self,
         models::{ProgramClientMetadata, SqliteTimestamp},
@@ -27,34 +28,60 @@ use crate::{
 };
 
 #[derive(thiserror::Error, Debug, Clone, PartialEq, Eq)]
-pub enum Error {
-    #[error("Database pool unavailable")]
-    DatabasePoolUnavailable,
-    #[error("Database operation failed")]
-    DatabaseOperationFailed,
-    #[error("Integrity check failed")]
-    IntegrityCheckFailed,
-    #[error("Invalid challenge solution")]
-    InvalidChallengeSolution,
+pub enum ClientAuthError {
     #[error("Client approval request failed")]
     ApproveError(#[from] ApproveError),
 
+    #[error("Database operation failed")]
+    DatabaseOperationFailed,
+
+    #[error("Database pool unavailable")]
+    DatabasePoolUnavailable,
+
+    #[error("Integrity check failed")]
+    IntegrityCheckFailed,
+
+    #[error("Invalid challenge solution")]
+    InvalidChallengeSolution,
+
     #[error("Transport error")]
     Transport,
 }
 
-impl From<diesel::result::Error> for Error {
+impl From<diesel::result::Error> for ClientAuthError {
     fn from(e: diesel::result::Error) -> Self {
         error!(?e, "Database error");
         Self::DatabaseOperationFailed
     }
 }
 
+impl From<ClientAuthError> for arbiter_proto::proto::client::auth::AuthResult {
+    fn from(value: ClientAuthError) -> Self {
+        match value {
+            ClientAuthError::ApproveError(e) => match e {
+                ApproveError::Denied => Self::ApprovalDenied,
+                ApproveError::Internal => Self::Internal,
+                ApproveError::Upstream(flow_coordinator::ApprovalError::NoUserAgentsConnected) => {
+                    Self::NoUserAgentsOnline
+                } // ApproveError::Upstream(_) => Self::Internal,
+            },
+            ClientAuthError::DatabaseOperationFailed
+            | ClientAuthError::DatabasePoolUnavailable
+            | ClientAuthError::IntegrityCheckFailed
+            | ClientAuthError::Transport => Self::Internal,
+            ClientAuthError::InvalidChallengeSolution => Self::InvalidSignature,
+        }
+    }
+}
+
 #[derive(thiserror::Error, Debug, Clone, PartialEq, Eq)]
 pub enum ApproveError {
-    #[error("Internal error")]
-    Internal,
     #[error("Client connection denied by user agents")]
     Denied,
 
+    #[error("Internal error")]
+    Internal,
+
     #[error("Upstream error: {0}")]
     Upstream(flow_coordinator::ApprovalError),
 }
@@ -62,30 +89,45 @@ pub enum ApproveError {
 #[derive(Debug, Clone)]
 pub enum Inbound {
     AuthChallengeRequest {
-        pubkey: VerifyingKey,
+        pubkey: authn::PublicKey,
         metadata: ClientMetadata,
     },
     AuthChallengeSolution {
-        signature: Signature,
+        signature: authn::Signature,
     },
 }
 
 #[derive(Debug, Clone)]
 pub enum Outbound {
-    AuthChallenge { pubkey: VerifyingKey, nonce: i32 },
+    AuthChallenge {
+        pubkey: authn::PublicKey,
+        nonce: i32,
+    },
     AuthSuccess,
 }
 
+impl From<Outbound> for arbiter_proto::proto::client::auth::response::Payload {
+    fn from(value: Outbound) -> Self {
+        match value {
+            Outbound::AuthChallenge { pubkey, nonce } => Self::Challenge(ProtoAuthChallenge {
+                pubkey: pubkey.to_bytes(),
+                nonce,
+            }),
+            Outbound::AuthSuccess => Self::Result(ProtoAuthResult::Success.into()),
+        }
+    }
+}
+
 /// Returns the current nonce and client ID for a registered client.
 /// Returns `None` if the pubkey is not registered.
 async fn get_current_nonce_and_id(
     db: &db::DatabasePool,
-    pubkey: &VerifyingKey,
-) -> Result<Option<(i32, i32)>, Error> {
-    let pubkey_bytes = pubkey.as_bytes().to_vec();
+    pubkey: &authn::PublicKey,
+) -> Result<Option<(i32, i32)>, ClientAuthError> {
+    let pubkey_bytes = pubkey.to_bytes();
     let mut conn = db.get().await.map_err(|e| {
         error!(error = ?e, "Database pool error");
-        Error::DatabasePoolUnavailable
+        ClientAuthError::DatabasePoolUnavailable
     })?;
     program_client::table
         .filter(program_client::public_key.eq(&pubkey_bytes))
@@ -95,30 +137,30 @@ async fn get_current_nonce_and_id(
         .optional()
         .map_err(|e| {
             error!(error = ?e, "Database error");
-            Error::DatabaseOperationFailed
+            ClientAuthError::DatabaseOperationFailed
         })
 }
 
 async fn verify_integrity(
     db: &db::DatabasePool,
     keyholder: &ActorRef<KeyHolder>,
-    pubkey: &VerifyingKey,
-) -> Result<(), Error> {
+    pubkey: &authn::PublicKey,
+) -> Result<(), ClientAuthError> {
     let mut db_conn = db.get().await.map_err(|e| {
         error!(error = ?e, "Database pool error");
-        Error::DatabasePoolUnavailable
+        ClientAuthError::DatabasePoolUnavailable
     })?;
 
     let (id, nonce) = get_current_nonce_and_id(db, pubkey).await?.ok_or_else(|| {
         error!("Client not found during integrity verification");
-        Error::DatabaseOperationFailed
+        ClientAuthError::DatabaseOperationFailed
     })?;
 
-    integrity::verify_entity(
+    let attestation = integrity::verify_entity(
         &mut db_conn,
         keyholder,
         &ClientCredentials {
-            pubkey: *pubkey,
+            pubkey: pubkey.clone(),
             nonce,
         },
         id,
@@ -126,9 +168,14 @@ async fn verify_integrity(
     .await
     .map_err(|e| {
         error!(?e, "Integrity verification failed");
-        Error::IntegrityCheckFailed
+        ClientAuthError::IntegrityCheckFailed
     })?;
 
+    if attestation != AttestationStatus::Attested {
+        error!("Integrity attestation unavailable for client {id}");
+        return Err(ClientAuthError::IntegrityCheckFailed);
+    }
+
     Ok(())
 }
 
@@ -137,17 +184,19 @@ async fn verify_integrity(
 async fn create_nonce(
     db: &db::DatabasePool,
     keyholder: &ActorRef<KeyHolder>,
-    pubkey: &VerifyingKey,
-) -> Result<i32, Error> {
-    let pubkey_bytes = pubkey.as_bytes().to_vec();
+    pubkey: &authn::PublicKey,
+) -> Result<i32, ClientAuthError> {
+    let pubkey_bytes = pubkey.to_bytes();
+    let pubkey = pubkey.clone();
 
     let mut conn = db.get().await.map_err(|e| {
         error!(error = ?e, "Database pool error");
-        Error::DatabasePoolUnavailable
+        ClientAuthError::DatabasePoolUnavailable
     })?;
 
     conn.exclusive_transaction(|conn| {
         let keyholder = keyholder.clone();
+        let pubkey = pubkey.clone();
         Box::pin(async move {
             let (id, new_nonce): (i32, i32) = update(program_client::table)
                 .filter(program_client::public_key.eq(&pubkey_bytes))
@@ -160,7 +209,7 @@ async fn create_nonce(
                 conn,
                 &keyholder,
                 &ClientCredentials {
-                    pubkey: *pubkey,
+                    pubkey: pubkey.clone(),
                     nonce: new_nonce,
                 },
                 id,
@@ -168,7 +217,7 @@ async fn create_nonce(
             .await
             .map_err(|e| {
                 error!(?e, "Integrity sign failed after nonce update");
-                Error::DatabaseOperationFailed
+                ClientAuthError::DatabaseOperationFailed
             })?;
 
             Ok(new_nonce)
@@ -180,7 +229,7 @@ async fn create_nonce(
 async fn approve_new_client(
     actors: &crate::actors::GlobalActors,
     profile: ClientProfile,
-) -> Result<(), Error> {
+) -> Result<(), ClientAuthError> {
     let result = actors
         .flow_coordinator
         .ask(RequestClientApproval { client: profile })
@@ -188,14 +237,14 @@ async fn approve_new_client(
 
     match result {
         Ok(true) => Ok(()),
-        Ok(false) => Err(Error::ApproveError(ApproveError::Denied)),
+        Ok(false) => Err(ClientAuthError::ApproveError(ApproveError::Denied)),
         Err(SendError::HandlerError(e)) => {
             error!(error = ?e, "Approval upstream error");
-            Err(Error::ApproveError(ApproveError::Upstream(e)))
+            Err(ClientAuthError::ApproveError(ApproveError::Upstream(e)))
         }
         Err(e) => {
             error!(error = ?e, "Approval request to flow coordinator failed");
-            Err(Error::ApproveError(ApproveError::Internal))
+            Err(ClientAuthError::ApproveError(ApproveError::Internal))
         }
     }
 }
@@ -203,19 +252,21 @@ async fn approve_new_client(
 async fn insert_client(
     db: &db::DatabasePool,
     keyholder: &ActorRef<KeyHolder>,
-    pubkey: &VerifyingKey,
+    pubkey: &authn::PublicKey,
     metadata: &ClientMetadata,
-) -> Result<i32, Error> {
-    use crate::db::schema::{client_metadata, program_client};
+) -> Result<i32, ClientAuthError> {
+    use crate::db::schema::client_metadata;
+    let pubkey = pubkey.clone();
     let metadata = metadata.clone();
 
     let mut conn = db.get().await.map_err(|e| {
         error!(error = ?e, "Database pool error");
-        Error::DatabasePoolUnavailable
+        ClientAuthError::DatabasePoolUnavailable
     })?;
 
     conn.exclusive_transaction(|conn| {
         let keyholder = keyholder.clone();
+        let pubkey = pubkey.clone();
         Box::pin(async move {
             const NONCE_START: i32 = 1;
 
@@ -231,7 +282,7 @@ async fn insert_client(
 
             let client_id = insert_into(program_client::table)
                 .values((
-                    program_client::public_key.eq(pubkey.as_bytes().to_vec()),
+                    program_client::public_key.eq(pubkey.to_bytes()),
                     program_client::metadata_id.eq(metadata_id),
                     program_client::nonce.eq(NONCE_START),
                 ))
@@ -244,7 +295,7 @@ async fn insert_client(
                 conn,
                 &keyholder,
                 &ClientCredentials {
-                    pubkey: *pubkey,
+                    pubkey: pubkey.clone(),
                     nonce: NONCE_START,
                 },
                 client_id,
@@ -252,7 +303,7 @@ async fn insert_client(
             .await
             .map_err(|e| {
                 error!(error = ?e, "Failed to sign integrity tag for new client key");
-                Error::DatabaseOperationFailed
+                ClientAuthError::DatabaseOperationFailed
             })?;
 
             Ok(client_id)
@@ -265,14 +316,14 @@ async fn sync_client_metadata(
     db: &db::DatabasePool,
     client_id: i32,
     metadata: &ClientMetadata,
-) -> Result<(), Error> {
+) -> Result<(), ClientAuthError> {
     use crate::db::schema::{client_metadata, client_metadata_history};
 
     let now = SqliteTimestamp(Utc::now());
 
     let mut conn = db.get().await.map_err(|e| {
         error!(error = ?e, "Database pool error");
-        Error::DatabasePoolUnavailable
+        ClientAuthError::DatabasePoolUnavailable
     })?;
 
     conn.exclusive_transaction(|conn| {
@@ -328,70 +379,71 @@ async fn sync_client_metadata(
     .await
     .map_err(|e| {
         error!(error = ?e, "Database error");
-        Error::DatabaseOperationFailed
+        ClientAuthError::DatabaseOperationFailed
     })
 }
 
 async fn challenge_client<T>(
     transport: &mut T,
-    pubkey: VerifyingKey,
+    pubkey: authn::PublicKey,
     nonce: i32,
-) -> Result<(), Error>
+) -> Result<(), ClientAuthError>
 where
-    T: Bi<Inbound, Result<Outbound, Error>> + ?Sized,
+    T: Bi<Inbound, Result<Outbound, ClientAuthError>> + ?Sized,
 {
     transport
-        .send(Ok(Outbound::AuthChallenge { pubkey, nonce }))
+        .send(Ok(Outbound::AuthChallenge {
+            pubkey: pubkey.clone(),
+            nonce,
+        }))
        .await
        .map_err(|e| {
            error!(error = ?e, "Failed to send auth challenge");
-            Error::Transport
+            ClientAuthError::Transport
        })?;
 
     let signature = expect_message(transport, |req: Inbound| match req {
         Inbound::AuthChallengeSolution { signature } => Some(signature),
-        _ => None,
+        Inbound::AuthChallengeRequest { .. } => None,
     })
     .await
     .map_err(|e| {
         error!(error = ?e, "Failed to receive challenge solution");
-        Error::Transport
+        ClientAuthError::Transport
     })?;
 
-    let formatted = format_challenge(nonce, pubkey.as_bytes());
-
-    pubkey.verify_strict(&formatted, &signature).map_err(|_| {
+    if !pubkey.verify(nonce, CLIENT_CONTEXT, &signature) {
         error!("Challenge solution verification failed");
-        Error::InvalidChallengeSolution
-    })?;
+        return Err(ClientAuthError::InvalidChallengeSolution);
+    }
 
     Ok(())
 }
 
-pub async fn authenticate<T>(props: &mut ClientConnection, transport: &mut T) -> Result<i32, Error>
+pub async fn authenticate<T>(
+    props: &mut ClientConnection,
+    transport: &mut T,
+) -> Result<i32, ClientAuthError>
 where
-    T: Bi<Inbound, Result<Outbound, Error>> + Send + ?Sized,
+    T: Bi<Inbound, Result<Outbound, ClientAuthError>> + Send + ?Sized,
 {
     let Some(Inbound::AuthChallengeRequest { pubkey, metadata }) = transport.recv().await else {
-        return Err(Error::Transport);
+        return Err(ClientAuthError::Transport);
     };
 
-    let client_id = match get_current_nonce_and_id(&props.db, &pubkey).await? {
-        Some((id, _)) => {
-            verify_integrity(&props.db, &props.actors.key_holder, &pubkey).await?;
-            id
-        }
-        None => {
-            approve_new_client(
-                &props.actors,
-                ClientProfile {
-                    pubkey,
-                    metadata: metadata.clone(),
-                },
-            )
-            .await?;
-            insert_client(&props.db, &props.actors.key_holder, &pubkey, &metadata).await?
-        }
-    };
+    let client_id = if let Some((id, _)) = get_current_nonce_and_id(&props.db, &pubkey).await? {
+        verify_integrity(&props.db, &props.actors.key_holder, &pubkey).await?;
+        id
+    } else {
+        approve_new_client(
+            &props.actors,
+            ClientProfile {
+                pubkey: pubkey.clone(),
+                metadata: metadata.clone(),
+            },
+        )
+        .await?;
+        insert_client(&props.db, &props.actors.key_holder, &pubkey, &metadata).await?
+    };
 
     sync_client_metadata(&props.db, client_id, &metadata).await?;
@@ -403,7 +455,7 @@ where
     .await
     .map_err(|e| {
         error!(error = ?e, "Failed to send auth success");
-        Error::Transport
+        ClientAuthError::Transport
     })?;
 
     Ok(client_id)
```
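The new `From<ClientAuthError> for AuthResult` impl above maps every internal error variant to a wire-level result with an exhaustive `match`, so adding an error variant without choosing its wire mapping becomes a compile error. A minimal sketch of that pattern with stand-in enums (the real types live in `arbiter_proto`; these names are placeholders):

```rust
// Stand-ins for the internal error and the protobuf result enums.
#[derive(Debug, PartialEq)]
enum AuthError {
    Denied,
    Internal,
    InvalidSolution,
}

#[derive(Debug, PartialEq)]
enum WireResult {
    ApprovalDenied,
    Internal,
    InvalidSignature,
}

impl From<AuthError> for WireResult {
    fn from(e: AuthError) -> Self {
        // Exhaustive match, no `_` arm: a new AuthError variant fails to
        // compile here until a wire mapping is written for it.
        match e {
            AuthError::Denied => WireResult::ApprovalDenied,
            AuthError::Internal => WireResult::Internal,
            AuthError::InvalidSolution => WireResult::InvalidSignature,
        }
    }
}

fn main() {
    assert_eq!(WireResult::from(AuthError::Denied), WireResult::ApprovalDenied);
    assert_eq!(
        WireResult::from(AuthError::InvalidSolution),
        WireResult::InvalidSignature
    );
}
```

The same reasoning applies to replacing `_ => None` with `Inbound::AuthChallengeRequest { .. } => None` in `expect_message`: exhaustive arms make new message variants a visible decision rather than a silent fall-through.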
```diff
@@ -1,21 +1,23 @@
+use arbiter_crypto::authn;
 use arbiter_proto::{ClientMetadata, transport::Bi};
 use kameo::actor::Spawn;
 use tracing::{error, info};
 
 use crate::{
     actors::{GlobalActors, client::session::ClientSession},
-    crypto::integrity::{Integrable, hashing::Hashable},
+    crypto::integrity::Integrable,
     db,
 };
 
 #[derive(Debug, Clone)]
 pub struct ClientProfile {
-    pub pubkey: ed25519_dalek::VerifyingKey,
+    pub pubkey: authn::PublicKey,
     pub metadata: ClientMetadata,
 }
 
+#[derive(arbiter_macros::Hashable)]
 pub struct ClientCredentials {
-    pub pubkey: ed25519_dalek::VerifyingKey,
+    pub pubkey: authn::PublicKey,
     pub nonce: i32,
 }
 
@@ -23,20 +25,13 @@ impl Integrable for ClientCredentials {
     const KIND: &'static str = "client_credentials";
 }
 
-impl Hashable for ClientCredentials {
-    fn hash<H: sha2::Digest>(&self, hasher: &mut H) {
-        hasher.update(self.pubkey.as_bytes());
-        self.nonce.hash(hasher);
-    }
-}
-
 pub struct ClientConnection {
     pub(crate) db: db::DatabasePool,
     pub(crate) actors: GlobalActors,
 }
 
 impl ClientConnection {
-    pub fn new(db: db::DatabasePool, actors: GlobalActors) -> Self {
+    pub const fn new(db: db::DatabasePool, actors: GlobalActors) -> Self {
         Self { db, actors }
     }
 }
@@ -46,9 +41,11 @@ pub mod session;
 
 pub async fn connect_client<T>(mut props: ClientConnection, transport: &mut T)
 where
-    T: Bi<auth::Inbound, Result<auth::Outbound, auth::Error>> + Send + ?Sized,
+    T: Bi<auth::Inbound, Result<auth::Outbound, auth::ClientAuthError>> + Send + ?Sized,
 {
-    match auth::authenticate(&mut props, transport).await {
+    let fut = auth::authenticate(&mut props, transport);
+    println!("authenticate future size: {}", size_of_val(&fut));
+    match fut.await {
         Ok(client_id) => {
             ClientSession::spawn(ClientSession::new(props, client_id));
             info!("Client authenticated, session started");
```
```diff
@@ -21,7 +21,7 @@ pub struct ClientSession {
 }
 
 impl ClientSession {
-    pub(crate) fn new(props: ClientConnection, client_id: i32) -> Self {
+    pub(crate) const fn new(props: ClientConnection, client_id: i32) -> Self {
         Self { props, client_id }
     }
 }
@@ -29,14 +29,16 @@ impl ClientSession {
 #[messages]
 impl ClientSession {
     #[message]
-    pub(crate) async fn handle_query_vault_state(&mut self) -> Result<KeyHolderState, Error> {
+    pub(crate) async fn handle_query_vault_state(
+        &mut self,
+    ) -> Result<KeyHolderState, ClientSessionError> {
         use crate::actors::keyholder::GetState;
 
         let vault_state = match self.props.actors.key_holder.ask(GetState {}).await {
             Ok(state) => state,
             Err(err) => {
                 error!(?err, actor = "client", "keyholder.query.failed");
-                return Err(Error::Internal);
+                return Err(ClientSessionError::Internal);
             }
         };
 
@@ -75,7 +77,7 @@ impl ClientSession {
 impl Actor for ClientSession {
     type Args = Self;
 
-    type Error = Error;
+    type Error = ClientSessionError;
 
     async fn on_start(
         args: Self::Args,
@@ -86,13 +88,13 @@ impl Actor for ClientSession {
             .flow_coordinator
             .ask(RegisterClient { actor: this })
             .await
-            .map_err(|_| Error::ConnectionRegistrationFailed)?;
+            .map_err(|_| ClientSessionError::ConnectionRegistrationFailed)?;
         Ok(args)
     }
 }
 
 impl ClientSession {
-    pub fn new_test(db: db::DatabasePool, actors: GlobalActors) -> Self {
+    pub const fn new_test(db: db::DatabasePool, actors: GlobalActors) -> Self {
         let props = ClientConnection::new(db, actors);
         Self {
             props,
@@ -102,7 +104,7 @@ impl ClientSession {
 }
 
 #[derive(Debug, thiserror::Error)]
-pub enum Error {
+pub enum ClientSessionError {
     #[error("Connection registration failed")]
     ConnectionRegistrationFailed,
     #[error("Internal error")]
@@ -111,9 +113,9 @@ pub enum Error {
 
 #[derive(Debug, thiserror::Error)]
 pub enum SignTransactionRpcError {
-    #[error("Policy evaluation failed")]
-    Vet(#[from] VetError),
-
     #[error("Internal error")]
     Internal,
+
+    #[error("Policy evaluation failed")]
+    Vet(#[from] VetError),
 }
```
```diff
@@ -1,4 +1,6 @@
-use alloy::{consensus::TxEip1559, primitives::Address, signers::Signature};
+use alloy::{
+    consensus::TxEip1559, network::TxSignerSync as _, primitives::Address, signers::Signature,
+};
 use diesel::{
     ExpressionMethods, OptionalExtension as _, QueryDsl, SelectableHelper as _, dsl::insert_into,
 };
@@ -21,8 +23,8 @@ use crate::{
             ether_transfer::EtherTransfer, token_transfers::TokenTransfer,
         },
     },
-    safe_cell::{SafeCell, SafeCellHandle as _},
 };
+use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};
 
 pub use crate::evm::safe_signer;
 
@@ -35,7 +37,7 @@ pub enum SignTransactionError {
     Database(#[from] DatabaseError),
 
     #[error("Keyholder error: {0}")]
-    Keyholder(#[from] crate::actors::keyholder::Error),
+    Keyholder(#[from] crate::actors::keyholder::KeyHolderError),
 
     #[error("Keyholder mailbox error")]
     KeyholderSend,
@@ -48,9 +50,9 @@ pub enum SignTransactionError {
 }
 
 #[derive(Debug, thiserror::Error)]
-pub enum Error {
+pub enum EvmActorError {
     #[error("Keyholder error: {0}")]
-    Keyholder(#[from] crate::actors::keyholder::Error),
+    Keyholder(#[from] crate::actors::keyholder::KeyHolderError),
 
     #[error("Keyholder mailbox error")]
     KeyholderSend,
@@ -59,7 +61,7 @@ pub enum Error {
     Database(#[from] DatabaseError),
 
     #[error("Integrity violation: {0}")]
-    Integrity(#[from] integrity::Error),
+    Integrity(#[from] integrity::IntegrityError),
 }
 
 #[derive(Actor)]
@@ -88,7 +90,7 @@ impl EvmActor {
 #[messages]
 impl EvmActor {
     #[message]
-    pub async fn generate(&mut self) -> Result<(i32, Address), Error> {
+    pub async fn generate(&mut self) -> Result<(i32, Address), EvmActorError> {
         let (mut key_cell, address) = safe_signer::generate(&mut self.rng);
 
         let plaintext = key_cell.read_inline(|reader| SafeCell::new(reader.to_vec()));
@@ -97,7 +99,7 @@ impl EvmActor {
             .keyholder
             .ask(CreateNew { plaintext })
             .await
-            .map_err(|_| Error::KeyholderSend)?;
+            .map_err(|_| EvmActorError::KeyholderSend)?;
 
         let mut conn = self.db.get().await.map_err(DatabaseError::from)?;
         let wallet_id = insert_into(schema::evm_wallet::table)
@@ -114,7 +116,7 @@ impl EvmActor {
     }
 
     #[message]
-    pub async fn list_wallets(&self) -> Result<Vec<(i32, Address)>, Error> {
+    pub async fn list_wallets(&self) -> Result<Vec<(i32, Address)>, EvmActorError> {
         let mut conn = self.db.get().await.map_err(DatabaseError::from)?;
         let rows: Vec<models::EvmWallet> = schema::evm_wallet::table
             .select(models::EvmWallet::as_select())
@@ -136,7 +138,7 @@ impl EvmActor {
         &mut self,
         basic: SharedGrantSettings,
         grant: SpecificGrant,
-    ) -> Result<integrity::Verified<i32>, Error> {
+    ) -> Result<i32, EvmActorError> {
         match grant {
             SpecificGrant::EtherTransfer(settings) => self
                 .engine
@@ -145,7 +147,7 @@ impl EvmActor {
                     specific: settings,
                 })
                 .await
-                .map_err(Error::from),
+                .map_err(EvmActorError::from),
             SpecificGrant::TokenTransfer(settings) => self
                 .engine
                 .create_grant::<TokenTransfer>(CombinedSettings {
@@ -153,12 +155,13 @@ impl EvmActor {
                     specific: settings,
                 })
                 .await
-                .map_err(Error::from),
+                .map_err(EvmActorError::from),
         }
     }
 
     #[message]
-    pub async fn useragent_delete_grant(&mut self, _grant_id: i32) -> Result<(), Error> {
+    #[expect(clippy::unused_async, reason = "reserved for impl")]
+    pub async fn useragent_delete_grant(&mut self, _grant_id: i32) -> Result<(), EvmActorError> {
         // let mut conn = self.db.get().await.map_err(DatabaseError::from)?;
         // let keyholder = self.keyholder.clone();
 
@@ -183,11 +186,15 @@ impl EvmActor {
     }
 
     #[message]
-    pub async fn useragent_list_grants(&mut self) -> Result<Vec<Grant<SpecificGrant>>, Error> {
```
|
pub async fn useragent_list_grants(
|
||||||
|
&mut self,
|
||||||
|
) -> Result<Vec<Grant<SpecificGrant>>, EvmActorError> {
|
||||||
match self.engine.list_all_grants().await {
|
match self.engine.list_all_grants().await {
|
||||||
Ok(grants) => Ok(grants),
|
Ok(grants) => Ok(grants),
|
||||||
Err(ListError::Database(db_err)) => Err(Error::Database(db_err)),
|
Err(ListError::Database(db_err)) => Err(EvmActorError::Database(db_err)),
|
||||||
Err(ListError::Integrity(integrity_err)) => Err(Error::Integrity(integrity_err)),
|
Err(ListError::Integrity(integrity_err)) => {
|
||||||
|
Err(EvmActorError::Integrity(integrity_err))
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -267,7 +274,6 @@ impl EvmActor {
|
|||||||
.evaluate_transaction(wallet_access, transaction.clone(), RunKind::Execution)
|
.evaluate_transaction(wallet_access, transaction.clone(), RunKind::Execution)
|
||||||
.await?;
|
.await?;
|
||||||
|
|
||||||
use alloy::network::TxSignerSync as _;
|
|
||||||
Ok(signer.sign_transaction_sync(&mut transaction)?)
|
Ok(signer.sign_transaction_sync(&mut transaction)?)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
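The pattern running through the hunks above — renaming the module-local `Error` enum to a descriptive `EvmActorError` while keeping its `#[from]` conversions — can be sketched standalone. The snippet below uses only `std`, with a hand-written `From` and `Display` impl standing in for the `thiserror` derive the crate uses; all names are illustrative, not the project's actual types:

```rust
use std::fmt;

#[derive(Debug)]
pub enum DatabaseError {
    PoolExhausted,
}

// A descriptive, per-actor error name instead of a bare `Error`.
#[derive(Debug)]
pub enum EvmActorError {
    Database(DatabaseError),
    KeyholderSend,
}

// What `#[from]` generates under the hood: a `From` impl that lets
// the `?` operator convert DatabaseError into EvmActorError.
impl From<DatabaseError> for EvmActorError {
    fn from(e: DatabaseError) -> Self {
        EvmActorError::Database(e)
    }
}

// What the `#[error("...")]` attribute generates: a Display impl.
impl fmt::Display for EvmActorError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            EvmActorError::Database(e) => write!(f, "Database error: {e:?}"),
            EvmActorError::KeyholderSend => write!(f, "Failed to send message to keyholder"),
        }
    }
}

fn query() -> Result<(), DatabaseError> {
    Err(DatabaseError::PoolExhausted)
}

pub fn generate() -> Result<(), EvmActorError> {
    query()?; // DatabaseError -> EvmActorError via the From impl
    Ok(())
}
```

Because only the enum's *name* changed, call sites using `?` keep working; only spelled-out paths like `Error::KeyholderSend` need updating, which is exactly what the diff does.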
```diff
@@ -41,7 +41,7 @@ impl Actor for ClientApprovalController {
     async fn on_start(
         Args {
             client,
-            mut user_agents,
+            user_agents,
             reply,
         }: Self::Args,
         actor_ref: ActorRef<Self>,
@@ -52,8 +52,9 @@ impl Actor for ClientApprovalController {
             reply: Some(reply),
         };

-        for user_agent in user_agents.drain(..) {
+        for user_agent in user_agents {
             actor_ref.link(&user_agent).await;

             let _ = user_agent
                 .tell(BeginNewClientApproval {
                     client: client.clone(),
@@ -85,7 +86,7 @@ impl Actor for ClientApprovalController {
 #[messages]
 impl ClientApprovalController {
     #[message(ctx)]
-    pub async fn client_approval_answer(&mut self, approved: bool, ctx: &mut Context<Self, ()>) {
+    pub fn client_approval_answer(&mut self, approved: bool, ctx: &mut Context<Self, ()>) {
         if !approved {
             // Denial wins immediately regardless of other pending responses.
             self.send_reply(Ok(false));
@@ -92,7 +92,7 @@ impl FlowCoordinator {
     }

     #[message(ctx)]
-    pub async fn request_client_approval(
+    pub fn request_client_approval(
         &mut self,
         client: ClientProfile,
         ctx: &mut Context<Self, DelegatedReply<Result<bool, ApprovalError>>>,
```
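The `user_agents.drain(..)` → `for user_agent in user_agents` change above works because iterating a `Vec` by value already consumes its elements; `drain(..)` is only needed when the emptied vector must remain usable afterwards, and dropping it also removes the need for the `mut` binding. A minimal illustration (helper names invented for this sketch):

```rust
fn consume_by_drain(mut v: Vec<String>) -> Vec<String> {
    let mut out = Vec::new();
    // drain(..) moves elements out but keeps the (now empty) Vec alive
    for s in v.drain(..) {
        out.push(s);
    }
    assert!(v.is_empty()); // v is still accessible here
    out
}

fn consume_by_value(v: Vec<String>) -> Vec<String> {
    let mut out = Vec::new();
    // by-value IntoIterator consumes the Vec entirely; no `mut` needed
    for s in v {
        out.push(s);
    }
    out
}
```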
```diff
@@ -9,22 +9,17 @@ use kameo::{Actor, Reply, messages};
 use strum::{EnumDiscriminants, IntoDiscriminant};
 use tracing::{error, info};

-use crate::{
-    crypto::{
-        KeyCell, derive_key,
-        encryption::v1::{self, Nonce},
-        integrity::v1::HmacSha256,
-    },
-    safe_cell::SafeCell,
+use crate::crypto::{
+    KeyCell, derive_key,
+    encryption::v1::{self, Nonce},
+    integrity::v1::HmacSha256,
 };
-use crate::{
-    db::{
-        self,
-        models::{self, RootKeyHistory},
-        schema::{self},
-    },
-    safe_cell::SafeCellHandle as _,
+use crate::db::{
+    self,
+    models::{self, RootKeyHistory},
+    schema::{self},
 };
+
+use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};

 #[derive(Default, EnumDiscriminants)]
 #[strum_discriminants(derive(Reply), vis(pub), name(KeyHolderState))]
@@ -41,19 +36,12 @@ enum State {
 }

 #[derive(Debug, thiserror::Error)]
-pub enum Error {
+pub enum KeyHolderError {
     #[error("Keyholder is already bootstrapped")]
     AlreadyBootstrapped,
-    #[error("Keyholder is not bootstrapped")]
-    NotBootstrapped,
-    #[error("Invalid key provided")]
-    InvalidKey,

-    #[error("Requested aead entry not found")]
-    NotFound,
+    #[error("Broken database")]
+    BrokenDatabase,

-    #[error("Encryption error: {0}")]
-    Encryption(#[from] chacha20poly1305::aead::Error),
-
     #[error("Database error: {0}")]
     DatabaseConnection(#[from] db::PoolError),
@@ -61,11 +49,21 @@ pub enum Error {
     #[error("Database transaction error: {0}")]
     DatabaseTransaction(#[from] diesel::result::Error),

-    #[error("Broken database")]
-    BrokenDatabase,
+    #[error("Encryption error: {0}")]
+    Encryption(#[from] chacha20poly1305::aead::Error),
+
+    #[error("Invalid key provided")]
+    InvalidKey,
+
+    #[error("Keyholder is not bootstrapped")]
+    NotBootstrapped,
+
+    #[error("Requested aead entry not found")]
+    NotFound,
 }

 /// Manages vault root key and tracks current state of the vault (bootstrapped/unbootstrapped, sealed/unsealed).
+///
 /// Provides API for encrypting and decrypting data using the vault root key.
 /// Abstraction over database to make sure nonces are never reused and encryption keys are never exposed in plaintext outside of this actor.
 #[derive(Actor)]
@@ -76,7 +74,7 @@ pub struct KeyHolder {

 #[messages]
 impl KeyHolder {
-    pub async fn new(db: db::DatabasePool) -> Result<Self, Error> {
+    pub async fn new(db: db::DatabasePool) -> Result<Self, KeyHolderError> {
         let state = {
             let mut conn = db.get().await?;

@@ -99,7 +97,10 @@ impl KeyHolder {

     // Exclusive transaction to avoid race condtions if multiple keyholders write
     // additional layer of protection against nonce-reuse
-    async fn get_new_nonce(pool: &db::DatabasePool, root_key_id: i32) -> Result<Nonce, Error> {
+    async fn get_new_nonce(
+        pool: &db::DatabasePool,
+        root_key_id: i32,
+    ) -> Result<Nonce, KeyHolderError> {
         let mut conn = pool.get().await?;

         let nonce = conn
@@ -111,12 +112,12 @@ impl KeyHolder {
                         .first(conn)
                         .await?;

-                    let mut nonce = Nonce::try_from(current_nonce.as_slice()).map_err(|_| {
+                    let mut nonce = Nonce::try_from(current_nonce.as_slice()).map_err(|()| {
                         error!(
                             "Broken database: invalid nonce for root key history id={}",
                             root_key_id
                         );
-                        Error::BrokenDatabase
+                        KeyHolderError::BrokenDatabase
                     })?;
                     nonce.increment();

@@ -126,7 +127,7 @@ impl KeyHolder {
                         .execute(conn)
                         .await?;

-                    Result::<_, Error>::Ok(nonce)
+                    Result::<_, KeyHolderError>::Ok(nonce)
                 })
             })
             .await?;
@@ -135,9 +136,12 @@ impl KeyHolder {
     }

     #[message]
-    pub async fn bootstrap(&mut self, seal_key_raw: SafeCell<Vec<u8>>) -> Result<(), Error> {
+    pub async fn bootstrap(
+        &mut self,
+        seal_key_raw: SafeCell<Vec<u8>>,
+    ) -> Result<(), KeyHolderError> {
         if !matches!(self.state, State::Unbootstrapped) {
-            return Err(Error::AlreadyBootstrapped);
+            return Err(KeyHolderError::AlreadyBootstrapped);
         }
         let salt = v1::generate_salt();
         let mut seal_key = derive_key(seal_key_raw, &salt);
@@ -153,7 +157,7 @@ impl KeyHolder {
                 .encrypt(&root_key_nonce, v1::ROOT_KEY_TAG, root_key_reader)
                 .map_err(|err| {
                     error!(?err, "Fatal bootstrap error");
-                    Error::Encryption(err)
+                    KeyHolderError::Encryption(err)
                 })
         })?;

@@ -197,12 +201,15 @@ impl KeyHolder {
     }

     #[message]
-    pub async fn try_unseal(&mut self, seal_key_raw: SafeCell<Vec<u8>>) -> Result<(), Error> {
+    pub async fn try_unseal(
+        &mut self,
+        seal_key_raw: SafeCell<Vec<u8>>,
+    ) -> Result<(), KeyHolderError> {
         let State::Sealed {
             root_key_history_id,
         } = &self.state
         else {
-            return Err(Error::NotBootstrapped);
+            return Err(KeyHolderError::NotBootstrapped);
         };

         // We don't want to hold connection while doing expensive KDF work
@@ -218,16 +225,16 @@ impl KeyHolder {
         let salt = &current_key.salt;
         let salt = v1::Salt::try_from(salt.as_slice()).map_err(|_| {
             error!("Broken database: invalid salt for root key");
-            Error::BrokenDatabase
+            KeyHolderError::BrokenDatabase
         })?;
         let mut seal_key = derive_key(seal_key_raw, &salt);

         let mut root_key = SafeCell::new(current_key.ciphertext.clone());

-        let nonce = v1::Nonce::try_from(current_key.root_key_encryption_nonce.as_slice()).map_err(
-            |_| {
+        let nonce = Nonce::try_from(current_key.root_key_encryption_nonce.as_slice()).map_err(
+            |()| {
                 error!("Broken database: invalid nonce for root key");
-                Error::BrokenDatabase
+                KeyHolderError::BrokenDatabase
             },
         )?;

@@ -235,14 +242,14 @@ impl KeyHolder {
             .decrypt_in_place(&nonce, v1::ROOT_KEY_TAG, &mut root_key)
             .map_err(|err| {
                 error!(?err, "Failed to unseal root key: invalid seal key");
-                Error::InvalidKey
+                KeyHolderError::InvalidKey
             })?;

         self.state = State::Unsealed {
             root_key_history_id: current_key.id,
             root_key: KeyCell::try_from(root_key).map_err(|err| {
                 error!(?err, "Broken database: invalid encryption key size");
-                Error::BrokenDatabase
+                KeyHolderError::BrokenDatabase
             })?,
         };

@@ -252,9 +259,9 @@ impl KeyHolder {
     }

     #[message]
-    pub async fn decrypt(&mut self, aead_id: i32) -> Result<SafeCell<Vec<u8>>, Error> {
+    pub async fn decrypt(&mut self, aead_id: i32) -> Result<SafeCell<Vec<u8>>, KeyHolderError> {
         let State::Unsealed { root_key, .. } = &mut self.state else {
-            return Err(Error::NotBootstrapped);
+            return Err(KeyHolderError::NotBootstrapped);
         };

         let row: models::AeadEncrypted = {
@@ -265,15 +272,15 @@ impl KeyHolder {
                 .first(&mut conn)
                 .await
                 .optional()?
-                .ok_or(Error::NotFound)?
+                .ok_or(KeyHolderError::NotFound)?
         };

-        let nonce = v1::Nonce::try_from(row.current_nonce.as_slice()).map_err(|_| {
+        let nonce = Nonce::try_from(row.current_nonce.as_slice()).map_err(|()| {
             error!(
                 "Broken database: invalid nonce for aead_encrypted id={}",
                 aead_id
             );
-            Error::BrokenDatabase
+            KeyHolderError::BrokenDatabase
         })?;
         let mut output = SafeCell::new(row.ciphertext);
         root_key.decrypt_in_place(&nonce, v1::TAG, &mut output)?;
@@ -282,14 +289,17 @@ impl KeyHolder {

     // Creates new `aead_encrypted` entry in the database and returns it's ID
     #[message]
-    pub async fn create_new(&mut self, mut plaintext: SafeCell<Vec<u8>>) -> Result<i32, Error> {
+    pub async fn create_new(
+        &mut self,
+        mut plaintext: SafeCell<Vec<u8>>,
+    ) -> Result<i32, KeyHolderError> {
         let State::Unsealed {
             root_key,
             root_key_history_id,
             ..
         } = &mut self.state
         else {
-            return Err(Error::NotBootstrapped);
+            return Err(KeyHolderError::NotBootstrapped);
         };

         // Order matters here - `get_new_nonce` acquires connection, so we need to call it before next acquire
@@ -325,21 +335,19 @@ impl KeyHolder {
     }

     #[message]
-    pub fn sign_integrity(&mut self, mac_input: Vec<u8>) -> Result<(i32, Vec<u8>), Error> {
+    pub fn sign_integrity(&mut self, mac_input: Vec<u8>) -> Result<(i32, Vec<u8>), KeyHolderError> {
         let State::Unsealed {
             root_key,
             root_key_history_id,
         } = &mut self.state
         else {
-            return Err(Error::NotBootstrapped);
+            return Err(KeyHolderError::NotBootstrapped);
         };

-        let mut hmac = root_key
-            .0
-            .read_inline(|k| match HmacSha256::new_from_slice(k) {
-                Ok(v) => v,
-                Err(_) => unreachable!("HMAC accepts keys of any size"),
-            });
+        let mut hmac = root_key.0.read_inline(|k| {
+            HmacSha256::new_from_slice(k)
+                .unwrap_or_else(|_| unreachable!("HMAC accepts keys of any size"))
+        });
         hmac.update(&root_key_history_id.to_be_bytes());
         hmac.update(&mac_input);

@@ -353,25 +361,23 @@ impl KeyHolder {
         mac_input: Vec<u8>,
         expected_mac: Vec<u8>,
         key_version: i32,
-    ) -> Result<bool, Error> {
+    ) -> Result<bool, KeyHolderError> {
         let State::Unsealed {
             root_key,
             root_key_history_id,
         } = &mut self.state
         else {
-            return Err(Error::NotBootstrapped);
+            return Err(KeyHolderError::NotBootstrapped);
         };

         if *root_key_history_id != key_version {
             return Ok(false);
         }

-        let mut hmac = root_key
-            .0
-            .read_inline(|k| match HmacSha256::new_from_slice(k) {
-                Ok(v) => v,
-                Err(_) => unreachable!("HMAC accepts keys of any size"),
-            });
+        let mut hmac = root_key.0.read_inline(|k| {
+            HmacSha256::new_from_slice(k)
+                .unwrap_or_else(|_| unreachable!("HMAC accepts keys of any size"))
+        });
         hmac.update(&key_version.to_be_bytes());
         hmac.update(&mac_input);

@@ -379,13 +385,13 @@ impl KeyHolder {
     }

     #[message]
-    pub fn seal(&mut self) -> Result<(), Error> {
+    pub fn seal(&mut self) -> Result<(), KeyHolderError> {
         let State::Unsealed {
             root_key_history_id,
             ..
         } = &self.state
         else {
-            return Err(Error::NotBootstrapped);
+            return Err(KeyHolderError::NotBootstrapped);
         };
         self.state = State::Sealed {
             root_key_history_id: *root_key_history_id,
@@ -396,14 +402,7 @@ impl KeyHolder {

 #[cfg(test)]
 mod tests {
-    use diesel::SelectableHelper;
-
-    use diesel_async::RunQueryDsl;
-
-    use crate::{
-        db::{self},
-        safe_cell::SafeCell,
-    };
+    use arbiter_crypto::safecell::SafeCellHandle as _;

     use super::*;

@@ -419,12 +418,12 @@ mod tests {
     async fn nonce_monotonic_even_when_nonce_allocation_interleaves() {
         let db = db::create_test_pool().await;
         let mut actor = bootstrapped_actor(&db).await;
-        let root_key_history_id = match actor.state {
-            State::Unsealed {
-                root_key_history_id,
-                ..
-            } => root_key_history_id,
-            _ => panic!("expected unsealed state"),
+        let State::Unsealed {
+            root_key_history_id,
+            ..
+        } = actor.state
+        else {
+            panic!("expected unsealed state");
         };

         let n1 = KeyHolder::get_new_nonce(&db, root_key_history_id)
@@ -436,8 +435,8 @@ mod tests {
         assert!(n2.to_vec() > n1.to_vec(), "nonce must increase");

         let mut conn = db.get().await.unwrap();
-        let root_row: models::RootKeyHistory = schema::root_key_history::table
-            .select(models::RootKeyHistory::as_select())
+        let root_row = schema::root_key_history::table
+            .select(RootKeyHistory::as_select())
             .first(&mut conn)
             .await
             .unwrap();
```
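The test refactor near the end of the file above swaps a `match` with a panicking fallback arm for `let … else`, which binds the happy-path fields directly and requires the `else` branch to diverge. A standalone sketch with simplified types (not the actor's real `State`):

```rust
#[allow(dead_code)]
enum State {
    Sealed { id: i32 },
    Unsealed { id: i32, key: u8 },
}

fn unsealed_id(state: State) -> i32 {
    // `let … else` destructures one pattern; the else branch must
    // diverge (panic!, return, continue, …), so `id` is usable after it.
    let State::Unsealed { id, .. } = state else {
        panic!("expected unsealed state");
    };
    id
}
```

Compared to the old `match` form, this removes one nesting level and avoids naming the bound value twice.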
```diff
@@ -11,18 +11,18 @@ use crate::{

 pub mod bootstrap;
 pub mod client;
-mod evm;
+pub mod evm;
 pub mod flow_coordinator;
 pub mod keyholder;
 pub mod user_agent;

 #[derive(Error, Debug)]
-pub enum SpawnError {
+pub enum GlobalActorsSpawnError {
     #[error("Failed to spawn Bootstrapper actor")]
-    Bootstrapper(#[from] bootstrap::Error),
+    Bootstrapper(#[from] bootstrap::BootstrappError),

     #[error("Failed to spawn KeyHolder actor")]
-    KeyHolder(#[from] keyholder::Error),
+    KeyHolder(#[from] keyholder::KeyHolderError),
 }

 /// Long-lived actors that are shared across all connections and handle global state and operations
@@ -35,7 +35,7 @@ pub struct GlobalActors {
 }

 impl GlobalActors {
-    pub async fn spawn(db: db::DatabasePool) -> Result<Self, SpawnError> {
+    pub async fn spawn(db: db::DatabasePool) -> Result<Self, GlobalActorsSpawnError> {
         let key_holder = KeyHolder::spawn(KeyHolder::new(db.clone()).await?);
         Ok(Self {
             bootstrapper: Bootstrapper::spawn(Bootstrapper::new(&db).await?),
```
```diff
@@ -1,18 +1,20 @@
+use arbiter_crypto::authn;
 use arbiter_proto::transport::Bi;
 use tracing::error;

 use crate::actors::user_agent::{
-    AuthPublicKey, UserAgentConnection,
+    UserAgentConnection,
     auth::state::{AuthContext, AuthStateMachine},
 };

 mod state;
-use state::*;
+use state::{
+    AuthError, AuthEvents, AuthStates, BootstrapAuthRequest, ChallengeRequest, ChallengeSolution,
+};

 #[derive(Debug, Clone)]
 pub enum Inbound {
     AuthChallengeRequest {
-        pubkey: AuthPublicKey,
+        pubkey: authn::PublicKey,
         bootstrap_token: Option<String>,
     },
     AuthChallengeSolution {
@@ -30,26 +32,17 @@ pub enum Error {
 }

 impl Error {
-    #[track_caller]
-    pub(super) fn internal(details: impl Into<String>, err: &impl std::fmt::Debug) -> Self {
-        let details = details.into();
-        let caller = std::panic::Location::caller();
-        error!(
-            caller_file = %caller.file(),
-            caller_line = caller.line(),
-            caller_column = caller.column(),
-            details = %details,
-            error = ?err,
-            "Internal error"
-        );
-
-        Self::Internal { details }
+    fn internal(details: impl Into<String>) -> Self {
+        Self::Internal {
+            details: details.into(),
+        }
     }
 }

 impl From<diesel::result::Error> for Error {
     fn from(e: diesel::result::Error) -> Self {
-        Self::internal("Database error", &e)
+        error!(?e, "Database error");
+        Self::internal("Database error")
     }
 }

@@ -80,7 +73,7 @@ fn parse_auth_event(payload: Inbound) -> AuthEvents {
 pub async fn authenticate<T>(
     props: &mut UserAgentConnection,
     transport: T,
-) -> Result<AuthPublicKey, Error>
+) -> Result<authn::PublicKey, Error>
 where
     T: Bi<Inbound, Result<Outbound, Error>> + Send,
 {
```
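The change to `Error::internal` above moves logging out of the constructor and into each `map_err` closure: the closure records the concrete cause where it happened, then returns only a sanitized variant. A std-only sketch of that shape, with `eprintln!` standing in for `tracing::error!` and all names invented for illustration:

```rust
#[derive(Debug, PartialEq)]
enum Error {
    Internal { details: String },
}

impl Error {
    // The constructor no longer takes the source error;
    // callers are responsible for logging it themselves.
    fn internal(details: impl Into<String>) -> Self {
        Error::Internal {
            details: details.into(),
        }
    }
}

// Stand-in for a fallible resource acquisition (e.g. a pool `get()`).
fn get_conn() -> Result<(), std::io::Error> {
    Err(std::io::Error::new(
        std::io::ErrorKind::Other,
        "pool exhausted",
    ))
}

fn handler() -> Result<(), Error> {
    get_conn().map_err(|e| {
        // Log the detailed cause at the call site...
        eprintln!("Database pool error: {e:?}");
        // ...but surface only a generic message outward.
        Error::internal("Database unavailable")
    })
}
```

The trade-off versus the old `#[track_caller]` helper: the caller location comes for free (the log statement sits at the call site), at the cost of repeating the log-then-wrap closure.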
```diff
@@ -1,3 +1,4 @@
+use arbiter_crypto::authn::{self, USERAGENT_CONTEXT};
 use arbiter_proto::transport::Bi;
 use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, update};
 use diesel_async::{AsyncConnection, RunQueryDsl};
@@ -9,24 +10,24 @@ use crate::{
     actors::{
         bootstrap::ConsumeToken,
         keyholder::KeyHolder,
-        user_agent::{AuthPublicKey, UserAgentConnection, UserAgentCredentials, auth::Outbound},
+        user_agent::{UserAgentConnection, UserAgentCredentials, auth::Outbound},
     },
     crypto::integrity,
     db::{DatabasePool, schema::useragent_client},
 };

 pub struct ChallengeRequest {
-    pub pubkey: AuthPublicKey,
+    pub pubkey: authn::PublicKey,
 }

 pub struct BootstrapAuthRequest {
-    pub pubkey: AuthPublicKey,
+    pub pubkey: authn::PublicKey,
     pub token: String,
 }

 pub struct ChallengeContext {
     pub challenge_nonce: i32,
-    pub key: AuthPublicKey,
+    pub key: authn::PublicKey,
 }

 pub struct ChallengeSolution {
@@ -38,26 +39,25 @@ smlang::statemachine!(
     custom_error: true,
     transitions: {
         *Init + AuthRequest(ChallengeRequest) / async prepare_challenge = SentChallenge(ChallengeContext),
-        Init + BootstrapAuthRequest(BootstrapAuthRequest) / async verify_bootstrap_token = AuthOk(AuthPublicKey),
-        SentChallenge(ChallengeContext) + ReceivedSolution(ChallengeSolution) / async verify_solution = AuthOk(AuthPublicKey),
+        Init + BootstrapAuthRequest(BootstrapAuthRequest) / async verify_bootstrap_token = AuthOk(authn::PublicKey),
+        SentChallenge(ChallengeContext) + ReceivedSolution(ChallengeSolution) / async verify_solution = AuthOk(authn::PublicKey),
     }
 );

 /// Returns the current nonce, ready to use for the challenge nonce.
 async fn get_current_nonce_and_id(
     db: &DatabasePool,
-    key: &AuthPublicKey,
+    key: &authn::PublicKey,
 ) -> Result<(i32, i32), Error> {
-    let mut db_conn = db
-        .get()
-        .await
-        .map_err(|e| Error::internal("Database unavailable", &e))?;
+    let mut db_conn = db.get().await.map_err(|e| {
+        error!(error = ?e, "Database pool error");
+        Error::internal("Database unavailable")
+    })?;
     db_conn
         .exclusive_transaction(|conn| {
             Box::pin(async move {
                 useragent_client::table
-                    .filter(useragent_client::public_key.eq(key.to_stored_bytes()))
-                    .filter(useragent_client::key_type.eq(key.key_type()))
+                    .filter(useragent_client::public_key.eq(key.to_bytes()))
                     .select((useragent_client::id, useragent_client::nonce))
                     .first::<(i32, i32)>(conn)
                     .await
@@ -65,7 +65,10 @@ async fn get_current_nonce_and_id(
             })
         })
         .await
         .optional()
-        .map_err(|e| Error::internal("Database operation failed", &e))?
+        .map_err(|e| {
+            error!(error = ?e, "Database error");
+            Error::internal("Database operation failed")
+        })?
         .ok_or_else(|| {
             error!(?key, "Public key not found in database");
             Error::UnregisteredPublicKey
@@ -75,16 +78,16 @@ async fn get_current_nonce_and_id(
 async fn verify_integrity(
     db: &DatabasePool,
     keyholder: &ActorRef<KeyHolder>,
-    pubkey: &AuthPublicKey,
+    pubkey: &authn::PublicKey,
 ) -> Result<(), Error> {
-    let mut db_conn = db
-        .get()
-        .await
-        .map_err(|e| Error::internal("Database unavailable", &e))?;
+    let mut db_conn = db.get().await.map_err(|e| {
+        error!(error = ?e, "Database pool error");
+        Error::internal("Database unavailable")
+    })?;

     let (id, nonce) = get_current_nonce_and_id(db, pubkey).await?;

-    let attestation_status = integrity::check_entity_attestation(
+    let _result = integrity::verify_entity(
         &mut db_conn,
         keyholder,
         &UserAgentCredentials {
@@ -94,39 +97,36 @@ async fn verify_integrity(
         id,
     )
     .await
-    .map_err(|e| Error::internal("Integrity verification failed", &e))?;
+    .map_err(|e| {
+        error!(?e, "Integrity verification failed");
+        Error::internal("Integrity verification failed")
+    })?;

-    use integrity::AttestationStatus as AS;
-    // SAFETY (policy): challenge auth must work in both vault states.
-    // While sealed, integrity checks can only report `Unavailable` because key material is not
-    // accessible. While unsealed, the same check can report `Attested`.
-    // This path intentionally accepts both outcomes to keep challenge auth available across state
-    // transitions; stricter verification is enforced in sensitive post-auth flows.
-    match attestation_status {
-        AS::Attested | AS::Unavailable => Ok(()),
-    }
+    Ok(())
 }

 async fn create_nonce(
     db: &DatabasePool,
     keyholder: &ActorRef<KeyHolder>,
-    pubkey: &AuthPublicKey,
+    pubkey: &authn::PublicKey,
 ) -> Result<i32, Error> {
-    let mut db_conn = db
-        .get()
+    let mut db_conn = db.get().await.map_err(|e| {
```
|
error!(error = ?e, "Database pool error");
|
||||||
.await
|
Error::internal("Database unavailable")
|
||||||
.map_err(|e| Error::internal("Database unavailable", &e))?;
|
})?;
|
||||||
let new_nonce = db_conn
|
let new_nonce = db_conn
|
||||||
.exclusive_transaction(|conn| {
|
.exclusive_transaction(|conn| {
|
||||||
Box::pin(async move {
|
Box::pin(async move {
|
||||||
let (id, new_nonce): (i32, i32) = update(useragent_client::table)
|
let (id, new_nonce): (i32, i32) = update(useragent_client::table)
|
||||||
.filter(useragent_client::public_key.eq(pubkey.to_stored_bytes()))
|
.filter(useragent_client::public_key.eq(pubkey.to_bytes()))
|
||||||
.filter(useragent_client::key_type.eq(pubkey.key_type()))
|
|
||||||
.set(useragent_client::nonce.eq(useragent_client::nonce + 1))
|
.set(useragent_client::nonce.eq(useragent_client::nonce + 1))
|
||||||
.returning((useragent_client::id, useragent_client::nonce))
|
.returning((useragent_client::id, useragent_client::nonce))
|
||||||
.get_result(conn)
|
.get_result(conn)
|
||||||
.await
|
.await
|
||||||
.map_err(|e| Error::internal("Database operation failed", &e))?;
|
.map_err(|e| {
|
||||||
|
error!(error = ?e, "Database error");
|
||||||
|
Error::internal("Database operation failed")
|
||||||
|
})?;
|
||||||
|
|
||||||
integrity::sign_entity(
|
integrity::sign_entity(
|
||||||
conn,
|
conn,
|
||||||
@@ -138,7 +138,10 @@ async fn create_nonce(
|
|||||||
id,
|
id,
|
||||||
)
|
)
|
||||||
.await
|
.await
|
||||||
.map_err(|e| Error::internal("Database error", &e))?;
|
.map_err(|e| {
|
||||||
|
error!(?e, "Integrity signature update failed");
|
||||||
|
Error::internal("Database error")
|
||||||
|
})?;
|
||||||
|
|
||||||
Result::<_, Error>::Ok(new_nonce)
|
Result::<_, Error>::Ok(new_nonce)
|
||||||
})
|
})
|
||||||
@@ -150,14 +153,13 @@ async fn create_nonce(
|
|||||||
async fn register_key(
|
async fn register_key(
|
||||||
db: &DatabasePool,
|
db: &DatabasePool,
|
||||||
keyholder: &ActorRef<KeyHolder>,
|
keyholder: &ActorRef<KeyHolder>,
|
||||||
pubkey: &AuthPublicKey,
|
pubkey: &authn::PublicKey,
|
||||||
) -> Result<(), Error> {
|
) -> Result<(), Error> {
|
||||||
let pubkey_bytes = pubkey.to_stored_bytes();
|
let pubkey_bytes = pubkey.to_bytes();
|
||||||
let key_type = pubkey.key_type();
|
let mut conn = db.get().await.map_err(|e| {
|
||||||
let mut conn = db
|
error!(error = ?e, "Database pool error");
|
||||||
.get()
|
Error::internal("Database unavailable")
|
||||||
.await
|
})?;
|
||||||
.map_err(|e| Error::internal("Database unavailable", &e))?;
|
|
||||||
|
|
||||||
conn.transaction(|conn| {
|
conn.transaction(|conn| {
|
||||||
Box::pin(async move {
|
Box::pin(async move {
|
||||||
@@ -167,37 +169,26 @@ async fn register_key(
|
|||||||
.values((
|
.values((
|
||||||
useragent_client::public_key.eq(pubkey_bytes),
|
useragent_client::public_key.eq(pubkey_bytes),
|
||||||
useragent_client::nonce.eq(NONCE_START),
|
useragent_client::nonce.eq(NONCE_START),
|
||||||
useragent_client::key_type.eq(key_type),
|
|
||||||
))
|
))
|
||||||
.returning(useragent_client::id)
|
.returning(useragent_client::id)
|
||||||
.get_result(conn)
|
.get_result(conn)
|
||||||
.await
|
.await
|
||||||
.map_err(|e| Error::internal("Database operation failed", &e))?;
|
.map_err(|e| {
|
||||||
|
error!(error = ?e, "Database error");
|
||||||
|
Error::internal("Database operation failed")
|
||||||
|
})?;
|
||||||
|
|
||||||
if let Err(e) = integrity::sign_entity(
|
let entity = UserAgentCredentials {
|
||||||
conn,
|
pubkey: pubkey.clone(),
|
||||||
keyholder,
|
nonce: NONCE_START,
|
||||||
&UserAgentCredentials {
|
};
|
||||||
pubkey: pubkey.clone(),
|
|
||||||
nonce: NONCE_START,
|
integrity::sign_entity(conn, keyholder, &entity, id)
|
||||||
},
|
.await
|
||||||
id,
|
.map_err(|e| {
|
||||||
)
|
error!(error = ?e, "Failed to sign integrity tag for new user-agent key");
|
||||||
.await
|
Error::internal("Failed to register public key")
|
||||||
{
|
})?;
|
||||||
match e {
|
|
||||||
integrity::Error::Keyholder(
|
|
||||||
crate::actors::keyholder::Error::NotBootstrapped,
|
|
||||||
) => {
|
|
||||||
// IMPORTANT: bootstrap-token auth must work before the vault has a root key.
|
|
||||||
// We intentionally allow creating the DB row first and backfill envelopes
|
|
||||||
// after bootstrap/unseal to keep the bootstrap flow possible.
|
|
||||||
}
|
|
||||||
other => {
|
|
||||||
return Err(Error::internal("Failed to register public key", &other));
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
Result::<_, Error>::Ok(())
|
Result::<_, Error>::Ok(())
|
||||||
})
|
})
|
||||||
@@ -213,14 +204,14 @@ pub struct AuthContext<'a, T> {
|
|||||||
}
|
}
|
||||||
|
|
||||||
impl<'a, T> AuthContext<'a, T> {
|
impl<'a, T> AuthContext<'a, T> {
|
||||||
pub fn new(conn: &'a mut UserAgentConnection, transport: T) -> Self {
|
pub const fn new(conn: &'a mut UserAgentConnection, transport: T) -> Self {
|
||||||
Self { conn, transport }
|
Self { conn, transport }
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
impl<T> AuthStateMachineContext for AuthContext<'_, T>
|
impl<T> AuthStateMachineContext for AuthContext<'_, T>
|
||||||
where
|
where
|
||||||
T: Bi<super::Inbound, Result<super::Outbound, Error>> + Send,
|
T: Bi<super::Inbound, Result<Outbound, Error>> + Send,
|
||||||
{
|
{
|
||||||
type Error = Error;
|
type Error = Error;
|
||||||
|
|
||||||
@@ -246,12 +237,10 @@ where
|
|||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
#[allow(missing_docs)]
|
|
||||||
#[allow(clippy::result_unit_err)]
|
|
||||||
async fn verify_bootstrap_token(
|
async fn verify_bootstrap_token(
|
||||||
&mut self,
|
&mut self,
|
||||||
BootstrapAuthRequest { pubkey, token }: BootstrapAuthRequest,
|
BootstrapAuthRequest { pubkey, token }: BootstrapAuthRequest,
|
||||||
) -> Result<AuthPublicKey, Self::Error> {
|
) -> Result<authn::PublicKey, Self::Error> {
|
||||||
let token_ok: bool = self
|
let token_ok: bool = self
|
||||||
.conn
|
.conn
|
||||||
.actors
|
.actors
|
||||||
@@ -260,35 +249,33 @@ where
|
|||||||
token: token.clone(),
|
token: token.clone(),
|
||||||
})
|
})
|
||||||
.await
|
.await
|
||||||
.map_err(|e| Error::internal("Failed to consume bootstrap token", &e))?;
|
.map_err(|e| {
|
||||||
|
error!(?e, "Failed to consume bootstrap token");
|
||||||
|
Error::internal("Failed to consume bootstrap token")
|
||||||
|
})?;
|
||||||
|
|
||||||
if !token_ok {
|
if !token_ok {
|
||||||
error!("Invalid bootstrap token provided");
|
error!("Invalid bootstrap token provided");
|
||||||
return Err(Error::InvalidBootstrapToken);
|
return Err(Error::InvalidBootstrapToken);
|
||||||
}
|
}
|
||||||
|
|
||||||
match token_ok {
|
if token_ok {
|
||||||
true => {
|
register_key(&self.conn.db, &self.conn.actors.key_holder, &pubkey).await?;
|
||||||
register_key(&self.conn.db, &self.conn.actors.key_holder, &pubkey).await?;
|
self.transport
|
||||||
self.transport
|
.send(Ok(Outbound::AuthSuccess))
|
||||||
.send(Ok(Outbound::AuthSuccess))
|
.await
|
||||||
.await
|
.map_err(|_| Error::Transport)?;
|
||||||
.map_err(|_| Error::Transport)?;
|
Ok(pubkey)
|
||||||
Ok(pubkey)
|
} else {
|
||||||
}
|
error!("Invalid bootstrap token provided");
|
||||||
false => {
|
self.transport
|
||||||
error!("Invalid bootstrap token provided");
|
.send(Err(Error::InvalidBootstrapToken))
|
||||||
self.transport
|
.await
|
||||||
.send(Err(Error::InvalidBootstrapToken))
|
.map_err(|_| Error::Transport)?;
|
||||||
.await
|
Err(Error::InvalidBootstrapToken)
|
||||||
.map_err(|_| Error::Transport)?;
|
|
||||||
Err(Error::InvalidBootstrapToken)
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
#[allow(missing_docs)]
|
|
||||||
#[allow(clippy::unused_unit)]
|
|
||||||
async fn verify_solution(
|
async fn verify_solution(
|
||||||
&mut self,
|
&mut self,
|
||||||
ChallengeContext {
|
ChallengeContext {
|
||||||
@@ -296,51 +283,26 @@ where
|
|||||||
key,
|
key,
|
||||||
}: &ChallengeContext,
|
}: &ChallengeContext,
|
||||||
ChallengeSolution { solution }: ChallengeSolution,
|
ChallengeSolution { solution }: ChallengeSolution,
|
||||||
) -> Result<AuthPublicKey, Self::Error> {
|
) -> Result<authn::PublicKey, Self::Error> {
|
||||||
let formatted = arbiter_proto::format_challenge(*challenge_nonce, &key.to_stored_bytes());
|
let signature = authn::Signature::try_from(solution.as_slice()).map_err(|()| {
|
||||||
|
error!("Failed to decode signature in challenge solution");
|
||||||
|
Error::InvalidChallengeSolution
|
||||||
|
})?;
|
||||||
|
|
||||||
let valid = match key {
|
let valid = key.verify(*challenge_nonce, USERAGENT_CONTEXT, &signature);
|
||||||
AuthPublicKey::Ed25519(vk) => {
|
|
||||||
let sig = solution.as_slice().try_into().map_err(|_| {
|
|
||||||
error!(?solution, "Invalid Ed25519 signature length");
|
|
||||||
Error::InvalidChallengeSolution
|
|
||||||
})?;
|
|
||||||
vk.verify_strict(&formatted, &sig).is_ok()
|
|
||||||
}
|
|
||||||
AuthPublicKey::EcdsaSecp256k1(vk) => {
|
|
||||||
use k256::ecdsa::signature::Verifier as _;
|
|
||||||
let sig = k256::ecdsa::Signature::try_from(solution.as_slice()).map_err(|_| {
|
|
||||||
error!(?solution, "Invalid ECDSA signature bytes");
|
|
||||||
Error::InvalidChallengeSolution
|
|
||||||
})?;
|
|
||||||
vk.verify(&formatted, &sig).is_ok()
|
|
||||||
}
|
|
||||||
AuthPublicKey::Rsa(pk) => {
|
|
||||||
use rsa::signature::Verifier as _;
|
|
||||||
let verifying_key = rsa::pss::VerifyingKey::<sha2::Sha256>::new(pk.clone());
|
|
||||||
let sig = rsa::pss::Signature::try_from(solution.as_slice()).map_err(|_| {
|
|
||||||
error!(?solution, "Invalid RSA signature bytes");
|
|
||||||
Error::InvalidChallengeSolution
|
|
||||||
})?;
|
|
||||||
verifying_key.verify(&formatted, &sig).is_ok()
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
match valid {
|
if valid {
|
||||||
true => {
|
self.transport
|
||||||
self.transport
|
.send(Ok(Outbound::AuthSuccess))
|
||||||
.send(Ok(Outbound::AuthSuccess))
|
.await
|
||||||
.await
|
.map_err(|_| Error::Transport)?;
|
||||||
.map_err(|_| Error::Transport)?;
|
Ok(key.clone())
|
||||||
Ok(key.clone())
|
} else {
|
||||||
}
|
self.transport
|
||||||
false => {
|
.send(Err(Error::InvalidChallengeSolution))
|
||||||
self.transport
|
.await
|
||||||
.send(Err(Error::InvalidChallengeSolution))
|
.map_err(|_| Error::Transport)?;
|
||||||
.await
|
Err(Error::InvalidChallengeSolution)
|
||||||
.map_err(|_| Error::Transport)?;
|
|
||||||
Err(Error::InvalidChallengeSolution)
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
```diff
@@ -1,22 +1,13 @@
 use crate::{
     actors::{GlobalActors, client::ClientProfile},
     crypto::integrity::Integrable,
-    db::{self, models::KeyType},
+    db,
 };
+use arbiter_crypto::authn;
 
-/// Abstraction over Ed25519 / ECDSA-secp256k1 / RSA public keys used during the auth handshake.
-#[derive(Clone, Debug)]
-pub enum AuthPublicKey {
-    Ed25519(ed25519_dalek::VerifyingKey),
-    /// Compressed SEC1 public key; signature bytes are raw 64-byte (r||s).
-    EcdsaSecp256k1(k256::ecdsa::VerifyingKey),
-    /// RSA-2048+ public key (Windows Hello / KeyCredentialManager); signature bytes are PSS+SHA-256.
-    Rsa(rsa::RsaPublicKey),
-}
-
-#[derive(Debug)]
+#[derive(Debug, arbiter_macros::Hashable)]
 pub struct UserAgentCredentials {
-    pub pubkey: AuthPublicKey,
+    pub pubkey: authn::PublicKey,
     pub nonce: i32,
 }
 
@@ -24,67 +15,11 @@ impl Integrable for UserAgentCredentials {
     const KIND: &'static str = "useragent_credentials";
 }
 
-impl AuthPublicKey {
-    /// Canonical bytes stored in DB and echoed back in the challenge.
-    /// Ed25519: raw 32 bytes. ECDSA: SEC1 compressed 33 bytes. RSA: DER-encoded SPKI.
-    pub fn to_stored_bytes(&self) -> Vec<u8> {
-        match self {
-            AuthPublicKey::Ed25519(k) => k.to_bytes().to_vec(),
-            // SEC1 compressed (33 bytes) is the natural compact format for secp256k1
-            AuthPublicKey::EcdsaSecp256k1(k) => k.to_encoded_point(true).as_bytes().to_vec(),
-            AuthPublicKey::Rsa(k) => {
-                use rsa::pkcs8::EncodePublicKey as _;
-                #[allow(clippy::expect_used)]
-                k.to_public_key_der()
-                    .expect("rsa SPKI encoding is infallible")
-                    .to_vec()
-            }
-        }
-    }
-
-    pub fn key_type(&self) -> KeyType {
-        match self {
-            AuthPublicKey::Ed25519(_) => KeyType::Ed25519,
-            AuthPublicKey::EcdsaSecp256k1(_) => KeyType::EcdsaSecp256k1,
-            AuthPublicKey::Rsa(_) => KeyType::Rsa,
-        }
-    }
-}
-
-impl TryFrom<(KeyType, Vec<u8>)> for AuthPublicKey {
-    type Error = &'static str;
-
-    fn try_from(value: (KeyType, Vec<u8>)) -> Result<Self, Self::Error> {
-        let (key_type, bytes) = value;
-        match key_type {
-            KeyType::Ed25519 => {
-                let bytes: [u8; 32] = bytes.try_into().map_err(|_| "invalid Ed25519 key length")?;
-                let key = ed25519_dalek::VerifyingKey::from_bytes(&bytes)
-                    .map_err(|_e| "invalid Ed25519 key")?;
-                Ok(AuthPublicKey::Ed25519(key))
-            }
-            KeyType::EcdsaSecp256k1 => {
-                let point =
-                    k256::EncodedPoint::from_bytes(&bytes).map_err(|_e| "invalid ECDSA key")?;
-                let key = k256::ecdsa::VerifyingKey::from_encoded_point(&point)
-                    .map_err(|_e| "invalid ECDSA key")?;
-                Ok(AuthPublicKey::EcdsaSecp256k1(key))
-            }
-            KeyType::Rsa => {
-                use rsa::pkcs8::DecodePublicKey as _;
-                let key = rsa::RsaPublicKey::from_public_key_der(&bytes)
-                    .map_err(|_e| "invalid RSA key")?;
-                Ok(AuthPublicKey::Rsa(key))
-            }
-        }
-    }
-}
-
 // Messages, sent by user agent to connection client without having a request
 #[derive(Debug)]
 pub enum OutOfBand {
     ClientConnectionRequest { profile: ClientProfile },
-    ClientConnectionCancel { pubkey: ed25519_dalek::VerifyingKey },
+    ClientConnectionCancel { pubkey: authn::PublicKey },
 }
 
 pub struct UserAgentConnection {
@@ -93,7 +28,7 @@ pub struct UserAgentConnection {
 }
 
 impl UserAgentConnection {
-    pub fn new(db: db::DatabasePool, actors: GlobalActors) -> Self {
+    pub const fn new(db: db::DatabasePool, actors: GlobalActors) -> Self {
         Self { db, actors }
     }
 }
@@ -103,18 +38,3 @@ pub mod session;
 
 pub use auth::authenticate;
 pub use session::UserAgentSession;
-
-use crate::crypto::integrity::hashing::Hashable;
-
-impl Hashable for AuthPublicKey {
-    fn hash<H: sha2::Digest>(&self, hasher: &mut H) {
-        hasher.update(self.to_stored_bytes());
-    }
-}
-
-impl Hashable for UserAgentCredentials {
-    fn hash<H: sha2::Digest>(&self, hasher: &mut H) {
-        self.pubkey.hash(hasher);
-        self.nonce.hash(hasher);
-    }
-}
```
```diff
@@ -1,8 +1,9 @@
+use arbiter_crypto::authn;
+
 use std::{borrow::Cow, collections::HashMap};
 
 use arbiter_proto::transport::Sender;
 use async_trait::async_trait;
-use ed25519_dalek::VerifyingKey;
 use kameo::{Actor, actor::ActorRef, messages};
 use thiserror::Error;
 use tracing::error;
@@ -12,33 +13,32 @@ use crate::actors::{
     flow_coordinator::{RegisterUserAgent, client_connect_approval::ClientApprovalController},
     user_agent::{OutOfBand, UserAgentConnection},
 };
 
 mod state;
 use state::{DummyContext, UserAgentEvents, UserAgentStateMachine};
 
 #[derive(Debug, Error)]
-pub enum Error {
-    #[error("State transition failed")]
-    State,
-
+pub enum UserAgentSessionError {
     #[error("Internal error: {message}")]
     Internal { message: Cow<'static, str> },
+
+    #[error("State transition failed")]
+    State,
 }
 
-impl From<crate::db::PoolError> for Error {
+impl From<crate::db::PoolError> for UserAgentSessionError {
     fn from(err: crate::db::PoolError) -> Self {
         error!(?err, "Database pool error");
         Self::internal("Database pool error")
     }
 }
-impl From<diesel::result::Error> for Error {
+impl From<diesel::result::Error> for UserAgentSessionError {
     fn from(err: diesel::result::Error) -> Self {
         error!(?err, "Database error");
         Self::internal("Database error")
     }
 }
 
-impl Error {
+impl UserAgentSessionError {
     pub fn internal(message: impl Into<Cow<'static, str>>) -> Self {
         Self::Internal {
             message: message.into(),
@@ -47,6 +47,7 @@ impl Error {
 }
 
 pub struct PendingClientApproval {
+    pubkey: authn::PublicKey,
     controller: ActorRef<ClientApprovalController>,
 }
 
@@ -55,7 +56,7 @@ pub struct UserAgentSession {
     state: UserAgentStateMachine<DummyContext>,
     sender: Box<dyn Sender<OutOfBand>>,
 
-    pending_client_approvals: HashMap<VerifyingKey, PendingClientApproval>,
+    pending_client_approvals: HashMap<Vec<u8>, PendingClientApproval>,
 }
 
 pub mod connection;
@@ -66,7 +67,7 @@ impl UserAgentSession {
             props,
             state: UserAgentStateMachine::new(DummyContext),
             sender,
-            pending_client_approvals: Default::default(),
+            pending_client_approvals: HashMap::default(),
         }
     }
 
@@ -86,10 +87,10 @@ impl UserAgentSession {
         Self::new(UserAgentConnection::new(db, actors), Box::new(DummySender))
     }
 
-    fn transition(&mut self, event: UserAgentEvents) -> Result<(), Error> {
+    fn transition(&mut self, event: UserAgentEvents) -> Result<(), UserAgentSessionError> {
         self.state.process_event(event).map_err(|e| {
             error!(?e, "State transition failed");
-            Error::State
+            UserAgentSessionError::State
        })?;
         Ok(())
     }
@@ -118,19 +119,24 @@ impl UserAgentSession {
             return;
         }
 
-        self.pending_client_approvals
-            .insert(client.pubkey, PendingClientApproval { controller });
+        self.pending_client_approvals.insert(
+            client.pubkey.to_bytes(),
+            PendingClientApproval {
+                pubkey: client.pubkey,
+                controller,
+            },
+        );
     }
 }
 
 impl Actor for UserAgentSession {
     type Args = Self;
 
-    type Error = Error;
+    type Error = UserAgentSessionError;
 
     async fn on_start(
         args: Self::Args,
-        this: kameo::prelude::ActorRef<Self>,
+        this: ActorRef<Self>,
     ) -> Result<Self, Self::Error> {
         args.props
             .actors
@@ -144,7 +150,9 @@ impl Actor for UserAgentSession {
                 ?err,
                 "Failed to register user agent connection with flow coordinator"
             );
-            Error::internal("Failed to register user agent connection with flow coordinator")
+            UserAgentSessionError::internal(
+                "Failed to register user agent connection with flow coordinator",
+            )
         })?;
         Ok(args)
     }
@@ -158,14 +166,18 @@ impl Actor for UserAgentSession {
         let cancelled_pubkey = self
             .pending_client_approvals
             .iter()
-            .find_map(|(k, v)| (v.controller.id() == id).then_some(*k));
+            .find_map(|(k, v)| (v.controller.id() == id).then_some(k.clone()));
 
-        if let Some(pubkey) = cancelled_pubkey {
-            self.pending_client_approvals.remove(&pubkey);
+        if let Some(pubkey_bytes) = cancelled_pubkey {
+            let Some(approval) = self.pending_client_approvals.remove(&pubkey_bytes) else {
+                return Ok(std::ops::ControlFlow::Continue(()));
+            };
+
             if let Err(e) = self
                 .sender
-                .send(OutOfBand::ClientConnectionCancel { pubkey })
+                .send(OutOfBand::ClientConnectionCancel {
+                    pubkey: approval.pubkey,
+                })
                 .await
             {
                 error!(
```
```diff
@@ -1,101 +1,46 @@
 use std::sync::Mutex;
 
 use alloy::{consensus::TxEip1559, primitives::Address, signers::Signature};
+use arbiter_crypto::{
+    authn,
+    safecell::{SafeCell, SafeCellHandle as _},
+};
 use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
 use diesel::{ExpressionMethods as _, QueryDsl as _, SelectableHelper};
 use diesel_async::{AsyncConnection, RunQueryDsl};
 use kameo::error::SendError;
 use kameo::messages;
 use kameo::prelude::Context;
+use thiserror::Error;
 use tracing::{error, info};
 use x25519_dalek::{EphemeralSecret, PublicKey};
 
+use crate::{actors::flow_coordinator::client_connect_approval::ClientApprovalAnswer, evm::policies::SharedGrantSettings};
 use crate::actors::keyholder::KeyHolderState;
-use crate::actors::user_agent::session::Error;
+use crate::actors::user_agent::session::UserAgentSessionError;
+use crate::actors::{
+    evm::{
+        ClientSignTransaction, Generate, ListWallets, SignTransactionError as EvmSignError,
+        UseragentCreateGrant, UseragentListGrants,
+    },
+    keyholder::{self, Bootstrap, TryUnseal},
+    user_agent::session::{
+        UserAgentSession,
+        state::{UnsealContext, UserAgentEvents, UserAgentStates},
+    },
+};
 use crate::db::models::{
     EvmWalletAccess, NewEvmWalletAccess, ProgramClient, ProgramClientMetadata,
 };
 use crate::evm::policies::{Grant, SpecificGrant};
-use crate::safe_cell::SafeCell;
-use crate::{
-    actors::flow_coordinator::client_connect_approval::ClientApprovalAnswer,
-    crypto::integrity::{self, Verified},
-};
-use crate::{
-    actors::{
-        evm::{
-            ClientSignTransaction, Generate, ListWallets, SignTransactionError as EvmSignError,
-            UseragentCreateGrant, UseragentDeleteGrant, UseragentListGrants,
-        },
-        keyholder::{self, Bootstrap, TryUnseal},
-        user_agent::session::{
-            UserAgentSession,
-            state::{UnsealContext, UserAgentEvents, UserAgentStates},
-        },
-        user_agent::{AuthPublicKey, UserAgentCredentials},
-    },
-    db::schema::useragent_client,
-    safe_cell::SafeCellHandle as _,
-};
-
-fn is_vault_sealed_from_evm<M>(err: &SendError<M, crate::actors::evm::Error>) -> bool {
-    matches!(
-        err,
-        SendError::HandlerError(crate::actors::evm::Error::Keyholder(
-            keyholder::Error::NotBootstrapped
-        )) | SendError::HandlerError(crate::actors::evm::Error::Integrity(
-            crate::crypto::integrity::Error::Keyholder(keyholder::Error::NotBootstrapped)
-        ))
-    )
-}
 
 impl UserAgentSession {
-    async fn backfill_useragent_integrity(&self) -> Result<(), Error> {
-        let mut conn = self.props.db.get().await?;
-        let keyholder = self.props.actors.key_holder.clone();
-
-        conn.transaction(|conn| {
-            Box::pin(async move {
-                let rows: Vec<(i32, i32, Vec<u8>, crate::db::models::KeyType)> =
-                    useragent_client::table
-                        .select((
-                            useragent_client::id,
-                            useragent_client::nonce,
-                            useragent_client::public_key,
-                            useragent_client::key_type,
-                        ))
-                        .load(conn)
-                        .await?;
-
-                for (id, nonce, public_key, key_type) in rows {
-                    let pubkey = AuthPublicKey::try_from((key_type, public_key)).map_err(|e| {
-                        Error::internal(format!("Invalid user-agent key in db: {e}"))
-                    })?;
-
-                    integrity::sign_entity(
-                        conn,
-                        &keyholder,
-                        &UserAgentCredentials { pubkey, nonce },
-                        id,
-                    )
-                    .await
-                    .map_err(|e| {
-                        Error::internal(format!("Failed to backfill user-agent integrity: {e}"))
-                    })?;
-                }
-
-                Result::<_, Error>::Ok(())
-            })
-        })
-        .await?;
-
-        Ok(())
-    }
-
-    fn take_unseal_secret(&mut self) -> Result<(EphemeralSecret, PublicKey), Error> {
+    fn take_unseal_secret(&self) -> Result<(EphemeralSecret, PublicKey), UserAgentSessionError> {
         let UserAgentStates::WaitingForUnsealKey(unseal_context) = self.state.state() else {
             error!("Received encrypted key in invalid state");
-            return Err(Error::internal("Invalid state for unseal encrypted key"));
+            return Err(UserAgentSessionError::internal(
+                "Invalid state for unseal encrypted key",
+            ));
         };
 
         let ephemeral_secret = {
@@ -105,13 +50,14 @@ impl UserAgentSession {
             )]
             let mut secret_lock = unseal_context.secret.lock().unwrap();
             let secret = secret_lock.take();
-            match secret {
-                Some(secret) => secret,
-                None => {
-                    drop(secret_lock);
-                    error!("Ephemeral secret already taken");
-                    return Err(Error::internal("Ephemeral secret already taken"));
-                }
+            if let Some(secret) = secret {
+                secret
+            } else {
+                drop(secret_lock);
+                error!("Ephemeral secret already taken");
+                return Err(UserAgentSessionError::internal(
+                    "Ephemeral secret already taken",
+                ));
             }
         };
 
@@ -137,7 +83,7 @@ impl UserAgentSession {
         });
 
         match decryption_result {
-            Ok(_) => Ok(key_buffer),
+            Ok(()) => Ok(key_buffer),
             Err(err) => {
                 error!(?err, "Failed to decrypt encrypted key material");
                 Err(())
@@ -155,7 +101,7 @@ pub enum UnsealError {
     #[error("Invalid key provided for unsealing")]
     InvalidKey,
     #[error("Internal error during unsealing process")]
-    General(#[from] super::Error),
+    General(#[from] UserAgentSessionError),
 }
 
 #[derive(Debug, Error)]
@@ -166,7 +112,7 @@ pub enum BootstrapError {
     AlreadyBootstrapped,
 
     #[error("Internal error during bootstrapping process")]
-    General(#[from] super::Error),
+    General(#[from] UserAgentSessionError),
 }
 
 #[derive(Debug, Error)]
@@ -190,16 +136,16 @@ pub enum GrantMutationError {
 #[messages]
 impl UserAgentSession {
     #[message]
-    pub async fn handle_unseal_request(
+    pub fn handle_unseal_request(
         &mut self,
-        client_pubkey: x25519_dalek::PublicKey,
-    ) -> Result<UnsealStartResponse, Error> {
+        client_pubkey: PublicKey,
+    ) -> Result<UnsealStartResponse, UserAgentSessionError> {
         let secret = EphemeralSecret::random();
         let public_key = PublicKey::from(&secret);
 
         self.transition(UserAgentEvents::UnsealRequest(UnsealContext {
-            secret: Mutex::new(Some(secret)),
             client_public_key: client_pubkey,
+            secret: Mutex::new(Some(secret)),
         }))?;
 
         Ok(UnsealStartResponse {
```
||||||
@@ -216,27 +162,24 @@ impl UserAgentSession {
|
|||||||
) -> Result<(), UnsealError> {
|
) -> Result<(), UnsealError> {
|
||||||
let (ephemeral_secret, client_public_key) = match self.take_unseal_secret() {
|
let (ephemeral_secret, client_public_key) = match self.take_unseal_secret() {
|
||||||
Ok(values) => values,
|
Ok(values) => values,
|
||||||
Err(Error::State) => {
|
Err(UserAgentSessionError::State) => {
|
||||||
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
||||||
return Err(UnsealError::InvalidKey);
|
return Err(UnsealError::InvalidKey);
|
||||||
}
|
}
|
||||||
Err(_err) => {
|
Err(_err) => {
|
||||||
return Err(Error::internal("Failed to take unseal secret").into());
|
return Err(UserAgentSessionError::internal("Failed to take unseal secret").into());
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
let seal_key_buffer = match Self::decrypt_client_key_material(
|
let Ok(seal_key_buffer) = Self::decrypt_client_key_material(
|
||||||
ephemeral_secret,
|
ephemeral_secret,
|
||||||
client_public_key,
|
client_public_key,
|
||||||
&nonce,
|
&nonce,
|
||||||
&ciphertext,
|
&ciphertext,
|
||||||
&associated_data,
|
&associated_data,
|
||||||
) {
|
) else {
|
||||||
Ok(buffer) => buffer,
|
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
||||||
Err(()) => {
|
return Err(UnsealError::InvalidKey);
|
||||||
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
|
||||||
return Err(UnsealError::InvalidKey);
|
|
||||||
}
|
|
||||||
};
|
};
|
||||||
|
|
||||||
match self
|
match self
|
||||||
@@ -248,13 +191,12 @@ impl UserAgentSession {
|
|||||||
})
|
})
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
Ok(_) => {
|
Ok(()) => {
|
||||||
self.backfill_useragent_integrity().await?;
|
|
||||||
info!("Successfully unsealed key with client-provided key");
|
info!("Successfully unsealed key with client-provided key");
|
||||||
self.transition(UserAgentEvents::ReceivedValidKey)?;
|
self.transition(UserAgentEvents::ReceivedValidKey)?;
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
Err(SendError::HandlerError(keyholder::Error::InvalidKey)) => {
|
Err(SendError::HandlerError(keyholder::KeyHolderError::InvalidKey)) => {
|
||||||
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
||||||
Err(UnsealError::InvalidKey)
|
Err(UnsealError::InvalidKey)
|
||||||
}
|
}
|
||||||
@@ -266,7 +208,7 @@ impl UserAgentSession {
|
|||||||
Err(err) => {
|
Err(err) => {
|
||||||
error!(?err, "Failed to send unseal request to keyholder");
|
error!(?err, "Failed to send unseal request to keyholder");
|
||||||
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
||||||
Err(Error::internal("Vault actor error").into())
|
Err(UserAgentSessionError::internal("Vault actor error").into())
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -280,25 +222,22 @@ impl UserAgentSession {
|
|||||||
) -> Result<(), BootstrapError> {
|
) -> Result<(), BootstrapError> {
|
||||||
let (ephemeral_secret, client_public_key) = match self.take_unseal_secret() {
|
let (ephemeral_secret, client_public_key) = match self.take_unseal_secret() {
|
||||||
Ok(values) => values,
|
Ok(values) => values,
|
||||||
Err(Error::State) => {
|
Err(UserAgentSessionError::State) => {
|
||||||
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
||||||
return Err(BootstrapError::InvalidKey);
|
return Err(BootstrapError::InvalidKey);
|
||||||
}
|
}
|
||||||
Err(err) => return Err(err.into()),
|
Err(err) => return Err(err.into()),
|
||||||
};
|
};
|
||||||
|
|
||||||
let seal_key_buffer = match Self::decrypt_client_key_material(
|
let Ok(seal_key_buffer) = Self::decrypt_client_key_material(
|
||||||
ephemeral_secret,
|
ephemeral_secret,
|
||||||
client_public_key,
|
client_public_key,
|
||||||
&nonce,
|
&nonce,
|
||||||
&ciphertext,
|
&ciphertext,
|
||||||
&associated_data,
|
&associated_data,
|
||||||
) {
|
) else {
|
||||||
Ok(buffer) => buffer,
|
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
||||||
Err(()) => {
|
return Err(BootstrapError::InvalidKey);
|
||||||
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
|
||||||
return Err(BootstrapError::InvalidKey);
|
|
||||||
}
|
|
||||||
};
|
};
|
||||||
|
|
||||||
match self
|
match self
|
||||||
@@ -310,13 +249,12 @@ impl UserAgentSession {
|
|||||||
})
|
})
|
||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
Ok(_) => {
|
Ok(()) => {
|
||||||
self.backfill_useragent_integrity().await?;
|
|
||||||
info!("Successfully bootstrapped vault with client-provided key");
|
info!("Successfully bootstrapped vault with client-provided key");
|
||||||
self.transition(UserAgentEvents::ReceivedValidKey)?;
|
self.transition(UserAgentEvents::ReceivedValidKey)?;
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
Err(SendError::HandlerError(keyholder::Error::AlreadyBootstrapped)) => {
|
Err(SendError::HandlerError(keyholder::KeyHolderError::AlreadyBootstrapped)) => {
|
||||||
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
||||||
Err(BootstrapError::AlreadyBootstrapped)
|
Err(BootstrapError::AlreadyBootstrapped)
|
||||||
}
|
}
|
||||||
@@ -328,7 +266,7 @@ impl UserAgentSession {
|
|||||||
Err(err) => {
|
Err(err) => {
|
||||||
error!(?err, "Failed to send bootstrap request to keyholder");
|
error!(?err, "Failed to send bootstrap request to keyholder");
|
||||||
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
||||||
Err(BootstrapError::General(Error::internal(
|
Err(BootstrapError::General(UserAgentSessionError::internal(
|
||||||
"Vault actor error",
|
"Vault actor error",
|
||||||
)))
|
)))
|
||||||
}
|
}
|
||||||
@@ -339,14 +277,16 @@ impl UserAgentSession {
|
|||||||
#[messages]
|
#[messages]
|
||||||
impl UserAgentSession {
|
impl UserAgentSession {
|
||||||
#[message]
|
#[message]
|
||||||
pub(crate) async fn handle_query_vault_state(&mut self) -> Result<KeyHolderState, Error> {
|
pub(crate) async fn handle_query_vault_state(
|
||||||
|
&mut self,
|
||||||
|
) -> Result<KeyHolderState, UserAgentSessionError> {
|
||||||
use crate::actors::keyholder::GetState;
|
use crate::actors::keyholder::GetState;
|
||||||
|
|
||||||
let vault_state = match self.props.actors.key_holder.ask(GetState {}).await {
|
let vault_state = match self.props.actors.key_holder.ask(GetState {}).await {
|
||||||
Ok(state) => state,
|
Ok(state) => state,
|
||||||
Err(err) => {
|
Err(err) => {
|
||||||
error!(?err, actor = "useragent", "keyholder.query.failed");
|
error!(?err, actor = "useragent", "keyholder.query.failed");
|
||||||
return Err(Error::internal("Vault is in broken state"));
|
return Err(UserAgentSessionError::internal("Vault is in broken state"));
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
@@ -357,26 +297,32 @@ impl UserAgentSession {
|
|||||||
#[messages]
|
#[messages]
|
||||||
impl UserAgentSession {
|
impl UserAgentSession {
|
||||||
#[message]
|
#[message]
|
||||||
pub(crate) async fn handle_evm_wallet_create(&mut self) -> Result<(i32, Address), Error> {
|
pub(crate) async fn handle_evm_wallet_create(
|
||||||
|
&mut self,
|
||||||
|
) -> Result<(i32, Address), UserAgentSessionError> {
|
||||||
match self.props.actors.evm.ask(Generate {}).await {
|
match self.props.actors.evm.ask(Generate {}).await {
|
||||||
Ok(address) => Ok(address),
|
Ok(address) => Ok(address),
|
||||||
Err(SendError::HandlerError(err)) => Err(Error::internal(format!(
|
Err(SendError::HandlerError(err)) => Err(UserAgentSessionError::internal(format!(
|
||||||
"EVM wallet generation failed: {err}"
|
"EVM wallet generation failed: {err}"
|
||||||
))),
|
))),
|
||||||
Err(err) => {
|
Err(err) => {
|
||||||
error!(?err, "EVM actor unreachable during wallet create");
|
error!(?err, "EVM actor unreachable during wallet create");
|
||||||
Err(Error::internal("EVM actor unreachable"))
|
Err(UserAgentSessionError::internal("EVM actor unreachable"))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
#[message]
|
#[message]
|
||||||
pub(crate) async fn handle_evm_wallet_list(&mut self) -> Result<Vec<(i32, Address)>, Error> {
|
pub(crate) async fn handle_evm_wallet_list(
|
||||||
|
&mut self,
|
||||||
|
) -> Result<Vec<(i32, Address)>, UserAgentSessionError> {
|
||||||
match self.props.actors.evm.ask(ListWallets {}).await {
|
match self.props.actors.evm.ask(ListWallets {}).await {
|
||||||
Ok(wallets) => Ok(wallets),
|
Ok(wallets) => Ok(wallets),
|
||||||
Err(err) => {
|
Err(err) => {
|
||||||
error!(?err, "EVM wallet list failed");
|
error!(?err, "EVM wallet list failed");
|
||||||
Err(Error::internal("Failed to list EVM wallets"))
|
Err(UserAgentSessionError::internal(
|
||||||
|
"Failed to list EVM wallets",
|
||||||
|
))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -387,13 +333,12 @@ impl UserAgentSession {
|
|||||||
#[message]
|
#[message]
|
||||||
pub(crate) async fn handle_grant_list(
|
pub(crate) async fn handle_grant_list(
|
||||||
&mut self,
|
&mut self,
|
||||||
) -> Result<Vec<Grant<SpecificGrant>>, GrantMutationError> {
|
) -> Result<Vec<Grant<SpecificGrant>>, UserAgentSessionError> {
|
||||||
match self.props.actors.evm.ask(UseragentListGrants {}).await {
|
match self.props.actors.evm.ask(UseragentListGrants {}).await {
|
||||||
Ok(grants) => Ok(grants),
|
Ok(grants) => Ok(grants),
|
||||||
Err(err) if is_vault_sealed_from_evm(&err) => Err(GrantMutationError::VaultSealed),
|
|
||||||
Err(err) => {
|
Err(err) => {
|
||||||
error!(?err, "EVM grant list failed");
|
error!(?err, "EVM grant list failed");
|
||||||
Err(GrantMutationError::Internal)
|
Err(UserAgentSessionError::internal("Failed to list EVM grants"))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -401,9 +346,9 @@ impl UserAgentSession {
|
|||||||
#[message]
|
#[message]
|
||||||
pub(crate) async fn handle_grant_create(
|
pub(crate) async fn handle_grant_create(
|
||||||
&mut self,
|
&mut self,
|
||||||
basic: crate::evm::policies::SharedGrantSettings,
|
basic: SharedGrantSettings,
|
||||||
grant: crate::evm::policies::SpecificGrant,
|
grant: SpecificGrant,
|
||||||
) -> Result<Verified<i32>, GrantMutationError> {
|
) -> Result<i32, GrantMutationError> {
|
||||||
match self
|
match self
|
||||||
.props
|
.props
|
||||||
.actors
|
.actors
|
||||||
@@ -412,7 +357,6 @@ impl UserAgentSession {
|
|||||||
.await
|
.await
|
||||||
{
|
{
|
||||||
Ok(grant_id) => Ok(grant_id),
|
Ok(grant_id) => Ok(grant_id),
|
||||||
Err(err) if is_vault_sealed_from_evm(&err) => Err(GrantMutationError::VaultSealed),
|
|
||||||
Err(err) => {
|
Err(err) => {
|
||||||
error!(?err, "EVM grant create failed");
|
error!(?err, "EVM grant create failed");
|
||||||
Err(GrantMutationError::Internal)
|
Err(GrantMutationError::Internal)
|
||||||
@@ -421,26 +365,26 @@ impl UserAgentSession {
|
|||||||
}
|
}
|
||||||
|
|
||||||
#[message]
|
#[message]
|
||||||
|
#[expect(clippy::unused_async, reason = "false positive")]
|
||||||
pub(crate) async fn handle_grant_delete(
|
pub(crate) async fn handle_grant_delete(
|
||||||
&mut self,
|
&mut self,
|
||||||
grant_id: i32,
|
grant_id: i32,
|
||||||
) -> Result<(), GrantMutationError> {
|
) -> Result<(), GrantMutationError> {
|
||||||
match self
|
// match self
|
||||||
.props
|
// .props
|
||||||
.actors
|
// .actors
|
||||||
.evm
|
// .evm
|
||||||
.ask(UseragentDeleteGrant {
|
// .ask(UseragentDeleteGrant { grant_id })
|
||||||
_grant_id: grant_id,
|
// .await
|
||||||
})
|
// {
|
||||||
.await
|
// Ok(()) => Ok(()),
|
||||||
{
|
// Err(err) => {
|
||||||
Ok(()) => Ok(()),
|
// error!(?err, "EVM grant delete failed");
|
||||||
Err(err) if is_vault_sealed_from_evm(&err) => Err(GrantMutationError::VaultSealed),
|
// Err(GrantMutationError::Internal)
|
||||||
Err(err) => {
|
// }
|
||||||
error!(?err, "EVM grant delete failed");
|
// }
|
||||||
Err(GrantMutationError::Internal)
|
let _ = grant_id;
|
||||||
}
|
todo!()
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
#[message]
|
#[message]
|
||||||
@@ -476,7 +420,7 @@ impl UserAgentSession {
|
|||||||
pub(crate) async fn handle_grant_evm_wallet_access(
|
pub(crate) async fn handle_grant_evm_wallet_access(
|
||||||
&mut self,
|
&mut self,
|
||||||
entries: Vec<NewEvmWalletAccess>,
|
entries: Vec<NewEvmWalletAccess>,
|
||||||
) -> Result<(), Error> {
|
) -> Result<(), UserAgentSessionError> {
|
||||||
let mut conn = self.props.db.get().await?;
|
let mut conn = self.props.db.get().await?;
|
||||||
conn.transaction(|conn| {
|
conn.transaction(|conn| {
|
||||||
Box::pin(async move {
|
Box::pin(async move {
|
||||||
@@ -490,7 +434,7 @@ impl UserAgentSession {
|
|||||||
.await?;
|
.await?;
|
||||||
}
|
}
|
||||||
|
|
||||||
Result::<_, Error>::Ok(())
|
Result::<_, UserAgentSessionError>::Ok(())
|
||||||
})
|
})
|
||||||
})
|
})
|
||||||
.await?;
|
.await?;
|
||||||
@@ -501,7 +445,7 @@ impl UserAgentSession {
|
|||||||
pub(crate) async fn handle_revoke_evm_wallet_access(
|
pub(crate) async fn handle_revoke_evm_wallet_access(
|
||||||
&mut self,
|
&mut self,
|
||||||
entries: Vec<i32>,
|
entries: Vec<i32>,
|
||||||
) -> Result<(), Error> {
|
) -> Result<(), UserAgentSessionError> {
|
||||||
let mut conn = self.props.db.get().await?;
|
let mut conn = self.props.db.get().await?;
|
||||||
conn.transaction(|conn| {
|
conn.transaction(|conn| {
|
||||||
Box::pin(async move {
|
Box::pin(async move {
|
||||||
@@ -513,7 +457,7 @@ impl UserAgentSession {
|
|||||||
.await?;
|
.await?;
|
||||||
}
|
}
|
||||||
|
|
||||||
Result::<_, Error>::Ok(())
|
Result::<_, UserAgentSessionError>::Ok(())
|
||||||
})
|
})
|
||||||
})
|
})
|
||||||
.await?;
|
.await?;
|
||||||
@@ -523,10 +467,9 @@ impl UserAgentSession {
|
|||||||
#[message]
|
#[message]
|
||||||
pub(crate) async fn handle_list_wallet_access(
|
pub(crate) async fn handle_list_wallet_access(
|
||||||
&mut self,
|
&mut self,
|
||||||
) -> Result<Vec<EvmWalletAccess>, Error> {
|
) -> Result<Vec<EvmWalletAccess>, UserAgentSessionError> {
|
||||||
let mut conn = self.props.db.get().await?;
|
let mut conn = self.props.db.get().await?;
|
||||||
use crate::db::schema::evm_wallet_access;
|
let access_entries = crate::db::schema::evm_wallet_access::table
|
||||||
let access_entries = evm_wallet_access::table
|
|
||||||
.select(EvmWalletAccess::as_select())
|
.select(EvmWalletAccess::as_select())
|
||||||
.load::<_>(&mut conn)
|
.load::<_>(&mut conn)
|
||||||
.await?;
|
.await?;
|
||||||
@@ -540,15 +483,15 @@ impl UserAgentSession {
|
|||||||
pub(crate) async fn handle_new_client_approve(
|
pub(crate) async fn handle_new_client_approve(
|
||||||
&mut self,
|
&mut self,
|
||||||
approved: bool,
|
approved: bool,
|
||||||
pubkey: ed25519_dalek::VerifyingKey,
|
pubkey: authn::PublicKey,
|
||||||
ctx: &mut Context<Self, Result<(), Error>>,
|
ctx: &Context<Self, Result<(), UserAgentSessionError>>,
|
||||||
) -> Result<(), Error> {
|
) -> Result<(), UserAgentSessionError> {
|
||||||
let pending_approval = match self.pending_client_approvals.remove(&pubkey) {
|
let Some(pending_approval) = self.pending_client_approvals.remove(&pubkey.to_bytes())
|
||||||
Some(approval) => approval,
|
else {
|
||||||
None => {
|
error!("Received client connection response for unknown client");
|
||||||
error!("Received client connection response for unknown client");
|
return Err(UserAgentSessionError::internal(
|
||||||
return Err(Error::internal("Unknown client in connection response"));
|
"Unknown client in connection response",
|
||||||
}
|
));
|
||||||
};
|
};
|
||||||
|
|
||||||
pending_approval
|
pending_approval
|
||||||
@@ -560,7 +503,9 @@ impl UserAgentSession {
|
|||||||
?err,
|
?err,
|
||||||
"Failed to send client approval response to controller"
|
"Failed to send client approval response to controller"
|
||||||
);
|
);
|
||||||
Error::internal("Failed to send client approval response to controller")
|
UserAgentSessionError::internal(
|
||||||
|
"Failed to send client approval response to controller",
|
||||||
|
)
|
||||||
})?;
|
})?;
|
||||||
|
|
||||||
ctx.actor_ref().unlink(&pending_approval.controller).await;
|
ctx.actor_ref().unlink(&pending_approval.controller).await;
|
||||||
@@ -571,7 +516,7 @@ impl UserAgentSession {
|
|||||||
#[message]
|
#[message]
|
||||||
pub(crate) async fn handle_sdk_client_list(
|
pub(crate) async fn handle_sdk_client_list(
|
||||||
&mut self,
|
&mut self,
|
||||||
) -> Result<Vec<(ProgramClient, ProgramClientMetadata)>, Error> {
|
) -> Result<Vec<(ProgramClient, ProgramClientMetadata)>, UserAgentSessionError> {
|
||||||
use crate::db::schema::{client_metadata, program_client};
|
use crate::db::schema::{client_metadata, program_client};
|
||||||
let mut conn = self.props.db.get().await?;
|
let mut conn = self.props.db.get().await?;
|
||||||
|
|
||||||
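The recurring refactor in the hunks above replaces a `match` on an `Option`/`Result` that has one early-return arm with a `let … else` binding. A minimal standalone sketch of the pattern (the `SecretStore` type and `take_secret` method here are illustrative, not from the codebase):

```rust
// Sketch of the `let … else` pattern applied throughout this diff:
// the failure arm returns early, and the happy path stays unindented.
#[derive(Debug)]
struct SecretStore {
    secret: Option<u32>,
}

impl SecretStore {
    fn take_secret(&mut self) -> Result<u32, String> {
        // Before the refactor this would be a `match self.secret.take() { … }`
        // with a `None => return Err(…)` arm.
        let Some(secret) = self.secret.take() else {
            return Err("secret already taken".to_string());
        };
        Ok(secret)
    }
}

fn main() {
    let mut store = SecretStore { secret: Some(42) };
    assert_eq!(store.take_secret(), Ok(42));
    // A second take fails: the secret was consumed by the first call.
    assert!(store.take_secret().is_err());
}
```

The `else` block of a `let … else` must diverge (`return`, `break`, or panic), which is exactly why it fits these "transition state, then bail out" error paths.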
```diff
@@ -19,8 +19,6 @@ smlang::statemachine!(

 pub struct DummyContext;
 impl UserAgentStateMachineContext for DummyContext {
-    #[allow(missing_docs)]
-    #[allow(clippy::unused_unit)]
     fn generate_temp_keypair(&mut self, event_data: UnsealContext) -> Result<UnsealContext, ()> {
         Ok(event_data)
     }
```
```diff
@@ -25,22 +25,22 @@ pub enum InitError {
     Tls(#[from] tls::InitError),

     #[error("Actor spawn failed: {0}")]
-    ActorSpawn(#[from] crate::actors::SpawnError),
+    ActorSpawn(#[from] crate::actors::GlobalActorsSpawnError),

     #[error("I/O Error: {0}")]
     Io(#[from] std::io::Error),
 }

-pub struct _ServerContextInner {
+pub struct __ServerContextInner {
     pub db: db::DatabasePool,
     pub tls: TlsManager,
     pub actors: GlobalActors,
 }
 #[derive(Clone)]
-pub struct ServerContext(Arc<_ServerContextInner>);
+pub struct ServerContext(Arc<__ServerContextInner>);

 impl std::ops::Deref for ServerContext {
-    type Target = _ServerContextInner;
+    type Target = __ServerContextInner;

     fn deref(&self) -> &Self::Target {
         &self.0
@@ -49,7 +49,7 @@ impl std::ops::Deref for ServerContext {

 impl ServerContext {
     pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
-        Ok(Self(Arc::new(_ServerContextInner {
+        Ok(Self(Arc::new(__ServerContextInner {
             actors: GlobalActors::spawn(db.clone()).await?,
             tls: TlsManager::new(db.clone()).await?,
             db,
```
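`ServerContext` in the hunk above is an `Arc`-wrapped inner struct with a `Deref` impl, so clones are cheap handles to shared state while fields remain directly accessible on the wrapper. A minimal sketch of the same pattern (the `Context`/`ContextInner` names and `name` field are simplified stand-ins):

```rust
use std::ops::Deref;
use std::sync::Arc;

// Illustrative inner state; the real struct holds db, TLS, and actor handles.
pub struct ContextInner {
    pub name: String,
}

// Cheap-to-clone handle: cloning only bumps the Arc refcount.
#[derive(Clone)]
pub struct Context(Arc<ContextInner>);

impl Context {
    pub fn new(name: &str) -> Self {
        Self(Arc::new(ContextInner {
            name: name.to_string(),
        }))
    }
}

impl Deref for Context {
    type Target = ContextInner;

    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

fn main() {
    let ctx = Context::new("server");
    let clone = ctx.clone();
    // Deref lets callers read inner fields directly on the wrapper.
    assert_eq!(ctx.name, "server");
    assert_eq!(clone.name, "server");
}
```

The trade-off is the usual one for `Deref`-to-inner newtypes: ergonomic field access, at the cost of the wrapper exposing the whole inner API.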
```diff
@@ -22,9 +22,10 @@ use crate::db::{
 };

 const ENCODE_CONFIG: pem::EncodeConfig = {
-    let line_ending = match cfg!(target_family = "windows") {
-        true => pem::LineEnding::CRLF,
-        false => pem::LineEnding::LF,
+    let line_ending = if cfg!(target_family = "windows") {
+        pem::LineEnding::CRLF
+    } else {
+        pem::LineEnding::LF
     };
     pem::EncodeConfig::new().set_line_ending(line_ending)
 };
@@ -52,11 +53,14 @@ pub enum InitError {

 pub type PemCert = String;

-pub fn encode_cert_to_pem(cert: &CertificateDer) -> PemCert {
+pub fn encode_cert_to_pem(cert: &CertificateDer<'_>) -> PemCert {
     pem::encode_config(&Pem::new("CERTIFICATE", cert.to_vec()), ENCODE_CONFIG)
 }

-#[allow(unused)]
+#[expect(
+    unused,
+    reason = "may be needed for future cert rotation implementation"
+)]
 struct SerializedTls {
     cert_pem: PemCert,
     cert_key_pem: String,
@@ -85,7 +89,7 @@ impl TlsCa {

         let cert_key_pem = certified_issuer.key().serialize_pem();

-        #[allow(
+        #[expect(
             clippy::unwrap_used,
             reason = "Broken cert couldn't bootstrap server anyway"
         )]
@@ -124,7 +128,11 @@ impl TlsCa {
         })
     }

-    #[allow(unused)]
+    #[expect(
+        unused,
+        clippy::unnecessary_wraps,
+        reason = "may be needed for future cert rotation implementation"
+    )]
     fn serialize(&self) -> Result<SerializedTls, InitError> {
         let cert_key_pem = self.issuer.key().serialize_pem();
         Ok(SerializedTls {
@@ -133,7 +141,10 @@ impl TlsCa {
         })
     }

-    #[allow(unused)]
+    #[expect(
+        unused,
+        reason = "may be needed for future cert rotation implementation"
+    )]
     fn try_deserialize(cert_pem: &str, cert_key_pem: &str) -> Result<Self, InitError> {
         let keypair =
             KeyPair::from_pem(cert_key_pem).map_err(InitError::KeyDeserializationError)?;
@@ -234,10 +245,10 @@ impl TlsManager {
         }
     }

-    pub fn cert(&self) -> &CertificateDer<'static> {
+    pub const fn cert(&self) -> &CertificateDer<'static> {
         &self.cert
     }
-    pub fn ca_cert(&self) -> &CertificateDer<'static> {
+    pub const fn ca_cert(&self) -> &CertificateDer<'static> {
         &self.ca_cert
     }

```
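The hunks above systematically convert `#[allow(...)]` into `#[expect(..., reason = "...")]`. The difference (stable since Rust 1.81): `#[expect]` asserts that the lint *does* fire at that spot, and the compiler warns if the expectation becomes stale, so suppressions cannot silently outlive the code they excuse. A minimal sketch (the function and variable names are illustrative):

```rust
// `#[expect]` requires Rust 1.81+. If `unused_variables` stopped firing
// here (say, `scratch` was later used), the compiler would emit an
// `unfulfilled_lint_expectation` warning; a plain `#[allow]` would stay
// silent forever.
#[expect(unused_variables, reason = "kept to illustrate expect vs allow")]
fn demo() -> i32 {
    let scratch = 1; // triggers `unused_variables`, fulfilling the expectation
    2
}

fn main() {
    assert_eq!(demo(), 2);
}
```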
```diff
@@ -5,8 +5,8 @@ use rand::{
     rngs::{StdRng, SysRng},
 };

-pub const ROOT_KEY_TAG: &[u8] = "arbiter/seal/v1".as_bytes();
-pub const TAG: &[u8] = "arbiter/private-key/v1".as_bytes();
+pub const ROOT_KEY_TAG: &[u8] = b"arbiter/seal/v1";
+pub const TAG: &[u8] = b"arbiter/private-key/v1";

 pub const NONCE_LENGTH: usize = 24;

@@ -15,11 +15,13 @@ pub struct Nonce(pub [u8; NONCE_LENGTH]);
 impl Nonce {
     pub fn increment(&mut self) {
         for i in (0..self.0.len()).rev() {
-            if self.0[i] == 0xFF {
-                self.0[i] = 0;
-            } else {
-                self.0[i] += 1;
-                break;
+            if let Some(byte) = self.0.get_mut(i) {
+                if *byte == 0xFF {
+                    *byte = 0;
+                } else {
+                    *byte += 1;
+                    break;
+                }
             }
         }
     }
 }
@@ -45,24 +47,17 @@ pub type Salt = [u8; ArgonSalt::RECOMMENDED_LENGTH];

 pub fn generate_salt() -> Salt {
     let mut salt = Salt::default();
-    #[allow(
-        clippy::unwrap_used,
-        reason = "Rng failure is unrecoverable and should panic"
-    )]
-    let mut rng = StdRng::try_from_rng(&mut SysRng).unwrap();
+    let mut rng =
+        StdRng::try_from_rng(&mut SysRng).expect("Rng failure is unrecoverable and should panic");
     rng.fill_bytes(&mut salt);
     salt
 }

 #[cfg(test)]
 mod tests {
-    use std::ops::Deref as _;
-
     use super::*;
-    use crate::{
-        crypto::derive_key,
-        safe_cell::{SafeCell, SafeCellHandle as _},
-    };
+    use crate::crypto::derive_key;
+    use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};

     #[test]
     pub fn derive_seal_key_deterministic() {
@@ -77,7 +72,7 @@ mod tests {
         let key1_reader = key1.0.read();
         let key2_reader = key2.0.read();

-        assert_eq!(key1_reader.deref(), key2_reader.deref());
+        assert_eq!(&*key1_reader, &*key2_reader);
     }

     #[test]
@@ -88,14 +83,13 @@ mod tests {

         let mut key = derive_key(password, &salt);
         let key_reader = key.0.read();
-        let key_ref = key_reader.deref();

-        assert_ne!(key_ref.as_slice(), &[0u8; 32][..]);
+        assert_ne!(key_reader.as_slice(), &[0u8; 32][..]);
     }

     #[test]
     // We should fuzz this
-    pub fn test_nonce_increment() {
+    pub fn nonce_increment() {
         let mut nonce = Nonce([0u8; NONCE_LENGTH]);
         nonce.increment();

```
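`Nonce::increment` in the hunk above is a big-endian counter with carry: walk from the last byte, roll `0xFF` over to `0` and keep carrying, and stop as soon as a byte can simply be bumped. A standalone sketch of the same algorithm, using a free function instead of the `Nonce` newtype:

```rust
// Big-endian counter increment over a fixed-size nonce, mirroring the
// carry logic of `Nonce::increment` in the diff above.
const NONCE_LENGTH: usize = 24;

fn increment(nonce: &mut [u8; NONCE_LENGTH]) {
    for i in (0..nonce.len()).rev() {
        if nonce[i] == 0xFF {
            nonce[i] = 0; // this byte overflows; carry into the next one
        } else {
            nonce[i] += 1;
            break; // no carry needed, done
        }
    }
}

fn main() {
    // Simple case: ...00 00 -> ...00 01.
    let mut nonce = [0u8; NONCE_LENGTH];
    increment(&mut nonce);
    assert_eq!(nonce[NONCE_LENGTH - 1], 1);

    // Carry case: ...00 FF -> ...01 00.
    let mut nonce = [0u8; NONCE_LENGTH];
    nonce[NONCE_LENGTH - 1] = 0xFF;
    increment(&mut nonce);
    assert_eq!(nonce[NONCE_LENGTH - 1], 0);
    assert_eq!(nonce[NONCE_LENGTH - 2], 1);
}
```

Note the edge case worth fuzzing (as the test comment in the diff suggests): an all-`0xFF` nonce wraps around to all zeros, which a caller reusing the nonce for AEAD must treat as exhaustion.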
|
|||||||
@@ -1,34 +1,29 @@
|
|||||||
use crate::actors::keyholder;
|
use crate::actors::keyholder;
|
||||||
|
use arbiter_crypto::hashing::Hashable;
|
||||||
use hmac::Hmac;
|
use hmac::Hmac;
|
||||||
use sha2::Sha256;
|
use sha2::Sha256;
|
||||||
use std::future::Future;
|
|
||||||
use std::ops::Deref;
|
|
||||||
use std::pin::Pin;
|
|
||||||
|
|
||||||
use diesel::{ExpressionMethods as _, QueryDsl, dsl::insert_into, sqlite::Sqlite};
|
use diesel::{ExpressionMethods as _, QueryDsl, dsl::insert_into, sqlite::Sqlite};
|
||||||
use diesel_async::{AsyncConnection, RunQueryDsl};
|
use diesel_async::{AsyncConnection, RunQueryDsl};
|
||||||
use kameo::{actor::ActorRef, error::SendError};
|
use kameo::{actor::ActorRef, error::SendError};
|
||||||
use sha2::Digest as _;
|
use sha2::Digest as _;
|
||||||
|
|
||||||
pub mod hashing;
|
|
||||||
use self::hashing::Hashable;
|
|
||||||
|
|
||||||
use crate::{
|
use crate::{
|
||||||
actors::keyholder::{KeyHolder, SignIntegrity, VerifyIntegrity},
|
actors::keyholder::{KeyHolder, SignIntegrity, VerifyIntegrity},
|
||||||
db::{
|
db::{
|
||||||
self,
|
self,
|
||||||
models::{IntegrityEnvelope as IntegrityEnvelopeRow, NewIntegrityEnvelope},
|
models::{IntegrityEnvelope, NewIntegrityEnvelope},
|
||||||
schema::integrity_envelope,
|
schema::integrity_envelope,
|
||||||
},
|
},
|
||||||
};
|
};
|
||||||
|
|
||||||
#[derive(Debug, thiserror::Error)]
|
#[derive(Debug, thiserror::Error)]
|
||||||
pub enum Error {
|
pub enum IntegrityError {
|
||||||
#[error("Database error: {0}")]
|
#[error("Database error: {0}")]
|
||||||
Database(#[from] db::DatabaseError),
|
Database(#[from] db::DatabaseError),
|
||||||
|
|
||||||
#[error("KeyHolder error: {0}")]
|
#[error("KeyHolder error: {0}")]
|
||||||
Keyholder(#[from] keyholder::Error),
|
Keyholder(#[from] keyholder::KeyHolderError),
|
||||||
|
|
||||||
#[error("KeyHolder mailbox error")]
|
#[error("KeyHolder mailbox error")]
|
||||||
KeyholderSend,
|
KeyholderSend,
|
||||||
@@ -50,35 +45,11 @@ pub enum Error {
|
|||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
|
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
|
||||||
#[must_use]
|
|
||||||
pub enum AttestationStatus {
|
pub enum AttestationStatus {
|
||||||
Attested,
|
Attested,
|
||||||
Unavailable,
|
Unavailable,
|
||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Debug)]
|
|
||||||
pub struct Verified<T>(T);
|
|
||||||
|
|
||||||
impl<T> AsRef<T> for Verified<T> {
|
|
||||||
fn as_ref(&self) -> &T {
|
|
||||||
&self.0
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
impl<T> Verified<T> {
|
|
||||||
pub fn into_inner(self) -> T {
|
|
||||||
self.0
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
impl<T> Deref for Verified<T> {
|
|
||||||
type Target = T;
|
|
||||||
|
|
||||||
fn deref(&self) -> &Self::Target {
|
|
||||||
&self.0
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
pub const CURRENT_PAYLOAD_VERSION: i32 = 1;
|
pub const CURRENT_PAYLOAD_VERSION: i32 = 1;
|
||||||
pub const INTEGRITY_SUBKEY_TAG: &[u8] = b"arbiter/db-integrity-key/v1";
|
pub const INTEGRITY_SUBKEY_TAG: &[u8] = b"arbiter/db-integrity-key/v1";
|
||||||
|
|
||||||
@@ -96,6 +67,11 @@ fn payload_hash(payload: &impl Hashable) -> [u8; 32] {
|
|||||||
}
|
}
|
||||||
|
|
||||||
fn push_len_prefixed(out: &mut Vec<u8>, bytes: &[u8]) {
|
fn push_len_prefixed(out: &mut Vec<u8>, bytes: &[u8]) {
|
||||||
|
#[expect(
|
||||||
|
clippy::cast_possible_truncation,
|
||||||
|
clippy::as_conversions,
|
||||||
|
reason = "fixme! #85"
|
||||||
|
)]
|
||||||
out.extend_from_slice(&(bytes.len() as u32).to_be_bytes());
|
out.extend_from_slice(&(bytes.len() as u32).to_be_bytes());
|
||||||
out.extend_from_slice(bytes);
|
out.extend_from_slice(bytes);
|
||||||
}
|
}
|
||||||
```diff
@@ -114,95 +90,31 @@ fn build_mac_input(
     out
 }
 
-#[derive(Debug, Clone)]
-pub struct EntityId(Vec<u8>);
+pub trait IntoId {
+    fn into_id(self) -> Vec<u8>;
+}
 
-impl Deref for EntityId {
-    type Target = [u8];
-
-    fn deref(&self) -> &Self::Target {
-        &self.0
+impl IntoId for i32 {
+    fn into_id(self) -> Vec<u8> {
+        self.to_be_bytes().to_vec()
     }
 }
 
-impl From<i32> for EntityId {
-    fn from(value: i32) -> Self {
-        Self(value.to_be_bytes().to_vec())
+impl IntoId for &'_ [u8] {
+    fn into_id(self) -> Vec<u8> {
+        self.to_vec()
     }
 }
 
-impl From<&'_ [u8]> for EntityId {
-    fn from(bytes: &'_ [u8]) -> Self {
-        Self(bytes.to_vec())
-    }
-}
-
-pub async fn lookup_verified<E, C, F, Fut>(
-    conn: &mut C,
-    keyholder: &ActorRef<KeyHolder>,
-    entity_id: impl Into<EntityId>,
-    load: F,
-) -> Result<Verified<E>, Error>
-where
-    C: AsyncConnection<Backend = Sqlite>,
-    E: Integrable,
-    F: FnOnce(&mut C) -> Fut,
-    Fut: Future<Output = Result<E, db::DatabaseError>>,
-{
-    let entity = load(conn).await?;
-    verify_entity(conn, keyholder, &entity, entity_id).await?;
-    Ok(Verified(entity))
-}
-
-pub async fn lookup_verified_allow_unavailable<E, C, F, Fut>(
-    conn: &mut C,
-    keyholder: &ActorRef<KeyHolder>,
-    entity_id: impl Into<EntityId>,
-    load: F,
-) -> Result<Verified<E>, Error>
-where
-    C: AsyncConnection<Backend = Sqlite>,
-    E: Integrable + 'static,
-    F: FnOnce(&mut C) -> Fut,
-    Fut: Future<Output = Result<E, db::DatabaseError>>,
-{
-    let entity = load(conn).await?;
-    match check_entity_attestation(conn, keyholder, &entity, entity_id.into()).await? {
-        // IMPORTANT: allow_unavailable mode must succeed with an unattested result when vault key
-        // material is unavailable, otherwise integrity checks can be silently bypassed while sealed.
-        AttestationStatus::Attested | AttestationStatus::Unavailable => Ok(Verified(entity)),
-    }
-}
-
-pub async fn lookup_verified_from_query<E, Id, C, F>(
-    conn: &mut C,
-    keyholder: &ActorRef<KeyHolder>,
-    load: F,
-) -> Result<Verified<E>, Error>
-where
-    C: AsyncConnection<Backend = Sqlite> + Send,
-    E: Integrable,
-    Id: Into<EntityId>,
-    F: for<'a> FnOnce(
-        &'a mut C,
-    ) -> Pin<
-        Box<dyn Future<Output = Result<(Id, E), db::DatabaseError>> + Send + 'a>,
-    >,
-{
-    let (entity_id, entity) = load(conn).await?;
-    verify_entity(conn, keyholder, &entity, entity_id).await?;
-    Ok(Verified(entity))
-}
-
-pub async fn sign_entity<E: Integrable, Id: Into<EntityId> + Clone>(
+pub async fn sign_entity<E: Integrable>(
     conn: &mut impl AsyncConnection<Backend = Sqlite>,
     keyholder: &ActorRef<KeyHolder>,
     entity: &E,
-    as_entity_id: Id,
-) -> Result<Verified<Id>, Error> {
-    let payload_hash = payload_hash(entity);
+    entity_id: impl IntoId,
+) -> Result<(), IntegrityError> {
+    let payload_hash = payload_hash(&entity);
 
-    let entity_id = as_entity_id.clone().into();
+    let entity_id = entity_id.into_id();
 
     let mac_input = build_mac_input(E::KIND, &entity_id, E::VERSION, &payload_hash);
 
```
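Both sides of this hunk solve the same problem — collapsing heterogeneous primary keys (integer rowids, byte-string ids) into one canonical byte key for the envelope table; one side uses an `EntityId` newtype with `From` impls, the other the `IntoId` trait. A self-contained sketch of the trait-based version as it appears in the hunk:

```rust
// Canonical byte identifiers for the integrity-envelope table:
// integer keys become big-endian bytes, byte keys are copied as-is.
pub trait IntoId {
    fn into_id(self) -> Vec<u8>;
}

impl IntoId for i32 {
    fn into_id(self) -> Vec<u8> {
        self.to_be_bytes().to_vec()
    }
}

impl IntoId for &'_ [u8] {
    fn into_id(self) -> Vec<u8> {
        self.to_vec()
    }
}

fn main() {
    assert_eq!(77i32.into_id(), vec![0, 0, 0, 77]);
    assert_eq!(b"entity-id-7".as_slice().into_id(), b"entity-id-7".to_vec());
    // Big-endian keeps numeric and byte-wise ordering aligned
    // for non-negative ids.
    assert!(1i32.into_id() < 2i32.into_id());
    println!("ok");
}
```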
```diff
@@ -210,17 +122,17 @@ pub async fn sign_entity<E: Integrable, Id: Into<EntityId> + Clone>(
         .ask(SignIntegrity { mac_input })
         .await
         .map_err(|err| match err {
-            kameo::error::SendError::HandlerError(inner) => Error::Keyholder(inner),
-            _ => Error::KeyholderSend,
+            SendError::HandlerError(inner) => IntegrityError::Keyholder(inner),
+            _ => IntegrityError::KeyholderSend,
         })?;
 
     insert_into(integrity_envelope::table)
         .values(NewIntegrityEnvelope {
             entity_kind: E::KIND.to_owned(),
-            entity_id: entity_id.to_vec(),
+            entity_id,
             payload_version: E::VERSION,
             key_version,
-            mac: mac.to_vec(),
+            mac: mac.clone(),
         })
         .on_conflict((
             integrity_envelope::entity_id,
```
```diff
@@ -236,37 +148,37 @@ pub async fn sign_entity<E: Integrable, Id: Into<EntityId> + Clone>(
         .await
         .map_err(db::DatabaseError::from)?;
 
-    Ok(Verified(as_entity_id))
+    Ok(())
 }
 
-pub async fn check_entity_attestation<E: Integrable>(
+pub async fn verify_entity<E: Integrable>(
     conn: &mut impl AsyncConnection<Backend = Sqlite>,
     keyholder: &ActorRef<KeyHolder>,
     entity: &E,
-    entity_id: impl Into<EntityId>,
-) -> Result<AttestationStatus, Error> {
-    let entity_id = entity_id.into();
-    let envelope: IntegrityEnvelopeRow = integrity_envelope::table
+    entity_id: impl IntoId,
+) -> Result<AttestationStatus, IntegrityError> {
+    let entity_id = entity_id.into_id();
+    let envelope: IntegrityEnvelope = integrity_envelope::table
         .filter(integrity_envelope::entity_kind.eq(E::KIND))
-        .filter(integrity_envelope::entity_id.eq(&*entity_id))
+        .filter(integrity_envelope::entity_id.eq(&entity_id))
         .first(conn)
         .await
         .map_err(|err| match err {
-            diesel::result::Error::NotFound => Error::MissingEnvelope {
+            diesel::result::Error::NotFound => IntegrityError::MissingEnvelope {
                 entity_kind: E::KIND,
             },
-            other => Error::Database(db::DatabaseError::from(other)),
+            other => IntegrityError::Database(db::DatabaseError::from(other)),
         })?;
 
     if envelope.payload_version != E::VERSION {
-        return Err(Error::PayloadVersionMismatch {
+        return Err(IntegrityError::PayloadVersionMismatch {
             entity_kind: E::KIND,
             expected: E::VERSION,
             found: envelope.payload_version,
         });
     }
 
-    let payload_hash = payload_hash(entity);
+    let payload_hash = payload_hash(&entity);
     let mac_input = build_mac_input(E::KIND, &entity_id, envelope.payload_version, &payload_hash);
 
     let result = keyholder
```
```diff
@@ -279,77 +191,34 @@ pub async fn check_entity_attestation<E: Integrable>(
 
     match result {
         Ok(true) => Ok(AttestationStatus::Attested),
-        Ok(false) => Err(Error::MacMismatch {
+        Ok(false) => Err(IntegrityError::MacMismatch {
             entity_kind: E::KIND,
         }),
-        Err(SendError::HandlerError(keyholder::Error::NotBootstrapped)) => {
+        Err(SendError::HandlerError(keyholder::KeyHolderError::NotBootstrapped)) => {
             Ok(AttestationStatus::Unavailable)
         }
-        Err(_) => Err(Error::KeyholderSend),
+        Err(_) => Err(IntegrityError::KeyholderSend),
     }
 }
 
-pub async fn verify_entity<'a, E: Integrable>(
-    conn: &mut impl AsyncConnection<Backend = Sqlite>,
-    keyholder: &ActorRef<KeyHolder>,
-    entity: &'a E,
-    entity_id: impl Into<EntityId>,
-) -> Result<Verified<&'a E>, Error> {
-    match check_entity_attestation::<E>(conn, keyholder, entity, entity_id).await? {
-        AttestationStatus::Attested => Ok(Verified(entity)),
-        AttestationStatus::Unavailable => Err(Error::Keyholder(keyholder::Error::NotBootstrapped)),
-    }
-}
-
-pub async fn delete_envelope<E: Integrable>(
-    conn: &mut impl AsyncConnection<Backend = Sqlite>,
-    entity_id: impl Into<EntityId>,
-) -> Result<usize, Error> {
-    let entity_id = entity_id.into();
-
-    let affected = diesel::delete(
-        integrity_envelope::table
-            .filter(integrity_envelope::entity_kind.eq(E::KIND))
-            .filter(integrity_envelope::entity_id.eq(&*entity_id)),
-    )
-    .execute(conn)
-    .await
-    .map_err(db::DatabaseError::from)?;
-
-    Ok(affected)
-}
-
 #[cfg(test)]
 mod tests {
     use diesel::{ExpressionMethods as _, QueryDsl};
     use diesel_async::RunQueryDsl;
     use kameo::{actor::ActorRef, prelude::Spawn};
-    use sha2::Digest;
 
     use crate::{
         actors::keyholder::{Bootstrap, KeyHolder},
         db::{self, schema},
-        safe_cell::{SafeCell, SafeCellHandle as _},
     };
+    use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};
 
-    use super::hashing::Hashable;
-    use super::{
-        check_entity_attestation, AttestationStatus, Error, Integrable, lookup_verified,
-        lookup_verified_allow_unavailable, lookup_verified_from_query, sign_entity, verify_entity,
-    };
-
-    #[derive(Clone, Debug)]
+    use super::{Integrable, IntegrityError, sign_entity, verify_entity};
+    #[derive(Clone, arbiter_macros::Hashable)]
     struct DummyEntity {
         payload_version: i32,
         payload: Vec<u8>,
     }
 
-    impl Hashable for DummyEntity {
-        fn hash<H: Digest>(&self, hasher: &mut H) {
-            self.payload_version.hash(hasher);
-            self.payload.hash(hasher);
-        }
-    }
     impl Integrable for DummyEntity {
         const KIND: &'static str = "dummy_entity";
     }
```
```diff
@@ -367,12 +236,12 @@ mod tests {
 
     #[tokio::test]
    async fn sign_writes_envelope_and_verify_passes() {
+        const ENTITY_ID: &[u8] = b"entity-id-7";
+
         let db = db::create_test_pool().await;
         let keyholder = bootstrapped_keyholder(&db).await;
         let mut conn = db.get().await.unwrap();
 
-        const ENTITY_ID: &[u8] = b"entity-id-7";
-
         let entity = DummyEntity {
             payload_version: 1,
             payload: b"payload-v1".to_vec(),
```
```diff
@@ -391,19 +260,19 @@ mod tests {
             .unwrap();
 
         assert_eq!(count, 1, "envelope row must be created exactly once");
-        let _ = check_entity_attestation(&mut conn, &keyholder, &entity, ENTITY_ID)
+        verify_entity(&mut conn, &keyholder, &entity, ENTITY_ID)
             .await
             .unwrap();
     }
 
     #[tokio::test]
     async fn tampered_mac_fails_verification() {
+        const ENTITY_ID: &[u8] = b"entity-id-11";
+
         let db = db::create_test_pool().await;
         let keyholder = bootstrapped_keyholder(&db).await;
         let mut conn = db.get().await.unwrap();
 
-        const ENTITY_ID: &[u8] = b"entity-id-11";
-
         let entity = DummyEntity {
             payload_version: 1,
             payload: b"payload-v1".to_vec(),
```
```diff
@@ -421,20 +290,20 @@ mod tests {
             .await
             .unwrap();
 
-        let err = check_entity_attestation(&mut conn, &keyholder, &entity, ENTITY_ID)
+        let err = verify_entity(&mut conn, &keyholder, &entity, ENTITY_ID)
             .await
             .unwrap_err();
-        assert!(matches!(err, Error::MacMismatch { .. }));
+        assert!(matches!(err, IntegrityError::MacMismatch { .. }));
     }
 
     #[tokio::test]
     async fn changed_payload_fails_verification() {
+        const ENTITY_ID: &[u8] = b"entity-id-21";
+
         let db = db::create_test_pool().await;
         let keyholder = bootstrapped_keyholder(&db).await;
         let mut conn = db.get().await.unwrap();
 
-        const ENTITY_ID: &[u8] = b"entity-id-21";
-
         let entity = DummyEntity {
             payload_version: 1,
             payload: b"payload-v1".to_vec(),
```
```diff
@@ -449,233 +318,9 @@ mod tests {
             ..entity
         };
 
-        let err = check_entity_attestation(&mut conn, &keyholder, &tampered, ENTITY_ID)
+        let err = verify_entity(&mut conn, &keyholder, &tampered, ENTITY_ID)
             .await
             .unwrap_err();
-        assert!(matches!(err, Error::MacMismatch { .. }));
+        assert!(matches!(err, IntegrityError::MacMismatch { .. }));
-    }
-
-    #[tokio::test]
-    async fn allow_unavailable_lookup_passes_while_sealed() {
-        let db = db::create_test_pool().await;
-        let keyholder = bootstrapped_keyholder(&db).await;
-        let mut conn = db.get().await.unwrap();
-
-        const ENTITY_ID: &[u8] = b"entity-id-31";
-
-        let entity = DummyEntity {
-            payload_version: 1,
-            payload: b"payload-v1".to_vec(),
-        };
-
-        sign_entity(&mut conn, &keyholder, &entity, ENTITY_ID)
-            .await
-            .unwrap();
-        drop(keyholder);
-
-        let sealed_keyholder = KeyHolder::spawn(KeyHolder::new(db.clone()).await.unwrap());
-        let status = check_entity_attestation(&mut conn, &sealed_keyholder, &entity, ENTITY_ID)
-            .await
-            .unwrap();
-        assert_eq!(status, AttestationStatus::Unavailable);
-
-        #[expect(clippy::disallowed_methods, reason = "test only")]
-        lookup_verified_allow_unavailable(&mut conn, &sealed_keyholder, ENTITY_ID, |_| async {
-            Ok::<_, db::DatabaseError>(DummyEntity {
-                payload_version: 1,
-                payload: b"payload-v1".to_vec(),
-            })
-        })
-        .await
-        .unwrap();
-    }
-
-    #[tokio::test]
-    async fn strict_verify_fails_closed_while_sealed() {
-        let db = db::create_test_pool().await;
-        let keyholder = bootstrapped_keyholder(&db).await;
-        let mut conn = db.get().await.unwrap();
-
-        const ENTITY_ID: &[u8] = b"entity-id-41";
-
-        let entity = DummyEntity {
-            payload_version: 1,
-            payload: b"payload-v1".to_vec(),
-        };
-
-        sign_entity(&mut conn, &keyholder, &entity, ENTITY_ID)
-            .await
-            .unwrap();
-        drop(keyholder);
-
-        let sealed_keyholder = KeyHolder::spawn(KeyHolder::new(db.clone()).await.unwrap());
-
-        let err = verify_entity(&mut conn, &sealed_keyholder, &entity, ENTITY_ID)
-            .await
-            .unwrap_err();
-        assert!(matches!(
-            err,
-            Error::Keyholder(crate::actors::keyholder::Error::NotBootstrapped)
-        ));
-
-        let err = lookup_verified(&mut conn, &sealed_keyholder, ENTITY_ID, |_| async {
-            Ok::<_, db::DatabaseError>(DummyEntity {
-                payload_version: 1,
-                payload: b"payload-v1".to_vec(),
-            })
-        })
-        .await
-        .unwrap_err();
-        assert!(matches!(
-            err,
-            Error::Keyholder(crate::actors::keyholder::Error::NotBootstrapped)
-        ));
-    }
-
-    #[tokio::test]
-    async fn lookup_verified_supports_loaded_aggregate() {
-        let db = db::create_test_pool().await;
-        let keyholder = bootstrapped_keyholder(&db).await;
-        let mut conn = db.get().await.unwrap();
-
-        const ENTITY_ID: i32 = 77;
-
-        let entity = DummyEntity {
-            payload_version: 1,
-            payload: b"payload-v1".to_vec(),
-        };
-
-        sign_entity(&mut conn, &keyholder, &entity, ENTITY_ID)
-            .await
-            .unwrap();
-
-        let verified = lookup_verified(&mut conn, &keyholder, ENTITY_ID, |_| async {
-            Ok::<_, db::DatabaseError>(DummyEntity {
-                payload_version: 1,
-                payload: b"payload-v1".to_vec(),
-            })
-        })
-        .await
-        .unwrap();
-
-        assert_eq!(verified.payload, b"payload-v1".to_vec());
-    }
-
-    #[tokio::test]
-    async fn lookup_verified_allow_unavailable_works_while_sealed() {
-        let db = db::create_test_pool().await;
-        let keyholder = bootstrapped_keyholder(&db).await;
-        let mut conn = db.get().await.unwrap();
-
-        const ENTITY_ID: i32 = 78;
-
-        let entity = DummyEntity {
-            payload_version: 1,
-            payload: b"payload-v1".to_vec(),
-        };
-
-        sign_entity(&mut conn, &keyholder, &entity, ENTITY_ID)
-            .await
-            .unwrap();
-        drop(keyholder);
-
-        let sealed_keyholder = KeyHolder::spawn(KeyHolder::new(db.clone()).await.unwrap());
-
-        #[expect(clippy::disallowed_methods, reason = "test only")]
-        lookup_verified_allow_unavailable(&mut conn, &sealed_keyholder, ENTITY_ID, |_| async {
-            Ok::<_, db::DatabaseError>(DummyEntity {
-                payload_version: 1,
-                payload: b"payload-v1".to_vec(),
-            })
-        })
-        .await
-        .unwrap();
-    }
-
-    #[tokio::test]
-    async fn extension_trait_lookup_verified_required_works() {
-        let db = db::create_test_pool().await;
-        let keyholder = bootstrapped_keyholder(&db).await;
-        let mut conn = db.get().await.unwrap();
-
-        const ENTITY_ID: i32 = 79;
-
-        let entity = DummyEntity {
-            payload_version: 1,
-            payload: b"payload-v1".to_vec(),
-        };
-
-        sign_entity(&mut conn, &keyholder, &entity, ENTITY_ID)
-            .await
-            .unwrap();
-
-        let verified = lookup_verified(&mut conn, &keyholder, ENTITY_ID, |_| {
-            Box::pin(async {
-                Ok::<_, db::DatabaseError>(DummyEntity {
-                    payload_version: 1,
-                    payload: b"payload-v1".to_vec(),
-                })
-            })
-        })
-        .await
-        .unwrap();
-
-        assert_eq!(verified.payload, b"payload-v1".to_vec());
-    }
-
-    #[tokio::test]
-    async fn lookup_verified_from_query_helpers_work() {
-        let db = db::create_test_pool().await;
-        let keyholder = bootstrapped_keyholder(&db).await;
-        let mut conn = db.get().await.unwrap();
-
-        const ENTITY_ID: i32 = 80;
-
-        let entity = DummyEntity {
-            payload_version: 1,
-            payload: b"payload-v1".to_vec(),
-        };
-
-        sign_entity(&mut conn, &keyholder, &entity, ENTITY_ID)
-            .await
-            .unwrap();
-
-        let verified = lookup_verified_from_query(&mut conn, &keyholder, |_| {
-            Box::pin(async {
-                Ok::<_, db::DatabaseError>((
-                    ENTITY_ID,
-                    DummyEntity {
-                        payload_version: 1,
-                        payload: b"payload-v1".to_vec(),
-                    },
-                ))
-            })
-        })
-        .await
-        .unwrap();
-
-        assert_eq!(verified.payload, b"payload-v1".to_vec());
-
-        drop(keyholder);
-        let sealed_keyholder = KeyHolder::spawn(KeyHolder::new(db.clone()).await.unwrap());
-
-        let err = lookup_verified_from_query(&mut conn, &sealed_keyholder, |_| {
-            Box::pin(async {
-                Ok::<_, db::DatabaseError>((
-                    ENTITY_ID,
-                    DummyEntity {
-                        payload_version: 1,
-                        payload: b"payload-v1".to_vec(),
-                    },
-                ))
-            })
-        })
-        .await
-        .unwrap_err();
-
-        assert!(matches!(
-            err,
-            Error::Keyholder(crate::actors::keyholder::Error::NotBootstrapped)
-        ));
     }
 }
```
```diff
@@ -1,5 +1,3 @@
-use std::ops::Deref as _;
-
 use argon2::{Algorithm, Argon2};
 use chacha20poly1305::{
     AeadInPlace, Key, KeyInit as _, XChaCha20Poly1305, XNonce,
@@ -10,7 +8,7 @@ use rand::{
     rngs::{StdRng, SysRng},
 };
 
-use crate::safe_cell::{SafeCell, SafeCellHandle as _};
+use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};
 
 pub mod encryption;
 pub mod integrity;
@@ -41,11 +39,8 @@ impl TryFrom<SafeCell<Vec<u8>>> for KeyCell {
 impl KeyCell {
     pub fn new_secure_random() -> Self {
         let key = SafeCell::new_inline(|key_buffer: &mut Key| {
-            #[allow(
-                clippy::unwrap_used,
-                reason = "Rng failure is unrecoverable and should panic"
-            )]
-            let mut rng = StdRng::try_from_rng(&mut SysRng).unwrap();
+            let mut rng = StdRng::try_from_rng(&mut SysRng)
+                .expect("Rng failure is unrecoverable and should panic");
             rng.fill_bytes(key_buffer);
         });
 
@@ -59,8 +54,7 @@ impl KeyCell {
         mut buffer: impl AsMut<Vec<u8>>,
     ) -> Result<(), Error> {
         let key_reader = self.0.read();
-        let key_ref = key_reader.deref();
-        let cipher = XChaCha20Poly1305::new(key_ref);
+        let cipher = XChaCha20Poly1305::new(&key_reader);
         let nonce = XNonce::from_slice(nonce.0.as_ref());
         let buffer = buffer.as_mut();
         cipher.encrypt_in_place(nonce, associated_data, buffer)
@@ -72,8 +66,7 @@ impl KeyCell {
         buffer: &mut SafeCell<Vec<u8>>,
     ) -> Result<(), Error> {
         let key_reader = self.0.read();
-        let key_ref = key_reader.deref();
-        let cipher = XChaCha20Poly1305::new(key_ref);
+        let cipher = XChaCha20Poly1305::new(&key_reader);
         let nonce = XNonce::from_slice(nonce.0.as_ref());
         let mut buffer = buffer.write();
         let buffer: &mut Vec<u8> = buffer.as_mut();
@@ -87,8 +80,7 @@ impl KeyCell {
         plaintext: impl AsRef<[u8]>,
     ) -> Result<Vec<u8>, Error> {
         let key_reader = self.0.read();
-        let key_ref = key_reader.deref();
-        let mut cipher = XChaCha20Poly1305::new(key_ref);
+        let mut cipher = XChaCha20Poly1305::new(&key_reader);
         let nonce = XNonce::from_slice(nonce.0.as_ref());
 
         let ciphertext = cipher.encrypt(
@@ -116,20 +108,15 @@ pub fn derive_key(mut password: SafeCell<Vec<u8>>, salt: &Salt) -> KeyCell {
         }
     };
 
-    #[allow(clippy::unwrap_used)]
     let hasher = Argon2::new(Algorithm::Argon2id, argon2::Version::V0x13, params);
     let mut key = SafeCell::new(Key::default());
     password.read_inline(|password_source| {
         let mut key_buffer = key.write();
         let key_buffer: &mut [u8] = key_buffer.as_mut();
 
-        #[allow(
-            clippy::unwrap_used,
-            reason = "Better fail completely than return a weak key"
-        )]
         hasher
-            .hash_password_into(password_source.deref(), salt, key_buffer)
-            .unwrap();
+            .hash_password_into(password_source, salt, key_buffer)
+            .expect("Better fail completely than return a weak key");
    });
 
     key.into()
@@ -141,7 +128,7 @@ mod tests {
         derive_key,
         encryption::v1::{Nonce, generate_salt},
     };
-    use crate::safe_cell::{SafeCell, SafeCellHandle as _};
+    use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};
 
     #[test]
     pub fn encrypt_decrypt() {
```
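A recurring pattern in these hunks is replacing `#[allow(clippy::unwrap_used)]` plus `.unwrap()` with a single `.expect("…")`: the justification moves out of a lint attribute (which is lost at runtime) and into the panic message itself. A std-only illustration (the `lookup` helper and its invariant are hypothetical, not from this repository):

```rust
use std::collections::HashMap;

fn lookup(map: &HashMap<&str, u32>, key: &str) -> u32 {
    // Before: #[allow(clippy::unwrap_used)] + .unwrap() — the reasoning
    // lives only in an attribute and vanishes from any panic output.
    // After: .expect carries the invariant into the panic message, and
    // clippy::expect_used can still be allowed or expected per call site.
    *map.get(key)
        .expect("key is inserted at startup; a missing key is unrecoverable")
}

fn main() {
    let mut map = HashMap::new();
    map.insert("root", 1u32);
    assert_eq!(lookup(&map, "root"), 1);
    println!("ok");
}
```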
```diff
@@ -23,14 +23,14 @@ const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");
 
 #[derive(Error, Debug)]
 pub enum DatabaseSetupError {
-    #[error("Failed to determine home directory")]
-    HomeDir(std::io::Error),
+    #[error(transparent)]
+    ConcurrencySetup(diesel::result::Error),
 
     #[error(transparent)]
     Connection(diesel::ConnectionError),
 
-    #[error(transparent)]
-    ConcurrencySetup(diesel::result::Error),
+    #[error("Failed to determine home directory")]
+    HomeDir(std::io::Error),
 
     #[error(transparent)]
     Migration(Box<dyn std::error::Error + Send + Sync>),
@@ -41,10 +41,11 @@ pub enum DatabaseSetupError {
 
 #[derive(Error, Debug)]
 pub enum DatabaseError {
-    #[error("Database connection error")]
-    Pool(#[from] PoolError),
     #[error("Database query error")]
     Connection(#[from] diesel::result::Error),
+
+    #[error("Database connection error")]
+    Pool(#[from] PoolError),
 }
 
 #[tracing::instrument(level = "info")]
@@ -93,13 +94,16 @@ fn initialize_database(url: &str) -> Result<(), DatabaseSetupError> {
 }
 
 #[tracing::instrument(level = "info")]
+/// Creates a connection pool for the `SQLite` database.
+///
+/// # Panics
+/// Panics if the database path is not valid UTF-8.
 pub async fn create_pool(url: Option<&str>) -> Result<DatabasePool, DatabaseSetupError> {
     let database_url = url.map(String::from).unwrap_or(
-        #[allow(clippy::expect_used)]
         database_path()?
             .to_str()
             .expect("database path is not valid UTF-8")
-            .to_string(),
+            .to_owned(),
     );
 
     initialize_database(&database_url)?;
@@ -134,19 +138,19 @@ pub async fn create_pool(url: Option<&str>) -> Result<DatabasePool, DatabaseSetu
 }
 
 #[mutants::skip]
+#[expect(clippy::missing_panics_doc, reason = "Tests oriented function")]
+/// Creates a test database pool with a temporary `SQLite` database file.
 pub async fn create_test_pool() -> DatabasePool {
     use rand::distr::{Alphanumeric, SampleString as _};
 
     let tempfile_name = Alphanumeric.sample_string(&mut rand::rng(), 16);
 
     let file = std::env::temp_dir().join(tempfile_name);
-    #[allow(clippy::expect_used)]
     let url = file
         .to_str()
         .expect("temp file path is not valid UTF-8")
-        .to_string();
+        .to_owned();
 
-    #[allow(clippy::expect_used)]
     create_pool(Some(&url))
         .await
         .expect("Failed to create test database pool")
```
|
|||||||
@@ -1,5 +1,7 @@
-#![allow(unused)]
-#![allow(clippy::all)]
+#![allow(
+    clippy::duplicated_attributes,
+    reason = "restructed's #[view] causes false positives"
+)]

 use crate::db::schema::{
     self, aead_encrypted, arbiter_settings, evm_basic_grant, evm_ether_transfer_grant,
@@ -7,7 +9,6 @@ use crate::db::schema::{
     evm_token_transfer_log, evm_token_transfer_volume_limit, evm_transaction_log, evm_wallet,
     integrity_envelope, root_key_history, tls_history,
 };
-use chrono::{DateTime, Utc};
 use diesel::{prelude::*, sqlite::Sqlite};
 use restructed::Models;

@@ -27,16 +28,16 @@ pub mod types {
     pub struct SqliteTimestamp(pub DateTime<Utc>);
     impl SqliteTimestamp {
         pub fn now() -> Self {
-            SqliteTimestamp(Utc::now())
+            Self(Utc::now())
         }
     }

-    impl From<chrono::DateTime<Utc>> for SqliteTimestamp {
-        fn from(dt: chrono::DateTime<Utc>) -> Self {
-            SqliteTimestamp(dt)
+    impl From<DateTime<Utc>> for SqliteTimestamp {
+        fn from(dt: DateTime<Utc>) -> Self {
+            Self(dt)
         }
     }
-    impl From<SqliteTimestamp> for chrono::DateTime<Utc> {
+    impl From<SqliteTimestamp> for DateTime<Utc> {
         fn from(ts: SqliteTimestamp) -> Self {
             ts.0
         }
     }
@@ -47,6 +48,11 @@ pub mod types {
            &'b self,
            out: &mut diesel::serialize::Output<'b, '_, Sqlite>,
         ) -> diesel::serialize::Result {
+            #[expect(
+                clippy::cast_possible_truncation,
+                clippy::as_conversions,
+                reason = "fixme! #84; this will break up in 2038 :3"
+            )]
             let unix_timestamp = self.0.timestamp() as i32;
             out.set_value(unix_timestamp);
             Ok(IsNull::No)
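The `#[expect]` added above flags the Y2038 hazard of storing a Unix timestamp as `i32` ("fixme! #84"). A std-only sketch of that truncation, using plain `i64`/`i32` rather than the real chrono/Diesel path:

```rust
// Storing seconds-since-epoch in an i32 is lossy past i32::MAX seconds,
// i.e. 2038-01-19T03:14:07Z. This mirrors `self.0.timestamp() as i32`.
fn to_stored(secs: i64) -> i32 {
    secs as i32 // wraps silently for dates past 2038
}

fn main() {
    let in_range = 1_700_000_000_i64; // a 2023 timestamp, fits in i32
    assert_eq!(i64::from(to_stored(in_range)), in_range);

    let past_2038 = 2_200_000_000_i64; // a 2039 timestamp, does not fit
    assert_ne!(i64::from(to_stored(past_2038)), past_2038);
    assert!(i32::try_from(past_2038).is_err()); // the checked alternative
}
```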
@@ -69,41 +75,47 @@ pub mod types {
         let datetime =
             DateTime::from_timestamp(unix_timestamp, 0).ok_or("Timestamp is out of bounds")?;

-        Ok(SqliteTimestamp(datetime))
+        Ok(Self(datetime))
     }
 }

-    /// Key algorithm stored in the `useragent_client.key_type` column.
-    /// Values must stay stable — they are persisted in the database.
-    #[derive(Debug, Clone, Copy, PartialEq, Eq, FromSqlRow, AsExpression, strum::FromRepr)]
+    #[derive(Debug, FromSqlRow, AsExpression, Clone)]
     #[diesel(sql_type = Integer)]
-    #[repr(i32)]
-    pub enum KeyType {
-        Ed25519 = 1,
-        EcdsaSecp256k1 = 2,
-        Rsa = 3,
-    }
+    #[repr(transparent)] // hint compiler to optimize the wrapper struct away
+    pub struct ChainId(pub i32);

-    impl ToSql<Integer, Sqlite> for KeyType {
+    #[expect(
+        clippy::cast_sign_loss,
+        clippy::cast_possible_truncation,
+        clippy::as_conversions,
+        reason = "safe because chain_id is stored as i32 but is guaranteed to be a valid ChainId by the API when creating grants"
+    )]
+    const _: () = {
+        impl From<ChainId> for alloy::primitives::ChainId {
+            fn from(chain_id: ChainId) -> Self {
+                chain_id.0 as Self
+            }
+        }
+        impl From<alloy::primitives::ChainId> for ChainId {
+            fn from(chain_id: alloy::primitives::ChainId) -> Self {
+                Self(chain_id as _)
+            }
+        }
+    };
+
+    impl FromSql<Integer, Sqlite> for ChainId {
+        fn from_sql(
+            bytes: <Sqlite as diesel::backend::Backend>::RawValue<'_>,
+        ) -> diesel::deserialize::Result<Self> {
+            FromSql::<Integer, Sqlite>::from_sql(bytes).map(Self)
+        }
+    }
+    impl ToSql<Integer, Sqlite> for ChainId {
         fn to_sql<'b>(
             &'b self,
             out: &mut diesel::serialize::Output<'b, '_, Sqlite>,
         ) -> diesel::serialize::Result {
-            out.set_value(*self as i32);
-            Ok(IsNull::No)
-        }
-    }
-
-    impl FromSql<Integer, Sqlite> for KeyType {
-        fn from_sql(
-            mut bytes: <Sqlite as diesel::backend::Backend>::RawValue<'_>,
-        ) -> diesel::deserialize::Result<Self> {
-            let Some(SqliteType::Long) = bytes.value_type() else {
-                return Err("Expected Integer for KeyType".into());
-            };
-            let discriminant = bytes.read_long();
-            KeyType::from_repr(discriminant as i32)
-                .ok_or_else(|| format!("Unknown KeyType discriminant: {discriminant}").into())
+            ToSql::<Integer, Sqlite>::to_sql(&self.0, out)
         }
     }
 }
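The `ChainId` newtype introduced above stores an `i32` in the database while alloy's chain id is a `u64`; the `as` casts are exactly what the diff wraps in `#[expect(clippy::as_conversions, ...)]`. A std-only sketch (the Diesel `ToSql`/`FromSql` impls are elided, and `u64` stands in for `alloy::primitives::ChainId`):

```rust
// Newtype over the stored i32, converting to/from the u64 chain id.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(transparent)] // same layout as the wrapped i32
pub struct ChainId(pub i32);

impl From<ChainId> for u64 {
    fn from(chain_id: ChainId) -> Self {
        chain_id.0 as Self // sign loss possible; the real code relies on API validation
    }
}

impl From<u64> for ChainId {
    fn from(chain_id: u64) -> Self {
        Self(chain_id as _) // truncation possible for ids above i32::MAX
    }
}

fn main() {
    let mainnet = ChainId(1);
    assert_eq!(u64::from(mainnet), 1);
    assert_eq!(ChainId::from(1u64), mainnet);
}
```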
@@ -244,7 +256,6 @@ pub struct UseragentClient {
     pub public_key: Vec<u8>,
     pub created_at: SqliteTimestamp,
     pub updated_at: SqliteTimestamp,
-    pub key_type: KeyType,
 }

 #[derive(Models, Queryable, Debug, Insertable, Selectable)]
@@ -272,7 +283,7 @@ pub struct EvmEtherTransferLimit {
 pub struct EvmBasicGrant {
     pub id: i32,
     pub wallet_access_id: i32, // references evm_wallet_access.id
-    pub chain_id: i32,
+    pub chain_id: ChainId,
     pub valid_from: Option<SqliteTimestamp>,
     pub valid_until: Option<SqliteTimestamp>,
     pub max_gas_fee_per_gas: Option<Vec<u8>>,
@@ -295,7 +306,7 @@ pub struct EvmTransactionLog {
     pub id: i32,
     pub grant_id: i32,
     pub wallet_access_id: i32,
-    pub chain_id: i32,
+    pub chain_id: ChainId,
     pub eth_value: Vec<u8>,
     pub signed_at: SqliteTimestamp,
 }
@@ -370,7 +381,7 @@ pub struct EvmTokenTransferLog {
     pub id: i32,
     pub grant_id: i32,
     pub log_id: i32,
-    pub chain_id: i32,
+    pub chain_id: ChainId,
     pub token_contract: Vec<u8>,
     pub recipient_address: Vec<u8>,
     pub value: Vec<u8>,
@@ -45,7 +45,7 @@ sol! {

 sol! {
     /// Permit2 — Uniswap's canonical token approval manager.
-    /// Replaces per-contract ERC-20 approve() with a single approval hub.
+    /// Replaces per-contract ERC-20 `approve()` with a single approval hub.
     #[derive(Debug)]
     interface IPermit2 {
         struct TokenPermissions {
@@ -12,7 +12,7 @@ use kameo::actor::ActorRef;

 use crate::{
     actors::keyholder::KeyHolder,
-    crypto::integrity::{self, Verified},
+    crypto::integrity,
     db::{
         self, DatabaseError,
         models::{
@@ -34,14 +34,14 @@ mod utils;
 #[derive(Debug, thiserror::Error)]
 pub enum PolicyError {
     #[error("Database error")]
-    Database(#[from] crate::db::DatabaseError),
+    Database(#[from] DatabaseError),
     #[error("Transaction violates policy: {0:?}")]
     Violations(Vec<EvalViolation>),
     #[error("No matching grant found")]
     NoMatchingGrant,

     #[error("Integrity error: {0}")]
-    Integrity(#[from] integrity::Error),
+    Integrity(#[from] integrity::IntegrityError),
 }

 #[derive(Debug, thiserror::Error)]
@@ -66,10 +66,10 @@ pub enum AnalyzeError {
 #[derive(Debug, thiserror::Error)]
 pub enum ListError {
     #[error("Database error")]
-    Database(#[from] crate::db::DatabaseError),
+    Database(#[from] DatabaseError),

     #[error("Integrity verification failed for grant")]
-    Integrity(#[from] integrity::Error),
+    Integrity(#[from] integrity::IntegrityError),
 }

 /// Controls whether a transaction should be executed or only validated
@@ -127,7 +127,7 @@ async fn check_shared_constraints(
         .get_result(conn)
         .await?;

-    if count >= rate_limit.count as i64 {
+    if count >= rate_limit.count.into() {
         violations.push(EvalViolation::RateLimitExceeded);
     }
 }
@@ -153,36 +153,12 @@ impl Engine {
     {
         let mut conn = self.db.get().await.map_err(DatabaseError::from)?;

-        let verified_settings =
-            match integrity::lookup_verified_from_query(&mut conn, &self.keyholder, |conn| {
-                let context = context.clone();
-                Box::pin(async move {
-                    let grant = P::try_find_grant(&context, conn)
-                        .await
-                        .map_err(DatabaseError::from)?
-                        .ok_or_else(|| DatabaseError::from(diesel::result::Error::NotFound))?;
-
-                    Ok::<_, DatabaseError>((grant.common_settings_id, grant.settings))
-                })
-            })
-            .await
-            {
-                Ok(verified) => verified,
-                Err(integrity::Error::Database(DatabaseError::Connection(
-                    diesel::result::Error::NotFound,
-                ))) => return Err(PolicyError::NoMatchingGrant),
-                Err(err) => return Err(PolicyError::Integrity(err)),
-            };
-
-        let mut grant = P::try_find_grant(&context, &mut conn)
+        let grant = P::try_find_grant(&context, &mut conn)
             .await
             .map_err(DatabaseError::from)?
             .ok_or(PolicyError::NoMatchingGrant)?;

-        // IMPORTANT: policy evaluation uses extra non-integrity fields from Grant
-        // (e.g., per-policy ids), so we currently reload Grant after the query-native
-        // integrity check over canonicalized settings.
-        grant.settings = verified_settings.into_inner();
+        integrity::verify_entity(&mut conn, &self.keyholder, &grant.settings, grant.id).await?;

         let mut violations = check_shared_constraints(
             &context,
@@ -209,7 +185,7 @@ impl Engine {
             .values(&NewEvmTransactionLog {
                 grant_id: grant.common_settings_id,
                 wallet_access_id: context.target.id,
-                chain_id: context.chain as i32,
+                chain_id: context.chain.into(),
                 eth_value: utils::u256_to_bytes(context.value).to_vec(),
                 signed_at: Utc::now().into(),
             })
@@ -231,14 +207,14 @@ impl Engine {
 }

 impl Engine {
-    pub fn new(db: db::DatabasePool, keyholder: ActorRef<KeyHolder>) -> Self {
+    pub const fn new(db: db::DatabasePool, keyholder: ActorRef<KeyHolder>) -> Self {
         Self { db, keyholder }
     }

     pub async fn create_grant<P: Policy>(
         &self,
         full_grant: CombinedSettings<P::Settings>,
-    ) -> Result<Verified<i32>, DatabaseError>
+    ) -> Result<i32, DatabaseError>
     where
         P::Settings: Clone,
     {
@@ -250,9 +226,15 @@ impl Engine {
             Box::pin(async move {
                 use schema::evm_basic_grant;

+                #[expect(
+                    clippy::cast_possible_truncation,
+                    clippy::cast_possible_wrap,
+                    clippy::as_conversions,
+                    reason = "fixme! #86"
+                )]
                 let basic_grant: EvmBasicGrant = insert_into(evm_basic_grant::table)
                     .values(&NewEvmBasicGrant {
-                        chain_id: full_grant.shared.chain as i32,
+                        chain_id: full_grant.shared.chain.into(),
                         wallet_access_id: full_grant.shared.wallet_access_id,
                         valid_from: full_grant.shared.valid_from.map(SqliteTimestamp),
                         valid_until: full_grant.shared.valid_until.map(SqliteTimestamp),
@@ -282,12 +264,11 @@ impl Engine {

                 P::create_grant(&basic_grant, &full_grant.specific, conn).await?;

-                let verified_entity_id =
-                    integrity::sign_entity(conn, &keyholder, &full_grant, basic_grant.id)
-                        .await
-                        .map_err(|_| diesel::result::Error::RollbackTransaction)?;
+                integrity::sign_entity(conn, &keyholder, &full_grant, basic_grant.id)
+                    .await
+                    .map_err(|_| diesel::result::Error::RollbackTransaction)?;

-                QueryResult::Ok(verified_entity_id)
+                QueryResult::Ok(basic_grant.id)
             })
         })
         .await?;
@@ -298,7 +279,7 @@ impl Engine {
     async fn list_one_kind<Kind: Policy, Y>(
         &self,
         conn: &mut impl AsyncConnection<Backend = Sqlite>,
-    ) -> Result<Vec<Grant<Y>>, ListError>
+    ) -> Result<impl Iterator<Item = Grant<Y>>, ListError>
     where
         Y: From<Kind::Settings>,
     {
@@ -306,26 +287,16 @@ impl Engine {
             .await
             .map_err(DatabaseError::from)?;

-        let mut verified_grants = Vec::with_capacity(all_grants.len());
-
-        // Verify integrity of all grants before returning any results.
-        for grant in all_grants {
-            integrity::verify_entity(
-                conn,
-                &self.keyholder,
-                &grant.settings,
-                grant.common_settings_id,
-            )
-            .await?;
-
-            verified_grants.push(Grant {
-                id: grant.id,
-                common_settings_id: grant.common_settings_id,
-                settings: grant.settings.generalize(),
-            });
+        // Verify integrity of all grants before returning any results
+        for grant in &all_grants {
+            integrity::verify_entity(conn, &self.keyholder, &grant.settings, grant.id).await?;
         }

-        Ok(verified_grants)
+        Ok(all_grants.into_iter().map(|g| Grant {
+            id: g.id,
+            common_settings_id: g.common_settings_id,
+            settings: g.settings.generalize(),
+        }))
     }

     pub async fn list_all_grants(&self) -> Result<Vec<Grant<SpecificGrant>>, ListError> {
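The `list_one_kind` change above verifies every grant eagerly but then returns `impl Iterator<Item = Grant<Y>>` instead of a collected `Vec`, letting the caller decide whether to collect. A std-only sketch of that signature shape (the `Grant` here and its `.to_string()` stand-in for `.generalize()` are simplified placeholders, not the diff's types):

```rust
#[derive(Debug, PartialEq)]
struct Grant<Y> {
    id: i32,
    settings: Y,
}

// Return a lazy mapping over the owned Vec rather than building a second Vec.
fn list_one_kind(all: Vec<Grant<i32>>) -> impl Iterator<Item = Grant<String>> {
    all.into_iter().map(|g| Grant {
        id: g.id,
        settings: g.settings.to_string(), // stands in for `.generalize()`
    })
}

fn main() {
    let grants = vec![Grant { id: 1, settings: 7 }, Grant { id: 2, settings: 8 }];
    let out: Vec<_> = list_one_kind(grants).collect();
    assert_eq!(out.len(), 2);
    assert_eq!(out[0].settings, "7");
}
```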
@@ -348,7 +319,7 @@ impl Engine {
         let TxKind::Call(to) = transaction.to else {
             return Err(VetError::ContractCreationNotSupported);
         };
-        let context = policies::EvalContext {
+        let context = EvalContext {
             target,
             chain: transaction.chain_id,
             to,
@@ -439,10 +410,16 @@ mod tests {
         conn: &mut DatabaseConnection,
         shared: &SharedGrantSettings,
     ) -> EvmBasicGrant {
+        #[expect(
+            clippy::cast_possible_truncation,
+            clippy::cast_possible_wrap,
+            clippy::as_conversions,
+            reason = "fixme! #86"
+        )]
         insert_into(evm_basic_grant::table)
             .values(NewEvmBasicGrant {
                 wallet_access_id: shared.wallet_access_id,
-                chain_id: shared.chain as i32,
+                chain_id: shared.chain.into(),
                 valid_from: shared.valid_from.map(SqliteTimestamp),
                 valid_until: shared.valid_until.map(SqliteTimestamp),
                 max_gas_fee_per_gas: shared
@@ -606,7 +583,7 @@ mod tests {
             .values(NewEvmTransactionLog {
                 grant_id: basic_grant.id,
                 wallet_access_id: WALLET_ACCESS_ID,
-                chain_id: CHAIN_ID as i32,
+                chain_id: CHAIN_ID.into(),
                 eth_value: super::utils::u256_to_bytes(U256::ZERO).to_vec(),
                 signed_at: SqliteTimestamp(Utc::now()),
             })
@@ -11,7 +11,7 @@ use thiserror::Error;

 use crate::{
     crypto::integrity::v1::Integrable,
-    db::models::{self, EvmBasicGrant, EvmWalletAccess},
+    db::models::{EvmBasicGrant, EvmWalletAccess},
     evm::utils,
 };

@@ -87,10 +87,10 @@ pub trait Policy: Sized {

     // Create a new grant in the database based on the provided grant details, and return its ID
     fn create_grant(
-        basic: &models::EvmBasicGrant,
+        basic: &EvmBasicGrant,
         grant: &Self::Settings,
         conn: &mut impl AsyncConnection<Backend = Sqlite>,
-    ) -> impl std::future::Future<Output = QueryResult<DatabaseID>> + Send;
+    ) -> impl Future<Output = QueryResult<DatabaseID>> + Send;

     // Try to find an existing grant that matches the transaction context, and return its details if found
     // Additionally, return ID of basic grant for shared-logic checks like rate limits and validity periods
@@ -127,19 +127,19 @@ pub enum SpecificMeaning {
     TokenTransfer(token_transfers::Meaning),
 }

-#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
+#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, arbiter_macros::Hashable)]
 pub struct TransactionRateLimit {
     pub count: u32,
     pub window: Duration,
 }

-#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
+#[derive(Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, arbiter_macros::Hashable)]
 pub struct VolumeRateLimit {
     pub max_volume: U256,
     pub window: Duration,
 }

-#[derive(Clone, Debug, PartialEq, Eq, Hash)]
+#[derive(Clone, Debug, PartialEq, Eq, Hash, arbiter_macros::Hashable)]
 pub struct SharedGrantSettings {
     pub wallet_access_id: i32,
     pub chain: ChainId,
@@ -157,7 +157,7 @@ impl SharedGrantSettings {
     pub(crate) fn try_from_model(model: EvmBasicGrant) -> QueryResult<Self> {
         Ok(Self {
             wallet_access_id: model.wallet_access_id,
-            chain: model.chain_id as u64, // safe because chain_id is stored as i32 but is guaranteed to be a valid ChainId by the API when creating grants
+            chain: model.chain_id.into(),
             valid_from: model.valid_from.map(Into::into),
             valid_until: model.valid_until.map(Into::into),
             max_gas_fee_per_gas: model
@@ -168,10 +168,11 @@ impl SharedGrantSettings {
                 .max_priority_fee_per_gas
                 .map(|b| utils::try_bytes_to_u256(&b))
                 .transpose()?,
+            #[expect(clippy::cast_sign_loss, clippy::as_conversions, reason = "fixme! #86")]
             rate_limit: match (model.rate_limit_count, model.rate_limit_window_secs) {
                 (Some(count), Some(window_secs)) => Some(TransactionRateLimit {
                     count: count as u32,
-                    window: Duration::seconds(window_secs as i64),
+                    window: Duration::seconds(window_secs.into()),
                 }),
                 _ => None,
             },
@@ -181,7 +182,7 @@ impl SharedGrantSettings {
     pub async fn query_by_id(
         conn: &mut impl AsyncConnection<Backend = Sqlite>,
         id: i32,
-    ) -> diesel::result::QueryResult<Self> {
+    ) -> QueryResult<Self> {
         use crate::db::schema::evm_basic_grant;

         let basic_grant: EvmBasicGrant = evm_basic_grant::table
@@ -200,7 +201,7 @@ pub enum SpecificGrant {
     TokenTransfer(token_transfers::Settings),
 }

-#[derive(Debug, Clone)]
+#[derive(Debug, arbiter_macros::Hashable)]
 pub struct CombinedSettings<PolicyGrant> {
     pub shared: SharedGrantSettings,
     pub specific: PolicyGrant,
@@ -219,38 +220,3 @@ impl<P: Integrable> Integrable for CombinedSettings<P> {
     const KIND: &'static str = P::KIND;
     const VERSION: i32 = P::VERSION;
 }
-
-use crate::crypto::integrity::hashing::Hashable;
-
-impl Hashable for TransactionRateLimit {
-    fn hash<H: sha2::Digest>(&self, hasher: &mut H) {
-        self.count.hash(hasher);
-        self.window.hash(hasher);
-    }
-}
-
-impl Hashable for VolumeRateLimit {
-    fn hash<H: sha2::Digest>(&self, hasher: &mut H) {
-        self.max_volume.hash(hasher);
-        self.window.hash(hasher);
-    }
-}
-
-impl Hashable for SharedGrantSettings {
-    fn hash<H: sha2::Digest>(&self, hasher: &mut H) {
-        self.wallet_access_id.hash(hasher);
-        self.chain.hash(hasher);
-        self.valid_from.hash(hasher);
-        self.valid_until.hash(hasher);
-        self.max_gas_fee_per_gas.hash(hasher);
-        self.max_priority_fee_per_gas.hash(hasher);
-        self.rate_limit.hash(hasher);
-    }
-}
-
-impl<P: Hashable> Hashable for CombinedSettings<P> {
-    fn hash<H: sha2::Digest>(&self, hasher: &mut H) {
-        self.shared.hash(hasher);
-        self.specific.hash(hasher);
-    }
-}
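The manual `Hashable` impls deleted above feed each field into a digest in declaration order, which is why a derive (`arbiter_macros::Hashable`) can replace them: it only has to replay the field list. A std-only sketch of that structure, with a `Vec<u8>` byte accumulator standing in for the real `sha2::Digest`:

```rust
// Minimal stand-in for the crate's Hashable trait: append a canonical byte
// encoding of self to the accumulator.
trait Hashable {
    fn hash(&self, out: &mut Vec<u8>);
}

impl Hashable for u32 {
    fn hash(&self, out: &mut Vec<u8>) {
        out.extend_from_slice(&self.to_le_bytes());
    }
}

struct TransactionRateLimit {
    count: u32,
    window_secs: u32,
}

impl Hashable for TransactionRateLimit {
    fn hash(&self, out: &mut Vec<u8>) {
        // field by field, in declaration order — the shape a derive macro emits
        self.count.hash(out);
        self.window_secs.hash(out);
    }
}

fn main() {
    let a = TransactionRateLimit { count: 3, window_secs: 60 };
    let b = TransactionRateLimit { count: 3, window_secs: 60 };
    let (mut ha, mut hb) = (Vec::new(), Vec::new());
    a.hash(&mut ha);
    b.hash(&mut hb);
    assert_eq!(ha, hb); // equal settings produce equal digest input
}
```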
@@ -4,8 +4,8 @@ use std::fmt::Display;
 use alloy::primitives::{Address, U256};
 use chrono::{DateTime, Duration, Utc};
 use diesel::dsl::{auto_type, insert_into};
+use diesel::prelude::*;
 use diesel::sqlite::Sqlite;
-use diesel::{ExpressionMethods, JoinOnDsl, prelude::*};
 use diesel_async::{AsyncConnection, RunQueryDsl};

 use crate::crypto::integrity::v1::Integrable;
@@ -19,7 +19,7 @@ use crate::evm::policies::{
 };
 use crate::{
     db::{
-        models::{self, NewEvmEtherTransferGrant, NewEvmEtherTransferGrantTarget},
+        models::{NewEvmEtherTransferGrant, NewEvmEtherTransferGrantTarget},
         schema::{evm_ether_transfer_grant, evm_ether_transfer_grant_target},
     },
     evm::{policies::Policy, utils},
@@ -46,13 +46,13 @@ impl Display for Meaning {
     }
 }
 impl From<Meaning> for SpecificMeaning {
-    fn from(val: Meaning) -> SpecificMeaning {
-        SpecificMeaning::EtherTransfer(val)
+    fn from(val: Meaning) -> Self {
+        Self::EtherTransfer(val)
     }
 }

 // A grant for ether transfers, which can be scoped to specific target addresses and volume limits
-#[derive(Debug, Clone)]
+#[derive(Debug, Clone, arbiter_macros::Hashable)]
 pub struct Settings {
     pub target: Vec<Address>,
     pub limit: VolumeRateLimit,
@@ -61,18 +61,9 @@ impl Integrable for Settings {
     const KIND: &'static str = "EtherTransfer";
 }

-use crate::crypto::integrity::hashing::Hashable;
-
-impl Hashable for Settings {
-    fn hash<H: sha2::Digest>(&self, hasher: &mut H) {
-        self.target.hash(hasher);
-        self.limit.hash(hasher);
-    }
-}
-
 impl From<Settings> for SpecificGrant {
-    fn from(val: Settings) -> SpecificGrant {
-        SpecificGrant::EtherTransfer(val)
+    fn from(val: Settings) -> Self {
+        Self::EtherTransfer(val)
     }
 }

@@ -83,9 +74,7 @@ async fn query_relevant_past_transaction(
 ) -> QueryResult<Vec<(U256, DateTime<Utc>)>> {
     let past_transactions: Vec<(Vec<u8>, SqliteTimestamp)> = evm_transaction_log::table
         .filter(evm_transaction_log::grant_id.eq(grant_id))
-        .filter(
-            evm_transaction_log::signed_at.ge(SqliteTimestamp(chrono::Utc::now() - longest_window)),
-        )
+        .filter(evm_transaction_log::signed_at.ge(SqliteTimestamp(Utc::now() - longest_window)))
         .select((
             evm_transaction_log::eth_value,
             evm_transaction_log::signed_at,
@@ -110,10 +99,9 @@ async fn check_rate_limits(
     let mut violations = Vec::new();
     let window = grant.settings.specific.limit.window;

-    let past_transaction =
-        query_relevant_past_transaction(grant.common_settings_id, window, db).await?;
+    let past_transaction = query_relevant_past_transaction(grant.id, window, db).await?;

-    let window_start = chrono::Utc::now() - grant.settings.specific.limit.window;
+    let window_start = Utc::now() - grant.settings.specific.limit.window;
     let prospective_cumulative_volume: U256 = past_transaction
         .iter()
         .filter(|(_, timestamp)| timestamp >= &window_start)
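The `check_rate_limits` hunk above sums the volume of past transactions inside the sliding window, adds the prospective value, and flags a violation if the cap is exceeded. A std-only sketch of that check, with `u64` standing in for `U256` and plain second offsets standing in for the chrono timestamps:

```rust
// (value, signed_at_secs) pairs play the role of the queried transaction log.
fn violates_volume_limit(
    past: &[(u64, i64)],
    now: i64,
    window_secs: i64,
    prospective_value: u64,
    max_volume: u64,
) -> bool {
    let window_start = now - window_secs;
    // cumulative volume inside the window, plus the transaction being vetted
    let cumulative: u64 = past
        .iter()
        .filter(|(_, signed_at)| *signed_at >= window_start)
        .map(|(value, _)| value)
        .sum::<u64>()
        + prospective_value;
    cumulative > max_volume
}

fn main() {
    let past = [(40, 95), (40, 10)]; // second entry falls outside the window
    assert!(!violates_volume_limit(&past, 100, 30, 50, 100)); // 40 + 50 = 90, within cap
    assert!(violates_volume_limit(&past, 100, 30, 70, 100)); // 40 + 70 = 110, over cap
}
```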
@@ -163,10 +151,15 @@ impl Policy for EtherTransfer {
|
|||||||
}
|
}
|
||||||
|
|
||||||
async fn create_grant(
|
async fn create_grant(
|
||||||
basic: &models::EvmBasicGrant,
|
basic: &EvmBasicGrant,
|
||||||
grant: &Self::Settings,
|
grant: &Self::Settings,
|
||||||
conn: &mut impl AsyncConnection<Backend = Sqlite>,
|
conn: &mut impl AsyncConnection<Backend = Sqlite>,
|
||||||
) -> diesel::result::QueryResult<DatabaseID> {
|
) -> QueryResult<DatabaseID> {
|
||||||
|
#[expect(
|
||||||
|
clippy::cast_possible_truncation,
|
||||||
|
clippy::as_conversions,
|
||||||
|
reason = "fixme! #86"
|
||||||
|
)]
|
||||||
let limit_id: i32 = insert_into(evm_ether_transfer_limit::table)
|
let limit_id: i32 = insert_into(evm_ether_transfer_limit::table)
|
||||||
.values(NewEvmEtherTransferLimit {
|
.values(NewEvmEtherTransferLimit {
|
||||||
window_secs: grant.limit.window.num_seconds() as i32,
|
window_secs: grant.limit.window.num_seconds() as i32,
|
||||||
@@ -201,7 +194,7 @@ impl Policy for EtherTransfer {
     async fn try_find_grant(
         context: &EvalContext,
         conn: &mut impl AsyncConnection<Backend = Sqlite>,
-    ) -> diesel::result::QueryResult<Option<Grant<Self::Settings>>> {
+    ) -> QueryResult<Option<Grant<Self::Settings>>> {
         let target_bytes = context.to.to_vec();

         // Find a grant where:
@@ -250,20 +243,21 @@ impl Policy for EtherTransfer {
             })
             .collect();

+        let settings = Settings {
+            target: targets,
+            limit: VolumeRateLimit {
+                max_volume: utils::try_bytes_to_u256(&limit.max_volume)
+                    .map_err(|err| diesel::result::Error::DeserializationError(Box::new(err)))?,
+                window: Duration::seconds(limit.window_secs.into()),
+            },
+        };
+
         Ok(Some(Grant {
             id: grant.id,
             common_settings_id: grant.basic_grant_id,
             settings: CombinedSettings {
                 shared: SharedGrantSettings::try_from_model(basic_grant)?,
-                specific: Settings {
-                    target: targets,
-                    limit: VolumeRateLimit {
-                        max_volume: utils::try_bytes_to_u256(&limit.max_volume).map_err(|err| {
-                            diesel::result::Error::DeserializationError(Box::new(err))
-                        })?,
-                        window: chrono::Duration::seconds(limit.window_secs as i64),
-                    },
-                },
+                specific: settings,
             },
         }))
     }
@@ -274,7 +268,7 @@ impl Policy for EtherTransfer {
         _log_id: i32,
         _grant: &Grant<Self::Settings>,
         _conn: &mut impl AsyncConnection<Backend = Sqlite>,
-    ) -> diesel::result::QueryResult<()> {
+    ) -> QueryResult<()> {
         // Basic log is sufficient

         Ok(())
@@ -327,7 +321,7 @@ impl Policy for EtherTransfer {
             .map(|(basic, specific)| {
                 let targets: Vec<Address> = targets_by_grant
                     .get(&specific.id)
-                    .map(|v| v.as_slice())
+                    .map(Vec::as_slice)
                     .unwrap_or_default()
                     .iter()
                     .filter_map(|t| {
@@ -351,7 +345,7 @@ impl Policy for EtherTransfer {
                         max_volume: utils::try_bytes_to_u256(&limit.max_volume).map_err(
                             |e| diesel::result::Error::DeserializationError(Box::new(e)),
                         )?,
-                        window: Duration::seconds(limit.window_secs as i64),
+                        window: Duration::seconds(limit.window_secs.into()),
                     },
                 },
             },
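Several hunks in this compare replace `limit.window_secs as i64` with `limit.window_secs.into()`, satisfying `clippy::as_conversions`. A minimal sketch of why this is safer: `From`/`Into` only exists for lossless conversions (e.g. widening `i32 -> i64`), so a narrowing `.into()` fails to compile, whereas `as` compiles in both directions and can silently truncate. The function name below is a hypothetical stand-in.

```rust
// Widening i32 -> i64 is lossless, so `From` is implemented and `.into()` works.
pub fn window_seconds(window_secs: i32) -> i64 {
    i64::from(window_secs) // same as `window_secs.into()` in an i64 position
}

fn main() {
    assert_eq!(window_seconds(90), 90_i64);
    // `as` compiles even when it loses data; `.into()` would reject this:
    assert_eq!(i64::MAX as i32, -1);
    println!("ok");
}
```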
@@ -21,7 +21,7 @@ use crate::evm::{
 use super::{EtherTransfer, Settings};

 const WALLET_ACCESS_ID: i32 = 1;
-const CHAIN_ID: u64 = 1;
+const CHAIN_ID: alloy::primitives::ChainId = 1;

 const ALLOWED: Address = address!("1111111111111111111111111111111111111111");
 const OTHER: Address = address!("2222222222222222222222222222222222222222");
@@ -47,7 +47,7 @@ async fn insert_basic(conn: &mut DatabaseConnection, revoked: bool) -> EvmBasicG
     insert_into(evm_basic_grant::table)
         .values(NewEvmBasicGrant {
             wallet_access_id: WALLET_ACCESS_ID,
-            chain_id: CHAIN_ID as i32,
+            chain_id: CHAIN_ID.into(),
             valid_from: None,
             valid_until: None,
             max_gas_fee_per_gas: None,
@@ -160,7 +160,7 @@ async fn evaluate_passes_when_volume_within_limit() {
         .values(NewEvmTransactionLog {
             grant_id,
             wallet_access_id: WALLET_ACCESS_ID,
-            chain_id: CHAIN_ID as i32,
+            chain_id: CHAIN_ID.into(),
             eth_value: utils::u256_to_bytes(U256::from(500u64)).to_vec(),
             signed_at: SqliteTimestamp(Utc::now()),
         })
@@ -202,7 +202,7 @@ async fn evaluate_rejects_volume_over_limit() {
         .values(NewEvmTransactionLog {
             grant_id,
             wallet_access_id: WALLET_ACCESS_ID,
-            chain_id: CHAIN_ID as i32,
+            chain_id: CHAIN_ID.into(),
             eth_value: utils::u256_to_bytes(U256::from(1_000u64)).to_vec(),
             signed_at: SqliteTimestamp(Utc::now()),
         })
@@ -245,7 +245,7 @@ async fn evaluate_passes_at_exactly_volume_limit() {
         .values(NewEvmTransactionLog {
             grant_id,
             wallet_access_id: WALLET_ACCESS_ID,
-            chain_id: CHAIN_ID as i32,
+            chain_id: CHAIN_ID.into(),
             eth_value: utils::u256_to_bytes(U256::from(900u64)).to_vec(),
             signed_at: SqliteTimestamp(Utc::now()),
         })
@@ -340,7 +340,7 @@ proptest::proptest! {
     ) {
         use rand::{SeedableRng, seq::SliceRandom};
         use sha2::Digest;
-        use crate::crypto::integrity::hashing::Hashable;
+        use arbiter_crypto::hashing::Hashable;

         let addrs: Vec<Address> = raw_addrs.iter().map(|b| Address::from(*b)).collect();
         let mut shuffled = addrs.clone();
@@ -27,8 +27,8 @@ use alloy::{
 use arbiter_tokens_registry::evm::nonfungible::{self, TokenInfo};
 use chrono::{DateTime, Duration, Utc};
 use diesel::dsl::{auto_type, insert_into};
+use diesel::prelude::*;
 use diesel::sqlite::Sqlite;
-use diesel::{ExpressionMethods, prelude::*};
 use diesel_async::{AsyncConnection, RunQueryDsl};

 use super::{DatabaseID, EvalContext, EvalViolation};
@@ -56,13 +56,13 @@ impl std::fmt::Display for Meaning {
     }
 }
 impl From<Meaning> for SpecificMeaning {
-    fn from(val: Meaning) -> SpecificMeaning {
-        SpecificMeaning::TokenTransfer(val)
+    fn from(val: Meaning) -> Self {
+        Self::TokenTransfer(val)
     }
 }

 // A grant for token transfers, which can be scoped to specific target addresses and volume limits
-#[derive(Debug, Clone)]
+#[derive(Debug, Clone, arbiter_macros::Hashable)]
 pub struct Settings {
     pub token_contract: Address,
     pub target: Option<Address>,
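The hunk above rewrites `From` impls to use `Self` instead of repeating the type name, in line with `clippy::use_self`. A minimal sketch of the pattern with simplified stand-in types (not the repository's actual `Meaning`/`SpecificMeaning` definitions): `Self` keeps the impl body correct under a type rename and reads less noisily.

```rust
#[derive(Debug, PartialEq)]
pub struct Meaning {
    pub value: u64,
}

#[derive(Debug, PartialEq)]
pub enum SpecificMeaning {
    TokenTransfer(Meaning),
}

impl From<Meaning> for SpecificMeaning {
    // `Self` here refers to `SpecificMeaning`; renaming the enum
    // touches only the impl header, not the body.
    fn from(val: Meaning) -> Self {
        Self::TokenTransfer(val)
    }
}

fn main() {
    let m = SpecificMeaning::from(Meaning { value: 7 });
    assert_eq!(m, SpecificMeaning::TokenTransfer(Meaning { value: 7 }));
    println!("ok");
}
```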
@@ -72,19 +72,9 @@ impl Integrable for Settings {
     const KIND: &'static str = "TokenTransfer";
 }

-use crate::crypto::integrity::hashing::Hashable;
-
-impl Hashable for Settings {
-    fn hash<H: sha2::Digest>(&self, hasher: &mut H) {
-        self.token_contract.hash(hasher);
-        self.target.hash(hasher);
-        self.volume_limits.hash(hasher);
-    }
-}
-
 impl From<Settings> for SpecificGrant {
-    fn from(val: Settings) -> SpecificGrant {
-        SpecificGrant::TokenTransfer(val)
+    fn from(val: Settings) -> Self {
+        Self::TokenTransfer(val)
     }
 }

@@ -95,10 +85,7 @@ async fn query_relevant_past_transfers(
 ) -> QueryResult<Vec<(U256, DateTime<Utc>)>> {
     let past_logs: Vec<(Vec<u8>, SqliteTimestamp)> = evm_token_transfer_log::table
         .filter(evm_token_transfer_log::grant_id.eq(grant_id))
-        .filter(
-            evm_token_transfer_log::created_at
-                .ge(SqliteTimestamp(chrono::Utc::now() - longest_window)),
-        )
+        .filter(evm_token_transfer_log::created_at.ge(SqliteTimestamp(Utc::now() - longest_window)))
         .select((
             evm_token_transfer_log::value,
             evm_token_transfer_log::created_at,
@@ -138,7 +125,7 @@ async fn check_volume_rate_limits(
     let past_transfers = query_relevant_past_transfers(grant.id, longest_window, db).await?;

     for limit in &grant.settings.specific.volume_limits {
-        let window_start = chrono::Utc::now() - limit.window;
+        let window_start = Utc::now() - limit.window;
         let prospective_cumulative_volume: U256 = past_transfers
             .iter()
             .filter(|(_, timestamp)| timestamp >= &window_start)
@@ -214,6 +201,11 @@ impl Policy for TokenTransfer {
             .await?;

         for limit in &grant.volume_limits {
+            #[expect(
+                clippy::cast_possible_truncation,
+                clippy::as_conversions,
+                reason = "fixme! #86"
+            )]
             insert_into(evm_token_transfer_volume_limit::table)
                 .values(NewEvmTokenTransferVolumeLimit {
                     grant_id,
@@ -263,7 +255,7 @@ impl Policy for TokenTransfer {
                     max_volume: utils::try_bytes_to_u256(&row.max_volume).map_err(|err| {
                         diesel::result::Error::DeserializationError(Box::new(err))
                     })?,
-                    window: Duration::seconds(row.window_secs as i64),
+                    window: Duration::seconds(row.window_secs.into()),
                 })
             })
             .collect::<QueryResult<Vec<_>>>()?;
@@ -286,16 +278,18 @@ impl Policy for TokenTransfer {
             }
         };

+        let settings = Settings {
+            token_contract: Address::from(token_contract),
+            target,
+            volume_limits,
+        };
+
         Ok(Some(Grant {
             id: token_grant.id,
             common_settings_id: token_grant.basic_grant_id,
             settings: CombinedSettings {
                 shared: SharedGrantSettings::try_from_model(basic_grant)?,
-                specific: Settings {
-                    token_contract: Address::from(token_contract),
-                    target,
-                    volume_limits,
-                },
+                specific: settings,
             },
         }))
     }
@@ -311,7 +305,7 @@ impl Policy for TokenTransfer {
             .values(NewEvmTokenTransferLog {
                 grant_id: grant.id,
                 log_id,
-                chain_id: context.chain as i32,
+                chain_id: context.chain.into(),
                 token_contract: context.to.to_vec(),
                 recipient_address: meaning.to.to_vec(),
                 value: utils::u256_to_bytes(meaning.value).to_vec(),
@@ -360,7 +354,7 @@ impl Policy for TokenTransfer {
             .map(|(basic, specific)| {
                 let volume_limits: Vec<VolumeRateLimit> = limits_by_grant
                     .get(&specific.id)
-                    .map(|v| v.as_slice())
+                    .map(Vec::as_slice)
                     .unwrap_or_default()
                     .iter()
                     .map(|row| {
@@ -368,7 +362,7 @@ impl Policy for TokenTransfer {
                         max_volume: utils::try_bytes_to_u256(&row.max_volume).map_err(|e| {
                             diesel::result::Error::DeserializationError(Box::new(e))
                         })?,
-                        window: Duration::seconds(row.window_secs as i64),
+                        window: Duration::seconds(row.window_secs.into()),
                     })
                 })
                 .collect::<QueryResult<Vec<_>>>()?;
@@ -59,7 +59,7 @@ async fn insert_basic(conn: &mut DatabaseConnection, revoked: bool) -> EvmBasicG
     insert_into(evm_basic_grant::table)
         .values(NewEvmBasicGrant {
             wallet_access_id: WALLET_ACCESS_ID,
-            chain_id: CHAIN_ID as i32,
+            chain_id: CHAIN_ID.into(),
             valid_from: None,
             valid_until: None,
             max_gas_fee_per_gas: None,
@@ -238,12 +238,11 @@ async fn evaluate_passes_volume_at_exact_limit() {
         .unwrap();

     // Record a past transfer of 900, with current transfer 100 => exactly 1000 limit
-    use crate::db::{models::NewEvmTokenTransferLog, schema::evm_token_transfer_log};
-    insert_into(evm_token_transfer_log::table)
-        .values(NewEvmTokenTransferLog {
+    insert_into(db::schema::evm_token_transfer_log::table)
+        .values(db::models::NewEvmTokenTransferLog {
             grant_id,
             log_id: 0,
-            chain_id: CHAIN_ID as i32,
+            chain_id: CHAIN_ID.into(),
             token_contract: DAI.to_vec(),
             recipient_address: RECIPIENT.to_vec(),
             value: utils::u256_to_bytes(U256::from(900u64)).to_vec(),
@@ -283,12 +282,11 @@ async fn evaluate_rejects_volume_over_limit() {
         .await
         .unwrap();

-    use crate::db::{models::NewEvmTokenTransferLog, schema::evm_token_transfer_log};
-    insert_into(evm_token_transfer_log::table)
-        .values(NewEvmTokenTransferLog {
+    insert_into(db::schema::evm_token_transfer_log::table)
+        .values(db::models::NewEvmTokenTransferLog {
             grant_id,
             log_id: 0,
-            chain_id: CHAIN_ID as i32,
+            chain_id: CHAIN_ID.into(),
             token_contract: DAI.to_vec(),
             recipient_address: RECIPIENT.to_vec(),
             value: utils::u256_to_bytes(U256::from(1_000u64)).to_vec(),
@@ -419,7 +417,7 @@ proptest::proptest! {
     ) {
         use rand::{SeedableRng, seq::SliceRandom};
         use sha2::Digest;
-        use crate::crypto::integrity::hashing::Hashable;
+        use arbiter_crypto::hashing::Hashable;

         let limits: Vec<VolumeRateLimit> = raw_limits
             .iter()
@@ -1,12 +1,12 @@
 use std::sync::Mutex;

-use crate::safe_cell::{SafeCell, SafeCellHandle as _};
 use alloy::{
     consensus::SignableTransaction,
     network::{TxSigner, TxSignerSync},
     primitives::{Address, B256, ChainId, Signature},
     signers::{Error, Result, Signer, SignerSync, utils::secret_key_to_address},
 };
+use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};
 use async_trait::async_trait;
 use k256::ecdsa::{self, RecoveryId, SigningKey, signature::hazmat::PrehashSigner};

@@ -82,8 +82,8 @@ impl SafeSigner {
         })
     }

+    #[expect(clippy::significant_drop_tightening, reason = "false positive")]
     fn sign_hash_inner(&self, hash: &B256) -> Result<Signature> {
-        #[allow(clippy::expect_used)]
         let mut cell = self.key.lock().expect("SafeSigner mutex poisoned");
         let reader = cell.read();
         let sig: (ecdsa::Signature, RecoveryId) = reader.sign_prehash(hash.as_ref())?;
@@ -96,7 +96,6 @@ impl SafeSigner {
         {
             return Err(Error::TransactionChainIdMismatch {
                 signer: chain_id,
-                #[allow(clippy::expect_used)]
                 tx: tx.chain_id().expect("Chain ID is guaranteed to be set"),
             });
         }
@@ -7,7 +7,7 @@ pub struct LengthError {
     pub actual: usize,
 }

-pub fn u256_to_bytes(value: U256) -> [u8; 32] {
+pub const fn u256_to_bytes(value: U256) -> [u8; 32] {
     value.to_le_bytes()
 }
 pub fn bytes_to_u256(bytes: &[u8]) -> Option<U256> {
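The hunk above promotes `u256_to_bytes` to `const fn`. A minimal sketch of what that buys, using `u64` as a stand-in for the crate's `U256`: a function that only rearranges bytes can be evaluated at compile time, so it becomes usable in `const` items and other const contexts.

```rust
// `to_le_bytes` is itself const, so the wrapper can be const too.
pub const fn u64_to_bytes(value: u64) -> [u8; 8] {
    value.to_le_bytes()
}

// Legal only because the function is `const`:
const ONE: [u8; 8] = u64_to_bytes(1);

fn main() {
    assert_eq!(ONE, [1, 0, 0, 0, 0, 0, 0, 0]);
    println!("ok");
}
```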
@@ -98,8 +98,7 @@ pub async fn start(mut conn: ClientConnection, mut bi: GrpcBi<ClientRequest, Cli
         Err(err) => {
             let _ = bi
                 .send(Err(Status::unauthenticated(format!(
-                    "Authentication failed: {}",
-                    err
+                    "Authentication failed: {err}",
                 ))))
                 .await;
             warn!(error = ?err, "Client authentication failed");
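The hunk above inlines the format argument (`"{}", err` becomes `"{err}"`), which is what `clippy::uninlined_format_args` asks for. A minimal sketch showing the two forms are equivalent at runtime:

```rust
fn main() {
    let err = "bad signature";
    // Old style: positional argument passed separately.
    let old_style = format!("Authentication failed: {}", err);
    // New style: the variable is captured by name inside the braces.
    let new_style = format!("Authentication failed: {err}");
    assert_eq!(old_style, new_style);
    println!("{new_style}");
}
```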
@@ -1,11 +1,11 @@
+use arbiter_crypto::authn;
 use arbiter_proto::{
     ClientMetadata,
     proto::{
         client::{
             ClientRequest, ClientResponse,
             auth::{
-                self as proto_auth, AuthChallenge as ProtoAuthChallenge,
-                AuthChallengeRequest as ProtoAuthChallengeRequest,
+                self as proto_auth, AuthChallengeRequest as ProtoAuthChallengeRequest,
                 AuthChallengeSolution as ProtoAuthChallengeSolution, AuthResult as ProtoAuthResult,
                 request::Payload as AuthRequestPayload, response::Payload as AuthResponsePayload,
             },
@@ -21,7 +21,7 @@ use tonic::Status;
 use tracing::warn;

 use crate::{
-    actors::client::{self, ClientConnection, auth},
+    actors::client::{ClientConnection, auth},
     grpc::request_tracker::RequestTracker,
 };

@@ -31,7 +31,7 @@ pub struct AuthTransportAdapter<'a> {
 }

 impl<'a> AuthTransportAdapter<'a> {
-    pub fn new(
+    pub const fn new(
         bi: &'a mut GrpcBi<ClientRequest, ClientResponse>,
         request_tracker: &'a mut RequestTracker,
     ) -> Self {
@@ -41,40 +41,6 @@ impl<'a> AuthTransportAdapter<'a> {
         }
     }

-    fn response_to_proto(response: auth::Outbound) -> AuthResponsePayload {
-        match response {
-            auth::Outbound::AuthChallenge { pubkey, nonce } => {
-                AuthResponsePayload::Challenge(ProtoAuthChallenge {
-                    pubkey: pubkey.to_bytes().to_vec(),
-                    nonce,
-                })
-            }
-            auth::Outbound::AuthSuccess => {
-                AuthResponsePayload::Result(ProtoAuthResult::Success.into())
-            }
-        }
-    }
-
-    fn error_to_proto(error: auth::Error) -> AuthResponsePayload {
-        AuthResponsePayload::Result(
-            match error {
-                auth::Error::InvalidChallengeSolution => ProtoAuthResult::InvalidSignature,
-                auth::Error::ApproveError(auth::ApproveError::Denied) => {
-                    ProtoAuthResult::ApprovalDenied
-                }
-                auth::Error::ApproveError(auth::ApproveError::Upstream(
-                    crate::actors::flow_coordinator::ApprovalError::NoUserAgentsConnected,
-                )) => ProtoAuthResult::NoUserAgentsOnline,
-                auth::Error::ApproveError(auth::ApproveError::Internal)
-                | auth::Error::DatabasePoolUnavailable
-                | auth::Error::DatabaseOperationFailed
-                | auth::Error::IntegrityCheckFailed
-                | auth::Error::Transport => ProtoAuthResult::Internal,
-            }
-            .into(),
-        )
-    }
-
     async fn send_client_response(
         &mut self,
         payload: AuthResponsePayload,
@@ -96,14 +62,14 @@ impl<'a> AuthTransportAdapter<'a> {
     }

 #[async_trait]
-impl Sender<Result<auth::Outbound, auth::Error>> for AuthTransportAdapter<'_> {
+impl Sender<Result<auth::Outbound, auth::ClientAuthError>> for AuthTransportAdapter<'_> {
     async fn send(
         &mut self,
-        item: Result<auth::Outbound, auth::Error>,
+        item: Result<auth::Outbound, auth::ClientAuthError>,
     ) -> Result<(), TransportError> {
         let payload = match item {
-            Ok(message) => AuthTransportAdapter::response_to_proto(message),
-            Err(err) => AuthTransportAdapter::error_to_proto(err),
+            Ok(message) => message.into(),
+            Err(err) => AuthResponsePayload::Result(ProtoAuthResult::from(err).into()),
         };

         self.send_client_response(payload).await
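The hunks above delete the hand-written `response_to_proto`/`error_to_proto` helpers and lean on `From` impls instead (`message.into()`, `ProtoAuthResult::from(err)`). A minimal sketch of that refactor with simplified stand-in enums (not the repository's actual `auth::ClientAuthError` or `ProtoAuthResult` definitions): once the mapping lives in a `From` impl, every call site gets `.into()` for free and the conversion cannot drift between callers.

```rust
#[derive(Debug)]
pub enum ClientAuthError {
    InvalidChallengeSolution,
    Transport,
}

#[derive(Debug, PartialEq)]
pub enum ProtoResult {
    InvalidSignature,
    Internal,
}

impl From<ClientAuthError> for ProtoResult {
    // The former free-function mapping, now discoverable via the trait system.
    fn from(err: ClientAuthError) -> Self {
        match err {
            ClientAuthError::InvalidChallengeSolution => Self::InvalidSignature,
            ClientAuthError::Transport => Self::Internal,
        }
    }
}

fn main() {
    assert_eq!(
        ProtoResult::from(ClientAuthError::Transport),
        ProtoResult::Internal
    );
    println!("ok");
}
```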
@@ -160,11 +126,7 @@ impl Receiver<auth::Inbound> for AuthTransportAdapter<'_> {
                     .await;
                     return None;
                 };
-                let Ok(pubkey) = <[u8; 32]>::try_from(pubkey) else {
-                    let _ = self.send_auth_result(ProtoAuthResult::InvalidKey).await;
-                    return None;
-                };
-                let Ok(pubkey) = ed25519_dalek::VerifyingKey::from_bytes(&pubkey) else {
+                let Ok(pubkey) = authn::PublicKey::try_from(pubkey.as_slice()) else {
                     let _ = self.send_auth_result(ProtoAuthResult::InvalidKey).await;
                     return None;
                 };
@@ -174,7 +136,7 @@ impl Receiver<auth::Inbound> for AuthTransportAdapter<'_> {
                 })
             }
             AuthRequestPayload::ChallengeSolution(ProtoAuthChallengeSolution { signature }) => {
-                let Ok(signature) = ed25519_dalek::Signature::try_from(signature.as_slice()) else {
+                let Ok(signature) = authn::Signature::try_from(signature.as_slice()) else {
                     let _ = self
                         .send_auth_result(ProtoAuthResult::InvalidSignature)
                         .await;
@@ -186,7 +148,7 @@ impl Receiver<auth::Inbound> for AuthTransportAdapter<'_> {
     }
 }

-impl Bi<auth::Inbound, Result<auth::Outbound, auth::Error>> for AuthTransportAdapter<'_> {}
+impl Bi<auth::Inbound, Result<auth::Outbound, auth::ClientAuthError>> for AuthTransportAdapter<'_> {}

 fn client_metadata_from_proto(metadata: ProtoClientInfo) -> ClientMetadata {
     ClientMetadata {
@@ -200,7 +162,7 @@ pub async fn start(
     conn: &mut ClientConnection,
     bi: &mut GrpcBi<ClientRequest, ClientResponse>,
     request_tracker: &mut RequestTracker,
-) -> Result<i32, auth::Error> {
+) -> Result<i32, auth::ClientAuthError> {
     let mut transport = AuthTransportAdapter::new(bi, request_tracker);
-    client::auth::authenticate(conn, &mut transport).await
+    auth::authenticate(conn, &mut transport).await
 }
@@ -23,7 +23,7 @@ use crate::{
     },
 };

-fn wrap_response(payload: EvmResponsePayload) -> ClientResponsePayload {
+const fn wrap_response(payload: EvmResponsePayload) -> ClientResponsePayload {
     ClientResponsePayload::Evm(proto_evm::Response {
         payload: Some(payload),
     })
@@ -13,7 +13,7 @@ use tonic::Status;
 use tracing::warn;

 use crate::actors::{
-    client::session::{ClientSession, Error, HandleQueryVaultState},
+    client::session::{ClientSession, ClientSessionError, HandleQueryVaultState},
     keyholder::KeyHolderState,
 };

@@ -28,12 +28,14 @@ pub(super) async fn dispatch(
     };

     match payload {
-        VaultRequestPayload::QueryState(_) => {
+        VaultRequestPayload::QueryState(()) => {
             let state = match actor.ask(HandleQueryVaultState {}).await {
                 Ok(KeyHolderState::Unbootstrapped) => ProtoVaultState::Unbootstrapped,
                 Ok(KeyHolderState::Sealed) => ProtoVaultState::Sealed,
                 Ok(KeyHolderState::Unsealed) => ProtoVaultState::Unsealed,
-                Err(SendError::HandlerError(Error::Internal)) => ProtoVaultState::Error,
+                Err(SendError::HandlerError(ClientSessionError::Internal)) => {
+                    ProtoVaultState::Error
+                }
                 Err(err) => {
                     warn!(error = ?err, "Failed to query vault state");
                     ProtoVaultState::Error
@@ -31,16 +31,16 @@ impl Convert for SpecificMeaning {

     fn convert(self) -> Self::Output {
         let kind = match self {
-            SpecificMeaning::EtherTransfer(meaning) => ProtoSpecificMeaningKind::EtherTransfer(
+            Self::EtherTransfer(meaning) => ProtoSpecificMeaningKind::EtherTransfer(
                 arbiter_proto::proto::shared::evm::EtherTransferMeaning {
                     to: meaning.to.to_vec(),
                     value: u256_to_proto_bytes(meaning.value),
                 },
             ),
-            SpecificMeaning::TokenTransfer(meaning) => ProtoSpecificMeaningKind::TokenTransfer(
+            Self::TokenTransfer(meaning) => ProtoSpecificMeaningKind::TokenTransfer(
                 arbiter_proto::proto::shared::evm::TokenTransferMeaning {
                     token: Some(ProtoTokenInfo {
-                        symbol: meaning.token.symbol.to_string(),
+                        symbol: meaning.token.symbol.to_owned(),
                         address: meaning.token.contract.to_vec(),
                         chain_id: meaning.token.chain,
                     }),
@@ -61,25 +61,21 @@ impl Convert for EvalViolation {

     fn convert(self) -> Self::Output {
         let kind = match self {
-            EvalViolation::InvalidTarget { target } => {
+            Self::InvalidTarget { target } => {
                 ProtoEvalViolationKind::InvalidTarget(target.to_vec())
             }
-            EvalViolation::GasLimitExceeded {
+            Self::GasLimitExceeded {
                 max_gas_fee_per_gas,
                 max_priority_fee_per_gas,
             } => ProtoEvalViolationKind::GasLimitExceeded(GasLimitExceededViolation {
                 max_gas_fee_per_gas: max_gas_fee_per_gas.map(u256_to_proto_bytes),
                 max_priority_fee_per_gas: max_priority_fee_per_gas.map(u256_to_proto_bytes),
             }),
-            EvalViolation::RateLimitExceeded => ProtoEvalViolationKind::RateLimitExceeded(()),
-            EvalViolation::VolumetricLimitExceeded => {
-                ProtoEvalViolationKind::VolumetricLimitExceeded(())
-            }
-            EvalViolation::InvalidTime => ProtoEvalViolationKind::InvalidTime(()),
-            EvalViolation::InvalidTransactionType => {
-                ProtoEvalViolationKind::InvalidTransactionType(())
-            }
-            EvalViolation::MismatchingChainId { expected, actual } => {
+            Self::RateLimitExceeded => ProtoEvalViolationKind::RateLimitExceeded(()),
+            Self::VolumetricLimitExceeded => ProtoEvalViolationKind::VolumetricLimitExceeded(()),
+            Self::InvalidTime => ProtoEvalViolationKind::InvalidTime(()),
+            Self::InvalidTransactionType => ProtoEvalViolationKind::InvalidTransactionType(()),
+            Self::MismatchingChainId { expected, actual } => {
                 ProtoEvalViolationKind::ChainIdMismatch(proto_eval_violation::ChainIdMismatch {
                     expected,
                     actual,
@@ -96,13 +92,13 @@ impl Convert for VetError {
|
|||||||
|
|
||||||
fn convert(self) -> Self::Output {
|
fn convert(self) -> Self::Output {
|
||||||
let kind = match self {
|
let kind = match self {
|
||||||
VetError::ContractCreationNotSupported => {
|
Self::ContractCreationNotSupported => {
|
||||||
ProtoTransactionEvalErrorKind::ContractCreationNotSupported(())
|
ProtoTransactionEvalErrorKind::ContractCreationNotSupported(())
|
||||||
}
|
}
|
||||||
VetError::UnsupportedTransactionType => {
|
Self::UnsupportedTransactionType => {
|
||||||
ProtoTransactionEvalErrorKind::UnsupportedTransactionType(())
|
ProtoTransactionEvalErrorKind::UnsupportedTransactionType(())
|
||||||
}
|
}
|
||||||
VetError::Evaluated(meaning, policy_error) => match policy_error {
|
Self::Evaluated(meaning, policy_error) => match policy_error {
|
||||||
PolicyError::NoMatchingGrant => {
|
PolicyError::NoMatchingGrant => {
|
||||||
ProtoTransactionEvalErrorKind::NoMatchingGrant(NoMatchingGrantError {
|
ProtoTransactionEvalErrorKind::NoMatchingGrant(NoMatchingGrantError {
|
||||||
meaning: Some(meaning.convert()),
|
meaning: Some(meaning.convert()),
|
||||||
|
|||||||
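The `Convert` impls above consistently replace the spelled-out enum name with `Self::` in match arms. As a minimal sketch of that pattern (hypothetical `Violation` enum and `String` output, not the crate's actual types):

```rust
// Minimal sketch of the Convert pattern used above, with hypothetical types.
trait Convert {
    type Output;
    fn convert(self) -> Self::Output;
}

enum Violation {
    RateLimitExceeded,
    InvalidTarget { target: [u8; 20] },
}

impl Convert for Violation {
    type Output = String;

    fn convert(self) -> Self::Output {
        // `Self::` keeps arms short and survives a rename of the enum.
        match self {
            Self::RateLimitExceeded => "rate limit exceeded".to_owned(),
            Self::InvalidTarget { target } => format!("invalid target: {target:?}"),
        }
    }
}
```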
@@ -20,7 +20,7 @@ impl RequestTracker {
 
     // This is used to set the response id for auth responses, which need to match the request id of the auth challenge request.
     // -1 offset is needed because request() increments the next_request_id after returning the current request id.
-    pub fn current_request_id(&self) -> i32 {
+    pub const fn current_request_id(&self) -> i32 {
         self.next_request_id - 1
     }
 }
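The comment in the hunk above explains the `-1` offset against `request()`'s post-increment. A self-contained sketch of that bookkeeping (the struct body and a `request()`/`new()` shape are assumptions; only `next_request_id` and `current_request_id` appear in the diff):

```rust
// Sketch of the request-id bookkeeping described in the diff's comment.
pub struct RequestTracker {
    next_request_id: i32,
}

impl RequestTracker {
    pub const fn new() -> Self {
        Self { next_request_id: 0 }
    }

    // Hands out the current id, then advances the counter.
    pub fn request(&mut self) -> i32 {
        let id = self.next_request_id;
        self.next_request_id += 1;
        id
    }

    // The -1 undoes the post-increment in request(), so an auth response
    // can reuse the id of the challenge request that was just issued.
    pub const fn current_request_id(&self) -> i32 {
        self.next_request_id - 1
    }
}
```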
@@ -1,3 +1,4 @@
+use arbiter_crypto::authn;
 use arbiter_proto::{
     proto::user_agent::{
         UserAgentRequest, UserAgentResponse,
@@ -5,8 +6,7 @@ use arbiter_proto::{
            self as proto_auth, AuthChallenge as ProtoAuthChallenge,
            AuthChallengeRequest as ProtoAuthChallengeRequest,
            AuthChallengeSolution as ProtoAuthChallengeSolution, AuthResult as ProtoAuthResult,
-            KeyType as ProtoKeyType, request::Payload as AuthRequestPayload,
-            response::Payload as AuthResponsePayload,
+            request::Payload as AuthRequestPayload, response::Payload as AuthResponsePayload,
         },
         user_agent_request::Payload as UserAgentRequestPayload,
         user_agent_response::Payload as UserAgentResponsePayload,
@@ -18,8 +18,7 @@ use tonic::Status;
 use tracing::warn;
 
 use crate::{
-    actors::user_agent::{AuthPublicKey, UserAgentConnection, auth},
-    db::models::KeyType,
+    actors::user_agent::{UserAgentConnection, auth},
     grpc::request_tracker::RequestTracker,
 };
 
@@ -29,7 +28,7 @@ pub struct AuthTransportAdapter<'a> {
 }
 
 impl<'a> AuthTransportAdapter<'a> {
-    pub fn new(
+    pub const fn new(
         bi: &'a mut GrpcBi<UserAgentRequest, UserAgentResponse>,
         request_tracker: &'a mut RequestTracker,
     ) -> Self {
@@ -141,28 +140,9 @@ impl Receiver<auth::Inbound> for AuthTransportAdapter<'_> {
             AuthRequestPayload::ChallengeRequest(ProtoAuthChallengeRequest {
                 pubkey,
                 bootstrap_token,
-                key_type,
+                ..
             }) => {
-                let Ok(key_type) = ProtoKeyType::try_from(key_type) else {
-                    warn!(
-                        event = "received request with invalid key type",
-                        "grpc.useragent.auth_adapter"
-                    );
-                    return None;
-                };
-                let key_type = match key_type {
-                    ProtoKeyType::Ed25519 => KeyType::Ed25519,
-                    ProtoKeyType::EcdsaSecp256k1 => KeyType::EcdsaSecp256k1,
-                    ProtoKeyType::Rsa => KeyType::Rsa,
-                    ProtoKeyType::Unspecified => {
-                        warn!(
-                            event = "received request with unspecified key type",
-                            "grpc.useragent.auth_adapter"
-                        );
-                        return None;
-                    }
-                };
-                let Ok(pubkey) = AuthPublicKey::try_from((key_type, pubkey)) else {
+                let Ok(pubkey) = authn::PublicKey::try_from(pubkey.as_slice()) else {
                     warn!(
                         event = "received request with invalid public key",
                         "grpc.useragent.auth_adapter"
@@ -188,7 +168,7 @@ pub async fn start(
     conn: &mut UserAgentConnection,
     bi: &mut GrpcBi<UserAgentRequest, UserAgentResponse>,
     request_tracker: &mut RequestTracker,
-) -> Result<AuthPublicKey, auth::Error> {
+) -> Result<authn::PublicKey, auth::Error> {
     let transport = AuthTransportAdapter::new(bi, request_tracker);
     auth::authenticate(conn, transport).await
 }
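The adapter above collapses the multi-step key-type dispatch into a single `let Ok(..) = .. else { warn!(..); return None; }` guard. A standalone sketch of that let-else parse-or-bail shape (here `parse_pubkey` is a hypothetical stand-in for `authn::PublicKey::try_from`, and `eprintln!` stands in for `warn!`):

```rust
// Hypothetical stand-in for authn::PublicKey::try_from: a fixed-size parse.
fn parse_pubkey(bytes: &[u8]) -> Result<[u8; 32], ()> {
    <[u8; 32]>::try_from(bytes).map_err(|_| ())
}

// let-else guard: log once and bail early when the input does not parse.
fn handle_challenge(raw: &[u8]) -> Option<[u8; 32]> {
    let Ok(pubkey) = parse_pubkey(raw) else {
        eprintln!("received request with invalid public key");
        return None;
    };
    Some(pubkey)
}
```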
@@ -37,7 +37,7 @@ use crate::{
     },
 };
 
-fn wrap_evm_response(payload: EvmResponsePayload) -> UserAgentResponsePayload {
+const fn wrap_evm_response(payload: EvmResponsePayload) -> UserAgentResponsePayload {
     UserAgentResponsePayload::Evm(proto_evm::Response {
         payload: Some(payload),
     })
@@ -52,8 +52,8 @@ pub(super) async fn dispatch(
     };
 
     match payload {
-        EvmRequestPayload::WalletCreate(_) => handle_wallet_create(actor).await,
-        EvmRequestPayload::WalletList(_) => handle_wallet_list(actor).await,
+        EvmRequestPayload::WalletCreate(()) => handle_wallet_create(actor).await,
+        EvmRequestPayload::WalletList(()) => handle_wallet_list(actor).await,
         EvmRequestPayload::GrantCreate(req) => handle_grant_create(actor, req).await,
         EvmRequestPayload::GrantDelete(req) => handle_grant_delete(actor, req).await,
         EvmRequestPayload::GrantList(_) => handle_grant_list(actor).await,
@@ -121,9 +121,6 @@ async fn handle_grant_list(
                 })
                 .collect(),
         }),
-        Err(kameo::error::SendError::HandlerError(GrantMutationError::VaultSealed)) => {
-            EvmGrantListResult::Error(ProtoEvmError::VaultSealed.into())
-        }
         Err(err) => {
             warn!(error = ?err, "Failed to list EVM grants");
             EvmGrantListResult::Error(ProtoEvmError::Internal.into())
@@ -150,7 +147,7 @@ async fn handle_grant_create(
         .try_convert()?;
 
     let result = match actor.ask(HandleGrantCreate { basic, grant }).await {
-        Ok(grant_id) => EvmGrantCreateResult::GrantId(grant_id.into_inner()),
+        Ok(grant_id) => EvmGrantCreateResult::GrantId(grant_id),
         Err(kameo::error::SendError::HandlerError(GrantMutationError::VaultSealed)) => {
             EvmGrantCreateResult::Error(ProtoEvmError::VaultSealed.into())
         }
@@ -22,11 +22,11 @@ use crate::{
     grpc::TryConvert,
 };
 
-fn address_from_bytes(bytes: Vec<u8>) -> Result<Address, Status> {
+fn address_from_bytes(bytes: &[u8]) -> Result<Address, Status> {
     if bytes.len() != 20 {
         return Err(Status::invalid_argument("Invalid EVM address"));
     }
-    Ok(Address::from_slice(&bytes))
+    Ok(Address::from_slice(bytes))
 }
 
 fn u256_from_proto_bytes(bytes: &[u8]) -> Result<U256, Status> {
@@ -41,7 +41,7 @@ impl TryConvert for ProtoTimestamp {
     type Error = Status;
 
     fn try_convert(self) -> Result<DateTime<Utc>, Status> {
-        Utc.timestamp_opt(self.seconds, self.nanos as u32)
+        Utc.timestamp_opt(self.seconds, self.nanos.try_into().unwrap_or_default())
             .single()
             .ok_or_else(|| Status::invalid_argument("Invalid timestamp"))
     }
@@ -116,7 +116,8 @@ impl TryConvert for ProtoSpecificGrant {
             limit,
         })) => Ok(SpecificGrant::EtherTransfer(ether_transfer::Settings {
             target: targets
-                .into_iter()
+                .iter()
+                .map(Vec::as_slice)
                 .map(address_from_bytes)
                 .collect::<Result<_, _>>()?,
             limit: limit
@@ -130,8 +131,10 @@ impl TryConvert for ProtoSpecificGrant {
             target,
             volume_limits,
         })) => Ok(SpecificGrant::TokenTransfer(token_transfers::Settings {
-            token_contract: address_from_bytes(token_contract)?,
-            target: target.map(address_from_bytes).transpose()?,
+            token_contract: address_from_bytes(&token_contract)?,
+            target: target
+                .map(|target| address_from_bytes(&target))
+                .transpose()?,
             volume_limits: volume_limits
                 .into_iter()
                 .map(ProtoVolumeRateLimit::try_convert)
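The `address_from_bytes` change above only switches the parameter from `Vec<u8>` to `&[u8]`; the 20-byte length gate is unchanged. A self-contained sketch of that validation (a raw `[u8; 20]` and a `String` error stand in for the crate's `Address` and `tonic::Status`):

```rust
// Sketch of the 20-byte EVM address check from the diff, with stand-in
// types: [u8; 20] instead of Address, String instead of tonic::Status.
fn address_from_bytes(bytes: &[u8]) -> Result<[u8; 20], String> {
    if bytes.len() != 20 {
        return Err("Invalid EVM address".to_owned());
    }
    let mut addr = [0u8; 20];
    addr.copy_from_slice(bytes); // length verified above, so this cannot panic
    Ok(addr)
}
```

Taking `&[u8]` lets callers pass `Vec<u8>`, arrays, or slices without giving up ownership, which is what makes the `.map(Vec::as_slice)` pipeline in the grant conversion possible.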
@@ -22,7 +22,7 @@ impl Convert for DateTime<Utc> {
     fn convert(self) -> ProtoTimestamp {
         ProtoTimestamp {
             seconds: self.timestamp(),
-            nanos: self.timestamp_subsec_nanos() as i32,
+            nanos: self.timestamp_subsec_nanos().try_into().unwrap_or(i32::MAX),
         }
     }
 }
@@ -74,13 +74,13 @@ impl Convert for SpecificGrant {
 
     fn convert(self) -> ProtoSpecificGrant {
         let grant = match self {
-            SpecificGrant::EtherTransfer(s) => {
+            Self::EtherTransfer(s) => {
                 ProtoSpecificGrantType::EtherTransfer(ProtoEtherTransferSettings {
                     targets: s.target.into_iter().map(|a| a.to_vec()).collect(),
                     limit: Some(s.limit.convert()),
                 })
             }
-            SpecificGrant::TokenTransfer(s) => {
+            Self::TokenTransfer(s) => {
                 ProtoSpecificGrantType::TokenTransfer(ProtoTokenTransferSettings {
                     token_contract: s.token_contract.to_vec(),
                     target: s.target.map(|a| a.to_vec()),
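The `nanos` change above replaces an `as i32` cast with `try_into().unwrap_or(i32::MAX)`: subsecond nanoseconds come out of chrono as a `u32` while the protobuf field is an `i32`, and the new form saturates instead of silently wrapping. The narrowing step in isolation:

```rust
// Sketch of the u32 -> i32 narrowing from the diff: saturate at i32::MAX
// rather than wrap the way `as i32` would for values above 2^31 - 1.
fn nanos_to_proto(nanos: u32) -> i32 {
    nanos.try_into().unwrap_or(i32::MAX)
}
```

In practice subsecond nanos stay below one billion and fit in `i32`, so the clamp is a defensive bound rather than a hot path; the reverse direction in `try_convert` uses `try_into().unwrap_or_default()` so a negative proto value degrades to zero nanos.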
@@ -1,3 +1,4 @@
+use arbiter_crypto::authn;
 use arbiter_proto::proto::{
     shared::ClientInfo as ProtoClientMetadata,
     user_agent::{
@@ -31,7 +32,7 @@ use crate::{
     grpc::Convert,
 };
 
-fn wrap_sdk_client_response(payload: SdkClientResponsePayload) -> UserAgentResponsePayload {
+const fn wrap_sdk_client_response(payload: SdkClientResponsePayload) -> UserAgentResponsePayload {
     UserAgentResponsePayload::SdkClient(proto_sdk_client::Response {
         payload: Some(payload),
     })
@@ -41,7 +42,7 @@ pub(super) fn out_of_band_payload(oob: OutOfBand) -> UserAgentResponsePayload {
     match oob {
         OutOfBand::ClientConnectionRequest { profile } => wrap_sdk_client_response(
             SdkClientResponsePayload::ConnectionRequest(ProtoSdkClientConnectionRequest {
-                pubkey: profile.pubkey.to_bytes().to_vec(),
+                pubkey: profile.pubkey.to_bytes(),
                 info: Some(ProtoClientMetadata {
                     name: profile.metadata.name,
                     description: profile.metadata.description,
@@ -51,7 +52,7 @@ pub(super) fn out_of_band_payload(oob: OutOfBand) -> UserAgentResponsePayload {
         ),
         OutOfBand::ClientConnectionCancel { pubkey } => wrap_sdk_client_response(
             SdkClientResponsePayload::ConnectionCancel(ProtoSdkClientConnectionCancel {
-                pubkey: pubkey.to_bytes().to_vec(),
+                pubkey: pubkey.to_bytes(),
             }),
         ),
     }
@@ -74,14 +75,14 @@ pub(super) async fn dispatch(
         SdkClientRequestPayload::Revoke(_) => Err(Status::unimplemented(
             "SdkClientRevoke is not yet implemented",
         )),
-        SdkClientRequestPayload::List(_) => handle_list(actor).await,
+        SdkClientRequestPayload::List(()) => handle_list(actor).await,
         SdkClientRequestPayload::GrantWalletAccess(req) => {
             handle_grant_wallet_access(actor, req).await
         }
         SdkClientRequestPayload::RevokeWalletAccess(req) => {
             handle_revoke_wallet_access(actor, req).await
         }
-        SdkClientRequestPayload::ListWalletAccess(_) => handle_list_wallet_access(actor).await,
+        SdkClientRequestPayload::ListWalletAccess(()) => handle_list_wallet_access(actor).await,
     }
 }
 
@@ -89,10 +90,8 @@ async fn handle_connection_response(
     actor: &ActorRef<UserAgentSession>,
     resp: ProtoSdkClientConnectionResponse,
 ) -> Result<Option<UserAgentResponsePayload>, Status> {
-    let pubkey_bytes = <[u8; 32]>::try_from(resp.pubkey)
-        .map_err(|_| Status::invalid_argument("Invalid Ed25519 public key length"))?;
-    let pubkey = ed25519_dalek::VerifyingKey::from_bytes(&pubkey_bytes)
-        .map_err(|_| Status::invalid_argument("Invalid Ed25519 public key"))?;
+    let pubkey = authn::PublicKey::try_from(resp.pubkey.as_slice())
+        .map_err(|()| Status::invalid_argument("Invalid ML-DSA public key"))?;
 
     actor
         .ask(HandleNewClientApprove {
@@ -117,12 +116,17 @@ async fn handle_list(
             .into_iter()
             .map(|(client, metadata)| ProtoSdkClientEntry {
                 id: client.id,
-                pubkey: client.public_key,
+                pubkey: client.public_key.clone(),
                 info: Some(ProtoClientMetadata {
                     name: metadata.name,
                     description: metadata.description,
                     version: metadata.version,
                 }),
+                #[expect(
+                    clippy::cast_possible_truncation,
+                    clippy::as_conversions,
+                    reason = "fixme! #84"
+                )]
                 created_at: client.created_at.0.timestamp() as i32,
             })
             .collect(),
@@ -143,7 +147,7 @@ async fn handle_grant_wallet_access(
     actor: &ActorRef<UserAgentSession>,
     req: ProtoSdkClientGrantWalletAccess,
 ) -> Result<Option<UserAgentResponsePayload>, Status> {
-    let entries: Vec<NewEvmWalletAccess> = req.accesses.into_iter().map(|a| a.convert()).collect();
+    let entries: Vec<NewEvmWalletAccess> = req.accesses.into_iter().map(Convert::convert).collect();
     match actor.ask(HandleGrantEvmWalletAccess { entries }).await {
         Ok(()) => {
             info!("Successfully granted wallet access");
@@ -183,7 +187,7 @@ async fn handle_list_wallet_access(
     match actor.ask(HandleListWalletAccess {}).await {
         Ok(accesses) => Ok(Some(wrap_sdk_client_response(
             SdkClientResponsePayload::ListWalletAccess(ListWalletAccessResponse {
-                accesses: accesses.into_iter().map(|a| a.convert()).collect(),
+                accesses: accesses.into_iter().map(Convert::convert).collect(),
             }),
         ))),
         Err(err) => {
@@ -31,13 +31,13 @@ use crate::actors::{
     },
 };
 
-fn wrap_vault_response(payload: VaultResponsePayload) -> UserAgentResponsePayload {
+const fn wrap_vault_response(payload: VaultResponsePayload) -> UserAgentResponsePayload {
     UserAgentResponsePayload::Vault(proto_vault::Response {
         payload: Some(payload),
     })
 }
 
-fn wrap_unseal_response(payload: UnsealResponsePayload) -> UserAgentResponsePayload {
+const fn wrap_unseal_response(payload: UnsealResponsePayload) -> UserAgentResponsePayload {
     wrap_vault_response(VaultResponsePayload::Unseal(proto_unseal::Response {
         payload: Some(payload),
     }))
@@ -58,7 +58,7 @@ pub(super) async fn dispatch(
     };
 
     match payload {
-        VaultRequestPayload::QueryState(_) => handle_query_vault_state(actor).await,
+        VaultRequestPayload::QueryState(()) => handle_query_vault_state(actor).await,
         VaultRequestPayload::Unseal(req) => dispatch_unseal_request(actor, req).await,
         VaultRequestPayload::Bootstrap(req) => handle_bootstrap_request(actor, req).await,
     }
@@ -1,4 +1,3 @@
-#![forbid(unsafe_code)]
 use crate::context::ServerContext;
 
 pub mod actors;
@@ -7,7 +6,6 @@ pub mod crypto;
 pub mod db;
 pub mod evm;
 pub mod grpc;
-pub mod safe_cell;
 pub mod utils;
 
 pub struct Server {
@@ -15,7 +13,7 @@ pub struct Server {
 }
 
 impl Server {
-    pub fn new(context: ServerContext) -> Self {
+    pub const fn new(context: ServerContext) -> Self {
         Self { context }
     }
 }
@@ -1,3 +1,7 @@
+use arbiter_crypto::{
+    authn::{self, CLIENT_CONTEXT, format_challenge},
+    safecell::{SafeCell, SafeCellHandle as _},
+};
 use arbiter_proto::ClientMetadata;
 use arbiter_proto::transport::{Receiver, Sender};
 use arbiter_server::{
@@ -8,11 +12,10 @@ use arbiter_server::{
     },
     crypto::integrity,
     db::{self, schema},
-    safe_cell::{SafeCell, SafeCellHandle as _},
 };
 use diesel::{ExpressionMethods as _, NullableExpressionMethods as _, QueryDsl as _, insert_into};
 use diesel_async::RunQueryDsl;
-use ed25519_dalek::Signer as _;
+use ml_dsa::{KeyGen, MlDsa87, SigningKey, VerifyingKey, signature::Keypair};
 
 use super::common::ChannelTransport;
 
@@ -24,10 +27,14 @@ fn metadata(name: &str, description: Option<&str>, version: Option<&str>) -> Cli
     }
 }
 
+fn verifying_key(key: &SigningKey<MlDsa87>) -> VerifyingKey<MlDsa87> {
+    <SigningKey<MlDsa87> as Keypair>::verifying_key(key)
+}
+
 async fn insert_registered_client(
     db: &db::DatabasePool,
     actors: &GlobalActors,
-    pubkey: ed25519_dalek::VerifyingKey,
+    pubkey: VerifyingKey<MlDsa87>,
     metadata: &ClientMetadata,
 ) {
     use arbiter_server::db::schema::{client_metadata, program_client};
@@ -45,7 +52,7 @@ async fn insert_registered_client(
         .unwrap();
     let client_id: i32 = insert_into(program_client::table)
         .values((
-            program_client::public_key.eq(pubkey.to_bytes().to_vec()),
+            program_client::public_key.eq(pubkey.encode().0.to_vec()),
             program_client::metadata_id.eq(metadata_id),
         ))
         .returning(program_client::id)
@@ -56,18 +63,33 @@ async fn insert_registered_client(
     integrity::sign_entity(
         &mut conn,
         &actors.key_holder,
-        &ClientCredentials { pubkey, nonce: 1 },
+        &ClientCredentials {
+            pubkey: pubkey.into(),
+            nonce: 1,
+        },
         client_id,
     )
     .await
     .unwrap();
 }
 
+fn sign_client_challenge(
+    key: &SigningKey<MlDsa87>,
+    nonce: i32,
+    pubkey: &authn::PublicKey,
+) -> authn::Signature {
+    let challenge = format_challenge(nonce, &pubkey.to_bytes());
+    key.signing_key()
+        .sign_deterministic(&challenge, CLIENT_CONTEXT)
+        .unwrap()
+        .into()
+}
+
 async fn insert_bootstrap_sentinel_useragent(db: &db::DatabasePool) {
     let mut conn = db.get().await.unwrap();
-    let sentinel_key = ed25519_dalek::SigningKey::generate(&mut rand::rng())
-        .verifying_key()
-        .to_bytes()
+    let sentinel_key = verifying_key(&MlDsa87::key_gen(&mut rand::rng()))
+        .encode()
+        .0
         .to_vec();
 
     insert_into(schema::useragent_client::table)
@@ -96,7 +118,7 @@ async fn spawn_test_actors(db: &db::DatabasePool) -> GlobalActors {
 
 #[tokio::test]
 #[test_log::test]
-pub async fn test_unregistered_pubkey_rejected() {
+pub async fn unregistered_pubkey_rejected() {
     let db = db::create_test_pool().await;
 
     let (server_transport, mut test_transport) = ChannelTransport::new();
@@ -107,11 +129,11 @@ pub async fn test_unregistered_pubkey_rejected() {
         connect_client(props, &mut server_transport).await;
     });
 
-    let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
+    let new_key = MlDsa87::key_gen(&mut rand::rng());
 
     test_transport
         .send(auth::Inbound::AuthChallengeRequest {
-            pubkey: new_key.verifying_key(),
+            pubkey: verifying_key(&new_key).into(),
             metadata: metadata("client", Some("desc"), Some("1.0.0")),
         })
         .await
@@ -123,18 +145,18 @@ pub async fn test_unregistered_pubkey_rejected() {
 
 #[tokio::test]
 #[test_log::test]
-pub async fn test_challenge_auth() {
+pub async fn challenge_auth() {
     let db = db::create_test_pool().await;
     let actors = spawn_test_actors(&db).await;
 
-    let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
+    let new_key = MlDsa87::key_gen(&mut rand::rng());
 
-    insert_registered_client(
+    Box::pin(insert_registered_client(
         &db,
         &actors,
-        new_key.verifying_key(),
+        verifying_key(&new_key),
         &metadata("client", Some("desc"), Some("1.0.0")),
-    )
+    ))
     .await;
 
     let (server_transport, mut test_transport) = ChannelTransport::new();
@@ -147,7 +169,7 @@ pub async fn test_challenge_auth() {
     // Send challenge request
     test_transport
         .send(auth::Inbound::AuthChallengeRequest {
-            pubkey: new_key.verifying_key(),
+            pubkey: verifying_key(&new_key).into(),
             metadata: metadata("client", Some("desc"), Some("1.0.0")),
         })
         .await
@@ -161,14 +183,13 @@ pub async fn test_challenge_auth() {
     let challenge = match response {
         Ok(resp) => match resp {
             auth::Outbound::AuthChallenge { pubkey, nonce } => (pubkey, nonce),
-            other => panic!("Expected AuthChallenge, got {other:?}"),
+            other @ auth::Outbound::AuthSuccess => panic!("Expected AuthChallenge, got {other:?}"),
         },
         Err(err) => panic!("Expected Ok response, got Err({err:?})"),
     };
 
     // Sign the challenge and send solution
-    let formatted_challenge = arbiter_proto::format_challenge(challenge.1, challenge.0.as_bytes());
-    let signature = new_key.sign(&formatted_challenge);
+    let signature = sign_client_challenge(&new_key, challenge.1, &challenge.0);
 
     test_transport
         .send(auth::Inbound::AuthChallengeSolution { signature })
@@ -191,13 +212,19 @@ pub async fn test_challenge_auth() {
 
 #[tokio::test]
 #[test_log::test]
-pub async fn test_metadata_unchanged_does_not_append_history() {
+pub async fn metadata_unchanged_does_not_append_history() {
     let db = db::create_test_pool().await;
     let actors = spawn_test_actors(&db).await;
-    let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
+    let new_key = MlDsa87::key_gen(&mut rand::rng());
     let requested = metadata("client", Some("desc"), Some("1.0.0"));
 
-    insert_registered_client(&db, &actors, new_key.verifying_key(), &requested).await;
+    Box::pin(insert_registered_client(
+        &db,
+        &actors,
+        verifying_key(&new_key),
+        &requested,
+    ))
+    .await;
 
     let props = ClientConnection::new(db.clone(), actors);
 
@@ -209,7 +236,7 @@ pub async fn test_metadata_unchanged_does_not_append_history() {
 
     test_transport
         .send(auth::Inbound::AuthChallengeRequest {
-            pubkey: new_key.verifying_key(),
+            pubkey: verifying_key(&new_key).into(),
             metadata: requested,
         })
         .await
@@ -218,9 +245,9 @@ pub async fn test_metadata_unchanged_does_not_append_history() {
     let response = test_transport.recv().await.unwrap().unwrap();
     let (pubkey, nonce) = match response {
         auth::Outbound::AuthChallenge { pubkey, nonce } => (pubkey, nonce),
-        other => panic!("Expected AuthChallenge, got {other:?}"),
+        auth::Outbound::AuthSuccess => panic!("Expected AuthChallenge, got AuthSuccess"),
     };
-    let signature = new_key.sign(&arbiter_proto::format_challenge(nonce, pubkey.as_bytes()));
+    let signature = sign_client_challenge(&new_key, nonce, &pubkey);
     test_transport
         .send(auth::Inbound::AuthChallengeSolution { signature })
         .await
@@ -248,17 +275,17 @@ pub async fn test_metadata_unchanged_does_not_append_history() {
 
 #[tokio::test]
 #[test_log::test]
-pub async fn test_metadata_change_appends_history_and_repoints_binding() {
+pub async fn metadata_change_appends_history_and_repoints_binding() {
     let db = db::create_test_pool().await;
     let actors = spawn_test_actors(&db).await;
|
||||||
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
|
let new_key = MlDsa87::key_gen(&mut rand::rng());
|
||||||
|
|
||||||
insert_registered_client(
|
Box::pin(insert_registered_client(
|
||||||
&db,
|
&db,
|
||||||
&actors,
|
&actors,
|
||||||
new_key.verifying_key(),
|
verifying_key(&new_key),
|
||||||
&metadata("client", Some("old"), Some("1.0.0")),
|
&metadata("client", Some("old"), Some("1.0.0")),
|
||||||
)
|
))
|
||||||
.await;
|
.await;
|
||||||
|
|
||||||
let props = ClientConnection::new(db.clone(), actors);
|
let props = ClientConnection::new(db.clone(), actors);
|
||||||
@@ -271,7 +298,7 @@ pub async fn test_metadata_change_appends_history_and_repoints_binding() {
|
|||||||
|
|
||||||
test_transport
|
test_transport
|
||||||
.send(auth::Inbound::AuthChallengeRequest {
|
.send(auth::Inbound::AuthChallengeRequest {
|
||||||
pubkey: new_key.verifying_key(),
|
pubkey: verifying_key(&new_key).into(),
|
||||||
metadata: metadata("client", Some("new"), Some("2.0.0")),
|
metadata: metadata("client", Some("new"), Some("2.0.0")),
|
||||||
})
|
})
|
||||||
.await
|
.await
|
||||||
@@ -280,14 +307,14 @@ pub async fn test_metadata_change_appends_history_and_repoints_binding() {
|
|||||||
let response = test_transport.recv().await.unwrap().unwrap();
|
let response = test_transport.recv().await.unwrap().unwrap();
|
||||||
let (pubkey, nonce) = match response {
|
let (pubkey, nonce) = match response {
|
||||||
auth::Outbound::AuthChallenge { pubkey, nonce } => (pubkey, nonce),
|
auth::Outbound::AuthChallenge { pubkey, nonce } => (pubkey, nonce),
|
||||||
other => panic!("Expected AuthChallenge, got {other:?}"),
|
auth::Outbound::AuthSuccess => panic!("Expected AuthChallenge, got AuthSuccess"),
|
||||||
};
|
};
|
||||||
let signature = new_key.sign(&arbiter_proto::format_challenge(nonce, pubkey.as_bytes()));
|
let signature = sign_client_challenge(&new_key, nonce, &pubkey);
|
||||||
test_transport
|
test_transport
|
||||||
.send(auth::Inbound::AuthChallengeSolution { signature })
|
.send(auth::Inbound::AuthChallengeSolution { signature })
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
let _ = test_transport.recv().await.unwrap();
|
drop(test_transport.recv().await.unwrap());
|
||||||
task.await.unwrap();
|
task.await.unwrap();
|
||||||
|
|
||||||
{
|
{
|
||||||
@@ -335,11 +362,11 @@ pub async fn test_metadata_change_appends_history_and_repoints_binding() {
|
|||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
pub async fn test_challenge_auth_rejects_integrity_tag_mismatch() {
|
pub async fn challenge_auth_rejects_integrity_tag_mismatch() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let actors = spawn_test_actors(&db).await;
|
let actors = spawn_test_actors(&db).await;
|
||||||
|
|
||||||
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
|
let new_key = MlDsa87::key_gen(&mut rand::rng());
|
||||||
let requested = metadata("client", Some("desc"), Some("1.0.0"));
|
let requested = metadata("client", Some("desc"), Some("1.0.0"));
|
||||||
|
|
||||||
{
|
{
|
||||||
@@ -357,7 +384,7 @@ pub async fn test_challenge_auth_rejects_integrity_tag_mismatch() {
|
|||||||
.unwrap();
|
.unwrap();
|
||||||
insert_into(program_client::table)
|
insert_into(program_client::table)
|
||||||
.values((
|
.values((
|
||||||
program_client::public_key.eq(new_key.verifying_key().to_bytes().to_vec()),
|
program_client::public_key.eq(verifying_key(&new_key).encode().0.to_vec()),
|
||||||
program_client::metadata_id.eq(metadata_id),
|
program_client::metadata_id.eq(metadata_id),
|
||||||
))
|
))
|
||||||
.execute(&mut conn)
|
.execute(&mut conn)
|
||||||
@@ -374,7 +401,7 @@ pub async fn test_challenge_auth_rejects_integrity_tag_mismatch() {
|
|||||||
|
|
||||||
test_transport
|
test_transport
|
||||||
.send(auth::Inbound::AuthChallengeRequest {
|
.send(auth::Inbound::AuthChallengeRequest {
|
||||||
pubkey: new_key.verifying_key(),
|
pubkey: verifying_key(&new_key).into(),
|
||||||
metadata: requested,
|
metadata: requested,
|
||||||
})
|
})
|
||||||
.await
|
.await
|
||||||
@@ -384,7 +411,10 @@ pub async fn test_challenge_auth_rejects_integrity_tag_mismatch() {
|
|||||||
.recv()
|
.recv()
|
||||||
.await
|
.await
|
||||||
.expect("should receive auth rejection");
|
.expect("should receive auth rejection");
|
||||||
assert!(matches!(response, Err(auth::Error::IntegrityCheckFailed)));
|
assert!(matches!(
|
||||||
|
response,
|
||||||
|
Err(auth::ClientAuthError::IntegrityCheckFailed)
|
||||||
|
));
|
||||||
|
|
||||||
task.await.unwrap();
|
task.await.unwrap();
|
||||||
}
|
}
|
||||||
|
|||||||
```diff
@@ -1,15 +1,16 @@
+#![allow(dead_code, reason = "Common test utilities that may not be used in every test")]
+use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};
 use arbiter_proto::transport::{Bi, Error, Receiver, Sender};
 use arbiter_server::{
     actors::keyholder::KeyHolder,
     db::{self, schema},
-    safe_cell::{SafeCell, SafeCellHandle as _},
 };

 use async_trait::async_trait;
 use diesel::QueryDsl;
 use diesel_async::RunQueryDsl;
 use tokio::sync::mpsc;

-#[allow(dead_code)]
 pub async fn bootstrapped_keyholder(db: &db::DatabasePool) -> KeyHolder {
     let mut actor = KeyHolder::new(db.clone()).await.unwrap();
     actor
@@ -19,7 +20,6 @@ pub async fn bootstrapped_keyholder(db: &db::DatabasePool) -> KeyHolder {
     actor
 }

-#[allow(dead_code)]
 pub async fn root_key_history_id(db: &db::DatabasePool) -> i32 {
     let mut conn = db.get().await.unwrap();
     let id = schema::arbiter_settings::table
@@ -30,14 +30,12 @@ pub async fn root_key_history_id(db: &db::DatabasePool) -> i32 {
     id.expect("root_key_id should be set after bootstrap")
 }

-#[allow(dead_code)]
 pub struct ChannelTransport<T, Y> {
     receiver: mpsc::Receiver<T>,
     sender: mpsc::Sender<Y>,
 }

 impl<T, Y> ChannelTransport<T, Y> {
-    #[allow(dead_code)]
     pub fn new() -> (Self, ChannelTransport<Y, T>) {
         let (tx1, rx1) = mpsc::channel(10);
         let (tx2, rx2) = mpsc::channel(10);
```
|||||||
@@ -1,10 +1,11 @@
|
|||||||
use std::collections::{HashMap, HashSet};
|
use std::collections::{HashMap, HashSet};
|
||||||
|
|
||||||
|
use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};
|
||||||
use arbiter_server::{
|
use arbiter_server::{
|
||||||
actors::keyholder::{CreateNew, Error, KeyHolder},
|
actors::keyholder::{CreateNew, KeyHolder, KeyHolderError},
|
||||||
db::{self, models, schema},
|
db::{self, models, schema},
|
||||||
safe_cell::{SafeCell, SafeCellHandle as _},
|
|
||||||
};
|
};
|
||||||
|
|
||||||
use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::sql_query};
|
use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::sql_query};
|
||||||
use diesel_async::RunQueryDsl;
|
use diesel_async::RunQueryDsl;
|
||||||
use kameo::actor::{ActorRef, Spawn as _};
|
use kameo::actor::{ActorRef, Spawn as _};
|
||||||
@@ -121,7 +122,7 @@ async fn insert_failure_does_not_create_partial_row() {
|
|||||||
.create_new(SafeCell::new(b"should fail".to_vec()))
|
.create_new(SafeCell::new(b"should fail".to_vec()))
|
||||||
.await
|
.await
|
||||||
.unwrap_err();
|
.unwrap_err();
|
||||||
assert!(matches!(err, Error::DatabaseTransaction(_)));
|
assert!(matches!(err, KeyHolderError::DatabaseTransaction(_)));
|
||||||
|
|
||||||
let mut conn = db.get().await.unwrap();
|
let mut conn = db.get().await.unwrap();
|
||||||
sql_query("DROP TRIGGER fail_aead_insert;")
|
sql_query("DROP TRIGGER fail_aead_insert;")
|
||||||
|
|||||||
@@ -1,9 +1,10 @@
|
|||||||
|
use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};
|
||||||
use arbiter_server::{
|
use arbiter_server::{
|
||||||
actors::keyholder::{Error, KeyHolder},
|
actors::keyholder::{KeyHolder, KeyHolderError},
|
||||||
crypto::encryption::v1::{Nonce, ROOT_KEY_TAG},
|
crypto::encryption::v1::{Nonce, ROOT_KEY_TAG},
|
||||||
db::{self, models, schema},
|
db::{self, models, schema},
|
||||||
safe_cell::{SafeCell, SafeCellHandle as _},
|
|
||||||
};
|
};
|
||||||
|
|
||||||
use diesel::{QueryDsl, SelectableHelper};
|
use diesel::{QueryDsl, SelectableHelper};
|
||||||
use diesel_async::RunQueryDsl;
|
use diesel_async::RunQueryDsl;
|
||||||
|
|
||||||
@@ -11,7 +12,7 @@ use crate::common;
|
|||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_bootstrap() {
|
async fn bootstrap() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
|
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
|
||||||
|
|
||||||
@@ -34,18 +35,18 @@ async fn test_bootstrap() {
|
|||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_bootstrap_rejects_double() {
|
async fn bootstrap_rejects_double() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = common::bootstrapped_keyholder(&db).await;
|
let mut actor = common::bootstrapped_keyholder(&db).await;
|
||||||
|
|
||||||
let seal_key2 = SafeCell::new(b"test-seal-key".to_vec());
|
let seal_key2 = SafeCell::new(b"test-seal-key".to_vec());
|
||||||
let err = actor.bootstrap(seal_key2).await.unwrap_err();
|
let err = actor.bootstrap(seal_key2).await.unwrap_err();
|
||||||
assert!(matches!(err, Error::AlreadyBootstrapped));
|
assert!(matches!(err, KeyHolderError::AlreadyBootstrapped));
|
||||||
}
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_create_new_before_bootstrap_fails() {
|
async fn create_new_before_bootstrap_fails() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = KeyHolder::new(db).await.unwrap();
|
let mut actor = KeyHolder::new(db).await.unwrap();
|
||||||
|
|
||||||
@@ -53,34 +54,34 @@ async fn test_create_new_before_bootstrap_fails() {
|
|||||||
.create_new(SafeCell::new(b"data".to_vec()))
|
.create_new(SafeCell::new(b"data".to_vec()))
|
||||||
.await
|
.await
|
||||||
.unwrap_err();
|
.unwrap_err();
|
||||||
assert!(matches!(err, Error::NotBootstrapped));
|
assert!(matches!(err, KeyHolderError::NotBootstrapped));
|
||||||
}
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_decrypt_before_bootstrap_fails() {
|
async fn decrypt_before_bootstrap_fails() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = KeyHolder::new(db).await.unwrap();
|
let mut actor = KeyHolder::new(db).await.unwrap();
|
||||||
|
|
||||||
let err = actor.decrypt(1).await.unwrap_err();
|
let err = actor.decrypt(1).await.unwrap_err();
|
||||||
assert!(matches!(err, Error::NotBootstrapped));
|
assert!(matches!(err, KeyHolderError::NotBootstrapped));
|
||||||
}
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_new_restores_sealed_state() {
|
async fn new_restores_sealed_state() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let actor = common::bootstrapped_keyholder(&db).await;
|
let actor = common::bootstrapped_keyholder(&db).await;
|
||||||
drop(actor);
|
drop(actor);
|
||||||
|
|
||||||
let mut actor2 = KeyHolder::new(db).await.unwrap();
|
let mut actor2 = KeyHolder::new(db).await.unwrap();
|
||||||
let err = actor2.decrypt(1).await.unwrap_err();
|
let err = actor2.decrypt(1).await.unwrap_err();
|
||||||
assert!(matches!(err, Error::NotBootstrapped));
|
assert!(matches!(err, KeyHolderError::NotBootstrapped));
|
||||||
}
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_unseal_correct_password() {
|
async fn unseal_correct_password() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = common::bootstrapped_keyholder(&db).await;
|
let mut actor = common::bootstrapped_keyholder(&db).await;
|
||||||
|
|
||||||
@@ -101,7 +102,7 @@ async fn test_unseal_correct_password() {
|
|||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_unseal_wrong_then_correct_password() {
|
async fn unseal_wrong_then_correct_password() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = common::bootstrapped_keyholder(&db).await;
|
let mut actor = common::bootstrapped_keyholder(&db).await;
|
||||||
|
|
||||||
@@ -116,7 +117,7 @@ async fn test_unseal_wrong_then_correct_password() {
|
|||||||
|
|
||||||
let bad_key = SafeCell::new(b"wrong-password".to_vec());
|
let bad_key = SafeCell::new(b"wrong-password".to_vec());
|
||||||
let err = actor.try_unseal(bad_key).await.unwrap_err();
|
let err = actor.try_unseal(bad_key).await.unwrap_err();
|
||||||
assert!(matches!(err, Error::InvalidKey));
|
assert!(matches!(err, KeyHolderError::InvalidKey));
|
||||||
|
|
||||||
let good_key = SafeCell::new(b"test-seal-key".to_vec());
|
let good_key = SafeCell::new(b"test-seal-key".to_vec());
|
||||||
actor.try_unseal(good_key).await.unwrap();
|
actor.try_unseal(good_key).await.unwrap();
|
||||||
|
|||||||
@@ -1,11 +1,12 @@
|
|||||||
use std::collections::HashSet;
|
use std::collections::HashSet;
|
||||||
|
|
||||||
|
use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};
|
||||||
use arbiter_server::{
|
use arbiter_server::{
|
||||||
actors::keyholder::Error,
|
actors::keyholder::KeyHolderError,
|
||||||
crypto::encryption::v1::Nonce,
|
crypto::encryption::v1::Nonce,
|
||||||
db::{self, models, schema},
|
db::{self, models, schema},
|
||||||
safe_cell::{SafeCell, SafeCellHandle as _},
|
|
||||||
};
|
};
|
||||||
|
|
||||||
use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::update};
|
use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::update};
|
||||||
use diesel_async::RunQueryDsl;
|
use diesel_async::RunQueryDsl;
|
||||||
|
|
||||||
@@ -13,7 +14,7 @@ use crate::common;
|
|||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_create_decrypt_roundtrip() {
|
async fn create_decrypt_roundtrip() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = common::bootstrapped_keyholder(&db).await;
|
let mut actor = common::bootstrapped_keyholder(&db).await;
|
||||||
|
|
||||||
@@ -29,17 +30,17 @@ async fn test_create_decrypt_roundtrip() {
|
|||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_decrypt_nonexistent_returns_not_found() {
|
async fn decrypt_nonexistent_returns_not_found() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = common::bootstrapped_keyholder(&db).await;
|
let mut actor = common::bootstrapped_keyholder(&db).await;
|
||||||
|
|
||||||
let err = actor.decrypt(9999).await.unwrap_err();
|
let err = actor.decrypt(9999).await.unwrap_err();
|
||||||
assert!(matches!(err, Error::NotFound));
|
assert!(matches!(err, KeyHolderError::NotFound));
|
||||||
}
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_ciphertext_differs_across_entries() {
|
async fn ciphertext_differs_across_entries() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = common::bootstrapped_keyholder(&db).await;
|
let mut actor = common::bootstrapped_keyholder(&db).await;
|
||||||
|
|
||||||
@@ -77,7 +78,7 @@ async fn test_ciphertext_differs_across_entries() {
|
|||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
async fn test_nonce_never_reused() {
|
async fn nonce_never_reused() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = common::bootstrapped_keyholder(&db).await;
|
let mut actor = common::bootstrapped_keyholder(&db).await;
|
||||||
|
|
||||||
@@ -141,7 +142,7 @@ async fn broken_db_nonce_format_fails_closed() {
|
|||||||
.create_new(SafeCell::new(b"must fail".to_vec()))
|
.create_new(SafeCell::new(b"must fail".to_vec()))
|
||||||
.await
|
.await
|
||||||
.unwrap_err();
|
.unwrap_err();
|
||||||
assert!(matches!(err, Error::BrokenDatabase));
|
assert!(matches!(err, KeyHolderError::BrokenDatabase));
|
||||||
|
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let mut actor = common::bootstrapped_keyholder(&db).await;
|
let mut actor = common::bootstrapped_keyholder(&db).await;
|
||||||
@@ -158,5 +159,5 @@ async fn broken_db_nonce_format_fails_closed() {
|
|||||||
drop(conn);
|
drop(conn);
|
||||||
|
|
||||||
let err = actor.decrypt(id).await.unwrap_err();
|
let err = actor.decrypt(id).await.unwrap_err();
|
||||||
assert!(matches!(err, Error::BrokenDatabase));
|
assert!(matches!(err, KeyHolderError::BrokenDatabase));
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -1,24 +1,44 @@
|
|||||||
|
use arbiter_crypto::{
|
||||||
|
authn::{self, USERAGENT_CONTEXT, format_challenge},
|
||||||
|
safecell::{SafeCell, SafeCellHandle as _},
|
||||||
|
};
|
||||||
|
|
||||||
use arbiter_proto::transport::{Receiver, Sender};
|
use arbiter_proto::transport::{Receiver, Sender};
|
||||||
use arbiter_server::{
|
use arbiter_server::{
|
||||||
actors::{
|
actors::{
|
||||||
GlobalActors,
|
GlobalActors,
|
||||||
bootstrap::GetToken,
|
bootstrap::GetToken,
|
||||||
keyholder::Bootstrap,
|
keyholder::Bootstrap,
|
||||||
user_agent::{AuthPublicKey, UserAgentConnection, UserAgentCredentials, auth},
|
user_agent::{UserAgentConnection, UserAgentCredentials, auth},
|
||||||
},
|
},
|
||||||
crypto::integrity,
|
crypto::integrity,
|
||||||
db::{self, schema},
|
db::{self, schema},
|
||||||
safe_cell::{SafeCell, SafeCellHandle as _},
|
|
||||||
};
|
};
|
||||||
use diesel::{ExpressionMethods as _, QueryDsl, insert_into};
|
use diesel::{ExpressionMethods as _, QueryDsl, insert_into};
|
||||||
use diesel_async::RunQueryDsl;
|
use diesel_async::RunQueryDsl;
|
||||||
use ed25519_dalek::Signer as _;
|
use ml_dsa::{KeyGen, MlDsa87, SigningKey, VerifyingKey, signature::Keypair};
|
||||||
|
|
||||||
use super::common::ChannelTransport;
|
use super::common::ChannelTransport;
|
||||||
|
|
||||||
|
fn verifying_key(key: &SigningKey<MlDsa87>) -> VerifyingKey<MlDsa87> {
|
||||||
|
<SigningKey<MlDsa87> as Keypair>::verifying_key(key)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn sign_useragent_challenge(
|
||||||
|
key: &SigningKey<MlDsa87>,
|
||||||
|
nonce: i32,
|
||||||
|
pubkey_bytes: &[u8],
|
||||||
|
) -> authn::Signature {
|
||||||
|
let challenge = format_challenge(nonce, pubkey_bytes);
|
||||||
|
key.signing_key()
|
||||||
|
.sign_deterministic(&challenge, USERAGENT_CONTEXT)
|
||||||
|
.unwrap()
|
||||||
|
.into()
|
||||||
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
pub async fn test_bootstrap_token_auth() {
|
pub async fn bootstrap_token_auth() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
|
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
|
||||||
actors
|
actors
|
||||||
@@ -37,10 +57,10 @@ pub async fn test_bootstrap_token_auth() {
|
|||||||
auth::authenticate(&mut props, server_transport).await
|
auth::authenticate(&mut props, server_transport).await
|
||||||
});
|
});
|
||||||
|
|
||||||
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
|
let new_key = MlDsa87::key_gen(&mut rand::rng());
|
||||||
test_transport
|
test_transport
|
||||||
.send(auth::Inbound::AuthChallengeRequest {
|
.send(auth::Inbound::AuthChallengeRequest {
|
||||||
pubkey: AuthPublicKey::Ed25519(new_key.verifying_key()),
|
pubkey: verifying_key(&new_key).into(),
|
||||||
bootstrap_token: Some(token),
|
bootstrap_token: Some(token),
|
||||||
})
|
})
|
||||||
.await
|
.await
|
||||||
@@ -63,12 +83,12 @@ pub async fn test_bootstrap_token_auth() {
|
|||||||
.first::<Vec<u8>>(&mut conn)
|
.first::<Vec<u8>>(&mut conn)
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
assert_eq!(stored_pubkey, new_key.verifying_key().to_bytes().to_vec());
|
assert_eq!(stored_pubkey, verifying_key(&new_key).encode().0.to_vec());
|
||||||
}
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
pub async fn test_bootstrap_invalid_token_auth() {
|
pub async fn bootstrap_invalid_token_auth() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
|
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
|
||||||
|
|
||||||
@@ -79,11 +99,11 @@ pub async fn test_bootstrap_invalid_token_auth() {
|
|||||||
auth::authenticate(&mut props, server_transport).await
|
auth::authenticate(&mut props, server_transport).await
|
||||||
});
|
});
|
||||||
|
|
||||||
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
|
let new_key = MlDsa87::key_gen(&mut rand::rng());
|
||||||
test_transport
|
test_transport
|
||||||
.send(auth::Inbound::AuthChallengeRequest {
|
.send(auth::Inbound::AuthChallengeRequest {
|
||||||
pubkey: AuthPublicKey::Ed25519(new_key.verifying_key()),
|
pubkey: verifying_key(&new_key).into(),
|
||||||
bootstrap_token: Some("invalid_token".to_string()),
|
bootstrap_token: Some("invalid_token".to_owned()),
|
||||||
})
|
})
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
@@ -104,7 +124,7 @@ pub async fn test_bootstrap_invalid_token_auth() {
|
|||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
pub async fn test_challenge_auth() {
|
pub async fn challenge_auth() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
|
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
|
||||||
actors
|
actors
|
||||||
@@ -115,8 +135,8 @@ pub async fn test_challenge_auth() {
|
|||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
|
let new_key = MlDsa87::key_gen(&mut rand::rng());
|
||||||
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
|
let pubkey_bytes = authn::PublicKey::from(verifying_key(&new_key)).to_bytes();
|
||||||
|
|
||||||
{
|
{
|
||||||
let mut conn = db.get().await.unwrap();
|
let mut conn = db.get().await.unwrap();
|
||||||
@@ -133,7 +153,7 @@ pub async fn test_challenge_auth() {
|
|||||||
&mut conn,
|
&mut conn,
|
||||||
&actors.key_holder,
|
&actors.key_holder,
|
||||||
&UserAgentCredentials {
|
&UserAgentCredentials {
|
||||||
pubkey: AuthPublicKey::Ed25519(new_key.verifying_key()),
|
pubkey: verifying_key(&new_key).into(),
|
||||||
nonce: 1,
|
nonce: 1,
|
||||||
},
|
},
|
||||||
id,
|
id,
|
||||||
@@ -151,7 +171,7 @@ pub async fn test_challenge_auth() {
|
|||||||
|
|
||||||
test_transport
|
test_transport
|
||||||
.send(auth::Inbound::AuthChallengeRequest {
|
.send(auth::Inbound::AuthChallengeRequest {
|
||||||
pubkey: AuthPublicKey::Ed25519(new_key.verifying_key()),
|
pubkey: verifying_key(&new_key).into(),
|
||||||
bootstrap_token: None,
|
bootstrap_token: None,
|
||||||
})
|
})
|
||||||
.await
|
.await
|
||||||
@@ -164,17 +184,16 @@ pub async fn test_challenge_auth() {
|
|||||||
let challenge = match response {
|
let challenge = match response {
|
||||||
Ok(resp) => match resp {
|
Ok(resp) => match resp {
|
||||||
auth::Outbound::AuthChallenge { nonce } => nonce,
|
auth::Outbound::AuthChallenge { nonce } => nonce,
|
||||||
other => panic!("Expected AuthChallenge, got {other:?}"),
|
auth::Outbound::AuthSuccess => panic!("Expected AuthChallenge, got AuthSuccess"),
|
||||||
},
|
},
|
||||||
Err(err) => panic!("Expected Ok response, got Err({err:?})"),
|
Err(err) => panic!("Expected Ok response, got Err({err:?})"),
|
||||||
};
|
};
|
||||||
|
|
||||||
let formatted_challenge = arbiter_proto::format_challenge(challenge, &pubkey_bytes);
|
let signature = sign_useragent_challenge(&new_key, challenge, &pubkey_bytes);
|
||||||
let signature = new_key.sign(&formatted_challenge);
|
|
||||||
|
|
||||||
test_transport
|
test_transport
|
||||||
.send(auth::Inbound::AuthChallengeSolution {
|
.send(auth::Inbound::AuthChallengeSolution {
|
||||||
signature: signature.to_bytes().to_vec(),
|
signature: signature.to_bytes(),
|
||||||
})
|
})
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
@@ -193,7 +212,7 @@ pub async fn test_challenge_auth() {
|
|||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
#[test_log::test]
|
#[test_log::test]
|
||||||
pub async fn test_challenge_auth_rejects_integrity_tag_mismatch_when_unsealed() {
|
pub async fn challenge_auth_rejects_integrity_tag_mismatch_when_unsealed() {
|
||||||
let db = db::create_test_pool().await;
|
let db = db::create_test_pool().await;
|
||||||
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
|
let actors = GlobalActors::spawn(db.clone()).await.unwrap();
|
||||||
|
|
||||||
@@ -205,8 +224,8 @@ pub async fn test_challenge_auth_rejects_integrity_tag_mismatch_when_unsealed()
|
|||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
|
let new_key = MlDsa87::key_gen(&mut rand::rng());
|
||||||
let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
|
let pubkey_bytes = authn::PublicKey::from(verifying_key(&new_key)).to_bytes();
|
||||||
|
|
||||||
{
|
{
|
||||||
let mut conn = db.get().await.unwrap();
|
let mut conn = db.get().await.unwrap();
|
||||||
@@ -229,7 +248,7 @@ pub async fn test_challenge_auth_rejects_integrity_tag_mismatch_when_unsealed()
|
|||||||
|
|
||||||
test_transport
|
test_transport
|
||||||
.send(auth::Inbound::AuthChallengeRequest {
|
.send(auth::Inbound::AuthChallengeRequest {
|
||||||
pubkey: AuthPublicKey::Ed25519(new_key.verifying_key()),
|
pubkey: verifying_key(&new_key).into(),
|
||||||
bootstrap_token: None,
|
bootstrap_token: None,
|
||||||
})
|
})
|
||||||
.await
|
.await
|
||||||
```diff
@@ -243,7 +262,7 @@ pub async fn test_challenge_auth_rejects_integrity_tag_mismatch_when_unsealed()
 #[tokio::test]
 #[test_log::test]
-pub async fn test_challenge_auth_rejects_invalid_signature() {
+pub async fn challenge_auth_rejects_invalid_signature() {
     let db = db::create_test_pool().await;
     let actors = GlobalActors::spawn(db.clone()).await.unwrap();
     actors
@@ -254,8 +273,8 @@ pub async fn test_challenge_auth_rejects_invalid_signature() {
         .await
         .unwrap();

-    let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
-    let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
+    let new_key = MlDsa87::key_gen(&mut rand::rng());
+    let pubkey_bytes = authn::PublicKey::from(verifying_key(&new_key)).to_bytes();

     {
         let mut conn = db.get().await.unwrap();
@@ -272,7 +291,7 @@ pub async fn test_challenge_auth_rejects_invalid_signature() {
             &mut conn,
             &actors.key_holder,
             &UserAgentCredentials {
-                pubkey: AuthPublicKey::Ed25519(new_key.verifying_key()),
+                pubkey: verifying_key(&new_key).into(),
                 nonce: 1,
             },
             id,
@@ -290,7 +309,7 @@ pub async fn test_challenge_auth_rejects_invalid_signature() {

     test_transport
         .send(auth::Inbound::AuthChallengeRequest {
-            pubkey: AuthPublicKey::Ed25519(new_key.verifying_key()),
+            pubkey: verifying_key(&new_key).into(),
             bootstrap_token: None,
         })
         .await
@@ -303,17 +322,16 @@ pub async fn test_challenge_auth_rejects_invalid_signature() {
     let challenge = match response {
         Ok(resp) => match resp {
             auth::Outbound::AuthChallenge { nonce } => nonce,
-            other => panic!("Expected AuthChallenge, got {other:?}"),
+            auth::Outbound::AuthSuccess => panic!("Expected AuthChallenge, got AuthSuccess"),
         },
         Err(err) => panic!("Expected Ok response, got Err({err:?})"),
     };

-    let wrong_challenge = arbiter_proto::format_challenge(challenge + 1, &pubkey_bytes);
-    let signature = new_key.sign(&wrong_challenge);
+    let signature = sign_useragent_challenge(&new_key, challenge + 1, &pubkey_bytes);

     test_transport
         .send(auth::Inbound::AuthChallengeSolution {
-            signature: signature.to_bytes().to_vec(),
+            signature: signature.to_bytes(),
         })
         .await
         .unwrap();
```
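One change in the hunk above is lint-driven rather than crypto-related: the wildcard `other => panic!(...)` match arm is replaced with an explicit `AuthSuccess` arm. A minimal standalone sketch of why that matters, using hypothetical stand-ins for the repo's `auth::Outbound` enum (not the actual types):

```rust
// Illustration of replacing a wildcard match arm with explicit variants.
// `Outbound` here is a simplified stand-in, not the repo's real enum.
#[derive(Debug)]
enum Outbound {
    AuthChallenge { nonce: i32 },
    AuthSuccess,
}

fn expect_challenge(resp: Outbound) -> i32 {
    match resp {
        Outbound::AuthChallenge { nonce } => nonce,
        // Explicit instead of `other => ...`: if a new variant is added to
        // `Outbound`, this match stops compiling and must be revisited.
        Outbound::AuthSuccess => panic!("Expected AuthChallenge, got AuthSuccess"),
    }
}

fn main() {
    assert_eq!(expect_challenge(Outbound::AuthChallenge { nonce: 7 }), 7);
    println!("ok");
}
```

With a wildcard arm, adding a variant silently falls into the catch-all; exhaustive matching (enforceable via clippy's `wildcard_enum_match_arm` lint) turns that into a compile error instead.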
```diff
@@ -1,3 +1,4 @@
+use arbiter_crypto::safecell::{SafeCell, SafeCellHandle as _};
 use arbiter_server::{
     actors::{
         GlobalActors,
@@ -8,8 +9,8 @@ use arbiter_server::{
         },
     },
     db,
-    safe_cell::{SafeCell, SafeCellHandle as _},
 };

 use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
 use kameo::actor::Spawn as _;
 use x25519_dalek::{EphemeralSecret, PublicKey};
@@ -68,7 +69,7 @@ async fn client_dh_encrypt(

 #[tokio::test]
 #[test_log::test]
-pub async fn test_unseal_success() {
+pub async fn unseal_success() {
     let seal_key = b"test-seal-key";
     let (_db, user_agent) = setup_sealed_user_agent(seal_key).await;

@@ -80,7 +81,7 @@ pub async fn test_unseal_success() {

 #[tokio::test]
 #[test_log::test]
-pub async fn test_unseal_wrong_seal_key() {
+pub async fn unseal_wrong_seal_key() {
     let (_db, user_agent) = setup_sealed_user_agent(b"correct-key").await;

     let encrypted_key = client_dh_encrypt(&user_agent, b"wrong-key").await;
@@ -96,7 +97,7 @@ pub async fn test_unseal_wrong_seal_key() {

 #[tokio::test]
 #[test_log::test]
-pub async fn test_unseal_corrupted_ciphertext() {
+pub async fn unseal_corrupted_ciphertext() {
     let (_db, user_agent) = setup_sealed_user_agent(b"test-key").await;

     let client_secret = EphemeralSecret::random();
@@ -127,7 +128,7 @@ pub async fn test_unseal_corrupted_ciphertext() {

 #[tokio::test]
 #[test_log::test]
-pub async fn test_unseal_retry_after_invalid_key() {
+pub async fn unseal_retry_after_invalid_key() {
     let seal_key = b"real-seal-key";
     let (_db, user_agent) = setup_sealed_user_agent(seal_key).await;

```