Compare commits

15 commits (`push-lspny...20ac84b60c`)
Commits (author and date not captured in this view):

- 20ac84b60c
- 8f6dda871b
- 47108ed8ad
- 359df73c2e
- ce03b7e15d
- e4038d9188
- c82339d764
- c5b51f4b70
- 6b8f8c9ff7
- 8263bc6b6f
- a6c849f268
- d8d65da0b4
- abdf4e3893
- 4bac70a6e9
- 54a41743be
`.gitignore` (4 changes, vendored)

```diff
@@ -1,3 +1 @@
 target/
-scripts/__pycache__/
-.DS_Store
```
```diff
@@ -8,7 +8,7 @@ when:
   include: ['.woodpecker/server-*.yaml', 'server/**']
 
 steps:
-  - name: audit
+  - name: test
     image: jdxcode/mise:latest
     directory: server
     environment:
```
```diff
@@ -8,7 +8,7 @@ when:
   include: ['.woodpecker/server-*.yaml', 'server/**']
 
 steps:
-  - name: lint
+  - name: test
     image: jdxcode/mise:latest
     directory: server
     environment:
@@ -21,5 +21,4 @@ steps:
     commands:
       - apt-get update && apt-get install -y pkg-config
       - mise install rust
-      - mise install protoc
       - mise exec rust -- cargo clippy --all-targets --all-features -- -D warnings
```
```diff
@@ -8,7 +8,7 @@ when:
   include: ['.woodpecker/server-*.yaml', 'server/**']
 
 steps:
-  - name: vet
+  - name: test
     image: jdxcode/mise:latest
     directory: server
     environment:
```
```diff
@@ -3,6 +3,7 @@
 Arbiter is a permissioned signing service for cryptocurrency wallets. It runs as a background service on the user's machine with an optional client application for vault management.
 
 **Core principle:** The vault NEVER exposes key material. It only produces signatures when a request satisfies the configured policies.
+
 ---
 
 ## 1. Peer Types
```
```diff
@@ -27,82 +27,6 @@ This document covers concrete technology choices and dependencies. For the archi
 
 ---
 
-## EVM Policy Engine
-
-### Overview
-
-The EVM engine classifies incoming transactions, enforces grant constraints, and records executions. It is the sole path through which a wallet key is used for signing.
-
-The central abstraction is the `Policy` trait. Each implementation handles one semantic transaction category and owns its own database tables for grant storage and transaction logging.
-
-### Transaction Evaluation Flow
-
-`Engine::evaluate_transaction` runs the following steps in order:
-
-1. **Classify** — Each registered policy's `analyze(context)` inspects the transaction fields (`chain`, `to`, `value`, `calldata`). The first one returning `Some(meaning)` wins. If none match, the transaction is rejected as `UnsupportedTransactionType`.
-2. **Find grant** — `Policy::try_find_grant` queries for a non-revoked grant covering this wallet, client, chain, and target address.
-3. **Check shared constraints** — `check_shared_constraints` runs in the engine before any policy-specific logic. It enforces the validity window, gas fee caps, and transaction count rate limit (see below).
-4. **Evaluate** — `Policy::evaluate` checks the decoded meaning against the grant's policy-specific constraints and returns any violations.
-5. **Record** — If `RunKind::Execution` and there are no violations, the engine writes to `evm_transaction_log` and calls `Policy::record_transaction` for any policy-specific logging (e.g., token transfer volume).
-
-### Policy Trait
-
-| Method | Purpose |
-|---|---|
-| `analyze` | Pure — classifies a transaction into a typed `Meaning`, or `None` if this policy doesn't apply |
-| `evaluate` | Checks the `Meaning` against a `Grant`; returns a list of `EvalViolation`s |
-| `create_grant` | Inserts policy-specific rows; returns the specific grant ID |
-| `try_find_grant` | Finds a matching non-revoked grant for the given `EvalContext` |
-| `find_all_grants` | Returns all non-revoked grants (used for listing) |
-| `record_transaction` | Persists policy-specific data after execution |
-
-`analyze` and `evaluate` are intentionally separate: classification is pure and cheap, while evaluation may involve DB queries (e.g., fetching past transfer volume).
-
-### Registered Policies
-
-**EtherTransfer** — plain ETH transfers (empty calldata)
-
-- Grant requires: allowlist of recipient addresses + one volumetric rate limit (max ETH over a time window)
-- Violations: recipient not in allowlist, cumulative ETH volume exceeded
-
-**TokenTransfer** — ERC-20 `transfer(address,uint256)` calls
-
-- Recognised by ABI-decoding the `transfer(address,uint256)` selector against a static registry of known token contracts (`arbiter_tokens_registry`)
-- Grant requires: token contract address, optional recipient restriction, zero or more volumetric rate limits
-- Violations: recipient mismatch, any volumetric limit exceeded
-
-### Grant Model
-
-Every grant has two layers:
-
-- **Shared (`evm_basic_grant`)** — wallet, chain, validity period, gas fee caps, transaction count rate limit. One row per grant regardless of type.
-- **Specific** — policy-owned tables (`evm_ether_transfer_grant`, `evm_token_transfer_grant`, etc.) holding type-specific configuration.
-
-`find_all_grants` uses a `#[diesel::auto_type]` base join between the specific and shared tables, then batch-loads related rows (targets, volume limits) in two additional queries to avoid N+1.
-
-The engine exposes `list_all_grants` which collects across all policy types into `Vec<Grant<SpecificGrant>>` via a blanket `From<Grant<S>> for Grant<SpecificGrant>` conversion.
-
-### Shared Constraints (enforced by the engine)
-
-These are checked centrally in `check_shared_constraints` before policy evaluation:
-
-| Constraint | Fields | Behaviour |
-|---|---|---|
-| Validity window | `valid_from`, `valid_until` | Emits `InvalidTime` if current time is outside the range |
-| Gas fee cap | `max_gas_fee_per_gas`, `max_priority_fee_per_gas` | Emits `GasLimitExceeded` if either cap is breached |
-| Tx count rate limit | `rate_limit` (`count` + `window`) | Counts rows in `evm_transaction_log` within the window; emits `RateLimitExceeded` if at or above the limit |
-
----
-
-### Known Limitations
-
-- **Only EIP-1559 transactions are supported.** Legacy and EIP-2930 types are rejected outright.
-- **No opaque-calldata (unknown contract) grant type.** The architecture describes a category for unrecognised contracts, but no policy implements it yet. Any transaction that is not a plain ETH transfer or a known ERC-20 transfer is unconditionally rejected.
-- **Token registry is static.** Tokens are recognised only if they appear in the hard-coded `arbiter_tokens_registry` crate. There is no mechanism to register additional contracts at runtime.
-- **Nonce management is not implemented.** The architecture lists nonce deduplication as a core responsibility, but no nonce tracking or enforcement exists yet.
-
----
-
 ## Memory Protection
 
 The unsealed root key must be held in a hardened memory cell resistant to dumps, page swaps, and hibernation.
```
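The `Policy` trait deleted in the hunk above can be sketched in Rust. This is a hypothetical reconstruction from the method table and the EtherTransfer description only — all names, signatures, and the in-memory `EtherGrant` (standing in for the real DB-backed grant tables) are assumptions, not the actual Arbiter API:

```rust
/// Violations a policy or the engine can emit (subset, assumed names).
#[derive(Debug, PartialEq)]
pub enum EvalViolation {
    RecipientNotAllowed,
    VolumeExceeded,
}

/// Transaction fields inspected during classification.
pub struct EvalContext {
    pub to: String,
    pub value: u128, // wei
    pub calldata: Vec<u8>,
}

/// In-memory stand-in for `evm_ether_transfer_grant` + its volume limit.
pub struct EtherGrant {
    pub allowlist: Vec<String>,
    pub max_volume: u128, // max wei over the window
    pub spent: u128,      // volume already recorded in the window
}

/// One semantic transaction category: `analyze` is pure classification,
/// `evaluate` checks the decoded meaning against a grant's constraints.
pub trait Policy {
    type Meaning;
    type Grant;
    fn analyze(&self, ctx: &EvalContext) -> Option<Self::Meaning>;
    fn evaluate(&self, meaning: &Self::Meaning, grant: &Self::Grant) -> Vec<EvalViolation>;
}

pub struct EtherTransfer;
pub struct EtherMeaning {
    pub to: String,
    pub value: u128,
}

impl Policy for EtherTransfer {
    type Meaning = EtherMeaning;
    type Grant = EtherGrant;

    // Plain ETH transfer: empty calldata. Anything else -> None, so the
    // engine falls through to the next registered policy (first match wins).
    fn analyze(&self, ctx: &EvalContext) -> Option<EtherMeaning> {
        ctx.calldata.is_empty().then(|| EtherMeaning {
            to: ctx.to.clone(),
            value: ctx.value,
        })
    }

    fn evaluate(&self, m: &EtherMeaning, g: &EtherGrant) -> Vec<EvalViolation> {
        let mut violations = Vec::new();
        if !g.allowlist.contains(&m.to) {
            violations.push(EvalViolation::RecipientNotAllowed);
        }
        if g.spent + m.value > g.max_volume {
            violations.push(EvalViolation::VolumeExceeded);
        }
        violations
    }
}
```

The split mirrors the deleted rationale: `analyze` stays pure and cheap, while a real `evaluate` would query the database for the cumulative volume that `spent` fakes here.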
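The "hardened memory cell" the Memory Protection section calls for can be approximated with a zero-on-drop wrapper. A minimal sketch under stated assumptions — it only guarantees zeroization on drop; a real cell would additionally `mlock(2)` its pages against swap and handle hibernation images, which plain std cannot do:

```rust
/// Holds a secret and zeroes it when dropped, so the key does not
/// linger in freed heap memory. Not a substitute for page locking.
pub struct SecretCell {
    buf: Vec<u8>,
}

impl SecretCell {
    pub fn new(bytes: Vec<u8>) -> Self {
        SecretCell { buf: bytes }
    }

    /// Borrow the secret; callers should never copy it out.
    pub fn expose(&self) -> &[u8] {
        &self.buf
    }
}

impl Drop for SecretCell {
    fn drop(&mut self) {
        // Volatile writes so the compiler cannot elide the zeroization
        // as a dead store on the about-to-be-freed buffer.
        for b in self.buf.iter_mut() {
            unsafe { std::ptr::write_volatile(b as *mut u8, 0) };
        }
    }
}
```

In practice a crate like `zeroize` (plus `memsec`/`libc` for `mlock`) covers this more robustly than a hand-rolled drop impl.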
`LICENSE` (190 deletions)

`@@ -1,190 +0,0 @@` — the removed file was the verbatim Apache License, Version 2.0 text (January 2004, http://www.apache.org/licenses/): the full TERMS AND CONDITIONS, Sections 1–9, "END OF TERMS AND CONDITIONS", and the closing notice "Copyright 2026 MarketTakers" with the standard "Licensed under the Apache License, Version 2.0" boilerplate.
`README.md` (13 deletions)

```diff
@@ -1,13 +0,0 @@
-# Arbiter
-> Policy-first multi-client wallet daemon, allowing permissioned transactions across blockchains
-
-## Security warning
-Arbiter can't meaningfully protect against host compromise. Potential attack flow:
-- Attacker steals TLS keys from database
-- Pretends to be server; just accepts user agent challenge solutions
-- Pretend to be in sealed state and performing DH with client
-- Steals user password and derives seal key
-
-While this attack is highly targetive, it's still possible.
-
-> This software is experimental. Do not use with funds you cannot afford to lose.
```
```diff
@@ -1,31 +0,0 @@
-Extension Discovery Cache
-=========================
-
-This folder is used by `package:extension_discovery` to cache lists of
-packages that contains extensions for other packages.
-
-DO NOT USE THIS FOLDER
－----------------------
-
-* Do not read (or rely) the contents of this folder.
-* Do write to this folder.
-
-If you're interested in the lists of extensions stored in this folder use the
-API offered by package `extension_discovery` to get this information.
-
-If this package doesn't work for your use-case, then don't try to read the
-contents of this folder. It may change, and will not remain stable.
-
-Use package `extension_discovery`
-­--------------------------------
-
-If you want to access information from this folder.
-
-Feel free to delete this folder
-­------------------------------
-
-Files in this folder act as a cache, and the cache is discarded if the files
-are older than the modification time of `.dart_tool/package_config.json`.
-
-Hence, it should never be necessary to clear this cache manually, if you find a
-need to do please file a bug.
```
```diff
@@ -1 +0,0 @@
-{"version":2,"entries":[{"package":"app","rootUri":"../","packageUri":"lib/"}]}
```
`@@ -1,178 +0,0 @@` — deleted: a generated Dart `package_config.json` (`"configVersion": 2`, generator `pub` 3.10.8, Flutter 3.38.9 under a mise install, pub cache at `file:///Users/kaska/.pub-cache`), resolving the local `app` package (language version 3.10) plus pub.dev packages: async 2.13.0, boolean_selector 2.1.2, characters 1.4.0, clock 1.1.2, collection 1.19.1, cupertino_icons 1.0.8, fake_async 1.3.3, flutter, flutter_lints 6.0.0, flutter_test, leak_tracker 11.0.2, leak_tracker_flutter_testing 3.0.10, leak_tracker_testing 3.0.2, lints 6.1.0, matcher 0.12.17, material_color_utilities 0.11.1, meta 1.17.0, path 1.9.1, sky_engine, source_span 1.10.2, stack_trace 1.12.1, stream_channel 2.1.4, string_scanner 1.4.1, term_glyph 1.2.2, test_api 0.7.7, vector_math 2.2.0, vm_service 15.0.2.
`@@ -1,230 +0,0 @@` — deleted: a generated Dart package-graph JSON (`"roots": ["app"]`) listing each resolved package with its version and dependency edges: app 1.0.0+1 (deps: cupertino_icons, flutter; dev deps: flutter_lints, flutter_test), flutter_lints 6.0.0, flutter_test, flutter, cupertino_icons 1.0.8, lints 6.1.0, stream_channel 2.1.4, meta 1.17.0, collection 1.19.1, leak_tracker_flutter_testing 3.0.10, vector_math 2.2.0, stack_trace 1.12.1, clock 1.1.2, fake_async 1.3.3, path 1.9.1, matcher 0.12.17, test_api 0.7.7, sky_engine, material_color_utilities 0.11.1, characters 1.4.0, async 2.13.0, leak_tracker_testing 3.0.2, leak_tracker 11.0.2, … (the listing is cut off mid-file in this view).
"clock",
|
|
||||||
"collection",
|
|
||||||
"meta",
|
|
||||||
"path",
|
|
||||||
"vm_service"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "term_glyph",
|
|
||||||
"version": "1.2.2",
|
|
||||||
"dependencies": []
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "string_scanner",
|
|
||||||
"version": "1.4.1",
|
|
||||||
"dependencies": [
|
|
||||||
"source_span"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "source_span",
|
|
||||||
"version": "1.10.2",
|
|
||||||
"dependencies": [
|
|
||||||
"collection",
|
|
||||||
"path",
|
|
||||||
"term_glyph"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "boolean_selector",
|
|
||||||
"version": "2.1.2",
|
|
||||||
"dependencies": [
|
|
||||||
"source_span",
|
|
||||||
"string_scanner"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "vm_service",
|
|
||||||
"version": "15.0.2",
|
|
||||||
"dependencies": []
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"configVersion": 1
|
|
||||||
}
|
|
||||||
@@ -1 +0,0 @@
-3.38.9

useragent/.gitignore → app/.gitignore (renamed, 0 changes; vendored)
@@ -1,11 +0,0 @@
-// This is a generated file; do not edit or check into version control.
-FLUTTER_ROOT=/Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable
-FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/app
-COCOAPODS_PARALLEL_CODE_SIGN=true
-FLUTTER_BUILD_DIR=build
-FLUTTER_BUILD_NAME=1.0.0
-FLUTTER_BUILD_NUMBER=1
-DART_OBFUSCATION=false
-TRACK_WIDGET_CREATION=true
-TREE_SHAKE_ICONS=false
-PACKAGE_CONFIG=.dart_tool/package_config.json
@@ -1,12 +0,0 @@
-#!/bin/sh
-# This is a generated file; do not edit or check into version control.
-export "FLUTTER_ROOT=/Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable"
-export "FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/app"
-export "COCOAPODS_PARALLEL_CODE_SIGN=true"
-export "FLUTTER_BUILD_DIR=build"
-export "FLUTTER_BUILD_NAME=1.0.0"
-export "FLUTTER_BUILD_NUMBER=1"
-export "DART_OBFUSCATION=false"
-export "TRACK_WIDGET_CREATION=true"
-export "TREE_SHAKE_ICONS=false"
-export "PACKAGE_CONFIG=.dart_tool/package_config.json"
Binary image assets renamed (content unchanged): 101 KiB, 5.5 KiB, 520 B, 14 KiB, 1.0 KiB, 36 KiB, 2.2 KiB
@@ -41,6 +41,14 @@ packages:
       url: "https://pub.dev"
     source: hosted
     version: "1.19.1"
+  cupertino_icons:
+    dependency: "direct main"
+    description:
+      name: cupertino_icons
+      sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
+      url: "https://pub.dev"
+    source: hosted
+    version: "1.0.8"
   fake_async:
     dependency: transitive
     description:
app/pubspec.yaml (new file, 89 lines)
@@ -0,0 +1,89 @@
+name: app
+description: "A new Flutter project."
+# The following line prevents the package from being accidentally published to
+# pub.dev using `flutter pub publish`. This is preferred for private packages.
+publish_to: 'none' # Remove this line if you wish to publish to pub.dev
+
+# The following defines the version and build number for your application.
+# A version number is three numbers separated by dots, like 1.2.43
+# followed by an optional build number separated by a +.
+# Both the version and the builder number may be overridden in flutter
+# build by specifying --build-name and --build-number, respectively.
+# In Android, build-name is used as versionName while build-number used as versionCode.
+# Read more about Android versioning at https://developer.android.com/studio/publish/versioning
+# In iOS, build-name is used as CFBundleShortVersionString while build-number is used as CFBundleVersion.
+# Read more about iOS versioning at
+# https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html
+# In Windows, build-name is used as the major, minor, and patch parts
+# of the product and file versions while build-number is used as the build suffix.
+version: 1.0.0+1
+
+environment:
+  sdk: ^3.10.8
+
+# Dependencies specify other packages that your package needs in order to work.
+# To automatically upgrade your package dependencies to the latest versions
+# consider running `flutter pub upgrade --major-versions`. Alternatively,
+# dependencies can be manually updated by changing the version numbers below to
+# the latest version available on pub.dev. To see which dependencies have newer
+# versions available, run `flutter pub outdated`.
+dependencies:
+  flutter:
+    sdk: flutter
+
+  # The following adds the Cupertino Icons font to your application.
+  # Use with the CupertinoIcons class for iOS style icons.
+  cupertino_icons: ^1.0.8
+
+dev_dependencies:
+  flutter_test:
+    sdk: flutter
+
+  # The "flutter_lints" package below contains a set of recommended lints to
+  # encourage good coding practices. The lint set provided by the package is
+  # activated in the `analysis_options.yaml` file located at the root of your
+  # package. See that file for information about deactivating specific lint
+  # rules and activating additional ones.
+  flutter_lints: ^6.0.0
+
+# For information on the generic Dart part of this file, see the
+# following page: https://dart.dev/tools/pub/pubspec
+
+# The following section is specific to Flutter packages.
+flutter:
+
+  # The following line ensures that the Material Icons font is
+  # included with your application, so that you can use the icons in
+  # the material Icons class.
+  uses-material-design: true
+
+  # To add assets to your application, add an assets section, like this:
+  # assets:
+  #   - images/a_dot_burr.jpeg
+  #   - images/a_dot_ham.jpeg
+
+  # An image asset can refer to one or more resolution-specific "variants", see
+  # https://flutter.dev/to/resolution-aware-images
+
+  # For details regarding adding assets from package dependencies, see
+  # https://flutter.dev/to/asset-from-package
+
+  # To add custom fonts to your application, add a fonts section here,
+  # in this "flutter" section. Each entry in this list should have a
+  # "family" key with the font family name, and a "fonts" key with a
+  # list giving the asset and other descriptors for the font. For
+  # example:
+  # fonts:
+  #   - family: Schyler
+  #     fonts:
+  #       - asset: fonts/Schyler-Regular.ttf
+  #       - asset: fonts/Schyler-Italic.ttf
+  #         style: italic
+  #   - family: Trajan Pro
+  #     fonts:
+  #       - asset: fonts/TrajanPro.ttf
+  #       - asset: fonts/TrajanPro_Bold.ttf
+  #         weight: 700
+  #
+  # For details regarding fonts from package dependencies,
+  # see https://flutter.dev/to/font-from-package
Binary image asset renamed (content unchanged): 33 KiB
@@ -55,15 +55,6 @@ backend = "aqua:protocolbuffers/protobuf/protoc"
 "platforms.macos-x64" = { checksum = "sha256:312f04713946921cc0187ef34df80241ddca1bab6f564c636885fd2cc90d3f88", url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-osx-x86_64.zip"}
 "platforms.windows-x64" = { checksum = "sha256:1ebd7c87baffb9f1c47169b640872bf5fb1e4408079c691af527be9561d8f6f7", url = "https://github.com/protocolbuffers/protobuf/releases/download/v29.6/protoc-29.6-win64.zip"}
-
-[[tools.python]]
-version = "3.14.3"
-backend = "core:python"
-"platforms.linux-arm64" = { checksum = "sha256:be0f4dc2932f762292b27d46ea7d3e8e66ddf3969a5eb0254a229015ed402625", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-aarch64-unknown-linux-gnu-install_only_stripped.tar.gz"}
-"platforms.linux-x64" = { checksum = "sha256:0a73413f89efd417871876c9accaab28a9d1e3cd6358fbfff171a38ec99302f0", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-unknown-linux-gnu-install_only_stripped.tar.gz"}
-"platforms.macos-arm64" = { checksum = "sha256:4703cdf18b26798fde7b49b6b66149674c25f97127be6a10dbcf29309bdcdcdb", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-aarch64-apple-darwin-install_only_stripped.tar.gz"}
-"platforms.macos-x64" = { checksum = "sha256:76f1cc26e3d262eae8ca546a93e8bded10cf0323613f7e246fea2e10a8115eb7", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-apple-darwin-install_only_stripped.tar.gz"}
-"platforms.windows-x64" = { checksum = "sha256:950c5f21a015c1bdd1337f233456df2470fab71e4d794407d27a84cb8b9909a0", url = "https://github.com/astral-sh/python-build-standalone/releases/download/20260303/cpython-3.14.3+20260303-x86_64-pc-windows-msvc-install_only_stripped.tar.gz"}
 
 [[tools.rust]]
 version = "1.93.0"
 backend = "core:rust"

@@ -9,4 +9,3 @@ protoc = "29.6"
 "cargo:cargo-nextest" = "0.9.126"
 "cargo:cargo-shear" = "latest"
 "cargo:cargo-insta" = "1.46.3"
-python = "3.14.3"
@@ -2,6 +2,7 @@ syntax = "proto3";
 
 package arbiter;
 
+import "auth.proto";
 import "client.proto";
 import "user_agent.proto";
 
@@ -11,6 +12,6 @@ message ServerInfo {
 }
 
 service ArbiterService {
-  rpc Client(stream arbiter.client.ClientRequest) returns (stream arbiter.client.ClientResponse);
-  rpc UserAgent(stream arbiter.user_agent.UserAgentRequest) returns (stream arbiter.user_agent.UserAgentResponse);
+  rpc Client(stream ClientRequest) returns (stream ClientResponse);
+  rpc UserAgent(stream UserAgentRequest) returns (stream UserAgentResponse);
 }
protobufs/auth.proto (new file, 35 lines)
@@ -0,0 +1,35 @@
+syntax = "proto3";
+
+package arbiter.auth;
+
+import "google/protobuf/timestamp.proto";
+
+message AuthChallengeRequest {
+  bytes pubkey = 1;
+  optional string bootstrap_token = 2;
+}
+
+message AuthChallenge {
+  bytes pubkey = 1;
+  int32 nonce = 2;
+}
+
+message AuthChallengeSolution {
+  bytes signature = 1;
+}
+
+message AuthOk {}
+
+message ClientMessage {
+  oneof payload {
+    AuthChallengeRequest auth_challenge_request = 1;
+    AuthChallengeSolution auth_challenge_solution = 2;
+  }
+}
+
+message ServerMessage {
+  oneof payload {
+    AuthChallenge auth_challenge = 1;
+    AuthOk auth_ok = 2;
+  }
+}
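The new auth.proto describes a four-step challenge–response handshake: the client presents its pubkey (`AuthChallengeRequest`), the server issues a nonce (`AuthChallenge`), the client returns a signature over it (`AuthChallengeSolution`), and the server replies with `AuthOk`. A minimal Python sketch of the server-side flow; the HMAC keyed by a pre-shared secret is a stand-in for whatever signature scheme the server actually verifies against the registered pubkey (the real protocol presumably uses asymmetric signatures):

```python
import hashlib
import hmac
import secrets


class AuthServer:
    """Walks the auth.proto flow: ChallengeRequest -> Challenge -> Solution -> AuthOk."""

    def __init__(self, registered_keys):
        # pubkey -> shared secret; a stand-in for the real verification key material.
        self.registered_keys = registered_keys
        self.pending = {}  # pubkey -> outstanding nonce

    def handle_challenge_request(self, pubkey: bytes) -> dict:
        if pubkey not in self.registered_keys:
            raise PermissionError("unknown pubkey")
        nonce = secrets.randbits(31)  # the proto declares an int32 nonce
        self.pending[pubkey] = nonce
        return {"pubkey": pubkey, "nonce": nonce}  # AuthChallenge

    def handle_solution(self, pubkey: bytes, signature: bytes) -> bool:
        nonce = self.pending.pop(pubkey)
        expected = hmac.new(self.registered_keys[pubkey],
                            nonce.to_bytes(4, "big"), hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)  # True means AuthOk


# Client side of the same exchange.
secret = b"client-secret"
server = AuthServer({b"client-pubkey": secret})
challenge = server.handle_challenge_request(b"client-pubkey")
sig = hmac.new(secret, challenge["nonce"].to_bytes(4, "big"), hashlib.sha256).digest()
assert server.handle_solution(b"client-pubkey", sig)
```

Because the nonce is popped on use, a captured `AuthChallengeSolution` cannot be replayed against a later challenge.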
@@ -1,38 +1,17 @@
 syntax = "proto3";
 
-package arbiter.client;
+package arbiter;
 
-import "evm.proto";
+import "auth.proto";
 
-message AuthChallengeRequest {
-  bytes pubkey = 1;
-}
-
-message AuthChallenge {
-  bytes pubkey = 1;
-  int32 nonce = 2;
-}
-
-message AuthChallengeSolution {
-  bytes signature = 1;
-}
-
-message AuthOk {}
-
 message ClientRequest {
   oneof payload {
-    AuthChallengeRequest auth_challenge_request = 1;
-    AuthChallengeSolution auth_challenge_solution = 2;
-    arbiter.evm.EvmSignTransactionRequest evm_sign_transaction = 3;
-    arbiter.evm.EvmAnalyzeTransactionRequest evm_analyze_transaction = 4;
+    arbiter.auth.ClientMessage auth_message = 1;
   }
 }
 
 message ClientResponse {
   oneof payload {
-    AuthChallenge auth_challenge = 1;
-    AuthOk auth_ok = 2;
-    arbiter.evm.EvmSignTransactionResponse evm_sign_transaction = 3;
-    arbiter.evm.EvmAnalyzeTransactionResponse evm_analyze_transaction = 4;
+    arbiter.auth.ServerMessage auth_message = 1;
  }
 }
@@ -1,216 +0,0 @@
-syntax = "proto3";
-
-package arbiter.evm;
-
-import "google/protobuf/empty.proto";
-import "google/protobuf/timestamp.proto";
-
-enum EvmError {
-  EVM_ERROR_UNSPECIFIED = 0;
-  EVM_ERROR_VAULT_SEALED = 1;
-  EVM_ERROR_INTERNAL = 2;
-}
-
-message WalletEntry {
-  bytes address = 1; // 20-byte Ethereum address
-}
-
-message WalletList {
-  repeated WalletEntry wallets = 1;
-}
-
-message WalletCreateResponse {
-  oneof result {
-    WalletEntry wallet = 1;
-    EvmError error = 2;
-  }
-}
-
-message WalletListResponse {
-  oneof result {
-    WalletList wallets = 1;
-    EvmError error = 2;
-  }
-}
-
-// --- Grant types ---
-
-message TransactionRateLimit {
-  uint32 count = 1;
-  int64 window_secs = 2;
-}
-
-message VolumeRateLimit {
-  bytes max_volume = 1; // U256 as big-endian bytes
-  int64 window_secs = 2;
-}
-
-message SharedSettings {
-  int32 wallet_id = 1;
-  uint64 chain_id = 2;
-  optional google.protobuf.Timestamp valid_from = 3;
-  optional google.protobuf.Timestamp valid_until = 4;
-  optional bytes max_gas_fee_per_gas = 5; // U256 as big-endian bytes
-  optional bytes max_priority_fee_per_gas = 6; // U256 as big-endian bytes
-  optional TransactionRateLimit rate_limit = 7;
-}
-
-message EtherTransferSettings {
-  repeated bytes targets = 1; // list of 20-byte Ethereum addresses
-  VolumeRateLimit limit = 2;
-}
-
-message TokenTransferSettings {
-  bytes token_contract = 1; // 20-byte Ethereum address
-  optional bytes target = 2; // 20-byte Ethereum address; absent means any recipient allowed
-  repeated VolumeRateLimit volume_limits = 3;
-}
-
-message SpecificGrant {
-  oneof grant {
-    EtherTransferSettings ether_transfer = 1;
-    TokenTransferSettings token_transfer = 2;
-  }
-}
-
-message EtherTransferMeaning {
-  bytes to = 1; // 20-byte Ethereum address
-  bytes value = 2; // U256 as big-endian bytes
-}
-
-message TokenInfo {
-  string symbol = 1;
-  bytes address = 2; // 20-byte Ethereum address
-  uint64 chain_id = 3;
-}
-
-// Mirror of token_transfers::Meaning
-message TokenTransferMeaning {
-  TokenInfo token = 1;
-  bytes to = 2; // 20-byte Ethereum address
-  bytes value = 3; // U256 as big-endian bytes
-}
-
-// Mirror of policies::SpecificMeaning
-message SpecificMeaning {
-  oneof meaning {
-    EtherTransferMeaning ether_transfer = 1;
-    TokenTransferMeaning token_transfer = 2;
-  }
-}
-
-// --- Eval error types ---
-message GasLimitExceededViolation {
-  optional bytes max_gas_fee_per_gas = 1; // U256 as big-endian bytes
-  optional bytes max_priority_fee_per_gas = 2; // U256 as big-endian bytes
-}
-
-message EvalViolation {
-  oneof kind {
-    bytes invalid_target = 1; // 20-byte Ethereum address
-    GasLimitExceededViolation gas_limit_exceeded = 2;
-    google.protobuf.Empty rate_limit_exceeded = 3;
-    google.protobuf.Empty volumetric_limit_exceeded = 4;
-    google.protobuf.Empty invalid_time = 5;
-    google.protobuf.Empty invalid_transaction_type = 6;
-  }
-}
-
-// Transaction was classified but no grant covers it
-message NoMatchingGrantError {
-  SpecificMeaning meaning = 1;
-}
-
-// Transaction was classified and a grant was found, but constraints were violated
-message PolicyViolationsError {
-  SpecificMeaning meaning = 1;
-  repeated EvalViolation violations = 2;
-}
-
-// top-level error returned when transaction evaluation fails
-message TransactionEvalError {
-  oneof kind {
-    google.protobuf.Empty contract_creation_not_supported = 1;
-    google.protobuf.Empty unsupported_transaction_type = 2;
-    NoMatchingGrantError no_matching_grant = 3;
-    PolicyViolationsError policy_violations = 4;
-  }
-}
-
-// --- UserAgent grant management ---
-message EvmGrantCreateRequest {
-  int32 client_id = 1;
-  SharedSettings shared = 2;
-  SpecificGrant specific = 3;
-}
-
-message EvmGrantCreateResponse {
-  oneof result {
-    int32 grant_id = 1;
-    EvmError error = 2;
-  }
-}
-
-message EvmGrantDeleteRequest {
-  int32 grant_id = 1;
-}
-
-message EvmGrantDeleteResponse {
-  oneof result {
-    google.protobuf.Empty ok = 1;
-    EvmError error = 2;
-  }
-}
-
-// Basic grant info returned in grant listings
-message GrantEntry {
-  int32 id = 1;
-  int32 client_id = 2;
-  SharedSettings shared = 3;
-  SpecificGrant specific = 4;
-}
-
-message EvmGrantListRequest {
-  optional int32 wallet_id = 1;
-}
-
-message EvmGrantListResponse {
-  oneof result {
-    EvmGrantList grants = 1;
-    EvmError error = 2;
-  }
-}
-
-message EvmGrantList {
-  repeated GrantEntry grants = 1;
-}
-
-// --- Client transaction operations ---
-
-message EvmSignTransactionRequest {
-  bytes wallet_address = 1; // 20-byte Ethereum address
-  bytes rlp_transaction = 2; // RLP-encoded EIP-1559 transaction (unsigned)
-}
-
-// oneof because signing and evaluation happen atomically — a signing failure
-// is always either an eval error or an internal error, never a partial success
-message EvmSignTransactionResponse {
-  oneof result {
-    bytes signature = 1; // 65-byte signature: r[32] || s[32] || v[1]
-    TransactionEvalError eval_error = 2;
-    EvmError error = 3;
-  }
-}
-
-message EvmAnalyzeTransactionRequest {
-  bytes wallet_address = 1; // 20-byte Ethereum address
-  bytes rlp_transaction = 2; // RLP-encoded EIP-1559 transaction
-}
-
-message EvmAnalyzeTransactionResponse {
-  oneof result {
-    SpecificMeaning meaning = 1;
-    TransactionEvalError eval_error = 2;
-    EvmError error = 3;
-  }
-}
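The deleted grant types combine a per-window transaction count (`TransactionRateLimit`) and a per-window value cap (`VolumeRateLimit`), and evaluation reports `rate_limit_exceeded` / `volumetric_limit_exceeded` violations. A small Python sketch of how an evaluator might enforce both over a sliding window; the names mirror the proto, but the eviction strategy is an assumption, not the server's actual implementation:

```python
import time


class RateLimiter:
    """Sliding-window counterpart of TransactionRateLimit + VolumeRateLimit:
    at most `count` transactions and `max_volume` total value per window."""

    def __init__(self, count: int, max_volume: int, window_secs: int):
        self.count = count
        self.max_volume = max_volume
        self.window_secs = window_secs
        self.events = []  # (timestamp, value) pairs still inside the window

    def check(self, value: int, now=None):
        now = time.monotonic() if now is None else now
        # Evict events that fell out of the window.
        self.events = [(t, v) for t, v in self.events if now - t < self.window_secs]
        violations = []
        if len(self.events) + 1 > self.count:
            violations.append("rate_limit_exceeded")
        if sum(v for _, v in self.events) + value > self.max_volume:
            violations.append("volumetric_limit_exceeded")
        if not violations:
            self.events.append((now, value))  # only admitted transactions count
        return violations  # empty list: the transaction passes


limiter = RateLimiter(count=2, max_volume=100, window_secs=60)
assert limiter.check(40, now=0.0) == []
assert limiter.check(50, now=1.0) == []
assert limiter.check(5, now=2.0) == ["rate_limit_exceeded"]
assert limiter.check(60, now=60.5) == ["volumetric_limit_exceeded"]
```

Rejected transactions are not recorded, matching the idea that only signed transactions consume the grant's budget.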
protobufs/unseal.proto (new file, 25 lines)
@@ -0,0 +1,25 @@
+syntax = "proto3";
+
+package arbiter.unseal;
+
+import "google/protobuf/empty.proto";
+
+message UnsealStart {
+  bytes client_pubkey = 1;
+}
+
+message UnsealStartResponse {
+  bytes server_pubkey = 1;
+}
+message UnsealEncryptedKey {
+  bytes nonce = 1;
+  bytes ciphertext = 2;
+  bytes associated_data = 3;
+}
+
+enum UnsealResult {
+  UNSEAL_RESULT_UNSPECIFIED = 0;
+  UNSEAL_RESULT_SUCCESS = 1;
+  UNSEAL_RESULT_INVALID_KEY = 2;
+  UNSEAL_RESULT_UNBOOTSTRAPPED = 3;
+}
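`UnsealEncryptedKey` carries the standard AEAD envelope: a nonce, the ciphertext, and associated data that is authenticated but not encrypted. A toy encrypt-then-MAC construction in stdlib Python illustrates how the three fields fit together; the real implementation would use a proper AEAD (e.g. ChaCha20-Poly1305 or AES-GCM), and deriving the symmetric key from the exchanged pubkeys is assumed, not shown:

```python
import hashlib
import hmac
import secrets


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. Illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def seal(key: bytes, plaintext: bytes, associated_data: bytes) -> dict:
    nonce = secrets.token_bytes(12)
    stream = _keystream(key, nonce, len(plaintext))
    ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))
    # The tag covers nonce, ciphertext, AND associated_data, so tampering with
    # any of the three fields is detected. Here it is appended to the ciphertext.
    tag = hmac.new(key, nonce + ciphertext + associated_data, hashlib.sha256).digest()
    return {"nonce": nonce, "ciphertext": ciphertext + tag,
            "associated_data": associated_data}


def open_envelope(key: bytes, msg: dict) -> bytes:
    ciphertext, tag = msg["ciphertext"][:-32], msg["ciphertext"][-32:]
    expected = hmac.new(key, msg["nonce"] + ciphertext + msg["associated_data"],
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("UNSEAL_RESULT_INVALID_KEY")  # wrong key or tampered
    stream = _keystream(key, msg["nonce"], len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))


key = secrets.token_bytes(32)
msg = seal(key, b"vault master key", b"unseal-v1")
assert open_envelope(key, msg) == b"vault master key"
```

A decryption failure maps naturally onto `UNSEAL_RESULT_INVALID_KEY`, while success maps onto `UNSEAL_RESULT_SUCCESS`.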
@@ -1,79 +1,21 @@
 syntax = "proto3";
 
-package arbiter.user_agent;
+package arbiter;
 
-import "google/protobuf/empty.proto";
-import "evm.proto";
+import "auth.proto";
+import "unseal.proto";
 
-message AuthChallengeRequest {
-  bytes pubkey = 1;
-  optional string bootstrap_token = 2;
-}
-
-message AuthChallenge {
-  bytes pubkey = 1;
-  int32 nonce = 2;
-}
-
-message AuthChallengeSolution {
-  bytes signature = 1;
-}
-
-message AuthOk {}
-
-message UnsealStart {
-  bytes client_pubkey = 1;
-}
-
-message UnsealStartResponse {
-  bytes server_pubkey = 1;
-}
-message UnsealEncryptedKey {
-  bytes nonce = 1;
-  bytes ciphertext = 2;
-  bytes associated_data = 3;
-}
-
-enum UnsealResult {
-  UNSEAL_RESULT_UNSPECIFIED = 0;
-  UNSEAL_RESULT_SUCCESS = 1;
-  UNSEAL_RESULT_INVALID_KEY = 2;
-  UNSEAL_RESULT_UNBOOTSTRAPPED = 3;
-}
-
-enum VaultState {
-  VAULT_STATE_UNSPECIFIED = 0;
-  VAULT_STATE_UNBOOTSTRAPPED = 1;
-  VAULT_STATE_SEALED = 2;
-  VAULT_STATE_UNSEALED = 3;
-  VAULT_STATE_ERROR = 4;
-}
-
 message UserAgentRequest {
   oneof payload {
-    AuthChallengeRequest auth_challenge_request = 1;
-    AuthChallengeSolution auth_challenge_solution = 2;
-    UnsealStart unseal_start = 3;
-    UnsealEncryptedKey unseal_encrypted_key = 4;
-    google.protobuf.Empty query_vault_state = 5;
-    google.protobuf.Empty evm_wallet_create = 6;
-    google.protobuf.Empty evm_wallet_list = 7;
-    arbiter.evm.EvmGrantCreateRequest evm_grant_create = 8;
-    arbiter.evm.EvmGrantDeleteRequest evm_grant_delete = 9;
-    arbiter.evm.EvmGrantListRequest evm_grant_list = 10;
+    arbiter.auth.ClientMessage auth_message = 1;
+    arbiter.unseal.UnsealStart unseal_start = 2;
+    arbiter.unseal.UnsealEncryptedKey unseal_encrypted_key = 3;
   }
 }
 message UserAgentResponse {
   oneof payload {
-    AuthChallenge auth_challenge = 1;
-    AuthOk auth_ok = 2;
-    UnsealStartResponse unseal_start_response = 3;
-    UnsealResult unseal_result = 4;
-    VaultState vault_state = 5;
-    arbiter.evm.WalletCreateResponse evm_wallet_create = 6;
-    arbiter.evm.WalletListResponse evm_wallet_list = 7;
-    arbiter.evm.EvmGrantCreateResponse evm_grant_create = 8;
-    arbiter.evm.EvmGrantDeleteResponse evm_grant_delete = 9;
-    arbiter.evm.EvmGrantListResponse evm_grant_list = 10;
+    arbiter.auth.ServerMessage auth_message = 1;
+    arbiter.unseal.UnsealStartResponse unseal_start_response = 2;
+    arbiter.unseal.UnsealResult unseal_result = 3;
   }
 }
@@ -1,150 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Fetch the Uniswap default token list and emit Rust `TokenInfo` statics.
|
|
||||||
|
|
||||||
Usage:
|
|
||||||
python3 gen_erc20_registry.py # fetch from IPFS
|
|
||||||
python3 gen_erc20_registry.py tokens.json # local file
|
|
||||||
python3 gen_erc20_registry.py tokens.json out.rs # custom output file
|
|
||||||
"""
|
|
||||||
|
|
||||||
import json
|
|
||||||
import re
|
|
||||||
import sys
|
|
||||||
import unicodedata
|
|
||||||
import urllib.request
|
|
||||||
|
|
||||||
UNISWAP_URL = "https://ipfs.io/ipns/tokens.uniswap.org"
|
|
||||||
|
|
||||||
SOLANA_CHAIN_ID = 501000101
|
|
||||||
IDENTIFIER_RE = re.compile(r"[^A-Za-z0-9]+")
|
|
||||||
|
|
||||||
|
|
||||||
def load_tokens(source=None):
|
|
||||||
if source:
|
|
||||||
with open(source) as f:
|
|
||||||
return json.load(f)
|
|
||||||
req = urllib.request.Request(
|
|
||||||
UNISWAP_URL,
|
|
||||||
headers={"Accept": "application/json", "User-Agent": "gen_tokens/1.0"},
|
|
||||||
)
|
|
||||||
with urllib.request.urlopen(req, timeout=60) as resp:
|
|
||||||
return json.loads(resp.read())
|
|
||||||
|
|
||||||
|
|
||||||
def escape(s: str) -> str:
|
|
||||||
return s.replace("\\", "\\\\").replace('"', '\\"')
|
|
||||||
|
|
||||||
|
|
||||||
def to_screaming_case(name: str) -> str:
|
|
||||||
normalized = unicodedata.normalize("NFKD", name or "")
|
|
||||||
ascii_name = normalized.encode("ascii", "ignore").decode("ascii")
|
|
||||||
snake = IDENTIFIER_RE.sub("_", ascii_name).strip("_").upper()
|
|
||||||
if not snake:
|
|
||||||
snake = "TOKEN"
|
|
||||||
if snake[0].isdigit():
|
|
||||||
snake = f"TOKEN_{snake}"
|
|
||||||
return snake
|
|
||||||
|
|
||||||
|
|
||||||
def static_name_for_token(token: dict, used_names: set[str]) -> str:
|
|
||||||
base = to_screaming_case(token.get("name", ""))
|
|
||||||
if base not in used_names:
|
|
||||||
used_names.add(base)
|
|
||||||
return base
|
|
||||||
|
|
||||||
address = token["address"]
|
|
||||||
suffix = f"{token['chainId']}_{address[2:].upper()[-8:]}"
|
|
||||||
    candidate = f"{base}_{suffix}"

    i = 2
    while candidate in used_names:
        candidate = f"{base}_{suffix}_{i}"
        i += 1

    used_names.add(candidate)
    return candidate


def main():
    source = sys.argv[1] if len(sys.argv) > 1 else None
    output = sys.argv[2] if len(sys.argv) > 2 else "generated_tokens.rs"
    data = load_tokens(source)
    tokens = data["tokens"]

    # Deduplicate by (chainId, address)
    seen = set()
    unique = []
    for t in tokens:
        key = (t["chainId"], t["address"].lower())
        if key not in seen:
            seen.add(key)
            unique.append(t)

    unique.sort(key=lambda t: (t["chainId"], t.get("symbol", "").upper()))
    evm_tokens = [t for t in unique if t["chainId"] != SOLANA_CHAIN_ID]

    ver = data["version"]
    lines = []
    w = lines.append

    w(
        f"// Auto-generated from Uniswap token list v{ver['major']}.{ver['minor']}.{ver['patch']}"
    )
    w(f"// {len(evm_tokens)} tokens")
    w("// DO NOT EDIT - regenerate with gen_erc20_registry.py")
    w("")

    used_static_names = set()
    token_statics = []
    for t in evm_tokens:
        static_name = static_name_for_token(t, used_static_names)
        token_statics.append((static_name, t))

    for static_name, t in token_statics:
        addr = t["address"]
        name = escape(t.get("name", ""))
        symbol = escape(t.get("symbol", ""))
        decimals = t.get("decimals", 18)
        logo = t.get("logoURI")
        chain = t["chainId"]

        logo_val = f'Some("{escape(logo)}")' if logo else "None"

        w(f"pub static {static_name}: TokenInfo = TokenInfo {{")
        w(f'    name: "{name}",')
        w(f'    symbol: "{symbol}",')
        w(f"    decimals: {decimals},")
        w(f'    contract: address!("{addr}"),')
        w(f"    chain: {chain},")
        w(f"    logo_uri: {logo_val},")
        w("};")
        w("")

    w("pub static TOKENS: &[&TokenInfo] = &[")
    for static_name, _ in token_statics:
        w(f"    &{static_name},")
    w("];")
    w("")
    w("pub fn get_token(")
    w("    chain_id: alloy::primitives::ChainId,")
    w("    address: alloy::primitives::Address,")
    w(") -> Option<&'static TokenInfo> {")
    w("    match (chain_id, address) {")
    for static_name, t in token_statics:
        w(
            f'        ({t["chainId"]}, addr) if addr == address!("{t["address"]}") => Some(&{static_name}),'
        )
    w("        _ => None,")
    w("    }")
    w("}")
    w("")

    with open(output, "w") as f:
        f.write("\n".join(lines))

    print(f"Wrote {len(token_statics)} tokens to {output}")


if __name__ == "__main__":
    main()
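The suffix-collision loop at the top of this hunk is self-contained enough to check in isolation. A minimal sketch (the `unique_name` wrapper below is hypothetical, standing in for `static_name_for_token`, whose full body is outside this hunk):

```python
def unique_name(base: str, suffix: str, used_names: set) -> str:
    # Same collision strategy as the generator: try BASE_SUFFIX first,
    # then BASE_SUFFIX_2, BASE_SUFFIX_3, ... until the name is unused.
    candidate = f"{base}_{suffix}"
    i = 2
    while candidate in used_names:
        candidate = f"{base}_{suffix}_{i}"
        i += 1
    used_names.add(candidate)
    return candidate


used: set = set()
print(unique_name("USDC", "1", used))  # USDC_1
print(unique_name("USDC", "1", used))  # USDC_1_2
```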
server/Cargo.lock: 2916 lines changed (generated, diff collapsed)
@@ -1,17 +1,15 @@
 [workspace]
 members = [
-    "crates/*",
+    "crates/arbiter-client",
+    "crates/arbiter-proto",
+    "crates/arbiter-server",
+    "crates/arbiter-useragent",
 ]
 resolver = "3"


 [workspace.dependencies]
-tonic = { version = "0.14.3", features = [
-    "deflate",
-    "gzip",
-    "tls-connect-info",
-    "zstd",
-] }
+tonic = { version = "0.14.3", features = ["deflate", "gzip", "tls-connect-info", "zstd"] }
 tracing = "0.1.44"
 tokio = { version = "1.49.0", features = ["full"] }
 ed25519-dalek = { version = "3.0.0-pre.6", features = ["rand_core"] }
@@ -25,14 +23,3 @@ async-trait = "0.1.89"
 futures = "0.3.31"
 tokio-stream = { version = "0.1.18", features = ["full"] }
 kameo = "0.19.2"
-prost-types = { version = "0.14.3", features = ["chrono"] }
-x25519-dalek = { version = "2.0.1", features = ["getrandom"] }
-rstest = "0.26.1"
-rustls-pki-types = "1.14.0"
-alloy = "1.7.3"
-rcgen = { version = "0.14.7", features = [
-    "aws_lc_rs",
-    "pem",
-    "x509-parser",
-    "zeroize",
-], default-features = false }
@@ -3,6 +3,5 @@ name = "arbiter-client"
 version = "0.1.0"
 edition = "2024"
 repository = "https://git.markettakers.org/MarketTakers/arbiter"
-license = "Apache-2.0"

 [dependencies]
@@ -3,7 +3,6 @@ name = "arbiter-proto"
 version = "0.1.0"
 edition = "2024"
 repository = "https://git.markettakers.org/MarketTakers/arbiter"
-license = "Apache-2.0"

 [dependencies]
 tonic.workspace = true
@@ -13,24 +12,8 @@ hex = "0.4.3"
 tonic-prost = "0.14.3"
 prost = "0.14.3"
 kameo.workspace = true
-url = "2.5.8"
-miette.workspace = true
-thiserror.workspace = true
-rustls-pki-types.workspace = true
-base64 = "0.22.1"
-prost-types.workspace = true
-tracing.workspace = true
-async-trait.workspace = true

 [build-dependencies]
 tonic-prost-build = "0.14.3"

-[dev-dependencies]
-rstest.workspace = true
-rand.workspace = true
-rcgen.workspace = true
-
-[package.metadata.cargo-shear]
-ignored = ["tonic-prost", "prost", "kameo"]
@@ -11,9 +11,7 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
         .compile_protos(
             &[
                 format!("{}/arbiter.proto", PROTOBUF_DIR),
-                format!("{}/user_agent.proto", PROTOBUF_DIR),
-                format!("{}/client.proto", PROTOBUF_DIR),
-                format!("{}/evm.proto", PROTOBUF_DIR),
+                format!("{}/auth.proto", PROTOBUF_DIR),
             ],
             &[PROTOBUF_DIR.to_string()],
         )
@@ -1,28 +1,22 @@
-pub mod transport;
-pub mod url;
-
-use base64::{Engine, prelude::BASE64_STANDARD};
+use crate::proto::auth::AuthChallenge;

 pub mod proto {
     tonic::include_proto!("arbiter");

-    pub mod user_agent {
-        tonic::include_proto!("arbiter.user_agent");
+    pub mod auth {
+        tonic::include_proto!("arbiter.auth");
     }

-    pub mod client {
-        tonic::include_proto!("arbiter.client");
-    }
-
-    pub mod evm {
-        tonic::include_proto!("arbiter.evm");
+    pub mod unseal {
+        tonic::include_proto!("arbiter.unseal");
     }
 }

-pub static BOOTSTRAP_PATH: &str = "bootstrap_token";
+pub mod transport;
+
+pub static BOOTSTRAP_TOKEN_PATH: &'static str = "bootstrap_token";

 pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {
-    static ARBITER_HOME: &str = ".arbiter";
+    static ARBITER_HOME: &'static str = ".arbiter";
     let home_dir = std::env::home_dir().ok_or(std::io::Error::new(
         std::io::ErrorKind::PermissionDenied,
         "can not get home directory",
@@ -34,7 +28,7 @@ pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {
     Ok(arbiter_home)
 }

-pub fn format_challenge(nonce: i32, pubkey: &[u8]) -> Vec<u8> {
-    let concat_form = format!("{}:{}", nonce, BASE64_STANDARD.encode(pubkey));
-    concat_form.into_bytes()
+pub fn format_challenge(challenge: &AuthChallenge) -> Vec<u8> {
+    let concat_form = format!("{}:{}", challenge.nonce, hex::encode(&challenge.pubkey));
+    concat_form.into_bytes().to_vec()
 }
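The `-` side of `format_challenge` frames the challenge as `<nonce>:<base64(pubkey)>` in UTF-8 (the `+` side switches the key encoding to hex). A quick Python sketch of the base64 framing; names mirror the Rust helper but this is illustrative only:

```python
import base64


def format_challenge(nonce: int, pubkey: bytes) -> bytes:
    # "<nonce>:<standard-base64(pubkey)>", encoded to UTF-8 bytes,
    # matching the BASE64_STANDARD variant of the helper.
    return f"{nonce}:{base64.b64encode(pubkey).decode('ascii')}".encode()


print(format_challenge(7, b"\x01\x02"))  # b'7:AQI='
```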
@@ -1,293 +1,46 @@
-//! Transport-facing abstractions for protocol/session code.
-//!
-//! This module separates three concerns:
-//!
-//! - protocol/session logic wants a small duplex interface ([`Bi`])
-//! - transport adapters push concrete stream items to an underlying IO layer
-//! - transport boundaries translate between protocol-facing and transport-facing
-//!   item types via direction-specific converters
-//!
-//! [`Bi`] is intentionally minimal and transport-agnostic:
-//! - [`Bi::recv`] yields inbound protocol messages
-//! - [`Bi::send`] accepts outbound protocol/domain items
-//!
-//! # Generic Ordering Rule
-//!
-//! This module uses a single convention consistently: when a type or trait is
-//! parameterized by protocol message directions, the generic parameters are
-//! declared as `Inbound` first, then `Outbound`.
-//!
-//! For [`Bi`], that means `Bi<Inbound, Outbound>`:
-//! - `recv() -> Option<Inbound>`
-//! - `send(Outbound)`
-//!
-//! For adapter types that are parameterized by direction-specific converters,
-//! inbound-related converter parameters are declared before outbound-related
-//! converter parameters.
-//!
-//! [`RecvConverter`] and [`SendConverter`] are infallible conversion traits used
-//! by adapters to map between protocol-facing and transport-facing item types.
-//! The traits themselves are not result-aware; adapters decide how transport
-//! errors are handled before (or instead of) conversion.
-//!
-//! [`grpc::GrpcAdapter`] combines:
-//! - a tonic inbound stream
-//! - a Tokio sender for outbound transport items
-//! - a [`RecvConverter`] for the receive path
-//! - a [`SendConverter`] for the send path
-//!
-//! [`DummyTransport`] is a no-op implementation useful for tests and local actor
-//! execution where no real network stream exists.
-//!
-//! # Component Interaction
-//!
-//! ```text
-//! inbound (network -> protocol)
-//! ============================
-//!
-//! tonic::Streaming<RecvTransport>
-//!   -> grpc::GrpcAdapter::recv()
-//!        |
-//!        +--> on `Ok(item)`: RecvConverter::convert(RecvTransport) -> Inbound
-//!        +--> on `Err(status)`: log error and close stream (`None`)
-//!   -> Bi::recv()
-//!   -> protocol/session actor
-//!
-//! outbound (protocol -> network)
-//! ==============================
-//!
-//! protocol/session actor
-//!   -> Bi::send(Outbound)
-//!   -> grpc::GrpcAdapter::send()
-//!        |
-//!        +--> SendConverter::convert(Outbound) -> SendTransport
-//!   -> Tokio mpsc::Sender<SendTransport>
-//!   -> tonic response stream
-//! ```
-//!
-//! # Design Notes
-//!
-//! - `send()` returns [`Error`] only for transport delivery failures (for
-//!   example, when the outbound channel is closed).
-//! - [`grpc::GrpcAdapter`] logs tonic receive errors and treats them as stream
-//!   closure (`None`).
-//! - When protocol-facing and transport-facing types are identical, use
-//!   [`IdentityRecvConverter`] / [`IdentitySendConverter`].
-
-use std::marker::PhantomData;
-
-use async_trait::async_trait;
-
-/// Errors returned by transport adapters implementing [`Bi`].
-#[derive(thiserror::Error, Debug)]
-pub enum Error {
-    #[error("Transport channel is closed")]
-    ChannelClosed,
-}
-
-/// Minimal bidirectional transport abstraction used by protocol code.
-///
-/// `Bi<Inbound, Outbound>` models a duplex channel with:
-/// - inbound items of type `Inbound` read via [`Bi::recv`]
-/// - outbound items of type `Outbound` written via [`Bi::send`]
-#[async_trait]
-pub trait Bi<Inbound, Outbound>: Send + Sync + 'static {
-    async fn send(&mut self, item: Outbound) -> Result<(), Error>;
-
-    async fn recv(&mut self) -> Option<Inbound>;
-}
-
-/// Converts transport-facing inbound items into protocol-facing inbound items.
-pub trait RecvConverter: Send + Sync + 'static {
-    type Input;
-    type Output;
-
-    fn convert(&self, item: Self::Input) -> Self::Output;
-}
-
-/// Converts protocol/domain outbound items into transport-facing outbound items.
-pub trait SendConverter: Send + Sync + 'static {
-    type Input;
-    type Output;
-
-    fn convert(&self, item: Self::Input) -> Self::Output;
-}
-
-/// A [`RecvConverter`] that forwards values unchanged.
-pub struct IdentityRecvConverter<T> {
-    _marker: PhantomData<T>,
-}
-
-impl<T> IdentityRecvConverter<T> {
-    pub fn new() -> Self {
-        Self {
-            _marker: PhantomData,
-        }
-    }
-}
-
-impl<T> Default for IdentityRecvConverter<T> {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
-impl<T> RecvConverter for IdentityRecvConverter<T>
-where
-    T: Send + Sync + 'static,
-{
-    type Input = T;
-    type Output = T;
-
-    fn convert(&self, item: Self::Input) -> Self::Output {
-        item
-    }
-}
-
-/// A [`SendConverter`] that forwards values unchanged.
-pub struct IdentitySendConverter<T> {
-    _marker: PhantomData<T>,
-}
-
-impl<T> IdentitySendConverter<T> {
-    pub fn new() -> Self {
-        Self {
-            _marker: PhantomData,
-        }
-    }
-}
-
-impl<T> Default for IdentitySendConverter<T> {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
-impl<T> SendConverter for IdentitySendConverter<T>
-where
-    T: Send + Sync + 'static,
-{
-    type Input = T;
-    type Output = T;
-
-    fn convert(&self, item: Self::Input) -> Self::Output {
-        item
-    }
-}
-
-/// gRPC-specific transport adapters and helpers.
-pub mod grpc {
-    use async_trait::async_trait;
-    use futures::StreamExt;
-    use tokio::sync::mpsc;
-    use tonic::Streaming;
-
-    use super::{Bi, Error, RecvConverter, SendConverter};
-
-    /// [`Bi`] adapter backed by a tonic gRPC bidirectional stream.
-    ///
-    /// Tonic receive errors are logged and treated as stream closure (`None`).
-    /// The receive converter is only invoked for successful inbound transport
-    /// items.
-    pub struct GrpcAdapter<InboundConverter, OutboundConverter>
-    where
-        InboundConverter: RecvConverter,
-        OutboundConverter: SendConverter,
-    {
-        sender: mpsc::Sender<OutboundConverter::Output>,
-        receiver: Streaming<InboundConverter::Input>,
-        inbound_converter: InboundConverter,
-        outbound_converter: OutboundConverter,
-    }
-
-    impl<InboundTransport, Inbound, InboundConverter, OutboundConverter>
-        GrpcAdapter<InboundConverter, OutboundConverter>
-    where
-        InboundConverter: RecvConverter<Input = InboundTransport, Output = Inbound>,
-        OutboundConverter: SendConverter,
-    {
-        pub fn new(
-            sender: mpsc::Sender<OutboundConverter::Output>,
-            receiver: Streaming<InboundTransport>,
-            inbound_converter: InboundConverter,
-            outbound_converter: OutboundConverter,
-        ) -> Self {
-            Self {
-                sender,
-                receiver,
-                inbound_converter,
-                outbound_converter,
-            }
-        }
-    }
-
-    #[async_trait]
-    impl<InboundConverter, OutboundConverter> Bi<InboundConverter::Output, OutboundConverter::Input>
-        for GrpcAdapter<InboundConverter, OutboundConverter>
-    where
-        InboundConverter: RecvConverter,
-        OutboundConverter: SendConverter,
-        OutboundConverter::Input: Send + 'static,
-        OutboundConverter::Output: Send + 'static,
-    {
-        #[tracing::instrument(level = "trace", skip(self, item))]
-        async fn send(&mut self, item: OutboundConverter::Input) -> Result<(), Error> {
-            let outbound = self.outbound_converter.convert(item);
-            self.sender
-                .send(outbound)
-                .await
-                .map_err(|_| Error::ChannelClosed)
-        }
-
-        #[tracing::instrument(level = "trace", skip(self))]
-        async fn recv(&mut self) -> Option<InboundConverter::Output> {
-            match self.receiver.next().await {
-                Some(Ok(item)) => Some(self.inbound_converter.convert(item)),
-                Some(Err(error)) => {
-                    tracing::error!(error = ?error, "grpc transport recv failed; closing stream");
-                    None
-                }
-                None => None,
-            }
-        }
-    }
-}
-
-/// No-op [`Bi`] transport for tests and manual actor usage.
-///
-/// `send` drops all items and succeeds. [`Bi::recv`] never resolves and therefore
-/// does not busy-wait or spuriously close the stream.
-pub struct DummyTransport<Inbound, Outbound> {
-    _marker: PhantomData<(Inbound, Outbound)>,
-}
-
-impl<Inbound, Outbound> DummyTransport<Inbound, Outbound> {
-    pub fn new() -> Self {
-        Self {
-            _marker: PhantomData,
-        }
-    }
-}
-
-impl<Inbound, Outbound> Default for DummyTransport<Inbound, Outbound> {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
-#[async_trait]
-impl<Inbound, Outbound> Bi<Inbound, Outbound> for DummyTransport<Inbound, Outbound>
-where
-    Inbound: Send + Sync + 'static,
-    Outbound: Send + Sync + 'static,
-{
-    async fn send(&mut self, _item: Outbound) -> Result<(), Error> {
-        Ok(())
-    }
-
-    async fn recv(&mut self) -> Option<Inbound> {
-        std::future::pending::<()>().await;
-        None
-    }
-}
+use futures::{Stream, StreamExt};
+use tokio::sync::mpsc::{self, error::SendError};
+use tonic::{Status, Streaming};
+
+// Abstraction for stream for sans-io capabilities
+pub trait Bi<T, U>: Stream<Item = Result<T, Status>> + Send + Sync + 'static {
+    type Error;
+    fn send(
+        &mut self,
+        item: Result<U, Status>,
+    ) -> impl std::future::Future<Output = Result<(), Self::Error>> + Send;
+}
+
+// Bi-directional stream abstraction for handling gRPC streaming requests and responses
+pub struct BiStream<T, U> {
+    pub request_stream: Streaming<T>,
+    pub response_sender: mpsc::Sender<Result<U, Status>>,
+}
+
+impl<T, U> Stream for BiStream<T, U>
+where
+    T: Send + 'static,
+    U: Send + 'static,
+{
+    type Item = Result<T, Status>;
+
+    fn poll_next(
+        mut self: std::pin::Pin<&mut Self>,
+        cx: &mut std::task::Context<'_>,
+    ) -> std::task::Poll<Option<Self::Item>> {
+        self.request_stream.poll_next_unpin(cx)
+    }
+}
+
+impl<T, U> Bi<T, U> for BiStream<T, U>
+where
+    T: Send + 'static,
+    U: Send + 'static,
+{
+    type Error = SendError<Result<U, Status>>;
+
+    async fn send(&mut self, item: Result<U, Status>) -> Result<(), Self::Error> {
+        self.response_sender.send(item).await
+    }
+}
@@ -1,128 +0,0 @@
-use std::fmt::Display;
-
-use base64::{Engine as _, prelude::BASE64_URL_SAFE};
-use rustls_pki_types::CertificateDer;
-
-const ARBITER_URL_SCHEME: &str = "arbiter";
-const CERT_QUERY_KEY: &str = "cert";
-const BOOTSTRAP_TOKEN_QUERY_KEY: &str = "bootstrap_token";
-
-pub struct ArbiterUrl {
-    pub host: String,
-    pub port: u16,
-    pub ca_cert: CertificateDer<'static>,
-    pub bootstrap_token: Option<String>,
-}
-
-impl Display for ArbiterUrl {
-    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        let mut base = format!(
-            "{ARBITER_URL_SCHEME}://{}:{}?{CERT_QUERY_KEY}={}",
-            self.host,
-            self.port,
-            BASE64_URL_SAFE.encode(self.ca_cert.to_vec())
-        );
-        if let Some(token) = &self.bootstrap_token {
-            base.push_str(&format!("&{BOOTSTRAP_TOKEN_QUERY_KEY}={}", token));
-        }
-        f.write_str(&base)
-    }
-}
-
-#[derive(Debug, thiserror::Error, miette::Diagnostic)]
-pub enum Error {
-    #[error("Invalid URL scheme, expected '{ARBITER_URL_SCHEME}://'")]
-    #[diagnostic(
-        code(arbiter::url::invalid_scheme),
-        help("The URL must start with '{ARBITER_URL_SCHEME}://'")
-    )]
-    InvalidScheme,
-    #[error("Missing host in URL")]
-    #[diagnostic(
-        code(arbiter::url::missing_host),
-        help("The URL must include a host, e.g., '{ARBITER_URL_SCHEME}://127.0.0.1:<port>'")
-    )]
-    MissingHost,
-    #[error("Missing port in URL")]
-    #[diagnostic(
-        code(arbiter::url::missing_port),
-        help("The URL must include a port, e.g., '{ARBITER_URL_SCHEME}://127.0.0.1:1234'")
-    )]
-    MissingPort,
-    #[error("Missing 'cert' query parameter in URL")]
-    #[diagnostic(
-        code(arbiter::url::missing_cert),
-        help("The URL must include a 'cert' query parameter")
-    )]
-    MissingCert,
-    #[error("Invalid base64 in 'cert' query parameter: {0}")]
-    #[diagnostic(code(arbiter::url::invalid_cert_base64))]
-    InvalidCertBase64(#[from] base64::DecodeError),
-}
-
-impl<'a> TryFrom<&'a str> for ArbiterUrl {
-    type Error = Error;
-
-    fn try_from(value: &'a str) -> Result<Self, Self::Error> {
-        let url = url::Url::parse(value).map_err(|_| Error::InvalidScheme)?;
-
-        if url.scheme() != ARBITER_URL_SCHEME {
-            return Err(Error::InvalidScheme);
-        }
-
-        let host = url.host_str().ok_or(Error::MissingHost)?.to_string();
-        let port = url.port().ok_or(Error::MissingPort)?;
-        let cert_str = url
-            .query_pairs()
-            .find(|(k, _)| k == CERT_QUERY_KEY)
-            .ok_or(Error::MissingCert)?
-            .1;
-
-        let cert = BASE64_URL_SAFE.decode(cert_str.as_ref())?;
-        let cert = CertificateDer::from_slice(&cert).into_owned();
-
-        let bootstrap_token = url
-            .query_pairs()
-            .find(|(k, _)| k == BOOTSTRAP_TOKEN_QUERY_KEY)
-            .map(|(_, v)| v.to_string());
-
-        Ok(ArbiterUrl {
-            host,
-            port,
-            ca_cert: cert,
-            bootstrap_token,
-        })
-    }
-}
-
-#[cfg(test)]
-mod tests {
-    use rcgen::generate_simple_self_signed;
-    use rstest::rstest;
-
-    use super::*;
-
-    #[rstest]
-    fn test_parsing_correctness(
-        #[values("127.0.0.1", "localhost", "192.168.1.1", "some.domain.com")] host: &str,
-        #[values(None, Some("token123".to_string()))] bootstrap_token: Option<String>,
-    ) {
-        let cert = generate_simple_self_signed(&["Arbiter CA".into()]).unwrap();
-        let cert = cert.cert.der();
-
-        let url = ArbiterUrl {
-            host: host.to_string(),
-            port: 1234,
-            ca_cert: cert.clone().into_owned(),
-            bootstrap_token,
-        };
-        let url_str = url.to_string();
-        let parsed_url = ArbiterUrl::try_from(url_str.as_str()).unwrap();
-        assert_eq!(url.host, parsed_url.host);
-        assert_eq!(url.port, parsed_url.port);
-        assert_eq!(url.ca_cert.to_vec(), parsed_url.ca_cert.to_vec());
-        assert_eq!(url.bootstrap_token, parsed_url.bootstrap_token);
-    }
-}
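The deleted `url.rs` round-trips URLs of the form `arbiter://<host>:<port>?cert=<url-safe base64 DER>[&bootstrap_token=...]`. A Python sketch of the parse side (the function name and error messages are hypothetical; the Rust maps each failure to a dedicated `Error` variant instead of raising):

```python
import base64
from urllib.parse import parse_qs, urlsplit


def parse_arbiter_url(value: str):
    # Mirrors ArbiterUrl::try_from: scheme, host, and port are mandatory,
    # as is the url-safe-base64 'cert' query parameter; 'bootstrap_token'
    # is optional.
    parts = urlsplit(value)
    if parts.scheme != "arbiter":
        raise ValueError("invalid scheme")
    if not parts.hostname:
        raise ValueError("missing host")
    if parts.port is None:
        raise ValueError("missing port")
    query = parse_qs(parts.query)
    if "cert" not in query:
        raise ValueError("missing cert")
    cert = base64.urlsafe_b64decode(query["cert"][0])
    token = query.get("bootstrap_token", [None])[0]
    return parts.hostname, parts.port, cert, token


print(parse_arbiter_url("arbiter://127.0.0.1:1234?cert=AAE=&bootstrap_token=tok"))
```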
@@ -3,7 +3,6 @@ name = "arbiter-server"
 version = "0.1.0"
 edition = "2024"
 repository = "https://git.markettakers.org/MarketTakers/arbiter"
-license = "Apache-2.0"

 [dependencies]
 diesel = { version = "2.3.6", features = ["chrono", "returning_clauses_for_sqlite_3_35", "serde_json", "time", "uuid"] }
@@ -18,7 +17,6 @@ arbiter-proto.path = "../arbiter-proto"
 tracing.workspace = true
 tracing-subscriber = { version = "0.3", features = ["env-filter"] }
 tonic.workspace = true
-tonic.features = ["tls-aws-lc"]
 tokio.workspace = true
 rustls.workspace = true
 smlang.workspace = true
@@ -31,20 +29,21 @@ futures.workspace = true
 tokio-stream.workspace = true
 dashmap = "6.1.0"
 rand.workspace = true
-rcgen.workspace = true
+rcgen = { version = "0.14.7", features = [
+    "aws_lc_rs",
+    "pem",
+    "x509-parser",
+    "zeroize",
+], default-features = false }
 chrono.workspace = true
 memsafe = "0.4.0"
 zeroize = { version = "1.8.2", features = ["std", "simd"] }
 kameo.workspace = true
-x25519-dalek.workspace = true
+x25519-dalek = { version = "2.0.1", features = ["getrandom"] }
 chacha20poly1305 = { version = "0.10.1", features = ["std"] }
 argon2 = { version = "0.5.3", features = ["zeroize"] }
 restructed = "0.2.2"
 strum = { version = "0.27.2", features = ["derive"] }
-pem = "3.0.6"
-k256 = "0.13.4"
-alloy.workspace = true
-arbiter-tokens-registry.path = "../arbiter-tokens-registry"

 [dev-dependencies]
 insta = "1.46.3"
@@ -24,24 +24,14 @@ create unique index if not exists uniq_nonce_per_root_key on aead_encrypted (
     associated_root_key_id
 );

-create table if not exists tls_history (
-    id INTEGER not null PRIMARY KEY,
-    cert text not null,
-    cert_key text not null, -- PEM Encoded private key
-    ca_cert text not null,
-    ca_key text not null, -- PEM Encoded private key
-    created_at integer not null default(unixepoch ('now'))
-) STRICT;
-
 -- This is a singleton
 create table if not exists arbiter_settings (
     id INTEGER not null PRIMARY KEY CHECK (id = 1), -- singleton row, id must be 1
     root_key_id integer references root_key_history (id) on delete RESTRICT, -- if null, means wasn't bootstrapped yet
-    tls_id integer references tls_history (id) on delete RESTRICT
+    cert_key blob not null,
+    cert blob not null
 ) STRICT;

-insert into arbiter_settings (id) values (1) on conflict do nothing; -- ensure singleton row exists
-
 create table if not exists useragent_client (
     id integer not null primary key,
     nonce integer not null default(1), -- used for auth challenge
@@ -56,103 +46,4 @@ create table if not exists program_client (
     public_key blob not null,
     created_at integer not null default(unixepoch ('now')),
     updated_at integer not null default(unixepoch ('now'))
 ) STRICT;
-
-create table if not exists evm_wallet (
-    id integer not null primary key,
-    address blob not null, -- 20-byte Ethereum address
-    aead_encrypted_id integer not null references aead_encrypted (id) on delete RESTRICT,
-    created_at integer not null default(unixepoch ('now'))
-) STRICT;
-
-create unique index if not exists uniq_evm_wallet_address on evm_wallet (address);
-create unique index if not exists uniq_evm_wallet_aead on evm_wallet (aead_encrypted_id);
-
-create table if not exists evm_ether_transfer_limit (
-    id integer not null primary key,
-    window_secs integer not null, -- window duration in seconds
-    max_volume blob not null -- big-endian 32-byte U256
-) STRICT;
-
--- Shared grant properties: client scope, timeframe, fee caps, and rate limit
-create table if not exists evm_basic_grant (
-    id integer not null primary key,
-    wallet_id integer not null references evm_wallet(id) on delete restrict,
-    client_id integer not null references program_client(id) on delete restrict,
-    chain_id integer not null, -- EIP-155 chain ID
-    valid_from integer, -- unix timestamp (seconds), null = no lower bound
-    valid_until integer, -- unix timestamp (seconds), null = no upper bound
-    max_gas_fee_per_gas blob, -- big-endian 32-byte U256, null = unlimited
-    max_priority_fee_per_gas blob, -- big-endian 32-byte U256, null = unlimited
-    rate_limit_count integer, -- max transactions in window, null = unlimited
-    rate_limit_window_secs integer, -- window duration in seconds, null = unlimited
-    revoked_at integer, -- unix timestamp when revoked, null = still active
-    created_at integer not null default(unixepoch('now'))
-) STRICT;
-
--- Shared transaction log for all EVM grants, used for rate limit tracking and auditing
-create table if not exists evm_transaction_log (
-    id integer not null primary key,
-    grant_id integer not null references evm_basic_grant(id) on delete restrict,
-    client_id integer not null references program_client(id) on delete restrict,
-    wallet_id integer not null references evm_wallet(id) on delete restrict,
-    chain_id integer not null,
-    eth_value blob not null, -- always present on any EVM tx
-    signed_at integer not null default(unixepoch('now'))
-) STRICT;
-
-create index if not exists idx_evm_basic_grant_wallet_chain on evm_basic_grant(client_id, wallet_id, chain_id);
-
--- ===============================
--- ERC20 token transfer grant
--- ===============================
-create table if not exists evm_token_transfer_grant (
-    id integer not null primary key,
-    basic_grant_id integer not null unique references evm_basic_grant(id) on delete cascade,
-    token_contract blob not null, -- 20-byte ERC20 contract address
-    receiver blob -- 20-byte recipient address or null if every recipient allowed
-) STRICT;
-
--- Per-window volume limits for token transfer grants
-create table if not exists evm_token_transfer_volume_limit (
-    id integer not null primary key,
-    grant_id integer not null references evm_token_transfer_grant(id) on delete cascade,
-    window_secs integer not null, -- window duration in seconds
-    max_volume blob not null -- big-endian 32-byte U256
-) STRICT;
-
--- Log table for token transfer grant usage
-create table if not exists evm_token_transfer_log (
-    id integer not null primary key,
-    grant_id integer not null references evm_token_transfer_grant(id) on delete restrict,
-    log_id integer not null references evm_transaction_log(id) on delete restrict,
-    chain_id integer not null, -- EIP-155 chain ID
-    token_contract blob not null, -- 20-byte ERC20 contract address
-    recipient_address blob not null, -- 20-byte recipient address
-    value blob not null, -- big-endian 32-byte U256
-    created_at integer not null default(unixepoch('now'))
-) STRICT;
-
-create index if not exists idx_token_transfer_log_grant on evm_token_transfer_log(grant_id);
-create index if not exists idx_token_transfer_log_log_id on evm_token_transfer_log(log_id);
-create index if not exists idx_token_transfer_log_chain on evm_token_transfer_log(chain_id);
-
--- ===============================
|
|
||||||
-- Ether transfer grant (uses base log)
|
|
||||||
-- ===============================
|
|
||||||
create table if not exists evm_ether_transfer_grant (
|
|
||||||
id integer not null primary key,
|
|
||||||
basic_grant_id integer not null unique references evm_basic_grant(id) on delete cascade,
|
|
||||||
limit_id integer not null references evm_ether_transfer_limit(id) on delete restrict
|
|
||||||
) STRICT;
|
|
||||||
|
|
||||||
-- Specific recipient addresses for an ether transfer grant
|
|
||||||
create table if not exists evm_ether_transfer_grant_target (
|
|
||||||
id integer not null primary key,
|
|
||||||
grant_id integer not null references evm_ether_transfer_grant(id) on delete cascade,
|
|
||||||
address blob not null -- 20-byte recipient address
|
|
||||||
) STRICT;
|
|
||||||
|
|
||||||
create unique index if not exists uniq_ether_transfer_target on evm_ether_transfer_grant_target(grant_id, address);
|
|
||||||
|
|
||||||
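The rate-limit columns on evm_basic_grant pair with evm_transaction_log: a request should pass only if fewer than rate_limit_count rows were logged for the grant within the last rate_limit_window_secs seconds, with null meaning unlimited. A minimal sketch of that check using Python's bundled sqlite3 (table and column names come from the schema above; the enforcement query itself is an assumption, and unrelated columns are omitted):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table evm_basic_grant (
    id integer primary key,
    rate_limit_count integer,        -- max transactions in window, null = unlimited
    rate_limit_window_secs integer   -- window duration in seconds, null = unlimited
);
create table evm_transaction_log (
    id integer primary key,
    grant_id integer not null references evm_basic_grant(id),
    signed_at integer not null
);
""")

def within_rate_limit(grant_id: int, now: int) -> bool:
    # Hypothetical enforcement query: count recent log rows for the grant
    # and compare against the grant's configured ceiling.
    count, window = conn.execute(
        "select rate_limit_count, rate_limit_window_secs "
        "from evm_basic_grant where id = ?", (grant_id,)).fetchone()
    if count is None or window is None:
        return True  # null = unlimited
    (used,) = conn.execute(
        "select count(*) from evm_transaction_log "
        "where grant_id = ? and signed_at > ?", (grant_id, now - window)).fetchone()
    return used < count

# Grant 1 allows at most 2 transactions per 60-second window.
conn.execute("insert into evm_basic_grant values (1, 2, 60)")
now = int(time.time())
conn.execute("insert into evm_transaction_log (grant_id, signed_at) values (1, ?)", (now,))
print(within_rate_limit(1, now))  # prints True (one of two slots used)
conn.execute("insert into evm_transaction_log (grant_id, signed_at) values (1, ?)", (now,))
print(within_rate_limit(1, now))  # prints False (window exhausted)
```

In the real schema signed_at defaults to unixepoch('now'); it is passed explicitly here so the sketch does not depend on the SQLite version.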
BIN  server/crates/arbiter-server/src/.DS_Store (vendored, new file)

server/crates/arbiter-server/src/actors.rs (new file, 4 lines)
@@ -0,0 +1,4 @@
+pub mod user_agent;
+pub mod client;
+pub(crate) mod bootstrap;
+pub(crate) mod keyholder;
@@ -1,37 +1,34 @@
-use arbiter_proto::{BOOTSTRAP_PATH, home_path};
+use arbiter_proto::{BOOTSTRAP_TOKEN_PATH, home_path};
 use diesel::QueryDsl;
 use diesel_async::RunQueryDsl;
 use kameo::{Actor, messages};
 use miette::Diagnostic;
-use rand::{
-    RngExt,
-    distr::{Alphanumeric},
-    make_rng,
-    rngs::StdRng,
-};
+use rand::{RngExt, distr::StandardUniform, make_rng, rngs::StdRng};
 use thiserror::Error;
+use tracing::info;

 use crate::db::{self, DatabasePool, schema};

 const TOKEN_LENGTH: usize = 64;

 pub async fn generate_token() -> Result<String, std::io::Error> {
     let rng: StdRng = make_rng();

-    let token: String = rng.sample_iter(Alphanumeric).take(TOKEN_LENGTH).fold(
-        Default::default(),
-        |mut accum, char| {
-            accum += char.to_string().as_str();
-            accum
-        },
-    );
+    let token: String = rng
+        .sample_iter::<char, _>(StandardUniform)
+        .take(TOKEN_LENGTH)
+        .fold(Default::default(), |mut accum, char| {
+            accum += char.to_string().as_str();
+            accum
+        });

-    tokio::fs::write(home_path()?.join(BOOTSTRAP_PATH), token.as_str()).await?;
+    tokio::fs::write(home_path()?.join(BOOTSTRAP_TOKEN_PATH), token.as_str()).await?;

     Ok(token)
 }

 #[derive(Error, Debug, Diagnostic)]
-pub enum Error {
+pub enum BootstrapError {
     #[error("Database error: {0}")]
     #[diagnostic(code(arbiter_server::bootstrap::database))]
     Database(#[from] db::PoolError),
@@ -51,7 +48,7 @@ pub struct Bootstrapper {
 }

 impl Bootstrapper {
-    pub async fn new(db: &DatabasePool) -> Result<Self, Error> {
+    pub async fn new(db: &DatabasePool) -> Result<Self, BootstrapError> {
         let mut conn = db.get().await?;

         let row_count: i64 = schema::useragent_client::table
@@ -61,9 +58,10 @@ impl Bootstrapper {

         drop(conn);

         let token = if row_count == 0 {
             let token = generate_token().await?;
+            info!(%token, "Generated bootstrap token");
+            tokio::fs::write(home_path()?.join(BOOTSTRAP_TOKEN_PATH), token.as_str()).await?;
             Some(token)
         } else {
             None
@@ -71,6 +69,11 @@ impl Bootstrapper {

         Ok(Self { token })
     }
+
+    #[cfg(test)]
+    pub fn get_token(&self) -> Option<String> {
+        self.token.clone()
+    }
 }

 #[messages]
@@ -93,11 +96,3 @@ impl Bootstrapper {
         }
     }
 }
-
-#[messages]
-impl Bootstrapper {
-    #[message]
-    pub fn get_token(&self) -> Option<String> {
-        self.token.clone()
-    }
-}
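The generate_token change above swaps rand's Alphanumeric distribution for uniformly sampled chars and writes the result to BOOTSTRAP_TOKEN_PATH. As a rough analogue of the alphanumeric variant (a sketch using Python's stdlib, not the project's code):

```python
import secrets
import string

TOKEN_LENGTH = 64  # mirrors the TOKEN_LENGTH constant in the diff

def generate_token() -> str:
    # Sample TOKEN_LENGTH characters uniformly from [A-Za-z0-9]
    # using a cryptographically secure RNG.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(TOKEN_LENGTH))

token = generate_token()
print(len(token))  # prints 64
```

Each alphanumeric character carries about 5.95 bits of entropy, so a 64-character token holds roughly 380 bits, far more than enough for a one-shot bootstrap secret.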
server/crates/arbiter-server/src/actors/client.rs (new file, 12 lines)
@@ -0,0 +1,12 @@
+use arbiter_proto::{
+    proto::{ClientRequest, ClientResponse},
+    transport::Bi,
+};
+
+use crate::ServerContext;
+
+pub(crate) async fn handle_client(
+    _context: ServerContext,
+    _bistream: impl Bi<ClientRequest, ClientResponse>,
+) {
+}
@@ -1,102 +0,0 @@
-use arbiter_proto::proto::client::{
-    AuthChallengeRequest, AuthChallengeSolution, ClientRequest,
-    client_request::Payload as ClientRequestPayload,
-};
-use ed25519_dalek::VerifyingKey;
-use tracing::error;
-
-use crate::actors::client::{
-    ClientConnection,
-    auth::state::{AuthContext, AuthStateMachine},
-    session::ClientSession,
-};
-
-#[derive(thiserror::Error, Debug, Clone, PartialEq, Eq)]
-pub enum Error {
-    #[error("Unexpected message payload")]
-    UnexpectedMessagePayload,
-    #[error("Invalid client public key length")]
-    InvalidClientPubkeyLength,
-    #[error("Invalid client public key encoding")]
-    InvalidAuthPubkeyEncoding,
-    #[error("Database pool unavailable")]
-    DatabasePoolUnavailable,
-    #[error("Database operation failed")]
-    DatabaseOperationFailed,
-    #[error("Public key not registered")]
-    PublicKeyNotRegistered,
-    #[error("Invalid signature length")]
-    InvalidSignatureLength,
-    #[error("Invalid challenge solution")]
-    InvalidChallengeSolution,
-    #[error("Transport error")]
-    Transport,
-}
-
-mod state;
-use state::*;
-
-fn parse_auth_event(payload: ClientRequestPayload) -> Result<AuthEvents, Error> {
-    match payload {
-        ClientRequestPayload::AuthChallengeRequest(AuthChallengeRequest { pubkey }) => {
-            let pubkey_bytes = pubkey.as_array().ok_or(Error::InvalidClientPubkeyLength)?;
-            let pubkey = VerifyingKey::from_bytes(pubkey_bytes)
-                .map_err(|_| Error::InvalidAuthPubkeyEncoding)?;
-            Ok(AuthEvents::AuthRequest(ChallengeRequest {
-                pubkey: pubkey.into(),
-            }))
-        }
-        ClientRequestPayload::AuthChallengeSolution(AuthChallengeSolution { signature }) => {
-            Ok(AuthEvents::ReceivedSolution(ChallengeSolution {
-                solution: signature,
-            }))
-        }
-        _ => Err(Error::UnexpectedMessagePayload),
-    }
-}
-
-pub async fn authenticate(props: &mut ClientConnection) -> Result<VerifyingKey, Error> {
-    let mut state = AuthStateMachine::new(AuthContext::new(props));
-
-    loop {
-        let transport = state.context_mut().conn.transport.as_mut();
-        let Some(ClientRequest {
-            payload: Some(payload),
-        }) = transport.recv().await
-        else {
-            return Err(Error::Transport);
-        };
-
-        let event = parse_auth_event(payload)?;
-
-        match state.process_event(event).await {
-            Ok(AuthStates::AuthOk(key)) => return Ok(key.clone()),
-            Err(AuthError::ActionFailed(err)) => {
-                error!(?err, "State machine action failed");
-                return Err(err);
-            }
-            Err(AuthError::GuardFailed(err)) => {
-                error!(?err, "State machine guard failed");
-                return Err(err);
-            }
-            Err(AuthError::InvalidEvent) => {
-                error!("Invalid event for current state");
-                return Err(Error::InvalidChallengeSolution);
-            }
-            Err(AuthError::TransitionsFailed) => {
-                error!("Invalid state transition");
-                return Err(Error::InvalidChallengeSolution);
-            }
-            _ => (),
-        }
-    }
-}
-
-pub async fn authenticate_and_create(
-    mut props: ClientConnection,
-) -> Result<ClientSession, Error> {
-    let key = authenticate(&mut props).await?;
-    let session = ClientSession::new(props, key);
-    Ok(session)
-}
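The deleted module above implemented a nonce-based challenge-response: the client announces a public key, the server formats a challenge from a per-key monotonic nonce, and the client returns a signature over that challenge which the server verifies. The same handshake shape can be sketched with a shared-secret MAC standing in for the Ed25519 signature (HMAC here is an illustrative substitute, not what the project uses, and format_challenge is a stand-in for arbiter_proto::format_challenge):

```python
import hashlib
import hmac
import secrets

def format_challenge(nonce: int, pubkey: bytes) -> bytes:
    # Bind the nonce to the key identity so a solution cannot be replayed
    # for a different key or a different round.
    return nonce.to_bytes(8, "big") + pubkey

class Server:
    def __init__(self) -> None:
        self.nonces: dict[bytes, int] = {}  # per-key monotonic nonce, like program_client.nonce

    def prepare_challenge(self, key_id: bytes) -> bytes:
        nonce = self.nonces.get(key_id, 0)
        self.nonces[key_id] = nonce + 1  # increment so a nonce is never reused
        return format_challenge(nonce, key_id)

    def verify_solution(self, secret: bytes, challenge: bytes, solution: bytes) -> bool:
        expected = hmac.new(secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, solution)

secret = secrets.token_bytes(32)
key_id = b"client-key-1"
server = Server()
challenge = server.prepare_challenge(key_id)
solution = hmac.new(secret, challenge, hashlib.sha256).digest()  # client side
print(server.verify_solution(secret, challenge, solution))  # prints True
```

In the deleted Rust code the nonce increment happens inside an exclusive database transaction, which gives the same never-reuse guarantee across concurrent connections.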
@@ -1,136 +0,0 @@
-use arbiter_proto::proto::client::{
-    AuthChallenge, ClientResponse,
-    client_response::Payload as ClientResponsePayload,
-};
-use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, update};
-use diesel_async::RunQueryDsl;
-use ed25519_dalek::VerifyingKey;
-use tracing::error;
-
-use super::Error;
-use crate::{actors::client::ClientConnection, db::schema};
-
-pub struct ChallengeRequest {
-    pub pubkey: VerifyingKey,
-}
-
-pub struct ChallengeContext {
-    pub challenge: AuthChallenge,
-    pub key: VerifyingKey,
-}
-
-pub struct ChallengeSolution {
-    pub solution: Vec<u8>,
-}
-
-smlang::statemachine!(
-    name: Auth,
-    custom_error: true,
-    transitions: {
-        *Init + AuthRequest(ChallengeRequest) / async prepare_challenge = SentChallenge(ChallengeContext),
-        SentChallenge(ChallengeContext) + ReceivedSolution(ChallengeSolution) [async verify_solution] / provide_key = AuthOk(VerifyingKey),
-    }
-);
-
-async fn create_nonce(db: &crate::db::DatabasePool, pubkey_bytes: &[u8]) -> Result<i32, Error> {
-    let mut db_conn = db.get().await.map_err(|e| {
-        error!(error = ?e, "Database pool error");
-        Error::DatabasePoolUnavailable
-    })?;
-    db_conn
-        .exclusive_transaction(|conn| {
-            Box::pin(async move {
-                let current_nonce = schema::program_client::table
-                    .filter(schema::program_client::public_key.eq(pubkey_bytes.to_vec()))
-                    .select(schema::program_client::nonce)
-                    .first::<i32>(conn)
-                    .await?;
-
-                update(schema::program_client::table)
-                    .filter(schema::program_client::public_key.eq(pubkey_bytes.to_vec()))
-                    .set(schema::program_client::nonce.eq(current_nonce + 1))
-                    .execute(conn)
-                    .await?;
-
-                Result::<_, diesel::result::Error>::Ok(current_nonce)
-            })
-        })
-        .await
-        .optional()
-        .map_err(|e| {
-            error!(error = ?e, "Database error");
-            Error::DatabaseOperationFailed
-        })?
-        .ok_or_else(|| {
-            error!(?pubkey_bytes, "Public key not found in database");
-            Error::PublicKeyNotRegistered
-        })
-}
-
-pub struct AuthContext<'a> {
-    pub(super) conn: &'a mut ClientConnection,
-}
-
-impl<'a> AuthContext<'a> {
-    pub fn new(conn: &'a mut ClientConnection) -> Self {
-        Self { conn }
-    }
-}
-
-impl AuthStateMachineContext for AuthContext<'_> {
-    type Error = Error;
-
-    async fn verify_solution(
-        &self,
-        ChallengeContext { challenge, key }: &ChallengeContext,
-        ChallengeSolution { solution }: &ChallengeSolution,
-    ) -> Result<bool, Self::Error> {
-        let formatted_challenge =
-            arbiter_proto::format_challenge(challenge.nonce, &challenge.pubkey);
-
-        let signature = solution.as_slice().try_into().map_err(|_| {
-            error!(?solution, "Invalid signature length");
-            Error::InvalidChallengeSolution
-        })?;
-
-        let valid = key.verify_strict(&formatted_challenge, &signature).is_ok();
-
-        Ok(valid)
-    }
-
-    async fn prepare_challenge(
-        &mut self,
-        ChallengeRequest { pubkey }: ChallengeRequest,
-    ) -> Result<ChallengeContext, Self::Error> {
-        let nonce = create_nonce(&self.conn.db, pubkey.as_bytes()).await?;
-
-        let challenge = AuthChallenge {
-            pubkey: pubkey.as_bytes().to_vec(),
-            nonce,
-        };
-
-        self.conn
-            .transport
-            .send(Ok(ClientResponse {
-                payload: Some(ClientResponsePayload::AuthChallenge(challenge.clone())),
-            }))
-            .await
-            .map_err(|e| {
-                error!(?e, "Failed to send auth challenge");
-                Error::Transport
-            })?;
-
-        Ok(ChallengeContext {
-            challenge,
-            key: pubkey,
-        })
-    }
-
-    fn provide_key(
-        &mut self,
-        state_data: &ChallengeContext,
-        _: ChallengeSolution,
-    ) -> Result<VerifyingKey, Self::Error> {
-        Ok(state_data.key)
-    }
-}
@@ -1,58 +0,0 @@
-use arbiter_proto::{
-    proto::client::{ClientRequest, ClientResponse},
-    transport::Bi,
-};
-use kameo::actor::Spawn;
-use tracing::{error, info};
-
-use crate::{
-    actors::{GlobalActors, client::session::ClientSession},
-    db,
-};
-
-#[derive(Debug, Clone, PartialEq, Eq, thiserror::Error)]
-pub enum ClientError {
-    #[error("Expected message with payload")]
-    MissingRequestPayload,
-    #[error("Unexpected request payload")]
-    UnexpectedRequestPayload,
-    #[error("State machine error")]
-    StateTransitionFailed,
-    #[error("Connection registration failed")]
-    ConnectionRegistrationFailed,
-    #[error(transparent)]
-    Auth(#[from] auth::Error),
-}
-
-pub type Transport = Box<dyn Bi<ClientRequest, Result<ClientResponse, ClientError>> + Send>;
-
-pub struct ClientConnection {
-    pub(crate) db: db::DatabasePool,
-    pub(crate) transport: Transport,
-    pub(crate) actors: GlobalActors,
-}
-
-impl ClientConnection {
-    pub fn new(db: db::DatabasePool, transport: Transport, actors: GlobalActors) -> Self {
-        Self {
-            db,
-            transport,
-            actors,
-        }
-    }
-}
-
-pub mod auth;
-pub mod session;
-
-pub async fn connect_client(props: ClientConnection) {
-    match auth::authenticate_and_create(props).await {
-        Ok(session) => {
-            ClientSession::spawn(session);
-            info!("Client authenticated, session started");
-        }
-        Err(err) => {
-            error!(?err, "Authentication failed, closing connection");
-        }
-    }
-}
@@ -1,98 +0,0 @@
-use arbiter_proto::proto::client::{ClientRequest, ClientResponse};
-use ed25519_dalek::VerifyingKey;
-use kameo::Actor;
-use tokio::select;
-use tracing::{error, info};
-
-use crate::{actors::{
-    GlobalActors, client::{ClientError, ClientConnection}, router::RegisterClient
-}, db};
-
-pub struct ClientSession {
-    props: ClientConnection,
-    key: VerifyingKey,
-}
-
-impl ClientSession {
-    pub(crate) fn new(props: ClientConnection, key: VerifyingKey) -> Self {
-        Self { props, key }
-    }
-
-    pub async fn process_transport_inbound(&mut self, req: ClientRequest) -> Output {
-        let msg = req.payload.ok_or_else(|| {
-            error!(actor = "client", "Received message with no payload");
-            ClientError::MissingRequestPayload
-        })?;
-
-        match msg {
-            _ => Err(ClientError::UnexpectedRequestPayload),
-        }
-    }
-}
-
-type Output = Result<ClientResponse, ClientError>;
-
-impl Actor for ClientSession {
-    type Args = Self;
-
-    type Error = ClientError;
-
-    async fn on_start(
-        args: Self::Args,
-        this: kameo::prelude::ActorRef<Self>,
-    ) -> Result<Self, Self::Error> {
-        args.props
-            .actors
-            .router
-            .ask(RegisterClient { actor: this })
-            .await
-            .map_err(|_| ClientError::ConnectionRegistrationFailed)?;
-        Ok(args)
-    }
-
-    async fn next(
-        &mut self,
-        _actor_ref: kameo::prelude::WeakActorRef<Self>,
-        mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
-    ) -> Option<kameo::mailbox::Signal<Self>> {
-        loop {
-            select! {
-                signal = mailbox_rx.recv() => {
-                    return signal;
-                }
-                msg = self.props.transport.recv() => {
-                    match msg {
-                        Some(request) => {
-                            match self.process_transport_inbound(request).await {
-                                Ok(resp) => {
-                                    if self.props.transport.send(Ok(resp)).await.is_err() {
-                                        error!(actor = "client", reason = "channel closed", "send.failed");
-                                        return Some(kameo::mailbox::Signal::Stop);
-                                    }
-                                }
-                                Err(err) => {
-                                    let _ = self.props.transport.send(Err(err)).await;
-                                    return Some(kameo::mailbox::Signal::Stop);
-                                }
-                            }
-                        }
-                        None => {
-                            info!(actor = "client", "transport.closed");
-                            return Some(kameo::mailbox::Signal::Stop);
-                        }
-                    }
-                }
-            }
-        }
-    }
-}
-
-impl ClientSession {
-    pub fn new_test(db: db::DatabasePool, actors: GlobalActors) -> Self {
-        use arbiter_proto::transport::DummyTransport;
-        let transport: super::Transport = Box::new(DummyTransport::new());
-        let props = ClientConnection::new(db, transport, actors);
-        let key = VerifyingKey::from_bytes(&[0u8; 32]).unwrap();
-        Self { props, key }
-    }
-}
@@ -1,246 +0,0 @@
-use alloy::{consensus::TxEip1559, network::TxSigner, primitives::Address, signers::Signature};
-use diesel::{ExpressionMethods, OptionalExtension as _, QueryDsl, SelectableHelper as _, dsl::insert_into};
-use diesel_async::RunQueryDsl;
-use kameo::{Actor, actor::ActorRef, messages};
-use memsafe::MemSafe;
-use rand::{SeedableRng, rng, rngs::StdRng};
-
-use crate::{
-    actors::keyholder::{CreateNew, Decrypt, KeyHolder},
-    db::{self, DatabasePool, models::{self, EvmBasicGrant, SqliteTimestamp}, schema},
-    evm::{
-        self, RunKind,
-        policies::{
-            FullGrant, SharedGrantSettings, SpecificGrant, SpecificMeaning,
-            ether_transfer::EtherTransfer,
-            token_transfers::TokenTransfer,
-        },
-    },
-};
-
-pub use crate::evm::safe_signer;
-
-#[derive(Debug, thiserror::Error, miette::Diagnostic)]
-pub enum SignTransactionError {
-    #[error("Wallet not found")]
-    #[diagnostic(code(arbiter::evm::sign::wallet_not_found))]
-    WalletNotFound,
-
-    #[error("Database error: {0}")]
-    #[diagnostic(code(arbiter::evm::sign::database))]
-    Database(#[from] diesel::result::Error),
-
-    #[error("Database pool error: {0}")]
-    #[diagnostic(code(arbiter::evm::sign::pool))]
-    Pool(#[from] db::PoolError),
-
-    #[error("Keyholder error: {0}")]
-    #[diagnostic(code(arbiter::evm::sign::keyholder))]
-    Keyholder(#[from] crate::actors::keyholder::Error),
-
-    #[error("Keyholder mailbox error")]
-    #[diagnostic(code(arbiter::evm::sign::keyholder_send))]
-    KeyholderSend,
-
-    #[error("Signing error: {0}")]
-    #[diagnostic(code(arbiter::evm::sign::signing))]
-    Signing(#[from] alloy::signers::Error),
-
-    #[error("Policy error: {0}")]
-    #[diagnostic(code(arbiter::evm::sign::vet))]
-    Vet(#[from] evm::VetError),
-}
-
-#[derive(Debug, thiserror::Error, miette::Diagnostic)]
-pub enum Error {
-    #[error("Keyholder error: {0}")]
-    #[diagnostic(code(arbiter::evm::keyholder))]
-    Keyholder(#[from] crate::actors::keyholder::Error),
-
-    #[error("Keyholder mailbox error")]
-    #[diagnostic(code(arbiter::evm::keyholder_send))]
-    KeyholderSend,
-
-    #[error("Database error: {0}")]
-    #[diagnostic(code(arbiter::evm::database))]
-    Database(#[from] diesel::result::Error),
-
-    #[error("Database pool error: {0}")]
-    #[diagnostic(code(arbiter::evm::database_pool))]
-    DatabasePool(#[from] db::PoolError),
-
-    #[error("Grant creation error: {0}")]
-    #[diagnostic(code(arbiter::evm::creation))]
-    Creation(#[from] evm::CreationError),
-}
-
-#[derive(Actor)]
-pub struct EvmActor {
-    pub keyholder: ActorRef<KeyHolder>,
-    pub db: DatabasePool,
-    pub rng: StdRng,
-    pub engine: evm::Engine,
-}
-
-impl EvmActor {
-    pub fn new(keyholder: ActorRef<KeyHolder>, db: DatabasePool) -> Self {
-        // is it safe to seed rng from system once?
-        // todo: audit
-        let rng = StdRng::from_rng(&mut rng());
-        let engine = evm::Engine::new(db.clone());
-        Self { keyholder, db, rng, engine }
-    }
-}
-
-#[messages]
-impl EvmActor {
-    #[message]
-    pub async fn generate(&mut self) -> Result<Address, Error> {
-        let (mut key_cell, address) = safe_signer::generate(&mut self.rng);
-
-        // Move raw key bytes into a Vec<u8> MemSafe for KeyHolder
-        let plaintext = {
-            let reader = key_cell.read().expect("MemSafe read");
-            MemSafe::new(reader.to_vec()).expect("MemSafe allocation")
-        };
-
-        let aead_id: i32 = self
-            .keyholder
-            .ask(CreateNew { plaintext })
-            .await
-            .map_err(|_| Error::KeyholderSend)?;
-
-        let mut conn = self.db.get().await?;
-        insert_into(schema::evm_wallet::table)
-            .values(&models::NewEvmWallet {
-                address: address.as_slice().to_vec(),
-                aead_encrypted_id: aead_id,
-            })
-            .execute(&mut conn)
-            .await?;
-
-        Ok(address)
-    }
-
-    #[message]
-    pub async fn list_wallets(&self) -> Result<Vec<Address>, Error> {
-        let mut conn = self.db.get().await?;
-        let rows: Vec<models::EvmWallet> = schema::evm_wallet::table
-            .select(models::EvmWallet::as_select())
-            .load(&mut conn)
-            .await?;
-
-        Ok(rows
-            .into_iter()
-            .map(|w| Address::from_slice(&w.address))
-            .collect())
-    }
-}
-
-#[messages]
-impl EvmActor {
-    #[message]
-    pub async fn useragent_create_grant(
-        &mut self,
-        client_id: i32,
-        basic: SharedGrantSettings,
-        grant: SpecificGrant,
-    ) -> Result<i32, evm::CreationError> {
-        match grant {
-            SpecificGrant::EtherTransfer(settings) => {
-                self.engine
-                    .create_grant::<EtherTransfer>(client_id, FullGrant { basic, specific: settings })
-                    .await
-            }
-            SpecificGrant::TokenTransfer(settings) => {
-                self.engine
-                    .create_grant::<TokenTransfer>(client_id, FullGrant { basic, specific: settings })
-                    .await
-            }
-        }
-    }
-
-    #[message]
-    pub async fn useragent_delete_grant(&mut self, grant_id: i32) -> Result<(), Error> {
-        let mut conn = self.db.get().await?;
-        diesel::update(schema::evm_basic_grant::table)
-            .filter(schema::evm_basic_grant::id.eq(grant_id))
-            .set(schema::evm_basic_grant::revoked_at.eq(SqliteTimestamp::now()))
-            .execute(&mut conn)
-            .await?;
-        Ok(())
-    }
-
-    #[message]
-    pub async fn useragent_list_grants(
-        &mut self,
-        wallet_id: Option<i32>,
-    ) -> Result<Vec<EvmBasicGrant>, Error> {
-        let mut conn = self.db.get().await?;
-        let mut query = schema::evm_basic_grant::table
-            .select(EvmBasicGrant::as_select())
-            .filter(schema::evm_basic_grant::revoked_at.is_null())
-            .into_boxed();
-        if let Some(wid) = wallet_id {
-            query = query.filter(schema::evm_basic_grant::wallet_id.eq(wid));
-        }
-        Ok(query.load(&mut conn).await?)
-    }
-
-    #[message]
-    pub async fn shared_analyze_transaction(
-        &mut self,
-        client_id: i32,
-        wallet_address: Address,
-        transaction: TxEip1559,
-    ) -> Result<SpecificMeaning, SignTransactionError> {
-        let mut conn = self.db.get().await?;
-        let wallet = schema::evm_wallet::table
-            .select(models::EvmWallet::as_select())
-            .filter(schema::evm_wallet::address.eq(wallet_address.as_slice()))
-            .first(&mut conn)
-            .await
-            .optional()?
-            .ok_or(SignTransactionError::WalletNotFound)?;
-        drop(conn);
-
-        let meaning = self.engine
-            .evaluate_transaction(wallet.id, client_id, transaction.clone(), RunKind::Execution)
-            .await?;
-
-        Ok(meaning)
-    }
-
-    #[message]
-    pub async fn client_sign_transaction(
-        &mut self,
-        client_id: i32,
-        wallet_address: Address,
-        mut transaction: TxEip1559,
-    ) -> Result<Signature, SignTransactionError> {
-        let mut conn = self.db.get().await?;
-        let wallet = schema::evm_wallet::table
-            .select(models::EvmWallet::as_select())
-            .filter(schema::evm_wallet::address.eq(wallet_address.as_slice()))
-            .first(&mut conn)
-            .await
-            .optional()?
-            .ok_or(SignTransactionError::WalletNotFound)?;
-        drop(conn);
-
-        let raw_key: MemSafe<Vec<u8>> = self
-            .keyholder
-            .ask(Decrypt { aead_id: wallet.aead_encrypted_id })
-            .await
-            .map_err(|_| SignTransactionError::KeyholderSend)?;
-
-        let signer = safe_signer::SafeSigner::from_memsafe(raw_key)?;
-
-        self.engine
-            .evaluate_transaction(wallet.id, client_id, transaction.clone(), RunKind::Execution)
-            .await?;
-
-        use alloy::network::TxSignerSync as _;
-        Ok(signer.sign_transaction_sync(&mut transaction)?)
-    }
-}
939
server/crates/arbiter-server/src/actors/keyholder.rs
Normal file
@@ -0,0 +1,939 @@
use diesel::{
    ExpressionMethods as _, OptionalExtension, QueryDsl, SelectableHelper,
    dsl::{insert_into, update},
};
use diesel_async::{AsyncConnection, RunQueryDsl};
use kameo::{Actor, Reply, messages};
use memsafe::MemSafe;
use strum::{EnumDiscriminants, IntoDiscriminant};
use tracing::{error, info};

use crate::{
    actors::keyholder::v1::{KeyCell, Nonce},
    db::{
        self,
        models::{self, RootKeyHistory},
        schema::{self},
    },
};

pub mod v1;

#[derive(Default, EnumDiscriminants)]
#[strum_discriminants(derive(Reply), vis(pub))]
enum State {
    #[default]
    Unbootstrapped,
    Sealed {
        root_key_history_id: i32,
    },
    Unsealed {
        root_key_history_id: i32,
        root_key: KeyCell,
    },
}

#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum Error {
    #[error("Keyholder is already bootstrapped")]
    #[diagnostic(code(arbiter::keyholder::already_bootstrapped))]
    AlreadyBootstrapped,
    #[error("Keyholder is not bootstrapped")]
    #[diagnostic(code(arbiter::keyholder::not_bootstrapped))]
    NotBootstrapped,
    #[error("Invalid key provided")]
    #[diagnostic(code(arbiter::keyholder::invalid_key))]
    InvalidKey,

    #[error("Requested aead entry not found")]
    #[diagnostic(code(arbiter::keyholder::aead_not_found))]
    NotFound,

    #[error("Encryption error: {0}")]
    #[diagnostic(code(arbiter::keyholder::encryption_error))]
    Encryption(#[from] chacha20poly1305::aead::Error),

    #[error("Database error: {0}")]
    #[diagnostic(code(arbiter::keyholder::database_error))]
    DatabaseConnection(#[from] db::PoolError),

    #[error("Database transaction error: {0}")]
    #[diagnostic(code(arbiter::keyholder::database_transaction_error))]
    DatabaseTransaction(#[from] diesel::result::Error),

    #[error("Broken database")]
    #[diagnostic(code(arbiter::keyholder::broken_database))]
    BrokenDatabase,
}

/// Manages the vault root key and tracks the current state of the vault (bootstrapped/unbootstrapped, sealed/unsealed).
/// Provides an API for encrypting and decrypting data using the vault root key.
/// Abstracts over the database to make sure nonces are never reused and encryption keys are never exposed in plaintext outside of this actor.
#[derive(Actor)]
pub struct KeyHolder {
    db: db::DatabasePool,
    state: State,
}

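The doc comment above describes a small seal-state machine. As a toy illustration of those transitions (hypothetical types, stdlib only — the real `State` also carries a `KeyCell` and the root key history ID, and the real transition logic lives in the actor's message handlers):

```rust
// Toy model of the keyholder lifecycle: Unbootstrapped -> Unsealed on
// bootstrap; Sealed -> Unsealed only with a valid seal key.
#[derive(Debug, PartialEq)]
enum VaultState {
    Unbootstrapped,
    Sealed,
    Unsealed,
}

fn bootstrap(state: &mut VaultState) -> Result<(), &'static str> {
    // Only a fresh vault can be bootstrapped; doing so leaves it unsealed.
    if *state != VaultState::Unbootstrapped {
        return Err("already bootstrapped");
    }
    *state = VaultState::Unsealed;
    Ok(())
}

fn try_unseal(state: &mut VaultState, key_ok: bool) -> Result<(), &'static str> {
    // Unsealing only applies to a sealed vault; a wrong key leaves it sealed.
    if *state != VaultState::Sealed {
        return Err("not sealed");
    }
    if !key_ok {
        return Err("invalid key");
    }
    *state = VaultState::Unsealed;
    Ok(())
}

fn main() {
    let mut state = VaultState::Unbootstrapped;
    assert!(bootstrap(&mut state).is_ok());
    assert_eq!(state, VaultState::Unsealed);
    assert!(bootstrap(&mut state).is_err()); // double bootstrap rejected

    // After a restart the vault comes back sealed.
    state = VaultState::Sealed;
    assert!(try_unseal(&mut state, false).is_err());
    assert_eq!(state, VaultState::Sealed); // failed attempt leaves it sealed
    assert!(try_unseal(&mut state, true).is_ok());
    assert_eq!(state, VaultState::Unsealed);
}
```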
#[messages]
impl KeyHolder {
    pub async fn new(db: db::DatabasePool) -> Result<Self, Error> {
        let state = {
            let mut conn = db.get().await?;

            let (root_key_history,) = schema::arbiter_settings::table
                .left_join(schema::root_key_history::table)
                .select((Option::<RootKeyHistory>::as_select(),))
                .get_result::<(Option<RootKeyHistory>,)>(&mut conn)
                .await?;

            match root_key_history {
                Some(root_key_history) => State::Sealed {
                    root_key_history_id: root_key_history.id,
                },
                None => State::Unbootstrapped,
            }
        };

        Ok(Self { db, state })
    }

    // Exclusive transaction to avoid race conditions if multiple keyholders write;
    // an additional layer of protection against nonce reuse.
    async fn get_new_nonce(pool: &db::DatabasePool, root_key_id: i32) -> Result<Nonce, Error> {
        let mut conn = pool.get().await?;

        let nonce = conn
            .exclusive_transaction(|conn| {
                Box::pin(async move {
                    let current_nonce: Vec<u8> = schema::root_key_history::table
                        .filter(schema::root_key_history::id.eq(root_key_id))
                        .select(schema::root_key_history::data_encryption_nonce)
                        .first(conn)
                        .await?;

                    let mut nonce =
                        v1::Nonce::try_from(current_nonce.as_slice()).map_err(|_| {
                            error!(
                                "Broken database: invalid nonce for root key history id={}",
                                root_key_id
                            );
                            Error::BrokenDatabase
                        })?;
                    nonce.increment();

                    update(schema::root_key_history::table)
                        .filter(schema::root_key_history::id.eq(root_key_id))
                        .set(schema::root_key_history::data_encryption_nonce.eq(nonce.to_vec()))
                        .execute(conn)
                        .await?;

                    Result::<_, Error>::Ok(nonce)
                })
            })
            .await?;

        Ok(nonce)
    }

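`get_new_nonce` relies on `v1::Nonce` behaving as a strictly increasing counter. A minimal sketch of such a counter (an assumption about the `v1` module's internals — the actual width and byte order live in `v1`; 12 bytes and big-endian here are illustrative):

```rust
// Hypothetical sketch of a counter nonce: a fixed-size big-endian counter
// that only ever moves forward, so byte-wise comparison matches numeric order.
#[derive(Default, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct CounterNonce([u8; 12]);

impl CounterNonce {
    // Increment by one, carrying from the least significant (last) byte.
    fn increment(&mut self) {
        for byte in self.0.iter_mut().rev() {
            let (next, overflow) = byte.overflowing_add(1);
            *byte = next;
            if !overflow {
                return;
            }
        }
        // Fail closed instead of wrapping around to a reused nonce.
        panic!("nonce space exhausted");
    }

    fn to_vec(&self) -> Vec<u8> {
        self.0.to_vec()
    }
}

fn main() {
    let mut n = CounterNonce::default();
    n.increment();
    assert_eq!(*n.to_vec().last().unwrap(), 1);
    let prev = n.clone();
    n.increment();
    // Big-endian layout keeps derived (lexicographic) Ord numeric.
    assert!(n > prev);
}
```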
    #[message]
    pub async fn bootstrap(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
        if !matches!(self.state, State::Unbootstrapped) {
            return Err(Error::AlreadyBootstrapped);
        }
        let salt = v1::generate_salt();
        let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);
        let mut root_key = KeyCell::new_secure_random();

        // Zero nonces are fine because they are one-time
        let root_key_nonce = v1::Nonce::default();
        let data_encryption_nonce = v1::Nonce::default();

        let root_key_ciphertext: Vec<u8> = {
            let root_key_reader = root_key.0.read().unwrap();
            let root_key_reader = root_key_reader.as_slice();
            seal_key
                .encrypt(&root_key_nonce, v1::ROOT_KEY_TAG, root_key_reader)
                .map_err(|err| {
                    error!(?err, "Fatal bootstrap error");
                    Error::Encryption(err)
                })?
        };

        let mut conn = self.db.get().await?;

        let data_encryption_nonce_bytes = data_encryption_nonce.to_vec();
        let root_key_history_id = conn
            .transaction(|conn| {
                Box::pin(async move {
                    let root_key_history_id: i32 = insert_into(schema::root_key_history::table)
                        .values(&models::NewRootKeyHistory {
                            ciphertext: root_key_ciphertext,
                            tag: v1::ROOT_KEY_TAG.to_vec(),
                            root_key_encryption_nonce: root_key_nonce.to_vec(),
                            data_encryption_nonce: data_encryption_nonce_bytes,
                            schema_version: 1,
                            salt: salt.to_vec(),
                        })
                        .returning(schema::root_key_history::id)
                        .get_result(conn)
                        .await?;

                    update(schema::arbiter_settings::table)
                        .set(schema::arbiter_settings::root_key_id.eq(root_key_history_id))
                        .execute(conn)
                        .await?;

                    Result::<_, diesel::result::Error>::Ok(root_key_history_id)
                })
            })
            .await?;

        self.state = State::Unsealed {
            root_key,
            root_key_history_id,
        };

        info!("Keyholder bootstrapped successfully");

        Ok(())
    }

    #[message]
    pub async fn try_unseal(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
        let State::Sealed {
            root_key_history_id,
        } = &self.state
        else {
            return Err(Error::NotBootstrapped);
        };

        // We don't want to hold a connection while doing expensive KDF work
        let current_key = {
            let mut conn = self.db.get().await?;
            schema::root_key_history::table
                .filter(schema::root_key_history::id.eq(*root_key_history_id))
                .select(RootKeyHistory::as_select())
                .first(&mut conn)
                .await?
        };

        let salt = &current_key.salt;
        let salt = v1::Salt::try_from(salt.as_slice()).map_err(|_| {
            error!("Broken database: invalid salt for root key");
            Error::BrokenDatabase
        })?;
        let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);

        let mut root_key = MemSafe::new(current_key.ciphertext.clone()).unwrap();

        let nonce = v1::Nonce::try_from(current_key.root_key_encryption_nonce.as_slice()).map_err(
            |_| {
                error!("Broken database: invalid nonce for root key");
                Error::BrokenDatabase
            },
        )?;

        seal_key
            .decrypt_in_place(&nonce, v1::ROOT_KEY_TAG, &mut root_key)
            .map_err(|err| {
                error!(?err, "Failed to unseal root key: invalid seal key");
                Error::InvalidKey
            })?;

        self.state = State::Unsealed {
            root_key_history_id: current_key.id,
            root_key: v1::KeyCell::try_from(root_key).map_err(|err| {
                error!(?err, "Broken database: invalid encryption key size");
                Error::BrokenDatabase
            })?,
        };

        info!("Keyholder unsealed successfully");

        Ok(())
    }

    // Decrypts the `aead_encrypted` entry with the given ID and returns the plaintext
    #[message]
    pub async fn decrypt(&mut self, aead_id: i32) -> Result<MemSafe<Vec<u8>>, Error> {
        let State::Unsealed { root_key, .. } = &mut self.state else {
            return Err(Error::NotBootstrapped);
        };

        let row: models::AeadEncrypted = {
            let mut conn = self.db.get().await?;
            schema::aead_encrypted::table
                .select(models::AeadEncrypted::as_select())
                .filter(schema::aead_encrypted::id.eq(aead_id))
                .first(&mut conn)
                .await
                .optional()?
                .ok_or(Error::NotFound)?
        };

        let nonce = v1::Nonce::try_from(row.current_nonce.as_slice()).map_err(|_| {
            error!(
                "Broken database: invalid nonce for aead_encrypted id={}",
                aead_id
            );
            Error::BrokenDatabase
        })?;
        let mut output = MemSafe::new(row.ciphertext).unwrap();
        root_key.decrypt_in_place(&nonce, v1::TAG, &mut output)?;
        Ok(output)
    }

    // Creates a new `aead_encrypted` entry in the database and returns its ID
    #[message]
    pub async fn create_new(&mut self, mut plaintext: MemSafe<Vec<u8>>) -> Result<i32, Error> {
        let State::Unsealed {
            root_key,
            root_key_history_id,
        } = &mut self.state
        else {
            return Err(Error::NotBootstrapped);
        };

        // Order matters here - `get_new_nonce` acquires a connection, so we need to call it before the next acquire
        // Borrow checker note: the &mut borrow a few lines above is disjoint from this field
        let nonce = Self::get_new_nonce(&self.db, *root_key_history_id).await?;

        let mut ciphertext_buffer = plaintext.write().unwrap();
        let ciphertext_buffer: &mut Vec<u8> = ciphertext_buffer.as_mut();
        root_key.encrypt_in_place(&nonce, v1::TAG, &mut *ciphertext_buffer)?;

        let ciphertext = std::mem::take(ciphertext_buffer);

        let mut conn = self.db.get().await?;
        let aead_id: i32 = insert_into(schema::aead_encrypted::table)
            .values(&models::NewAeadEncrypted {
                ciphertext,
                tag: v1::TAG.to_vec(),
                current_nonce: nonce.to_vec(),
                schema_version: 1,
                associated_root_key_id: *root_key_history_id,
                created_at: chrono::Utc::now().timestamp() as i32,
            })
            .returning(schema::aead_encrypted::id)
            .get_result(&mut conn)
            .await?;

        Ok(aead_id)
    }

    #[message]
    pub fn get_state(&self) -> StateDiscriminants {
        self.state.discriminant()
    }
}

#[cfg(test)]
mod tests {
    use std::collections::{HashMap, HashSet};
    use std::sync::Arc;

    use diesel::dsl::{insert_into, sql_query, update};
    use diesel_async::RunQueryDsl;
    use futures::stream::TryUnfold;
    use kameo::actor::{ActorRef, Spawn as _};
    use memsafe::MemSafe;
    use tokio::sync::Mutex;
    use tokio::task::JoinSet;

    use crate::db::{self, models::ArbiterSetting};

    use super::*;

    async fn seed_settings(pool: &db::DatabasePool) {
        let mut conn = pool.get().await.unwrap();
        insert_into(schema::arbiter_settings::table)
            .values(&ArbiterSetting {
                id: 1,
                root_key_id: None,
                cert_key: vec![],
                cert: vec![],
            })
            .execute(&mut conn)
            .await
            .unwrap();
    }

    async fn bootstrapped_actor(db: &db::DatabasePool) -> KeyHolder {
        seed_settings(db).await;
        let mut actor = KeyHolder::new(db.clone()).await.unwrap();
        let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        actor.bootstrap(seal_key).await.unwrap();
        actor
    }

    async fn write_concurrently(
        actor: ActorRef<KeyHolder>,
        prefix: &'static str,
        count: usize,
    ) -> Vec<(i32, Vec<u8>)> {
        let mut set = JoinSet::new();
        for i in 0..count {
            let actor = actor.clone();
            set.spawn(async move {
                let plaintext = format!("{prefix}-{i}").into_bytes();
                let id = {
                    actor
                        .ask(CreateNew {
                            plaintext: MemSafe::new(plaintext.clone()).unwrap(),
                        })
                        .await
                        .unwrap()
                };
                (id, plaintext)
            });
        }

        let mut out = Vec::with_capacity(count);
        while let Some(res) = set.join_next().await {
            out.push(res.unwrap());
        }
        out
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_bootstrap() {
        let db = db::create_test_pool().await;
        seed_settings(&db).await;
        let mut actor = KeyHolder::new(db.clone()).await.unwrap();

        assert!(matches!(actor.state, State::Unbootstrapped));

        let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        actor.bootstrap(seal_key).await.unwrap();

        assert!(matches!(actor.state, State::Unsealed { .. }));

        let mut conn = db.get().await.unwrap();
        let row: models::RootKeyHistory = schema::root_key_history::table
            .select(models::RootKeyHistory::as_select())
            .first(&mut conn)
            .await
            .unwrap();

        assert_eq!(row.schema_version, 1);
        assert_eq!(row.tag, v1::ROOT_KEY_TAG);
        assert!(!row.ciphertext.is_empty());
        assert!(!row.salt.is_empty());
        assert_eq!(row.data_encryption_nonce, v1::Nonce::default().to_vec());
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_bootstrap_rejects_double() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let seal_key2 = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        let err = actor.bootstrap(seal_key2).await.unwrap_err();
        assert!(matches!(err, Error::AlreadyBootstrapped));
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_create_decrypt_roundtrip() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let plaintext = b"hello arbiter";
        let aead_id = actor
            .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
            .await
            .unwrap();

        let mut decrypted = actor.decrypt(aead_id).await.unwrap();
        let decrypted = decrypted.read().unwrap();
        assert_eq!(*decrypted, plaintext);
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_create_new_before_bootstrap_fails() {
        let db = db::create_test_pool().await;
        seed_settings(&db).await;
        let mut actor = KeyHolder::new(db).await.unwrap();

        let err = actor
            .create_new(MemSafe::new(b"data".to_vec()).unwrap())
            .await
            .unwrap_err();
        assert!(matches!(err, Error::NotBootstrapped));
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_decrypt_before_bootstrap_fails() {
        let db = db::create_test_pool().await;
        seed_settings(&db).await;
        let mut actor = KeyHolder::new(db).await.unwrap();

        let err = actor.decrypt(1).await.unwrap_err();
        assert!(matches!(err, Error::NotBootstrapped));
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_decrypt_nonexistent_returns_not_found() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let err = actor.decrypt(9999).await.unwrap_err();
        assert!(matches!(err, Error::NotFound));
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_new_restores_sealed_state() {
        let db = db::create_test_pool().await;
        let actor = bootstrapped_actor(&db).await;
        drop(actor);

        let actor2 = KeyHolder::new(db).await.unwrap();
        assert!(matches!(actor2.state, State::Sealed { .. }));
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_nonce_never_reused() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let n = 5;
        let mut ids = Vec::with_capacity(n);
        for i in 0..n {
            let id = actor
                .create_new(MemSafe::new(format!("secret {i}").into_bytes()).unwrap())
                .await
                .unwrap();
            ids.push(id);
        }

        // read all stored nonces from DB
        let mut conn = db.get().await.unwrap();
        let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
            .select(models::AeadEncrypted::as_select())
            .load(&mut conn)
            .await
            .unwrap();

        assert_eq!(rows.len(), n);

        let nonces: Vec<&Vec<u8>> = rows.iter().map(|r| &r.current_nonce).collect();
        let unique: HashSet<&Vec<u8>> = nonces.iter().copied().collect();
        assert_eq!(nonces.len(), unique.len(), "all nonces must be unique");

        // verify nonces are sequential increments from 1
        for (i, row) in rows.iter().enumerate() {
            let mut expected = v1::Nonce::default();
            for _ in 0..=i {
                expected.increment();
            }
            assert_eq!(row.current_nonce, expected.to_vec(), "nonce {i} mismatch");
        }

        // verify data_encryption_nonce on root_key_history tracks the latest nonce
        let root_row: models::RootKeyHistory = schema::root_key_history::table
            .select(models::RootKeyHistory::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        let last_nonce = &rows.last().unwrap().current_nonce;
        assert_eq!(
            &root_row.data_encryption_nonce, last_nonce,
            "root_key_history must track the latest nonce"
        );
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_unseal_correct_password() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let plaintext = b"survive a restart";
        let aead_id = actor
            .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
            .await
            .unwrap();
        drop(actor);

        let mut actor = KeyHolder::new(db.clone()).await.unwrap();
        assert!(matches!(actor.state, State::Sealed { .. }));

        let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        actor.try_unseal(seal_key).await.unwrap();
        assert!(matches!(actor.state, State::Unsealed { .. }));

        // previously encrypted data is still decryptable
        let mut decrypted = actor.decrypt(aead_id).await.unwrap();
        assert_eq!(*decrypted.read().unwrap(), plaintext);
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_unseal_wrong_then_correct_password() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let plaintext = b"important data";
        let aead_id = actor
            .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
            .await
            .unwrap();
        drop(actor);

        let mut actor = KeyHolder::new(db.clone()).await.unwrap();
        assert!(matches!(actor.state, State::Sealed { .. }));

        // wrong password
        let bad_key = MemSafe::new(b"wrong-password".to_vec()).unwrap();
        let err = actor.try_unseal(bad_key).await.unwrap_err();
        assert!(matches!(err, Error::InvalidKey));
        assert!(
            matches!(actor.state, State::Sealed { .. }),
            "state must remain Sealed after failed attempt"
        );

        // correct password
        let good_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        actor.try_unseal(good_key).await.unwrap();
        assert!(matches!(actor.state, State::Unsealed { .. }));

        let mut decrypted = actor.decrypt(aead_id).await.unwrap();
        assert_eq!(*decrypted.read().unwrap(), plaintext);
    }

    #[tokio::test]
    #[test_log::test]
    async fn test_ciphertext_differs_across_entries() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let plaintext = b"same content";
        let id1 = actor
            .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
            .await
            .unwrap();
        let id2 = actor
            .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
            .await
            .unwrap();

        // different nonces => different ciphertext, even for identical plaintext
        let mut conn = db.get().await.unwrap();
        let row1: models::AeadEncrypted = schema::aead_encrypted::table
            .filter(schema::aead_encrypted::id.eq(id1))
            .select(models::AeadEncrypted::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        let row2: models::AeadEncrypted = schema::aead_encrypted::table
            .filter(schema::aead_encrypted::id.eq(id2))
            .select(models::AeadEncrypted::as_select())
            .first(&mut conn)
            .await
            .unwrap();

        assert_ne!(row1.ciphertext, row2.ciphertext);

        // but both decrypt to the same plaintext
        let mut d1 = actor.decrypt(id1).await.unwrap();
        let mut d2 = actor.decrypt(id2).await.unwrap();
        assert_eq!(*d1.read().unwrap(), plaintext);
        assert_eq!(*d2.read().unwrap(), plaintext);
    }

    #[tokio::test]
    #[test_log::test]
    async fn concurrent_create_new_no_duplicate_nonces() {
        let db = db::create_test_pool().await;
        let actor = KeyHolder::spawn(bootstrapped_actor(&db).await);

        let writes = write_concurrently(actor, "nonce-unique", 32).await;
        assert_eq!(writes.len(), 32);

        let mut conn = db.get().await.unwrap();
        let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
            .select(models::AeadEncrypted::as_select())
            .load(&mut conn)
            .await
            .unwrap();
        assert_eq!(rows.len(), 32);

        let nonces: Vec<&Vec<u8>> = rows.iter().map(|r| &r.current_nonce).collect();
        let unique: HashSet<&Vec<u8>> = nonces.iter().copied().collect();
        assert_eq!(nonces.len(), unique.len(), "all nonces must be unique");
    }

    #[tokio::test]
    #[test_log::test]
    async fn concurrent_create_new_root_nonce_never_moves_backward() {
        let db = db::create_test_pool().await;
        let actor = KeyHolder::spawn(bootstrapped_actor(&db).await);

        write_concurrently(actor, "root-max", 24).await;

        let mut conn = db.get().await.unwrap();
        let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
            .select(models::AeadEncrypted::as_select())
            .load(&mut conn)
            .await
            .unwrap();
        let max_nonce = rows
            .iter()
            .map(|r| r.current_nonce.clone())
            .max()
            .expect("at least one row");

        let root_row: models::RootKeyHistory = schema::root_key_history::table
            .select(models::RootKeyHistory::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        assert_eq!(root_row.data_encryption_nonce, max_nonce);
    }

    #[tokio::test]
    #[test_log::test]
    async fn nonce_monotonic_even_when_nonce_allocation_interleaves() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;
        let root_key_history_id = match actor.state {
            State::Unsealed {
                root_key_history_id,
                ..
            } => root_key_history_id,
            _ => panic!("expected unsealed state"),
        };

        let n1 = KeyHolder::get_new_nonce(&db, root_key_history_id)
            .await
            .unwrap();
        let n2 = KeyHolder::get_new_nonce(&db, root_key_history_id)
            .await
            .unwrap();
        assert!(n2.to_vec() > n1.to_vec(), "nonce must increase");

        let mut conn = db.get().await.unwrap();
        let root_row: models::RootKeyHistory = schema::root_key_history::table
            .select(models::RootKeyHistory::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        assert_eq!(root_row.data_encryption_nonce, n2.to_vec());

        let id = actor
            .create_new(MemSafe::new(b"post-interleave".to_vec()).unwrap())
            .await
            .unwrap();
        let row: models::AeadEncrypted = schema::aead_encrypted::table
            .filter(schema::aead_encrypted::id.eq(id))
            .select(models::AeadEncrypted::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        assert!(
            row.current_nonce > n2.to_vec(),
            "next write must advance nonce"
        );
    }

    #[tokio::test]
    #[test_log::test]
    async fn insert_failure_does_not_create_partial_row() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;
        let root_key_history_id = match actor.state {
            State::Unsealed {
                root_key_history_id,
                ..
            } => root_key_history_id,
            _ => panic!("expected unsealed state"),
        };

        let mut conn = db.get().await.unwrap();
        let before_count: i64 = schema::aead_encrypted::table
            .count()
            .get_result(&mut conn)
            .await
            .unwrap();
        let before_root_nonce: Vec<u8> = schema::root_key_history::table
            .filter(schema::root_key_history::id.eq(root_key_history_id))
            .select(schema::root_key_history::data_encryption_nonce)
            .first(&mut conn)
            .await
            .unwrap();

        sql_query(
            "CREATE TRIGGER fail_aead_insert BEFORE INSERT ON aead_encrypted BEGIN SELECT RAISE(ABORT, 'forced test failure'); END;",
        )
        .execute(&mut conn)
        .await
        .unwrap();
        drop(conn);

        let err = actor
            .create_new(MemSafe::new(b"should fail".to_vec()).unwrap())
            .await
            .unwrap_err();
        assert!(matches!(err, Error::DatabaseTransaction(_)));

        let mut conn = db.get().await.unwrap();
        sql_query("DROP TRIGGER fail_aead_insert;")
            .execute(&mut conn)
            .await
            .unwrap();

        let after_count: i64 = schema::aead_encrypted::table
            .count()
            .get_result(&mut conn)
            .await
            .unwrap();
        assert_eq!(
            before_count, after_count,
            "failed insert must not create row"
        );

        let after_root_nonce: Vec<u8> = schema::root_key_history::table
            .filter(schema::root_key_history::id.eq(root_key_history_id))
            .select(schema::root_key_history::data_encryption_nonce)
            .first(&mut conn)
            .await
            .unwrap();
        assert!(
            after_root_nonce > before_root_nonce,
            "current behavior allows nonce gap on failed insert"
        );
    }

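The final assertion above documents a deliberate trade-off: a failed insert still consumes a nonce, leaving a gap in the sequence. Gaps are harmless for an AEAD counter nonce; only reuse is fatal. A toy reserve-then-write loop illustrating why (stdlib-only sketch, not the actor's actual code path):

```rust
use std::collections::HashSet;

// Every attempt reserves a fresh counter value first (as `get_new_nonce`
// does in its own transaction), and some of the subsequent writes fail.
fn main() {
    let mut counter: u64 = 0;
    let mut issued = HashSet::new();
    let mut stored = Vec::new();

    for attempt in 0..10 {
        counter += 1; // reservation is durable even if the write fails
        let nonce = counter;
        assert!(issued.insert(nonce), "a nonce must never be handed out twice");
        if attempt % 3 == 0 {
            continue; // simulated failed insert: nonce is burned, row is absent
        }
        stored.push(nonce);
    }

    // Gaps exist (the burned values), but stored nonces never repeat and the
    // sequence stays strictly increasing.
    assert_eq!(stored.len(), 6);
    assert!(stored.windows(2).all(|w| w[0] < w[1]));
}
```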
    #[tokio::test]
    #[test_log::test]
    async fn decrypt_roundtrip_after_high_concurrency() {
        let db = db::create_test_pool().await;
        let actor = KeyHolder::spawn(bootstrapped_actor(&db).await);

        let writes = write_concurrently(actor, "roundtrip", 40).await;
        let expected: HashMap<i32, Vec<u8>> = writes.into_iter().collect();

        let mut decryptor = KeyHolder::new(db.clone()).await.unwrap();
        decryptor
            .try_unseal(MemSafe::new(b"test-seal-key".to_vec()).unwrap())
            .await
            .unwrap();

        for (id, plaintext) in expected {
            let mut decrypted = decryptor.decrypt(id).await.unwrap();
            assert_eq!(*decrypted.read().unwrap(), plaintext);
        }
    }

    // #[tokio::test]
    // #[test_log::test]
    // async fn swapping_ciphertext_and_nonce_between_rows_changes_logical_binding() {
    //     let db = db::create_test_pool().await;
    //     let mut actor = bootstrapped_actor(&db).await;

    //     let plaintext1 = b"entry-one";
    //     let plaintext2 = b"entry-two";
    //     let id1 = actor
    //         .create_new(MemSafe::new(plaintext1.to_vec()).unwrap())
    //         .await
    //         .unwrap();
    //     let id2 = actor
    //         .create_new(MemSafe::new(plaintext2.to_vec()).unwrap())
    //         .await
    //         .unwrap();

    //     let mut conn = db.get().await.unwrap();
    //     let row1: models::AeadEncrypted = schema::aead_encrypted::table
    //         .filter(schema::aead_encrypted::id.eq(id1))
    //         .select(models::AeadEncrypted::as_select())
    //         .first(&mut conn)
    //         .await
    //         .unwrap();
    //     let row2: models::AeadEncrypted = schema::aead_encrypted::table
    //         .filter(schema::aead_encrypted::id.eq(id2))
    //         .select(models::AeadEncrypted::as_select())
    //         .first(&mut conn)
    //         .await
    //         .unwrap();

    //     update(schema::aead_encrypted::table.filter(schema::aead_encrypted::id.eq(id1)))
    //         .set((
    //             schema::aead_encrypted::ciphertext.eq(row2.ciphertext.clone()),
    //             schema::aead_encrypted::current_nonce.eq(row2.current_nonce.clone()),
    //         ))
    //         .execute(&mut conn)
    //         .await
    //         .unwrap();
    //     update(schema::aead_encrypted::table.filter(schema::aead_encrypted::id.eq(id2)))
    //         .set((
    //             schema::aead_encrypted::ciphertext.eq(row1.ciphertext.clone()),
    //             schema::aead_encrypted::current_nonce.eq(row1.current_nonce.clone()),
    //         ))
    //         .execute(&mut conn)
    //         .await
    //         .unwrap();

    //     let mut d1 = actor.decrypt(id1).await.unwrap();
    //     let mut d2 = actor.decrypt(id2).await.unwrap();
    //     assert_eq!(*d1.read().unwrap(), plaintext2);
    //     assert_eq!(*d2.read().unwrap(), plaintext1);
    // }

#[tokio::test]
|
||||||
|
#[test_log::test]
|
||||||
|
async fn broken_db_nonce_format_fails_closed() {
|
||||||
|
// malformed root_key_history nonce must fail create_new
|
||||||
|
let db = db::create_test_pool().await;
|
||||||
|
let mut actor = bootstrapped_actor(&db).await;
|
||||||
|
let root_key_history_id = match actor.state {
|
||||||
|
State::Unsealed {
|
||||||
|
root_key_history_id,
|
||||||
|
..
|
||||||
|
} => root_key_history_id,
|
||||||
|
_ => panic!("expected unsealed state"),
|
||||||
|
};
|
||||||
|
|
||||||
|
let mut conn = db.get().await.unwrap();
|
||||||
|
update(
|
||||||
|
schema::root_key_history::table
|
||||||
|
.filter(schema::root_key_history::id.eq(root_key_history_id)),
|
||||||
|
)
|
||||||
|
.set(schema::root_key_history::data_encryption_nonce.eq(vec![1, 2, 3]))
|
||||||
|
.execute(&mut conn)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
drop(conn);
|
||||||
|
|
||||||
|
let err = actor
|
||||||
|
.create_new(MemSafe::new(b"must fail".to_vec()).unwrap())
|
||||||
|
.await
|
||||||
|
.unwrap_err();
|
||||||
|
assert!(matches!(err, Error::BrokenDatabase));
|
||||||
|
|
||||||
|
// malformed per-row nonce must fail decrypt
|
||||||
|
let db = db::create_test_pool().await;
|
||||||
|
let mut actor = bootstrapped_actor(&db).await;
|
||||||
|
let id = actor
|
||||||
|
.create_new(MemSafe::new(b"decrypt target".to_vec()).unwrap())
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let mut conn = db.get().await.unwrap();
|
||||||
|
update(schema::aead_encrypted::table.filter(schema::aead_encrypted::id.eq(id)))
|
||||||
|
.set(schema::aead_encrypted::current_nonce.eq(vec![7, 8]))
|
||||||
|
.execute(&mut conn)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
drop(conn);
|
||||||
|
|
||||||
|
let err = actor.decrypt(id).await.unwrap_err();
|
||||||
|
assert!(matches!(err, Error::BrokenDatabase));
|
||||||
|
}
|
||||||
|
}
@@ -1 +0,0 @@
pub mod v1;
@@ -1,408 +0,0 @@
use chrono::Utc;
use diesel::{
    ExpressionMethods as _, OptionalExtension, QueryDsl, SelectableHelper,
    dsl::{insert_into, update},
};
use diesel_async::{AsyncConnection, RunQueryDsl};
use kameo::{Actor, Reply, messages};
use memsafe::MemSafe;
use strum::{EnumDiscriminants, IntoDiscriminant};
use tracing::{error, info};

use crate::db::{
    self,
    models::{self, RootKeyHistory},
    schema::{self},
};
use encryption::v1::{self, KeyCell, Nonce};

pub mod encryption;

#[derive(Default, EnumDiscriminants)]
#[strum_discriminants(derive(Reply), vis(pub))]
enum State {
    #[default]
    Unbootstrapped,
    Sealed {
        root_key_history_id: i32,
    },
    Unsealed {
        root_key_history_id: i32,
        root_key: KeyCell,
    },
}

#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum Error {
    #[error("Keyholder is already bootstrapped")]
    #[diagnostic(code(arbiter::keyholder::already_bootstrapped))]
    AlreadyBootstrapped,
    #[error("Keyholder is not bootstrapped")]
    #[diagnostic(code(arbiter::keyholder::not_bootstrapped))]
    NotBootstrapped,
    #[error("Invalid key provided")]
    #[diagnostic(code(arbiter::keyholder::invalid_key))]
    InvalidKey,

    #[error("Requested aead entry not found")]
    #[diagnostic(code(arbiter::keyholder::aead_not_found))]
    NotFound,

    #[error("Encryption error: {0}")]
    #[diagnostic(code(arbiter::keyholder::encryption_error))]
    Encryption(#[from] chacha20poly1305::aead::Error),

    #[error("Database error: {0}")]
    #[diagnostic(code(arbiter::keyholder::database_error))]
    DatabaseConnection(#[from] db::PoolError),

    #[error("Database transaction error: {0}")]
    #[diagnostic(code(arbiter::keyholder::database_transaction_error))]
    DatabaseTransaction(#[from] diesel::result::Error),

    #[error("Broken database")]
    #[diagnostic(code(arbiter::keyholder::broken_database))]
    BrokenDatabase,
}

/// Manages the vault root key and tracks the current state of the vault (bootstrapped/unbootstrapped, sealed/unsealed).
/// Provides an API for encrypting and decrypting data using the vault root key.
/// Abstraction over the database that ensures nonces are never reused and encryption keys are never exposed in plaintext outside this actor.
#[derive(Actor)]
pub struct KeyHolder {
    db: db::DatabasePool,
    state: State,
}

#[messages]
impl KeyHolder {
    pub async fn new(db: db::DatabasePool) -> Result<Self, Error> {
        let state = {
            let mut conn = db.get().await?;

            let (root_key_history,) = schema::arbiter_settings::table
                .left_join(schema::root_key_history::table)
                .select((Option::<RootKeyHistory>::as_select(),))
                .get_result::<(Option<RootKeyHistory>,)>(&mut conn)
                .await?;

            match root_key_history {
                Some(root_key_history) => State::Sealed {
                    root_key_history_id: root_key_history.id,
                },
                None => State::Unbootstrapped,
            }
        };

        Ok(Self { db, state })
    }

    // Exclusive transaction to avoid race conditions if multiple keyholders write;
    // an additional layer of protection against nonce reuse.
    async fn get_new_nonce(pool: &db::DatabasePool, root_key_id: i32) -> Result<Nonce, Error> {
        let mut conn = pool.get().await?;

        let nonce = conn
            .exclusive_transaction(|conn| {
                Box::pin(async move {
                    let current_nonce: Vec<u8> = schema::root_key_history::table
                        .filter(schema::root_key_history::id.eq(root_key_id))
                        .select(schema::root_key_history::data_encryption_nonce)
                        .first(conn)
                        .await?;

                    let mut nonce =
                        v1::Nonce::try_from(current_nonce.as_slice()).map_err(|_| {
                            error!(
                                "Broken database: invalid nonce for root key history id={}",
                                root_key_id
                            );
                            Error::BrokenDatabase
                        })?;
                    nonce.increment();

                    update(schema::root_key_history::table)
                        .filter(schema::root_key_history::id.eq(root_key_id))
                        .set(schema::root_key_history::data_encryption_nonce.eq(nonce.to_vec()))
                        .execute(conn)
                        .await?;

                    Result::<_, Error>::Ok(nonce)
                })
            })
            .await?;

        Ok(nonce)
    }

    #[message]
    pub async fn bootstrap(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
        if !matches!(self.state, State::Unbootstrapped) {
            return Err(Error::AlreadyBootstrapped);
        }
        let salt = v1::generate_salt();
        let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);
        let mut root_key = KeyCell::new_secure_random();

        // Zero nonces are fine because they are one-time
        let root_key_nonce = v1::Nonce::default();
        let data_encryption_nonce = v1::Nonce::default();

        let root_key_ciphertext: Vec<u8> = {
            let root_key_reader = root_key.0.read().unwrap();
            let root_key_reader = root_key_reader.as_slice();
            seal_key
                .encrypt(&root_key_nonce, v1::ROOT_KEY_TAG, root_key_reader)
                .map_err(|err| {
                    error!(?err, "Fatal bootstrap error");
                    Error::Encryption(err)
                })?
        };

        let mut conn = self.db.get().await?;

        let data_encryption_nonce_bytes = data_encryption_nonce.to_vec();
        let root_key_history_id = conn
            .transaction(|conn| {
                Box::pin(async move {
                    let root_key_history_id: i32 = insert_into(schema::root_key_history::table)
                        .values(&models::NewRootKeyHistory {
                            ciphertext: root_key_ciphertext,
                            tag: v1::ROOT_KEY_TAG.to_vec(),
                            root_key_encryption_nonce: root_key_nonce.to_vec(),
                            data_encryption_nonce: data_encryption_nonce_bytes,
                            schema_version: 1,
                            salt: salt.to_vec(),
                        })
                        .returning(schema::root_key_history::id)
                        .get_result(conn)
                        .await?;

                    update(schema::arbiter_settings::table)
                        .set(schema::arbiter_settings::root_key_id.eq(root_key_history_id))
                        .execute(conn)
                        .await?;

                    Result::<_, diesel::result::Error>::Ok(root_key_history_id)
                })
            })
            .await?;

        self.state = State::Unsealed {
            root_key,
            root_key_history_id,
        };

        info!("Keyholder bootstrapped successfully");

        Ok(())
    }

    #[message]
    pub async fn try_unseal(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
        let State::Sealed {
            root_key_history_id,
        } = &self.state
        else {
            return Err(Error::NotBootstrapped);
        };

        // We don't want to hold a connection while doing expensive KDF work
        let current_key = {
            let mut conn = self.db.get().await?;
            schema::root_key_history::table
                .filter(schema::root_key_history::id.eq(*root_key_history_id))
                .select(RootKeyHistory::as_select())
                .first(&mut conn)
                .await?
        };

        let salt = &current_key.salt;
        let salt = v1::Salt::try_from(salt.as_slice()).map_err(|_| {
            error!("Broken database: invalid salt for root key");
            Error::BrokenDatabase
        })?;
        let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);

        let mut root_key = MemSafe::new(current_key.ciphertext.clone()).unwrap();

        let nonce = v1::Nonce::try_from(current_key.root_key_encryption_nonce.as_slice()).map_err(
            |_| {
                error!("Broken database: invalid nonce for root key");
                Error::BrokenDatabase
            },
        )?;

        seal_key
            .decrypt_in_place(&nonce, v1::ROOT_KEY_TAG, &mut root_key)
            .map_err(|err| {
                error!(?err, "Failed to unseal root key: invalid seal key");
                Error::InvalidKey
            })?;

        self.state = State::Unsealed {
            root_key_history_id: current_key.id,
            root_key: v1::KeyCell::try_from(root_key).map_err(|err| {
                error!(?err, "Broken database: invalid encryption key size");
                Error::BrokenDatabase
            })?,
        };

        info!("Keyholder unsealed successfully");

        Ok(())
    }

    // Decrypts the `aead_encrypted` entry with the given ID and returns the plaintext
    #[message]
    pub async fn decrypt(&mut self, aead_id: i32) -> Result<MemSafe<Vec<u8>>, Error> {
        let State::Unsealed { root_key, .. } = &mut self.state else {
            return Err(Error::NotBootstrapped);
        };

        let row: models::AeadEncrypted = {
            let mut conn = self.db.get().await?;
            schema::aead_encrypted::table
                .select(models::AeadEncrypted::as_select())
                .filter(schema::aead_encrypted::id.eq(aead_id))
                .first(&mut conn)
                .await
                .optional()?
                .ok_or(Error::NotFound)?
        };

        let nonce = v1::Nonce::try_from(row.current_nonce.as_slice()).map_err(|_| {
            error!(
                "Broken database: invalid nonce for aead_encrypted id={}",
                aead_id
            );
            Error::BrokenDatabase
        })?;
        let mut output = MemSafe::new(row.ciphertext).unwrap();
        root_key.decrypt_in_place(&nonce, v1::TAG, &mut output)?;
        Ok(output)
    }

    // Creates a new `aead_encrypted` entry in the database and returns its ID
    #[message]
    pub async fn create_new(&mut self, mut plaintext: MemSafe<Vec<u8>>) -> Result<i32, Error> {
        let State::Unsealed {
            root_key,
            root_key_history_id,
        } = &mut self.state
        else {
            return Err(Error::NotBootstrapped);
        };

        // Order matters here - `get_new_nonce` acquires a connection, so we need to call it before the next acquire.
        // Borrow checker note: the &mut borrow a few lines above is disjoint from this field.
        let nonce = Self::get_new_nonce(&self.db, *root_key_history_id).await?;

        let mut ciphertext_buffer = plaintext.write().unwrap();
        let ciphertext_buffer: &mut Vec<u8> = ciphertext_buffer.as_mut();
        root_key.encrypt_in_place(&nonce, v1::TAG, &mut *ciphertext_buffer)?;

        let ciphertext = std::mem::take(ciphertext_buffer);

        let mut conn = self.db.get().await?;
        let aead_id: i32 = insert_into(schema::aead_encrypted::table)
            .values(&models::NewAeadEncrypted {
                ciphertext,
                tag: v1::TAG.to_vec(),
                current_nonce: nonce.to_vec(),
                schema_version: 1,
                associated_root_key_id: *root_key_history_id,
                created_at: Utc::now().into(),
            })
            .returning(schema::aead_encrypted::id)
            .get_result(&mut conn)
            .await?;

        Ok(aead_id)
    }

    #[message]
    pub fn get_state(&self) -> StateDiscriminants {
        self.state.discriminant()
    }

    #[message]
    pub fn seal(&mut self) -> Result<(), Error> {
        let State::Unsealed {
            root_key_history_id,
            ..
        } = &self.state
        else {
            return Err(Error::NotBootstrapped);
        };
        self.state = State::Sealed {
            root_key_history_id: *root_key_history_id,
        };
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use diesel::SelectableHelper;

    use diesel_async::RunQueryDsl;
    use memsafe::MemSafe;

    use crate::db::{self};

    use super::*;

    async fn bootstrapped_actor(db: &db::DatabasePool) -> KeyHolder {
        let mut actor = KeyHolder::new(db.clone()).await.unwrap();
        let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        actor.bootstrap(seal_key).await.unwrap();
        actor
    }

    #[tokio::test]
    #[test_log::test]
    async fn nonce_monotonic_even_when_nonce_allocation_interleaves() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;
        let root_key_history_id = match actor.state {
            State::Unsealed {
                root_key_history_id,
                ..
            } => root_key_history_id,
            _ => panic!("expected unsealed state"),
        };

        let n1 = KeyHolder::get_new_nonce(&db, root_key_history_id)
            .await
            .unwrap();
        let n2 = KeyHolder::get_new_nonce(&db, root_key_history_id)
            .await
            .unwrap();
        assert!(n2.to_vec() > n1.to_vec(), "nonce must increase");

        let mut conn = db.get().await.unwrap();
        let root_row: models::RootKeyHistory = schema::root_key_history::table
            .select(models::RootKeyHistory::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        assert_eq!(root_row.data_encryption_nonce, n2.to_vec());

        let id = actor
            .create_new(MemSafe::new(b"post-interleave".to_vec()).unwrap())
            .await
            .unwrap();
        let row: models::AeadEncrypted = schema::aead_encrypted::table
            .filter(schema::aead_encrypted::id.eq(id))
            .select(models::AeadEncrypted::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        assert!(
            row.current_nonce > n2.to_vec(),
            "next write must advance nonce"
        );
    }
}
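The deleted `KeyHolder::get_new_nonce` above relies on one property: the nonce is a fixed-width counter that is incremented and persisted before every encryption, so a crash can skip a value but never reuse one, and later nonces always compare greater as byte vectors. The snippet below is a minimal standalone sketch of that counter-nonce idea; the 24-byte width and the carry semantics of `increment` are assumptions (the real `v1::Nonce` type is not shown in this diff).

```rust
/// Hypothetical stand-in for the `v1::Nonce` used by `get_new_nonce`:
/// a fixed-width big-endian counter.
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct CounterNonce(pub [u8; 24]);

impl CounterNonce {
    /// Increment the counter as a big-endian integer, carrying into
    /// higher-order bytes on overflow.
    pub fn increment(&mut self) {
        for byte in self.0.iter_mut().rev() {
            let (next, overflowed) = byte.overflowing_add(1);
            *byte = next;
            if !overflowed {
                return;
            }
        }
        // Wrapping the full counter would mean nonce reuse; fail closed.
        panic!("nonce space exhausted");
    }
}

fn main() {
    let mut n = CounterNonce([0u8; 24]);
    n.increment();
    assert_eq!(n.0[23], 1);

    // Force a carry across a byte boundary.
    let mut m = CounterNonce([0u8; 24]);
    m.0[23] = 0xFF;
    m.increment();
    assert_eq!((m.0[22], m.0[23]), (1, 0));

    // Byte-wise big-endian ordering matches integer ordering, which is
    // the property the tests assert with `n2.to_vec() > n1.to_vec()`.
    assert!(m > n);
}
```

Persisting the incremented value inside an exclusive transaction, as `get_new_nonce` did, is what makes this safe even when multiple keyholders interleave allocations.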