Compare commits: 1545db7428...push-yyxvk

12 commits:

- 1b4369b1cb
- 7bd37b3c4a
- fe8c5e1bd2
- cbbe1f8881
- 7438d62695
- 4236f2c36d
- 76ff535619
- b3566c8af6
- bdb9f01757
- 0805e7a846
- eb9cbc88e9
- dd716da4cd
@@ -3,7 +3,6 @@

Arbiter is a permissioned signing service for cryptocurrency wallets. It runs as a background service on the user's machine with an optional client application for vault management.

**Core principle:** The vault NEVER exposes key material. It only produces signatures when a request satisfies the configured policies.

---

## 1. Peer Types
190  LICENSE  Normal file
@@ -0,0 +1,190 @@

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

Copyright 2026 MarketTakers

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
13  README.md  Normal file
@@ -0,0 +1,13 @@

# Arbiter

> Policy-first multi-client wallet daemon, allowing permissioned transactions across blockchains

## Security warning

Arbiter can't meaningfully protect against host compromise. A potential attack flow:

- Attacker steals TLS keys from the database
- Pretends to be the server and simply accepts the user agent's challenge solutions
- Pretends to be in the sealed state and performs DH with the client
- Steals the user's password and derives the seal key

While this attack is highly targeted, it is still possible.

> This software is experimental. Do not use with funds you cannot afford to lose.
178  app/.dart_tool/package_config.json  Normal file
@@ -0,0 +1,178 @@

{
  "configVersion": 2,
  "packages": [
    { "name": "async", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/async-2.13.0", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "boolean_selector", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/boolean_selector-2.1.2", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "characters", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/characters-1.4.0", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "clock", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/clock-1.1.2", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "collection", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/collection-1.19.1", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "cupertino_icons", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/cupertino_icons-1.0.8", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "fake_async", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/fake_async-1.3.3", "packageUri": "lib/", "languageVersion": "3.3" },
    { "name": "flutter", "rootUri": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable/packages/flutter", "packageUri": "lib/", "languageVersion": "3.8" },
    { "name": "flutter_lints", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/flutter_lints-6.0.0", "packageUri": "lib/", "languageVersion": "3.8" },
    { "name": "flutter_test", "rootUri": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable/packages/flutter_test", "packageUri": "lib/", "languageVersion": "3.8" },
    { "name": "leak_tracker", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/leak_tracker-11.0.2", "packageUri": "lib/", "languageVersion": "3.2" },
    { "name": "leak_tracker_flutter_testing", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/leak_tracker_flutter_testing-3.0.10", "packageUri": "lib/", "languageVersion": "3.2" },
    { "name": "leak_tracker_testing", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/leak_tracker_testing-3.0.2", "packageUri": "lib/", "languageVersion": "3.2" },
    { "name": "lints", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/lints-6.1.0", "packageUri": "lib/", "languageVersion": "3.8" },
    { "name": "matcher", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/matcher-0.12.17", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "material_color_utilities", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/material_color_utilities-0.11.1", "packageUri": "lib/", "languageVersion": "2.17" },
    { "name": "meta", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/meta-1.17.0", "packageUri": "lib/", "languageVersion": "3.5" },
    { "name": "path", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/path-1.9.1", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "sky_engine", "rootUri": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable/bin/cache/pkg/sky_engine", "packageUri": "lib/", "languageVersion": "3.8" },
    { "name": "source_span", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/source_span-1.10.2", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "stack_trace", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/stack_trace-1.12.1", "packageUri": "lib/", "languageVersion": "3.4" },
    { "name": "stream_channel", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/stream_channel-2.1.4", "packageUri": "lib/", "languageVersion": "3.3" },
    { "name": "string_scanner", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/string_scanner-1.4.1", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "term_glyph", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/term_glyph-1.2.2", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "test_api", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/test_api-0.7.7", "packageUri": "lib/", "languageVersion": "3.5" },
    { "name": "vector_math", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/vector_math-2.2.0", "packageUri": "lib/", "languageVersion": "3.1" },
    { "name": "vm_service", "rootUri": "file:///Users/kaska/.pub-cache/hosted/pub.dev/vm_service-15.0.2", "packageUri": "lib/", "languageVersion": "3.5" },
    { "name": "app", "rootUri": "../", "packageUri": "lib/", "languageVersion": "3.10" }
  ],
  "generator": "pub",
  "generatorVersion": "3.10.8",
  "flutterRoot": "file:///Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable",
  "flutterVersion": "3.38.9",
  "pubCache": "file:///Users/kaska/.pub-cache"
}
230  app/.dart_tool/package_graph.json  Normal file
@@ -0,0 +1,230 @@

{
  "roots": ["app"],
  "packages": [
    { "name": "app", "version": "1.0.0+1", "dependencies": ["cupertino_icons", "flutter"], "devDependencies": ["flutter_lints", "flutter_test"] },
    { "name": "flutter_lints", "version": "6.0.0", "dependencies": ["lints"] },
    { "name": "flutter_test", "version": "0.0.0", "dependencies": ["clock", "collection", "fake_async", "flutter", "leak_tracker_flutter_testing", "matcher", "meta", "path", "stack_trace", "stream_channel", "test_api", "vector_math"] },
    { "name": "cupertino_icons", "version": "1.0.8", "dependencies": [] },
    { "name": "flutter", "version": "0.0.0", "dependencies": ["characters", "collection", "material_color_utilities", "meta", "sky_engine", "vector_math"] },
    { "name": "lints", "version": "6.1.0", "dependencies": [] },
    { "name": "stream_channel", "version": "2.1.4", "dependencies": ["async"] },
    { "name": "meta", "version": "1.17.0", "dependencies": [] },
    { "name": "collection", "version": "1.19.1", "dependencies": [] },
    { "name": "leak_tracker_flutter_testing", "version": "3.0.10", "dependencies": ["flutter", "leak_tracker", "leak_tracker_testing", "matcher", "meta"] },
    { "name": "vector_math", "version": "2.2.0", "dependencies": [] },
    { "name": "stack_trace", "version": "1.12.1", "dependencies": ["path"] },
    { "name": "clock", "version": "1.1.2", "dependencies": [] },
    { "name": "fake_async", "version": "1.3.3", "dependencies": ["clock", "collection"] },
    { "name": "path", "version": "1.9.1", "dependencies": [] },
    { "name": "matcher", "version": "0.12.17", "dependencies": ["async", "meta", "stack_trace", "term_glyph", "test_api"] },
    { "name": "test_api", "version": "0.7.7", "dependencies": ["async", "boolean_selector", "collection", "meta", "source_span", "stack_trace", "stream_channel", "string_scanner", "term_glyph"] },
    { "name": "sky_engine", "version": "0.0.0", "dependencies": [] },
    { "name": "material_color_utilities", "version": "0.11.1", "dependencies": ["collection"] },
    { "name": "characters", "version": "1.4.0", "dependencies": [] },
    { "name": "async", "version": "2.13.0", "dependencies": ["collection", "meta"] },
    { "name": "leak_tracker_testing", "version": "3.0.2", "dependencies": ["leak_tracker", "matcher", "meta"] },
    { "name": "leak_tracker", "version": "11.0.2", "dependencies": ["clock", "collection", "meta", "path", "vm_service"] },
    { "name": "term_glyph", "version": "1.2.2", "dependencies": [] },
    { "name": "string_scanner", "version": "1.4.1", "dependencies": ["source_span"] },
    { "name": "source_span", "version": "1.10.2", "dependencies": ["collection", "path", "term_glyph"] },
    { "name": "boolean_selector", "version": "2.1.2", "dependencies": ["source_span", "string_scanner"] },
    { "name": "vm_service", "version": "15.0.2", "dependencies": [] }
  ],
  "configVersion": 1
}
1  app/.dart_tool/version  Normal file
@@ -0,0 +1 @@

3.38.9
11  app/macos/Flutter/ephemeral/Flutter-Generated.xcconfig  Normal file
@@ -0,0 +1,11 @@

// This is a generated file; do not edit or check into version control.
FLUTTER_ROOT=/Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable
FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/app
COCOAPODS_PARALLEL_CODE_SIGN=true
FLUTTER_BUILD_DIR=build
FLUTTER_BUILD_NAME=1.0.0
FLUTTER_BUILD_NUMBER=1
DART_OBFUSCATION=false
TRACK_WIDGET_CREATION=true
TREE_SHAKE_ICONS=false
PACKAGE_CONFIG=.dart_tool/package_config.json
12  app/macos/Flutter/ephemeral/flutter_export_environment.sh  Executable file
@@ -0,0 +1,12 @@

#!/bin/sh
# This is a generated file; do not edit or check into version control.
export "FLUTTER_ROOT=/Users/kaska/.local/share/mise/installs/flutter/3.38.9-stable"
export "FLUTTER_APPLICATION_PATH=/Users/kaska/Documents/Projects/Major/arbiter/app"
export "COCOAPODS_PARALLEL_CODE_SIGN=true"
export "FLUTTER_BUILD_DIR=build"
export "FLUTTER_BUILD_NAME=1.0.0"
export "FLUTTER_BUILD_NUMBER=1"
export "DART_OBFUSCATION=false"
export "TRACK_WIDGET_CREATION=true"
export "TREE_SHAKE_ICONS=false"
export "PACKAGE_CONFIG=.dart_tool/package_config.json"
@@ -1,89 +0,0 @@

name: app
description: "A new Flutter project."
# The following line prevents the package from being accidentally published to
# pub.dev using `flutter pub publish`. This is preferred for private packages.
publish_to: 'none' # Remove this line if you wish to publish to pub.dev

# The following defines the version and build number for your application.
# A version number is three numbers separated by dots, like 1.2.43
# followed by an optional build number separated by a +.
# Both the version and the builder number may be overridden in flutter
# build by specifying --build-name and --build-number, respectively.
# In Android, build-name is used as versionName while build-number used as versionCode.
# Read more about Android versioning at https://developer.android.com/studio/publish/versioning
# In iOS, build-name is used as CFBundleShortVersionString while build-number is used as CFBundleVersion.
# Read more about iOS versioning at
# https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html
# In Windows, build-name is used as the major, minor, and patch parts
# of the product and file versions while build-number is used as the build suffix.
version: 1.0.0+1

environment:
  sdk: ^3.10.8

# Dependencies specify other packages that your package needs in order to work.
# To automatically upgrade your package dependencies to the latest versions
# consider running `flutter pub upgrade --major-versions`. Alternatively,
# dependencies can be manually updated by changing the version numbers below to
# the latest version available on pub.dev. To see which dependencies have newer
# versions available, run `flutter pub outdated`.
dependencies:
  flutter:
    sdk: flutter

  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  cupertino_icons: ^1.0.8

dev_dependencies:
  flutter_test:
    sdk: flutter

  # The "flutter_lints" package below contains a set of recommended lints to
  # encourage good coding practices. The lint set provided by the package is
  # activated in the `analysis_options.yaml` file located at the root of your
  # package. See that file for information about deactivating specific lint
  # rules and activating additional ones.
  flutter_lints: ^6.0.0

# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec

# The following section is specific to Flutter packages.
flutter:

  # The following line ensures that the Material Icons font is
  # included with your application, so that you can use the icons in
  # the material Icons class.
  uses-material-design: true

  # To add assets to your application, add an assets section, like this:
  # assets:
  #   - images/a_dot_burr.jpeg
  #   - images/a_dot_ham.jpeg

  # An image asset can refer to one or more resolution-specific "variants", see
  # https://flutter.dev/to/resolution-aware-images

  # For details regarding adding assets from package dependencies, see
  # https://flutter.dev/to/asset-from-package

  # To add custom fonts to your application, add a fonts section here,
  # in this "flutter" section. Each entry in this list should have a
  # "family" key with the font family name, and a "fonts" key with a
  # list giving the asset and other descriptors for the font. For
  # example:
  # fonts:
  #   - family: Schyler
  #     fonts:
  #       - asset: fonts/Schyler-Regular.ttf
  #       - asset: fonts/Schyler-Italic.ttf
  #         style: italic
  #   - family: Trajan Pro
  #     fonts:
  #       - asset: fonts/TrajanPro.ttf
  #       - asset: fonts/TrajanPro_Bold.ttf
  #         weight: 700
  #
  # For details regarding fonts from package dependencies,
  # see https://flutter.dev/to/font-from-package
@@ -1,25 +0,0 @@

syntax = "proto3";

package arbiter.unseal;

import "google/protobuf/empty.proto";

message UnsealStart {
  bytes client_pubkey = 1;
}

message UnsealStartResponse {
  bytes server_pubkey = 1;
}

message UnsealEncryptedKey {
  bytes nonce = 1;
  bytes ciphertext = 2;
  bytes associated_data = 3;
}

enum UnsealResult {
  UNSEAL_RESULT_UNSPECIFIED = 0;
  UNSEAL_RESULT_SUCCESS = 1;
  UNSEAL_RESULT_INVALID_KEY = 2;
  UNSEAL_RESULT_UNBOOTSTRAPPED = 3;
}
@@ -3,19 +3,49 @@ syntax = "proto3";
 package arbiter;

 import "auth.proto";
-import "unseal.proto";
+import "google/protobuf/empty.proto";
+
+message UnsealStart {
+  bytes client_pubkey = 1;
+}
+
+message UnsealStartResponse {
+  bytes server_pubkey = 1;
+}
+message UnsealEncryptedKey {
+  bytes nonce = 1;
+  bytes ciphertext = 2;
+  bytes associated_data = 3;
+}
+
+enum UnsealResult {
+  UNSEAL_RESULT_UNSPECIFIED = 0;
+  UNSEAL_RESULT_SUCCESS = 1;
+  UNSEAL_RESULT_INVALID_KEY = 2;
+  UNSEAL_RESULT_UNBOOTSTRAPPED = 3;
+}
+
+enum VaultState {
+  VAULT_STATE_UNSPECIFIED = 0;
+  VAULT_STATE_UNBOOTSTRAPPED = 1;
+  VAULT_STATE_SEALED = 2;
+  VAULT_STATE_UNSEALED = 3;
+  VAULT_STATE_ERROR = 4;
+}

 message UserAgentRequest {
   oneof payload {
     arbiter.auth.ClientMessage auth_message = 1;
-    arbiter.unseal.UnsealStart unseal_start = 2;
-    arbiter.unseal.UnsealEncryptedKey unseal_encrypted_key = 3;
+    UnsealStart unseal_start = 2;
+    UnsealEncryptedKey unseal_encrypted_key = 3;
+    google.protobuf.Empty query_vault_state = 4;
   }
 }
 message UserAgentResponse {
   oneof payload {
     arbiter.auth.ServerMessage auth_message = 1;
-    arbiter.unseal.UnsealStartResponse unseal_start_response = 2;
-    arbiter.unseal.UnsealResult unseal_result = 3;
+    UnsealStartResponse unseal_start_response = 2;
+    UnsealResult unseal_result = 3;
+    VaultState vault_state = 4;
   }
 }
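The new `VaultState` enum and the `query_vault_state`/`vault_state` round trip can be sketched in plain Rust. The types below are hand-written stand-ins for illustration only; the real ones are generated from this `.proto` by prost/tonic, and prost's actual conversion API differs in shape:

```rust
// Illustrative mirror of the VaultState proto enum (not the generated code).
#[derive(Debug, Clone, Copy, PartialEq)]
enum VaultState {
    Unspecified = 0,
    Unbootstrapped = 1,
    Sealed = 2,
    Unsealed = 3,
    Error = 4,
}

// Wire values arrive as i32; unknown values degrade to the UNSPECIFIED
// variant instead of failing, which is the usual proto3 open-enum posture.
fn vault_state_from_i32(v: i32) -> VaultState {
    match v {
        1 => VaultState::Unbootstrapped,
        2 => VaultState::Sealed,
        3 => VaultState::Unsealed,
        4 => VaultState::Error,
        _ => VaultState::Unspecified,
    }
}

fn main() {
    // A sealed vault must be unsealed before it will produce signatures.
    assert_eq!(vault_state_from_i32(2), VaultState::Sealed);
    // A wire value this peer does not know about maps to Unspecified.
    assert_eq!(vault_state_from_i32(99), VaultState::Unspecified);
}
```

Moving these messages from `arbiter.unseal` into the top-level `arbiter` package is why the `oneof` fields drop their `arbiter.unseal.` qualifiers in the diff above.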
server/Cargo.lock (generated; 341 changed lines)
@@ -59,14 +59,22 @@ version = "0.1.0"
 name = "arbiter-proto"
 version = "0.1.0"
 dependencies = [
+ "base64",
  "futures",
- "hex",
  "kameo",
+ "miette",
  "prost",
+ "rand",
+ "rcgen",
+ "rstest",
+ "rustls-pki-types",
+ "thiserror",
  "tokio",
  "tonic",
  "tonic-prost",
  "tonic-prost-build",
+ "tracing",
+ "url",
 ]

 [[package]]
@@ -88,6 +96,7 @@ dependencies = [
  "kameo",
  "memsafe",
  "miette",
+ "pem",
  "rand",
  "rcgen",
  "restructed",
@@ -109,6 +118,16 @@ dependencies = [
 [[package]]
 name = "arbiter-useragent"
 version = "0.1.0"
+dependencies = [
+ "arbiter-proto",
+ "ed25519-dalek",
+ "kameo",
+ "smlang",
+ "tokio",
+ "tonic",
+ "tracing",
+ "x25519-dalek",
+]

 [[package]]
 name = "argon2"
@@ -859,6 +878,15 @@ version = "0.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "77ce24cb58228fbb8aa041425bb1050850ac19177686ea6e0f41a70416f56fdb"

+[[package]]
+name = "form_urlencoded"
+version = "1.2.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "cb4cb245038516f5f85277875cdaa4f7d2c9a0fa0468de06ed190163b1581fcf"
+dependencies = [
+ "percent-encoding",
+]
+
 [[package]]
 name = "fs_extra"
 version = "1.3.0"
@@ -936,6 +964,12 @@ version = "0.3.32"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "037711b3d59c33004d3856fbdc83b99d4ff37a24768fa1be9ce3538a1cde4393"

+[[package]]
+name = "futures-timer"
+version = "3.0.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f288b0a4f20f9a56b5d1da57e2227c661b7b16168e2f72365f57b63326e29b24"
+
 [[package]]
 name = "futures-util"
 version = "0.3.32"
@@ -1006,6 +1040,12 @@ version = "0.32.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "e629b9b98ef3dd8afe6ca2bd0f89306cec16d43d907889945bc5d6687f2f13c7"

+[[package]]
+name = "glob"
+version = "0.3.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0cc23270f6e1808e30a928bdc84dea0b9b4136a8bc82338574f23baf47bbd280"
+
 [[package]]
 name = "h2"
 version = "0.4.13"
@@ -1055,12 +1095,6 @@ version = "0.5.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"

-[[package]]
-name = "hex"
-version = "0.4.3"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
-
 [[package]]
 name = "http"
 version = "1.4.0"
@@ -1195,6 +1229,87 @@ dependencies = [
  "cc",
 ]

+[[package]]
+name = "icu_collections"
+version = "2.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "4c6b649701667bbe825c3b7e6388cb521c23d88644678e83c0c4d0a621a34b43"
+dependencies = [
+ "displaydoc",
+ "potential_utf",
+ "yoke",
+ "zerofrom",
+ "zerovec",
+]
+
+[[package]]
+name = "icu_locale_core"
+version = "2.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "edba7861004dd3714265b4db54a3c390e880ab658fec5f7db895fae2046b5bb6"
+dependencies = [
+ "displaydoc",
+ "litemap",
+ "tinystr",
+ "writeable",
+ "zerovec",
+]
+
+[[package]]
+name = "icu_normalizer"
+version = "2.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5f6c8828b67bf8908d82127b2054ea1b4427ff0230ee9141c54251934ab1b599"
+dependencies = [
+ "icu_collections",
+ "icu_normalizer_data",
+ "icu_properties",
+ "icu_provider",
+ "smallvec",
+ "zerovec",
+]
+
+[[package]]
+name = "icu_normalizer_data"
+version = "2.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7aedcccd01fc5fe81e6b489c15b247b8b0690feb23304303a9e560f37efc560a"
+
+[[package]]
+name = "icu_properties"
+version = "2.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "020bfc02fe870ec3a66d93e677ccca0562506e5872c650f893269e08615d74ec"
+dependencies = [
+ "icu_collections",
+ "icu_locale_core",
+ "icu_properties_data",
+ "icu_provider",
+ "zerotrie",
+ "zerovec",
+]
+
+[[package]]
+name = "icu_properties_data"
+version = "2.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "616c294cf8d725c6afcd8f55abc17c56464ef6211f9ed59cccffe534129c77af"
+
+[[package]]
+name = "icu_provider"
+version = "2.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "85962cf0ce02e1e0a629cc34e7ca3e373ce20dda4c4d7294bbd0bf1fdb59e614"
+dependencies = [
+ "displaydoc",
+ "icu_locale_core",
+ "writeable",
+ "yoke",
+ "zerofrom",
+ "zerotrie",
+ "zerovec",
+]
+
 [[package]]
 name = "id-arena"
 version = "2.3.0"
@@ -1207,6 +1322,27 @@ version = "1.0.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "b9e0384b61958566e926dc50660321d12159025e767c18e043daf26b70104c39"

+[[package]]
+name = "idna"
+version = "1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3b0875f23caa03898994f6ddc501886a45c7d3d62d04d2d90788d47be1b1e4de"
+dependencies = [
+ "idna_adapter",
+ "smallvec",
+ "utf8_iter",
+]
+
+[[package]]
+name = "idna_adapter"
+version = "1.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3acae9609540aa318d1bc588455225fb2085b9ed0c4f6bd0d9d5bcd86f1a0344"
+dependencies = [
+ "icu_normalizer",
+ "icu_properties",
+]
+
 [[package]]
 name = "indexmap"
 version = "2.13.0"
@@ -1342,6 +1478,12 @@ version = "0.11.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "df1d3c3b53da64cf5760482273a98e575c651a67eec7f77df96b5b642de8f039"

+[[package]]
+name = "litemap"
+version = "0.8.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6373607a59f0be73a39b6fe456b8192fcc3585f602af20751600e974dd455e77"
+
 [[package]]
 name = "lock_api"
 version = "0.4.14"
@@ -1684,6 +1826,15 @@ version = "1.13.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "c33a9471896f1c69cecef8d20cbe2f7accd12527ce60845ff44c153bb2a21b49"

+[[package]]
+name = "potential_utf"
+version = "0.1.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b73949432f5e2a09657003c25bca5e19a0e9c84f8058ca374f49e0ebe605af77"
+dependencies = [
+ "zerovec",
+]
+
 [[package]]
 name = "powerfmt"
 version = "0.2.0"
@@ -1700,6 +1851,15 @@ dependencies = [
  "syn 2.0.115",
 ]

+[[package]]
+name = "proc-macro-crate"
+version = "3.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "219cb19e96be00ab2e37d6e299658a0cfa83e52429179969b0f0121b4ac46983"
+dependencies = [
+ "toml_edit",
+]
+
 [[package]]
 name = "proc-macro-error"
 version = "1.0.4"
@@ -1900,6 +2060,12 @@ version = "0.8.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "a96887878f22d7bad8a3b6dc5b7440e0ada9a245242924394987b21cf2210a4c"

+[[package]]
+name = "relative-path"
+version = "1.9.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ba39f3699c378cd8970968dcbff9c43159ea4cfbd88d43c00b22f2ef10a435d2"
+
 [[package]]
 name = "restructed"
 version = "0.2.2"
@@ -1936,6 +2102,35 @@ dependencies = [
  "thiserror",
 ]

+[[package]]
+name = "rstest"
+version = "0.26.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f5a3193c063baaa2a95a33f03035c8a72b83d97a54916055ba22d35ed3839d49"
+dependencies = [
+ "futures-timer",
+ "futures-util",
+ "rstest_macros",
+]
+
+[[package]]
+name = "rstest_macros"
+version = "0.26.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9c845311f0ff7951c5506121a9ad75aec44d083c31583b2ea5a30bcb0b0abba0"
+dependencies = [
+ "cfg-if",
+ "glob",
+ "proc-macro-crate",
+ "proc-macro2",
+ "quote",
+ "regex",
+ "relative-path",
+ "rustc_version",
+ "syn 2.0.115",
+ "unicode-ident",
+]
+
 [[package]]
 name = "rustc-demangle"
 version = "0.1.27"
@@ -2206,6 +2401,12 @@ dependencies = [
  "wasm-bindgen",
 ]

+[[package]]
+name = "stable_deref_trait"
+version = "1.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6ce2be8dc25455e1f91df71bfa12ad37d7af1092ae736f3a6cd0e37bc7810596"
+
 [[package]]
 name = "string_morph"
 version = "0.1.0"
@@ -2419,6 +2620,16 @@ dependencies = [
  "time-core",
 ]

+[[package]]
+name = "tinystr"
+version = "0.8.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "42d3e9c45c09de15d06dd8acf5f4e0e399e85927b7f00711024eb7ae10fa4869"
+dependencies = [
+ "displaydoc",
+ "zerovec",
+]
+
 [[package]]
 name = "tokio"
 version = "1.49.0"
@@ -2505,6 +2716,18 @@ dependencies = [
  "serde_core",
 ]

+[[package]]
+name = "toml_edit"
+version = "0.23.10+spec-1.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "84c8b9f757e028cee9fa244aea147aab2a9ec09d5325a9b01e0a49730c2b5269"
+dependencies = [
+ "indexmap",
+ "toml_datetime",
+ "toml_parser",
+ "winnow",
+]
+
 [[package]]
 name = "toml_parser"
 version = "1.0.8+spec-1.1.0"
@@ -2747,6 +2970,24 @@ version = "0.9.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1"

+[[package]]
+name = "url"
+version = "2.5.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ff67a8a4397373c3ef660812acab3268222035010ab8680ec4215f38ba3d0eed"
+dependencies = [
+ "form_urlencoded",
+ "idna",
+ "percent-encoding",
+ "serde",
+]
+
+[[package]]
+name = "utf8_iter"
+version = "1.0.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be"
+
 [[package]]
 name = "uuid"
 version = "1.21.0"
@@ -3138,6 +3379,9 @@ name = "winnow"
 version = "0.7.14"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "5a5364e9d77fcdeeaa6062ced926ee3381faa2ee02d3eb83a5c27a8825540829"
+dependencies = [
+ "memchr",
+]

 [[package]]
 name = "wit-bindgen"
@@ -3227,6 +3471,12 @@ dependencies = [
  "wasmparser",
 ]

+[[package]]
+name = "writeable"
+version = "0.6.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9edde0db4769d2dc68579893f2306b26c6ecfbe0ef499b013d731b7b9247e0b9"
+
 [[package]]
 name = "x25519-dalek"
 version = "2.0.1"
@@ -3266,6 +3516,50 @@ dependencies = [
  "time",
 ]

+[[package]]
+name = "yoke"
+version = "0.8.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "72d6e5c6afb84d73944e5cedb052c4680d5657337201555f9f2a16b7406d4954"
+dependencies = [
+ "stable_deref_trait",
+ "yoke-derive",
+ "zerofrom",
+]
+
+[[package]]
+name = "yoke-derive"
+version = "0.8.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b659052874eb698efe5b9e8cf382204678a0086ebf46982b79d6ca3182927e5d"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn 2.0.115",
+ "synstructure",
+]
+
+[[package]]
+name = "zerofrom"
+version = "0.1.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "50cc42e0333e05660c3587f3bf9d0478688e15d870fab3346451ce7f8c9fbea5"
+dependencies = [
+ "zerofrom-derive",
+]
+
+[[package]]
+name = "zerofrom-derive"
+version = "0.1.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d71e5d6e06ab090c67b5e44993ec16b72dcbaabc526db883a360057678b48502"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn 2.0.115",
+ "synstructure",
+]
+
 [[package]]
 name = "zeroize"
 version = "1.8.2"
@@ -3286,6 +3580,39 @@ dependencies = [
  "syn 2.0.115",
 ]

+[[package]]
+name = "zerotrie"
+version = "0.2.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2a59c17a5562d507e4b54960e8569ebee33bee890c70aa3fe7b97e85a9fd7851"
+dependencies = [
+ "displaydoc",
+ "yoke",
+ "zerofrom",
+]
+
+[[package]]
+name = "zerovec"
+version = "0.11.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6c28719294829477f525be0186d13efa9a3c602f7ec202ca9e353d310fb9a002"
+dependencies = [
+ "yoke",
+ "zerofrom",
+ "zerovec-derive",
+]
+
+[[package]]
+name = "zerovec-derive"
+version = "0.11.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "eadce39539ca5cb3985590102671f2567e659fca9666581ad3411d59207951f3"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn 2.0.115",
+]
+
 [[package]]
 name = "zmij"
 version = "1.0.21"
@@ -23,3 +23,12 @@ async-trait = "0.1.89"
 futures = "0.3.31"
 tokio-stream = { version = "0.1.18", features = ["full"] }
 kameo = "0.19.2"
+x25519-dalek = { version = "2.0.1", features = ["getrandom"] }
+rstest = "0.26.1"
+rustls-pki-types = "1.14.0"
+rcgen = { version = "0.14.7", features = [
+  "aws_lc_rs",
+  "pem",
+  "x509-parser",
+  "zeroize",
+], default-features = false }
@@ -3,5 +3,6 @@ name = "arbiter-client"
 version = "0.1.0"
 edition = "2024"
 repository = "https://git.markettakers.org/MarketTakers/arbiter"
+license = "Apache-2.0"

 [dependencies]
@@ -3,17 +3,32 @@ name = "arbiter-proto"
 version = "0.1.0"
 edition = "2024"
 repository = "https://git.markettakers.org/MarketTakers/arbiter"
+license = "Apache-2.0"

 [dependencies]
 tonic.workspace = true
 tokio.workspace = true
 futures.workspace = true
-hex = "0.4.3"
 tonic-prost = "0.14.3"
 prost = "0.14.3"
 kameo.workspace = true
+url = "2.5.8"
+miette.workspace = true
+thiserror.workspace = true
+rustls-pki-types.workspace = true
+base64 = "0.22.1"
+tracing.workspace = true


 [build-dependencies]
 tonic-prost-build = "0.14.3"
+
+[dev-dependencies]
+rstest.workspace = true
+rand.workspace = true
+rcgen.workspace = true
+
+[package.metadata.cargo-shear]
+ignored = ["tonic-prost", "prost", "kameo"]
@@ -1,3 +1,8 @@
+pub mod transport;
+pub mod url;
+
+use base64::{Engine, prelude::BASE64_STANDARD};
+
 use crate::proto::auth::AuthChallenge;

 pub mod proto {
@@ -6,17 +11,12 @@ pub mod proto {
     pub mod auth {
        tonic::include_proto!("arbiter.auth");
     }
-    pub mod unseal {
-        tonic::include_proto!("arbiter.unseal");
-    }
 }

-pub mod transport;
-
-pub static BOOTSTRAP_TOKEN_PATH: &'static str = "bootstrap_token";
+pub static BOOTSTRAP_PATH: &str = "bootstrap_token";

 pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {
-    static ARBITER_HOME: &'static str = ".arbiter";
+    static ARBITER_HOME: &str = ".arbiter";
     let home_dir = std::env::home_dir().ok_or(std::io::Error::new(
         std::io::ErrorKind::PermissionDenied,
         "can not get home directory",
@@ -29,6 +29,6 @@ pub fn home_path() -> Result<std::path::PathBuf, std::io::Error> {
 }

 pub fn format_challenge(challenge: &AuthChallenge) -> Vec<u8> {
-    let concat_form = format!("{}:{}", challenge.nonce, hex::encode(&challenge.pubkey));
+    let concat_form = format!("{}:{}", challenge.nonce, BASE64_STANDARD.encode(&challenge.pubkey));
     concat_form.into_bytes().to_vec()
 }
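The `format_challenge` change above swaps hex for base64, so the signed challenge becomes the bytes of `"{nonce}:{base64(pubkey)}"`. A self-contained sketch of that framing follows; the numeric nonce type and the hand-rolled encoder are illustrative only (the real code formats whatever nonce type the proto defines and uses the `base64` crate's `BASE64_STANDARD` engine):

```rust
// Minimal standard-alphabet base64 with '=' padding, for illustration only.
fn b64(data: &[u8]) -> String {
    const A: &[u8; 64] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut out = String::new();
    for chunk in data.chunks(3) {
        // Pack up to 3 input bytes into a 24-bit group.
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        let idx = [(n >> 18) & 63, (n >> 12) & 63, (n >> 6) & 63, n & 63];
        for (i, &x) in idx.iter().enumerate() {
            // Output char i is real data only while it covers input bytes;
            // the remainder of the 4-char group is '=' padding.
            if i <= chunk.len() {
                out.push(A[x as usize] as char);
            } else {
                out.push('=');
            }
        }
    }
    out
}

// Mirrors the new challenge framing: "{nonce}:{base64(pubkey)}" as bytes.
fn format_challenge(nonce: u64, pubkey: &[u8]) -> Vec<u8> {
    format!("{}:{}", nonce, b64(pubkey)).into_bytes()
}

fn main() {
    assert_eq!(b64(b"Man"), "TWFu");
    assert_eq!(b64(b"M"), "TQ==");
    let msg = format_challenge(42, &[0xDE, 0xAD]);
    assert_eq!(String::from_utf8(msg).unwrap(), "42:3q0=");
}
```

Base64 here is a wire-format choice, not a security one: both sides just need to hash and sign identical bytes, so the encoding must match exactly on the signer and verifier.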
@@ -1,9 +1,75 @@
+//! Transport abstraction layer for bridging gRPC bidirectional streaming with kameo actors.
+//!
+//! This module provides a clean separation between the gRPC transport layer and business logic
+//! by modeling the connection as two linked kameo actors:
+//!
+//! - A **transport actor** ([`GrpcTransportActor`]) that owns the gRPC stream and channel,
+//!   forwarding inbound messages to the business actor and outbound messages to the client.
+//! - A **business logic actor** that receives inbound messages from the transport actor and
+//!   sends outbound messages back through the transport actor.
+//!
+//! The [`wire()`] function sets up bidirectional linking between the two actors, ensuring
+//! that if either actor dies, the other is notified and can shut down gracefully.
+//!
+//! # Terminology
+//!
+//! - **InboundMessage**: a message received by the transport actor from the channel/socket
+//!   and forwarded to the business actor.
+//! - **OutboundMessage**: a message produced by the business actor and sent to the transport
+//!   actor to be forwarded to the channel/socket.
+//!
+//! # Architecture
+//!
+//! ```text
+//! gRPC Stream ──InboundMessage──▶ GrpcTransportActor ──tell(InboundMessage)──▶ BusinessActor
+//!                                         ▲                                        │
+//!                                         └──tell(Result<OutboundMessage, _>)──────┘
+//!                                         │
+//!                           mpsc::Sender ──▶ Client
+//! ```
+//!
+//! # Example
+//!
+//! ```rust,ignore
+//! let (tx, rx) = mpsc::channel(1000);
+//! let context = server_context.clone();
+//!
+//! wire(
+//!     |transport_ref| MyBusinessActor::new(context, transport_ref),
+//!     |business_recipient, business_id| GrpcTransportActor {
+//!         sender: tx,
+//!         receiver: grpc_stream,
+//!         business_logic_actor: business_recipient,
+//!         business_logic_actor_id: business_id,
+//!     },
+//! ).await;
+//!
+//! Ok(Response::new(ReceiverStream::new(rx)))
+//! ```
+
 use futures::{Stream, StreamExt};
-use tokio::sync::mpsc::{self, error::SendError};
+use kameo::{
+    Actor,
+    actor::{ActorRef, PreparedActor, Recipient, Spawn, WeakActorRef},
+    mailbox::Signal,
+    prelude::Message,
+};
+use tokio::{
+    select,
+    sync::mpsc::{self, error::SendError},
+};
 use tonic::{Status, Streaming};
+use tracing::{debug, error};

-// Abstraction for stream for sans-io capabilities
+/// A bidirectional stream abstraction for sans-io testing.
+///
+/// Combines a [`Stream`] of incoming messages with the ability to [`send`](Bi::send)
+/// outgoing responses. This trait allows business logic to be tested without a real
+/// gRPC connection by swapping in an in-memory implementation.
+///
+/// # Type Parameters
+/// - `T`: `InboundMessage` received from the channel/socket (e.g., `UserAgentRequest`)
+/// - `U`: `OutboundMessage` sent to the channel/socket (e.g., `UserAgentResponse`)
 pub trait Bi<T, U>: Stream<Item = Result<T, Status>> + Send + Sync + 'static {
     type Error;
     fn send(
@@ -12,7 +78,10 @@ pub trait Bi<T, U>: Stream<Item = Result<T, Status>> + Send + Sync + 'static {
     ) -> impl std::future::Future<Output = Result<(), Self::Error>> + Send;
 }

-// Bi-directional stream abstraction for handling gRPC streaming requests and responses
+/// Concrete [`Bi`] implementation backed by a tonic gRPC [`Streaming`] and an [`mpsc::Sender`].
+///
+/// This is the production implementation used in gRPC service handlers. The `request_stream`
+/// receives messages from the client, and `response_sender` sends responses back.
 pub struct BiStream<T, U> {
     pub request_stream: Streaming<T>,
     pub response_sender: mpsc::Sender<Result<U, Status>>,
@@ -44,3 +113,259 @@ where
         self.response_sender.send(item).await
     }
 }
+
+/// Marker trait for transport actors that can receive outbound messages of type `T`.
+///
+/// Implement this on your transport actor to indicate it can handle outbound messages
+/// produced by the business actor. Requires the actor to implement [`Message<Result<T, E>>`]
+/// so business logic can forward responses via [`tell()`](ActorRef::tell).
+///
+/// # Example
+///
+/// ```rust,ignore
+/// #[derive(Actor)]
+/// struct MyTransportActor { /* ... */ }
+///
+/// impl Message<Result<MyResponse, MyError>> for MyTransportActor {
+///     type Reply = ();
+///     async fn handle(&mut self, msg: Result<MyResponse, MyError>, _ctx: &mut Context<Self, Self::Reply>) -> Self::Reply {
+///         // forward outbound message to channel/socket
+///     }
+/// }
+///
+/// impl TransportActor<MyResponse, MyError> for MyTransportActor {}
+/// ```
+pub trait TransportActor<Outbound: Send + 'static, DomainError: Send + 'static>:
+    Actor + Send + Message<Result<Outbound, DomainError>>
+{
+}
+
+/// A kameo actor that bridges a gRPC bidirectional stream with a business logic actor.
+///
+/// This actor owns the gRPC [`Streaming`] receiver and an [`mpsc::Sender`] for responses.
+/// It multiplexes between its own mailbox (for outbound messages from the business actor)
+/// and the gRPC stream (for inbound client messages) using [`tokio::select!`].
+///
+/// # Message Flow
+///
+/// - **Inbound**: Messages from the gRPC stream are forwarded to `business_logic_actor`
+///   via [`tell()`](Recipient::tell).
+/// - **Outbound**: The business actor sends `Result<Outbound, DomainError>` messages to this
+///   actor, which forwards them through the `sender` channel to the gRPC response stream.
+///
+/// # Lifecycle
|
||||||
|
///
|
||||||
|
/// - If the business logic actor dies (detected via actor linking), this actor stops,
|
||||||
|
/// which closes the gRPC stream.
|
||||||
|
/// - If the gRPC stream closes or errors, this actor stops, which (via linking) notifies
|
||||||
|
/// the business actor.
|
||||||
|
/// - Error responses (`Err(DomainError)`) are forwarded to the client and then the actor stops,
|
||||||
|
/// closing the connection.
|
||||||
|
///
|
||||||
|
/// # Type Parameters
|
||||||
|
/// - `Outbound`: `OutboundMessage` sent to the client (e.g., `UserAgentResponse`)
|
||||||
|
/// - `Inbound`: `InboundMessage` received from the client (e.g., `UserAgentRequest`)
|
||||||
|
/// - `E`: The domain error type, must implement `Into<tonic::Status>` for gRPC conversion
|
||||||
|
pub struct GrpcTransportActor<Outbound, Inbound, DomainError>
|
||||||
|
where
|
||||||
|
Outbound: Send + 'static,
|
||||||
|
Inbound: Send + 'static,
|
||||||
|
DomainError: Into<tonic::Status> + Send + 'static,
|
||||||
|
{
|
||||||
|
sender: mpsc::Sender<Result<Outbound, tonic::Status>>,
|
||||||
|
receiver: tonic::Streaming<Inbound>,
|
||||||
|
business_logic_actor: Recipient<Inbound>,
|
||||||
|
_error: std::marker::PhantomData<DomainError>,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl<Outbound, Inbound, DomainError> GrpcTransportActor<Outbound, Inbound, DomainError>
|
||||||
|
where
|
||||||
|
Outbound: Send + 'static,
|
||||||
|
Inbound: Send + 'static,
|
||||||
|
DomainError: Into<tonic::Status> + Send + 'static,
|
||||||
|
{
|
||||||
|
pub fn new(
|
||||||
|
sender: mpsc::Sender<Result<Outbound, tonic::Status>>,
|
||||||
|
receiver: tonic::Streaming<Inbound>,
|
||||||
|
business_logic_actor: Recipient<Inbound>,
|
||||||
|
) -> Self {
|
||||||
|
Self {
|
||||||
|
sender,
|
||||||
|
receiver,
|
||||||
|
business_logic_actor,
|
||||||
|
_error: std::marker::PhantomData,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
impl<Outbound, Inbound, E> Actor for GrpcTransportActor<Outbound, Inbound, E>
|
||||||
|
where
|
||||||
|
Outbound: Send + 'static,
|
||||||
|
Inbound: Send + 'static,
|
||||||
|
E: Into<tonic::Status> + Send + 'static,
|
||||||
|
{
|
||||||
|
type Args = Self;
|
||||||
|
|
||||||
|
type Error = ();
|
||||||
|
|
||||||
|
async fn on_start(args: Self::Args, _: ActorRef<Self>) -> Result<Self, Self::Error> {
|
||||||
|
Ok(args)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn on_link_died(
|
||||||
|
&mut self,
|
||||||
|
_: WeakActorRef<Self>,
|
||||||
|
id: kameo::prelude::ActorId,
|
||||||
|
_: kameo::prelude::ActorStopReason,
|
||||||
|
) -> impl Future<
|
||||||
|
Output = Result<std::ops::ControlFlow<kameo::prelude::ActorStopReason>, Self::Error>,
|
||||||
|
> + Send {
|
||||||
|
async move {
|
||||||
|
if id == self.business_logic_actor.id() {
|
||||||
|
error!("Business logic actor died, stopping GrpcTransportActor");
|
||||||
|
Ok(std::ops::ControlFlow::Break(
|
||||||
|
kameo::prelude::ActorStopReason::Normal,
|
||||||
|
))
|
||||||
|
} else {
|
||||||
|
debug!(
|
||||||
|
"Linked actor {} died, but it's not the business logic actor, ignoring",
|
||||||
|
id
|
||||||
|
);
|
||||||
|
Ok(std::ops::ControlFlow::Continue(()))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn next(
|
||||||
|
&mut self,
|
||||||
|
_: WeakActorRef<Self>,
|
||||||
|
mailbox_rx: &mut kameo::prelude::MailboxReceiver<Self>,
|
||||||
|
) -> Option<kameo::mailbox::Signal<Self>> {
|
||||||
|
select! {
|
||||||
|
msg = mailbox_rx.recv() => {
|
||||||
|
msg
|
||||||
|
}
|
||||||
|
recv_msg = self.receiver.next() => {
|
||||||
|
match recv_msg {
|
||||||
|
Some(Ok(msg)) => {
|
||||||
|
match self.business_logic_actor.tell(msg).await {
|
||||||
|
Ok(_) => None,
|
||||||
|
Err(e) => {
|
||||||
|
// TODO: this would probably require better error handling - or resending if backpressure is the issue
|
||||||
|
error!("Failed to send message to business logic actor: {}", e);
|
||||||
|
Some(Signal::Stop)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Some(Err(e)) => {
|
||||||
|
error!("Received error from stream: {}, stopping GrpcTransportActor", e);
|
||||||
|
Some(Signal::Stop)
|
||||||
|
}
|
||||||
|
None => {
|
||||||
|
error!("Receiver channel closed, stopping GrpcTransportActor");
|
||||||
|
Some(Signal::Stop)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
impl<Outbound, Inbound, E> Message<Result<Outbound, E>> for GrpcTransportActor<Outbound, Inbound, E>
|
||||||
|
where
|
||||||
|
Outbound: Send + 'static,
|
||||||
|
Inbound: Send + 'static,
|
||||||
|
E: Into<tonic::Status> + Send + 'static,
|
||||||
|
{
|
||||||
|
type Reply = ();
|
||||||
|
|
||||||
|
async fn handle(
|
||||||
|
&mut self,
|
||||||
|
msg: Result<Outbound, E>,
|
||||||
|
ctx: &mut kameo::prelude::Context<Self, Self::Reply>,
|
||||||
|
) -> Self::Reply {
|
||||||
|
let is_err = msg.is_err();
|
||||||
|
let grpc_msg = msg.map_err(Into::into);
|
||||||
|
match self.sender.send(grpc_msg).await {
|
||||||
|
Ok(_) => {
|
||||||
|
if is_err {
|
||||||
|
ctx.stop();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
error!("Failed to send message: {}", e);
|
||||||
|
ctx.stop();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
impl<Outbound, Inbound, E> TransportActor<Outbound, E> for GrpcTransportActor<Outbound, Inbound, E>
|
||||||
|
where
|
||||||
|
Outbound: Send + 'static,
|
||||||
|
Inbound: Send + 'static,
|
||||||
|
E: Into<tonic::Status> + Send + 'static,
|
||||||
|
{
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Wires together a transport actor and a business logic actor with bidirectional linking.
|
||||||
|
///
|
||||||
|
/// This function handles the chicken-and-egg problem of two actors that need references
|
||||||
|
/// to each other at construction time. It uses kameo's [`PreparedActor`] to obtain
|
||||||
|
/// [`ActorRef`]s before spawning, then links both actors so that if either dies,
|
||||||
|
/// the other is notified via [`on_link_died`](Actor::on_link_died).
|
||||||
|
///
|
||||||
|
/// The business actor receives a type-erased [`Recipient<Result<Outbound, DomainError>>`] instead of an
|
||||||
|
/// `ActorRef<Transport>`, keeping it decoupled from the concrete transport implementation.
|
||||||
|
///
|
||||||
|
/// # Type Parameters
|
||||||
|
/// - `Transport`: The transport actor type (e.g., [`GrpcTransportActor`])
|
||||||
|
/// - `Inbound`: `InboundMessage` received by the business actor from the transport
|
||||||
|
/// - `Outbound`: `OutboundMessage` sent by the business actor back to the transport
|
||||||
|
/// - `Business`: The business logic actor
|
||||||
|
/// - `BusinessCtor`: Closure that receives a prepared business actor and transport recipient,
|
||||||
|
/// spawns the business actor, and returns its [`ActorRef`]
|
||||||
|
/// - `TransportCtor`: Closure that receives a prepared transport actor, a recipient for
|
||||||
|
/// inbound messages, and the business actor id, then spawns the transport actor
|
||||||
|
///
|
||||||
|
/// # Returns
|
||||||
|
/// A tuple of `(transport_ref, business_ref)` — actor references for both spawned actors.
|
||||||
|
pub async fn wire<
|
||||||
|
Transport,
|
||||||
|
Inbound,
|
||||||
|
Outbound,
|
||||||
|
DomainError,
|
||||||
|
Business,
|
||||||
|
BusinessCtor,
|
||||||
|
TransportCtor,
|
||||||
|
>(
|
||||||
|
business_ctor: BusinessCtor,
|
||||||
|
transport_ctor: TransportCtor,
|
||||||
|
) -> (ActorRef<Transport>, ActorRef<Business>)
|
||||||
|
where
|
||||||
|
Transport: TransportActor<Outbound, DomainError>,
|
||||||
|
Inbound: Send + 'static,
|
||||||
|
Outbound: Send + 'static,
|
||||||
|
DomainError: Send + 'static,
|
||||||
|
Business: Actor + Message<Inbound> + Send + 'static,
|
||||||
|
BusinessCtor: FnOnce(PreparedActor<Business>, Recipient<Result<Outbound, DomainError>>),
|
||||||
|
TransportCtor:
|
||||||
|
FnOnce(PreparedActor<Transport>, Recipient<Inbound>),
|
||||||
|
{
|
||||||
|
let prepared_business: PreparedActor<Business> = Spawn::prepare();
|
||||||
|
let prepared_transport: PreparedActor<Transport> = Spawn::prepare();
|
||||||
|
|
||||||
|
let business_ref = prepared_business.actor_ref().clone();
|
||||||
|
let transport_ref = prepared_transport.actor_ref().clone();
|
||||||
|
|
||||||
|
transport_ref.link(&business_ref).await;
|
||||||
|
business_ref.link(&transport_ref).await;
|
||||||
|
|
||||||
|
let recipient = transport_ref.clone().recipient();
|
||||||
|
business_ctor(prepared_business, recipient);
|
||||||
|
let business_recipient = business_ref.clone().recipient();
|
||||||
|
transport_ctor(prepared_transport, business_recipient);
|
||||||
|
|
||||||
|
|
||||||
|
(transport_ref, business_ref)
|
||||||
|
}
|
||||||
|
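The `Bi` trait's stated goal is letting business logic run against an in-memory implementation instead of a real gRPC connection. As a std-only illustration of that sans-io idea (every name below is invented for this sketch, not taken from the crate; the real trait is async and stream-based):

```rust
// Std-only sketch of the sans-io pattern: business logic written against a
// small trait can be exercised with in-memory channels instead of a network.
use std::sync::mpsc;

trait BiLike<T, U> {
    fn recv(&mut self) -> Option<T>;
    fn send(&mut self, item: U);
}

// In-memory stand-in playing the role BiStream plays in production.
struct InMemoryBi<T, U> {
    inbound: mpsc::Receiver<T>,
    outbound: mpsc::Sender<U>,
}

impl<T, U> BiLike<T, U> for InMemoryBi<T, U> {
    fn recv(&mut self) -> Option<T> {
        self.inbound.try_recv().ok()
    }
    fn send(&mut self, item: U) {
        let _ = self.outbound.send(item);
    }
}

// Business logic that only knows the trait: echoes each number doubled.
fn run(bi: &mut impl BiLike<i32, i32>) {
    while let Some(n) = bi.recv() {
        bi.send(n * 2);
    }
}

fn main() {
    let (in_tx, in_rx) = mpsc::channel();
    let (out_tx, out_rx) = mpsc::channel();
    for n in [1, 2, 3] {
        in_tx.send(n).unwrap();
    }
    let mut bi = InMemoryBi { inbound: in_rx, outbound: out_tx };
    run(&mut bi);
    let got: Vec<i32> = out_rx.try_iter().collect();
    assert_eq!(got, vec![2, 4, 6]);
}
```

The test swaps in channels where production wires up tonic; the business logic is unchanged either way.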
server/crates/arbiter-proto/src/url.rs (new file, 128 lines)
@@ -0,0 +1,128 @@
use std::fmt::Display;

use base64::{Engine as _, prelude::BASE64_URL_SAFE};
use rustls_pki_types::CertificateDer;

const ARBITER_URL_SCHEME: &str = "arbiter";
const CERT_QUERY_KEY: &str = "cert";
const BOOTSTRAP_TOKEN_QUERY_KEY: &str = "bootstrap_token";

pub struct ArbiterUrl {
    pub host: String,
    pub port: u16,
    pub ca_cert: CertificateDer<'static>,
    pub bootstrap_token: Option<String>,
}

impl Display for ArbiterUrl {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        let mut base = format!(
            "{ARBITER_URL_SCHEME}://{}:{}?{CERT_QUERY_KEY}={}",
            self.host,
            self.port,
            BASE64_URL_SAFE.encode(self.ca_cert.to_vec())
        );
        if let Some(token) = &self.bootstrap_token {
            base.push_str(&format!("&{BOOTSTRAP_TOKEN_QUERY_KEY}={}", token));
        }
        f.write_str(&base)
    }
}

#[derive(Debug, thiserror::Error, miette::Diagnostic)]
pub enum Error {
    #[error("Invalid URL scheme, expected '{ARBITER_URL_SCHEME}://'")]
    #[diagnostic(
        code(arbiter::url::invalid_scheme),
        help("The URL must start with '{ARBITER_URL_SCHEME}://'")
    )]
    InvalidScheme,
    #[error("Missing host in URL")]
    #[diagnostic(
        code(arbiter::url::missing_host),
        help("The URL must include a host, e.g., '{ARBITER_URL_SCHEME}://127.0.0.1:<port>'")
    )]
    MissingHost,
    #[error("Missing port in URL")]
    #[diagnostic(
        code(arbiter::url::missing_port),
        help("The URL must include a port, e.g., '{ARBITER_URL_SCHEME}://127.0.0.1:1234'")
    )]
    MissingPort,
    #[error("Missing 'cert' query parameter in URL")]
    #[diagnostic(
        code(arbiter::url::missing_cert),
        help("The URL must include a 'cert' query parameter")
    )]
    MissingCert,
    #[error("Invalid base64 in 'cert' query parameter: {0}")]
    #[diagnostic(code(arbiter::url::invalid_cert_base64))]
    InvalidCertBase64(#[from] base64::DecodeError),
}

impl<'a> TryFrom<&'a str> for ArbiterUrl {
    type Error = Error;

    fn try_from(value: &'a str) -> Result<Self, Self::Error> {
        let url = url::Url::parse(value).map_err(|_| Error::InvalidScheme)?;

        if url.scheme() != ARBITER_URL_SCHEME {
            return Err(Error::InvalidScheme);
        }

        let host = url.host_str().ok_or(Error::MissingHost)?.to_string();
        let port = url.port().ok_or(Error::MissingPort)?;
        let cert_str = url
            .query_pairs()
            .find(|(k, _)| k == CERT_QUERY_KEY)
            .ok_or(Error::MissingCert)?
            .1;

        let cert = BASE64_URL_SAFE.decode(cert_str.as_ref())?;
        let cert = CertificateDer::from_slice(&cert).into_owned();

        let bootstrap_token = url
            .query_pairs()
            .find(|(k, _)| k == BOOTSTRAP_TOKEN_QUERY_KEY)
            .map(|(_, v)| v.to_string());

        Ok(ArbiterUrl {
            host,
            port,
            ca_cert: cert,
            bootstrap_token,
        })
    }
}

#[cfg(test)]
mod tests {
    use rcgen::generate_simple_self_signed;
    use rstest::rstest;

    use super::*;

    #[rstest]
    fn test_parsing_correctness(
        #[values("127.0.0.1", "localhost", "192.168.1.1", "some.domain.com")] host: &str,
        #[values(None, Some("token123".to_string()))] bootstrap_token: Option<String>,
    ) {
        let cert = generate_simple_self_signed(&["Arbiter CA".into()]).unwrap();
        let cert = cert.cert.der();

        let url = ArbiterUrl {
            host: host.to_string(),
            port: 1234,
            ca_cert: cert.clone().into_owned(),
            bootstrap_token,
        };
        let url_str = url.to_string();
        let parsed_url = ArbiterUrl::try_from(url_str.as_str()).unwrap();
        assert_eq!(url.host, parsed_url.host);
        assert_eq!(url.port, parsed_url.port);
        assert_eq!(url.ca_cert.to_vec(), parsed_url.ca_cert.to_vec());
        assert_eq!(url.bootstrap_token, parsed_url.bootstrap_token);
    }
}
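The `arbiter://` URL composed above has the shape `arbiter://<host>:<port>?cert=<base64url DER>[&bootstrap_token=<token>]`. A std-only sketch of that layout, using hand-rolled string splitting purely for illustration (the real code uses the `url` and `base64` crates, and the sample values below are made up):

```rust
// Illustrative parse of the arbiter:// URL layout with std string ops only.
fn main() {
    // "QUJD" is base64url for the bytes "ABC", standing in for a DER cert.
    let url = "arbiter://127.0.0.1:1234?cert=QUJD&bootstrap_token=token123";

    // Scheme, then host:port, then the query string.
    let rest = url.strip_prefix("arbiter://").expect("scheme");
    let (authority, query) = rest.split_once('?').expect("query");
    let (host, port) = authority.split_once(':').expect("port");
    assert_eq!(host, "127.0.0.1");
    assert_eq!(port.parse::<u16>().unwrap(), 1234);

    // The bootstrap token is an optional query parameter.
    let token = query
        .split('&')
        .find_map(|kv| kv.strip_prefix("bootstrap_token="));
    assert_eq!(token, Some("token123"));
}
```

Note that base64url (not standard base64) keeps the encoded certificate free of `+` and `/`, which would otherwise need percent-encoding in a query string.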
@@ -3,6 +3,7 @@ name = "arbiter-server"
 version = "0.1.0"
 edition = "2024"
 repository = "https://git.markettakers.org/MarketTakers/arbiter"
+license = "Apache-2.0"

 [dependencies]
 diesel = { version = "2.3.6", features = ["chrono", "returning_clauses_for_sqlite_3_35", "serde_json", "time", "uuid"] }
@@ -17,6 +18,7 @@ arbiter-proto.path = "../arbiter-proto"
 tracing.workspace = true
 tracing-subscriber = { version = "0.3", features = ["env-filter"] }
 tonic.workspace = true
+tonic.features = ["tls-aws-lc"]
 tokio.workspace = true
 rustls.workspace = true
 smlang.workspace = true
@@ -29,21 +31,17 @@ futures.workspace = true
 tokio-stream.workspace = true
 dashmap = "6.1.0"
 rand.workspace = true
-rcgen = { version = "0.14.7", features = [
-    "aws_lc_rs",
-    "pem",
-    "x509-parser",
-    "zeroize",
-], default-features = false }
+rcgen.workspace = true
 chrono.workspace = true
 memsafe = "0.4.0"
 zeroize = { version = "1.8.2", features = ["std", "simd"] }
 kameo.workspace = true
-x25519-dalek = { version = "2.0.1", features = ["getrandom"] }
+x25519-dalek.workspace = true
 chacha20poly1305 = { version = "0.10.1", features = ["std"] }
 argon2 = { version = "0.5.3", features = ["zeroize"] }
 restructed = "0.2.2"
 strum = { version = "0.27.2", features = ["derive"] }
+pem = "3.0.6"

 [dev-dependencies]
 insta = "1.46.3"
@@ -24,14 +24,24 @@ create unique index if not exists uniq_nonce_per_root_key on aead_encrypted (
     associated_root_key_id
 );

+create table if not exists tls_history (
+    id INTEGER not null PRIMARY KEY,
+    cert text not null,
+    cert_key text not null, -- PEM Encoded private key
+    ca_cert text not null,
+    ca_key text not null, -- PEM Encoded private key
+    created_at integer not null default(unixepoch ('now'))
+) STRICT;
+
 -- This is a singleton
 create table if not exists arbiter_settings (
     id INTEGER not null PRIMARY KEY CHECK (id = 1), -- singleton row, id must be 1
     root_key_id integer references root_key_history (id) on delete RESTRICT, -- if null, means wasn't bootstrapped yet
-    cert_key blob not null,
-    cert blob not null
+    tls_id integer references tls_history (id) on delete RESTRICT
 ) STRICT;

+insert into arbiter_settings (id) values (1) on conflict do nothing; -- ensure singleton row exists

 create table if not exists useragent_client (
     id integer not null primary key,
     nonce integer not null default(1), -- used for auth challenge
@@ -1,4 +0,0 @@
-pub mod user_agent;
-pub mod client;
-pub(crate) mod bootstrap;
-pub(crate) mod keyholder;
@@ -1,34 +1,37 @@
-use arbiter_proto::{BOOTSTRAP_TOKEN_PATH, home_path};
+use arbiter_proto::{BOOTSTRAP_PATH, home_path};
 use diesel::QueryDsl;
 use diesel_async::RunQueryDsl;
 use kameo::{Actor, messages};
 use miette::Diagnostic;
-use rand::{RngExt, distr::StandardUniform, make_rng, rngs::StdRng};
+use rand::{
+    RngExt,
+    distr::Alphanumeric,
+    make_rng,
+    rngs::StdRng,
+};
 use thiserror::Error;
-use tracing::info;

 use crate::db::{self, DatabasePool, schema};

 const TOKEN_LENGTH: usize = 64;

 pub async fn generate_token() -> Result<String, std::io::Error> {
     let rng: StdRng = make_rng();

-    let token: String = rng
-        .sample_iter::<char, _>(StandardUniform)
-        .take(TOKEN_LENGTH)
-        .fold(Default::default(), |mut accum, char| {
-            accum += char.to_string().as_str();
-            accum
-        });
+    let token: String = rng.sample_iter(Alphanumeric).take(TOKEN_LENGTH).fold(
+        Default::default(),
+        |mut accum, byte| {
+            accum.push(char::from(byte)); // Alphanumeric samples `u8`; convert to its character
+            accum
+        },
+    );

     tokio::fs::write(home_path()?.join(BOOTSTRAP_PATH), token.as_str()).await?;

     Ok(token)
 }

 #[derive(Error, Debug, Diagnostic)]
-pub enum BootstrapError {
+pub enum Error {
     #[error("Database error: {0}")]
     #[diagnostic(code(arbiter_server::bootstrap::database))]
     Database(#[from] db::PoolError),
@@ -48,7 +51,7 @@ pub struct Bootstrapper {
 }

 impl Bootstrapper {
-    pub async fn new(db: &DatabasePool) -> Result<Self, BootstrapError> {
+    pub async fn new(db: &DatabasePool) -> Result<Self, Error> {
         let mut conn = db.get().await?;

         let row_count: i64 = schema::useragent_client::table
@@ -58,10 +61,9 @@ impl Bootstrapper {

         drop(conn);

         let token = if row_count == 0 {
             let token = generate_token().await?;
-            info!(%token, "Generated bootstrap token");
-            tokio::fs::write(home_path()?.join(BOOTSTRAP_TOKEN_PATH), token.as_str()).await?;
             Some(token)
         } else {
             None
@@ -69,11 +71,6 @@ impl Bootstrapper {

         Ok(Self { token })
     }

-    #[cfg(test)]
-    pub fn get_token(&self) -> Option<String> {
-        self.token.clone()
-    }
 }

 #[messages]
@@ -96,3 +93,11 @@ impl Bootstrapper {
         }
     }
 }
+
+#[messages]
+impl Bootstrapper {
+    #[message]
+    pub fn get_token(&self) -> Option<String> {
+        self.token.clone()
+    }
+}
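One subtlety in `generate_token`: `Alphanumeric` samples `u8` bytes, and `Display` for `u8` formats the byte as a decimal number, not as the character it encodes, so building the token string needs an explicit `char::from`. A std-only sketch of the difference:

```rust
// Why `to_string()` on a sampled byte is wrong for token building.
fn main() {
    let byte = b'a'; // a sample byte, as an Alphanumeric-style distribution yields

    // Display for u8 prints the numeric value...
    assert_eq!(byte.to_string(), "97");
    // ...while char::from keeps the intended character.
    assert_eq!(char::from(byte).to_string(), "a");

    // Collecting converted bytes yields the expected text.
    let token: String = [b'a', b'B', b'3'].iter().map(|&b| char::from(b)).collect();
    assert_eq!(token, "aB3");
}
```

Without the conversion a 64-sample token would come out as a run of decimal digits two to three times longer than intended.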
@@ -1,939 +0,0 @@
|
|||||||
use diesel::{
|
|
||||||
ExpressionMethods as _, OptionalExtension, QueryDsl, SelectableHelper,
|
|
||||||
dsl::{insert_into, update},
|
|
||||||
};
|
|
||||||
use diesel_async::{AsyncConnection, RunQueryDsl};
|
|
||||||
use kameo::{Actor, Reply, messages};
|
|
||||||
use memsafe::MemSafe;
|
|
||||||
use strum::{EnumDiscriminants, IntoDiscriminant};
|
|
||||||
use tracing::{error, info};
|
|
||||||
|
|
||||||
use crate::{
|
|
||||||
actors::keyholder::v1::{KeyCell, Nonce},
|
|
||||||
db::{
|
|
||||||
self,
|
|
||||||
models::{self, RootKeyHistory},
|
|
||||||
schema::{self},
|
|
||||||
},
|
|
||||||
};
|
|
||||||
|
|
||||||
pub mod v1;
|
|
||||||
|
|
||||||
#[derive(Default, EnumDiscriminants)]
|
|
||||||
#[strum_discriminants(derive(Reply), vis(pub))]
|
|
||||||
enum State {
|
|
||||||
#[default]
|
|
||||||
Unbootstrapped,
|
|
||||||
Sealed {
|
|
||||||
root_key_history_id: i32,
|
|
||||||
},
|
|
||||||
Unsealed {
|
|
||||||
root_key_history_id: i32,
|
|
||||||
root_key: KeyCell,
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
#[derive(Debug, thiserror::Error, miette::Diagnostic)]
|
|
||||||
pub enum Error {
|
|
||||||
#[error("Keyholder is already bootstrapped")]
|
|
||||||
#[diagnostic(code(arbiter::keyholder::already_bootstrapped))]
|
|
||||||
AlreadyBootstrapped,
|
|
||||||
#[error("Keyholder is not bootstrapped")]
|
|
||||||
#[diagnostic(code(arbiter::keyholder::not_bootstrapped))]
|
|
||||||
NotBootstrapped,
|
|
||||||
#[error("Invalid key provided")]
|
|
||||||
#[diagnostic(code(arbiter::keyholder::invalid_key))]
|
|
||||||
InvalidKey,
|
|
||||||
|
|
||||||
#[error("Requested aead entry not found")]
|
|
||||||
#[diagnostic(code(arbiter::keyholder::aead_not_found))]
|
|
||||||
NotFound,
|
|
||||||
|
|
||||||
#[error("Encryption error: {0}")]
|
|
||||||
#[diagnostic(code(arbiter::keyholder::encryption_error))]
|
|
||||||
Encryption(#[from] chacha20poly1305::aead::Error),
|
|
||||||
|
|
||||||
#[error("Database error: {0}")]
|
|
||||||
#[diagnostic(code(arbiter::keyholder::database_error))]
|
|
||||||
DatabaseConnection(#[from] db::PoolError),
|
|
||||||
|
|
||||||
#[error("Database transaction error: {0}")]
|
|
||||||
#[diagnostic(code(arbiter::keyholder::database_transaction_error))]
|
|
||||||
DatabaseTransaction(#[from] diesel::result::Error),
|
|
||||||
|
|
||||||
#[error("Broken database")]
|
|
||||||
#[diagnostic(code(arbiter::keyholder::broken_database))]
|
|
||||||
BrokenDatabase,
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Manages vault root key and tracks current state of the vault (bootstrapped/unbootstrapped, sealed/unsealed).
|
|
||||||
/// Provides API for encrypting and decrypting data using the vault root key.
|
|
||||||
/// Abstraction over database to make sure nonces are never reused and encryption keys are never exposed in plaintext outside of this actor.
|
|
||||||
#[derive(Actor)]
|
|
||||||
pub struct KeyHolder {
|
|
||||||
db: db::DatabasePool,
|
|
||||||
state: State,
|
|
||||||
}
|
|
||||||
|
|
||||||
#[messages]
|
|
||||||
impl KeyHolder {
|
|
||||||
pub async fn new(db: db::DatabasePool) -> Result<Self, Error> {
|
|
||||||
let state = {
|
|
||||||
let mut conn = db.get().await?;
|
|
||||||
|
|
||||||
let (root_key_history,) = schema::arbiter_settings::table
|
|
||||||
.left_join(schema::root_key_history::table)
|
|
||||||
.select((Option::<RootKeyHistory>::as_select(),))
|
|
||||||
.get_result::<(Option<RootKeyHistory>,)>(&mut conn)
|
|
||||||
.await?;
|
|
||||||
|
|
||||||
match root_key_history {
|
|
||||||
Some(root_key_history) => State::Sealed {
|
|
||||||
root_key_history_id: root_key_history.id,
|
|
||||||
},
|
|
||||||
None => State::Unbootstrapped,
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
Ok(Self { db, state })
|
|
||||||
}
|
|
||||||
|
|
||||||
// Exclusive transaction to avoid race condtions if multiple keyholders write
|
|
||||||
// additional layer of protection against nonce-reuse
|
|
||||||
async fn get_new_nonce(pool: &db::DatabasePool, root_key_id: i32) -> Result<Nonce, Error> {
|
|
||||||
let mut conn = pool.get().await?;
|
|
||||||
|
|
||||||
let nonce = conn
|
|
||||||
.exclusive_transaction(|conn| {
|
|
||||||
Box::pin(async move {
|
|
||||||
let current_nonce: Vec<u8> = schema::root_key_history::table
|
|
||||||
.filter(schema::root_key_history::id.eq(root_key_id))
|
|
||||||
.select(schema::root_key_history::data_encryption_nonce)
|
|
||||||
.first(conn)
|
|
||||||
.await?;
|
|
||||||
|
|
||||||
let mut nonce =
|
|
||||||
v1::Nonce::try_from(current_nonce.as_slice()).map_err(|_| {
|
|
||||||
error!(
|
|
||||||
"Broken database: invalid nonce for root key history id={}",
|
|
||||||
root_key_id
|
|
||||||
);
|
|
||||||
Error::BrokenDatabase
|
|
||||||
})?;
|
|
||||||
nonce.increment();
|
|
||||||
|
|
||||||
update(schema::root_key_history::table)
|
|
||||||
.filter(schema::root_key_history::id.eq(root_key_id))
|
|
||||||
.set(schema::root_key_history::data_encryption_nonce.eq(nonce.to_vec()))
|
|
||||||
.execute(conn)
|
|
||||||
.await?;
|
|
||||||
|
|
||||||
Result::<_, Error>::Ok(nonce)
|
|
||||||
})
|
|
||||||
})
|
|
||||||
.await?;
|
|
||||||
|
|
||||||
Ok(nonce)
|
|
||||||
}
|
|
||||||
|
|
||||||
#[message]
|
|
||||||
pub async fn bootstrap(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
|
|
||||||
if !matches!(self.state, State::Unbootstrapped) {
|
|
||||||
return Err(Error::AlreadyBootstrapped);
|
|
||||||
}
|
|
||||||
let salt = v1::generate_salt();
|
|
||||||
let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);
|
|
||||||
let mut root_key = KeyCell::new_secure_random();
|
|
||||||
|
|
||||||
// Zero nonces are fine because they are one-time
|
|
||||||
let root_key_nonce = v1::Nonce::default();
|
|
||||||
let data_encryption_nonce = v1::Nonce::default();
|
|
||||||
|
|
||||||
let root_key_ciphertext: Vec<u8> = {
|
|
||||||
let root_key_reader = root_key.0.read().unwrap();
|
|
||||||
let root_key_reader = root_key_reader.as_slice();
|
|
||||||
seal_key
|
|
||||||
.encrypt(&root_key_nonce, v1::ROOT_KEY_TAG, root_key_reader)
|
|
||||||
.map_err(|err| {
|
|
||||||
error!(?err, "Fatal bootstrap error");
|
|
||||||
Error::Encryption(err)
|
|
||||||
})?
|
|
||||||
};
|
|
||||||
|
|
||||||
let mut conn = self.db.get().await?;
|
|
||||||
|
|
||||||
let data_encryption_nonce_bytes = data_encryption_nonce.to_vec();
|
|
||||||
let root_key_history_id = conn
|
|
||||||
.transaction(|conn| {
|
|
||||||
Box::pin(async move {
|
|
||||||
let root_key_history_id: i32 = insert_into(schema::root_key_history::table)
|
|
||||||
.values(&models::NewRootKeyHistory {
|
|
||||||
ciphertext: root_key_ciphertext,
|
|
||||||
tag: v1::ROOT_KEY_TAG.to_vec(),
|
|
||||||
root_key_encryption_nonce: root_key_nonce.to_vec(),
|
|
||||||
data_encryption_nonce: data_encryption_nonce_bytes,
|
|
||||||
schema_version: 1,
|
|
||||||
salt: salt.to_vec(),
|
|
||||||
})
|
|
||||||
.returning(schema::root_key_history::id)
|
|
||||||
.get_result(conn)
|
|
||||||
.await?;
|
|
||||||
|
|
||||||
update(schema::arbiter_settings::table)
|
|
||||||
.set(schema::arbiter_settings::root_key_id.eq(root_key_history_id))
|
|
||||||
.execute(conn)
|
|
||||||
.await?;
|
|
||||||
|
|
||||||
Result::<_, diesel::result::Error>::Ok(root_key_history_id)
|
|
||||||
})
|
|
||||||
})
|
|
||||||
.await?;
|
|
||||||
|
|
||||||
self.state = State::Unsealed {
|
|
||||||
root_key,
|
|
||||||
root_key_history_id,
|
|
||||||
};
|
|
||||||
|
|
||||||
info!("Keyholder bootstrapped successfully");
|
|
||||||
|
|
||||||
Ok(())
|
|
||||||
}
|
|
||||||
|
|
||||||
#[message]
|
|
||||||
pub async fn try_unseal(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
|
|
||||||
let State::Sealed {
|
|
||||||
root_key_history_id,
|
|
||||||
} = &self.state
|
|
||||||
else {
|
|
||||||
return Err(Error::NotBootstrapped);
|
|
||||||
};
|
|
||||||
|
|
||||||
        // We don't want to hold a connection while doing the expensive KDF work
        let current_key = {
            let mut conn = self.db.get().await?;
            schema::root_key_history::table
                .filter(schema::root_key_history::id.eq(*root_key_history_id))
                .select(RootKeyHistory::as_select())
                .first(&mut conn)
                .await?
        };

        let salt = &current_key.salt;
        let salt = v1::Salt::try_from(salt.as_slice()).map_err(|_| {
            error!("Broken database: invalid salt for root key");
            Error::BrokenDatabase
        })?;
        let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);

        let mut root_key = MemSafe::new(current_key.ciphertext.clone()).unwrap();

        let nonce = v1::Nonce::try_from(current_key.root_key_encryption_nonce.as_slice()).map_err(
            |_| {
                error!("Broken database: invalid nonce for root key");
                Error::BrokenDatabase
            },
        )?;

        seal_key
            .decrypt_in_place(&nonce, v1::ROOT_KEY_TAG, &mut root_key)
            .map_err(|err| {
                error!(?err, "Failed to unseal root key: invalid seal key");
                Error::InvalidKey
            })?;

        self.state = State::Unsealed {
            root_key_history_id: current_key.id,
            root_key: v1::KeyCell::try_from(root_key).map_err(|err| {
                error!(?err, "Broken database: invalid encryption key size");
                Error::BrokenDatabase
            })?,
        };

        info!("Keyholder unsealed successfully");

        Ok(())
    }

    // Decrypts the `aead_encrypted` entry with the given ID and returns the plaintext
    #[message]
    pub async fn decrypt(&mut self, aead_id: i32) -> Result<MemSafe<Vec<u8>>, Error> {
        let State::Unsealed { root_key, .. } = &mut self.state else {
            return Err(Error::NotBootstrapped);
        };

        let row: models::AeadEncrypted = {
            let mut conn = self.db.get().await?;
            schema::aead_encrypted::table
                .select(models::AeadEncrypted::as_select())
                .filter(schema::aead_encrypted::id.eq(aead_id))
                .first(&mut conn)
                .await
                .optional()?
                .ok_or(Error::NotFound)?
        };

        let nonce = v1::Nonce::try_from(row.current_nonce.as_slice()).map_err(|_| {
            error!(
                "Broken database: invalid nonce for aead_encrypted id={}",
                aead_id
            );
            Error::BrokenDatabase
        })?;
        let mut output = MemSafe::new(row.ciphertext).unwrap();
        root_key.decrypt_in_place(&nonce, v1::TAG, &mut output)?;
        Ok(output)
    }

    // Creates a new `aead_encrypted` entry in the database and returns its ID
    #[message]
    pub async fn create_new(&mut self, mut plaintext: MemSafe<Vec<u8>>) -> Result<i32, Error> {
        let State::Unsealed {
            root_key,
            root_key_history_id,
        } = &mut self.state
        else {
            return Err(Error::NotBootstrapped);
        };

        // Order matters here - `get_new_nonce` acquires a connection, so we need to call it before the next acquire
        // Borrow checker note: the &mut borrow a few lines above is disjoint from this field
        let nonce = Self::get_new_nonce(&self.db, *root_key_history_id).await?;

        let mut ciphertext_buffer = plaintext.write().unwrap();
        let ciphertext_buffer: &mut Vec<u8> = ciphertext_buffer.as_mut();
        root_key.encrypt_in_place(&nonce, v1::TAG, &mut *ciphertext_buffer)?;

        let ciphertext = std::mem::take(ciphertext_buffer);

        let mut conn = self.db.get().await?;
        let aead_id: i32 = insert_into(schema::aead_encrypted::table)
            .values(&models::NewAeadEncrypted {
                ciphertext,
                tag: v1::TAG.to_vec(),
                current_nonce: nonce.to_vec(),
                schema_version: 1,
                associated_root_key_id: *root_key_history_id,
                created_at: chrono::Utc::now().timestamp() as i32,
            })
            .returning(schema::aead_encrypted::id)
            .get_result(&mut conn)
            .await?;

        Ok(aead_id)
    }

    #[message]
    pub fn get_state(&self) -> StateDiscriminants {
        self.state.discriminant()
    }
}

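The `create_new` body above encrypts the plaintext buffer in place behind the `MemSafe` write guard and then moves the bytes out with `std::mem::take` instead of copying the ciphertext. A minimal sketch of that take-out-of-the-guard pattern, with `reverse` standing in for the real in-place encryption (the helper name here is illustrative, not from the crate):

```rust
// Sketch of the pattern used by `create_new`: mutate the buffer in place,
// then move its contents out with std::mem::take, which leaves an empty
// Vec behind rather than copying the (sensitive) bytes.
fn take_out_of_buffer(buffer: &mut Vec<u8>) -> Vec<u8> {
    buffer.reverse(); // stand-in for encrypt_in_place
    std::mem::take(buffer) // moves the bytes out; `buffer` is now empty
}
```

The upshot: after `take`, the guarded buffer no longer holds the ciphertext, so nothing sensitive is duplicated in memory.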
#[cfg(test)]
mod tests {
    use std::collections::{HashMap, HashSet};
    use std::sync::Arc;

    use diesel::dsl::{insert_into, sql_query, update};
    use diesel_async::RunQueryDsl;
    use kameo::actor::{ActorRef, Spawn as _};
    use memsafe::MemSafe;
    use tokio::sync::Mutex;
    use tokio::task::JoinSet;

    use crate::db::{self, models::ArbiterSetting};

    use super::*;

    async fn seed_settings(pool: &db::DatabasePool) {
        let mut conn = pool.get().await.unwrap();
        insert_into(schema::arbiter_settings::table)
            .values(&ArbiterSetting {
                id: 1,
                root_key_id: None,
                cert_key: vec![],
                cert: vec![],
            })
            .execute(&mut conn)
            .await
            .unwrap();
    }

    async fn bootstrapped_actor(db: &db::DatabasePool) -> KeyHolder {
        seed_settings(db).await;
        let mut actor = KeyHolder::new(db.clone()).await.unwrap();
        let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        actor.bootstrap(seal_key).await.unwrap();
        actor
    }

    async fn write_concurrently(
        actor: ActorRef<KeyHolder>,
        prefix: &'static str,
        count: usize,
    ) -> Vec<(i32, Vec<u8>)> {
        let mut set = JoinSet::new();
        for i in 0..count {
            let actor = actor.clone();
            set.spawn(async move {
                let plaintext = format!("{prefix}-{i}").into_bytes();
                let id = {
                    actor
                        .ask(CreateNew {
                            plaintext: MemSafe::new(plaintext.clone()).unwrap(),
                        })
                        .await
                        .unwrap()
                };
                (id, plaintext)
            });
        }

        let mut out = Vec::with_capacity(count);
        while let Some(res) = set.join_next().await {
            out.push(res.unwrap());
        }
        out
    }

    #[test_log::test(tokio::test)]
    async fn test_bootstrap() {
        let db = db::create_test_pool().await;
        seed_settings(&db).await;
        let mut actor = KeyHolder::new(db.clone()).await.unwrap();

        assert!(matches!(actor.state, State::Unbootstrapped));

        let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        actor.bootstrap(seal_key).await.unwrap();

        assert!(matches!(actor.state, State::Unsealed { .. }));

        let mut conn = db.get().await.unwrap();
        let row: models::RootKeyHistory = schema::root_key_history::table
            .select(models::RootKeyHistory::as_select())
            .first(&mut conn)
            .await
            .unwrap();

        assert_eq!(row.schema_version, 1);
        assert_eq!(row.tag, v1::ROOT_KEY_TAG);
        assert!(!row.ciphertext.is_empty());
        assert!(!row.salt.is_empty());
        assert_eq!(row.data_encryption_nonce, v1::Nonce::default().to_vec());
    }

    #[test_log::test(tokio::test)]
    async fn test_bootstrap_rejects_double() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let seal_key2 = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        let err = actor.bootstrap(seal_key2).await.unwrap_err();
        assert!(matches!(err, Error::AlreadyBootstrapped));
    }

    #[test_log::test(tokio::test)]
    async fn test_create_decrypt_roundtrip() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let plaintext = b"hello arbiter";
        let aead_id = actor
            .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
            .await
            .unwrap();

        let mut decrypted = actor.decrypt(aead_id).await.unwrap();
        let decrypted = decrypted.read().unwrap();
        assert_eq!(*decrypted, plaintext);
    }

    #[test_log::test(tokio::test)]
    async fn test_create_new_before_bootstrap_fails() {
        let db = db::create_test_pool().await;
        seed_settings(&db).await;
        let mut actor = KeyHolder::new(db).await.unwrap();

        let err = actor
            .create_new(MemSafe::new(b"data".to_vec()).unwrap())
            .await
            .unwrap_err();
        assert!(matches!(err, Error::NotBootstrapped));
    }

    #[test_log::test(tokio::test)]
    async fn test_decrypt_before_bootstrap_fails() {
        let db = db::create_test_pool().await;
        seed_settings(&db).await;
        let mut actor = KeyHolder::new(db).await.unwrap();

        let err = actor.decrypt(1).await.unwrap_err();
        assert!(matches!(err, Error::NotBootstrapped));
    }

    #[test_log::test(tokio::test)]
    async fn test_decrypt_nonexistent_returns_not_found() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let err = actor.decrypt(9999).await.unwrap_err();
        assert!(matches!(err, Error::NotFound));
    }

    #[test_log::test(tokio::test)]
    async fn test_new_restores_sealed_state() {
        let db = db::create_test_pool().await;
        let actor = bootstrapped_actor(&db).await;
        drop(actor);

        let actor2 = KeyHolder::new(db).await.unwrap();
        assert!(matches!(actor2.state, State::Sealed { .. }));
    }

    #[test_log::test(tokio::test)]
    async fn test_nonce_never_reused() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let n = 5;
        let mut ids = Vec::with_capacity(n);
        for i in 0..n {
            let id = actor
                .create_new(MemSafe::new(format!("secret {i}").into_bytes()).unwrap())
                .await
                .unwrap();
            ids.push(id);
        }

        // read all stored nonces from the DB
        let mut conn = db.get().await.unwrap();
        let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
            .select(models::AeadEncrypted::as_select())
            .load(&mut conn)
            .await
            .unwrap();

        assert_eq!(rows.len(), n);

        let nonces: Vec<&Vec<u8>> = rows.iter().map(|r| &r.current_nonce).collect();
        let unique: HashSet<&Vec<u8>> = nonces.iter().copied().collect();
        assert_eq!(nonces.len(), unique.len(), "all nonces must be unique");

        // verify nonces are sequential increments from 1
        for (i, row) in rows.iter().enumerate() {
            let mut expected = v1::Nonce::default();
            for _ in 0..=i {
                expected.increment();
            }
            assert_eq!(row.current_nonce, expected.to_vec(), "nonce {i} mismatch");
        }

        // verify data_encryption_nonce on root_key_history tracks the latest nonce
        let root_row: models::RootKeyHistory = schema::root_key_history::table
            .select(models::RootKeyHistory::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        let last_nonce = &rows.last().unwrap().current_nonce;
        assert_eq!(
            &root_row.data_encryption_nonce, last_nonce,
            "root_key_history must track the latest nonce"
        );
    }

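The test above pins down the nonce scheme: `Nonce::default()` is all zeroes and `increment()` advances a counter, so every write gets a fresh nonce. A minimal sketch of such a big-endian counter nonce, assuming 24-byte XChaCha20 nonces (`CounterNonce` is illustrative, not the crate's real `Nonce` type):

```rust
// Illustrative model of the counter-nonce behavior the tests assert:
// a 24-byte big-endian counter starting at all zeroes.
const NONCE_LENGTH: usize = 24;

#[derive(Default, Clone, PartialEq, Debug)]
struct CounterNonce([u8; NONCE_LENGTH]);

impl CounterNonce {
    /// Advance the big-endian counter by one, carrying from the last byte.
    fn increment(&mut self) {
        for byte in self.0.iter_mut().rev() {
            let (next, overflowed) = byte.overflowing_add(1);
            *byte = next;
            if !overflowed {
                return;
            }
        }
        // A full wrap-around would mean nonce reuse; a real vault must
        // rotate the key before the counter space is exhausted.
        panic!("nonce space exhausted");
    }

    fn to_vec(&self) -> Vec<u8> {
        self.0.to_vec()
    }
}
```

Because the counter is big-endian, lexicographic comparison of the `to_vec()` bytes agrees with numeric order, which is what assertions like `n2.to_vec() > n1.to_vec()` elsewhere in this module rely on.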
    #[test_log::test(tokio::test)]
    async fn test_unseal_correct_password() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let plaintext = b"survive a restart";
        let aead_id = actor
            .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
            .await
            .unwrap();
        drop(actor);

        let mut actor = KeyHolder::new(db.clone()).await.unwrap();
        assert!(matches!(actor.state, State::Sealed { .. }));

        let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        actor.try_unseal(seal_key).await.unwrap();
        assert!(matches!(actor.state, State::Unsealed { .. }));

        // previously encrypted data is still decryptable
        let mut decrypted = actor.decrypt(aead_id).await.unwrap();
        assert_eq!(*decrypted.read().unwrap(), plaintext);
    }

    #[test_log::test(tokio::test)]
    async fn test_unseal_wrong_then_correct_password() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let plaintext = b"important data";
        let aead_id = actor
            .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
            .await
            .unwrap();
        drop(actor);

        let mut actor = KeyHolder::new(db.clone()).await.unwrap();
        assert!(matches!(actor.state, State::Sealed { .. }));

        // wrong password
        let bad_key = MemSafe::new(b"wrong-password".to_vec()).unwrap();
        let err = actor.try_unseal(bad_key).await.unwrap_err();
        assert!(matches!(err, Error::InvalidKey));
        assert!(
            matches!(actor.state, State::Sealed { .. }),
            "state must remain Sealed after failed attempt"
        );

        // correct password
        let good_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
        actor.try_unseal(good_key).await.unwrap();
        assert!(matches!(actor.state, State::Unsealed { .. }));

        let mut decrypted = actor.decrypt(aead_id).await.unwrap();
        assert_eq!(*decrypted.read().unwrap(), plaintext);
    }

    #[test_log::test(tokio::test)]
    async fn test_ciphertext_differs_across_entries() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;

        let plaintext = b"same content";
        let id1 = actor
            .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
            .await
            .unwrap();
        let id2 = actor
            .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
            .await
            .unwrap();

        // different nonces => different ciphertext, even for identical plaintext
        let mut conn = db.get().await.unwrap();
        let row1: models::AeadEncrypted = schema::aead_encrypted::table
            .filter(schema::aead_encrypted::id.eq(id1))
            .select(models::AeadEncrypted::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        let row2: models::AeadEncrypted = schema::aead_encrypted::table
            .filter(schema::aead_encrypted::id.eq(id2))
            .select(models::AeadEncrypted::as_select())
            .first(&mut conn)
            .await
            .unwrap();

        assert_ne!(row1.ciphertext, row2.ciphertext);

        // but both decrypt to the same plaintext
        let mut d1 = actor.decrypt(id1).await.unwrap();
        let mut d2 = actor.decrypt(id2).await.unwrap();
        assert_eq!(*d1.read().unwrap(), plaintext);
        assert_eq!(*d2.read().unwrap(), plaintext);
    }

    #[test_log::test(tokio::test)]
    async fn concurrent_create_new_no_duplicate_nonces() {
        let db = db::create_test_pool().await;
        let actor = KeyHolder::spawn(bootstrapped_actor(&db).await);

        let writes = write_concurrently(actor, "nonce-unique", 32).await;
        assert_eq!(writes.len(), 32);

        let mut conn = db.get().await.unwrap();
        let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
            .select(models::AeadEncrypted::as_select())
            .load(&mut conn)
            .await
            .unwrap();
        assert_eq!(rows.len(), 32);

        let nonces: Vec<&Vec<u8>> = rows.iter().map(|r| &r.current_nonce).collect();
        let unique: HashSet<&Vec<u8>> = nonces.iter().copied().collect();
        assert_eq!(nonces.len(), unique.len(), "all nonces must be unique");
    }

    #[test_log::test(tokio::test)]
    async fn concurrent_create_new_root_nonce_never_moves_backward() {
        let db = db::create_test_pool().await;
        let actor = KeyHolder::spawn(bootstrapped_actor(&db).await);

        write_concurrently(actor, "root-max", 24).await;

        let mut conn = db.get().await.unwrap();
        let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
            .select(models::AeadEncrypted::as_select())
            .load(&mut conn)
            .await
            .unwrap();
        let max_nonce = rows
            .iter()
            .map(|r| r.current_nonce.clone())
            .max()
            .expect("at least one row");

        let root_row: models::RootKeyHistory = schema::root_key_history::table
            .select(models::RootKeyHistory::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        assert_eq!(root_row.data_encryption_nonce, max_nonce);
    }

    #[test_log::test(tokio::test)]
    async fn nonce_monotonic_even_when_nonce_allocation_interleaves() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;
        let root_key_history_id = match actor.state {
            State::Unsealed {
                root_key_history_id,
                ..
            } => root_key_history_id,
            _ => panic!("expected unsealed state"),
        };

        let n1 = KeyHolder::get_new_nonce(&db, root_key_history_id)
            .await
            .unwrap();
        let n2 = KeyHolder::get_new_nonce(&db, root_key_history_id)
            .await
            .unwrap();
        assert!(n2.to_vec() > n1.to_vec(), "nonce must increase");

        let mut conn = db.get().await.unwrap();
        let root_row: models::RootKeyHistory = schema::root_key_history::table
            .select(models::RootKeyHistory::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        assert_eq!(root_row.data_encryption_nonce, n2.to_vec());

        let id = actor
            .create_new(MemSafe::new(b"post-interleave".to_vec()).unwrap())
            .await
            .unwrap();
        let row: models::AeadEncrypted = schema::aead_encrypted::table
            .filter(schema::aead_encrypted::id.eq(id))
            .select(models::AeadEncrypted::as_select())
            .first(&mut conn)
            .await
            .unwrap();
        assert!(
            row.current_nonce > n2.to_vec(),
            "next write must advance nonce"
        );
    }

    #[test_log::test(tokio::test)]
    async fn insert_failure_does_not_create_partial_row() {
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;
        let root_key_history_id = match actor.state {
            State::Unsealed {
                root_key_history_id,
                ..
            } => root_key_history_id,
            _ => panic!("expected unsealed state"),
        };

        let mut conn = db.get().await.unwrap();
        let before_count: i64 = schema::aead_encrypted::table
            .count()
            .get_result(&mut conn)
            .await
            .unwrap();
        let before_root_nonce: Vec<u8> = schema::root_key_history::table
            .filter(schema::root_key_history::id.eq(root_key_history_id))
            .select(schema::root_key_history::data_encryption_nonce)
            .first(&mut conn)
            .await
            .unwrap();

        sql_query(
            "CREATE TRIGGER fail_aead_insert BEFORE INSERT ON aead_encrypted BEGIN SELECT RAISE(ABORT, 'forced test failure'); END;",
        )
        .execute(&mut conn)
        .await
        .unwrap();
        drop(conn);

        let err = actor
            .create_new(MemSafe::new(b"should fail".to_vec()).unwrap())
            .await
            .unwrap_err();
        assert!(matches!(err, Error::DatabaseTransaction(_)));

        let mut conn = db.get().await.unwrap();
        sql_query("DROP TRIGGER fail_aead_insert;")
            .execute(&mut conn)
            .await
            .unwrap();

        let after_count: i64 = schema::aead_encrypted::table
            .count()
            .get_result(&mut conn)
            .await
            .unwrap();
        assert_eq!(
            before_count, after_count,
            "failed insert must not create a row"
        );

        let after_root_nonce: Vec<u8> = schema::root_key_history::table
            .filter(schema::root_key_history::id.eq(root_key_history_id))
            .select(schema::root_key_history::data_encryption_nonce)
            .first(&mut conn)
            .await
            .unwrap();
        assert!(
            after_root_nonce > before_root_nonce,
            "current behavior allows a nonce gap on a failed insert"
        );
    }

    #[test_log::test(tokio::test)]
    async fn decrypt_roundtrip_after_high_concurrency() {
        let db = db::create_test_pool().await;
        let actor = KeyHolder::spawn(bootstrapped_actor(&db).await);

        let writes = write_concurrently(actor, "roundtrip", 40).await;
        let expected: HashMap<i32, Vec<u8>> = writes.into_iter().collect();

        let mut decryptor = KeyHolder::new(db.clone()).await.unwrap();
        decryptor
            .try_unseal(MemSafe::new(b"test-seal-key".to_vec()).unwrap())
            .await
            .unwrap();

        for (id, plaintext) in expected {
            let mut decrypted = decryptor.decrypt(id).await.unwrap();
            assert_eq!(*decrypted.read().unwrap(), plaintext);
        }
    }

    // #[tokio::test]
    // #[test_log::test]
    // async fn swapping_ciphertext_and_nonce_between_rows_changes_logical_binding() {
    //     let db = db::create_test_pool().await;
    //     let mut actor = bootstrapped_actor(&db).await;

    //     let plaintext1 = b"entry-one";
    //     let plaintext2 = b"entry-two";
    //     let id1 = actor
    //         .create_new(MemSafe::new(plaintext1.to_vec()).unwrap())
    //         .await
    //         .unwrap();
    //     let id2 = actor
    //         .create_new(MemSafe::new(plaintext2.to_vec()).unwrap())
    //         .await
    //         .unwrap();

    //     let mut conn = db.get().await.unwrap();
    //     let row1: models::AeadEncrypted = schema::aead_encrypted::table
    //         .filter(schema::aead_encrypted::id.eq(id1))
    //         .select(models::AeadEncrypted::as_select())
    //         .first(&mut conn)
    //         .await
    //         .unwrap();
    //     let row2: models::AeadEncrypted = schema::aead_encrypted::table
    //         .filter(schema::aead_encrypted::id.eq(id2))
    //         .select(models::AeadEncrypted::as_select())
    //         .first(&mut conn)
    //         .await
    //         .unwrap();

    //     update(schema::aead_encrypted::table.filter(schema::aead_encrypted::id.eq(id1)))
    //         .set((
    //             schema::aead_encrypted::ciphertext.eq(row2.ciphertext.clone()),
    //             schema::aead_encrypted::current_nonce.eq(row2.current_nonce.clone()),
    //         ))
    //         .execute(&mut conn)
    //         .await
    //         .unwrap();
    //     update(schema::aead_encrypted::table.filter(schema::aead_encrypted::id.eq(id2)))
    //         .set((
    //             schema::aead_encrypted::ciphertext.eq(row1.ciphertext.clone()),
    //             schema::aead_encrypted::current_nonce.eq(row1.current_nonce.clone()),
    //         ))
    //         .execute(&mut conn)
    //         .await
    //         .unwrap();

    //     let mut d1 = actor.decrypt(id1).await.unwrap();
    //     let mut d2 = actor.decrypt(id2).await.unwrap();
    //     assert_eq!(*d1.read().unwrap(), plaintext2);
    //     assert_eq!(*d2.read().unwrap(), plaintext1);
    // }

    #[test_log::test(tokio::test)]
    async fn broken_db_nonce_format_fails_closed() {
        // malformed root_key_history nonce must fail create_new
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;
        let root_key_history_id = match actor.state {
            State::Unsealed {
                root_key_history_id,
                ..
            } => root_key_history_id,
            _ => panic!("expected unsealed state"),
        };

        let mut conn = db.get().await.unwrap();
        update(
            schema::root_key_history::table
                .filter(schema::root_key_history::id.eq(root_key_history_id)),
        )
        .set(schema::root_key_history::data_encryption_nonce.eq(vec![1, 2, 3]))
        .execute(&mut conn)
        .await
        .unwrap();
        drop(conn);

        let err = actor
            .create_new(MemSafe::new(b"must fail".to_vec()).unwrap())
            .await
            .unwrap_err();
        assert!(matches!(err, Error::BrokenDatabase));

        // malformed per-row nonce must fail decrypt
        let db = db::create_test_pool().await;
        let mut actor = bootstrapped_actor(&db).await;
        let id = actor
            .create_new(MemSafe::new(b"decrypt target".to_vec()).unwrap())
            .await
            .unwrap();
        let mut conn = db.get().await.unwrap();
        update(schema::aead_encrypted::table.filter(schema::aead_encrypted::id.eq(id)))
            .set(schema::aead_encrypted::current_nonce.eq(vec![7, 8]))
            .execute(&mut conn)
            .await
            .unwrap();
        drop(conn);

        let err = actor.decrypt(id).await.unwrap_err();
        assert!(matches!(err, Error::BrokenDatabase));
    }
}

@@ -0,0 +1 @@
+pub mod v1;

@@ -42,12 +42,12 @@ impl<'a> TryFrom<&'a [u8]> for Nonce {
             return Err(());
         }
         let mut nonce = [0u8; NONCE_LENGTH];
-        nonce.copy_from_slice(&value);
+        nonce.copy_from_slice(value);
         Ok(Self(nonce))
     }
 }

-pub struct KeyCell(pub(super) MemSafe<Key>);
+pub struct KeyCell(pub MemSafe<Key>);
 impl From<MemSafe<Key>> for KeyCell {
     fn from(value: MemSafe<Key>) -> Self {
         Self(value)
@@ -85,10 +85,6 @@ impl KeyCell {
         key.into()
     }

-    pub fn into_inner(self) -> MemSafe<Key> {
-        self.0
-    }
-
     pub fn encrypt_in_place(
         &mut self,
         nonce: &Nonce,
@@ -128,9 +124,8 @@ impl KeyCell {
         let mut cipher = XChaCha20Poly1305::new(key_ref);
         let nonce = XNonce::from_slice(nonce.0.as_ref());

-
         let ciphertext = cipher.encrypt(
-            &nonce,
+            nonce,
             Payload {
                 msg: plaintext.as_ref(),
                 aad: associated_data,
@@ -142,7 +137,7 @@ impl KeyCell {

 pub type Salt = [u8; ArgonSalt::RECOMMENDED_LENGTH];

-pub(super) fn generate_salt() -> Salt {
+pub fn generate_salt() -> Salt {
     let mut salt = Salt::default();
     let mut rng = StdRng::try_from_rng(&mut SysRng).unwrap();
     rng.fill_bytes(&mut salt);
@@ -151,7 +146,7 @@ pub(super) fn generate_salt() -> Salt {

 /// User password might be of different length, have not enough entropy, etc...
 /// Derive a fixed-length key from the password using Argon2id, which is designed for password hashing and key derivation.
-pub(super) fn derive_seal_key(mut password: MemSafe<Vec<u8>>, salt: &Salt) -> KeyCell {
+pub fn derive_seal_key(mut password: MemSafe<Vec<u8>>, salt: &Salt) -> KeyCell {
     let params = argon2::Params::new(262_144, 3, 4, None).unwrap();
     let hasher = Argon2::new(Algorithm::Argon2id, argon2::Version::V0x13, params);
     let mut key = MemSafe::new(Key::default()).unwrap();

server/crates/arbiter-server/src/actors/keyholder/mod.rs (new file, 407 lines)
@@ -0,0 +1,407 @@
use diesel::{
    ExpressionMethods as _, OptionalExtension, QueryDsl, SelectableHelper,
    dsl::{insert_into, update},
};
use diesel_async::{AsyncConnection, RunQueryDsl};
use kameo::{Actor, Reply, messages};
use memsafe::MemSafe;
use strum::{EnumDiscriminants, IntoDiscriminant};
use tracing::{error, info};

use crate::db::{
    self,
    models::{self, RootKeyHistory},
    schema::{self},
};
use encryption::v1::{self, KeyCell, Nonce};

pub mod encryption;

#[derive(Default, EnumDiscriminants)]
#[strum_discriminants(derive(Reply), vis(pub))]
enum State {
    #[default]
    Unbootstrapped,
    Sealed {
        root_key_history_id: i32,
    },
    Unsealed {
        root_key_history_id: i32,
        root_key: KeyCell,
    },
}
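The `State` enum above encodes the vault lifecycle: `Unbootstrapped` until a root key exists, `Sealed` after a restart, and `Unsealed` once the seal key has been verified. A toy sketch of those transitions (names and error strings are illustrative, not the actor's real API, and the real actor carries the root key itself in `Unsealed`):

```rust
// Toy model of the keyholder lifecycle; real transitions also touch the DB.
#[derive(Debug, PartialEq)]
enum ToyState {
    Unbootstrapped,
    Sealed { root_key_history_id: i32 },
    Unsealed { root_key_history_id: i32 },
}

// bootstrap: only legal from Unbootstrapped; creates the first root key row.
fn bootstrap(state: ToyState) -> Result<ToyState, &'static str> {
    match state {
        ToyState::Unbootstrapped => Ok(ToyState::Unsealed {
            root_key_history_id: 1,
        }),
        _ => Err("AlreadyBootstrapped"),
    }
}

// try_unseal: only legal from Sealed; fails closed on a wrong password.
fn try_unseal(state: ToyState, password_ok: bool) -> Result<ToyState, &'static str> {
    match state {
        ToyState::Sealed {
            root_key_history_id,
        } if password_ok => Ok(ToyState::Unsealed {
            root_key_history_id,
        }),
        ToyState::Sealed { .. } => Err("InvalidKey"),
        _ => Err("NotBootstrapped"),
    }
}
```

Note that a failed unseal returns an error without consuming the sealed state, mirroring the test that asserts the state stays `Sealed` after a wrong password.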
|
|
||||||
|
#[derive(Debug, thiserror::Error, miette::Diagnostic)]
|
||||||
|
pub enum Error {
|
||||||
|
#[error("Keyholder is already bootstrapped")]
|
||||||
|
#[diagnostic(code(arbiter::keyholder::already_bootstrapped))]
|
||||||
|
AlreadyBootstrapped,
|
||||||
|
#[error("Keyholder is not bootstrapped")]
|
||||||
|
#[diagnostic(code(arbiter::keyholder::not_bootstrapped))]
|
||||||
|
NotBootstrapped,
|
||||||
|
#[error("Invalid key provided")]
|
||||||
|
#[diagnostic(code(arbiter::keyholder::invalid_key))]
|
||||||
|
InvalidKey,
|
||||||
|
|
||||||
|
#[error("Requested aead entry not found")]
|
||||||
|
#[diagnostic(code(arbiter::keyholder::aead_not_found))]
|
||||||
|
NotFound,
|
||||||
|
|
||||||
|
#[error("Encryption error: {0}")]
|
||||||
|
#[diagnostic(code(arbiter::keyholder::encryption_error))]
|
||||||
|
Encryption(#[from] chacha20poly1305::aead::Error),
|
||||||
|
|
||||||
|
#[error("Database error: {0}")]
|
||||||
|
#[diagnostic(code(arbiter::keyholder::database_error))]
|
||||||
|
DatabaseConnection(#[from] db::PoolError),
|
||||||
|
|
||||||
|
#[error("Database transaction error: {0}")]
|
||||||
|
#[diagnostic(code(arbiter::keyholder::database_transaction_error))]
|
||||||
|
DatabaseTransaction(#[from] diesel::result::Error),
|
||||||
|
|
||||||
|
#[error("Broken database")]
|
||||||
|
#[diagnostic(code(arbiter::keyholder::broken_database))]
|
||||||
|
BrokenDatabase,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Manages vault root key and tracks current state of the vault (bootstrapped/unbootstrapped, sealed/unsealed).
|
||||||
|
/// Provides API for encrypting and decrypting data using the vault root key.
|
||||||
|
/// Abstraction over database to make sure nonces are never reused and encryption keys are never exposed in plaintext outside of this actor.
|
||||||
|
#[derive(Actor)]
|
||||||
|
pub struct KeyHolder {
|
||||||
|
db: db::DatabasePool,
|
||||||
|
state: State,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[messages]
|
||||||
|
impl KeyHolder {
|
||||||
|
pub async fn new(db: db::DatabasePool) -> Result<Self, Error> {
|
||||||
|
let state = {
|
||||||
|
let mut conn = db.get().await?;
|
||||||
|
|
||||||
|
let (root_key_history,) = schema::arbiter_settings::table
|
||||||
|
.left_join(schema::root_key_history::table)
|
||||||
|
.select((Option::<RootKeyHistory>::as_select(),))
|
||||||
|
.get_result::<(Option<RootKeyHistory>,)>(&mut conn)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
match root_key_history {
|
||||||
|
Some(root_key_history) => State::Sealed {
|
||||||
|
root_key_history_id: root_key_history.id,
|
||||||
|
},
|
||||||
|
None => State::Unbootstrapped,
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
Ok(Self { db, state })
|
||||||
|
}
|
||||||
|
|
||||||
|
// Exclusive transaction to avoid race condtions if multiple keyholders write
|
||||||
|
// additional layer of protection against nonce-reuse
|
||||||
|
async fn get_new_nonce(pool: &db::DatabasePool, root_key_id: i32) -> Result<Nonce, Error> {
|
||||||
|
let mut conn = pool.get().await?;
|
||||||
|
|
||||||
|
let nonce = conn
|
||||||
|
.exclusive_transaction(|conn| {
|
||||||
|
Box::pin(async move {
|
||||||
|
let current_nonce: Vec<u8> = schema::root_key_history::table
|
||||||
|
.filter(schema::root_key_history::id.eq(root_key_id))
|
||||||
|
.select(schema::root_key_history::data_encryption_nonce)
|
||||||
|
.first(conn)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let mut nonce =
|
||||||
|
v1::Nonce::try_from(current_nonce.as_slice()).map_err(|_| {
|
||||||
|
error!(
|
||||||
|
"Broken database: invalid nonce for root key history id={}",
|
||||||
|
root_key_id
|
||||||
|
);
|
||||||
|
Error::BrokenDatabase
|
||||||
|
})?;
|
||||||
|
nonce.increment();
|
||||||
|
|
||||||
|
update(schema::root_key_history::table)
|
||||||
|
.filter(schema::root_key_history::id.eq(root_key_id))
|
||||||
|
.set(schema::root_key_history::data_encryption_nonce.eq(nonce.to_vec()))
|
||||||
|
.execute(conn)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Result::<_, Error>::Ok(nonce)
|
||||||
|
})
|
||||||
|
})
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(nonce)
|
||||||
|
}
|
||||||
|
|
||||||
|
#[message]
|
||||||
|
pub async fn bootstrap(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
|
||||||
|
if !matches!(self.state, State::Unbootstrapped) {
|
||||||
|
return Err(Error::AlreadyBootstrapped);
|
||||||
|
}
|
||||||
|
let salt = v1::generate_salt();
|
||||||
|
let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);
|
||||||
|
let mut root_key = KeyCell::new_secure_random();
|
||||||
|
|
||||||
|
// Zero nonces are fine because they are one-time
|
||||||
|
let root_key_nonce = v1::Nonce::default();
|
||||||
|
let data_encryption_nonce = v1::Nonce::default();
|
||||||
|
|
||||||
|
let root_key_ciphertext: Vec<u8> = {
|
||||||
|
let root_key_reader = root_key.0.read().unwrap();
|
||||||
|
let root_key_reader = root_key_reader.as_slice();
|
||||||
|
seal_key
|
||||||
|
.encrypt(&root_key_nonce, v1::ROOT_KEY_TAG, root_key_reader)
|
||||||
|
.map_err(|err| {
|
||||||
|
error!(?err, "Fatal bootstrap error");
|
||||||
|
Error::Encryption(err)
|
||||||
|
})?
|
||||||
|
};
|
||||||
|
|
||||||
|
let mut conn = self.db.get().await?;
|
||||||
|
|
||||||
|
let data_encryption_nonce_bytes = data_encryption_nonce.to_vec();
|
||||||
|
let root_key_history_id = conn
|
||||||
|
.transaction(|conn| {
|
||||||
|
Box::pin(async move {
|
||||||
|
let root_key_history_id: i32 = insert_into(schema::root_key_history::table)
|
||||||
|
.values(&models::NewRootKeyHistory {
|
||||||
|
ciphertext: root_key_ciphertext,
|
||||||
|
tag: v1::ROOT_KEY_TAG.to_vec(),
|
||||||
|
root_key_encryption_nonce: root_key_nonce.to_vec(),
|
||||||
|
data_encryption_nonce: data_encryption_nonce_bytes,
|
||||||
|
schema_version: 1,
|
||||||
|
salt: salt.to_vec(),
|
||||||
|
})
|
||||||
|
.returning(schema::root_key_history::id)
|
||||||
|
.get_result(conn)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
update(schema::arbiter_settings::table)
|
||||||
|
.set(schema::arbiter_settings::root_key_id.eq(root_key_history_id))
|
||||||
|
.execute(conn)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Result::<_, diesel::result::Error>::Ok(root_key_history_id)
|
||||||
|
})
|
||||||
|
})
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
self.state = State::Unsealed {
|
||||||
|
root_key,
|
||||||
|
root_key_history_id,
|
||||||
|
};
|
||||||
|
|
||||||
|
info!("Keyholder bootstrapped successfully");
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
#[message]
|
||||||
|
pub async fn try_unseal(&mut self, seal_key_raw: MemSafe<Vec<u8>>) -> Result<(), Error> {
|
||||||
|
let State::Sealed {
|
||||||
|
root_key_history_id,
|
||||||
|
} = &self.state
|
||||||
|
else {
|
||||||
|
return Err(Error::NotBootstrapped);
|
||||||
|
};
|
||||||
|
|
||||||
|
// We don't want to hold connection while doing expensive KDF work
|
||||||
|
let current_key = {
|
||||||
|
let mut conn = self.db.get().await?;
|
||||||
|
schema::root_key_history::table
|
||||||
|
.filter(schema::root_key_history::id.eq(*root_key_history_id))
|
||||||
|
.select(schema::root_key_history::data_encryption_nonce)
|
||||||
|
.select(RootKeyHistory::as_select())
|
||||||
|
.first(&mut conn)
|
||||||
|
.await?
|
||||||
|
};
|
||||||
|
|
||||||
|
let salt = ¤t_key.salt;
|
||||||
|
let salt = v1::Salt::try_from(salt.as_slice()).map_err(|_| {
|
||||||
|
error!("Broken database: invalid salt for root key");
|
||||||
|
Error::BrokenDatabase
|
||||||
|
})?;
|
||||||
|
let mut seal_key = v1::derive_seal_key(seal_key_raw, &salt);
|
||||||
|
|
||||||
|
let mut root_key = MemSafe::new(current_key.ciphertext.clone()).unwrap();
|
||||||
|
|
||||||
|
let nonce = v1::Nonce::try_from(current_key.root_key_encryption_nonce.as_slice()).map_err(
|
||||||
|
|_| {
|
||||||
|
error!("Broken database: invalid nonce for root key");
|
||||||
|
Error::BrokenDatabase
|
||||||
|
},
|
||||||
|
)?;
|
||||||
|
|
||||||
|
seal_key
|
||||||
|
.decrypt_in_place(&nonce, v1::ROOT_KEY_TAG, &mut root_key)
|
||||||
|
.map_err(|err| {
|
||||||
|
error!(?err, "Failed to unseal root key: invalid seal key");
|
||||||
|
Error::InvalidKey
|
||||||
|
})?;
|
||||||
|
|
||||||
|
self.state = State::Unsealed {
|
||||||
|
root_key_history_id: current_key.id,
|
||||||
|
root_key: v1::KeyCell::try_from(root_key).map_err(|err| {
|
||||||
|
error!(?err, "Broken database: invalid encryption key size");
|
||||||
|
Error::BrokenDatabase
|
||||||
|
})?,
|
||||||
|
};
|
||||||
|
|
||||||
|
info!("Keyholder unsealed successfully");
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
// Decrypts the `aead_encrypted` entry with the given ID and returns the plaintext
|
||||||
|
#[message]
|
||||||
|
pub async fn decrypt(&mut self, aead_id: i32) -> Result<MemSafe<Vec<u8>>, Error> {
|
||||||
|
let State::Unsealed { root_key, .. } = &mut self.state else {
|
||||||
|
return Err(Error::NotBootstrapped);
|
||||||
|
};
|
||||||
|
|
||||||
|
let row: models::AeadEncrypted = {
|
||||||
|
let mut conn = self.db.get().await?;
|
||||||
|
schema::aead_encrypted::table
|
||||||
|
.select(models::AeadEncrypted::as_select())
|
||||||
|
.filter(schema::aead_encrypted::id.eq(aead_id))
|
||||||
|
.first(&mut conn)
|
||||||
|
.await
|
||||||
|
.optional()?
|
||||||
|
.ok_or(Error::NotFound)?
|
||||||
|
};
|
||||||
|
|
||||||
|
let nonce = v1::Nonce::try_from(row.current_nonce.as_slice()).map_err(|_| {
|
||||||
|
error!(
|
||||||
|
"Broken database: invalid nonce for aead_encrypted id={}",
|
||||||
|
aead_id
|
||||||
|
);
|
||||||
|
Error::BrokenDatabase
|
||||||
|
})?;
|
||||||
|
let mut output = MemSafe::new(row.ciphertext).unwrap();
|
||||||
|
root_key.decrypt_in_place(&nonce, v1::TAG, &mut output)?;
|
||||||
|
Ok(output)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Creates new `aead_encrypted` entry in the database and returns it's ID
|
||||||
|
#[message]
|
||||||
|
pub async fn create_new(&mut self, mut plaintext: MemSafe<Vec<u8>>) -> Result<i32, Error> {
|
||||||
|
let State::Unsealed {
|
||||||
|
root_key,
|
||||||
|
root_key_history_id,
|
||||||
|
} = &mut self.state
|
||||||
|
else {
|
||||||
|
return Err(Error::NotBootstrapped);
|
||||||
|
};
|
||||||
|
|
||||||
|
// Order matters here - `get_new_nonce` acquires connection, so we need to call it before next acquire
|
||||||
|
// Borrow checker note: &mut borrow a few lines above is disjoint from this field
|
||||||
|
let nonce = Self::get_new_nonce(&self.db, *root_key_history_id).await?;
|
||||||
|
|
||||||
|
let mut ciphertext_buffer = plaintext.write().unwrap();
|
||||||
|
let ciphertext_buffer: &mut Vec<u8> = ciphertext_buffer.as_mut();
|
||||||
|
root_key.encrypt_in_place(&nonce, v1::TAG, &mut *ciphertext_buffer)?;
|
||||||
|
|
||||||
|
let ciphertext = std::mem::take(ciphertext_buffer);
|
||||||
|
|
||||||
|
let mut conn = self.db.get().await?;
|
||||||
|
let aead_id: i32 = insert_into(schema::aead_encrypted::table)
|
||||||
|
.values(&models::NewAeadEncrypted {
|
||||||
|
ciphertext,
|
||||||
|
tag: v1::TAG.to_vec(),
|
||||||
|
current_nonce: nonce.to_vec(),
|
||||||
|
schema_version: 1,
|
||||||
|
associated_root_key_id: *root_key_history_id,
|
||||||
|
created_at: chrono::Utc::now().timestamp() as i32,
|
||||||
|
})
|
||||||
|
.returning(schema::aead_encrypted::id)
|
||||||
|
.get_result(&mut conn)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(aead_id)
|
||||||
|
}
|
||||||
|
|
||||||
|
#[message]
|
||||||
|
pub fn get_state(&self) -> StateDiscriminants {
|
||||||
|
self.state.discriminant()
|
||||||
|
}
|
||||||
|
|
||||||
|
#[message]
|
||||||
|
pub fn seal(&mut self) -> Result<(), Error> {
|
||||||
|
let State::Unsealed {
|
||||||
|
root_key_history_id,
|
||||||
|
..
|
||||||
|
} = &self.state
|
||||||
|
else {
|
||||||
|
return Err(Error::NotBootstrapped);
|
||||||
|
};
|
||||||
|
self.state = State::Sealed {
|
||||||
|
root_key_history_id: *root_key_history_id,
|
||||||
|
};
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use diesel::SelectableHelper;
|
||||||
|
|
||||||
|
use diesel_async::RunQueryDsl;
|
||||||
|
use memsafe::MemSafe;
|
||||||
|
|
||||||
|
use crate::db::{self};
|
||||||
|
|
||||||
|
use super::*;
|
||||||
|
|
||||||
|
async fn bootstrapped_actor(db: &db::DatabasePool) -> KeyHolder {
|
||||||
|
let mut actor = KeyHolder::new(db.clone()).await.unwrap();
|
||||||
|
let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
|
||||||
|
actor.bootstrap(seal_key).await.unwrap();
|
||||||
|
actor
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
#[test_log::test]
|
||||||
|
async fn nonce_monotonic_even_when_nonce_allocation_interleaves() {
|
||||||
|
let db = db::create_test_pool().await;
|
||||||
|
let mut actor = bootstrapped_actor(&db).await;
|
||||||
|
let root_key_history_id = match actor.state {
|
||||||
|
State::Unsealed {
|
||||||
|
root_key_history_id,
|
||||||
|
..
|
||||||
|
} => root_key_history_id,
|
||||||
|
_ => panic!("expected unsealed state"),
|
||||||
|
};
|
||||||
|
|
||||||
|
let n1 = KeyHolder::get_new_nonce(&db, root_key_history_id)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let n2 = KeyHolder::get_new_nonce(&db, root_key_history_id)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
assert!(n2.to_vec() > n1.to_vec(), "nonce must increase");
|
||||||
|
|
||||||
|
let mut conn = db.get().await.unwrap();
|
||||||
|
let root_row: models::RootKeyHistory = schema::root_key_history::table
|
||||||
|
.select(models::RootKeyHistory::as_select())
|
||||||
|
.first(&mut conn)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
assert_eq!(root_row.data_encryption_nonce, n2.to_vec());
|
||||||
|
|
||||||
|
let id = actor
|
||||||
|
.create_new(MemSafe::new(b"post-interleave".to_vec()).unwrap())
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let row: models::AeadEncrypted = schema::aead_encrypted::table
|
||||||
|
.filter(schema::aead_encrypted::id.eq(id))
|
||||||
|
.select(models::AeadEncrypted::as_select())
|
||||||
|
.first(&mut conn)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
assert!(
|
||||||
|
row.current_nonce > n2.to_vec(),
|
||||||
|
"next write must advance nonce"
|
||||||
|
);
|
||||||
|
}
|
||||||
|
}
|
||||||
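`get_new_nonce` above persists the incremented nonce inside an exclusive transaction before it is ever used for encryption. The `v1::Nonce` type lives in the `encryption` module, which is not part of this diff, so the following is a hypothetical, std-only sketch of what a big-endian `increment` over XChaCha20-Poly1305's 24-byte nonce could look like; it also shows why lexicographic comparison of the stored byte vectors stays monotonic, which is exactly what the test asserts (byte order and width here are assumptions):

```rust
// Hypothetical counterpart of the `Nonce::increment` used above: treat the
// 24-byte nonce as a big-endian counter and add one, carrying from the last byte.
fn increment(nonce: &mut [u8; 24]) {
    for byte in nonce.iter_mut().rev() {
        let (next, carry) = byte.overflowing_add(1);
        *byte = next;
        if !carry {
            return; // no carry into the next byte, done
        }
    }
    // 2^192 increments would be needed to get here
    panic!("nonce space exhausted; a new root key would be required");
}

fn main() {
    let mut n = [0u8; 24];
    increment(&mut n);
    assert_eq!(n[23], 1);

    // The carry propagates across byte boundaries, so the big-endian byte
    // arrays compare monotonically as plain slices:
    let mut m = [0u8; 24];
    m[23] = 255;
    increment(&mut m);
    assert_eq!((m[22], m[23]), (1, 0));
    assert!(m > n);
    println!("nonce increments stay monotonic");
}
```

Allocating the nonce in its own exclusive transaction, before the row that uses it is written, means a crash between the two steps can skip a nonce but never reuse one.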
server/crates/arbiter-server/src/actors/mod.rs (new file, 40 lines)
```rust
use kameo::actor::{ActorRef, Spawn};
use miette::Diagnostic;
use thiserror::Error;

use crate::{
    actors::{bootstrap::Bootstrapper, keyholder::KeyHolder},
    db,
};

pub mod bootstrap;
pub mod client;
pub mod keyholder;
pub mod user_agent;

#[derive(Error, Debug, Diagnostic)]
pub enum SpawnError {
    #[error("Failed to spawn Bootstrapper actor")]
    #[diagnostic(code(SpawnError::Bootstrapper))]
    Bootstrapper(#[from] bootstrap::Error),

    #[error("Failed to spawn KeyHolder actor")]
    #[diagnostic(code(SpawnError::KeyHolder))]
    KeyHolder(#[from] keyholder::Error),
}

/// Long-lived actors that are shared across all connections and handle global state and operations
#[derive(Clone)]
pub struct GlobalActors {
    pub key_holder: ActorRef<KeyHolder>,
    pub bootstrapper: ActorRef<Bootstrapper>,
}

impl GlobalActors {
    pub async fn spawn(db: db::DatabasePool) -> Result<Self, SpawnError> {
        Ok(Self {
            bootstrapper: Bootstrapper::spawn(Bootstrapper::new(&db).await?),
            key_holder: KeyHolder::spawn(KeyHolder::new(db.clone()).await?),
        })
    }
}
```
server/crates/arbiter-server/src/actors/user_agent/error.rs (new file, 57 lines)
```rust
use tonic::Status;

use crate::db;

#[derive(Debug, thiserror::Error)]
pub enum UserAgentError {
    #[error("Missing payload in request")]
    MissingPayload,

    #[error("Invalid bootstrap token")]
    InvalidBootstrapToken,

    #[error("Public key not registered")]
    PubkeyNotRegistered,

    #[error("Invalid public key format")]
    InvalidPubkey,

    #[error("Invalid signature length")]
    InvalidSignatureLength,

    #[error("Invalid challenge solution")]
    InvalidChallengeSolution,

    #[error("Invalid state for operation")]
    InvalidState,

    #[error("Actor unavailable")]
    ActorUnavailable,

    #[error("Database error")]
    Database(#[from] diesel::result::Error),

    #[error("Database pool error")]
    DatabasePool(#[from] db::PoolError),
}

impl From<UserAgentError> for Status {
    fn from(err: UserAgentError) -> Self {
        match err {
            UserAgentError::MissingPayload
            | UserAgentError::InvalidBootstrapToken
            | UserAgentError::InvalidPubkey
            | UserAgentError::InvalidSignatureLength => Status::invalid_argument(err.to_string()),

            UserAgentError::PubkeyNotRegistered | UserAgentError::InvalidChallengeSolution => {
                Status::unauthenticated(err.to_string())
            }

            UserAgentError::InvalidState => Status::failed_precondition(err.to_string()),

            UserAgentError::ActorUnavailable
            | UserAgentError::Database(_)
            | UserAgentError::DatabasePool(_) => Status::internal(err.to_string()),
        }
    }
}
```
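The `From<UserAgentError> for Status` impl above buckets every variant into one of four gRPC status codes, so handlers can return domain errors and let the conversion decide what the client sees. A std-only sketch of the same categorization (status codes shown as strings here; the real code constructs `tonic::Status` values):

```rust
// Simplified mirror of the mapping above, over a subset of the variants.
#[derive(Debug)]
enum UserAgentError {
    MissingPayload,
    InvalidBootstrapToken,
    PubkeyNotRegistered,
    InvalidState,
    ActorUnavailable,
}

fn grpc_code(err: &UserAgentError) -> &'static str {
    use UserAgentError::*;
    match err {
        // caller sent something malformed
        MissingPayload | InvalidBootstrapToken => "INVALID_ARGUMENT",
        // caller failed to prove identity
        PubkeyNotRegistered => "UNAUTHENTICATED",
        // request arrived at the wrong point in the session
        InvalidState => "FAILED_PRECONDITION",
        // server-side failure, details deliberately not leaked
        ActorUnavailable => "INTERNAL",
    }
}

fn main() {
    assert_eq!(grpc_code(&UserAgentError::PubkeyNotRegistered), "UNAUTHENTICATED");
    assert_eq!(grpc_code(&UserAgentError::InvalidState), "FAILED_PRECONDITION");
    assert_eq!(grpc_code(&UserAgentError::ActorUnavailable), "INTERNAL");
    println!("mapping ok");
}
```

Keeping internal variants (`ActorUnavailable`, database errors) under a single `INTERNAL` bucket avoids leaking infrastructure details to unauthenticated clients.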
```diff
@@ -1,91 +1,106 @@
-use std::{
-    ops::DerefMut,
-    sync::Mutex,
-};
+use std::{ops::DerefMut, sync::Mutex};
 
 use arbiter_proto::proto::{
+    UnsealEncryptedKey, UnsealResult, UnsealStart, UnsealStartResponse, UserAgentRequest,
     UserAgentResponse,
     auth::{
-        self, AuthChallengeRequest, AuthOk, ServerMessage as AuthServerMessage,
+        self, AuthChallengeRequest, AuthOk, ClientMessage as ClientAuthMessage,
+        ServerMessage as AuthServerMessage,
+        client_message::Payload as ClientAuthPayload,
         server_message::Payload as ServerAuthPayload,
     },
-    unseal::{UnsealEncryptedKey, UnsealResult, UnsealStart, UnsealStartResponse},
+    user_agent_request::Payload as UserAgentRequestPayload,
     user_agent_response::Payload as UserAgentResponsePayload,
 };
-use chacha20poly1305::{
-    AeadInPlace, XChaCha20Poly1305, XNonce,
-    aead::KeyInit,
-};
+use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
 use diesel::{ExpressionMethods as _, OptionalExtension as _, QueryDsl, dsl::update};
-use diesel_async::{AsyncConnection, RunQueryDsl};
+use diesel_async::RunQueryDsl;
 use ed25519_dalek::VerifyingKey;
-use kameo::{Actor, actor::ActorRef, messages};
+use kameo::{Actor, actor::Recipient, error::SendError, messages, prelude::Message};
 use memsafe::MemSafe;
-use tokio::sync::mpsc::Sender;
-use tonic::Status;
 use tracing::{error, info};
 use x25519_dalek::{EphemeralSecret, PublicKey};
 
 use crate::{
     ServerContext,
     actors::{
-        bootstrap::{Bootstrapper, ConsumeToken},
+        GlobalActors,
+        bootstrap::ConsumeToken,
+        keyholder::{self, TryUnseal},
         user_agent::state::{
-            AuthRequestContext, ChallengeContext, DummyContext, UnsealContext, UserAgentEvents,
-            UserAgentStateMachine, UserAgentStates,
+            ChallengeContext, DummyContext, UnsealContext, UserAgentEvents, UserAgentStateMachine,
+            UserAgentStates,
         },
     },
     db::{self, schema},
-    errors::GrpcStatusExt,
 };
 
+mod error;
 mod state;
-#[cfg(test)]
-mod tests;
-
-mod transport;
-pub(crate) use transport::handle_user_agent;
+pub use error::UserAgentError;
 
 #[derive(Actor)]
 pub struct UserAgentActor {
     db: db::DatabasePool,
-    bootstapper: ActorRef<Bootstrapper>,
+    actors: GlobalActors,
     state: UserAgentStateMachine<DummyContext>,
-    // will be used in future
-    _tx: Sender<Result<UserAgentResponse, Status>>,
+    transport: Recipient<Result<UserAgentResponse, UserAgentError>>,
 }
 
 impl UserAgentActor {
     pub(crate) fn new(
         context: ServerContext,
-        tx: Sender<Result<UserAgentResponse, Status>>,
+        transport: Recipient<Result<UserAgentResponse, UserAgentError>>,
     ) -> Self {
         Self {
             db: context.db.clone(),
-            bootstapper: context.bootstrapper.clone(),
+            actors: context.actors.clone(),
             state: UserAgentStateMachine::new(DummyContext),
-            _tx: tx,
+            transport,
         }
     }
 
-    #[cfg(test)]
-    pub(crate) fn new_manual(
+    pub fn new_manual(
         db: db::DatabasePool,
-        bootstapper: ActorRef<Bootstrapper>,
-        tx: Sender<Result<UserAgentResponse, Status>>,
+        actors: GlobalActors,
+        transport: Recipient<Result<UserAgentResponse, UserAgentError>>,
     ) -> Self {
         Self {
             db,
-            bootstapper,
+            actors,
             state: UserAgentStateMachine::new(DummyContext),
-            _tx: tx,
+            transport,
         }
     }
 
-    fn transition(&mut self, event: UserAgentEvents) -> Result<(), Status> {
+    async fn process_request(&mut self, req: UserAgentRequest) -> Output {
+        let msg = req.payload.ok_or_else(|| {
+            error!(actor = "useragent", "Received message with no payload");
+            UserAgentError::MissingPayload
+        })?;
+
+        match msg {
+            UserAgentRequestPayload::AuthMessage(ClientAuthMessage {
+                payload: Some(ClientAuthPayload::AuthChallengeRequest(req)),
+            }) => self.handle_auth_challenge_request(req).await,
+            UserAgentRequestPayload::AuthMessage(ClientAuthMessage {
+                payload: Some(ClientAuthPayload::AuthChallengeSolution(solution)),
+            }) => self.handle_auth_challenge_solution(solution).await,
+            UserAgentRequestPayload::UnsealStart(unseal_start) => {
+                self.handle_unseal_request(unseal_start).await
+            }
+            UserAgentRequestPayload::UnsealEncryptedKey(unseal_encrypted_key) => {
+                self.handle_unseal_encrypted_key(unseal_encrypted_key).await
+            }
+            _ => Err(UserAgentError::MissingPayload),
+        }
+    }
+
+    fn transition(&mut self, event: UserAgentEvents) -> Result<(), UserAgentError> {
         self.state.process_event(event).map_err(|e| {
             error!(?e, "State transition failed");
-            Status::internal("State machine error")
+            UserAgentError::InvalidState
         })?;
         Ok(())
     }
@@ -94,23 +109,24 @@ impl UserAgentActor {
         &mut self,
         pubkey: ed25519_dalek::VerifyingKey,
         token: String,
-    ) -> Result<UserAgentResponse, Status> {
+    ) -> Output {
         let token_ok: bool = self
-            .bootstapper
+            .actors
+            .bootstrapper
             .ask(ConsumeToken { token })
             .await
             .map_err(|e| {
                 error!(?pubkey, "Failed to consume bootstrap token: {e}");
-                Status::internal("Bootstrap token consumption failed")
+                UserAgentError::ActorUnavailable
             })?;
 
         if !token_ok {
             error!(?pubkey, "Invalid bootstrap token provided");
-            return Err(Status::invalid_argument("Invalid bootstrap token"));
+            return Err(UserAgentError::InvalidBootstrapToken);
         }
 
         {
-            let mut conn = self.db.get().await.to_status()?;
+            let mut conn = self.db.get().await?;
 
             diesel::insert_into(schema::useragent_client::table)
                 .values((
@@ -118,8 +134,7 @@ impl UserAgentActor {
                     schema::useragent_client::nonce.eq(1),
                 ))
                 .execute(&mut conn)
-                .await
-                .to_status()?;
+                .await?;
         }
 
         self.transition(UserAgentEvents::ReceivedBootstrapToken)?;
@@ -129,9 +144,9 @@ impl UserAgentActor {
 
     async fn auth_with_challenge(&mut self, pubkey: VerifyingKey, pubkey_bytes: Vec<u8>) -> Output {
         let nonce: Option<i32> = {
-            let mut db_conn = self.db.get().await.to_status()?;
+            let mut db_conn = self.db.get().await?;
             db_conn
-                .transaction(|conn| {
+                .exclusive_transaction(|conn| {
                     Box::pin(async move {
                         let current_nonce = schema::useragent_client::table
                             .filter(
@@ -153,18 +168,17 @@ impl UserAgentActor {
                     })
                 })
                 .await
-                .optional()
-                .to_status()?
+                .optional()?
         };
 
         let Some(nonce) = nonce else {
             error!(?pubkey, "Public key not found in database");
-            return Err(Status::unauthenticated("Public key not registered"));
+            return Err(UserAgentError::PubkeyNotRegistered);
         };
 
         let challenge = auth::AuthChallenge {
             pubkey: pubkey_bytes,
-            nonce: nonce,
+            nonce,
         };
 
         self.transition(UserAgentEvents::SentChallenge(ChallengeContext {
@@ -184,19 +198,17 @@ impl UserAgentActor {
     fn verify_challenge_solution(
         &self,
         solution: &auth::AuthChallengeSolution,
-    ) -> Result<(bool, &ChallengeContext), Status> {
+    ) -> Result<(bool, &ChallengeContext), UserAgentError> {
         let UserAgentStates::WaitingForChallengeSolution(challenge_context) = self.state.state()
         else {
             error!("Received challenge solution in invalid state");
-            return Err(Status::invalid_argument(
-                "Invalid state for challenge solution",
-            ));
+            return Err(UserAgentError::InvalidState);
         };
         let formatted_challenge = arbiter_proto::format_challenge(&challenge_context.challenge);
 
         let signature = solution.signature.as_slice().try_into().map_err(|_| {
             error!(?solution, "Invalid signature length");
-            Status::invalid_argument("Invalid signature length")
+            UserAgentError::InvalidSignatureLength
         })?;
 
         let valid = challenge_context
@@ -208,7 +220,7 @@ impl UserAgentActor {
     }
 }
 
-type Output = Result<UserAgentResponse, Status>;
+type Output = Result<UserAgentResponse, UserAgentError>;
 
 fn auth_response(payload: ServerAuthPayload) -> UserAgentResponse {
     UserAgentResponse {
@@ -234,12 +246,11 @@ impl UserAgentActor {
         let client_pubkey_bytes: [u8; 32] = req
             .client_pubkey
             .try_into()
-            .map_err(|_| Status::invalid_argument("client_pubkey must be 32 bytes"))?;
+            .map_err(|_| UserAgentError::InvalidPubkey)?;
 
         let client_public_key = PublicKey::from(client_pubkey_bytes);
 
         self.transition(UserAgentEvents::UnsealRequest(UnsealContext {
-            server_public_key: public_key,
             secret: Mutex::new(Some(secret)),
             client_public_key,
         }))?;
@@ -255,9 +266,7 @@ impl UserAgentActor {
     pub async fn handle_unseal_encrypted_key(&mut self, req: UnsealEncryptedKey) -> Output {
         let UserAgentStates::WaitingForUnsealKey(unseal_context) = self.state.state() else {
             error!("Received unseal encrypted key in invalid state");
-            return Err(Status::failed_precondition(
-                "Invalid state for unseal encrypted key",
-            ));
+            return Err(UserAgentError::InvalidState);
         };
         let ephemeral_secret = {
             let mut secret_lock = unseal_context.secret.lock().unwrap();
@@ -280,39 +289,73 @@ impl UserAgentActor {
         let shared_secret = ephemeral_secret.diffie_hellman(&unseal_context.client_public_key);
         let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());
 
-        let mut root_key_buffer = MemSafe::new(req.ciphertext.clone()).unwrap();
-        let mut write_handle = root_key_buffer.write().unwrap();
-        let write_handle = write_handle.deref_mut();
-
-        let decryption_result = cipher
-            .decrypt_in_place(nonce, &req.associated_data, write_handle);
+        let mut seal_key_buffer = MemSafe::new(req.ciphertext.clone()).unwrap();
+
+        let decryption_result = {
+            let mut write_handle = seal_key_buffer.write().unwrap();
+            let write_handle = write_handle.deref_mut();
+            cipher.decrypt_in_place(nonce, &req.associated_data, write_handle)
+        };
 
         match decryption_result {
-            Ok(_) => todo!("Send key to the keyguarding"),
+            Ok(_) => {
+                match self
+                    .actors
+                    .key_holder
+                    .ask(TryUnseal {
+                        seal_key_raw: seal_key_buffer,
+                    })
+                    .await
+                {
+                    Ok(_) => {
+                        info!("Successfully unsealed key with client-provided key");
+                        self.transition(UserAgentEvents::ReceivedValidKey)?;
+                        Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
+                            UnsealResult::Success.into(),
+                        )))
+                    }
+                    Err(SendError::HandlerError(keyholder::Error::InvalidKey)) => {
+                        self.transition(UserAgentEvents::ReceivedInvalidKey)?;
+                        Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
+                            UnsealResult::InvalidKey.into(),
+                        )))
+                    }
+                    Err(SendError::HandlerError(err)) => {
+                        error!(?err, "Keyholder failed to unseal key");
+                        self.transition(UserAgentEvents::ReceivedInvalidKey)?;
+                        Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
+                            UnsealResult::InvalidKey.into(),
+                        )))
+                    }
+                    Err(err) => {
+                        error!(?err, "Failed to send unseal request to keyholder");
+                        self.transition(UserAgentEvents::ReceivedInvalidKey)?;
+                        Err(UserAgentError::ActorUnavailable)
+                    }
+                }
+            }
             Err(err) => {
                 error!(?err, "Failed to decrypt unseal key");
                 self.transition(UserAgentEvents::ReceivedInvalidKey)?;
```
|
self.transition(UserAgentEvents::ReceivedInvalidKey)?;
|
||||||
return Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
|
Ok(unseal_response(UserAgentResponsePayload::UnsealResult(
|
||||||
UnsealResult::InvalidKey.into(),
|
UnsealResult::InvalidKey.into(),
|
||||||
)));
|
)))
|
||||||
},
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
#[message]
|
#[message]
|
||||||
pub async fn handle_auth_challenge_request(&mut self, req: AuthChallengeRequest) -> Output {
|
pub async fn handle_auth_challenge_request(&mut self, req: AuthChallengeRequest) -> Output {
|
||||||
let pubkey = req.pubkey.as_array().ok_or(Status::invalid_argument(
|
let pubkey = req
|
||||||
"Expected pubkey to have specific length",
|
.pubkey
|
||||||
))?;
|
.as_array()
|
||||||
|
.ok_or(UserAgentError::InvalidPubkey)?;
|
||||||
let pubkey = VerifyingKey::from_bytes(pubkey).map_err(|_err| {
|
let pubkey = VerifyingKey::from_bytes(pubkey).map_err(|_err| {
|
||||||
error!(?pubkey, "Failed to convert to VerifyingKey");
|
error!(?pubkey, "Failed to convert to VerifyingKey");
|
||||||
Status::invalid_argument("Failed to convert pubkey to VerifyingKey")
|
UserAgentError::InvalidPubkey
|
||||||
})?;
|
})?;
|
||||||
|
|
||||||
self.transition(UserAgentEvents::AuthRequest(AuthRequestContext {
|
self.transition(UserAgentEvents::AuthRequest)?;
|
||||||
pubkey,
|
|
||||||
bootstrap_token: req.bootstrap_token.clone(),
|
|
||||||
}))?;
|
|
||||||
|
|
||||||
match req.bootstrap_token {
|
match req.bootstrap_token {
|
||||||
Some(token) => self.auth_with_bootstrap_token(pubkey, token).await,
|
Some(token) => self.auth_with_bootstrap_token(pubkey, token).await,
|
||||||
@@ -337,7 +380,22 @@ impl UserAgentActor {
|
|||||||
} else {
|
} else {
|
||||||
error!("Client provided invalid solution to authentication challenge");
|
error!("Client provided invalid solution to authentication challenge");
|
||||||
self.transition(UserAgentEvents::ReceivedBadSolution)?;
|
self.transition(UserAgentEvents::ReceivedBadSolution)?;
|
||||||
Err(Status::unauthenticated("Invalid challenge solution"))
|
Err(UserAgentError::InvalidChallengeSolution)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
impl Message<UserAgentRequest> for UserAgentActor {
|
||||||
|
type Reply = ();
|
||||||
|
|
||||||
|
async fn handle(
|
||||||
|
&mut self,
|
||||||
|
msg: UserAgentRequest,
|
||||||
|
_ctx: &mut kameo::prelude::Context<Self, Self::Reply>,
|
||||||
|
) -> Self::Reply {
|
||||||
|
let result = self.process_request(msg).await;
|
||||||
|
if let Err(e) = self.transport.tell(result).await {
|
||||||
|
error!(actor = "useragent", "Failed to send response to transport: {}", e);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
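The new `Message<UserAgentRequest>` impl above switches to a fire-and-forget pattern: the handler returns `()` and pushes the processed result onto the transport channel instead of replying to the caller. A minimal std-only sketch of that pattern, with hypothetical `Agent`/`Response` stand-ins for the actor and proto types (not the actual Arbiter API):

```rust
use std::sync::mpsc;

// Hypothetical stand-ins for the actor's request/response types.
#[derive(Debug, PartialEq)]
enum Response {
    Ok(&'static str),
    Err(&'static str),
}

struct Agent {
    transport: mpsc::Sender<Response>,
}

impl Agent {
    // Fire-and-forget handler: the result is not returned to the caller,
    // it is pushed onto the transport channel; a send failure is only logged.
    fn handle(&mut self, req: &str) {
        let result = self.process_request(req);
        if self.transport.send(result).is_err() {
            eprintln!("Failed to send response to transport");
        }
    }

    fn process_request(&mut self, req: &str) -> Response {
        match req {
            "auth" => Response::Ok("challenge"),
            _ => Response::Err("invalid payload"),
        }
    }
}
```

The receiving end of the channel plays the role of the gRPC response stream: errors reach the client through the same channel as successes, so the handler itself never blocks on the reply.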
@@ -12,31 +12,19 @@ pub struct ChallengeContext {
     pub key: VerifyingKey,
 }

-// Request context with deserialized public key for state machine.
-// This intermediate struct is needed because the state machine branches depending on presence of bootstrap token,
-// but we want to have the deserialized key in both branches.
-#[derive(Clone, Debug)]
-pub struct AuthRequestContext {
-    pub pubkey: VerifyingKey,
-    pub bootstrap_token: Option<String>,
-}

 pub struct UnsealContext {
-    pub server_public_key: PublicKey,
     pub client_public_key: PublicKey,
     pub secret: Mutex<Option<EphemeralSecret>>,
 }

 smlang::statemachine!(
     name: UserAgent,
     custom_error: false,
     transitions: {
-        *Init + AuthRequest(AuthRequestContext) / auth_request_context = ReceivedAuthRequest(AuthRequestContext),
-        ReceivedAuthRequest(AuthRequestContext) + ReceivedBootstrapToken = Idle,
+        *Init + AuthRequest = ReceivedAuthRequest,
+        ReceivedAuthRequest + ReceivedBootstrapToken = Idle,

-        ReceivedAuthRequest(AuthRequestContext) + SentChallenge(ChallengeContext) / move_challenge = WaitingForChallengeSolution(ChallengeContext),
+        ReceivedAuthRequest + SentChallenge(ChallengeContext) / move_challenge = WaitingForChallengeSolution(ChallengeContext),

         WaitingForChallengeSolution(ChallengeContext) + ReceivedGoodSolution = Idle,
         WaitingForChallengeSolution(ChallengeContext) + ReceivedBadSolution = AuthError, // block further transitions, but connection should close anyway
@@ -49,28 +37,15 @@ smlang::statemachine!(

 pub struct DummyContext;
 impl UserAgentStateMachineContext for DummyContext {
-    #[allow(missing_docs)]
-    #[allow(clippy::unused_unit)]
-    fn move_challenge(
-        &mut self,
-        _state_data: &AuthRequestContext,
-        event_data: ChallengeContext,
-    ) -> Result<ChallengeContext, ()> {
-        Ok(event_data)
-    }
-
-    #[allow(missing_docs)]
-    #[allow(clippy::unused_unit)]
-    fn auth_request_context(
-        &mut self,
-        event_data: AuthRequestContext,
-    ) -> Result<AuthRequestContext, ()> {
-        Ok(event_data)
-    }
-
     #[allow(missing_docs)]
     #[allow(clippy::unused_unit)]
     fn generate_temp_keypair(&mut self, event_data: UnsealContext) -> Result<UnsealContext, ()> {
         Ok(event_data)
     }
+
+    #[allow(missing_docs)]
+    #[allow(clippy::unused_unit)]
+    fn move_challenge(&mut self, event_data: ChallengeContext) -> Result<ChallengeContext, ()> {
+        Ok(event_data)
+    }
 }
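The diff above drops the data-carrying `AuthRequestContext` from the `smlang` transitions, so only the challenge still travels with the state. A hypothetical plain-enum rendering of the simplified machine (no macro, names illustrative only) shows the same transition table:

```rust
// Sketch of the simplified transitions:
// *Init + AuthRequest = ReceivedAuthRequest; then either a bootstrap token
// or a challenge/solution round-trip reaches Idle; a bad solution parks
// the machine in AuthError.
#[derive(Debug, Clone, PartialEq)]
enum State {
    Init,
    ReceivedAuthRequest,
    WaitingForChallengeSolution { challenge: [u8; 4] },
    Idle,
    AuthError,
}

#[derive(Debug)]
enum Event {
    AuthRequest,
    ReceivedBootstrapToken,
    SentChallenge([u8; 4]),
    ReceivedGoodSolution,
    ReceivedBadSolution,
}

fn transition(state: State, event: Event) -> Result<State, &'static str> {
    use {Event::*, State::*};
    match (state, event) {
        (Init, AuthRequest) => Ok(ReceivedAuthRequest),
        (ReceivedAuthRequest, ReceivedBootstrapToken) => Ok(Idle),
        // Challenge data moves into the state, like `/ move_challenge`.
        (ReceivedAuthRequest, SentChallenge(c)) => {
            Ok(WaitingForChallengeSolution { challenge: c })
        }
        (WaitingForChallengeSolution { .. }, ReceivedGoodSolution) => Ok(Idle),
        // Block further transitions; the connection should close anyway.
        (WaitingForChallengeSolution { .. }, ReceivedBadSolution) => Ok(AuthError),
        _ => Err("invalid transition"),
    }
}
```

Because `AuthRequestContext` is gone, `move_challenge` in the new `DummyContext` only forwards the event data, matching the one-argument guard signature.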
@@ -1,97 +0,0 @@
-use super::UserAgentActor;
-use arbiter_proto::proto::{
-    UserAgentRequest, UserAgentResponse,
-    auth::{
-        ClientMessage as ClientAuthMessage, client_message::Payload as ClientAuthPayload,
-    },
-    user_agent_request::Payload as UserAgentRequestPayload,
-};
-use futures::StreamExt;
-use kameo::{
-    actor::{ActorRef, Spawn as _},
-    error::SendError,
-};
-use tokio::sync::mpsc;
-use tonic::Status;
-use tracing::error;
-
-use crate::{
-    actors::user_agent::{
-        HandleAuthChallengeRequest, HandleAuthChallengeSolution, HandleUnsealEncryptedKey,
-        HandleUnsealRequest,
-    },
-    context::ServerContext,
-};
-
-pub(crate) async fn handle_user_agent(
-    context: ServerContext,
-    mut req_stream: tonic::Streaming<UserAgentRequest>,
-    tx: mpsc::Sender<Result<UserAgentResponse, Status>>,
-) {
-    let actor = UserAgentActor::spawn(UserAgentActor::new(context, tx.clone()));
-
-    while let Some(Ok(req)) = req_stream.next().await
-        && actor.is_alive()
-    {
-        match process_message(&actor, req).await {
-            Ok(resp) => {
-                if tx.send(Ok(resp)).await.is_err() {
-                    error!(actor = "useragent", "Failed to send response to client");
-                    break;
-                }
-            }
-            Err(status) => {
-                let _ = tx.send(Err(status)).await;
-                break;
-            }
-        }
-    }
-
-    actor.kill();
-}
-
-async fn process_message(
-    actor: &ActorRef<UserAgentActor>,
-    req: UserAgentRequest,
-) -> Result<UserAgentResponse, Status> {
-    let msg = req.payload.ok_or_else(|| {
-        error!(actor = "useragent", "Received message with no payload");
-        Status::invalid_argument("Expected message with payload")
-    })?;
-
-    match msg {
-        UserAgentRequestPayload::AuthMessage(ClientAuthMessage {
-            payload: Some(ClientAuthPayload::AuthChallengeRequest(req)),
-        }) => actor
-            .ask(HandleAuthChallengeRequest { req })
-            .await
-            .map_err(into_status),
-        UserAgentRequestPayload::AuthMessage(ClientAuthMessage {
-            payload: Some(ClientAuthPayload::AuthChallengeSolution(solution)),
-        }) => actor
-            .ask(HandleAuthChallengeSolution { solution })
-            .await
-            .map_err(into_status),
-        UserAgentRequestPayload::UnsealStart(unseal_start) => actor
-            .ask(HandleUnsealRequest { req: unseal_start })
-            .await
-            .map_err(into_status),
-        UserAgentRequestPayload::UnsealEncryptedKey(unseal_encrypted_key) => actor
-            .ask(HandleUnsealEncryptedKey {
-                req: unseal_encrypted_key,
-            })
-            .await
-            .map_err(into_status),
-        _ => Err(Status::invalid_argument("Expected message with payload")),
-    }
-}
-
-fn into_status<M>(e: SendError<M, Status>) -> Status {
-    match e {
-        SendError::HandlerError(status) => status,
-        _ => {
-            error!(actor = "useragent", "Failed to send message to actor");
-            Status::internal("session failure")
-        }
-    }
-}
@@ -1,120 +0,0 @@
-use std::sync::Arc;
-
-use diesel::OptionalExtension as _;
-use diesel_async::RunQueryDsl as _;
-use kameo::actor::{ActorRef, Spawn};
-use miette::Diagnostic;
-use thiserror::Error;
-
-use crate::{
-    actors::{
-        bootstrap::{self, Bootstrapper},
-        keyholder::KeyHolder,
-    },
-    context::tls::{TlsDataRaw, TlsManager},
-    db::{self, models::ArbiterSetting, schema::arbiter_settings},
-};
-
-pub mod tls;
-
-#[derive(Error, Debug, Diagnostic)]
-pub enum InitError {
-    #[error("Database setup failed: {0}")]
-    #[diagnostic(code(arbiter_server::init::database_setup))]
-    DatabaseSetup(#[from] db::DatabaseSetupError),
-
-    #[error("Connection acquire failed: {0}")]
-    #[diagnostic(code(arbiter_server::init::database_pool))]
-    DatabasePool(#[from] db::PoolError),
-
-    #[error("Database query error: {0}")]
-    #[diagnostic(code(arbiter_server::init::database_query))]
-    DatabaseQuery(#[from] diesel::result::Error),
-
-    #[error("TLS initialization failed: {0}")]
-    #[diagnostic(code(arbiter_server::init::tls_init))]
-    Tls(#[from] tls::TlsInitError),
-
-    #[error("Bootstrap token generation failed: {0}")]
-    #[diagnostic(code(arbiter_server::init::bootstrap_token))]
-    BootstrapToken(#[from] bootstrap::BootstrapError),
-
-    #[error("KeyHolder initialization failed: {0}")]
-    #[diagnostic(code(arbiter_server::init::keyholder_init))]
-    KeyHolder(#[from] crate::actors::keyholder::Error),
-
-    #[error("I/O Error: {0}")]
-    #[diagnostic(code(arbiter_server::init::io))]
-    Io(#[from] std::io::Error),
-}
-
-pub struct _ServerContextInner {
-    pub db: db::DatabasePool,
-    pub tls: TlsManager,
-    pub bootstrapper: ActorRef<Bootstrapper>,
-    pub keyholder: ActorRef<KeyHolder>,
-}
-#[derive(Clone)]
-pub struct ServerContext(Arc<_ServerContextInner>);
-
-impl std::ops::Deref for ServerContext {
-    type Target = _ServerContextInner;
-
-    fn deref(&self) -> &Self::Target {
-        &self.0
-    }
-}
-
-impl ServerContext {
-    async fn load_tls(
-        db: &mut db::DatabaseConnection,
-        settings: Option<&ArbiterSetting>,
-    ) -> Result<TlsManager, InitError> {
-        match &settings {
-            Some(settings) => {
-                let tls_data_raw = TlsDataRaw {
-                    cert: settings.cert.clone(),
-                    key: settings.cert_key.clone(),
-                };
-
-                Ok(TlsManager::new(Some(tls_data_raw)).await?)
-            }
-            None => {
-                let tls = TlsManager::new(None).await?;
-                let tls_data_raw = tls.bytes();
-
-                diesel::insert_into(arbiter_settings::table)
-                    .values(&ArbiterSetting {
-                        id: 1,
-                        root_key_id: None,
-                        cert_key: tls_data_raw.key,
-                        cert: tls_data_raw.cert,
-                    })
-                    .execute(db)
-                    .await?;
-
-                Ok(tls)
-            }
-        }
-    }
-
-    pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
-        let mut conn = db.get().await?;
-
-        let settings = arbiter_settings::table
-            .first::<ArbiterSetting>(&mut conn)
-            .await
-            .optional()?;
-
-        let tls = Self::load_tls(&mut conn, settings.as_ref()).await?;
-
-        drop(conn);
-
-        Ok(Self(Arc::new(_ServerContextInner {
-            bootstrapper: Bootstrapper::spawn(Bootstrapper::new(&db).await?),
-            keyholder: KeyHolder::spawn(KeyHolder::new(db.clone()).await?),
-            db,
-            tls,
-        })))
-    }
-}
server/crates/arbiter-server/src/context/mod.rs (new file, 65 additions)
@@ -0,0 +1,65 @@
+use std::sync::Arc;
+
+use miette::Diagnostic;
+use thiserror::Error;
+
+use crate::{
+    actors::GlobalActors,
+    context::tls::TlsManager,
+    db::{self},
+};
+
+pub mod tls;
+
+#[derive(Error, Debug, Diagnostic)]
+pub enum InitError {
+    #[error("Database setup failed: {0}")]
+    #[diagnostic(code(arbiter_server::init::database_setup))]
+    DatabaseSetup(#[from] db::DatabaseSetupError),
+
+    #[error("Connection acquire failed: {0}")]
+    #[diagnostic(code(arbiter_server::init::database_pool))]
+    DatabasePool(#[from] db::PoolError),
+
+    #[error("Database query error: {0}")]
+    #[diagnostic(code(arbiter_server::init::database_query))]
+    DatabaseQuery(#[from] diesel::result::Error),
+
+    #[error("TLS initialization failed: {0}")]
+    #[diagnostic(code(arbiter_server::init::tls_init))]
+    Tls(#[from] tls::InitError),
+
+    #[error("Actor spawn failed: {0}")]
+    #[diagnostic(code(arbiter_server::init::actor_spawn))]
+    ActorSpawn(#[from] crate::actors::SpawnError),
+
+    #[error("I/O Error: {0}")]
+    #[diagnostic(code(arbiter_server::init::io))]
+    Io(#[from] std::io::Error),
+}
+
+pub struct _ServerContextInner {
+    pub db: db::DatabasePool,
+    pub tls: TlsManager,
+    pub actors: GlobalActors,
+}
+#[derive(Clone)]
+pub struct ServerContext(Arc<_ServerContextInner>);
+
+impl std::ops::Deref for ServerContext {
+    type Target = _ServerContextInner;
+
+    fn deref(&self) -> &Self::Target {
+        &self.0
+    }
+}
+
+impl ServerContext {
+    pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
+        Ok(Self(Arc::new(_ServerContextInner {
+            actors: GlobalActors::spawn(db.clone()).await?,
+            tls: TlsManager::new(db.clone()).await?,
+            db,
+        })))
+    }
+}
@@ -1,13 +1,36 @@
 use std::string::FromUtf8Error;

+use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper as _};
+use diesel_async::{AsyncConnection, RunQueryDsl};
 use miette::Diagnostic;
-use rcgen::{Certificate, KeyPair};
-use rustls::pki_types::CertificateDer;
+use pem::Pem;
+use rcgen::{
+    BasicConstraints, Certificate, CertificateParams, CertifiedIssuer, DistinguishedName, DnType,
+    IsCa, Issuer, KeyPair, KeyUsagePurpose,
+};
+use rustls::pki_types::{pem::PemObject};
 use thiserror::Error;
+use tonic::transport::CertificateDer;

+use crate::db::{
+    self,
+    models::{NewTlsHistory, TlsHistory},
+    schema::{
+        arbiter_settings,
+        tls_history::{self},
+    },
+};
+
+const ENCODE_CONFIG: pem::EncodeConfig = {
+    let line_ending = match cfg!(target_family = "windows") {
+        true => pem::LineEnding::CRLF,
+        false => pem::LineEnding::LF,
+    };
+    pem::EncodeConfig::new().set_line_ending(line_ending)
+};
+
 #[derive(Error, Debug, Diagnostic)]
-pub enum TlsInitError {
+pub enum InitError {
     #[error("Key generation error during TLS initialization: {0}")]
     #[diagnostic(code(arbiter_server::tls_init::key_generation))]
     KeyGeneration(#[from] rcgen::Error),
@@ -19,71 +42,211 @@ pub enum TlsInitError {
     #[error("Key deserialization error: {0}")]
     #[diagnostic(code(arbiter_server::tls_init::key_deserialization))]
     KeyDeserializationError(rcgen::Error),
+
+    #[error("Database error during TLS initialization: {0}")]
+    #[diagnostic(code(arbiter_server::tls_init::database_error))]
+    DatabaseError(#[from] diesel::result::Error),
+
+    #[error("Pem deserialization error during TLS initialization: {0}")]
+    #[diagnostic(code(arbiter_server::tls_init::pem_deserialization))]
+    PemDeserializationError(#[from] rustls::pki_types::pem::Error),
+
+    #[error("Database pool acquire error during TLS initialization: {0}")]
+    #[diagnostic(code(arbiter_server::tls_init::database_pool_acquire))]
+    DatabasePoolAcquire(#[from] db::PoolError),
 }

-pub struct TlsData {
-    pub cert: CertificateDer<'static>,
-    pub keypair: KeyPair,
+pub type PemCert = String;
+
+pub fn encode_cert_to_pem(cert: &CertificateDer) -> PemCert {
+    pem::encode_config(
+        &Pem::new("CERTIFICATE", cert.to_vec()),
+        ENCODE_CONFIG,
+    )
 }

-pub struct TlsDataRaw {
-    pub cert: Vec<u8>,
-    pub key: Vec<u8>,
+#[allow(unused)]
+struct SerializedTls {
+    cert_pem: PemCert,
+    cert_key_pem: String,
 }
-impl TlsDataRaw {
-    pub fn serialize(cert: &TlsData) -> Self {
-        Self {
-            cert: cert.cert.as_ref().to_vec(),
-            key: cert.keypair.serialize_pem().as_bytes().to_vec(),
-        }
-    }
+
+struct TlsCa {
+    issuer: Issuer<'static, KeyPair>,
+    cert: CertificateDer<'static>,
+}
+
+impl TlsCa {
+    fn generate() -> Result<Self, InitError> {
+        let keypair = KeyPair::generate()?;
+        let mut params = CertificateParams::new(["Arbiter Instance CA".into()])?;
+        params.is_ca = IsCa::Ca(BasicConstraints::Unconstrained);
+        params.key_usages = vec![
+            KeyUsagePurpose::KeyCertSign,
+            KeyUsagePurpose::CrlSign,
+            KeyUsagePurpose::DigitalSignature,
+        ];
+
+        let mut dn = DistinguishedName::new();
+        dn.push(DnType::CommonName, "Arbiter Instance CA");
+        params.distinguished_name = dn;
+        let certified_issuer = CertifiedIssuer::self_signed(params, keypair)?;
+
+        let cert_key_pem = certified_issuer.key().serialize_pem();
+
+        let issuer = Issuer::from_ca_cert_pem(
+            &certified_issuer.pem(),
+            KeyPair::from_pem(cert_key_pem.as_ref()).unwrap(),
+        )
+        .unwrap();
+
+        Ok(Self {
+            issuer,
+            cert: certified_issuer.der().clone(),
+        })
+    }
+
+    fn generate_leaf(&self) -> Result<TlsCert, InitError> {
+        let cert_key = KeyPair::generate()?;
+        let mut params = CertificateParams::new(["Arbiter Instance Leaf".into()])?;
+        params.is_ca = IsCa::NoCa;
+        params.key_usages = vec![
+            KeyUsagePurpose::DigitalSignature,
+            KeyUsagePurpose::KeyEncipherment,
+        ];
+
+        let mut dn = DistinguishedName::new();
+        dn.push(DnType::CommonName, "Arbiter Instance Leaf");
+        params.distinguished_name = dn;
+
+        let new_cert = params.signed_by(&cert_key, &self.issuer)?;
+
+        Ok(TlsCert {
+            cert: new_cert,
+            cert_key,
+        })
+    }

-    pub fn deserialize(&self) -> Result<TlsData, TlsInitError> {
-        let cert = CertificateDer::from_slice(&self.cert).into_owned();
-
-        let key =
-            String::from_utf8(self.key.clone()).map_err(TlsInitError::KeyInvalidFormat)?;
-        let keypair = KeyPair::from_pem(&key).map_err(TlsInitError::KeyDeserializationError)?;
-
-        Ok(TlsData { cert, keypair })
+    #[allow(unused)]
+    fn serialize(&self) -> Result<SerializedTls, InitError> {
+        let cert_key_pem = self.issuer.key().serialize_pem();
+        Ok(SerializedTls {
+            cert_pem: encode_cert_to_pem(&self.cert),
+            cert_key_pem,
+        })
+    }
+
+    #[allow(unused)]
+    fn try_deserialize(cert_pem: &str, cert_key_pem: &str) -> Result<Self, InitError> {
+        let keypair =
+            KeyPair::from_pem(cert_key_pem).map_err(InitError::KeyDeserializationError)?;
+        let issuer = Issuer::from_ca_cert_pem(cert_pem, keypair)?;
+        Ok(Self {
+            issuer,
+            cert: CertificateDer::from_pem_slice(cert_pem.as_bytes())?,
+        })
     }
 }

-fn generate_cert(key: &KeyPair) -> Result<Certificate, rcgen::Error> {
-    let params = rcgen::CertificateParams::new(vec![
-        "arbiter.local".to_string(),
-        "localhost".to_string(),
-    ])?;
-
-    params.self_signed(key)
+struct TlsCert {
+    cert: Certificate,
+    cert_key: KeyPair,
 }

 // TODO: Implement cert rotation
 pub struct TlsManager {
-    data: TlsData,
+    cert: CertificateDer<'static>,
+    keypair: KeyPair,
+    ca_cert: CertificateDer<'static>,
+    _db: db::DatabasePool,
 }

 impl TlsManager {
-    pub async fn new(data: Option<TlsDataRaw>) -> Result<Self, TlsInitError> {
-        match data {
-            Some(raw) => {
-                let tls_data = raw.deserialize()?;
-                Ok(Self { data: tls_data })
-            }
-            None => {
-                let keypair = KeyPair::generate()?;
-                let cert = generate_cert(&keypair)?;
-                let tls_data = TlsData {
-                    cert: cert.der().clone(),
-                    keypair,
-                };
-                Ok(Self { data: tls_data })
-            }
-        }
-    }
+    pub async fn generate_new(db: &db::DatabasePool) -> Result<Self, InitError> {
+        let ca = TlsCa::generate()?;
+        let new_cert = ca.generate_leaf()?;
+
+        {
+            let mut conn = db.get().await?;
+            conn.transaction(|conn| {
+                Box::pin(async {
+                    let new_tls_history = NewTlsHistory {
+                        cert: new_cert.cert.pem(),
+                        cert_key: new_cert.cert_key.serialize_pem(),
+                        ca_cert: encode_cert_to_pem(&ca.cert),
+                        ca_key: ca.issuer.key().serialize_pem(),
+                    };
+
+                    let inserted_tls_history: i32 = diesel::insert_into(tls_history::table)
+                        .values(&new_tls_history)
+                        .returning(tls_history::id)
+                        .get_result(conn)
+                        .await?;
+
+                    diesel::update(arbiter_settings::table)
+                        .set(arbiter_settings::tls_id.eq(inserted_tls_history))
+                        .execute(conn)
+                        .await?;
+
+                    Result::<_, diesel::result::Error>::Ok(())
+                })
+            })
+            .await?;
+        }
+
+        Ok(Self {
+            cert: new_cert.cert.der().clone(),
+            keypair: new_cert.cert_key,
+            ca_cert: ca.cert,
+            _db: db.clone(),
+        })
+    }
+
+    pub async fn new(db: db::DatabasePool) -> Result<Self, InitError> {
+        let cert_data: Option<TlsHistory> = {
+            let mut conn = db.get().await?;
+            arbiter_settings::table
+                .left_join(tls_history::table)
+                .select(Option::<TlsHistory>::as_select())
+                .first(&mut conn)
+                .await?
+        };
+
+        match cert_data {
+            Some(data) => {
+                let try_load = || -> Result<_, Box<dyn std::error::Error>> {
+                    let keypair = KeyPair::from_pem(&data.cert_key)?;
+                    let cert = CertificateDer::from_pem_slice(data.cert.as_bytes())?;
+                    let ca_cert = CertificateDer::from_pem_slice(data.ca_cert.as_bytes())?;
+                    Ok(Self {
+                        cert,
+                        keypair,
+                        ca_cert,
+                        _db: db.clone(),
+                    })
+                };
+                match try_load() {
+                    Ok(manager) => Ok(manager),
+                    Err(e) => {
+                        eprintln!("Failed to load existing TLS certs: {e}. Generating new ones.");
+                        Self::generate_new(&db).await
+                    }
+                }
+            }
+            None => Self::generate_new(&db).await,
+        }
+    }

-    pub fn bytes(&self) -> TlsDataRaw {
-        TlsDataRaw::serialize(&self.data)
+    pub fn cert(&self) -> &CertificateDer<'static> {
+        &self.cert
+    }
+    pub fn ca_cert(&self) -> &CertificateDer<'static> {
+        &self.ca_cert
+    }
+
+    pub fn cert_pem(&self) -> PemCert {
+        encode_cert_to_pem(&self.cert)
+    }
+    pub fn key_pem(&self) -> String {
+        self.keypair.serialize_pem()
     }
 }
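The reworked `TlsManager::new` above follows a load-or-generate pattern: stored PEM material is loaded through a fallible closure whose heterogeneous errors are boxed, and any failure falls back to generating fresh certificates. A std-only sketch of that control flow, with hypothetical `Tls`/`load_or_generate` names and a fake `PEM:` prefix standing in for real parsing:

```rust
// Hypothetical sketch of the load-or-generate pattern: try to load stored
// material via a fallible closure, regenerate on any failure.
#[derive(Debug, PartialEq)]
struct Tls {
    cert: String,
}

fn generate_new() -> Tls {
    Tls { cert: "fresh-cert".to_string() }
}

fn load_or_generate(stored: Option<&str>) -> Tls {
    match stored {
        Some(data) => {
            // Boxing the error lets unrelated parse failures collapse
            // into a single fallback path, as in the diff's `try_load`.
            let try_load = || -> Result<Tls, Box<dyn std::error::Error>> {
                if data.starts_with("PEM:") {
                    Ok(Tls { cert: data["PEM:".len()..].to_string() })
                } else {
                    Err("not PEM".into())
                }
            };
            match try_load() {
                Ok(tls) => tls,
                Err(e) => {
                    eprintln!("Failed to load existing TLS certs: {e}. Generating new ones.");
                    generate_new()
                }
            }
        }
        None => generate_new(),
    }
}
```

The design choice mirrors the diff: corrupt or missing rows never abort startup, they only cost a regeneration (plus a logged warning).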
@@ -1,8 +1,4 @@
|
|||||||
|
use diesel::{Connection as _, SqliteConnection, connection::SimpleConnection as _};
|
||||||
use diesel::{
|
|
||||||
Connection as _, SqliteConnection,
|
|
||||||
connection::SimpleConnection as _,
|
|
||||||
};
|
|
||||||
use diesel_async::{
|
use diesel_async::{
|
||||||
AsyncConnection, SimpleAsyncConnection,
|
AsyncConnection, SimpleAsyncConnection,
|
||||||
pooled_connection::{AsyncDieselConnectionManager, ManagerConfig},
|
pooled_connection::{AsyncDieselConnectionManager, ManagerConfig},
|
||||||
@@ -21,30 +17,30 @@ pub type DatabasePool = diesel_async::pooled_connection::bb8::Pool<DatabaseConne
|
|||||||
pub type PoolInitError = diesel_async::pooled_connection::PoolError;
|
pub type PoolInitError = diesel_async::pooled_connection::PoolError;
|
||||||
pub type PoolError = diesel_async::pooled_connection::bb8::RunError;
|
pub type PoolError = diesel_async::pooled_connection::bb8::RunError;
|
||||||
|
|
||||||
static DB_FILE: &'static str = "arbiter.sqlite";
|
static DB_FILE: &str = "arbiter.sqlite";
|
||||||
|
|
||||||
const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");
|
const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");
|
||||||
|
|
||||||
#[derive(Error, Diagnostic, Debug)]
|
#[derive(Error, Diagnostic, Debug)]
|
||||||
pub enum DatabaseSetupError {
|
pub enum DatabaseSetupError {
|
||||||
#[error("Failed to determine home directory")]
|
#[error("Failed to determine home directory")]
|
||||||
#[diagnostic(code(arbiter::db::home_dir_error))]
|
#[diagnostic(code(arbiter::db::home_dir))]
|
||||||
HomeDir(std::io::Error),
|
HomeDir(std::io::Error),
|
||||||
|
|
||||||
#[error(transparent)]
|
#[error(transparent)]
|
||||||
#[diagnostic(code(arbiter::db::connection_error))]
|
#[diagnostic(code(arbiter::db::connection))]
|
||||||
Connection(diesel::ConnectionError),
|
Connection(diesel::ConnectionError),
|
||||||
|
|
||||||
#[error(transparent)]
|
#[error(transparent)]
|
||||||
#[diagnostic(code(arbiter::db::concurrency_error))]
|
#[diagnostic(code(arbiter::db::concurrency))]
|
||||||
ConcurrencySetup(diesel::result::Error),
|
ConcurrencySetup(diesel::result::Error),
|
||||||
|
|
||||||
#[error(transparent)]
|
#[error(transparent)]
|
||||||
#[diagnostic(code(arbiter::db::migration_error))]
|
#[diagnostic(code(arbiter::db::migration))]
|
||||||
Migration(Box<dyn std::error::Error + Send + Sync>),
|
Migration(Box<dyn std::error::Error + Send + Sync>),
|
||||||
|
|
||||||
#[error(transparent)]
|
#[error(transparent)]
|
||||||
#[diagnostic(code(arbiter::db::pool_error))]
|
#[diagnostic(code(arbiter::db::pool))]
|
||||||
Pool(#[from] PoolInitError),
|
Pool(#[from] PoolInitError),
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -95,12 +91,12 @@ fn initialize_database(url: &str) -> Result<(), DatabaseSetupError> {
 
 #[tracing::instrument(level = "info")]
 pub async fn create_pool(url: Option<&str>) -> Result<DatabasePool, DatabaseSetupError> {
-    let database_url = url.map(String::from).unwrap_or(format!(
-        "{}?mode=rwc",
-        (database_path()?
-            .to_str()
-            .expect("database path is not valid UTF-8"))
-    ));
+    let database_url = url.map(String::from).unwrap_or(
+        database_path()?
+            .to_str()
+            .expect("database path is not valid UTF-8")
+            .to_string(),
+    );
 
     initialize_database(&database_url)?;
 
@@ -133,7 +129,6 @@ pub async fn create_pool(url: Option<&str>) -> Result<DatabasePool, DatabaseSetu
     Ok(pool)
 }
 
-#[cfg(test)]
 pub async fn create_test_pool() -> DatabasePool {
     use rand::distr::{Alphanumeric, SampleString as _};
 
@@ -1,7 +1,7 @@
 #![allow(unused)]
 #![allow(clippy::all)]
 
-use crate::db::schema::{self, aead_encrypted, arbiter_settings, root_key_history};
+use crate::db::schema::{self, aead_encrypted, arbiter_settings, root_key_history, tls_history};
 use diesel::{prelude::*, sqlite::Sqlite};
 use restructed::Models;
 
@@ -46,13 +46,29 @@ pub struct RootKeyHistory {
     pub salt: Vec<u8>,
 }
 
-#[derive(Queryable, Debug, Insertable)]
+#[derive(Models, Queryable, Debug, Insertable, Selectable)]
+#[diesel(table_name = tls_history, check_for_backend(Sqlite))]
+#[view(
+    NewTlsHistory,
+    derive(Insertable),
+    omit(id, created_at),
+    attributes_with = "deriveless"
+)]
+pub struct TlsHistory {
+    pub id: i32,
+    pub cert: String,
+    pub cert_key: String, // PEM Encoded private key
+    pub ca_cert: String,  // PEM Encoded certificate for cert signing
+    pub ca_key: String,   // PEM Encoded public key for cert signing
+    pub created_at: i32,
+}
+
+#[derive(Queryable, Debug, Insertable, Selectable)]
 #[diesel(table_name = arbiter_settings, check_for_backend(Sqlite))]
-pub struct ArbiterSetting {
+pub struct ArbiterSettings {
     pub id: i32,
     pub root_key_id: Option<i32>, // references root_key_history.id
-    pub cert_key: Vec<u8>,
-    pub cert: Vec<u8>,
+    pub tls_id: Option<i32>, // references tls_history.id
 }
 
 #[derive(Queryable, Debug)]
@@ -16,8 +16,7 @@ diesel::table! {
     arbiter_settings (id) {
         id -> Integer,
         root_key_id -> Nullable<Integer>,
-        cert_key -> Binary,
-        cert -> Binary,
+        tls_id -> Nullable<Integer>,
     }
 }
 
@@ -43,6 +42,17 @@ diesel::table! {
     }
 }
 
+diesel::table! {
+    tls_history (id) {
+        id -> Integer,
+        cert -> Text,
+        cert_key -> Text,
+        ca_cert -> Text,
+        ca_key -> Text,
+        created_at -> Integer,
+    }
+}
+
 diesel::table! {
     useragent_client (id) {
         id -> Integer,
@@ -55,11 +65,13 @@ diesel::table! {
 
 diesel::joinable!(aead_encrypted -> root_key_history (associated_root_key_id));
 diesel::joinable!(arbiter_settings -> root_key_history (root_key_id));
+diesel::joinable!(arbiter_settings -> tls_history (tls_id));
 
 diesel::allow_tables_to_appear_in_same_query!(
     aead_encrypted,
     arbiter_settings,
     program_client,
     root_key_history,
+    tls_history,
     useragent_client,
 );
@@ -1,24 +0,0 @@
-use tonic::Status;
-use tracing::error;
-
-pub trait GrpcStatusExt<T> {
-    fn to_status(self) -> Result<T, Status>;
-}
-
-impl<T> GrpcStatusExt<T> for Result<T, diesel::result::Error> {
-    fn to_status(self) -> Result<T, Status> {
-        self.map_err(|e| {
-            error!(error = ?e, "Database error");
-            Status::internal("Database error")
-        })
-    }
-}
-
-impl<T> GrpcStatusExt<T> for Result<T, crate::db::PoolError> {
-    fn to_status(self) -> Result<T, Status> {
-        self.map_err(|e| {
-            error!(error = ?e, "Database pool error");
-            Status::internal("Database pool error")
-        })
-    }
-}
@@ -1,23 +1,26 @@
 #![forbid(unsafe_code)]
 use arbiter_proto::{
     proto::{ClientRequest, ClientResponse, UserAgentRequest, UserAgentResponse},
-    transport::BiStream,
+    transport::{BiStream, GrpcTransportActor, wire},
 };
 use async_trait::async_trait;
+use kameo::actor::PreparedActor;
 use tokio_stream::wrappers::ReceiverStream;
 
 use tokio::sync::mpsc;
 use tonic::{Request, Response, Status};
 
 use crate::{
-    actors::{client::handle_client, user_agent::handle_user_agent},
+    actors::{
+        client::handle_client,
+        user_agent::UserAgentActor,
+    },
     context::ServerContext,
 };
 
 pub mod actors;
 pub mod context;
 pub mod db;
-mod errors;
 
 const DEFAULT_CHANNEL_SIZE: usize = 1000;
 
@@ -59,7 +62,22 @@ impl arbiter_proto::proto::arbiter_service_server::ArbiterService for Server {
     ) -> Result<Response<Self::UserAgentStream>, Status> {
         let req_stream = request.into_inner();
         let (tx, rx) = mpsc::channel(DEFAULT_CHANNEL_SIZE);
-        tokio::spawn(handle_user_agent(self.context.clone(), req_stream, tx));
+        let context = self.context.clone();
+
+        wire(
+            |prepared: PreparedActor<UserAgentActor>, recipient| {
+                prepared.spawn(UserAgentActor::new(context, recipient));
+            },
+            |prepared: PreparedActor<GrpcTransportActor<_, _, _>>, business_recipient| {
+                prepared.spawn(GrpcTransportActor::new(
+                    tx,
+                    req_stream,
+                    business_recipient,
+                ));
+            },
+        )
+        .await;
 
         Ok(Response::new(ReceiverStream::new(rx)))
     }
 }
@@ -1,7 +1,13 @@
-use arbiter_proto::proto::arbiter_service_server::ArbiterServiceServer;
-use arbiter_server::{Server, context::ServerContext, db};
+use std::net::SocketAddr;
+
+use arbiter_proto::{proto::arbiter_service_server::ArbiterServiceServer, url::ArbiterUrl};
+use arbiter_server::{Server, actors::bootstrap::GetToken, context::ServerContext, db};
+use miette::miette;
+use tonic::transport::{Identity, ServerTlsConfig};
 use tracing::info;
 
+const PORT: u16 = 50051;
+
 #[tokio::main]
 async fn main() -> miette::Result<()> {
     tracing_subscriber::fmt()
@@ -13,18 +19,31 @@ async fn main() -> miette::Result<()> {
 
     info!("Starting arbiter server");
 
-    info!("Initializing database");
     let db = db::create_pool(None).await?;
     info!("Database ready");
 
-    info!("Initializing server context");
     let context = ServerContext::new(db).await?;
-    info!("Server context ready");
 
-    let addr = "[::1]:50051".parse().expect("valid address");
+    let addr: SocketAddr = format!("127.0.0.1:{PORT}").parse().expect("valid address");
     info!(%addr, "Starting gRPC server");
 
+    let url = ArbiterUrl {
+        host: addr.ip().to_string(),
+        port: addr.port(),
+        ca_cert: context.tls.ca_cert().clone().into_owned(),
+        bootstrap_token: context.actors.bootstrapper.ask(GetToken).await.unwrap(),
+    };
+
+    info!(%url, "Server URL");
+
+    let tls = ServerTlsConfig::new().identity(Identity::from_pem(
+        context.tls.cert_pem(),
+        context.tls.key_pem(),
+    ));
+
     tonic::transport::Server::builder()
+        .tls_config(tls)
+        .map_err(|err| miette!("Failed to setup TLS: {err}"))?
         .add_service(ArbiterServiceServer::new(Server::new(context)))
         .serve(addr)
         .await
server/crates/arbiter-server/tests/common/mod.rs (new file)
@@ -0,0 +1,28 @@
+use arbiter_server::{
+    actors::keyholder::KeyHolder,
+    db::{self, schema},
+};
+use diesel::QueryDsl;
+use diesel_async::RunQueryDsl;
+use memsafe::MemSafe;
+
+#[allow(dead_code)]
+pub async fn bootstrapped_keyholder(db: &db::DatabasePool) -> KeyHolder {
+    let mut actor = KeyHolder::new(db.clone()).await.unwrap();
+    actor
+        .bootstrap(MemSafe::new(b"test-seal-key".to_vec()).unwrap())
+        .await
+        .unwrap();
+    actor
+}
+
+#[allow(dead_code)]
+pub async fn root_key_history_id(db: &db::DatabasePool) -> i32 {
+    let mut conn = db.get().await.unwrap();
+    let id = schema::arbiter_settings::table
+        .select(schema::arbiter_settings::root_key_id)
+        .first::<Option<i32>>(&mut conn)
+        .await
+        .unwrap();
+    id.expect("root_key_id should be set after bootstrap")
+}
server/crates/arbiter-server/tests/keyholder.rs (new file)
@@ -0,0 +1,8 @@
+mod common;
+
+#[path = "keyholder/concurrency.rs"]
+mod concurrency;
+#[path = "keyholder/lifecycle.rs"]
+mod lifecycle;
+#[path = "keyholder/storage.rs"]
+mod storage;
server/crates/arbiter-server/tests/keyholder/concurrency.rs (new file)
@@ -0,0 +1,173 @@
+use std::collections::{HashMap, HashSet};
+
+use arbiter_server::{
+    actors::keyholder::{CreateNew, Error, KeyHolder},
+    db::{self, models, schema},
+};
+use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::sql_query};
+use diesel_async::RunQueryDsl;
+use kameo::actor::{ActorRef, Spawn as _};
+use memsafe::MemSafe;
+use tokio::task::JoinSet;
+
+use crate::common;
+
+async fn write_concurrently(
+    actor: ActorRef<KeyHolder>,
+    prefix: &'static str,
+    count: usize,
+) -> Vec<(i32, Vec<u8>)> {
+    let mut set = JoinSet::new();
+    for i in 0..count {
+        let actor = actor.clone();
+        set.spawn(async move {
+            let plaintext = format!("{prefix}-{i}").into_bytes();
+            let id = actor
+                .ask(CreateNew {
+                    plaintext: MemSafe::new(plaintext.clone()).unwrap(),
+                })
+                .await
+                .unwrap();
+            (id, plaintext)
+        });
+    }
+
+    let mut out = Vec::with_capacity(count);
+    while let Some(res) = set.join_next().await {
+        out.push(res.unwrap());
+    }
+    out
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn concurrent_create_new_no_duplicate_nonces_() {
+    let db = db::create_test_pool().await;
+    let actor = KeyHolder::spawn(common::bootstrapped_keyholder(&db).await);
+
+    let writes = write_concurrently(actor, "nonce-unique", 32).await;
+    assert_eq!(writes.len(), 32);
+
+    let mut conn = db.get().await.unwrap();
+    let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
+        .select(models::AeadEncrypted::as_select())
+        .load(&mut conn)
+        .await
+        .unwrap();
+    assert_eq!(rows.len(), 32);
+
+    let nonces: Vec<&Vec<u8>> = rows.iter().map(|r| &r.current_nonce).collect();
+    let unique: HashSet<&Vec<u8>> = nonces.iter().copied().collect();
+    assert_eq!(nonces.len(), unique.len(), "all nonces must be unique");
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn concurrent_create_new_root_nonce_never_moves_backward() {
+    let db = db::create_test_pool().await;
+    let actor = KeyHolder::spawn(common::bootstrapped_keyholder(&db).await);
+
+    write_concurrently(actor, "root-max", 24).await;
+
+    let mut conn = db.get().await.unwrap();
+    let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
+        .select(models::AeadEncrypted::as_select())
+        .load(&mut conn)
+        .await
+        .unwrap();
+    let max_nonce = rows
+        .iter()
+        .map(|r| r.current_nonce.clone())
+        .max()
+        .expect("at least one row");
+
+    let root_row: models::RootKeyHistory = schema::root_key_history::table
+        .select(models::RootKeyHistory::as_select())
+        .first(&mut conn)
+        .await
+        .unwrap();
+    assert_eq!(root_row.data_encryption_nonce, max_nonce);
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn insert_failure_does_not_create_partial_row() {
+    let db = db::create_test_pool().await;
+    let mut actor = common::bootstrapped_keyholder(&db).await;
+    let root_key_history_id = common::root_key_history_id(&db).await;
+
+    let mut conn = db.get().await.unwrap();
+    let before_count: i64 = schema::aead_encrypted::table
+        .count()
+        .get_result(&mut conn)
+        .await
+        .unwrap();
+    let before_root_nonce: Vec<u8> = schema::root_key_history::table
+        .filter(schema::root_key_history::id.eq(root_key_history_id))
+        .select(schema::root_key_history::data_encryption_nonce)
+        .first(&mut conn)
+        .await
+        .unwrap();
+
+    sql_query(
+        "CREATE TRIGGER fail_aead_insert BEFORE INSERT ON aead_encrypted BEGIN SELECT RAISE(ABORT, 'forced test failure'); END;",
+    )
+    .execute(&mut conn)
+    .await
+    .unwrap();
+    drop(conn);
+
+    let err = actor
+        .create_new(MemSafe::new(b"should fail".to_vec()).unwrap())
+        .await
+        .unwrap_err();
+    assert!(matches!(err, Error::DatabaseTransaction(_)));
+
+    let mut conn = db.get().await.unwrap();
+    sql_query("DROP TRIGGER fail_aead_insert;")
+        .execute(&mut conn)
+        .await
+        .unwrap();
+
+    let after_count: i64 = schema::aead_encrypted::table
+        .count()
+        .get_result(&mut conn)
+        .await
+        .unwrap();
+    assert_eq!(
+        before_count, after_count,
+        "failed insert must not create row"
+    );
+
+    let after_root_nonce: Vec<u8> = schema::root_key_history::table
+        .filter(schema::root_key_history::id.eq(root_key_history_id))
+        .select(schema::root_key_history::data_encryption_nonce)
+        .first(&mut conn)
+        .await
+        .unwrap();
+    assert!(
+        after_root_nonce > before_root_nonce,
+        "current behavior allows nonce gap on failed insert"
+    );
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn decrypt_roundtrip_after_high_concurrency() {
+    let db = db::create_test_pool().await;
+    let actor = KeyHolder::spawn(common::bootstrapped_keyholder(&db).await);
+
+    let writes = write_concurrently(actor, "roundtrip", 40).await;
+    let expected: HashMap<i32, Vec<u8>> = writes.into_iter().collect();
+
+    let mut decryptor = KeyHolder::new(db.clone()).await.unwrap();
+    decryptor
+        .try_unseal(MemSafe::new(b"test-seal-key".to_vec()).unwrap())
+        .await
+        .unwrap();
+
+    for (id, plaintext) in expected {
+        let mut decrypted = decryptor.decrypt(id).await.unwrap();
+        assert_eq!(*decrypted.read().unwrap(), plaintext);
+    }
+}
server/crates/arbiter-server/tests/keyholder/lifecycle.rs (new file)
@@ -0,0 +1,131 @@
+use arbiter_server::{
+    actors::keyholder::{Error, KeyHolder},
+    db::{self, models, schema},
+};
+use diesel::{QueryDsl, SelectableHelper};
+use diesel_async::RunQueryDsl;
+use memsafe::MemSafe;
+
+use crate::common;
+
+#[tokio::test]
+#[test_log::test]
+async fn test_bootstrap() {
+    let db = db::create_test_pool().await;
+    let mut actor = KeyHolder::new(db.clone()).await.unwrap();
+
+    let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
+    actor.bootstrap(seal_key).await.unwrap();
+
+    let mut conn = db.get().await.unwrap();
+    let row: models::RootKeyHistory = schema::root_key_history::table
+        .select(models::RootKeyHistory::as_select())
+        .first(&mut conn)
+        .await
+        .unwrap();
+
+    assert_eq!(row.schema_version, 1);
+    assert_eq!(
+        row.tag,
+        arbiter_server::actors::keyholder::encryption::v1::ROOT_KEY_TAG
+    );
+    assert!(!row.ciphertext.is_empty());
+    assert!(!row.salt.is_empty());
+    assert_eq!(
+        row.data_encryption_nonce,
+        arbiter_server::actors::keyholder::encryption::v1::Nonce::default().to_vec()
+    );
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn test_bootstrap_rejects_double() {
+    let db = db::create_test_pool().await;
+    let mut actor = common::bootstrapped_keyholder(&db).await;
+
+    let seal_key2 = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
+    let err = actor.bootstrap(seal_key2).await.unwrap_err();
+    assert!(matches!(err, Error::AlreadyBootstrapped));
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn test_create_new_before_bootstrap_fails() {
+    let db = db::create_test_pool().await;
+    let mut actor = KeyHolder::new(db).await.unwrap();
+
+    let err = actor
+        .create_new(MemSafe::new(b"data".to_vec()).unwrap())
+        .await
+        .unwrap_err();
+    assert!(matches!(err, Error::NotBootstrapped));
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn test_decrypt_before_bootstrap_fails() {
+    let db = db::create_test_pool().await;
+    let mut actor = KeyHolder::new(db).await.unwrap();
+
+    let err = actor.decrypt(1).await.unwrap_err();
+    assert!(matches!(err, Error::NotBootstrapped));
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn test_new_restores_sealed_state() {
+    let db = db::create_test_pool().await;
+    let actor = common::bootstrapped_keyholder(&db).await;
+    drop(actor);
+
+    let mut actor2 = KeyHolder::new(db).await.unwrap();
+    let err = actor2.decrypt(1).await.unwrap_err();
+    assert!(matches!(err, Error::NotBootstrapped));
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn test_unseal_correct_password() {
+    let db = db::create_test_pool().await;
+    let mut actor = common::bootstrapped_keyholder(&db).await;
+
+    let plaintext = b"survive a restart";
+    let aead_id = actor
+        .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
+        .await
+        .unwrap();
+    drop(actor);
+
+    let mut actor = KeyHolder::new(db.clone()).await.unwrap();
+    let seal_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
+    actor.try_unseal(seal_key).await.unwrap();
+
+    let mut decrypted = actor.decrypt(aead_id).await.unwrap();
+    assert_eq!(*decrypted.read().unwrap(), plaintext);
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn test_unseal_wrong_then_correct_password() {
+    let db = db::create_test_pool().await;
+    let mut actor = common::bootstrapped_keyholder(&db).await;
+
+    let plaintext = b"important data";
+    let aead_id = actor
+        .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
+        .await
+        .unwrap();
+    drop(actor);
+
+    let mut actor = KeyHolder::new(db.clone()).await.unwrap();
+
+    let bad_key = MemSafe::new(b"wrong-password".to_vec()).unwrap();
+    let err = actor.try_unseal(bad_key).await.unwrap_err();
+    assert!(matches!(err, Error::InvalidKey));
+
+    let good_key = MemSafe::new(b"test-seal-key".to_vec()).unwrap();
+    actor.try_unseal(good_key).await.unwrap();
+
+    let mut decrypted = actor.decrypt(aead_id).await.unwrap();
+    assert_eq!(*decrypted.read().unwrap(), plaintext);
+}
server/crates/arbiter-server/tests/keyholder/storage.rs (new file)
@@ -0,0 +1,161 @@
+use std::collections::HashSet;
+
+use arbiter_server::{
+    actors::keyholder::{Error, encryption::v1},
+    db::{self, models, schema},
+};
+use diesel::{ExpressionMethods as _, QueryDsl, SelectableHelper, dsl::update};
+use diesel_async::RunQueryDsl;
+use memsafe::MemSafe;
+
+use crate::common;
+
+#[tokio::test]
+#[test_log::test]
+async fn test_create_decrypt_roundtrip() {
+    let db = db::create_test_pool().await;
+    let mut actor = common::bootstrapped_keyholder(&db).await;
+
+    let plaintext = b"hello arbiter";
+    let aead_id = actor
+        .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
+        .await
+        .unwrap();
+
+    let mut decrypted = actor.decrypt(aead_id).await.unwrap();
+    assert_eq!(*decrypted.read().unwrap(), plaintext);
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn test_decrypt_nonexistent_returns_not_found() {
+    let db = db::create_test_pool().await;
+    let mut actor = common::bootstrapped_keyholder(&db).await;
+
+    let err = actor.decrypt(9999).await.unwrap_err();
+    assert!(matches!(err, Error::NotFound));
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn test_ciphertext_differs_across_entries() {
+    let db = db::create_test_pool().await;
+    let mut actor = common::bootstrapped_keyholder(&db).await;
+
+    let plaintext = b"same content";
+    let id1 = actor
+        .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
+        .await
+        .unwrap();
+    let id2 = actor
+        .create_new(MemSafe::new(plaintext.to_vec()).unwrap())
+        .await
+        .unwrap();
+
+    let mut conn = db.get().await.unwrap();
+    let row1: models::AeadEncrypted = schema::aead_encrypted::table
+        .filter(schema::aead_encrypted::id.eq(id1))
+        .select(models::AeadEncrypted::as_select())
+        .first(&mut conn)
+        .await
+        .unwrap();
+    let row2: models::AeadEncrypted = schema::aead_encrypted::table
+        .filter(schema::aead_encrypted::id.eq(id2))
+        .select(models::AeadEncrypted::as_select())
+        .first(&mut conn)
+        .await
+        .unwrap();
+
+    assert_ne!(row1.ciphertext, row2.ciphertext);
+
+    let mut d1 = actor.decrypt(id1).await.unwrap();
+    let mut d2 = actor.decrypt(id2).await.unwrap();
+    assert_eq!(*d1.read().unwrap(), plaintext);
+    assert_eq!(*d2.read().unwrap(), plaintext);
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn test_nonce_never_reused() {
+    let db = db::create_test_pool().await;
+    let mut actor = common::bootstrapped_keyholder(&db).await;
+
+    let n = 5;
+    for i in 0..n {
+        actor
+            .create_new(MemSafe::new(format!("secret {i}").into_bytes()).unwrap())
+            .await
+            .unwrap();
+    }
+
+    let mut conn = db.get().await.unwrap();
+    let rows: Vec<models::AeadEncrypted> = schema::aead_encrypted::table
+        .select(models::AeadEncrypted::as_select())
+        .load(&mut conn)
+        .await
+        .unwrap();
+
+    assert_eq!(rows.len(), n);
+
+    let nonces: Vec<&Vec<u8>> = rows.iter().map(|r| &r.current_nonce).collect();
+    let unique: HashSet<&Vec<u8>> = nonces.iter().copied().collect();
+    assert_eq!(nonces.len(), unique.len(), "all nonces must be unique");
+
+    for (i, row) in rows.iter().enumerate() {
+        let mut expected = v1::Nonce::default();
+        for _ in 0..=i {
+            expected.increment();
+        }
+        assert_eq!(row.current_nonce, expected.to_vec(), "nonce {i} mismatch");
+    }
+
+    let root_row: models::RootKeyHistory = schema::root_key_history::table
+        .select(models::RootKeyHistory::as_select())
+        .first(&mut conn)
+        .await
+        .unwrap();
+    let last_nonce = &rows.last().unwrap().current_nonce;
+    assert_eq!(&root_row.data_encryption_nonce, last_nonce);
+}
+
+#[tokio::test]
+#[test_log::test]
+async fn broken_db_nonce_format_fails_closed() {
+    let db = db::create_test_pool().await;
+    let mut actor = common::bootstrapped_keyholder(&db).await;
+    let root_key_history_id = common::root_key_history_id(&db).await;
+
+    let mut conn = db.get().await.unwrap();
+    update(
+        schema::root_key_history::table
+            .filter(schema::root_key_history::id.eq(root_key_history_id)),
+    )
+    .set(schema::root_key_history::data_encryption_nonce.eq(vec![1, 2, 3]))
+    .execute(&mut conn)
+    .await
+    .unwrap();
+    drop(conn);
+
+    let err = actor
+        .create_new(MemSafe::new(b"must fail".to_vec()).unwrap())
+        .await
+        .unwrap_err();
+    assert!(matches!(err, Error::BrokenDatabase));
+
+    let db = db::create_test_pool().await;
+    let mut actor = common::bootstrapped_keyholder(&db).await;
+    let id = actor
+        .create_new(MemSafe::new(b"decrypt target".to_vec()).unwrap())
+        .await
+        .unwrap();
+    let mut conn = db.get().await.unwrap();
+    update(schema::aead_encrypted::table.filter(schema::aead_encrypted::id.eq(id)))
+        .set(schema::aead_encrypted::current_nonce.eq(vec![7, 8]))
+        .execute(&mut conn)
+        .await
+        .unwrap();
+    drop(conn);
+
+    let err = actor.decrypt(id).await.unwrap_err();
+    assert!(matches!(err, Error::BrokenDatabase));
+}
server/crates/arbiter-server/tests/user_agent.rs (new file)
@@ -0,0 +1,31 @@
+mod common;
+
+use arbiter_proto::proto::UserAgentResponse;
+use arbiter_server::actors::user_agent::UserAgentError;
+use kameo::{Actor, actor::Recipient, actor::Spawn, prelude::Message};
+
+/// A no-op actor that discards any messages it receives.
+#[derive(Actor)]
+struct NullSink;
+
+impl Message<Result<UserAgentResponse, UserAgentError>> for NullSink {
+    type Reply = ();
+
+    async fn handle(
+        &mut self,
+        _msg: Result<UserAgentResponse, UserAgentError>,
+        _ctx: &mut kameo::prelude::Context<Self, Self::Reply>,
+    ) -> Self::Reply {
+    }
+}
+
+/// Creates a `Recipient` that silently discards all messages.
+fn null_recipient() -> Recipient<Result<UserAgentResponse, UserAgentError>> {
+    let actor_ref = NullSink::spawn(NullSink);
+    actor_ref.recipient()
+}
+
+#[path = "user_agent/auth.rs"]
+mod auth;
+#[path = "user_agent/unseal.rs"]
+mod unseal;
@@ -3,39 +3,30 @@ use arbiter_proto::proto::{
     auth::{self, AuthChallengeRequest, AuthOk},
     user_agent_response::Payload as UserAgentResponsePayload,
 };
-use chrono::format;
+use arbiter_server::{
+    actors::{
+        GlobalActors,
+        bootstrap::GetToken,
+        user_agent::{HandleAuthChallengeRequest, HandleAuthChallengeSolution, UserAgentActor},
+    },
+    db::{self, schema},
+};
 use diesel::{ExpressionMethods as _, QueryDsl, insert_into};
 use diesel_async::RunQueryDsl;
 use ed25519_dalek::Signer as _;
 use kameo::actor::Spawn;
 
-use crate::{
-    actors::{
-        bootstrap::Bootstrapper,
-        user_agent::{HandleAuthChallengeRequest, HandleAuthChallengeSolution},
-    },
-    db::{self, schema},
-};
-
-use super::UserAgentActor;
-
 #[tokio::test]
 #[test_log::test]
 pub async fn test_bootstrap_token_auth() {
     let db = db::create_test_pool().await;
-    // explicitly not installing any user_agent pubkeys
-    let bootstrapper = Bootstrapper::new(&db).await.unwrap(); // this will create bootstrap token
-    let token = bootstrapper.get_token().unwrap();
 
-    let bootstrapper_ref = Bootstrapper::spawn(bootstrapper);
-    let user_agent = UserAgentActor::new_manual(
-        db.clone(),
-        bootstrapper_ref,
-        tokio::sync::mpsc::channel(1).0, // dummy channel, we won't actually send responses in this test
-    );
+    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
+    let token = actors.bootstrapper.ask(GetToken).await.unwrap().unwrap();
+    let user_agent =
+        UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
     let user_agent_ref = UserAgentActor::spawn(user_agent);
 
-    // simulate client sending auth request with bootstrap token
     let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
     let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
 
@@ -49,7 +40,6 @@ pub async fn test_bootstrap_token_auth() {
         .await
         .expect("Shouldn't fail to send message");
 
-    // auth succeeded
     assert_eq!(
         result,
         UserAgentResponse {
@@ -63,7 +53,6 @@ pub async fn test_bootstrap_token_auth() {
         }
     );
 
-    // key is succesfully recorded in database
     let mut conn = db.get().await.unwrap();
     let stored_pubkey: Vec<u8> = schema::useragent_client::table
         .select(schema::useragent_client::public_key)
@@ -77,18 +66,12 @@ pub async fn test_bootstrap_token_auth() {
 #[test_log::test]
 pub async fn test_bootstrap_invalid_token_auth() {
     let db = db::create_test_pool().await;
-    // explicitly not installing any user_agent pubkeys
-    let bootstrapper = Bootstrapper::new(&db).await.unwrap(); // this will create bootstrap token
 
-    let bootstrapper_ref = Bootstrapper::spawn(bootstrapper);
-    let user_agent = UserAgentActor::new_manual(
-        db.clone(),
-        bootstrapper_ref,
-        tokio::sync::mpsc::channel(1).0, // dummy channel, we won't actually send responses in this test
-    );
+    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
+    let user_agent =
+        UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
     let user_agent_ref = UserAgentActor::spawn(user_agent);
 
-    // simulate client sending auth request with bootstrap token
     let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
     let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
 
@@ -102,15 +85,11 @@ pub async fn test_bootstrap_invalid_token_auth() {
         .await;
 
     match result {
-        Err(kameo::error::SendError::HandlerError(status)) => {
-            assert_eq!(status.code(), tonic::Code::InvalidArgument);
-            insta::assert_debug_snapshot!(status, @r#"
-            Status {
-                code: InvalidArgument,
-                message: "Invalid bootstrap token",
-                source: None,
-            }
-            "#);
+        Err(kameo::error::SendError::HandlerError(err)) => {
+            assert!(
+                matches!(err, arbiter_server::actors::user_agent::UserAgentError::InvalidBootstrapToken),
+                "Expected InvalidBootstrapToken, got {err:?}"
+            );
         }
         Err(other) => {
             panic!("Expected SendError::HandlerError, got {other:?}");
@@ -126,19 +105,14 @@ pub async fn test_bootstrap_invalid_token_auth() {
 pub async fn test_challenge_auth() {
     let db = db::create_test_pool().await;
 
-    let bootstrapper_ref = Bootstrapper::spawn(Bootstrapper::new(&db).await.unwrap());
-    let user_agent = UserAgentActor::new_manual(
-        db.clone(),
-        bootstrapper_ref,
-        tokio::sync::mpsc::channel(1).0, // dummy channel, we won't actually send responses in this test
-    );
+    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
+    let user_agent =
+        UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
     let user_agent_ref = UserAgentActor::spawn(user_agent);
 
-    // simulate client sending auth request with bootstrap token
     let new_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
     let pubkey_bytes = new_key.verifying_key().to_bytes().to_vec();
 
-    // insert pubkey into database to trigger challenge-response auth flow
     {
         let mut conn = db.get().await.unwrap();
         insert_into(schema::useragent_client::table)
@@ -158,7 +132,6 @@ pub async fn test_challenge_auth() {
         .await
         .expect("Shouldn't fail to send message");
 
-    // auth challenge succeeded
     let UserAgentResponse {
         payload:
             Some(UserAgentResponsePayload::AuthMessage(arbiter_proto::proto::auth::ServerMessage {
@@ -183,7 +156,6 @@ pub async fn test_challenge_auth() {
         .await
         .expect("Shouldn't fail to send message");
 
-    // auth succeeded
    assert_eq!(
        result,
        UserAgentResponse {
230
server/crates/arbiter-server/tests/user_agent/unseal.rs
Normal file
@@ -0,0 +1,230 @@
use arbiter_proto::proto::{
    UnsealEncryptedKey, UnsealResult, UnsealStart, auth::AuthChallengeRequest,
    user_agent_response::Payload as UserAgentResponsePayload,
};
use arbiter_server::{
    actors::{
        GlobalActors,
        bootstrap::GetToken,
        keyholder::{Bootstrap, Seal},
        user_agent::{
            HandleAuthChallengeRequest, HandleUnsealEncryptedKey, HandleUnsealRequest,
            UserAgentActor,
        },
    },
    db,
};
use chacha20poly1305::{AeadInPlace, XChaCha20Poly1305, XNonce, aead::KeyInit};
use kameo::actor::{ActorRef, Spawn};
use memsafe::MemSafe;
use x25519_dalek::{EphemeralSecret, PublicKey};

async fn setup_authenticated_user_agent(
    seal_key: &[u8],
) -> (arbiter_server::db::DatabasePool, ActorRef<UserAgentActor>) {
    let db = db::create_test_pool().await;

    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
    actors
        .key_holder
        .ask(Bootstrap {
            seal_key_raw: MemSafe::new(seal_key.to_vec()).unwrap(),
        })
        .await
        .unwrap();
    actors.key_holder.ask(Seal).await.unwrap();

    let user_agent =
        UserAgentActor::new_manual(db.clone(), actors.clone(), super::null_recipient());
    let user_agent_ref = UserAgentActor::spawn(user_agent);

    let token = actors.bootstrapper.ask(GetToken).await.unwrap().unwrap();
    let auth_key = ed25519_dalek::SigningKey::generate(&mut rand::rng());
    user_agent_ref
        .ask(HandleAuthChallengeRequest {
            req: AuthChallengeRequest {
                pubkey: auth_key.verifying_key().to_bytes().to_vec(),
                bootstrap_token: Some(token),
            },
        })
        .await
        .unwrap();

    (db, user_agent_ref)
}

async fn client_dh_encrypt(
    user_agent_ref: &ActorRef<UserAgentActor>,
    key_to_send: &[u8],
) -> UnsealEncryptedKey {
    let client_secret = EphemeralSecret::random();
    let client_public = PublicKey::from(&client_secret);

    let response = user_agent_ref
        .ask(HandleUnsealRequest {
            req: UnsealStart {
                client_pubkey: client_public.as_bytes().to_vec(),
            },
        })
        .await
        .unwrap();

    let server_pubkey = match response.payload.unwrap() {
        UserAgentResponsePayload::UnsealStartResponse(resp) => resp.server_pubkey,
        other => panic!("Expected UnsealStartResponse, got {other:?}"),
    };
    let server_public = PublicKey::from(<[u8; 32]>::try_from(server_pubkey.as_slice()).unwrap());

    let shared_secret = client_secret.diffie_hellman(&server_public);
    let cipher = XChaCha20Poly1305::new(shared_secret.as_bytes().into());
    let nonce = XNonce::from([0u8; 24]);
    let associated_data = b"unseal";
    let mut ciphertext = key_to_send.to_vec();
    cipher
        .encrypt_in_place(&nonce, associated_data, &mut ciphertext)
        .unwrap();

    UnsealEncryptedKey {
        nonce: nonce.to_vec(),
        ciphertext,
        associated_data: associated_data.to_vec(),
    }
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_success() {
    let seal_key = b"test-seal-key";
    let (_db, user_agent_ref) = setup_authenticated_user_agent(seal_key).await;

    let encrypted_key = client_dh_encrypt(&user_agent_ref, seal_key).await;

    let response = user_agent_ref
        .ask(HandleUnsealEncryptedKey { req: encrypted_key })
        .await
        .unwrap();

    assert_eq!(
        response.payload.unwrap(),
        UserAgentResponsePayload::UnsealResult(UnsealResult::Success.into()),
    );
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_wrong_seal_key() {
    let (_db, user_agent_ref) = setup_authenticated_user_agent(b"correct-key").await;

    let encrypted_key = client_dh_encrypt(&user_agent_ref, b"wrong-key").await;

    let response = user_agent_ref
        .ask(HandleUnsealEncryptedKey { req: encrypted_key })
        .await
        .unwrap();

    assert_eq!(
        response.payload.unwrap(),
        UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
    );
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_corrupted_ciphertext() {
    let (_db, user_agent_ref) = setup_authenticated_user_agent(b"test-key").await;

    let client_secret = EphemeralSecret::random();
    let client_public = PublicKey::from(&client_secret);

    user_agent_ref
        .ask(HandleUnsealRequest {
            req: UnsealStart {
                client_pubkey: client_public.as_bytes().to_vec(),
            },
        })
        .await
        .unwrap();

    let response = user_agent_ref
        .ask(HandleUnsealEncryptedKey {
            req: UnsealEncryptedKey {
                nonce: vec![0u8; 24],
                ciphertext: vec![0u8; 32],
                associated_data: vec![],
            },
        })
        .await
        .unwrap();

    assert_eq!(
        response.payload.unwrap(),
        UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
    );
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_start_without_auth_fails() {
    let db = db::create_test_pool().await;

    let actors = GlobalActors::spawn(db.clone()).await.unwrap();
    let user_agent =
        UserAgentActor::new_manual(db.clone(), actors, super::null_recipient());
    let user_agent_ref = UserAgentActor::spawn(user_agent);

    let client_secret = EphemeralSecret::random();
    let client_public = PublicKey::from(&client_secret);

    let result = user_agent_ref
        .ask(HandleUnsealRequest {
            req: UnsealStart {
                client_pubkey: client_public.as_bytes().to_vec(),
            },
        })
        .await;

    match result {
        Err(kameo::error::SendError::HandlerError(err)) => {
            assert!(
                matches!(err, arbiter_server::actors::user_agent::UserAgentError::InvalidState),
                "Expected InvalidState, got {err:?}"
            );
        }
        other => panic!("Expected state machine error, got {other:?}"),
    }
}

#[tokio::test]
#[test_log::test]
pub async fn test_unseal_retry_after_invalid_key() {
    let seal_key = b"real-seal-key";
    let (_db, user_agent_ref) = setup_authenticated_user_agent(seal_key).await;

    {
        let encrypted_key = client_dh_encrypt(&user_agent_ref, b"wrong-key").await;

        let response = user_agent_ref
            .ask(HandleUnsealEncryptedKey { req: encrypted_key })
            .await
            .unwrap();

        assert_eq!(
            response.payload.unwrap(),
            UserAgentResponsePayload::UnsealResult(UnsealResult::InvalidKey.into()),
        );
    }

    {
        let encrypted_key = client_dh_encrypt(&user_agent_ref, seal_key).await;

        let response = user_agent_ref
            .ask(HandleUnsealEncryptedKey { req: encrypted_key })
            .await
            .unwrap();

        assert_eq!(
            response.payload.unwrap(),
            UserAgentResponsePayload::UnsealResult(UnsealResult::Success.into()),
        );
    }
}
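The unseal tests above exchange a `(nonce, ciphertext, associated_data)` envelope and expect `InvalidKey` whenever the key, nonce, or associated data don't match what was sealed. A toy, deliberately NON-cryptographic std-only sketch of that envelope shape (XOR plus a hash stand in for XChaCha20-Poly1305; real code must use a proper AEAD as the tests do):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Mirrors the UnsealEncryptedKey shape from the tests, plus a toy auth tag.
struct Envelope {
    nonce: Vec<u8>,
    ciphertext: Vec<u8>,
    associated_data: Vec<u8>,
    tag: u64, // toy authentication tag; a real AEAD produces this internally
}

fn tag_of(key: &[u8], nonce: &[u8], aad: &[u8], plaintext: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    (key, nonce, aad, plaintext).hash(&mut h);
    h.finish()
}

fn seal(key: &[u8], nonce: &[u8], aad: &[u8], plaintext: &[u8]) -> Envelope {
    // XOR "encryption" stands in for the stream cipher; insecure, illustration only.
    let ciphertext: Vec<u8> = plaintext
        .iter()
        .zip(key.iter().cycle())
        .map(|(p, k)| p ^ k)
        .collect();
    Envelope {
        tag: tag_of(key, nonce, aad, plaintext),
        nonce: nonce.to_vec(),
        ciphertext,
        associated_data: aad.to_vec(),
    }
}

fn open(key: &[u8], env: &Envelope) -> Result<Vec<u8>, &'static str> {
    let plaintext: Vec<u8> = env
        .ciphertext
        .iter()
        .zip(key.iter().cycle())
        .map(|(c, k)| c ^ k)
        .collect();
    // Reject unless key, nonce and AAD all match what was sealed,
    // like the AEAD tag check behind UnsealResult::InvalidKey.
    if tag_of(key, &env.nonce, &env.associated_data, &plaintext) == env.tag {
        Ok(plaintext)
    } else {
        Err("InvalidKey")
    }
}

fn main() {
    let env = seal(b"seal-key", &[0u8; 24], b"unseal", b"secret");
    assert_eq!(open(b"seal-key", &env).unwrap(), b"secret");
    // Wrong key fails, mirroring test_unseal_wrong_seal_key.
    assert!(open(b"wrong-key", &env).is_err());
}
```

The authenticate-then-reject flow is why `test_unseal_corrupted_ciphertext` also reports `InvalidKey`: an AEAD cannot distinguish a wrong key from tampered ciphertext, only that the tag check failed.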
@@ -2,5 +2,14 @@
 name = "arbiter-useragent"
 version = "0.1.0"
 edition = "2024"
+license = "Apache-2.0"
 
 [dependencies]
+arbiter-proto.path = "../arbiter-proto"
+kameo.workspace = true
+tokio = { workspace = true, features = ["net"] }
+tonic.workspace = true
+tracing.workspace = true
+ed25519-dalek.workspace = true
+smlang.workspace = true
+x25519-dalek.workspace = true
@@ -1,14 +1,66 @@
-pub fn add(left: u64, right: u64) -> u64 {
-    left + right
+use arbiter_proto::{proto::UserAgentRequest, transport::TransportActor};
+use ed25519_dalek::SigningKey;
+use kameo::{
+    Actor, Reply,
+    actor::{ActorRef, WeakActorRef},
+    prelude::Message,
+};
+use smlang::statemachine;
+use tonic::transport::CertificateDer;
+use tracing::{debug, error};
+
+struct Storage {
+    pub identity: SigningKey,
+    pub server_ca_cert: CertificateDer<'static>,
 }
 
-#[cfg(test)]
-mod tests {
-    use super::*;
+#[derive(Debug)]
+pub enum InitError {
+    StorageError,
+    Other(String),
+}
 
-    #[test]
-    fn it_works() {
-        let result = add(2, 2);
-        assert_eq!(result, 4);
+statemachine! {
+    name: UserAgentStateMachine,
+    custom_error: false,
+    transitions: {
+        *Init + SendAuthChallenge = WaitingForAuthSolution
+    }
+}
+
+pub struct UserAgentActor<A: TransportActor<UserAgentRequest>> {
+    key: SigningKey,
+    server_ca_cert: CertificateDer<'static>,
+    sender: ActorRef<A>,
+}
+
+impl<A: TransportActor<UserAgentRequest>> Actor for UserAgentActor<A> {
+    type Args = Self;
+
+    type Error = InitError;
+
+    async fn on_start(args: Self::Args, actor_ref: ActorRef<Self>) -> Result<Self, Self::Error> {
+        todo!()
+    }
+
+    async fn on_link_died(
+        &mut self,
+        _: WeakActorRef<Self>,
+        id: kameo::prelude::ActorId,
+        _: kameo::prelude::ActorStopReason,
+    ) -> Result<std::ops::ControlFlow<kameo::prelude::ActorStopReason>, Self::Error> {
+        if id == self.sender.id() {
+            error!("Transport actor died, stopping UserAgentActor");
+            Ok(std::ops::ControlFlow::Break(
+                kameo::prelude::ActorStopReason::Normal,
+            ))
+        } else {
+            debug!(
+                "Linked actor {} died, but it's not the transport actor, ignoring",
+                id
+            );
+            Ok(std::ops::ControlFlow::Continue(()))
+        }
     }
 }
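The `statemachine!` invocation above (from the smlang crate) currently declares a single transition. A hand-rolled std-only sketch of the table it describes, with names mirroring the macro input (the real smlang-generated API differs):

```rust
// States and events as named in the statemachine! block above.
#[derive(Debug, PartialEq, Clone, Copy)]
enum States {
    Init,
    WaitingForAuthSolution,
}

#[derive(Debug)]
enum Events {
    SendAuthChallenge,
}

#[derive(Debug, PartialEq)]
enum TransitionError {
    InvalidEvent,
}

// The explicit transition table: one legal move, everything else rejected,
// analogous to smlang returning an invalid-event error.
fn transition(state: States, event: Events) -> Result<States, TransitionError> {
    match (state, event) {
        (States::Init, Events::SendAuthChallenge) => Ok(States::WaitingForAuthSolution),
        _ => Err(TransitionError::InvalidEvent),
    }
}

fn main() {
    let s = transition(States::Init, Events::SendAuthChallenge).unwrap();
    assert_eq!(s, States::WaitingForAuthSolution);
    // Re-sending the challenge from the new state is an invalid event.
    assert_eq!(
        transition(States::WaitingForAuthSolution, Events::SendAuthChallenge),
        Err(TransitionError::InvalidEvent)
    );
}
```

Encoding the protocol as a state machine is what lets the server reject out-of-order messages, as `test_unseal_start_without_auth_fails` exercises on the server side.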
0
app/.gitignore → useragent/.gitignore
vendored
@@ -41,14 +41,6 @@ packages:
       url: "https://pub.dev"
     source: hosted
     version: "1.19.1"
-  cupertino_icons:
-    dependency: "direct main"
-    description:
-      name: cupertino_icons
-      sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
-      url: "https://pub.dev"
-    source: hosted
-    version: "1.0.8"
   fake_async:
     dependency: transitive
     description:
21
useragent/pubspec.yaml
Normal file
@@ -0,0 +1,21 @@
name: arbiter
description: "User agent for Arbiter"
publish_to: 'none'

version: 0.1.0

environment:
  sdk: ^3.10.8

dependencies:
  flutter:
    sdk: flutter

dev_dependencies:
  flutter_test:
    sdk: flutter

  flutter_lints: ^6.0.0

flutter:
  uses-material-design: true