Threat model appendix: use case description #81
The threat model speaks very broadly, because it needs to cover all of Tock's use cases. This adds an appendix that is much more specific, listing out individual Tock use cases and describing how the Tock system should be set up for each security model. The purpose is twofold:

1. Give users an easy starting point for configuring Tock (and serve as an explanation of how app IDs can be used in practice).
2. Help guide the design of Tock's security mechanisms (e.g. why we support cryptographically-verified app IDs).
> The AppID checker verifies the signature is valid for the public key. The
> application ID for the process is `format || public key` (a contiguous byte
> range, `footer[4..40]`, so the appID slice can point into the footer).
This is unusual. I think this AppId scheme is meant to ensure all applets have the same AppId and that two applets never run at the same time? Explicitly calling out why a scheme that uses the same AppId is actually desirable here would be instructive.
The way I wrote this scheme, for a single developer to have multiple applets, they would need to generate multiple private-public keypairs in their PKI.

As an alternative, I can add an "applet ID" field, which would be part of the appID, allowing developers to create multiple applets using a single private-public keypair. I decided to omit it because I thought keeping the scheme simple would make it easier to understand, but maybe it's better to have that field?
"applet ID" would be in the TBF header? I think based on what it does it can't go in the footer because then it isn't signed and could be changed by anyone.
Yes, applet ID would need to be in a TBF header.
The way I would implement it is to put it into both a TBF header and the TBF footer. In the footer, I would place it adjacent to the public key so that it can easily be included in the appID slice as well. The credentials checker would need to verify that the copy in the TBF header matches the copy in the TBF footer.
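As a rough illustration of that check (the footer offsets, the 4-byte applet ID size, and the `check_credential`/`verify_signature` names here are all assumptions, not taken from the TRD or this appendix):

```rust
// Hypothetical credentials footer layout (offsets are illustrative only):
//   footer[0..4]    footer type/length fields (ignored here)
//   footer[4..8]    credential format identifier
//   footer[8..40]   public key (32 bytes)
//   footer[40..44]  applet ID, duplicated from a signed TBF header
//   footer[44..]    signature over the integrity region
#[derive(Debug, PartialEq)]
enum CredentialError {
    FooterTooShort,
    AppletIdMismatch,
    BadSignature,
}

/// Checks the credential and returns the appID as a contiguous slice into the
/// footer: `format || public key || applet ID`, i.e. `footer[4..44]` here.
fn check_credential<'a>(
    footer: &'a [u8],
    header_applet_id: &[u8; 4], // applet ID as read from the TBF header
    integrity_region: &[u8],    // TBF headers + app binary covered by the signature
    // Stand-in for real signature verification:
    // (public key, message, signature) -> valid?
    verify_signature: impl Fn(&[u8], &[u8], &[u8]) -> bool,
) -> Result<&'a [u8], CredentialError> {
    if footer.len() < 44 {
        return Err(CredentialError::FooterTooShort);
    }
    // Only the TBF header copy of the applet ID is covered by the signature;
    // the footer copy exists so the appID can be one contiguous slice, so it
    // must match the signed header copy.
    if footer[40..44] != header_applet_id[..] {
        return Err(CredentialError::AppletIdMismatch);
    }
    if !verify_signature(&footer[8..40], integrity_region, &footer[44..]) {
        return Err(CredentialError::BadSignature);
    }
    Ok(&footer[4..44])
}
```

Dropping the applet ID field falls back to the scheme quoted above, where the appID slice would just be `footer[4..40]`.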
Or we could remove the sentence "Computing an integrity value in a Credentials Footer MUST NOT include the contents of Footers." from the appID TRD. Then no new TBF header would be required.
> The first range, system apps, is for applications that are part of the operating
> system itself. These applications may have special permissions that other
> applets do not have (for example, if the system for installing applets uses a
> userspace process, that userspace process would have extra permissions). These
> applications have hardcoded short IDs, which allows hardcoded ACLs (such as the
> syscall filtering table) to refer to them.
>
> The remaining range is for dynamically-installed applets. These are allocated
> using a table stored in nonvolatile storage, allowing each app to be assigned a
> short ID.
>
> When the kernel loads a process, it checks the table to identify its short ID.
> If the application is not present, it assigns it a new short ID and adds it to
> the table in nonvolatile storage.
Using the same AppId but different ShortIds is confusing, or I'm missing something.
Yeah, there's a miscommunication here. There should only be one applet with a particular appID running at any one time (as per the appID TRD), so every running process should have a distinct ShortId. But once an applet has been assigned a ShortId, that ShortId should remain in the table (until the applet has been fully uninstalled), so every execution of that applet should carry the same ShortId.

Or are you saying it's confusing that uninstalling and reinstalling an applet results in different ShortIds? If that's the case, I don't see that as a problem.
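As a rough sketch of that lookup-or-assign behavior, using an in-memory `Vec` as a stand-in for the table in nonvolatile storage (the names, the reserved system-app range, and the monotonically increasing counter are illustrative assumptions, not from the proposed appendix):

```rust
/// First ShortId available to dynamically-installed applets; smaller values
/// are reserved for system apps with hardcoded ShortIds (illustrative split).
const FIRST_DYNAMIC_SHORT_ID: u32 = 0x0000_0100;

/// Stand-in for the table kept in nonvolatile storage.
struct ShortIdTable {
    entries: Vec<(Vec<u8>, u32)>, // (appID bytes, assigned ShortId)
    next_id: u32,
}

impl ShortIdTable {
    fn new() -> Self {
        ShortIdTable {
            entries: Vec::new(),
            next_id: FIRST_DYNAMIC_SHORT_ID,
        }
    }

    /// Returns the ShortId previously recorded for this appID, or assigns a
    /// new one and records it. In a real implementation the updated table
    /// would be written back to nonvolatile storage, so the applet keeps the
    /// same ShortId across reboots until it is uninstalled.
    fn lookup_or_assign(&mut self, app_id: &[u8]) -> u32 {
        if let Some((_, short_id)) =
            self.entries.iter().find(|(id, _)| id.as_slice() == app_id)
        {
            return *short_id;
        }
        let short_id = self.next_id;
        self.next_id += 1;
        self.entries.push((app_id.to_vec(), short_id));
        short_id
    }

    /// Uninstalling an applet frees its entry; reinstalling it later would
    /// assign a fresh ShortId.
    fn remove(&mut self, app_id: &[u8]) {
        self.entries.retain(|(id, _)| id.as_slice() != app_id);
    }
}
```

Every running process then gets the ShortId recorded for its appID, and only a full uninstall/reinstall cycle changes it.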
> The following documents give a sampling of possible use cases for Tock. Each
> document describes how Tock can be configured to meet each user's needs and how
> that configuration interacts with the threat model.

Suggested addition:

> These use case guides are intended to be instructive, and not prescriptive. Users with these (or similar) use cases may choose to follow the design outlined in these documents. However, there likely exist other successful designs for these use cases as well.
Done.
However: "Example Use Case: Manual Local Tock Deployment" matches the primary use cases for most of the boards in the tock/tock repository. Therefore, I think most boards in tock/tock should use that configuration.