A Logic of Sattestation
Abstract
We introduce a logic for reasoning about contextual trust for web addresses, provide a Kripke semantics for it, and prove its soundness under reasonable assumptions about principals’ policies.
Self-Authenticating Traditional Addresses (SATAs) are valid DNS addresses or URLs that are generally meaningful—to both humans and web infrastructure—and contain a commitment to a public key in the address itself. Trust in web addresses is currently established via domain name registration, TLS certificates, and other hierarchical elements of the internet infrastructure. SATAs support such structural roots of trust but also complementary contextual roots associated with descriptive properties. The existing structural roots leave web connections open to a variety of well-documented and significant hijack vulnerabilities. Contextual trust roots provide, among other things, stronger resistance to such vulnerabilities.
We also consider labeled SATAs, which include descriptive properties such as that a SATA is an address for a news organization, a site belonging to a particular government or company, a site with information about a certain topic, etc. Our logic addresses both trust in the bound together identity of the address and trust in the binding of labels to it. Our logic allows reasoning about delegation of trust with respect to specified labels, relationships between labels that provide more or less specific information, and the interaction between these two aspects.
In addition to soundness, we prove that if a principal trusts a
particular identity (possibly with label), then either this trust is
initially assumed, or there is a trust chain of delegations to this
from initial trust assumptions. We also present an algorithm that
effectively derives all possible trust statements from the set of
initial trust assumptions and show it to be sound, complete, and
terminating. This is a full version, including proofs, of a paper to appear in the Proceedings of the 37th IEEE Computer Security Foundations Symposium (CSF’24).
Distribution Statement A. Approved for public release: distribution is unlimited.
I Introduction
We introduce a logic for reasoning about contextual trust for web addresses, both for properties essential to the identity of the address and for descriptive properties. These properties, or labels, say things such as: that an address is for a news organization; that it belongs to the U.S. Government, Microsoft, or another specified entity; that it relates to a topic like cooking; etc. Importantly, these are descriptive and not for authorization. Trust in web addresses is currently established via domain name registration, TLS certificates, and other hierarchical components of the internet infrastructure. Formal reasoning about trust in TLS certificates (or predecessors of them) has long been researched [1], as has reasoning about alternatives to hierarchies like X.509 via local naming such as SPKI or SDSI [2]. In our logic, names (addresses) are always global so as to remain entirely compatible with the predominant existing internet naming and authentication infrastructure. It is their descriptive properties that can be local or contextual. We also reason about contextual roots of trust complementary to the existing web authentication infrastructure, both for global addresses and for their descriptive properties.
Traditional domain names like microsoft.com are meaningful: users understand that connecting to https://www.microsoft.com will take them to a website offering information about products and services offered by Microsoft. They are also meaningful to the internet’s infrastructure: microsoft.com is meaningful to DNS, which is generally able to provide an IP address for it. Further, there is structural meaning to domain names understood by both humans and infrastructure systems: support.microsoft.com is understood to be a subdomain of microsoft.com, a subdomain at which users might expect to find support information for Microsoft products and services.
To protect against hijack and other attacks, TLS certificates signed by Certificate Authorities (CAs) are understood by web browsers. These assert that the key authenticating a connection to https://www.microsoft.com is indeed properly bound to that domain. In contrast, an onion address in Tor needs no certificate to assert a binding between the authenticating key and the connection’s address. It is self-authenticating: the onion address is a 56-character string encoding the public key for the private key used to authenticate the connection [3]. However, onion addresses are not meaningful. For humans they appear to be long random strings of letters and numbers; the RFC that recognizes .onion as a reserved TLD also prohibits DNS resolution of onion addresses [4]; and most popular browsers are not set up to use the onion-service protocols needed for address resolution or reaching an onion address or, more generally, for connections via the Tor network.
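To make the self-authentication concrete, the following sketch (ours, not part of the prior SATA work) shows how a v3 onion address can be checked against the public key it encodes, following our understanding of the Tor v3 onion-service address layout (32-byte ed25519 key, 2-byte checksum, 1-byte version); treat the details as illustrative rather than normative.

    # A minimal sketch (not from the SATA papers) of why a v3 onion address is
    # self-authenticating: the 56-character address is a base32 encoding of the
    # service's ed25519 public key plus a checksum and a version byte, so anyone
    # can recover the key that must authenticate the connection and verify the
    # checksum locally.  Layout follows our reading of the Tor v3 spec.
    import base64
    import hashlib

    def decode_onion_v3(onion: str) -> bytes:
        """Return the 32-byte ed25519 public key encoded in a v3 onion address,
        raising ValueError if the address is malformed."""
        label = onion.lower().removesuffix(".onion")
        if len(label) != 56:
            raise ValueError("v3 onion addresses are 56 base32 characters")
        raw = base64.b32decode(label.upper())                 # 35 bytes
        pubkey, checksum, version = raw[:32], raw[32:34], raw[34:]
        expected = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
        if version != b"\x03" or checksum != expected:
            raise ValueError("checksum/version mismatch")
        return pubkey

A SATA-aware client could apply such a check to the onion component of a SATA before engaging in any of the trust reasoning developed later in this paper.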
Self-authenticating traditional addresses (SATAs) were introduced to
combine the meaningfulness of traditional web addresses to both humans
and the common infrastructure (e.g., DNS and popular browsers) with
the self-authentication of onion
addresses [5, 6, 7]. A SATA for the
Tor Project would combine its traditional domain name,
www.torproject.org, with its onion address, thus for example:
www.torproject.org/?onion=2gzyxa5ihm7nsggfxnu52rck2vv4rvmdlkiu3zzui5du4xyclen53wid.
SATAs do not by themselves bring self-authentication to meaningful domain names. Seeing the above SATA in her browser, how could Alice know whether an attacker with a hijacked certificate for www.torproject.org had associated its own onion address rather than one actually belonging to the Tor Project? Prior work, which we briefly review presently, sets out the technical means for binding the parts of a SATA. Part of the work of this paper is to provide a logic for reasoning about how Alice can ground trusting that such a SATA is bound together in trust credentials that are based on meaningful associations rather than just structural authority.
The above SATA format has usable security and deployment advantages [7], but the first introduced format was for the onion address to be a subdomain of the traditional domain name [5]. This allowed the onion address to be bound in a Domain Validation (DV) certificate by a CA using then existing industry standards for certificate issuance. And that format is still used in this way, further making onion addresses meaningful to the internet infrastructure, in this case the infrastructure for TLS certificate issuance. The certificate indicates that the CA has done a standard check that someone controlling the traditional domain name intends the binding of the onion address to that domain name. Since onion addresses are self-authenticating, a SATA owner can attest that someone controlling the onion address also intends this binding by signing an HTTP header also containing both addresses, along with a hash of the TLS certificate, a validity interval, etc. This is called a self-sattestation.
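The following is a schematic sketch of the idea behind a self-sattestation: the holder of the onion address’s private key signs a statement covering both addresses, a hash of the TLS certificate, and a validity interval. The field names and serialization here are hypothetical illustrations of ours; the actual header format is the one defined in the cited SATA work.

    # Schematic only: field names and serialization are our own placeholders,
    # not the header format defined in prior SATA implementations.
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    def make_self_sattestation(onion_key: Ed25519PrivateKey, domain: str, onion: str,
                               cert_sha256: str, valid_from: str, valid_to: str) -> dict:
        body = {"domain": domain, "onion": onion, "cert_sha256": cert_sha256,
                "valid_from": valid_from, "valid_to": valid_to}
        payload = json.dumps(body, sort_keys=True).encode()
        return {"body": body, "sig": onion_key.sign(payload).hex()}

    def check_self_sattestation(onion_pub: Ed25519PublicKey, header: dict) -> bool:
        payload = json.dumps(header["body"], sort_keys=True).encode()
        try:
            onion_pub.verify(bytes.fromhex(header["sig"]), payload)
            return True
        except InvalidSignature:
            return False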
If a full SATA is known, such as by being recalled from a browser bookmark, then this might be sufficient. But if address discovery is via the traditional domain name, such as from user memory or via a search engine lookup of, e.g., “microsoft” or “New York Times”, then it is not. Even with self-sattestation, SATA authentication as described so far would only be as strong as the structural guarantees of TLS certificate issuance: an attacker able to obtain a fraudulent TLS certificate for a domain could assert binding of an onion address under their control within that certificate. Since onion addresses are not easily recognized, this could pass undetected. And TLS certificates are subject to significant hijack risk [8] despite substantial efforts to reduce that risk (e.g., via DNSSEC or Extended Validation (EV) certificates) or to detect misissuance (e.g., via Certificate Transparency (CT) logs).
For this reason, one SATA can provide a sattestation for another. An example suggested in [6] would be for a SATA controlled by the Freedom of the Press Foundation or the Berkman Klein Center for Internet & Society to provide a sattestation for a SATA for CNN, attesting that the components comprising CNN’s SATA are properly bound together. As introduced in [7], sattestations are implemented as JSON formatted headers signed using the onion key of the sattestor SATA and may contain descriptive labels, such as that the sattestee is a “news” site. Worked examples of sattestations, including such JSON headers, are provided in Section V.
Prior work described the implementation and formats of sattestation but not any analysis or formalization for reasoning, either about contextual trust in sattestors asserting the binding of a SATA’s identity or about the structure of related descriptive labels in sattestations. These are set out in Section III.
I-A Contributions
In this paper we
• introduce concepts of identity and contextual properties for web addresses and an associated concept of trusted belief in the binding together of such identities with or without associated properties; [Section II]
• introduce a formal language and axiomatic logic for reasoning about trust in attestations of identity (i.e., binding of a public key in a self-authenticating address with a traditional domain name, e.g., as resolvable via DNS), a logic that supports two specific types of trust delegation; we also introduce a Kripke semantics for the language and show soundness of the logic with respect to that semantics; [Section III]
• contrast and compare the introduced logic with logics of authentication, logics of local names, and authorization and access control logics; [Section IV]
• prove that any statement a principal trusts is either initially trusted or reached by a trust chain of delegations from initial trust assumptions; [Section VI]
• present a saturation algorithm that effectively derives all possible trust statements from a finite set of initial trust assumptions and show it to be sound, complete, and terminating. [Section VII]
I-B Related Work
Basic concepts and implementations of SATAs and sattestations as described in this paper are set out in papers by Syverson together with various coauthors [5, 6, 7]. [6] introduces a concept of contextual trust illustrated in the “news” example cited above, as well as an implementation in the form of a list of sattestations a site can post that a browser WebExtension can select to trust. Section V presents examples following the JSON format for sattestation headers introduced in [7], which include contextual labels. [7] also mentions in one sentence the possibility of transitivity if a sattestor label is permitted, but does not say anything more about this. None of these papers describes any means, formal or otherwise, to reason about when a sattestation should be trusted. Nor do they mention any structure to labels and how these relate to each other (e.g., as introduced below in Section III-A). Further, they do not provide a language for, or any conception of, transitive or iterated trust in any form.
The basic meaning we attach to “trust” is the same as given by Coker et al. [9]: a principal A trusts a principal B with regard to a statement σ if and only if, from the fact that B has said σ, A infers that σ was true at a given time. [9] is about formalizing remote attestation for trusted hardware as in the Trusted Platform Modules of the Trusted Computing Group (TCG). While our basic notion of trust is in this sense the same, that paper also does not discuss transitivity or iteration of trust or anything like our concept of contextual trust.
Formalizing transitive trust has previously been described in the context of secure communication [10]. However, that paper is primarily focused on honest messages (known to be true by the sender when sent) and related concepts that help ground a theory of cooperative communication in distributed systems. We do not assume messages are honest, and most of our rules effectively set out conditions under which it is possible to make a valid trust inference from a message. There is also nothing analogous to our notion of contextual trust therein. We further discuss transitive and iterated trust in Section III-C.
II Identities, Beliefs, and Trust
II-A No entity without identity
An entity comprises an identity and properties. The identity is signified by a domain name d and an onion address o. In our notation an arbitrary pair (d, o) is only a potential entity, a may-be; it may or may not signify a real identity. If d and o are bound together, then the pair is a real identity. (By analogy to Thomistic ontology [11], we would say it has esse.) In this case we say the SATA is bound.
Though it is often to be expected that any domain name is bound with at most one onion address and vice versa, this need not be the case. Amongst other things, this allows multiple domain names to be bound to a single onion address (roughly as multiple domains might be hosted at the same IP address) and allows multiple onion addresses to be bound to a single domain name (as might be useful during an updating of a domain’s associated onion address).
Properties are expressed using labels, including the distinguished binding label just noted. (The only essential property an identity has is being bound; all other properties are accidental.) A SATA carrying a property label signifies that d and o are bound and that this entity has that property. Having a property may entail having a less specific property. For example, a site with the property of being a U.S. Department of Justice (DOJ) site also has the more generic property of being a U.S. Government (USG) site. A site for French recipes could have separate properties pertaining to French things and pertaining to recipes. It could also have a single “French recipes” property; this could entail a more generic “French” property, or it could entail a more generic “recipes” property. (As noted when we formalize our treatment of labels in Sec. III-A, we require that, if two properties are more generic than a third property, then one of the first two properties is more generic than the other.)
Sattestations never assert a set of properties of which some are claimed to be bound and others are not, and sattestations never assert multiple independent (non-binding) properties. Thus, we do not need to worry about capturing a scenario in which a SATA is known to have one property and we are reasoning about whether or not it has a different property as well. Also, as mentioned above in Section I, there are currently two implemented SAT address formats in use [7]. The SATA notation we have introduced in this section and that we use in our logic is an abstraction that covers both of them.
II-B Belief, Knowledge, and Trust
Suppose a generic principal—either a client or a SATA—forms beliefs about entities. These beliefs are expressed in our logic using the trusts operator. Our logic does not cope with incorrect beliefs; if a principal trusts that d and o are bound, then d is actually bound to o. On the other hand, we have no need of anything comparable to a knowledge (T) axiom, because we never express a binding as a bare formula except within a trusts operator or within a statement that some SATA asserts the binding (possibly together with some label).
Besides bindings, the other thing a principal may trust is that a SATA has asserted something about the binding of an identity (either with just the binding label or possibly an additional label). We capture such assertions using the says operator (sometimes omitting the outer parentheses). A principal’s trust in an assertion may follow from receiving or retrieving a sattestation header, as described in [7], or it may follow from retrieving a list of entities claimed to be bound from the appropriate page of a SATA site, as described in [6]. As formalized below, the left-hand side of the says operator is always written as a may-be, even if the name and address are known to be bound.
We will formally set out our language and logic in the next section. For purposes of comparison with other logics we note that all atomic formulae of our language are in the following forms (and generalizations of them to be explained when introduced below).
• relations between property labels;
• trust in a binding;
• trust that others asserted a binding.
Note that it could be practical to support reasoning about arbitrary principals asserting bindings. For example, a client on a company-provided computer might download a list of labeled SATAs from a file over a trusted connection to a company server without a SATA, or Alice might be sent one or more labeled SATAs in an authenticated email message from Bob. For simplicity, we restrict the reasoning about assertions in our current language and logic to assertions made by SATAs. We simply make appropriate local trust assumptions about the binding and labels of SATAs when these are grounded in non-SATA sources. Further, we assume that SATAs only make sattestations they believe. Thus, if a SATA issues a sattestation, then what is asserted will be reflected in the local trust assumptions of that SATA.
Note that this means trust about assertions is always expressed in formulae such as the above, in which the asserting SATA appears as a may-be; we never write a bound SATA as the asserter, even if the principal trusts that its domain name and onion address are bound. Any dependence on such bindings in the axioms of our logic is reflected in other antecedent clauses. The only thing trust in a binding assertion indicates is which may-be asserted what binding.
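For concreteness, the following sketch gives one possible machine encoding (ours, not the paper’s formal syntax) of the three kinds of atomic formulae just discussed: narrowing between labels (represented here simply as strings), trust in a (possibly labeled) binding, and trust that a may-be SATA asserted a binding.

    # One possible encoding (ours) of the atomic formulae discussed above.
    # A "may-be" is just a (domain, onion) pair, with no claim that the two
    # are actually bound; labels are plain strings.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MayBe:                 # a potential entity: a domain/onion pair
        domain: str
        onion: str

    @dataclass(frozen=True)
    class Bound:                 # d and o bound together, carrying a label
        sata: MayBe
        label: str               # the binding label or a descriptive label

    @dataclass(frozen=True)
    class Says:                  # a may-be SATA asserting a (labeled) binding
        speaker: MayBe
        claim: Bound

    @dataclass(frozen=True)
    class Trusts:                # a principal trusting a binding or an assertion
        principal: str
        formula: object          # a Bound or a Says

We reuse these illustrative datatypes in later sketches.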
The axioms of our logic reflect circumstances under which it is valid to infer trust in a binding. Our logic does not incorporate time or an entity gaining or losing a property (as might happen, even for the binding itself, when a domain is sold, a key rotated, etc.). We assume antecedents of a derivation hold when the derivation is made. If an antecedent ceases to be valid, reasoning can be restarted without that statement. Our approach might be extended, e.g., by considering time intervals over which antecedents are valid (and, from those, computing a time interval over which a derivation’s conclusion is valid).
Although we should expect in practice that a client would only accept a SATA binding if it has checked an appropriately associated (non-expired, non-revoked, CT logged, etc.) TLS Certificate, we do not here represent such checks or associated reasoning. More broadly, our purpose in this paper is not to set out a usable tool for formal analysis of sattestation systems in practice, though we believe one could be based on our logic. Rather, our goal is to lay the basis for a rigorous understanding of sattestation.
II-C Contextual Trust Model
The central aspect of SATAs and sattestations that distinguishes them from other approaches to authentication (X.509, PGP) or trusted naming (DNS, SPKI/SDSI) is that they are contextual rather than structural. Note that even though PGP supports locally based trust, it still makes a purely structural determination: trust derives from distance and/or numbers of independent paths to primarily trusted nodes in a web, rather than via X.509’s adequate delegation from hierarchical roots, but it is not based on any context (e.g., association with a particular profession, company, or government agency). Contexts are also not types, and we cannot directly represent contextual trust in a standard type system. The same entity can have multiple contextual properties, and, as we shall see, this makes a difference as to what can be inferred about them. But entities might not have contextual labels at all other than the generic binding label (so, roughly, they are not necessarily typed).
Also, while X.509 and DNS work primarily at large scales, and local names and PGP work primarily at much smaller scales, sattestations scale both up and down. They might operate at a large scale, such as the USG example above, or similarly for large corporations or organizations, in which case one would expect large classes of browsers to come preconfigured by default to trust those as sattestors. Besides making these widely available to consumers, requiring such trust configuration on enterprise-issued or enterprise-configured equipment can also protect corporate or governmental internal connections by remote workers against phishing attacks, VPN certificate hijack, and other attacks that could leak sensitive information, such as [12].
Sattestations will also operate fine, however, at the level of small organizations, municipalities, and groups of various kinds, possibly without the resources or manpower to perform ongoing checks of CT log monitors or other indicators of certificate problems or hijack. (And those only provide post hoc evidence in any case.) Perhaps especially at a small scale, a basis for contextual trust is in associations and trust structures that already or naturally exist within the culture of an organization. It is generally a higher bar to fool the organizers and participants in a community into accepting that one is a member of that community than to fool others with no vested interest in that community per se.
Sattestations also do not require organizations to take on the overhead and responsibility of becoming a CA, nor do they need the active support of such structural entities, even if they utilize and leverage CAs, CT logs, etc., for the trust provided. CAs might also offer sattestation services, and this can provide some additional assurance and attack resistance. But to the extent these services are based on purely structural criteria for issuance of sattestations, they will be bound for most domains by the trust that a DV (Domain Validation) level check of identity can provide, even if they do build security (in the form of public keys) into the web itself.
II-D Trust in Implementation
Though our focus is on the logic of SATAs and sattestation, we briefly describe their implementation and use in practice, especially in contrast to TLS certificates. SATAs and sattestations have had research implementations and deployments on both the client and server sides [13, 14], which have also been described in research publications [6, 7]. On the client side the implementation uses a WebExtension for Firefox or Tor Browser that checks headers such as shown in Sec. V, although the cited publications note that native implementation in browsers would be desirable in the longer term. In the Firefox deployment, clients receive the strengthened authentication and hijack resistance properties described herein. In the Tor Browser implementation, clients also receive the routing and lookup protections that come from routing over the Tor network and using onion-service protocols (thus, e.g., lookup via the onion-service Distributed Hash Table rather than via DNS). Unlike Tor’s onion addresses, SATAs are fully backwards compatible: browsers knowing nothing of SATAs will offer none of these additional protections, but access using SATAs will succeed as normal (assuming no attacks).
Because SATAs are represented in TLS certificates using subdomains of registered domains, site owners can straightforwardly obtain needed DV certificates for free, e.g., from Let’s Encrypt. Because SATAs are generally formatted in one of the two formats described in Section I, no reconfiguring is necessary to make a site SATA-compatible. Besides the TLS certificate just noted, a front end is all that is needed to provide the sattestation headers.
III Formal Language, Logic, and Semantics
III-A Labels
We start with the labels used in our logic. We assume that we have a “basic” label set whose elements are free of the distinguished sattestor operators, simple and delegatable, which will be explained below. We expand this to a set of “general” labels by single applications of either operator.

In presenting rules below, some label metavariables and label sets range over the basic labels and others over the general labels; which is which will be clear in context.

The sattestor operators, each of which takes a single label as its argument, always occur as labels of bound SATAs. A SATA labeled as a simple sattestor for a label may assert the binding of that label to any SATA, while a SATA labeled as a delegatable sattestor for a label may indefinitely delegate that label: it may assert, for any SATA, that the SATA is bound with the label, that it is a simple sattestor for the label, or that it is itself a delegatable sattestor for the label. These notions are made more precise by our axioms and truth conditions below and are further glossed after those are presented.

We assume that there is a partial order on the label set: when one label lies below another, we say that the lower label “narrows” the upper and that the upper “broadens” the lower, and we use weak (reflexive) as well as strict versions of both narrowing and broadening. The label set includes the distinguished binding label, which we discuss further below and which is incomparable to all other labels. We make the following additional assumptions about the partial order. (While the first two assumptions are stated in terms of what principals “can determine,” these are assumptions about information conveyed by labels that a principal may not have seen before rather than about principals’ computational power. These assumptions are satisfied by the label system we use here and the canonical label representation described in the following paragraphs.)
• For every label, the set of labels weakly broadening it is finite. Furthermore, given a label, any principal can determine the elements of this set.
• Given two labels, any principal can determine whether the first narrows the second, the second narrows the first, the two are equal, or the two are incomparable under the partial order.
• For every label, the set of labels weakly broadening it is linearly (totally) ordered under the partial order.
Equivalent to the third condition above is that the partial order induces the structure of an ordered, rooted forest on the label set. The root of each tree in the forest is the maximum element of that tree. We also observe that the order relation between two labels corresponds to containment between their sets of weak broadenings, with equality holding on one side iff it holds on the other.
If every basic label can be represented by a distinct finite string, then, given our first and third assumptions on the label poset, we may without loss of generality represent each label as a finite string, where these strings satisfy the property that one label strictly narrows another if, and only if, the string for the broader label, concatenated with a distinguished symbol, is a prefix of the string for the narrower label.

To see the previous claim: Let ‘:’ be a distinguished symbol, and (re-encoding if necessary) let each basic label be represented by a finite, ‘:’-free string, with distinct labels represented by distinct strings. Given a label, we define its canonical string representation as follows. Let the maximum element of its set of weak broadenings be represented simply by that element’s own string. Inductively, let the largest element of the set of weak broadenings whose canonical representation has not yet been defined be represented by the canonical representation of the label covering it, concatenated with ‘:’ and then with that element’s own string. (One label covers another iff the second strictly narrows the first and no label lies strictly between them.)

Because each canonical representation is defined inductively in terms of the string representations of the elements of a finite, linearly ordered set of broadenings, it will not be defined in conflicting ways in the process of defining the representations of multiple labels.

By our inductive construction, in this canonical representation of labels, one label’s representation, followed by ‘:’, is a prefix of another’s if, and only if, the second label strictly narrows the first. Furthermore, any given basic string appears at most once as a substring of a canonical label representation.
Our examples of labels will all have the form of canonical ‘:’-separated labels as just described. For two such labels, one narrows the other if, and only if, the broader is a prefix of the narrower.
Example 1.
Assume ‘USG’, ‘DOJ’, and ‘FBI’ are basic strings corresponding to the U.S. Government, the Department of Justice, and the Federal Bureau of Investigation, respectively. Then ‘USG:DOJ:FBI’ narrows ‘USG:DOJ’, which in turn narrows ‘USG’, but labels on different branches, e.g., ‘USG:DOJ’ and ‘USG:FBI’, are incomparable.

Because we are using the canonical label format described above, no string may appear multiple times as a basic string in a label; e.g., a label such as ‘USG:DOJ:USG’ is not permitted as one of our example labels.
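A small sketch (ours) of the canonical representation just described: labels are ‘:’-separated strings, and narrowing is a prefix test, matching Example 1.

    # Canonical ':'-separated labels as described above: a label (weakly)
    # narrows another exactly when the two are equal or the broader label,
    # followed by ':', is a prefix of the narrower one.
    def narrows(narrower: str, broader: str) -> bool:
        """Weak narrowing: True if `narrower` equals `broader` or refines it."""
        return narrower == broader or narrower.startswith(broader + ":")

    def comparable(l1: str, l2: str) -> bool:
        return narrows(l1, l2) or narrows(l2, l1)

    def broadenings(label: str) -> list[str]:
        """The finite, totally ordered set of labels weakly broadening `label`."""
        parts = label.split(":")
        return [":".join(parts[:i]) for i in range(len(parts), 0, -1)]

    assert narrows("USG:DOJ:FBI", "USG")
    assert not comparable("USG:DOJ", "USG:HHS")
    assert broadenings("USG:DOJ:FBI") == ["USG:DOJ:FBI", "USG:DOJ", "USG"]

The distinguished binding label would be handled separately in such a sketch, since it is incomparable to all other labels.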
III-B Grammar, axioms, and rules
We now present a grammar (Sec. III-B1) for terms that may appear in our logic, our logical axioms and local axiom schemata (Sec. III-B2), and the deduction rules (Sec. III-B3) that we use. The local axiom schemata describe the forms of axioms that a principal may optionally use; they capture trust decisions specific to a principal and not inherent in our logic. We provide further discussion of the axioms and local axiom schemata in Sec. III-C.
III-B1 Grammar
We start with a grammar for the terms that may appear in our logic.
The definition of the grammar’s sayers captures that, as noted above, only (may-be) SATAs make assertions via says statements.
III-B2 Axioms and local axiom schemata
We start with our axioms.
A1
A2
A3
A4
A5
A6
A7
A8
A9
A10
A11
A12
(a)
(b)
(c)
We also specify the form of local axiom schemata that a principal may, optionally, use to reflect local trust. These are parameterized either by two labels (schema 1) or by a domain name and onion address (schemata 2 and 3).
(1)
(2)
(3)
III-B3 Rules
Finally, we note the logical rules that we allow.
(MP)
(Adj)
We now describe the reasoning reflected in our axioms and motivation for these as well as for our linguistic and logical choices, after which we will present our semantics.
III-C Motivation for linguistic and logical design decisions
This section describes the intent behind our axioms and rule schemata as well as the linguistic and logical design choices we have made.
The only rules are Modus Ponens (MP) and Adjunction (Adj, a.k.a. And-Introduction). Local axiom schemata reflect local trust assumptions particular principals might hold about nonlogical trust relations between labels or about which principals to trust as sattestors.
Axiom A1 effectively says that any SATA that a principal trusts to have a particular property label is also trusted to have any broader label, e.g., being a U.S. Government Dept. of Justice SATA (label ‘USG:DOJ’) implies being a U.S. Government SATA (label ‘USG’). A1 guides other design choices and how labels should be interpreted. It is important to understand that in our view labels do not capture authorization. We instead view narrowing a label as providing more specific information: a narrower label is more specific than a broader one.
The binding label is incomparable to all other labels. If it were narrower than one or more other labels, A1 would allow us to infer that a SATA had those labels simply because it is bound. And if it were broader than one or more other labels, A9 would allow a SATA that is trusted as a sattestor for it to sattest, for any SATA, to any of those narrower labels. An analogous problem would also arise with A10 and delegatable sattestors.
Axioms A2–A4 tell us a label that one can infer is bound to a particular SATA based on a label that a trusted sattestor has attested is bound to that SATA. The first antecedent conjunct of each of these (about the relation between the three labels occurring in the other antecedent conjuncts) allows for the possibility that the labels in each of them need not be identical but can be appropriate narrowings or broadenings of the others. These axioms also reflect (in the fourth antecedent conjunct of each) the goal of authority independence. As originally stated, authority independence means that a misbehaving or misled CA will not be trusted by itself to bind an onion address and a domain in a TLS certificate. The owner of the domain and the onion address must concur for this binding to be trusted by a SATA-aware client [6, 7]. The authority independence aspect introduced herein applies not to CAs but to sattestors: a sattestor’s assertion that a sattestee has a label will not be trusted unless the sattestee concurs. For Axiom A2, this is an explicit assertion of a label that weakly narrows the sattested label; for Axioms A3 and A4, this is a statement in which the sattestee acts as a sattestor in the appropriate way. We need three axioms to cover the different kinds of labels in the ultimate consequent: sattestor-free, delegatable sattestor, or simple sattestor, respectively.
For practical simplicity, a sattestor knows the label for which it is providing a sattestation and only states it under a role qualified to do so. We assume any sattestor can detect and resist an attacker attempting to somehow confuse that sattestor about the state it is in so as to cause it to utter a sattestation under a sattestor role different from one appropriate for the sattestation uttered. This allows us to avoid having to include the sattestor label (or, respectively, delegatable-sattestor label) under which the sattestor is making a sattestation in the second trust conjunct of Axiom A2—or indeed having a sattestor making any statement be anything beyond a may-be.
We do not include an analogue of Axiom A2 that replaces the restriction that the sattestee’s claimed label weakly narrows the sattested label with the opposite restriction, i.e., one in which the label claimed by the sattestee is (strictly) broader than the label given by the sattestor. We now explain why.
If we derived the narrower label given by the sattestor, then we would be giving the sattestee a label that is more specific than what it claims about itself. This contradicts the motivation for requiring the sattestee to say a label consistent with what the sattestor says. For example, we would not want a sattestor to be able to give the label ‘USG:DOJ’ to a site that only claims the label ‘USG’ for itself.
If we derived the broader label claimed by the sattestee, then the concern is a bit more subtle. As an example, consider labels that reflect the structure of a hierarchical organization. Suppose that one SATA is a directory site in the engineering division of ACME Corp. that can label other sites as research sites within that division, i.e., it is trusted as a sattestor for a label specific to that division. Suppose that a second SATA is an internal server for the marketing division of ACME Corp. that claims it is an ACME Corp. site by saying it has the corporation-wide label. If the directory site says that the marketing server has the engineering-division label, then—using the hypothetical axiom we are considering—we could derive that the marketing server has the corporation-wide label. On one hand, this seems reasonable; if the server had the engineering-division label, then it would also have the corporation-wide label (by A1). On the other hand, the derivation that we are considering makes use of a sattestation by a site that is in a different administrative division from the sattestee and whose authority does not cover the division containing the sattestee. On balance, we think this is undesirable, so we do not include such a derivation here. If the sattestee wanted the broader, corporation-wide label based on the sattestation from the directory site, then it would need to claim the narrower, division-specific label, so that its claim weakly narrows the sattested label and A2 would apply.
Many notions called ‘trust’ are transitive, though this is also sometimes criticised [15]. Following previous work on SATAs, our notion is not inherently transitive. Just because a principal trusts that a SATA is bound and has a property does not mean that the principal trusts sattestations by that SATA. And even if the principal trusts sattestations by that SATA, it may not trust it to sattest to who else is a sattestor. See [6] for an illustrative example.
Rather than simply prohibit transitive trust, our axioms (above) and truth conditions (below) set out the contextual assumptions under which we support it.
Axiom A5 ensures that a SATA trusted to be bound with any label is also trusted to be simply bound.
Axiom A6 tells us that if a principal trusts that a SATA has asserted anything, that SATA is trusted to be implicitly asserting that it is itself a bound SATA.
Axiom A7 indicates that when a principal trusts that a SATA is a delegatable sattestor for a label, it also trusts that the SATA is simply a sattestor for that same label. Axiom A8 reflects that principal trust in an assertion about a delegatable-sattestor label similarly implies trust in an assertion about a simple-sattestor label. Note that A8, combined with A7, allows for a simplification of the logic with respect to Axioms A2–A4. It could otherwise have been necessary to have two versions of each of those axioms: wherever a simple-sattestor label (or an assertion about one) occurs in an antecedent clause of those axioms, we could also have needed a version of the axiom with the corresponding delegatable-sattestor label in that clause. But Axioms A7 and A8 allow us to infer trust in a simple-sattestor label (or in an assertion about such a label) from trust in a delegatable-sattestor label (or in an assertion about such a label).
Axioms A9 and A10 describe trust that a (delegatable) sattestor of any label is also a (delegatable) sattestor of any narrowing of that label.
Axiom A11 indicates that any statement is effectively a public announcement. Every principal knows every assertion made by any SATA. (Principals do not necessarily trust what is asserted; they only trust that it is asserted.) Note that we are not concerned in this logic with capturing reasoning about the sending of messages or authenticated propagation of information. Rather we are concerned with what can be inferred from assertions made by any principal together with assumptions regarding who is trusted to assert which bindings.
Axiom A12 collectively comprises the standard axioms for use of the sentential connective for conjunction. We will in general be liberal in our writing of conjunctions when the usage is clear and consistent with propositional logic, e.g., writing multi-way conjunctions without full parenthesization.
For the local axiom schemata, a principal would use an instance of schema 1 to trust that any SATA bound with one specified label is also bound with another specified label. The schemata 2 and 3 allow a principal to trust a specified SATA to be a (delegatable) sattestor for any label. This is useful if the principal wants to trust whatever her employer (or a trustworthy friend, etc.) says.
III-D Semantics
A frame comprises a set of worlds together with, for each principal, an accessibility relation on those worlds. A model consists of: a frame, a client, a set of domain names, a set of onion addresses, a label set and a partial order on it, the set of all well-formed formulae (all formulae generated by our grammar above), and a (world-dependent) assignment function based on those. Note that all names, labels (and the partial order on them), and statements are shared globally across worlds, while local differences (between worlds) are all reflected in the assignment function. So, within a frame, a world is characterized by the values the assignment function takes there. Because we do not attempt in this logic to capture faulty trust reasoning (where a principal can trust something that is false), our accessibility relations on worlds are reflexive.
Ours is not a normal modal logic, primarily because we do not capture trust of arbitrary formulae. Given our motivating application, our focus is ultimately on reasoning about contextual trust (the binding of labeled SATAs) while relying on the types of utterances that are made in sattestations. While extensions of our logic may be of future interest from a logical perspective, here we intentionally restrict our language to trust in SATA bindings and specified types of assertions. Thus, as already noted, in our current language only a may-be can say anything. Further, the only thing that can be said is a bound SATA, which may have any type of label. And the only formulae that a principal can trust are those expressing such bound SATAs and those expressing utterances of bound SATAs by a may-be SATA. To minimize notation we do not introduce separate notation for the formulae that a may-be can say or for the formulae that a principal can trust. But it should be kept in mind that these cannot be arbitrary formulae.
To capture binding of identities we introduce a pair of functions on worlds: one gives, for each domain name, the set of all onion addresses with which it is bound, and the other gives, for each onion address, the set of all domain names with which it is bound. Those preliminaries out of the way, we now set out the conditions that our assignment function must satisfy. The specification of an assignment function satisfying these then determines the truth conditions. For a world and a label, the assignment function defines those SATAs that have that label in that world. Similarly, for a world and a SATA, it defines the set of statements that the SATA utters in that world. We discuss constraints on the assignment function in Sec. III-C.
AF1
AF2
AF3
AF4
By uttering a formula, any SATA implicitly asserts that it is an entity. We thus restrict the assignment function so that if a SATA’s set of utterances at a world is nonempty, then that SATA is bound at that world. (In general, note that an utterance is a formula, not an entity.) Also, the set of SATAs with a given label can vary from world to world. Nonetheless, we do not permit a SATA to be bound in one world but not in another: if a SATA is bound at any world, then it is bound at all worlds. We reflect this in our truth conditions, which we set out following some preliminary remarks.
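As a concrete, purely illustrative rendering of these definitions, the following sketch represents a frame and model as data, reusing the illustrative datatypes from the sketch in Sec. II-B: worlds carry an assignment from labels to the SATAs bound with them and from SATAs to the statements they utter, accessibility relations are per principal, and the constraints just described are checked explicitly. The names, and the choice of the string "bound" as a stand-in for the distinguished binding label, are ours.

    # An illustrative data rendering (ours) of a frame/model and the constraints
    # on the assignment function described above.
    from dataclasses import dataclass, field

    @dataclass
    class World:
        labeled: dict = field(default_factory=dict)   # label -> set of MayBe SATAs
        says: dict = field(default_factory=dict)      # MayBe SATA -> set of Says

    @dataclass
    class Model:
        worlds: dict                  # world name -> World
        access: dict                  # principal -> set of (world, world) pairs
        BOUND: str = "bound"          # stand-in for the distinguished binding label

        def check(self) -> bool:
            # accessibility relations are reflexive
            ok = all((name, name) in rel
                     for rel in self.access.values() for name in self.worlds)
            for world in self.worlds.values():
                # any SATA with a nonempty set of utterances must be bound
                for speaker in world.says:
                    ok = ok and speaker in world.labeled.get(self.BOUND, set())
            # the set of bound SATAs does not vary from world to world
            bound_sets = [w.labeled.get(self.BOUND, set()) for w in self.worlds.values()]
            ok = ok and len({frozenset(b) for b in bound_sets}) <= 1
            return ok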
We give truth conditions to expressions in our language in terms of satisfaction at a world. (We take the model to be fixed and implicit.) We begin with the various logically atomic formulae: first expressing the narrowing relation on labels, then various types of formulae for SATAs bound with different types of labels. Next we present truth conditions for formulae reflecting assertions, then trust formulae, and finally formulae containing logical connectives.
Our semantics is model-theoretic in structure, not operational, but we are also motivated by an operational perspective reflected in the truth conditions for various individual kinds of formulae. Together, the truth conditions allow us to evaluate satisfaction of any formula generated by our grammar. Truth conditions rely on the assignment function and on truth conditions weakly earlier in the list; for convenience, the first line of each truth condition indicates the type of formula for which it defines truth.
TC1 [narrowing between labels]
TC2 [a bound SATA]
TC3 [a SATA bound with a sattestor-free label]
TC4 [a SATA bound with a simple-sattestor label]
TC5 [a SATA bound with a delegatable-sattestor label]
TC6 [an assertion (says) formula]
TC7 [a trust formula; the truster must be the client or a SATA]
TC8 [an implication]
TC9 [a conjunction]
Note that it is possible to have circular dependencies involving TC5. Thus, e.g., the condition for one SATA to be bound with a delegatable-sattestor label might depend on the corresponding condition for another SATA and vice versa. For our motivating scenario in Section V, this is not a problem: as shown by Thm. VI.4, a principal will not derive trust in a SATA bound with a delegatable-sattestor label unless this is rooted in an explicit local assumption (and not a cycle of dependencies). We now further describe scenarios motivating choices for the semantics of delegatable sattestation.
Repeated delegation of same label for redundancy
A (hypothetical) motivating case for the delegatable-sattestor operator is: the Department of Defense (DoD) is a trusted root for saying that a server is usable by operational forces, via an appropriate label. These forces may be operating in a communication-denied or -degraded environment. If one server is being replaced or augmented by another, it may not be possible to get DoD to provide a sattestation for the new server in a timely manner. It would be useful if the original server could do this, so that there would be a chain of trust from DoD to the new server. The new server could then provide a sattestation for yet another server; if needed, this would allow that server to provide a sattestation for a further one, etc.
Delegating narrower authority without needing to know how it might be narrowed further
A (hypothetical) motivating case for the delegatable-sattestor operator that makes use of label narrowing is: the General Services Administration (GSA, assumed to be a trusted root for the label ‘USG’) sattests to ‘USG:DOJ’ for the DOJ. The DOJ sattests to ‘USG:DOJ:FBI’ for the FBI. This eventually works its way down to a sattestation that effectively asserts that the FBI’s New York field office website is a U.S. Government website. GSA should not need to know DOJ or FBI structure and delegations in order for this to work.
Delegation is permitted to “break” narrowing
The U.S. Dept. of Justice, Bureau of Justice Assistance maintains on the web [16] a list of civil liberties organizations including the Electronic Frontier Foundation (EFF). One could imagine using SATAs to capture (with stronger authentication) trust in this list by using local axiom schema 1 to state that a SATA trusted with the Bureau’s government label is also trusted with a sattestor label for civil liberties organizations. This could then permit trust that is ultimately rooted in a GSA SATA with the label ‘USG’ to ground trust in an EFF SATA with label ‘civil_liberties’, even though ‘civil_liberties’ is not a narrowing of ‘USG’. Section V presents a worked example elaborating on this and illustrating the implementation of it.
III-E Soundness and incompleteness
III-E1 Soundness
A derivation is a sequence of formulae, where each line in the derivation is either an assumption, an instance of an axiom, or a formula that follows by an application of one of the two rules to previously derived lines or axiom instances. Assumptions include instances of local trust axiom schemata and specific trust statements that a principal has preloaded, each being either a bound SATA or an assertion by a SATA about a bound SATA. Assumptions also include facts about narrowings and broadenings of labels otherwise occurring in the set of assumptions.
We present new notation needed to state our soundness result. Let Γ stand for a finite set of formulae (we recognize the notational overload with the use of ‘Γ’ for a set of labels introduced above; we nonetheless stick with the common use of ‘Γ’ for a finite set of formulae when speaking of derivations and soundness, and we rely on context to keep the usages clear), and let φ as before be an arbitrary formula. ‘Γ ⊢ φ’ means that φ is derivable from Γ and the axioms using the inference rules of the logic. ‘Γ ⊩ φ’ means that, in all models, φ is true at all the worlds at which all the members of Γ are true. Our use of ‘⊩’ follows [17]. The more common notation corresponding to this use of ‘⊩’ is ‘⊨’, but ‘⊨’ is being used to represent satisfaction of a formula at a given world. We thus avoid this notational overload.
Theorem III.1.
(Soundness) If Γ ⊢ φ, then Γ ⊩ φ.
Proof.
To begin we need the following lemma.
Lemma III.2.
All axioms are valid in all models.
Proof.
This result follows directly by inspection of the truth conditions, which amounts to tedious case checking. We thus set out just one axiom as an example. We choose Axiom A3 because it uses all of the linguistic expressions of our language and thus invokes multiple truth conditions.
Suppose a world satisfies the antecedent of Axiom A3 for given labels and SATAs. Then by TC9, it satisfies each of the antecedent conjuncts. Let the trusting principal be given, and consider any world accessible from the given world via that principal’s accessibility relation. By TC7, the formula trusted in the first such conjunct holds at every such accessible world. Thus, by TC7 applied also to the other antecedent conjuncts and then by clause (a) of TC5, the formula trusted in the consequent holds at every such accessible world. Thus, by a final invocation of TC7, the world satisfies the consequent.
∎
We now proceed to prove the theorem by showing that all the ways that a formula φ can follow from Γ preserve truth. This is also straightforward.
Case: φ is an axiom or a member of Γ. Then Γ ⊩ φ trivially.

Case: φ is obtained by modus ponens from ψ and ψ → φ. This follows by a straightforward argument via strong induction and the definition of the truth conditions. Suppose that soundness holds for all lines of a derivation up to the one in question, where φ occurs. Then, by the inductive hypothesis, ψ and ψ → φ are true at every world at which all members of Γ are true. So φ is true at every such world by TC8.

Case: φ is obtained by ∧-introduction from its two conjuncts. The argument is the same as for MP: suppose soundness holds for all lines of a derivation up to the one in question, where φ occurs. Then, by the inductive hypothesis, both conjuncts are true at every world at which all members of Γ are true. So φ is true at every such world by TC9.
∎
III-E2 Incompleteness
Our logic is intentionally incomplete, for practical considerations. As one example, our axioms are all based on premises it is legitimate to assume and on what can follow from them. We thus do not have to cope with negation in our logic or language. So if a formula happens to be false at every world of every model, then, by TC8, any implication with it as antecedent is true at every world. Taking two such formulae, both the implication from the first to the second and its converse are thus true at all worlds of all models. But both formulae cannot be derivable in our logic (indeed, neither is).
IV Our logic of identities versus other logics
We present a more detailed comparison below, but in this paragraph we list specific distinguishing features of our logic, making it not a simple extension (nor restriction) of others. Authentication logics have no means to reason about delegation or contextual trust. Logics for local names support delegation but not global entities with local properties. Delegation logics have also been proposed for authorization and access control. Sattestation is an assigned permission, so it is somewhat similar. But, unlike authorization, we place no restrictions on which principals can create a sattestation; there are, however, conditions for when to trust what is sattested. Also unlike either authentication logics or authorization logics, our logic supports authority independence: for a principal to trust that a may-be is an entity or an entity with particular properties, the principal must trust that the entity has itself concurred. Finally, logics of authentication and authorization generally address the distribution of session keys to appropriate principals or the propagation of authorizations to appropriate relying parties. The logic we present herein does not capture any sending or receiving of messages or acquiring of new information and is incompatible with those logics. Indeed, as reflected in our Axiom A11, it is not possible for any principal to say anything unless all principals know that she did.
Burrows, Abadi, and Needham in their logic of authentication [18] introduced a construct that formalizes basic trust as used by us and, as noted above, by Coker et al. In the language of BAN logic (as it is commonly known), ‘P controls X’ means that P has jurisdiction over the truth of X. They also have constructs for binding keys to principals, which allows them to have “message-meaning” rules in their logic that say, e.g., if A believes that K is B’s public key, and A sees a message X signed with the private cognate of K, then A believes B said X. A primary sort of statement made in messages that principals (e.g., key servers) send is about a session key being good for two principals to communicate. Thus, if the conclusion of an application of a message-meaning rule is that A believes a server S said good(K), and A also believes that S controls statements about the goodness of session keys, then their “jurisdiction” rule allows the derivation that A believes good(K). (The jurisdiction rule actually refers to what A believes S believes rather than what S says, but since BAN is also a logic for honest principals—who only say things that they believe—we elide this as an aside for our purposes.) Note also, however, that this still reflects A privately receiving a message about which she is forming a belief. Indeed, A only believes good(K) if she believes that awareness of K is limited to principals A and B (and S or another trusted server).
Our logic thus shares much with BAN logic. Though they do analyze a draft X.509 protocol from the eighties, their work is all pre-web and is thus not designed for reasoning about TLS (or SSL) authentication. More importantly, they explicitly recognize the complexity of delegation of jurisdiction and note that they do not attempt to reason about nested jurisdiction [18]. Further, though they can express jurisdiction over particular statements, they have no notion comparable to our contextual labels, much less one with structure such as our narrowing relation. BAN logic was reformulated and given a Kripke semantics by Abadi and Tuttle [19]. In this respect, our logic is closer to the Abadi–Tuttle reformulation than to the original BAN logic, although, as previously noted, our logic of trust is not a normal modal logic while their logic of belief is. In this sense we may be closer to the original BAN, which limited beliefs to things like key bindings, utterances made in encrypted messages, and jurisdiction, rather than representing beliefs about formulae containing logical connectives.
Note that our labels are meant to convey descriptive information about SATAs. They are simply not roles that can be authorized to perform an action. Trust in a sattestation might depend on the labels that the sattestation asserts apply to the sattestee, as well as labels already trusted to apply to the sattestor. But the only action SATAs can be “authorized” (more accurately ‘trusted’) to do is sattestation.
Various delegation logics have been published that capture forms of transitive or iterated trust, and that do express contextual names and contextual trust, for example as formalizations of reasoning about SPKI (Simple Public Key Infrastructure), SDSI (Simple Distributed Security Infrastructure), or their combination. (See [20] and related work referenced therein for papers on reasoning about delegation and local names.)
Our contextual labels are not local names (nor global names). Our labels are not names at all but descriptive labels for principals, more akin to predicates than individuals to which predicates apply. It would, nonetheless, be straightforward to express linked-name-based contextual trust in our logic if reflected in SATAs.
Also, logics for local names generally have linking axioms giving principals complete jurisdiction over their local namespace. But sattestor jurisdiction over a contextual label is not absolute. For example, even if a SATA is trusted as sattestor for a label, we cannot infer that a sattestee has that label unless the sattestee has also asserted of itself that it has that label, or a narrowing of it. (See Axiom A2 and associated discussion in Sec. III-C.) Conversely, a principal will also not automatically trust the label in a self-sattestation.
Leaving aside for the moment labels (besides the binding label), SATAs do provide a sort of local name. These are, however, composed by binding two different types of global names. Unlike local names, it is only the binding that is local, and a (local) sattestor cannot use locally either global name (domain or onion address) that someone else controls as part of its local namespace. And as with other labels, trust that a SATA has a label depends either on local trust regarding the SATA’s own claims about itself or on trust in another sattestor for that label together with the SATA having implicitly or explicitly asserted such a binding about itself.
While we do have something that is very roughly akin to authorization (sattestation), this is primarily in support of inferring trust that a may-be is an entity or is an entity with a particular property label. Access control and authorization logics primarily support inferences about whether a principal (or, in the case of trust management, a key) should have read or write access to a resource. Reasoning about the binding of an entity to a key (authentication) is generally not part of an authorization logic per se. Also, unlike authorizations, we place no restriction on who may perform a sattestation. And our logic is designed to work in environments where everyone has global access to everything (except private keys).
V Worked Example
In this section, we describe example scenarios wherein a SATA-aware browser receives sattestations that are JSON formatted as in the implementation described in [7] and, as a result, is able to infer trust in a SATA with specific labels based on initial trust assumptions.
Assume the principal in question is a browser with a WebExtension capable of processing sattestations and evaluating whether it has sufficient sattestations to trust various SATAs. We will assume for simplicity that when contacting a SATA, the browser receives from that SATA, or already possesses, all sattestations needed for such judgments. We leave as beyond our current scope the important questions of credential discovery and delivery for various practical scenarios, as well as any discussion of how and whether trust negotiation should be managed. Also out of scope is any discussion of how the browser stores or processes assumptions or inferences; when we state what the browser trusts, we intend by this only what it assumes or is entitled to infer based on our logic.
Assume that the browser already trusts that a SATA for the U.S. General Services Administration (GSA) is bound and is a delegatable sattestor for U.S. Government SATAs (label ‘USG’). Assume further that the browser receives the JSON formatted sattestations in Figures 1 and 2. (In these figures and throughout this section, actual onion addresses are replaced with short placeholders, e.g., “onion1”, in the interest of space. A similar convenience applies to sattestor signatures in the figures.)
{ "sattestation": { "sattestation_version":1, "sattestor_domain":"gsa.gov", "sattestor_onion":"onion1", "sattestor_refresh_rate":"7 days", "sattestees": [ { // bind domain to a self auth. address "domain": "justice.gov", "onion": "onion2", // onion address "labels": "USG:DOJ", "sattestor*(USG:DOJ)", "issued": "2023-12-08", "refreshed_on": "2024-01-25" }, { // bind domain to a self auth. adress "domain": "hhs.gov", "onion": "onion3", // onion address "labels": "USG:HHS", "sattestor*(USG:HHS)", "issued": "2023-12-08", "refreshed_on": "2023-01-25" } ... ] }, // signature by sattestor "signature": "sig-gsa-1" }
{ "sattestation": { "sattestation_version":1, "sattestor_domain":"justice.gov", "sattestor_onion":"onion2", "sattestor_refresh_rate":"7 days", "sattestees": [ { // bind domain to a self auth. address "domain": "fbi.gov", "onion": "onion4", // onion address "labels": "USG:DOJ:FBI", "sattestor*(USG:DOJ:FBI)", "issued": "2023-12-14", "refreshed_on": "2024-01-27" }, { // bind domain to a self auth. address "domain": "bja.ojp.gov", "onion": "onion5", // onion address "labels": "USG:DOJ:OJP:BJA", "issued": "2023-12-14", "refreshed_on": "2024-01-27" } ] }, // signature by sattestor "signature": "sig-justice-1" }
{ "sattestation": { "sattestation_version":1, "sattestor_domain":"fbi.gov", "sattestor_onion":"onion4", "sattestor_refresh_rate":"7 days", "sattestees": [ { // bind domain to a self auth. address "domain": "fbi.gov", "onion": "onion4", // onion address "labels": "USG:DOJ:FBI", "sattestor*(USG:DOJ:FBI)", "issued": "2023-12-20", "refreshed_on": "2024-01-20" } ] }, // signature by sattestor "signature": "sig-fbi-1" }
Suppose Alice is looking at an FBI webpage. Her browser, , is
configured with the trust assumptions we have described and has
received sattestations as in the
Figures 1–3.
We will use our axioms to derive from this
.
We begin by using an instance of Axiom A3. One of the statements available to the browser gives us the first conjunct of the axiom antecedent, and our initial trust assumption provides the second conjunct. From receipt of the Figure 1 sattestation, we get the third conjunct. From receipt of the Figure 2 sattestation, we also know that the Justice Department SATA sattests to the FBI SATA with the labels USG:DOJ:FBI and sattestor*(USG:DOJ:FBI); thus, by Axiom A8, we also have the fourth conjunct. We now have all the antecedents of Axiom A3, allowing us to apply the axiom and derive its consequent.
If the browser has also received a self-sattestation from an FBI SATA, as in Figure 3, then it also has the corresponding says statement. By an application of Axiom A2 similar to the application of Axiom A3 just illustrated, we can then derive the browser's trust in the FBI SATA with the label USG:DOJ:FBI.
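Operationally, the derivation just sketched can be pictured as a chain check over the parsed bindings. The sketch below, building on the hypothetical parse_sattestation above, asks whether a chain of sattestor*(...) delegations starting from an initially trusted delegatable sattestor reaches a target SATA and label. It treats narrowing simply as extension of colon-separated labels (e.g., USG:DOJ:FBI narrows USG), an assumption made only for illustration, and it stands in for the roles of Axioms A2, A3, and A8 rather than implementing the logic's semantics.

def narrows(narrow: str, broad: str) -> bool:
    """Illustrative narrowing check: USG:DOJ:FBI narrows USG:DOJ, which narrows USG."""
    return narrow == broad or narrow.startswith(broad + ":")

def chain_trusted(target, label, trusted_delegatable, bindings) -> bool:
    """Can a chain of sattestor*(...) delegations, starting from the initially
    trusted delegatable sattestors, ground trust that `target` has `label`?

    trusted_delegatable: dict mapping an initially trusted SATA, as a
        (domain, onion) pair, to the label for which it is trusted as a
        delegatable sattestor (the initial trust assumptions).
    bindings: Binding records parsed from the received sattestations.
    """
    # scopes[S] = labels for which SATA S is assumed or derived to be a
    # delegatable sattestor
    scopes = {sata: {lab} for sata, lab in trusted_delegatable.items()}
    changed = True
    while changed:
        changed = False
        for b in bindings:
            for lab in b.labels:
                if not (lab.startswith("sattestor*(") and lab.endswith(")")):
                    continue
                inner = lab[len("sattestor*("):-1]
                # Accept the delegation only if the sattestor is already
                # trusted for some scope that `inner` narrows.
                if any(narrows(inner, s) for s in scopes.get(b.sattestor, ())):
                    if inner not in scopes.setdefault(b.sattestee, set()):
                        scopes[b.sattestee].add(inner)
                        changed = True
    # `target` is trusted for `label` if some SATA trusted for a scope that
    # `label` narrows has sattested `label` for `target`.
    return any(
        b.sattestee == target
        and label in b.labels
        and any(narrows(label, s) for s in scopes.get(b.sattestor, ()))
        for b in bindings
    )

With the bindings of Figures 1–3 and the single assumption that the GSA SATA is a delegatable sattestor for USG, a call such as chain_trusted(("fbi.gov", "onion4"), "USG:DOJ:FBI", {("gsa.gov", "onion1"): "USG"}, bindings) returns True, mirroring the derivation above; the self-sattestation requirement of Axiom A2 is omitted from the sketch.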
Continuing to build on the above, our next example illustrates both the applicability of local axioms and how delegation can effectively “break” narrowing. Suppose that the browser also trusts Justice Department sattestations concerning which SATAs are civil liberties organizations and which SATAs are competent to issue sattestations thereof.
This is reflected in two instances of local-axiom schema 1, as noted in the discussion above leading into this worked example.
At the time of writing, the Dept. of Justice, Bureau of Justice Assistance (BJA) maintains on the web a list of privacy and civil liberties organizations [16]. To provide authenticated, hijack-resistant credentials both for the addresses of these organizations and for endorsements of them as competent privacy and civil liberties organizations, a BJA SATA could provide sattestations as in Figure 4.
{ "sattestation": { "sattestation_version":1, "sattestor_domain":"bja.ojp.gov", "sattestor_onion":"onion5", "sattestor_refresh_rate":"7 days", "sattestees": [ { // bind domain to a self auth. address "domain": "aclu.org", "onion": "onion6", // onion address "labels": "civil_liberties", "sattestor*(civil_liberties)", "issued": "2023-12-05", "refreshed_on": "2024-01-24" }, { // bind domain to a self auth. address "domain": "adl.org", "onion": "onion7", // onion address "labels": "civil_liberties", "sattestor*(civil_liberties)", "issued": "2023-12-05", "refreshed_on": "2024-01-24" }, { // bind domain to a self auth. address "domain": "eff.org", "onion": "onion8", // onion address "labels": "civil_liberties", "sattestor*(civil_liberties)", "issued": "2023-12-05", "refreshed_on": "2024-01-24" }, { // bind domain to a self auth. address "domain": "epic.org", "onion": "onion9", // onion address "labels": "civil_liberties", "sattestor*(civil_liberties)", "issued": "2023-12-05", "refreshed_on": "2024-01-24" }, { // bind domain to a self auth. address "domain": "splc.org", "onion": "onion10", // onion address "labels": "civil_liberties", "sattestor*(civil_liberties)", "issued": "2023-12-05", "refreshed_on": "2024-01-24" } ] }, // signature by sattestor "signature": "sig-bja-1" }
Note that, unlike the prior figures, Figure 4 does not include sattestee labels reflecting a narrowing of the labels for which the sattestor is assumed or derived to be trusted by the browser. Nonetheless, because of the local axioms mentioned above, it can ground a derivation of trust in the Figure 4 sattestees with the label civil_liberties, rooted in the browser's trust of the sattesting SATA.
And despite this break of narrowing, because that same sattestation by the Bureau of Justice Assistance SATA covers the EFF SATA with the label sattestor*(civil_liberties), the delegation could continue. For example, if the browser receives a (current) sattestation signed by this EFF SATA for a Tor Project SATA, as well as a current self-sattestation from that Tor Project SATA, then this is sufficient to ground the browser's trust in the Tor Project SATA with the label civil_liberties.
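In terms of the earlier sketch, the break of narrowing corresponds to seeding the chain check with a separate assumption rather than with anything obtained by narrowing, since civil_liberties does not narrow any USG label; the entries below, including the Tor Project SATA ("torproject.org", "onion11"), are hypothetical illustrations and are not part of the figures.

# Hypothetical continuation of the sketch above: trust grounded in the local
# axioms is modeled as an extra initial assumption about the BJA SATA.
assumptions = {
    ("gsa.gov", "onion1"): "USG",                  # initial trust assumption
    ("bja.ojp.gov", "onion5"): "civil_liberties",  # stands in for the local axioms
}
# If `bindings` contains the Figure 1-4 bindings plus a hypothetical EFF-signed
# sattestation binding ("torproject.org", "onion11") to the label
# "civil_liberties", then
#     chain_trusted(("torproject.org", "onion11"), "civil_liberties",
#                   assumptions, bindings)
# returns True (again omitting the self-sattestation requirement), mirroring
# the continuation described above.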
VI Properties of Trust Derivations
We prove properties of the trust derivations obtained with our logic. These properties describe conditions that must hold if a principal is able to derive trust in a (possibly labeled) SATA. We give separate results depending on the form of the label involved: the distinguished label that is incomparable with every other label (Prop. VI.1); a sattestor-free label (Prop. VI.2); a label of the form sattestor(·) (Prop. VI.3); or a label of the form sattestor*(·) (Thm. VI.4). Here, we present the proof of Prop. VI.3. The other proofs, presented in the extended version [21] of this paper, are similar.
Each of these results guarantees that one of a list of conditions must hold, including the possibility that a term of a certain form appears in the principal's set of initial trust assumptions. Other possibilities include that certain types of terms are derivable or that the principal's set of trusted instances of local-axiom schemata includes a certain schema instance. In the case of Thm. VI.4, our most complex result, there is a chain of triples, ending with the SATA and label in question, in which successive triples show how the principal's trust is transferred and whose initial triple shows that this trust is rooted in something local to the principal (either a local trust assumption or a local-axiom schema instance).
If the condition that holds is that certain terms are derivable, at least one such term is of a form to which we may again apply one of our results. This allows us to eventually show that the derivation is rooted in something explicitly trusted by the principal (as an element of its initial trust assumptions or of its local-axiom instances), and that (if applicable) certain sattestations were made by a chain of SATAs whose members the principal trusted as corresponding sattestors. We could state this formally as a combined result that gives all the possible conditions when starting from the derivability of an arbitrary trust statement; however, we think the lengthy case enumeration that this would entail would not provide any more insight or utility than what we state here.
VI-A Definitions and assumptions
We use to capture the trust that assumes rather than derives. We assume that the elements of are all s of the form . Considering our grammar, each of these is conjunction free.
We use for the set of local-axiom schema instances that assumes. Each element of is assumed to be of one of the forms given above (i.e., , , and ). We note that antecedent and conclusion of an instantiation of , and the terms in instantiations of , and , are all conjunction free.
We use for the set of says statements that sees and may use for derivations. Each element of is an and thus of the form , where is a . Recall that, using and Axiom A11, if , then we can derive . Considering our grammar, each element of is conjunction free.
We consider derivations of terms that use applications of and to the axioms and the elements of , , and . As observed in the definitions of these sets, each element of these sets is conjunction free. Thus, if a derivation of a term includes the application of to Axiom A12a (to extract one conjunct from a conjunction), there is also a derivation of that does not include any applications of to Axiom A12a. We thus assume that any derivation in which we are interested does not include any applications of to Axiom A12a.
Recall that is incomparable with every other label.
VI-B Formal results
The proofs of these results are similar to each other. Each fixes a derivation of a statement and iterates backwards through the derivation. Inspection of the various axioms and local axiom schemata identifies which instantiations of these could produce the statement we are then considering. If (Prop. VI.1), then the trust statement may be in ; otherwise, we immediately have that , for , or that another SATA sattests to . If is sattestor-free (Prop. VI.2), then moving backwards through the derivation narrows the label associated with until reaching one that is sattested to by another SATA, is produced by an instance of , or is in . If (Prop. VI.3), which might arise after working backwards from the sattestor-free case, then moving backwards through the derivation can broaden the label in the argument to ; this broader label may serve as an argument to a label bound to (in , produced by , or produced by ). A broader label may also serve as an argument to , either bound to or bound to a different SATA that sattests to a label for . Finally, if (Thm. VI.4), then there are conditions for initial trust and local schema analogous to the case. The difference here is that there can be a sequence of multiple sattestations in which each SATA sattests to a property of the next SATA in the sequence.
We now turn to the formal statements of our results that we have just sketched. The proof of Prop. VI.3 is presented here; the other proofs are in the extended version [21] of this paper.
Proposition VI.1.
Proof of Prop. VI.1.
Let and consider a derivation of .
1. If is produced by an application of to Axiom A1, then (because is incomparable to all other labels), the antecedent in this instance of the axiom was also , and we consider the previous step in .
2.
3. No other steps could produce in if it was not in , completing the proof. ∎
Proposition VI.2.
If, for , , is derivable by repeatedly applying Rules and to the axioms, the instances of the Local Axiom Schemata in , the elements of , and the elements of , then one of the following conditions holds:
1. is in for some ; or
2. For some and :
(a) ; and
(b) is derivable; or
3. For some and and such that :
(a) is derivable; and
(b) is derivable
Proof of Prop. VI.2.
Fix a derivation of . Start with . Inductively, we assume has been defined to be of the form for and , and we consider how it was produced in .
1. If , then we have satisfied Condition 1 and we terminate the induction.
2. If results from an application of to Axiom A1, then the antecedent of the axiom instance includes as a conjunct for some . We update to be this conjunct (setting ) and continue the induction.
3.
4. By inspection, and our assumption that contains no applications of Axiom A12a, there are no other ways to derive , completing the proof. ∎
Proposition VI.3.
If is derivable by repeatedly applying Rules and to the axioms, the instances of the Local Axiom Schemata in , the elements of , and the elements of , then one of the following conditions holds:
1. is in for some ; or
2. ; or
3. For some and :
(a) ; and
(b) is derivable; or
4. is derivable for some ; or
5. For some and :
(a) is derivable; and
(b) is derivable
Proof of Prop. VI.3.
Fix a derivation of using applications of Rules and to the axioms other than Axiom A12a, the instances of the Local Axiom Schemata in , the elements of , and the elements of .
We work inductively backwards through . We assume that for some , and we consider how was derived in .
1. If , then the claimed Condition 1 is satisfied, and we terminate the induction.
2.
3.
4. If is obtained by an application of to an instance of Axiom A9, then the antecedent of that axiom instantiation is for some and . We update to be this antecedent and continue the induction.
5. If is obtained by an application of to an instance of , then we have and the term must be derivable. This satisfies the claimed Condition 3, and we terminate the induction.
6. By inspection and our assumption that contains no applications of Axiom A12a, there are no other ways to derive , completing the proof. ∎
Theorem VI.4.
If is derivable by repeatedly applying Rules and to the axioms, the instances of the Local Axiom Schemata in , the elements of , and the elements of , then there is a sequence such that the following conditions hold:
1. ; and
2. One of the following holds:
(a) and ; or
(b) ; or
(c) For some and some :
i. ; and
ii. is derivable;
and
3. For :
(a) ; and
(b) is derivable;
and
4. For , can derive for some such that ; and
5.
Proof of Thm. VI.4.
Fix a derivation of using applications of Rules and to the axioms other than Axiom A12a, the instances of the Local Axiom Schemata in , the elements of , and the elements of . Set , where is the number of applications of Modus Ponens in . Let and let .
We work inductively backwards through . Assume that has been defined but that has not been defined for some , . Assume that has been defined as the statement for some and that appears as (a conjunct in) the precondition of an axiom to which is applied in .
1. If is not obtained in by applying , then it must be either the instantiation of or an element of ; it does not match the form of the other local-axiom schemata, any axiom (without the application of ), the () elements of , or the result of applying Rule . We set . If , we renumber the by subtracting from each index; this gives . This puts us in Condition 2a or 2b, and we stop the induction.
2. If is obtained in by applying , then it must be an application of to Axiom A3, Axiom A10, or ; by inspection, no other axioms (including instances of local-axiom schemata, and excluding Axiom A12 by the discussion above) can produce a statement including . We consider these three cases.
(a) If is obtained by applying to Axiom A3, then the precondition of the axiom includes a unique term of the form , where . We let and set . Note that, in the precondition of the axiom, we also have a unique conjunct of the form for some such that . We set and continue the induction.
(b) If is obtained by applying Modus Ponens to Axiom A10, then the precondition of the axiom includes a unique conjunct of the form for some . We set and continue the induction.
(c)
The proof cases that terminate the induction identify which of Conditions 2a, 2b, and 2c is satisfied. We consider the other conditions that are also claimed to hold.
Condition 1 is satisfied by construction. For Condition 3a, every update to the definition of ensures that , where is the smallest index such that has been defined. When a new is defined, it weakly broadens the then-current value of , which (as just noted) weakly broadens .
Condition 3b says that is derivable. Each is defined as the then-current value of ; that, in turn, is (a conjunct in) a precondition to which is applied in . Thus, the precondition is derivable in , and the conjunct defining may be derived from that precondition (or, as discussed above, directly).
VII Saturation Algorithm
We now describe a saturation algorithm that, given a principal and a finite initial set of statements, can be used to derive all sattestor-free trust statements derivable from using the axioms and rules. We assume the elements of are 1) s, and 2) s of the form (“trusts” statements). We characterize a finite superset of that the algorithm uses to construct derivations, and we show that this can be done without sacrificing completeness. This means that the number of statements the algorithm has to derive is a function of and of ’s local axioms, not of the set of all possible statements. In particular, this guarantees termination even if the set of labels is infinite.
We begin by making some necessary definitions.
Definition VII.1.
If is a set of s and/or s, we let be the set of labels appearing in , and be the set of identities appearing in . Given a label , and a set of labels , we define to be if or , where we will use to include both and . We then define to be . Given a set of labels , we define to be the set of all labels such that is in the closure above under of the set .
Let be a principal, and let be the set of axioms local to . Let be a collection of says and trusts statements. We define the local axiom label closure of , or ), to be . If and are two sets of says and trusts statements, we say that is -limited if ) and . For any two sets of statements and , we denote the maximal -limited subset of by ).
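To give a concrete flavor of this bookkeeping, the sketch below computes the sets of labels and identities appearing in a collection of parsed statements, the sets from which Definition VII.1 starts, together with one illustrative broadening closure under the colon-prefix narrowing assumed in the earlier sketches. The actual closure operator and the limited-subset construction of Definition VII.1 are richer than this; the point of the sketch is only that closing a finite label set in this way keeps the space of candidate statements finite.

def labels_of(bindings):
    """Set of labels appearing in a collection of parsed statements."""
    return {lab for b in bindings for lab in b.labels}

def identities_of(bindings):
    """Set of (domain, onion) identities appearing in the statements."""
    return {sata for b in bindings for sata in (b.sattestor, b.sattestee)}

def prefix_broadenings(labels):
    """Illustrative closure of plain (sattestor-free) labels under broadening:
    USG:DOJ:FBI also contributes USG:DOJ and USG.  A finite input yields a
    finite closure."""
    closed = set()
    for lab in labels:
        parts = lab.split(":")
        for i in range(1, len(parts) + 1):
            closed.add(":".join(parts[:i]))
    return closed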
We use the following proposition in proving Thm. VII.6:
Proposition VII.2.
Given any set of says and trusts statements, there is a -limited set such that a) if and only if b) any statement that can be derived from can be derived using a sequence of statements from .
We present three lemmata that we need to prove Prop. VII.2. The first lemma shows that any statement of the form that can be derived from a set is in ).
Lemma VII.3.
For any set of statements , let be derivable from . Then is in ).
Proof.
This follows from the fact that cannot be derived unless is present, where . That statement is not derivable from any other statement, hence it must be in . Thus is in ) ∎
The next two lemmata are used to help show that any statement of the form such that ), that is derivable using statements from , can be derived using statements from ).
Lemma VII.4.
For any set of statements , and any statement of the form that is derivable from , we have .
Proof.
This follows directly from the fact that the only way such a statement could be derived from is if the statement appears in . ∎
Lemma VII.5.
For any , let be a valid instantiation of either Axiom A2, of any global axiom A whose consequent is , or of a member of local axiom schema 1. Then, if the labels of the consequent are in , so are the labels of the antecedent.
Proof.
The proof follows by inspection of the axioms and local axiom schemata, and from the fact that, by Lemma VII.4, for any derivable statement , we have . ∎
We now turn to the proof of Prop. VII.2.
Proof of Prop. VII.2.
The b) implies a) direction follows directly from Lemma VII.3.
We prove the a) implies b) direction by assuming a that satisfies a), and then showing that, when unnecessary statements are removed, it satisfies b). To prove b), we begin by proving that any sattestor-free statement that is derivable from is -limited. Next, we need to show that if there is a derivation of a -limited statement , then there is a -limited derivation of .
We first prove that . We note that every SATA that appears in the conclusion of an axiom also appears in the antecedent, except for instances of the schemata 2 and 3. However, in order for those local axioms to prove anything except more instances of , we need to trust either that has said something or that somebody has said something about . In that case . Otherwise, we can remove these unnecessary statements.
The proof that is similar, but it is complicated by the use of the narrowing relation, so it takes a little more work. Let be the last statement in such that . We note that cannot be used as input to an instantiation of the antecedent of Axiom A2, or of any axiom whose consequent is of the form and contains in the antecedent, because, by Lemma VII.5, this would result in a consequent such that , so that , contradicting the assumption. Thus it can be removed without affecting the correctness of the derivation. ∎
Our saturation algorithm is as follows (a code-level sketch of its main loop appears after the listing): Let be a set of non-negated atomic statements. Let ), where is the set of all possible non-negated atomic statements.
1. Initialize: , , .
2. While , do:
3.
4.
5. , where is the set of all elements of derivable from in one step.
6. , ,
7. End while
8.
9.
10. Return Resulta, Resultb
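The following is a minimal sketch of the saturation loop under the simplifying conventions of the earlier sketches. Statements are modeled as opaque hashable values; derive_one_step is a hypothetical stand-in for one round of applying the rules to the axioms, the local-axiom instances, and the current set, restricted to the bounded statement space discussed above; and is_sattestor_free is a hypothetical predicate used to split the output. The sketch shows the loop structure, not the algorithm's exact bookkeeping.

def saturate(initial, derive_one_step, is_sattestor_free):
    """Repeatedly add everything derivable in one step from the current set of
    statements until nothing new appears, then split the result."""
    present = set(initial)
    while True:
        new = derive_one_step(present) - present   # statements new this round
        if not new:
            break
        present |= new
    result_a = present                                       # all derived statements
    result_b = {s for s in present if is_sattestor_free(s)}  # sattestor-free ones
    return result_a, result_b

In the Step 0–2 example that follows, initial would contain statements 01–03; the first pass through the loop would add the Step 1 statements (including 12 and 13), the second pass would add statement 21, and a final pass would find nothing new and exit. Because derive_one_step only produces statements from the finite bounded space, the loop terminates.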
We give an example of the saturation algorithm at work below. Step 0 gives the statements trusts initially, while Steps 1 and 2 give the statements derives at each respective step.
Step 0
- 01
- 02
- 03
Step 1
Step 2
- 21 (from statements 01, 12, and 13 via Axiom A2)
Theorem VII.6.
The saturation algorithm is sound (any statement in the saturation set Resultb is derivable from ), complete (if a sattestor-free statement is derivable from , it appears in Resultb), and terminating, with and .
Proof.
Soundness follows directly from the definition of derivation. Termination follows from the finiteness of , and the stated bounds follow from the fact that the same is true of .
For completeness, Prop. VII.2 gives us that any sattestor-free statement in derivable from is derivable using statements in , and that any sattestor-free statement derivable from is itself in . All that remains is to show that such a derivation can be constructed using the elements of Resulta.
We first note that all statements added by the algorithm are non-negated, so addition of a statement to Present does not invalidate any statement that could be derived from the statements in Present alone. Thus all we need to show is that Resulta contains every statement derivable from via statements in . We prove this by induction on , the minimum length of any derivation Der of from using only statements from , where the length of Der is defined to be the length of the longest path in Der, considered as a graph. First, if , then . Suppose now that if , then . Suppose that there is a statement such that and . That means that, if Der is a derivation of from using only statements from , then at least one statement used in Der is not in Resulta. However, by construction, , contradicting the induction hypothesis. ∎
VIII Conclusion
We have set out a logic of sattestation that is a foundation for reasoning about contextual roots of trust for internet domains that are SATAs. Previous work has presented logics for reasoning about contextual trust in the sense of trust in statements of any kind by a principal on a topic about which the principal is trusted to have expertise [22]. But we believe ours is the first logic to support reasoning about the identity and authentication of internet addresses grounded in the contextual trust of the attesting principal. Contextual trust is based on sattestations, an implemented technology for creating, distributing, and validating credentials issued by SATAs contextually trusted to issue them [7]. Sattestations have previously been implemented, but we introduce a logic to reason about them in the rich practical environments where contextual trust may depend on organizational or social structure and where trust might be delegated within such structures. Despite the richness and complexity of our logic, we prove that it is sound with respect to the set of sattestation credentials trusted by a principal at a given time. We also prove results about what trust statements can be derived from a set of initial trust assumptions.
References
- [1] K. Gaarder and E. Snekkenes, “Applying a formal analysis technique to the CCITT X.509 strong two-way authentication protocol,” Journal of Cryptology, vol. 3, no. 2, pp. 81–98, 1991.
- [2] M. Abadi, “On SDSI’s linked local name spaces,” Journal of Computer Security, vol. 6, no. 1-2, pp. 3–21, 1998.
- [3] D. Goulet, G. Kadianakis, and N. Mathewson, “Next-generation hidden services in Tor (Tor proposal 224),” https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.txt.
- [4] J. Appelbaum and A. Muffett, “The .onion special-use domain name,” https://tools.ietf.org/html/rfc7686, 2015.
- [5] P. Syverson, “The once and future onion,” in Computer Security – ESORICS 2017, S. N. Foley, D. Gollmann, and E. Snekkenes, Eds. Springer-Verlag, LNCS 10492, 2017, pp. 18–28.
- [6] P. Syverson and M. Traudt, “Self-authenticating traditional domain names,” in 2019 IEEE Secure Development (SecDev). IEEE, September 2019, pp. 147–160.
- [7] P. Syverson, M. Finkel, S. Eskandarian, and D. Boneh, “Attacks on onion discovery and remedies via self-authenticating traditional addresses,” in ACM Workshop on Privacy in the Electronic Society (WPES ’21). ACM, November 2021.
- [8] D. Goodin, “A DNS hijacking wave is targeting companies at an almost unprecedented scale: Clever trick allows attackers to obtain valid TLS certificate for hijacked domains,” Ars Technica, Jan. 10 2019.
- [9] G. Coker, J. Guttman, P. Loscocco, A. Herzog, J. Millen, B. O’Hanlon, J. Ramsdell, A. Segall, J. Sheehy, and B. Sniffen, “Principles of remote attestation,” International Journal of Information Security, vol. 10, no. 2, pp. 63–81, 2011.
- [10] R. Fagin and J. Y. Halpern, “I’m OK if you’re OK: On the notion of trusting communication,” Journal of Philosophical Logic, vol. 17, pp. 329–354, 1998.
- [11] Aquinas, On Being and Essence, ch. 4; see also https://aquinasonline.com/essence-and-existence/.
- [12] 502nd Air Base Wing Public Affairs, “CYBER HYGIENE: Beware of fake DoD SAFE site,” JBSA News, April 27 2023.
- [13] M. Traudt, “Self-authenticating domains,” https://github.com/pastly/satis-selfauth-domains.
- [14] M. Finkel, “Self-authenticating domains,” https://github.com/sysrqb/satis-selfauth-domains.
- [15] B. Christianson and W. S. Harbison, “Why isn’t trust transitive?” in Security Protocols: International Workshop, Cambridge United Kingdom, April 10-12 1996, Proceedings, M. Lomas, Ed. Springer-Verlag, LNCS 1189, 1996.
- [16] “Privacy and civil liberties organizations: Justice information sharing,” https://bja.ojp.gov/program/it/privacy-civil-liberties/agencies-org/pcl-organizations, Bureau of Justice Assistance, U.S. Dept. of Justice.
- [17] P. Syverson, Logic, Convention, and Common Knowledge. CSLI Lecture Notes, Volume 142, 2002.
- [18] M. Burrows, M. Abadi, and R. Needham, “A logic of authentication,” ACM Transactions on Computer Systems, vol. 8, no. 1, pp. 18–36, Feb 1990.
- [19] M. Abadi and M. R. Tuttle, “A semantics for a logic of authentication,” in Proceedings of the Tenth Annual ACM Symposium on Principles of Distributed Computing (PODC). ACM Press, August 1991, pp. 201–216.
- [20] J. Y. Halpern and R. van der Meyden, “A logical reconstruction of SPKI,” J. Computer Security, vol. 11, no. 4, p. 581–613, July 2003.
- [21] A. D. Jaggard, P. Syverson, and C. Meadows, “A logic of sattestation,” 2024, version with full proofs to appear on arXiv.
- [22] M. Dastani, A. Herzig, J. Hulstijn, and L. van der Torre, “Inferring trust,” in Computational Logic in Multi-Agent Systems, J. Leite and P. Torroni, Eds. Springer-Verlag, LNAI 3487, 2005, pp. 144–160.