Support + Belief = Decision Trust
Abstract
We present SBTrust, a logical framework designed to formalize decision trust. Our logic integrates a doxastic modality with a novel non-monotonic conditional operator that establishes a positive support relation between statements, and is closely related to a known dyadic deontic modality. For SBTrust, we provide semantics, proof theory and complexity results, as well as motivating examples. Compared to existing approaches, our framework seamlessly accommodates the integration of multiple factors in the emergence of trust.
1 Introduction
Decision trust is defined as the willingness to depend on something (or somebody) with a feeling of relative security, although negative consequences are possible [29, 39]. This notion plays a central role in computer-mediated interactions. For instance, in e-commerce, when there is an abundance of vendors in a marketplace offering nearly identical products, customers use trust to decide whom to buy from [50]. Similarly, in the next generation Internet of Things, smart sensors, edge computing nodes, and cloud computing data centers rely on trust to share services such as data routing and analytics [17]. In spite of their differences, in both scenarios, interactions are governed by trust evaluations that depend on various conditions, e.g., security-based policies, reputation scores, Quality of Service (QoS), and the trustee’s (avail)ability to behave as expected by the trustor.
These facts have driven the development of various formal models for assessing trust, see, e.g., [2, 11, 28]. Yet, each existing model relies on specific conditions for the emergence of trust. The conditions are selected depending on the trust model in use and then applied to a given domain as fundamental requirements enabling trust. However, this specialized approach fails to work in environments where many conditions contribute to the emergence of trust, see, e.g., the Forbes report [49]. This calls for trust models that can express multifaceted information combined to evaluate the presence or lack of trust in the environment [51]. To address this need, we introduce SBTrust, a logic that allows reasoning about decision trust relying on varied enabling conditions. In our logic, trusting a formula means that the trustor is willing to accept the formula as being true, although it might be false. This acceptance-based interpretation of trust is compatible with influential conceptual analyses of the notion of trust, which show that trusting a proposition boils down to using it as a premise in one's reasoning, even though it might be false [19].
Concretely, in SBTrust, trust is a derived operator whose constituents are a support connective and a belief operator (hence the logic's name). Whenever it is believed both that one formula supports another and that the first formula is true, the second formula is trusted relative to the first. The notion of support, establishing a form of positive influence between two statements, is modeled through a novel dyadic operator, read as: in the most likely scenarios in which the antecedent holds, the consequent holds as well. The operator yields a non-monotonic conditional sharing the properties of the KLM logic of preferential reasoning [31], except cautious monotonicity, and it enjoys additional properties. We characterize the support operator with semantics and proof theory; its axioms and rules turn out to axiomatize the flat (i.e., non-nested) fragment of Åqvist's system [6] – a foundational preference-based logic for normative reasoning. The notion of belief (what is considered to be true from a subjective standpoint) is modeled through a classical belief operator, given by a normal modal logic. Hence, our trust operator is built by combining these ingredients.
In the following, through a comparison with the literature, we motivate the introduction of yet another logical framework for decision trust. The key ingredient is the support operator, whose (undesired and) required properties are discussed in Sect. 2. For our logic we present the syntax (Sect. 3) and semantics (Sect. 4), and establish the connection between the support operator and Åqvist's system. Soundness, completeness, and complexity (for both the satisfiability and the model checking problem) of SBTrust are established in Sect. 5.
1.1 Decision trust: state of the art
Logical formalisms for decision trust can be classified into one of the following three paradigms [7]:
- Policy-based models, in which trust is established through the satisfaction of (security) policies, protocols and credentials;
- Reputation-based models, in which trust is derived from reputation scores aggregating past interactions, ratings and reviews;
- Cognitive models, in which trust emerges from the cognitive states (e.g., beliefs, goals, intentions) of the agents involved.
Although models that fall within one given paradigm are employed in real-world applications, see, e.g., [30], they tend to rely on partial features of trust or assume extremely specific conditions, and are thus limited. Policy-based models reduce trust to the use of (cryptographic) protocols and regulations, which fail whenever they circularly rely on some trust conditions: the problem of trusting the policy-makers [27]. Reputation-based models reduce trust to scores that often represent only a proxy for trust: the problem of the insufficiency of reputation for trust [11]. Unlike the other paradigms, cognitive models of trust can combine various ingredients that reflect agents' cognitive states, thus capturing a more nuanced notion of trust. However, those models rely on specific definitions of trust taken from cognitive science (see, e.g., the logical model of [25], which is inspired by the cognitive theory of trust studied in [12]) and specify necessary conditions for trust emergence. This creates a trade-off between the effectiveness in modeling various aspects of trust and the complexity of estimating all its constituting elements in real-world environments. The following example clarifies the difference between the various paradigms.
Example 1.
As a leading global online retailer, Amazon prioritizes building consumer trust to drive transactions on its platform. To this end, the company enforces various protocols and vendor rules. Imagine a customer assessing whether to trust the proposition that a given Amazon vendor is reliable. In a policy-based model of trust, the customer would only be able to trust this proposition under the condition that the vendor successfully fulfils Amazon's internal policies (e.g., it is a registered company). Yet, this approach has various drawbacks: the requirements might be tricked, giving the customer a false sense of security and exposing her to scams; building trust goes beyond regulations, and customers' trust seldom relies only on vendors abiding by legal and technical policies; and the problem of computing trust would only be shifted from the vendor to Amazon and the policies it enforces. In a reputation-based model, trust in the proposition can only depend on the vendor's positive reviews. However, this has two limitations: new vendors lack reviews, hindering their ability to establish trust, and reviews can be manipulated, leading to inaccurate trust assessments (e.g., in 2017, The Shed at Dulwich restaurant became London's number one restaurant on Tripadvisor despite serving fake food). Using a specific cognitive model of trust, it would be possible to compute trust estimations based on cognitive features of the agents involved (e.g., the intention of the vendor to provide a good service). However, by having to choose a specific cognitive model, the features that can be modelled as trust triggers are limited to the ones indicated by the model itself. Moreover, cognitive models often neglect features typical of the other paradigms, i.e., policies and reputation.
2 Support operator
We introduce the support operator and compare our modeling approach with other approaches used in the literature on non-monotonic reasoning, motivating the axiomatization we have chosen for our operator. Henceforth, we will shorten the reading of a support statement to: given the antecedent, the consequent is most likely.
2.1 Why yet another notion of support
Various notions of support, axiomatized as conditional operators, have been introduced in the literature; see, e.g., [15] for three potential readings of support as an evidence operator, or [16] for a thorough discussion of a dyadic operator for relevance. What all authors agree upon is that the operator should be non-monotonic: from the fact that a statement supports a conclusion, there is no reason why the conclusion should still be supported when additional information is added to the antecedent, since that additional information may undermine the previously established supporting statement. We also assume that support is non-monotonic. However, we make assumptions that distinguish our view from the existing ones. Specifically, contraposition and right weakening (if the consequent of a support statement classically entails a formula, then the antecedent also supports that formula) together give monotonicity for the operator. Unlike [14], to avoid monotonicity we give up contraposition (as motivated through Example 2) rather than right weakening, which is a reasonable assumption (see Sect. 2.2).
Example 2 (Contraposition and Modus Ponens).
Assume that being a good vendor supports that the vendor's products are delivered fast. By contraposition, this would imply that if the delivery is slow, then it is most likely that the vendor is not a good one, which should not hold, as the delay may depend on other reasons. For analogous reasons, a support statement together with the truth of its antecedent does not imply its consequent, meaning that Modus Ponens for the support operator does not hold.
To proceed methodically, we draw upon the axiomatizations of non-monotonic conditionals introduced in [31], commonly referred to as KLM systems, as they serve as cornerstones for non-monotonic reasoning. We start by illustrating with an example why the KLM principle of cautious monotonicity is unsuitable for formalizing our concept of support (see also Remark 2):
Example 3 (CM).
Suppose the customer receives a defective item from the vendor and may be refunded by the vendor. In the Amazon marketplace, receiving a defective item supports receiving a refund. We also have that, given a defective item, it is most likely that the vendor is not reliable. However, cautious monotonicity would allow us to conclude that, given a defective item together with a refund, it is still most likely that the vendor is not reliable, which does NOT hold. In fact, receiving a refund eases the customer into considering the vendor a good one, and this invalidates CM. Example 12 will show the failure of this inference in our logic.
2.2 Intuitive properties
We introduce the properties that we envision for the concept of support, illustrating their rationale by refining the scenario outlined in Example 1. As will be shown in Theorem 1, many of the properties discussed below are inter-derivable, leading to a more concise axiomatization for the support operator.
Henceforth, by axioms, we mean axiom schemata. The naming conventions for the considered properties are taken from the KLM systems [31] and [6, 41].
As the support operator applies to Boolean formulas, we expect all classical tautologies to be provable. Moreover, since any fact intuitively supports itself, the axiomatization should contain the identity axiom ID, stating that every formula supports itself.
The presence of this axiom highlights that the support operator does not establish a causal relation, see Remark 1. Moreover, we want our support system not to support contradictions: in essence, anything supporting a contradiction must be dismissed. This principle is reflected in the axiom ST.
Example 4.
Suppose the vendor is compliant with the "Amazon Seller Terms and Conditions" and has an average rating of 4.5 stars for her main product. Assume that compliance and the good rating together support that she is a good vendor. This implies that compliance alone supports the connection between having good reviews and being a good vendor, as expected by Amazon and their customers.
This example leads to the axiom SH (first introduced as a rule in [48]), which expresses the fact that deductions performed under strong assumptions may be useful even if the assumptions are not known facts.
It is quite natural to assume that if a statement supports two other statements, it supports their conjunction, as expressed by the axiom AND.
Note that due to ST, this axiom can never be used to derive that a non-contradictory statement supports a contradiction. We illustrate this with the following example, involving the lottery paradox [32], a stumbling block for default reasoning systems (see [43]).
Example 5.
The paradox states that in a fair lottery, it is rational to assume that each individual ticket is likely not to win. By allowing the inference from two statements being likely to their conjunction being likely, one concludes that two given tickets are both likely not to win; by iterating this reasoning, we can infer that it is likely that no ticket will win, which contradicts the fact that a winning ticket exists. The paradox does not apply to our support operator. Indeed, if we assume that, given the lottery, every ticket is most likely not to win, we can infer by AND that, given the lottery, the conjunction stating that no ticket wins is most likely. Since that conjunction is a contradiction (a winning ticket exists), using ST we could then derive a conclusion which is impossible; hence such support statements cannot be jointly maintained.
A support operator should also satisfy the CUT axiom, as illustrated by Example 6 below.
Example 6.
Suppose the vendor is authenticated on the Amazon marketplace. Obviously, an authenticated and compliant vendor is most likely a legitimate business. Moreover, due to Amazon's policies, compliance supports authentication. This implies that, given compliance alone, it is already most likely that the vendor is legitimate.
Example 7.
Suppose the vendor abides by the "Acting Fairly" policy of the "Amazon Code of Conduct". Consider the case in which we have both that compliance supports being a good vendor and that abiding by the "Acting Fairly" policy supports being a good vendor. These two facts imply that it should be sufficient to satisfy either condition to be considered a good vendor, i.e., their disjunction also supports being a good vendor.
This example leads to the axiom OR.
Example 8.
Suppose the vendor offers high QoS while selling a given product. Consider a situation in which she sells two distinct products using the same commercial infrastructure (same logistics, same customer care, and so on). Hence, it would be absurd for her to offer high QoS for only one of the two products: the two QoS statements are materially equivalent. Therefore, whatever the first statement supports should also be supported by the second, and vice versa.
This example leads to the axiom LL+.
The first rule we consider, RCK, is motivated by the following example:
Example 9.
Suppose the vendor prices her products well and the product she sells is labeled as an "Amazon's Choice" product. By Amazon's policy, a product that is priced well, delivered fast and well reviewed gets the "Amazon's Choice" label. Now, assume that being a good vendor most likely implies pricing products well, that it most likely implies delivering products fast, and that it most likely implies having good reviews. Hence, all of these supports together should imply that being a good vendor supports the "Amazon's Choice" label.
The resulting rule is RCK, of which Right Weakening (RW), see Remark 1, represents the particular case with a single premise.
Example 10.
Consider a vendor selling a product, and assume that two conditions about the transaction hold. If these conditions hold, then the customer will have a good purchasing experience. Given that this implication holds, under the same hypotheses it follows that having a negative purchasing experience supports a contradiction.
This example leads to a further rule, in which each support formula may occur either positively or negated. The rule is so named because, when considered alongside the other axioms and rules, it grants the support operator the properties of an absolute modality on the shallow fragment (see Theorem 3). As shown in Sect. 5, the rule lets the support operator behave locally like an absolute operator, playing a crucial role in the completeness proof.
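For reference, the schemata below record one standard way of writing the properties just discussed, using $\Rightarrow$ for support. These are the usual KLM/conditional-logic formulations matching the examples of this section, offered as a plausible reconstruction; the exact forms of ST, LL+ and the rule of Example 10 are specific to this paper and are not reproduced here.

```latex
\begin{align*}
\text{(ID)}  \quad & \alpha \Rightarrow \alpha \\
\text{(SH)}  \quad & (\alpha \wedge \beta \Rightarrow \gamma) \rightarrow (\alpha \Rightarrow (\beta \rightarrow \gamma)) \\
\text{(AND)} \quad & (\alpha \Rightarrow \beta) \wedge (\alpha \Rightarrow \gamma) \rightarrow (\alpha \Rightarrow \beta \wedge \gamma) \\
\text{(CUT)} \quad & (\alpha \Rightarrow \beta) \wedge (\alpha \wedge \beta \Rightarrow \gamma) \rightarrow (\alpha \Rightarrow \gamma) \\
\text{(OR)}  \quad & (\alpha \Rightarrow \gamma) \wedge (\beta \Rightarrow \gamma) \rightarrow (\alpha \vee \beta \Rightarrow \gamma) \\
\text{(RCK)} \quad & \text{from } \vdash (\beta_1 \wedge \dots \wedge \beta_n) \rightarrow \gamma
  \text{ infer } \vdash (\alpha \Rightarrow \beta_1) \wedge \dots \wedge (\alpha \Rightarrow \beta_n) \rightarrow (\alpha \Rightarrow \gamma) \\
\text{(RW)}  \quad & \text{the instance of RCK with } n = 1
\end{align*}
```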
Remark 1.
(Most of) the axioms and rules discussed above are present in well-known systems. For instance, the KLM logic of preferential reasoning, which interprets its dyadic operator as "the antecedent typically implies the consequent", contains the rule RW and the axioms ID, CUT, AND and OR. I/O logics [37] and their causal versions [10], whose dyadic operators are interpreted as a dyadic obligation and a causal relation, respectively, share RW, CUT, AND and OR (but notably not ID). Note that KLM and (deontic and causal) I/O logics also contain the rule LLE, which is weaker than our axiom LL+. An important difference between these logics and our support operator is the direct interaction of support formulas and propositional formulas enabled by the rule introduced after Example 10.
3 A logical framework for decision trust
We introduce the logic SBTrust for reasoning about decision trust. SBTrust is obtained by combining the support operator with a belief operator. For the former, we use a (subset of the) axioms and rules discussed above, and for the latter a KD4 modality (alternatively, one could use a KD45 modality, which, however, would include negative introspection as a side effect; see [26] for a discussion of why this might be undesirable in scenarios similar to the ones we discuss).
3.1 Syntax and axiomatization
The language of SBTrust consists of a countable set of propositional variables, the connectives of classical logic, the binary support operator, and the unary belief operator. It is defined by a two-layer grammar in which the support operator applies only to propositional formulas.
Throughout, some metavariables are reserved for propositional formulas and others for general formulas of SBTrust; the sets of formulas of SBTrust and of classical propositional logic are denoted accordingly. We identify theoremhood in SBTrust with derivability in its Hilbert-style system.
Definition 1.
SBTrust is obtained by extending any axiom system for classical propositional logic (we indicate its axioms by CL) and the Modus Ponens rule MP with:
- For the support operator: the axiom schemata ID, ST, SH and LL+, together with the rules RCK and the rule introduced after Example 10 (whose applications must ensure that the resulting formulas are within the language).
- For the belief operator: the axiom schemata K, D and 4 for belief, and the Necessitation rule for the belief operator.
Trust arises as a combination of support and belief.
Definition 2.
A formula is trusted relative to a supporting formula whenever it is believed both that the supporting formula supports it and that the supporting formula itself holds.
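To make the derived character of trust concrete, here is a minimal Python sketch that represents SBTrust formulas as plain data and defines trust as the abbreviation suggested by Definition 2 under our reading; the constructor names (`Var`, `Not`, `And`, `Sup`, `Bel`, `trust`) are illustrative choices of ours, not the paper's notation.

```python
from dataclasses import dataclass

# Propositional layer.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Not:
    arg: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

# Modal layer: dyadic support ("given antecedent, consequent is most likely")
# and unary belief.
@dataclass(frozen=True)
class Sup:
    antecedent: object
    consequent: object

@dataclass(frozen=True)
class Bel:
    arg: object

def trust(reason, statement):
    """Decision trust as a derived operator (cf. Definition 2, our reading):
    trusting `statement` on the basis of `reason` amounts to believing that
    `reason` supports `statement` and believing `reason` itself."""
    return And(Bel(Sup(reason, statement)), Bel(reason))
```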
The notion of derivation is the usual one (some care is required to maintain the restrictions on our language), as is the notion of derivability of a formula from a set of assumptions, which we denote in the standard way. Clearly, the deduction theorem holds in SBTrust. We now prove that all the axioms and rules stated in Sect. 2.2 are derivable in SBTrust.
Theorem 1.
The rules RW and LLE, as well as the axioms AND, CUT, and OR, are derivable in the system for the support operator.
Proof.
Trivial for RW, LLE and AND. The CUT axiom follows by applying our rules to formulas obtained by SH and CL. For axiom OR: by applying LLE we rewrite each disjunct of the antecedent; the claim then follows by two applications of SH, followed by CL, RCK, ID, AND and RW. Full proof in the appendix. ∎
Remark 2.
Another reason for rejecting CM is that, in conjunction with CUT and RW, it permits the derivation of REC (the derivation can be found in the appendix). REC is too strong for a support operator, since two statements that support each other do not necessarily support the same statements. For instance, the proposition that the vendor is a good one and the statement that the vendor always responds quickly to a customer's question may support each other, but they do not support the same statements: the latter may support that the vendor uses an AI tool to answer, while the former does not.
A strong connection holds between our support operator and the dyadic deontic logic introduced in [6] and axiomatized in [41], whose dyadic operator reads as "the consequent is obligatory under the condition expressed by the antecedent". Its axiomatization consists of:
- Axioms: the schemata listed in [41], including COK, Abs, Nec, D∗ and the S5 schemata (in particular 5) for the alethic modality;
- Rules: modus ponens MP and necessitation for the alethic modality.
The flat fragment of the language of this logic (i.e., the fragment in which the dyadic operator and the alethic modality apply only to propositional formulas) can be translated into our language as follows:
Definition 3.
Let a formula of the flat fragment be given. Its translation ∗ maps each dyadic obligation to the corresponding support statement, renders the alethic modalities through support formulas involving a contradiction, and commutes with the Boolean connectives.
We show that, with the exception of Abs and 5, all axioms and rules of Åqvist's system (within the flat fragment) are derivable in the axiomatization for the support operator. This establishes a first link between the support operator and Åqvist's system.
Theorem 2.
The translations ∗ of all axioms and rules of Åqvist's system – except Abs and 5 – are derivable in the axiomatization for the support operator.
Proof.
The claim for several of the axioms follows directly from the translation. For the remaining ones: the translation ∗ of D∗ is handled via its contraposition, which is an instance of AND; the translations of COK and Nec, as well as of the remaining axiom, are derived using the axioms and rules for the support operator; the Necessitation rule follows by modus ponens. Full proofs in the appendix. ∎
Remark 3.
The translations of the axioms Abs and 5 result in formulas containing nested applications of the support operator. In Sect. 4, we will see that the axioms and rules for the support operator axiomatize the shallow fragment of Åqvist's system. In this regard, our new rule, which does not correspond to any rule known in the literature, does not follow from the remaining axioms and rules for the support operator, and it is needed to derive (some) flat formulas which hold in Åqvist's system.
Example 11.
Let us see SBTrust at work. Assume that a customer believes the two support statements discussed in Example 3: that receiving a defective item supports receiving a refund, and that receiving a defective item supports the vendor not being reliable. If the customer receives a defective item, then from the belief in the first support statement and the belief in the defective item we derive trust in the refund. Similarly, we derive trust in the vendor not being reliable. Now, assume that the customer does indeed receive the refund.
We can show that it is not the case that the customer trusts, on the basis of the defective item together with the refund, that the vendor is not reliable. Using suitable abbreviations, a concise derivation from the hypotheses and the axioms of SBTrust yields the negation of the belief that the defective item together with the refund supports the vendor's unreliability; by the definition of trust, this implies that the corresponding trust does not hold. Note that the lack of axiom CM prevents deriving that the defective item together with the refund supports the vendor's unreliability, as shown in Example 12.
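Using the constructors sketched after Definition 2, the situation of this example can be written out as follows; the atom names are illustrative.

```python
defective = Var("defective_item")   # the customer receives a defective item
refund    = Var("refund")           # the customer is refunded
good      = Var("good_vendor")      # the vendor is reliable

# What the customer believes (the support statements of Example 3 plus the facts).
hypotheses = [
    Bel(Sup(defective, refund)),      # a defective item supports a refund
    Bel(Sup(defective, Not(good))),   # a defective item supports unreliability
    Bel(defective),                   # a defective item was indeed received
    Bel(refund),                      # the refund was indeed received
]

# Trust that follows from the hypotheses:
trust_refund     = trust(defective, refund)
trust_unreliable = trust(defective, Not(good))

# Trust that does NOT follow (cautious monotonicity is not available):
trust_after_refund = trust(And(defective, refund), Not(good))
```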
4 Semantics
For evaluating a support formula, we intuitively consider only the most likely scenarios in which its antecedent holds and check whether its consequent holds in those scenarios. This approach is inspired by preference-based logics [22], in which a conditional statement "if A then B" is interpreted as: among the "best" possible scenarios in which A is true, B is true as well. Hence the semantics for SBTrust is built on preference-based models [47] (for the support statements) and standard relational models (for the belief operator).
Also used in KLM logics and in Åqvist's system, preference-based models are triples consisting of a set of states, a valuation function, and a preference relation ordering the states. In our context, the preference relation states that one state is at least as likely as another (Åqvist's system uses a "better than" interpretation, while the KLM logic uses a "preferred to" interpretation). Although similar to both, our approach differs from them in two important aspects. First, our semantics includes a component for belief formulas. Second, instead of having a unique preference frame, in a Trust frame the set of states is partitioned into multiple preference frames; this allows us to consider different support systems within the same model. Note that, without partitioning, the unique support system would necessarily have to be believed. In our interpretation, a support statement is true in a state if, among all the states of that state's partition cell satisfying the antecedent, the most likely ones satisfy the consequent; the most likely states are the maximal elements, according to the cell's preference relation, of the set of all states of the cell satisfying the antecedent.
In line with Åqvist's systems, our semantics includes a limitedness condition: whenever some state of a partition cell satisfies a propositional formula, the set of its most likely states in that cell is non-empty. Limitedness allows us to express within the language when a formula is impossible in a partition cell. Hence, limitedness restricts SBTrust to supporting only non-contradictory options (with the trivial exception of a contradictory antecedent). Consequently, an agent will never place trust in a blatant contradiction (this is different from trusting contradictory statements separately, which is still possible in SBTrust). The definitions in this section characterize the frames and models on which our logic is based.
Definition 4 (Trust frame).
A Trust frame consists of:
- a serial and transitive Kripke frame, i.e., a set of states together with an accessibility relation for belief;
- a partition of the set of states into non-empty, pairwise disjoint cells whose union is the whole set;
- for each cell of the partition, a preference frame, i.e., a preference relation over the states of that cell.
Definition 5 (Trust model and truth conditions).
A Trust model consists of:
- a Trust frame;
- a valuation function assigning to each state the propositional variables true at it;
- for each cell of the partition, a preference relation fulfilling the limitedness condition: for every propositional formula satisfied by some state of the cell, the set of its most likely states in the cell is non-empty.
Truth of a formula at a state is defined as follows:
- a propositional variable is true at a state iff the valuation assigns it to that state;
- negation and conjunction are interpreted classically;
- a belief formula is true at a state iff its argument is true at every state accessible from it via the accessibility relation;
- a support formula is true at a state iff every most likely state, within the state's cell, among those satisfying the antecedent also satisfies the consequent, where the most likely states are the maximal elements, according to the cell's preference relation, of the set of states of the cell satisfying the antecedent.
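The truth conditions above can be turned into a small model checker over finite Trust models, reusing the formula constructors sketched after Definition 2. The encoding below (states as strings, the partition as a cell index per state, the preference and accessibility relations as sets of pairs) and the names `TrustModel`, `holds`, `most_likely` are our own; maximality is read here as "no strictly more likely competitor in the set".

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

State = str

@dataclass
class TrustModel:
    states: Set[State]
    belief_rel: Set[Tuple[State, State]]             # accessibility for belief (serial, transitive)
    partition: Dict[State, int]                      # cell index of each state
    preference: Dict[int, Set[Tuple[State, State]]]  # per cell: (s, t) means "s is at least as likely as t"
    valuation: Dict[State, Set[str]]                 # atoms true at each state

def most_likely(candidates: Set[State], pref: Set[Tuple[State, State]]) -> Set[State]:
    """Maximal elements of `candidates`: no strictly more likely competitor inside the set."""
    def strictly_better(t: State, s: State) -> bool:
        return (t, s) in pref and (s, t) not in pref
    return {s for s in candidates if not any(strictly_better(t, s) for t in candidates)}

def holds(m: TrustModel, w: State, formula) -> bool:
    """Evaluate a formula at state `w`, following the truth conditions of Definition 5."""
    if isinstance(formula, Var):
        return formula.name in m.valuation[w]
    if isinstance(formula, Not):
        return not holds(m, w, formula.arg)
    if isinstance(formula, And):
        return holds(m, w, formula.left) and holds(m, w, formula.right)
    if isinstance(formula, Bel):
        return all(holds(m, v, formula.arg) for (u, v) in m.belief_rel if u == w)
    if isinstance(formula, Sup):
        cell = m.partition[w]
        members = {s for s in m.states if m.partition[s] == cell}
        antecedent_states = {s for s in members if holds(m, s, formula.antecedent)}
        best = most_likely(antecedent_states, m.preference.get(cell, set()))
        return all(holds(m, s, formula.consequent) for s in best)
    raise TypeError(f"unknown formula: {formula!r}")
```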
Remark 4.
Three observations. First, the semantics for formulas containing the defined connectives (disjunction and implication) is obtained as usual. Second, we do not assume any property of the preference relations, in order to keep the model as general as possible. Third, a support formula holds at a state whenever the state belongs to a cell in which the corresponding support relation is true; an agent may or may not believe the support formula, independently of whether it actually holds. This gives us the possibility to capture an agent's misinformation.
The two notions of a formula being a semantical consequence of a set of formulas and of a formula being valid are defined as usual.
Example 12 (Ctd Ex. 11).
We use our semantics to show that the trust statement in question (i.e., the customer trusts that the vendor is not a good one on the basis of having received a defective item and a refund) does not follow from the beliefs and facts of Example 11. Consider a Trust model whose partition contains the current state together with a cell acting as the believed support system; in the graphical representation of the model, solid and dashed arrows represent the preference relation and the accessibility relation, respectively.
At the current state the customer believes both the defective item and the refund. Since the most likely states of the believed cell satisfying "defective item" also satisfy "refund" and "the vendor is not good", the customer believes the two support statements of Example 3, and the corresponding trust formulas hold. However, among the most likely states of the cell satisfying both "defective item" and "refund" there is one in which the vendor is good. Hence the support statement with the conjunctive antecedent fails in the believed cell, it is not believed, and the trust in the vendor's unreliability does not hold.
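Since the concrete states and valuation of the original figure are not reproduced here, the following is an illustrative countermodel in the same spirit, checked with the evaluator sketched after Definition 5: in the believed cell, the most likely defective-item state has a refund and an unreliable vendor, but among the most likely states with both a defective item and a refund there is one where the vendor is good.

```python
d, r, g = Var("d"), Var("r"), Var("g")   # defective item, refund, good vendor

m = TrustModel(
    states={"w0", "s1", "s2", "s3"},
    belief_rel={("w0", "s1"), ("s1", "s1")},            # serial and transitive
    partition={"w0": 0, "s1": 1, "s2": 1, "s3": 1},     # cell 1 is the believed support system
    preference={0: set(), 1: {("s1", "s2"), ("s2", "s3")}},
    valuation={"w0": set(),
               "s1": {"d", "r"},        # most likely d-state: refunded, vendor not good
               "s2": {"d"},             # defective item, no refund
               "s3": {"d", "r", "g"}},  # refunded AND good vendor, maximal among d-and-r states
)

assert holds(m, "w0", Bel(Sup(d, r)))                  # believed: defect supports refund
assert holds(m, "w0", Bel(Sup(d, Not(g))))             # believed: defect supports "not good"
assert holds(m, "w0", And(Bel(d), Bel(r)))             # defect and refund are believed
assert holds(m, "w0", trust(d, Not(g)))                # trust in "not good" given the defect...
assert not holds(m, "w0", trust(And(d, r), Not(g)))    # ...but not given defect and refund
```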
We now examine the relation between our support operator and Åqvist's system. Theorem 2 has already highlighted a syntactic connection (from the deontic system to our logic via the translation ∗ in Def. 3). Here, by using their semantics, we uncover a stronger tie. Recall that the system is sound and complete w.r.t. all preference models fulfilling the limitedness condition, see [41]; we denote its semantical consequence relation accordingly.
Theorem 3.
For any set of formulas and any formula in the language of Åqvist's system that do not contain nested modal operators, the formula is a semantical consequence of the set in that system if and only if its translation is a semantical consequence of the translated set over Trust models.
Proof.
Both directions proceed by contraposition.
We show that, given a Trust model invalidating the semantical consequence for support, we can find a preference model invalidating the semantical consequence in Åqvist's system. Assume the consequence fails over Trust models. Hence, there exists a Trust model and a state at which all translated premises hold while the translated conclusion does not. We cut the Trust model down to a preference model by restricting it to the partition cell of that state. By definition, the result is a preference model fulfilling the limitedness condition. (In [41], the limitedness condition is stated for every formula of the deontic language; since the truth sets of obligations and of the alethic modalities coincide with those of a tautology or a contradiction, our limitedness condition is equivalent to the one given there.) Observe that no formula among the premises or the conclusion contains the belief operator. Hence, the evaluation of these formulas at the chosen state coincides in the two models, and we conclude that the consequence fails in Åqvist's system as well.
Conversely, given a preference model invalidating the consequence in Åqvist's system, we provide a Trust model invalidating it over Trust models. Assume a preference model fulfilling the limitedness condition and a state at which the premises hold while the conclusion does not. We extend it into a Trust model whose partition consists of a single cell. By definition, the result is a Trust model. Again, no formula among the premises or the conclusion contains the belief operator, so the evaluation of these formulas at the chosen state coincides in the two models. This lets us conclude that the consequence fails over Trust models as well. ∎
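The two directions of the proof correspond to simple model transformations, sketched below on the encoding used earlier; the choice of the identity accessibility relation in the second function is one admissible way to obtain a serial and transitive relation and is our own assumption.

```python
def cell_as_preference_model(m: TrustModel, w: State):
    """Left-to-right direction: restrict a Trust model to the partition cell of `w`,
    forgetting the belief component."""
    cell = m.partition[w]
    members = {s for s in m.states if m.partition[s] == cell}
    return (members,
            m.preference.get(cell, set()),
            {s: m.valuation[s] for s in members})

def preference_model_as_trust_model(states, pref, valuation) -> TrustModel:
    """Right-to-left direction: embed a preference model as a single-cell Trust model."""
    return TrustModel(
        states=set(states),
        belief_rel={(s, s) for s in states},   # identity relation: serial and transitive
        partition={s: 0 for s in states},
        preference={0: set(pref)},
        valuation={s: set(v) for s, v in valuation.items()},
    )
```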
By the soundness and completeness of SBTrust w.r.t. Trust models, proved in the next section, it follows that our axioms and rules for the support operator axiomatize the flat fragment of Åqvist's system.
5 Soundness, completeness and complexity of SBTrust
We start with the soundness of SBTrust w.r.t. Trust models.
Theorem 4 (Soundness).
Given a set of formulas and a formula, if the formula is derivable from the set in SBTrust, then it is a semantical consequence of the set over Trust models.
Proof.
Proceed, as usual, by induction on the length of the derivation. We distinguish cases according to the last rule applied, showing the details of the cases for the axiom LL+ and our new rule.
LL+: Given a Trust model and a state satisfying the antecedent of the axiom, the limitedness assumption lets us conclude that the two formulas involved are equivalent in every state of the state's partition cell. Therefore their sets of most likely states coincide, which yields the consequent of the axiom.
Our new rule: Given a Trust model, we assume the premise of the rule to be true in every state of the model. Given a state at which the relevant (negated) support formulas hold, they hold throughout its partition cell, because, by virtue of the semantics of support, all the states of a given partition cell satisfy the same (negated) support formulas. Therefore the conclusion of the rule holds at that state, and hence in every state of the model. ∎
Completeness is shown via the canonical model construction, adapted to our framework from the technique outlined in [22]. The needed modifications are the following. First, we have to ensure that SBTrust allows us to derive all the axioms and rules required for the construction to proceed. Furthermore, unlike the models considered in [22], Trust models incorporate multiple preference frames, the belief operator, and the limitedness condition.
The modifications are implemented as follows. The required axioms and rules for the proof to go through are those of Åqvist's system without D∗ (which corresponds to limitedness). In Sect. 3.1 we have shown that, with the exception of 5 and Abs, the axioms of Åqvist's system are derivable in SBTrust. We prove that we can derive all the necessary properties of the canonical model even in the absence of axioms 5 and Abs, by relying on other rules of SBTrust, primarily on our new rule. The multiple preference frames are handled by partitioning the maximal consistent sets used in the canonical model construction into equivalence classes containing the same support formulas. The addition of belief is easy: we equip the canonical model with the accessibility relation in the usual Kripke fashion. Incorporating the limitedness condition poses the challenge of guaranteeing that, in every preference frame of our canonical model, each non-empty set of states satisfying a propositional formula contains a maximal element according to the preference relation. We address this by using the axiom ST.
Definition 6.
A set of formulas is called a maximal consistent set (MCS for short) if (a) it is consistent, and (b) for every formula of the language, either it or its negation belongs to the set.
Although not all the states in a model validate the same support formulas, we still need to make sure that all the states inside the same preference frame do. For that reason, we partition the maximal consistent sets into equivalence classes containing the same support formulas. We also define, for each MCS and each propositional formula, the set of all formulas which that formula supports according to the MCS.
Definition 7.
Given and we define:
-
•
(We write if ).
-
•
(We call -likely for if ).
Fact 1.
The relation of Definition 7 is an equivalence relation on the set of all MCSs; equivalence classes are written as usual.
Each equivalence class serves as the basis for a preference frame in our canonical model. The maximal consistent sets that are likely for a given formula are our candidates for the most likely states in the preference frame based on the corresponding equivalence class, as they contain all the formulas supported by that formula.
Before moving forward, we need a result that will be used repeatedly. It asserts that if a formula is not supported by another formula according to a given MCS, then we can construct an MCS with the same support formulas as the given one, containing the negation of the unsupported formula as well as all the propositions supported by the other formula.
Lemma 1.
Given an MCS and propositional formulas such that the first does not support the second according to the MCS, there exists an MCS in the same equivalence class which contains the negation of the second formula and is likely for the first.
Proof.
We show the consistency of the set If this holds, we can extend the set to an MCS which, by construction, is contained in . We prove the consistency of by contradiction. Assume . Hence, we can find
and such that with
The axioms and the deduction theorem yield From we get
and then with the help of Nec. This leads to because of RW, a contradiction to our assumption. ∎
We define the states and preference relations for our canonical model, indexed by representatives of the equivalence classes of the relation of Definition 7. The states are triples consisting of an MCS in the given equivalence class, a marker, and a propositional formula. The marker and the formula are used to pinpoint the maximal states according to the preference relation and to ensure that the maximal elements coincide with the states satisfying the supported formulas, see Cor. 2. For an MCS we also use the notation of Definition 7 for the set of formulas it supports.
Definition 8.
The preference relation between two states of the same preference frame is defined as follows: the first state is at least as likely as the second if and only if at least one of the following conditions holds:
- the MCS of the first state is likely for the formula component of the second state (under a side condition on the marker components);
- the marker and formula components of the two states match according to one of three case distinctions.
After having defined our preference relations, we show that the maximal elements among the states satisfying a formula are exactly those whose MCS is likely for that formula. This is what we are aiming for, as we want all such maximal elements to fulfil every formula supported by that formula in the given MCS. Furthermore, if an MCS is likely for the formula, then the corresponding state is a maximal element. These results are the core of our completeness proof, since they can be used to show that a support formula is true in a state of the canonical model if and only if the formula appears in the state's MCS. To prove this, we start with the following technical lemma, which establishes a connection between a support formula appearing in an MCS and the sets used in our construction.
Lemma 2.
Given an MCS and propositional formulas as above, the following two properties, referred to as (a) and (b) in the proof, hold:
Proof.
We consider one representative case; the other cases are similar.
(a) Given a maximal state, the construction of the preference relation and maximality imply that the MCS of the state is likely for the relevant formula, since no other condition of Definition 8 applies.
(b) Assume that there exists an MCS in the equivalence class which is likely for the formula but does not contain the formula in question. The corresponding states would then contradict maximality; hence no such MCS exists. In other words, every MCS in the equivalence class which is likely for the formula contains it. More specifically, the set obtained by adding the negation of the formula to the supported formulas and the support formulas of the class is inconsistent. Hence we can find finitely many support formulas witnessing this inconsistency; by applying the deduction theorem and our rules we obtain that the formula is supported, and finally that the corresponding support formula belongs to the MCS. ∎
Corollary 1.
Given an MCS and a propositional formula, the following hold:
Proof.
(a) By Lemma 2 we derive a support formula which, using a derivable axiom and Necessitation (see Theorem 2) together with a classical tautology, can be suitably strengthened. Take an arbitrary MCS of the class which is likely for the formula. Applying LL+ and then SH, and using Lemma 2 again, we obtain that the supported formulas belong to that MCS, and hence, by assumption, so does the formula in question. Since the MCS was arbitrary, the claim follows.
(b) Since the MCS is likely for the formula, axiom ID yields that it contains the formula itself. Taking an arbitrary MCS of the class and an arbitrary propositional formula, the first condition in the definition of the preference relation is fulfilled, which makes the corresponding state maximal. ∎
We can finally show that our construction works as intended, namely that every formula supported by a given formula according to an MCS is contained in all the maximal states of the corresponding equivalence class.
Corollary 2.
Given an MCS and two propositional formulas such that the first supports the second according to the MCS, the second formula is contained in the MCS of every maximal state for the first formula within the corresponding equivalence class.
Proof.
Given such a maximal state, Corollary 1 yields that its MCS is likely for the first formula; by assumption, the second formula therefore belongs to it. ∎
Now, we proceed to define the canonical model. To begin, let us fix a set consisting of one representative of each equivalence class.
Definition 9 (Canonical model).
The canonical model consists of:
- the set of states built, as described above, from the MCSs of the chosen representatives' equivalence classes, partitioned according to those classes;
- the preference relations defined as in Definition 8;
- the accessibility relation for belief, relating one state to another whenever every formula believed in the first state's MCS belongs to the second state's MCS;
- the valuation assigning to each state the propositional variables contained in its MCS.
We now begin the final steps of our completeness proof. First, we present the truth lemma, which states that a formula is true at a state in the canonical model if and only if the formula is an element of the maximal consistent set of this state. Following this, we demonstrate that the canonical model is a Trust model.
Lemma 3 (Truth lemma).
A formula is true at a state of the canonical model if and only if it belongs to the MCS of that state.
Proof.
Proceeds, as usual, by structural induction on the formula . See appendix. ∎
Lemma 4.
The canonical model of Definition 9 is a Trust model.
Proof.
First, we prove that, for each cell, the preference relation fulfils the limitedness condition. Take a cell and a propositional formula satisfied by some state of the cell; hence there exists an MCS of the cell containing the formula. ST tells us that the formula does not support a contradiction. Given that the set of formulas it supports is closed under consequences because of RW, we can conclude that this set is consistent. By Lemma 1, we can now extend it to an MCS of the same cell which is likely for the formula. Using part (b) of Corollary 1, we obtain that the corresponding state is maximal, which makes the set of most likely states non-empty. We use Lemma 3 to conclude that these states satisfy the formula. Since the cell and the formula were arbitrary, we are done.
Furthermore, we need to show that the accessibility relation is transitive and serial. Since the relation only depends on the underlying MCSs, we reason directly on them. Transitivity: assume two consecutive accessibility steps. For any believed formula, axiom 4B yields that its belief is itself believed; by the construction in Definition 9 the formula is then passed along both steps, and, since the formula was arbitrary, the composed pair belongs to the relation. Seriality: given an MCS, the rule of necessitation for belief and the axiom DB, together with the consistency of the MCS, guarantee that the set of formulas believed in it is consistent; because of the K axiom for belief, it is also closed under consequences. We can therefore extend it to a maximal consistent set which, by definition, is accessible, making the relation serial. ∎
With these two lemmas, we are now ready to prove the completeness of SBTrust. This will be done in the usual manner by showing that for every non-derivable formula, there exists a state in our canonical model where it does not hold.
Theorem 5 (Completeness).
Given a set of formulas and a formula, if the formula is a semantical consequence of the set over Trust models, then it is derivable from the set in SBTrust.
Proof.
By contraposition: if the formula is not derivable from the set, then the set together with the negation of the formula is consistent and can be extended to an MCS. By the Truth lemma, the corresponding state of the canonical model satisfies the set but not the formula, and by Lemma 4 the canonical model is a Trust model; hence the formula is not a semantical consequence of the set. ∎
We now discuss the complexity results for SBTrust. We split the problem into two parts: we reduce SBTrust by ignoring its support part and focusing on the Boolean and belief parts, and then we reinstate the support part, completing the proof.
Definition 10 (SBTrust-reduction).
The SBTrust-reduction is obtained by reducing the language of SBTrust and transforming a model of SBTrust in the following way:
- propositional and belief formulas remain unchanged;
- all subformulas of the form of a support statement are mapped to novel propositional variables taken from a fresh set, one variable per support formula;
- all the other formulas are adjusted accordingly;
- the valuation is extended to also include the novel variables in its domain, according to the following rule: the variable corresponding to a support formula is true exactly at the states at which that support formula is true.
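On the formula side, the reduction can be sketched as follows with the constructors introduced earlier: every support subformula is replaced by a fresh propositional variable, so that the result is a plain belief-and-Boolean formula. The fresh-variable naming scheme is our own.

```python
def reduce_formula(formula, table=None):
    """Replace every support subformula by a fresh propositional variable.
    Returns the reduced formula and the substitution table (support formula -> variable)."""
    if table is None:
        table = {}
    if isinstance(formula, Var):
        return formula, table
    if isinstance(formula, Not):
        sub, table = reduce_formula(formula.arg, table)
        return Not(sub), table
    if isinstance(formula, And):
        left, table = reduce_formula(formula.left, table)
        right, table = reduce_formula(formula.right, table)
        return And(left, right), table
    if isinstance(formula, Bel):
        sub, table = reduce_formula(formula.arg, table)
        return Bel(sub), table
    if isinstance(formula, Sup):
        # Support formulas have propositional arguments, so no recursion is needed.
        if formula not in table:
            table[formula] = Var(f"q_{len(table)}")
        return table[formula], table
    raise TypeError(f"unknown formula: {formula!r}")
```

For instance, reducing the trust formula of Definition 2 yields the conjunction of a belief in a fresh variable (standing for the support statement) and a belief in the supporting formula.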
Theorem 6.
The decision problem for SBTrust is PSPACE-complete.
Proof.
First note that the decision problem for an SBTrust-reduction is PSPACE-complete. This follows from the fact that the reduced formulas are modal belief formulas and that the models are serial and transitive relational models; therefore the results given in [24, 33, 40] apply to the reduction. Now, take the conjunction of all support formulas corresponding to the propositional variables that appear in the reduced formula and are mapped to true. To prove the theorem, we have to show that deciding the satisfiability of those support formulas is within PSPACE. Refining a result in [18], [45] shows that the satisfiability problem for the (full) Åqvist logic is NP-complete. The result follows by Theorem 3. ∎
Finally, we state the complexity of the model checking problem for SBTrust, i.e., the problem of deciding whether a formula is satisfied by a state of a model .
Theorem 7.
Given a model and a formula, consider the number of states of the model, the number of pairs in the accessibility relation, the number of support formulas occurring in the formula, and the size of the formula (its belief modalities, atomic propositions and connectives). The model checking decision problem can then be solved in time polynomial in these quantities; in particular, quadratic in the number of states and linear in the size of the formula.
Proof.
We use the splitting methodology: first considering only the modal part of the formula, and afterwards the support part. For the first stage, apply an SBTrust-reduction to the formula and the model. This takes a number of steps linear in the size of the formula (the propositional formulas are left unchanged and each support formula is substituted with a novel propositional variable). After the reduction, what is left is a model checking problem for a modal formula in a pointed Kripke relational model; as is well known, this problem can be solved in time linear in the size of the formula and of the model, see, e.g., [23]. For the second stage, translate the novel propositional variables back to their respective support formulas. Take the partition cell containing the state in which the formula must be evaluated. To evaluate a support formula, we need to compute the set of states of the cell satisfying its antecedent and the most likely states among them. The former is straightforward (each state was already labeled during the first stage); the latter requires at most quadratically many steps in the number of states (assuming the worst case in which the cell is the whole model): compare each state of the cell with all other states, keeping track of the states that are preferred to other states. This must be done for all support formulas. Summing the two stages gives the stated complexity of the model checking problem. ∎
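The two-stage procedure of the proof can be sketched as follows, combining the reduction with the evaluator from Sect. 4: the support formulas are first evaluated and recorded as labels on the states (the quadratic step), after which only a standard Kripke model-checking pass over the reduced formula remains. The function names are ours.

```python
def label_support_atoms(m: TrustModel, table) -> TrustModel:
    """First stage: extend the valuation with one fresh variable per support formula,
    true exactly at the states where that support formula holds."""
    labelled = {s: set(atoms) for s, atoms in m.valuation.items()}
    for sup_formula, fresh_var in table.items():
        for s in m.states:
            if holds(m, s, sup_formula):      # per cell, quadratic in the number of states
                labelled[s].add(fresh_var.name)
    return TrustModel(m.states, m.belief_rel, m.partition, m.preference, labelled)

def model_check(m: TrustModel, w: State, formula) -> bool:
    """Second stage: evaluate the reduced (belief-and-Boolean) formula on the labelled model."""
    reduced, table = reduce_formula(formula)
    return holds(label_support_atoms(m, table), w, reduced)
```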
6 Conclusions and Future Works
We have introduced SBTrust, a logical framework for reasoning about decision trust based on two pillar concepts, namely belief and support. For the latter, we defined a novel non-monotonic conditional operator, which axiomatizes the flat fragment of Åqvist's dyadic deontic logic and is based on preference semantics.
Because of the generality of the concepts above and the way in which they are combined to formalize trust, SBTrust can integrate elements from the different approaches mentioned in Sect. 1.1 within a unified framework. More precisely, we do not need to indicate specific necessary cognitive conditions for the emergence of trust. Instead, we provide a way to get trust out of the support that exists between different formulas, which can capture the influence of different factors on trust. We also maintain a reference to cognitive features by integrating the belief modality.
Following up on the discussion initiated in Example 1, we now illustrate how SBTrust can be used to combine different elements that contribute to establishing trust.
Example 13.
Assume that, to trust the vendor, the customer seeks to fulfil three conditions: i) a cognitive-based one; ii) a reputation-based one; iii) a policy-based one. The cognitive-based condition could be captured by the notion of occurrence trust given in [25], which depends on multiple cognitive features of the agents involved, such as the goals of the customer, the ability and intentions of the vendor, and the effects of the vendor's actions on the customer's goals. The reputation-based condition could be represented, e.g., by a proposition stating that the vendor has a high average rating, while the policy-based condition by a proposition stating compliance with Amazon's policies. Then, the trust formula whose supporting reason is the conjunction of the three conditions indicates that the customer trusts the vendor as a good one for that reason.
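With the constructors from Sect. 3, the combined condition of this example can be written directly; the atom names below (occurrence trust, rating, policy compliance) are illustrative placeholders for the three conditions.

```python
occ_trust   = Var("occurrence_trust")          # cognitive-based condition, in the sense of [25]
good_rating = Var("high_average_rating")       # reputation-based condition
policy_ok   = Var("amazon_policy_compliant")   # policy-based condition

reason = And(occ_trust, And(good_rating, policy_ok))
trusted_good_vendor = trust(reason, Var("good_vendor"))
# The customer trusts "good_vendor" for this reason: she believes the conjunction of the
# three conditions and believes that the conjunction supports being a good vendor.
```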
As emphasized by this example, in SBTrust, we have the flexibility to express several different conditions that specific models cannot capture alone. This flexibility comes, however, at the expense of reduced deductive power.
While our framework is primarily designed to capture decision trust, we claim that it could be versatile enough to encompass other notions of trust. In particular, as discussed in [21], many alternative definitions of trust in heterogeneous domains, such as reliability trust [20] in the setting of economics, are based on the use of explicit supportive information. In addition, it is interesting to note that our approach has strong similarities with argumentation-based formalizations of trust [5, 52]. In the future, we intend to explore further potential applications of SBTrust and of the support operator. From the technical point of view, we plan to study the derived trust operator in isolation and identify its properties independently of support and belief. Moreover, we intend to extend SBTrust by: allowing nesting of the operators, using beliefs within support statements; providing a proof calculus, along the lines of that in [13], equipped with a prover; and moving towards a quantitative, dynamic, and multi-agent setting.
References
- [1] Martín Abadi. Logic in access control (tutorial notes). In Alessandro Aldini, Gilles Barthe, and Roberto Gorrieri, editors, Foundations of Security Analysis and Design V: FOSAD 2007/2008/2009 Tutorial Lectures, pages 145–165. Springer Berlin Heidelberg, 2009.
- [2] A. Aldini and M. Tagliaferri. A taxonomy of computational models for trust computing in decision-making procedures. In Audun Jøsang, editor, European Conference on Information Warfare and Security, pages 571–578, 2018.
- [3] Alessandro Aldini, Gianluca Curzi, Pierluigi Graziani, and Mirko Tagliaferri. A probabilistic modal logic for context-aware trust based on evidence. International Journal of Approximate Reasoning, 169, 2024.
- [4] Alessandro Aldini and Mirko Tagliaferri. Logics to reason formally about trust computation and manipulation. In Andrea Saracino and Paolo Mori, editors, Emerging Technologies for Authorization and Authentication, pages 1–15. Springer, 2020.
- [5] Leila Amgoud and Robert Demolombe. An argumentation-based approach for reasoning about trust in information sources. Argument & Computation, 5(2-3):191–215, 2014.
- [6] L. Åqvist. Deontic logic. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic: Volume II, pages 605–714. Springer, Dordrecht, 1984.
- [7] Donovan Artz and Yolanda Gil. A survey of trust in computer science and the semantic web. Journal of Web Semantics, 5(2):58–71, 2007. Software Engineering and the Semantic Web.
- [8] Abdessamad Benlahbib and El Habib Nfaoui. AmazonRep: A reputation system to support Amazon’s customers purchase decision making process based on mining product reviews. In 2020 Fourth International Conference On Intelligent Computing in Data Sciences (ICDS), pages 1–8, 2020.
- [9] Bruno Blanchet. Security protocol verification: Symbolic and computational models. In Pierpaolo Degano and Joshua D. Guttman, editors, Principles of Security and Trust, pages 3–29. Springer, 2012.
- [10] Alexander Bochman. A logic for causal reasoning. In Georg Gottlob and Toby Walsh, editors, Proceedings of IJCAI-03, pages 141–146. Morgan Kaufmann, 2003.
- [11] Diego De Siqueira Braga, Marco Niemann, Bernd Hellingrath, and Fernando Buarque De Lima Neto. Survey on computational trust and reputation models. ACM Comput. Surv., 51(5), nov 2018.
- [12] C. Castelfranchi and R. Falcone. Trust Theory: A Socio-Cognitive and Computational Model. John Wiley and Sons, 2010.
- [13] Agata Ciabattoni and Dmitry Rozplokhas. Streamlining input/output logics with sequent calculi. In Pierre Marquis, Tran Cao Son, and Gabriele Kern-Isberner, editors, Proceedings of the 20th International Conference on Principles of Knowledge Representation and Reasoning, KR 2023, Rhodes, Greece, September 2-8, 2023, pages 146–155, 2023.
- [14] Vincenzo Crupi and Andrea Iacona. The evidential conditional. Erkenntnis, 87:2897–2921, 2022.
- [15] Vincenzo Crupi and Andrea Iacona. Three ways of being non-material. Studia Logica, 110:47–93, 2022.
- [16] James P. Delgrande and Francis Jeffry Pelletier. A formal analysis of relevance. Erkenntnis (1975-), 49(2):137–173, 1998.
- [17] Lidia Fotia, Flávia Delicato, and Giancarlo Fortino. Trust in edge-based internet of things architectures: State of the art and research challenges. ACM Comput. Surv., 55(9), 2023.
- [18] Nir Friedman and Joseph Y. Halpern. On the complexity of conditional logics. In Jon Doyle, Erik Sandewall, and Pietro Torasso, editors, Proceedings of KR’94, pages 202–213. Morgan Kaufmann, 1994.
- [19] Karen Frost-Arnold. The cognitive attitude of rational trust. Synthese, 191(9):1957–1974, 2014.
- [20] Diego Gambetta. Can we trust trust? In Diego Gambetta, editor, Trust: Making and Breaking Cooperative Relations, pages 213–237. Blackwell, 1988.
- [21] Diego Gambetta, editor. Trust: Making and Breaking Cooperative Relations. Blackwell, 1988.
- [22] Davide Grossi, Wiebe van der Hoek, and Louwe B Kuijer. Reasoning about general preference relations. Artificial Intelligence, 313:103793, 2022.
- [23] Joseph Y. Halpern. Reasoning about Knowledge. The MIT Press, 2003.
- [24] Joseph Y. Halpern and Yoram Moses. A guide to completeness and complexity for modal logics of knowledge and belief. Artificial Intelligence, 54(3):319–379, 1992.
- [25] Andreas Herzig, Emiliano Lorini, Jomi F. Hübner, and Laurent Vercouter. A logic of trust and reputation. Logic Journal of the IGPL, 18(1):214–244, 12 2009.
- [26] Lloyd Humberstone. Philosophical Applications of Modal Logic. College Publications, 2016.
- [27] Christian Damsgaard Jensen. The importance of trust in computer security. In Jianying Zhou, Nurit Gal-Oz, Jie Zhang, and Ehud Gudes, editors, Trust Management VIII, pages 1–12. Springer, 2014.
- [28] Audun Jøsang. Trust and reputation systems. In Alessandro Aldini and Roberto Gorrieri, editors, Foundations of Security Analysis and Design IV, pages 209–245. Springer, 2007.
- [29] Audun Jøsang, Roslan Ismail, and Colin Boyd. A survey of trust and reputation systems for online service provision. Decision Support Systems, 43(2):618–644, 2007. Emerging Issues in Collaborative Commerce.
- [30] Sepandar D Kamvar, Mario T Schlosser, and Hector Garcia-Molina. The eigentrust algorithm for reputation management in p2p networks. In Proceedings of the 12th International Conference on World Wide Web, pages 640–651, 2003.
- [31] Sarit Kraus, Daniel Lehmann, and Menachem Magidor. Nonmonotonic reasoning, preferential models and cumulative logics. Artif. Intell., 44(1):167–207, 1990.
- [32] Henry Ely Kyburg. Probability and the logic of rational belief. Wesleyan University Press, 1961.
- [33] Richard E. Ladner. The computational complexity of provability in systems of modal propositional logic. SIAM J. Comput., 6:467–480, 1977.
- [34] Christopher Leturc and Grégory Bonnet. A normal modal logic for trust in the sincerity. In International Foundation for Autonomous Agents and Multiagent Systems, AAMAS ’18, page 175–183, 2018.
- [35] Fenrong Liu and Emiliano Lorini. Reasoning about belief, evidence and trust in a multi-agent setting. In Bo An, Ana L. C. Bazzan, João Leite, Serena Villata, and Leendert W. N. van der Torre, editors, PRIMA 2017: Principles and Practice of Multi-Agent Systems, volume 10621 of LNCS, pages 71–89. Springer, 2017.
- [36] Yining Liu, Keqiu Li, Yingwei Jin, Yong Zhang, and Wenyu Qu. A novel reputation computation model based on subjective logic for mobile ad hoc networks. Future Generation Computer Systems, 27(5):547–554, 2011.
- [37] David Makinson and Leendert van der Torre. Input/output logics. J. of Philosophical Logic, 29(4):383–408, 2000.
- [38] Stephen Marsh. Formalising Trust as a Computational Concept. University of Stirling, 1994.
- [39] D. Harrison McKnight and Norman L. Chervany. The meanings of trust. In University of Minnesota MIS Research Center Working Paper series, 2000.
- [40] Linh Anh Nguyen. On the complexity of fragments of modal logics. In Advances in Modal Logic, 2004.
- [41] Xavier Parent. Completeness of Åqvist’s systems E and F. The Review of Symbolic Logic, 8(1):164–177, 2015.
- [42] I. Pinyol, J. Sabater-Mir, P. Dellunde, and M. Paolucci. Reputation-based decisions for logic-based cognitive agents. Autonomous Agents and Multi-Agent Systems, 24:175–216, 2012.
- [43] David Poole. What the lottery paradox tells us about default reasoning. In Ronald J. Brachman, Hector J. Levesque, and Raymond Reiter, editors, Proceedings of the 1st International Conference on Principles of Knowledge Representation and Reasoning (KR’89). Toronto, Canada, May 15-18 1989, pages 333–340. Morgan Kaufmann, 1989.
- [44] G. Primiero. A logic of negative trust. Journal of Applied Non-Classical Logics, 30(3):193–222, 2020.
- [45] Dmitry Rozplokhas. LEGO-like small-model constructions for Åqvist’s logics. Proceedings of AIML 2024, 2024.
- [46] K. E. Seamons, M. Winslett, L. Yu, R. Jarvis, A. Hess, J. Jacobson, B. Smith, and T. Yu. Negotiating trust on the web. IEEE Internet Computing, 6(06):30–37, nov 2002.
- [47] Yoav Shoham. A semantical approach to non-monotonic logics. In Proceedings of the Second Annual IEEE Symposium on Logic in Computer Science (LICS 1987), pages 275–279. IEEE Computer Society Press, 1987.
- [48] Yoav Shoham. Reasoning about Change. The MIT Press, 1988.
- [49] Anthony Smith. What Amazon key teaches us about trust in customer relationships. Forbes., November, 2017.
- [50] Marzieh Soleimani. Buyers’ trust and mistrust in e-commerce platforms: a synthesizing literature review. Inf. Syst. E-bus. Manag., 20(1):57–78, March 2022.
- [51] Mirko Tagliaferri and Alessandro Aldini. From belief to trust: A quantitative framework based on modal logic. Journal of Logic and Computation, 32(6):1017–1047, 03 2022.
- [52] Serena Villata, Guido Boella, Dov M. Gabbay, and Leendert van der Torre. A socio-cognitive model of trust using argumentation theory. International Journal of Approximate Reasoning, 54(4):541–559, 2013. Eleventh European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2011).
Appendix A Technical appendix: proofs of results
Theorem 1.
The rules RW and LLE, as well as the axioms AND, CUT, and OR, are derivable in the system for the support operator.
Trivial for RW and LLE.
Axiom AND:
Axiom CUT:
Axiom OR:
Starting from a classical tautology (CL) and applying LLE, we rewrite the first disjunct of the antecedent; similarly, we can rewrite the second. Now, assuming the two support statements of the hypothesis of OR, the claim follows by two applications of SH, followed by CL, RCK, ID, AND and RW.
∎
Remark 2.
We show that CM, in conjunction with CUT and the rule RW, permits the derivation of REC:
Applying CM and CUT (together with RW) in one direction, and then again in the other, yields the two support statements whose conjunction is REC.
∎
Theorem 2.
All axioms and rules of Åqvist's system – except Abs and 5 – are derivable in the axiomatization for the support operator.
The claim for several of the axioms follows directly from the translation. For the remaining axioms: the translation of axiom D∗ in terms of support is handled via its contraposition, which is an instance of AND.
Case :
Case COK:
Case Nec:
Case Necessitation: if we assume that the formula is provable, then we can derive its translation via the following derivation:
∎
Lemma 3 (Truth lemma).
A formula is true at a state of the canonical model if and only if it belongs to the MCS of that state.
The proof proceeds by structural induction on the formula.
If the formula is a propositional variable, the claim follows directly from the definition of the valuation in the canonical model. Assuming the induction hypothesis for every subformula lets us handle negation and conjunction.
Let the formula be a support formula. We start with the case in which it belongs to the MCS of the state. Take an arbitrary most likely state, within the state's cell, among those satisfying the antecedent; by the induction hypothesis, the antecedent belongs to its MCS. Because the support formula belongs to the original MCS, Corollary 2 implies that the consequent belongs to the MCS of that most likely state. Again by using the induction hypothesis, we conclude that the consequent holds there, which means that the support formula holds at the state.
For the other direction, we assume that the support formula does not belong to the MCS. In this case, we have to find a most likely antecedent-state which does not satisfy the consequent. The assumption lets us derive the negation of the support formula in the MCS. By the use of Corollary 2, we obtain a most likely state whose MCS contains the antecedent but not the consequent. By the induction hypothesis, that state satisfies the antecedent but not the consequent, which means that the support formula fails at the state.
Let the formula be a belief formula. Then both directions of the claim follow directly from the induction hypothesis and the construction of the accessibility relation in Definition 9. ∎