
π’œβ€‹π’«β€‹β„β€‹π’œ\mathscr{APIA}: An Architecture for Policy-Aware Intentional Agents

John Meyer, Miami University, Ohio, USA ([email protected])    Daniela Inclezan, Miami University, Ohio, USA ([email protected])
Abstract

This paper introduces the APIA architecture for policy-aware intentional agents. These agents, acting in changing environments, are driven by intentions and yet abide by domain-relevant policies. This work leverages the AIA architecture for intention-driven intelligent agents by Blount, Gelfond, and Balduccini. It expands AIA with notions of policy compliance for authorization and obligation policies specified in the language AOPL by Gelfond and Lobo. APIA introduces various agent behavior modes, corresponding to different levels of adherence to policies. APIA reasoning tasks are reduced to computing answer sets using the clingo solver and its Python API.

1 Introduction

This paper introduces APIA (short for "Architecture for Policy-aware Intentional Agents"), an architecture for intentional agents that are aware of policies and abide by them. It leverages and bridges together research by Blount, Gelfond, and Balduccini [6, 7] on intention-driven agents (AIA, "Architecture for Intentional Agents"), and work by Gelfond and Lobo [13] on authorization and obligation policy languages (AOPL, "Authorization and Obligation Policy Language"). Both AIA and AOPL are expressed in action languages [12] that have a seamless translation into logic programs [11], and are implemented in Answer Set Programming (ASP) [15].

With the current rise of autonomous systems, the question arises of how best to construct agents that are capable of acting in a changing environment in pursuit of their goals, while also abiding by domain-relevant policies. For instance, we would want a self-driving car to not only take us to our destination but also do so while respecting the law and cultural conventions. This work is a first step in this direction. As in Blount et al.'s work, we focus on agents:

  • whose environment (including actions and their effects) and mental states can be represented by a transition diagram. Physical properties of the environment and the agent's mental states form the nodes of this transition diagram. Arcs from one state to another are labeled by actions that may cause these transitions;

  • who are capable of making correct observations, remembering the domain history, and correctly recording the results of their attempts to perform actions;

  • who are normally capable of observing the occurrences of exogenous actions; and

  • whose knowledge bases may contain some pre-computed plans for achieving certain goals, which we call activities, while agents can compute other plans on demand.

Additionally, these agents possess specifications of authorization and obligation policies and have access to reasoning mechanisms for evaluating the policy compliance of their actions.

Thus, in our work, we extend the AIA architecture introduced by Blount et al. [6, 7] with the notion of policy compliance. AIA agents are Belief-Desire-Intention (BDI) agents [17] who are driven by goals, have intentions that persist, and can operate with pre-computed activities. However, they are oblivious to the different nuances of authorization and obligation policies (e.g., actions whose execution is permitted vs. not permitted), and policy compliance. These intentional agents have only two behavioral modes: either ignore policies altogether (when policies are not represented by any means in the agent's knowledge base) or blindly obey policies that disallow certain actions from being performed, even if this may be detrimental (when policies are represented as actions that are impossible to execute). Instead, we introduce a wider range of possible behaviors that may be set by an agent's controller (e.g., prefer plans that are certainly policy-compliant to others that are only possibly policy-compliant, but if no plans of the former type exist, accept plans of the latter kind).

In formalizing the different policy compliance behavior modes, we rely on the language AOPL [13] for specifying authorization and obligation policies, and priorities between such policies. While Gelfond and Lobo specify reasoning algorithms for determining the degree of policy compliance for a sequence of actions, this is done from the perspective of a third-person observer analyzing the actions of all agents after they already occurred. Instead, our work focuses on an agent making decisions about its own future actions. As a result, various courses of action may be compared to determine the most policy-compliant one according to the behavior mode set by the agent's controller. Moreover, our research addresses interactions between authorization and obligation policies that were not discussed in the work on AOPL.

There have been several other attempts to enable agents to reason over various kinds of policies. However, these involve reasoning over access control policies only [8, 18, 2] and a few utilize ASP as a reasoning tool for this purpose [4, 5]. Access control policies are more restrictive than the kinds of policies an agent using AOPL can reason over. The work that is closest to our goal is the PDC-agent by Liao, Huang, and Gao [14]. The PDC architecture extends the BDI architecture with a policy- and contract-aware methodology the authors call BGI-PDC logic. A PDC-agent is an event-driven multi-component framework which allows for controlled and coordinated behavior among independent cooperative agents. Liao et al. use policies to control agent behavior, and contracts as a mechanism to coordinate actions between agents. This architecture was later extended to support reasoning over social norms (the NPDC-agent) [16]. The PDC-agent architecture is defined as a 7-tuple of the following components: (Event Treating Engine, Belief Update, Contract Engine, Policy Engine, Goal Maintenance, Plan Engine, Plan Library) [14]. A major distinction of the PDC-agent architecture is that it supports coordination among multiple agents. This is beyond the scope of our work. Both AIA and APIA focus on an agent with individual goals. Expanding these architectures into multi-agent frameworks by introducing communication acts is still part of future work. Another difference is that knowledge about the changing environment is expressed in the PDC-agent architecture in terms of a Domain Conceptualization Language (DCL) [9] and a Concept Instance Pattern (CIP). While DCL and CIP can represent plans (which are analogous to activities in the AIA architecture), there is no support for expressing the direct or indirect effects of an action. This is a disadvantage in comparison to action-language-based architectures, since plans have to be pre-computed and the goals that they accomplish must be annotated according to the agent designer's intuition. Since action languages only require a description of the effects of individual actions (and plans consisting of all combinations of actions can be automatically computed), there is significantly less work for a human designer when working with APIA than with the PDC-agent architecture.

Thus, our proposed APIA architecture is, to the best of our knowledge, the only intentional agent architecture capable of modeling compliance with complex authorization and obligation policies, while allowing agents to come up with policy-compliant activities on the fly.

The major contributions presented in this paper are as follows:

  1. Create a bridge between research on intentional agents and policy compliance, thus producing the policy-aware intentional agent architecture APIA.

  2. Introduce various agent behavior modes with respect to compliance with authorization and obligation policies.

  3. Introduce mechanisms to check the consistency of a policy containing both authorization and obligation statements and reason over the interactions between these two.

  4. Implement APIA in clingo (version 5.4.1, https://potassco.org/clingo/) while leveraging clingo's Python API (https://potassco.org/clingo/python-api/5.4/). (As a by-product, AIA was also updated to this clingo version.)

2 Background

In this section, we briefly present the AIA architecture and the AOPL language, which form the two pillars of our work. We direct the unfamiliar reader to outside resources on Answer Set Programming [11, 15] and the action language AL [10], which are also relevant to our research.

2.1 AIA: Architecture for Intentional Agents

The Architecture for Intentional Agents, AIA [7], builds upon the Observe-Think-Act control loop of the AAA architecture ("Autonomous Agent Architecture") [3] and extends it in two directions. First, AIA adds the possibility of action failure: the agent attempts to perform an action during its control loop but may find that it is unable to do so. In this case, the action is deemed non-executable. Second, AIA addresses a limitation of AAA in which plans are not persisted across iterations of the control loop. In AIA, agents pursue goals by persisting in their intentions to execute action plans known to satisfy these goals (i.e., activities). Activities are represented via a set of statics [6, 7]:

{ activity(M), length(M,L), goal(M,G),
  component(M,1,C1), component(M,2,C2), ..., component(M,L,CL) }

where M is a unique identifier for the activity; C1, C2, ..., CL are the 1st, 2nd, ..., Lth components of the activity; and G is the goal that [C1, C2, ..., CL] achieves. Some activities are pre-computed and stored in the agent's knowledge base, while the rest can be generated on demand.

In addition to fluents and actions describing the agent's environment, AIA introduces mental fluents to keep track of the agent's progress in the currently intended activity and towards its desired goal. Mental fluents are updated through mental actions. The Theory of Intentions is a collection of axioms that maintain an agent's mental state. Elements of the agent's mental state include the currently selected goal G, stored in the active_goal(G) inertial fluent, and the current planned activity, stored in the status(M,k) inertial fluent. When either a goal is selected or an activity is planned, they are said to be intended. Mental action start(M) initiates the agent's intention to execute activity M and stop(M) terminates it. Though most actions are executed by the agent itself, some must be executed by the agent's controller. The exogenous mental action select(G) causes the agent to intend to achieve goal G while abandon(G) causes the agent to cease its intent to achieve goal G.

An agent in the AIA architecture performs the following loop:

1. Interpret observations.
2. Find intended action A.
3. Attempt to perform A; record this attempt in history.
4. Observe the world; record observations in history.
5. Repeat (i.e., go to step 1).                                          (1)

2.2 AOPL: Authorization and Obligation Policies in Dynamic Systems

In real-world applications, an autonomous agent may be required to follow certain rules or ethical constraints, and may be penalized when acting in violation of them. Thus, it is necessary to discuss policies for agent behavior and a formalism with which agents can deduce the compliance of their actions. Gelfond and Lobo [13] introduce the Authorization and Obligation Policy Language AOPL for policy specification. An authorization policy is a set of conditions that denote whether an agent's action is permitted or not. An obligation policy describes what an agent must do or must not do. AOPL works in conjunction with a dynamic system description written in an action language such as AL. An agent's policy is the subset of the trajectories in the domain's transition diagram that are desired by the agent's controller.

Policies of AOPL are specified using predicates permitted for authorization policies, obl for obligation policies, and static laws similar to those from action language AL:

permitted(a)    if  cond
¬permitted(a)   if  cond
obl(h)          if  cond
¬obl(h)         if  cond

where a is an action; h is a happening, i.e., an action or its negation (if obl(¬a) is true, then the agent must not execute a in the current state); and cond is a, possibly empty, conjunction of fluents, actions, or their negations. In addition to these strict policy statements, AOPL supports defeasible statements and priorities between them, as in:

d : normally permitted(a)    if  cond
d : normally ¬permitted(a)   if  cond
d : normally obl(h)          if  cond
d : normally ¬obl(h)         if  cond
prefer(di, dj)                                                           (2)

Gelfond and Lobo define policy compliance separately for authorizations vs. obligations:

Definition 1 (Policy Compliance – adapted from Gelfond and Lobo [13])

Authorization: A set A of actions occurring at a transition system state σ is strongly compliant with a policy P if all a ∈ A are known to be permitted at σ using rules in P. A is non-compliant with P if any of the actions are known to be not permitted at σ using rules in P. Otherwise, if P is unclear and does not specify whether an action a ∈ A is permitted or not at σ, then the set of actions is weakly compliant.

Obligation: A set of actions A occurring at state σ is compliant with an obligation policy P if a ∈ A whenever obl(a) can be derived from the rules of P, and a ∉ A whenever obl(¬a) can be derived from P at σ. Otherwise, it is non-compliant.

Note that a set of actions A can be strongly, weakly, or non-compliant with an authorization policy, but A can only be compliant or non-compliant with an obligation policy. Given an AOPL policy (with authorization and obligation policy statements), A is strongly compliant if it is strongly compliant with its authorization policy and compliant with its obligation policy. Likewise for weak compliance and non-compliance. Computing policy compliance is reduced to the problem of finding answer sets of a logic program obtained by translating the policy rules into ASP. In this translation, predicates obl and permitted are extended to include an extra argument I standing for the time step, while lp(x) denotes the ASP transformation of x, where x can be a rule, an action literal, a fluent literal, or a condition.
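To give a flavor of this translation, the following is a minimal sketch (with predicate, action, and constant names of our own choosing, not taken from [13]) of how two strict AOPL rules could be rendered in ASP with the extra time-step argument, where fluent literals in cond are mapped to holds/2 atoms:

    % Assumed AOPL rules (illustrative only):
    %   permitted(open(D))    if  ¬locked(D)
    %   ¬permitted(enter(R))  if  private(R)

    #const n = 2.
    step(0..n).
    door(d1). room(r1).

    % lp(cond): fluent literals become holds/2 (resp. -holds/2) atoms at step I.
    permitted(open(D), I)   :- door(D), -holds(locked(D), I), step(I).
    -permitted(enter(R), I) :- room(R),  holds(private(R), I), step(I).

    % An illustrative initial state.
    -holds(locked(d1), 0).  holds(private(r1), 0).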

π’œβ€‹π’ͺ​𝒫​ℒ\mathscr{AOPL} does not discuss interactions between authorization and obligation policies on the same action, does not define compliance in terms of obligation policies for a trajectory in the dynamic system, and does not compare the degree of compliance of two trajectories. All of these aspects are relevant when modeling a policy-aware intentional agent and are addressed in our work.

3 APIA Architecture

We can now introduce our APIA architecture for policy-aware intentional agents. We focus on two main aspects of APIA: reframing AOPL to fit an agent-centered architecture, and the encoding of the different policy compliance behavior modes of an APIA agent.

3.1 Re-envisioning AOPL Policies in an Agent-Centered Architecture

Gelfond and Lobo [13] conceived AOPL as a means to evaluate policy compliance in a dynamic system. This differs from AIA in the following ways:

  • AOPL evaluates trajectories in a domain's transition diagram from a global perspective, whereas AIA distinguishes between agent actions and exogenous actions, and chooses which agent actions to attempt next.

  • AOPL evaluates histories at "the end of the day," whereas the AIA architecture, while still reasoning over past actions in its diagnosis mode, places an emphasis on planning future actions to achieve a future desired state.

These differences prevent AOPL policies from interoperating with the AIA agent architecture out of the box. To address the first issue, we constrain AOPL policies to describe only agent actions in our APIA architecture. For the second issue, we adjust our policy compliance rules such that only future actions affect policy compliance. Since our focus is on planning, in APIA past actions are always considered "compliant," although they might not have been at the time. For an agent that previously had no choice but a non-compliant action, this allows the agent to conceive of "turning a new leaf" and seeking policy-compliant actions in the future.

Also, AOPL does not specify how authorization policy statements interact with obligation policy statements. For example, consider the following AOPL policy:

permitted(a)
obl(¬a)

which is contradictory, since the agent is permitted to perform action a but at the same time is obligated to refrain from it. Appealing to common sense, if an agent is obligated to refrain from an action, one would conclude that the action is not permitted. Likewise, it makes sense to say that, if an agent is obligated to do an action, then it must be permitted. Thus, we take these intuitions and create the following non-contradiction ASP axioms, in which we use literals obl and permitted expanded to include a new argument I representing the time step:

← agent_action(A), obl(A,I), ¬permitted(A,I).
← agent_action(A), obl(neg(A),I), permitted(A,I).                        (3)

These enforce that, at the very least, the authorization and obligation policies do not contradict each other, while allowing for defeasible policies to work appropriately.
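In concrete clingo syntax (with assumed predicate names and a hypothetical action a_test), these axioms become integrity constraints; a policy that violates them yields a program with no answer set, which is how such an inconsistency would surface:

    % Non-contradiction axioms from (3), written as clingo constraints.
    :- agent_action(A), obl(A, I), -permitted(A, I), step(I).
    :- agent_action(A), obl(neg(A), I), permitted(A, I), step(I).

    % Deliberately contradictory toy policy: the program below is unsatisfiable.
    step(0).
    agent_action(a_test).
    permitted(a_test, 0).
    obl(neg(a_test), 0).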

We also extend the translation of defeasible policy statements. Suppose we have the following:

normally permitted(a)
obl(¬a)  if  cond                                                        (4)

Using Gelfond and Lobo’s approach [13], the corresponding ASP translation would be:

permitted(a,I) ← not ¬permitted(a,I).
obl(neg(a),I) ← lp(cond).

where lp(cond) and obl(neg(a),I) represent the logic programming encoding of cond and obl(¬a), respectively. Based on policy (4), both permitted(a,I) and obl(neg(a),I) would be true at a time step I when cond is met. This violates the non-contradiction axioms in (3). So, we replace the translation of the defeasible statement in (4) with the following encoding:

permitted(a,I) ← not ¬permitted(a,I), not obl(neg(a),I).

This allows the presence of obl(¬a) to be an exceptional case to the defeasible rule.

In general, we propose translating the different types of defeasible statements in (2) as follows, respectively, where predicate ab facilitates dealing with possible additional (weak) exceptions:

permitted(a,I)    ← lp(cond), not ab(d,I), not ¬permitted(a,I), not obl(neg(a),I).
¬permitted(a,I)   ← lp(cond), not ab(d,I), not permitted(a,I), not obl(a,I).
obl(a,I)          ← lp(cond), not ab(d,I), not ¬obl(a,I), not ¬permitted(a,I).
obl(neg(a),I)     ← lp(cond), not ab(d,I), not ¬obl(neg(a),I), not permitted(a,I).
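As an illustration, a single defeasible authorization statement named d1 would instantiate the first of these schemas roughly as follows in clingo (the action greet, the fluent in_office, and the holds/step predicates are our own illustrative names, not part of the original encoding):

    step(0).  person(bob).  holds(in_office(bob), 0).

    % d1: normally permitted(greet(P)) if in_office(P)
    permitted(greet(P), I) :- person(P), holds(in_office(P), I),
                              not ab(d1(P), I),
                              not -permitted(greet(P), I),
                              not obl(neg(greet(P)), I),
                              step(I).

    % The answer set contains permitted(greet(bob),0); adding the fact
    % obl(neg(greet(bob)),0) would defeat the rule, as intended.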

3.2 Policy-Aware Agent Behavior

The AIA architecture, which is the underlying basis of APIA, introduces mental fluents and actions in addition to physical ones. In APIA, we additionally introduce policy fluents and actions needed to reason over policy compliance (see Table 1). The new policy action descriptions encode the effects of future agent actions on policy compliance and provide means for the control loop to deem non-compliant activities futile and execute compliant ones in their place.

Table 1: List of Policy Fluents and Actions in the APIA Architecture

  Fluents (inertial):  auth_compliance(strong), auth_compliance(weak),
                       obl_compliant(do_action), obl_compliant(refrain_from_action)
  Fluents (defined):   policy_compliant(f), for every physical fluent f
  Actions:             ignore_not_permitted(a), ignore_neg_permitted(a),
                       ignore_obl(a), ignore_obl(neg(a)), for every physical action a

For example, the dynamic causal laws in (5) define the inertial policy fluents auth_compliance(weak) and auth_compliance(strong) according to the definitions of strong and weak authorization policy compliance of AOPL seen in Definition 1:

a  causes  ¬auth_compliance(strong)  if  not permitted(a)
a  causes  ¬auth_compliance(weak)    if  ¬permitted(a)                   (5)

These rules are defined for every physical action a of the transition system. Should an action a occur where permitted(a) is not known to be true, then the scenario ceases to be strongly compliant (i.e., it becomes weakly compliant). Since auth_compliance(strong) cannot be made true again by any action, the rest of the scenario remains weakly compliant by inertia. Likewise, should an action a occur where permitted(a) is false, then the scenario ceases to be weakly compliant (i.e., it becomes non-compliant) and remains in this state by inertia.
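A minimal sketch of how the causal laws in (5) could look once translated to ASP (the occurs/holds/step naming is assumed for illustration and is not necessarily the exact encoding used in the implementation):

    #const n = 2.
    step(0..n).
    agent_action(greet(bob)).
    occurs(greet(bob), 0).   % permitted(greet(bob),0) is not derivable here

    % Law 1: an action not known to be permitted destroys strong compliance.
    -holds(auth_compliance(strong), I+1) :- agent_action(A), occurs(A, I),
                                            not permitted(A, I), step(I).
    % Law 2: an action known to be not permitted destroys weak compliance.
    -holds(auth_compliance(weak), I+1)   :- agent_action(A), occurs(A, I),
                                            -permitted(A, I), step(I).

    % Once lost, compliance stays lost (simplified inertia for this sketch).
    compliance_level(strong). compliance_level(weak).
    -holds(auth_compliance(L), I+1) :- -holds(auth_compliance(L), I),
                                       compliance_level(L), step(I).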

For every physical fluent f, we introduce a new defined policy fluent policy_compliant(f). This allows us to reuse the AIA control loop shown in (1) as is. When the agent controller wants to specify that the agent should achieve goal f in a policy-compliant manner, the controller simply has to initiate the action

select_goal(policy_compliant(f))

instead of the original select_goal(f). The policy fluent policy_compliant(f) is true iff f is true and auth_compliance(l) is true, for some minimum compliance threshold l set by the controller. Thus, when policy_compliant(f) is an agent's goal, activities below l-compliance are deemed futile and the agent works to achieve fluent f subject to l-compliance.
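A possible ASP rendering of this defined fluent, for a controller-chosen threshold (again with assumed holds/step naming and an illustrative fluent), is:

    #const threshold = strong.
    step(0).
    physical_fluent(greeted_by(alice, bob)).
    holds(greeted_by(alice, bob), 0).
    holds(auth_compliance(strong), 0).

    % policy_compliant(F) holds when F holds and the required compliance
    % level is still intact.
    holds(policy_compliant(F), I) :- physical_fluent(F), holds(F, I),
                                     holds(auth_compliance(threshold), I),
                                     step(I).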

Authorization Policies and Agent Behavior. To allow for cases when the threshold l set by the controller is not the maximum possible level of compliance or, in the future, cases when an agent deliberately chooses to act without l-compliance, we add policy actions ignore_not_permitted(a) and ignore_neg_permitted(a), where a is a physical action. By executing these actions concurrently with a, our agent ignores a's effect at that time step on weak compliance or non-compliance, respectively. This enables our agent to look for activities with a lower level of compliance if no activities that achieve f are strongly compliant. One can imagine that this capability can be used to model multiple agent behaviors, based on what the minimum and maximum requirements of adherence to their authorization policy are. We have parameterized the agent's behavior as seen in Table 2, and introduced names for these possible agent behaviors.

                             Require weak   Prefer weak over non-compl.   Ok with non-compl.
  Require strong             Paranoid       (Invalid)                     (Invalid)
  Prefer strong over weak    Cautious       Best effort                   (Invalid)
  Ok with weak               Subordinate    Subordinate when possible     Utilitarian

Table 2: APIA Authorization Policy Modes

One behavior mode is for the agent to strictly adhere to its authorization policy such that it never chooses to perform ignore_neg_permitted(a). This causes all non-compliant actions to indirectly cause policy_compliant(f) to be false, if they are executed. Hence, only activities with weakly or strongly compliant actions are considered. Since this mode never dares to become non-compliant, it is called subordinate.

A similar behavior mode causes the agent to never perform ignore_not_permitted(a). This causes all weakly and non-compliant actions to indirectly cause policy_compliant(f) to be false when executed. Hence, only activities with strongly compliant actions are considered. Since weakly compliant actions are actions for which the policy compliance is unknown, this mode is called paranoid, as it treats weakly compliant actions as if they were non-compliant.

Another behavior mode allows unrestricted access to the two actions ignore_neg_permitted(a) and ignore_not_permitted(a). This mode is called utilitarian because it reduces the behavior of APIA to that of AIA, where policies are not considered at all.

An interesting feature of the ignore_neg_permitted(a) and ignore_not_permitted(a) actions is the ability to optimize compliance. Using preference statements in ASP, we can require the control loop to minimize the use of these two policy actions. Hence, if it is possible to execute an activity that is strongly compliant, the agent will prefer it over a weakly or non-compliant one (since the use of these actions is required to allow policy_compliant(f) to be true). Under this condition, ignore_not_permitted(a) and ignore_neg_permitted(a) are only used when it is impossible to achieve the fluent f in a strongly or weakly compliant manner, respectively.
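One way such a preference could be written in clingo is with prioritized #minimize statements over occurrences of the policy actions, penalizing ignore_neg_permitted more severely than ignore_not_permitted (a sketch with assumed occurs/step naming; the implementation may encode the preference differently):

    step(0..1).
    agent_action(greet(bob)).
    policy_action(ignore_not_permitted(A)) :- agent_action(A).
    policy_action(ignore_neg_permitted(A)) :- agent_action(A).

    % A trivial choice of policy actions at each step, just to make the
    % optimization observable in this toy program.
    { occurs(PA, I) : policy_action(PA) } :- step(I).

    % Higher priority (@2): avoid tolerating non-compliance if at all possible.
    #minimize { 1@2, A, I : occurs(ignore_neg_permitted(A), I), step(I) }.
    % Lower priority (@1): then avoid tolerating weak compliance.
    #minimize { 1@1, A, I : occurs(ignore_not_permitted(A), I), step(I) }.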

The combination of compliance optimization with the first three behavior modes allows for more possible configurations. For example, adding optimization to the subordinate option makes a cautious mode. In this mode, the agent will try to mimic the behavior of the paranoid mode (all strongly compliant actions), but ultimately it will reduce to subordinate (all weakly compliant actions) in the worst case. Likewise, adding optimization to the utilitarian mode adds two options: best effort and subordinate when possible. Best effort prefers strong compliance over weak compliance and weak compliance over non-compliance, but ultimately permits non-compliance when no better alternatives exist. Subordinate when possible prefers weak compliance over non-compliance but does not optimize from weak compliance to strong compliance.

A new feature of this approach to optimization is the ability to optimize within the weakly and non-compliant categories. Consider two weakly compliant activities, 1 and 2, where activity 1 has more weakly compliant actions than activity 2. Since weakly compliant actions require a concurrent ignore_not_permitted(a) action, activity 1 will have more ignore_not_permitted(a) actions than activity 2. Hence, activity 2 will be preferred to activity 1, even though they both fall in the weakly compliant category. Gelfond and Lobo [13] do not consider such a feature.

Obligation Policies and Agent Behavior. So far we discussed authorization policies induced by an AOPL policy. To address obligation policies, we add policy fluents obl_compliant(do_action) and obl_compliant(refrain_from_action) with policy actions ignore_obl(a) and ignore_obl(neg(a)), as seen in Table 1. (For configurability, we consider obligation policies to do actions and to refrain from actions separately.) We extend the definition of policy_compliant(f) to require both obl_compliant(do_action) and obl_compliant(refrain_from_action) to be true. Like with authorization compliance, if obl(a) is true but action a does not occur, then obl_compliant(do_action) becomes false and remains false by inertia. Likewise for obl_compliant(refrain_from_action). If ignore_obl(a) or ignore_obl(neg(a)) are performed, then these effects on the obl_compliant fluents are temporarily waived.
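The corresponding causal laws could be sketched in ASP roughly as follows (assumed occurs/holds/step naming; a simplification, not the exact encoding): obligation compliance is lost when an obligation is violated and not explicitly waived by the matching ignore_obl action:

    #const n = 1.
    step(0..n).
    agent_action(submit_report).
    obl(submit_report, 0).     % an obligation to act, violated at step 0 below

    % Violating obl(A) without waiving it destroys do_action compliance.
    -holds(obl_compliant(do_action), I+1) :-
        agent_action(A), obl(A, I), not occurs(A, I),
        not occurs(ignore_obl(A), I), step(I).

    % Violating obl(neg(A)) without waiving it destroys refrain compliance.
    -holds(obl_compliant(refrain_from_action), I+1) :-
        agent_action(A), obl(neg(A), I), occurs(A, I),
        not occurs(ignore_obl(neg(A)), I), step(I).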

                           Honor obl(¬a)        Prefer honoring obl(¬a)   Ignore obl(¬a)
  Honor obl(a)             Subordinate          Permit commissions        (Not reasonable)
  Prefer honoring obl(a)   Permit omissions     Best effort               (Not reasonable)
  Ignore obl(a)            (Not reasonable)     (Not reasonable)          Utilitarian

Table 3: APIA Obligation Policy Modes

There are five different configurations (or behavior modes) an agent in the APIA architecture can have regarding its obligation policy (see Table 3). When in subordinate mode, the agent will never use either the ignore_obl(a) or the ignore_obl(neg(a)) action. Hence, all activities achieving policy_compliant(f) will be compliant with both aspects of its obligation policy. When in best effort mode, the agent prefers using other actions over these policy actions. Hence, activities will be compliant if possible but may include non-compliant elements when no other goal-achieving activities exist. The permit omissions and permit commissions options are variations of these modes. Mode permit omissions is like best effort with regard to obl(a) policy statements, but like subordinate with regard to obl(¬a) policy statements. Likewise, permit commissions is like subordinate with regard to obl(a) policy statements but like best effort regarding obl(¬a) statements. Utilitarian mode, like with authorization policies, reduces the behavior of an APIA agent with respect to its obligation policy to that of an AIA agent.

Behavior Mode Configurations. An agent's combined authorization and obligation policy configuration can be represented by a 2-tuple (A, O), where A is the authorization mode and O is the obligation mode. When an APIA agent is running in mode (utilitarian, utilitarian), its behavior reduces to that of an AIA agent (i.e., policy actions are not used in this mode). This is due to an optimization we provide internally.

For each APIA configuration, we adjust the definition of policy_compliant(f) such that excess policy actions are not required. For instance, in the case of an agent with a subordinate authorization mode, we adjust policy_compliant(f) such that ignore_not_permitted(a) is never needed, since such an agent always disregards strong compliance.

4 Examples

To demonstrate the operation of an agent in the APIA architecture, we introduce a series of examples that illustrate prototypical cases. For conciseness, we focus on three APIA configurations: (paranoid, subordinate), (best effort, best effort), and (utilitarian, utilitarian), and we limit ourselves to examples about authorization policies.

4.1 Example A: Fortunate case

To begin with a simple case, suppose that two people are in an office space that has four rooms with doors in between them. Room 1 is connected by door d12 to Room 2, Room 2 is connected by door d23 to Room 3, and so on. Door d34 has a lock and is currently in the unlocked position. Suppose our agent, Alice, wants to greet another agent, Bob. This scenario is represented by a dynamic domain description that considers:

  • Fluents: door_locked(D) for each door D, in_room(P,R), greeted_by(P,A) where person P is greeted by person A; and

  • Actions: move_through(A,D), lock_door(A,D), unlock_door(A,D), greet(A,P), where A is the person doing the action, D is a door, and P a person (the direct object of the action).

Assume that agent Alice is given a policy specifying that all actions are permitted, along with the following pre-computed activity that is stored in her knowledge base as the set of facts:

{ activity(1), length(1,4), goal(1, policy_compliant(greeted_by(alice,bob))),
  component(1,1, move_through(alice,d12)), component(1,2, move_through(alice,d23)),
  component(1,3, move_through(alice,d34)), component(1,4, greet(alice,bob)) }
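In concrete clingo input, this activity could simply be listed as facts, along the following lines (a sketch; the exact predicate spellings and door constants follow the conventions assumed here, not necessarily those of the released implementation):

    activity(1).
    length(1, 4).
    goal(1, policy_compliant(greeted_by(alice, bob))).
    component(1, 1, move_through(alice, d12)).
    component(1, 2, move_through(alice, d23)).
    component(1, 3, move_through(alice, d34)).
    component(1, 4, greet(alice, bob)).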

Before the control loop begins, Alice observes that she is in Room 1, Bob is in Room 4, the door d34 is unlocked, and she has not yet greeted Bob.

At timestep 0, the first iteration of the control loop begins. In this first step, Alice analyzes her observations and interprets unexpected observations by assuming that undetected exogenous actions occurred. None of her observations are unexpected, so no exogenous actions are assumed to have occurred. Alice then intends to wait at timestep 0 and attempts the wait action. Alice observes that her wait action was successful and that, in the meantime, the exogenous action

select(policy_compliant(greeted_by(alice,bob)))

happened. The time step is incremented and Alice does not observe any fluents.

The second iteration of the control loop begins. Alice analyzes her observation of select(policy_compliant(greeted_by(alice,bob))) and determines that active_goal(policy_compliant(greeted_by(alice,bob))) is true. Alice then starts planning to achieve policy_compliant(greeted_by(alice,bob)) and determines that she intends to start activity 1. Since each action in activity 1 is strongly compliant, no policy actions are needed.

The rest of the example is very straightforward and is almost identical to scenarios discussed in [6, 7] for the AIA architecture.

4.2 Example B: Strong compliance degrades to weak compliance

Let us consider a less fortunate example, in which a strongly compliant activity becomes weakly compliant due to an unexpected environmental observation. In the same scenario, suppose we modify Alice's policy from Example A such that, regarding greet(A,P), we have:

permitted(greet(A,P))  if  ¬busy_working(P)
permitted(greet(A,P))  if  busy_working(P), in_room(P,R),
                           door_connects(D,R), knocked_on_door(D)        (6)

We also have new fluents busy_working(P) and knocked_on_door(D), and actions begin_working(A) and knock_on_door(A,D). Let Alice's knowledge base contain two additional activities, 2 and 3, with the same goal as 1 and defined by the sets of facts:

{ activity(2), length(2,5), goal(2, policy_compliant(greeted_by(alice,bob))),
  component(2,1, move_through(alice,d12)), component(2,2, move_through(alice,d23)),
  component(2,3, knock_on_door(alice,d34)), component(2,4, move_through(alice,d34)),
  component(2,5, greet(alice,bob)),
  activity(3), length(3,2), goal(3, policy_compliant(greeted_by(alice,bob))),
  component(3,1, knock_on_door(alice,d34)), component(3,2, move_through(alice,d34)) }

Alice observes that Bob is not busy working, in addition to the initial observations of Example A. At timestep 0, the first iteration of the control loop begins. During the second iteration of the control loop (at timestep 1), Alice plans to achieve the policy_compliant(greeted_by(alice,bob)) goal. Since she believes Bob is not busy working, activity 1 is still strongly compliant and so is activity 2. Alice chooses activity 1 over activity 2 because it requires a shorter sequence of actions. She then executes activity 1 as in Example A until she enters Room 3, at which point she observes that Bob is busy working. During the next iteration (at timestep 5), the agent interprets this observation by inferring that begin_working(bob) happened at the previous timestep (4).

As a result, activity 1 becomes weakly compliant. Since Bob is busy working but Alice has not knocked on the door, no policy statement describes whether our next action, greet(alice,bob), is compliant or not. If Alice is operating in (utilitarian, utilitarian) mode, she continues the execution of activity 1 and greets Bob anyway. (This happens without the use of policy actions due to our internal optimizations.) Otherwise, Alice will stop the activity and then either refuse to plan another weakly compliant activity or use a concurrent policy action to dismiss this event.

If our agent is running in (paranoid, subordinate), Alice will refuse to execute a weakly compliant activity. Through planning, Alice will discover that a new activity that includes knocking at the door is strongly compliant (e.g., activity 3) and begin its execution. If our agent is running in (best effort, best effort), she will behave likewise because activity 3 is strongly compliant. The difference is that, if there did not exist a strongly compliant activity, she would plan a new activity that involved a policy action and greeted Bob anyway. Alice knocks on door d34 at timestep 7, greets Bob at timestep 8, and stops activity 3 at timestep 9.

4.3 Example C: Compliance degrades to non-compliance

Suppose we take policy rule (6), make it defeasible, and add this AOPL rule:

¬permitted(greet(A,P))  if  busy_working(P), supervisor_to(P,A)

Now, let us imagine that Bob is Alice's supervisor. Similar to Example B, our agent executes activity 1 until the observation that Bob is busy working. This time, we have a strict authorization statement forbidding greeting Bob, since he is Alice's supervisor. Under the (utilitarian, utilitarian) option, we proceed with activity 1 anyway. With the (paranoid, subordinate) option, our agent stops activity 1 but cannot construct a new activity that achieves the goal subject to its policy. Hence, the goal is futile and the agent waits until its environment changes such that a strongly compliant activity exists. Under the (best effort, best effort) option, however, our agent constructs a new activity that greets Bob anyway. The activity contains: greet(alice,bob), ignore_not_permitted(greet(alice,bob)), and ignore_neg_permitted(greet(alice,bob)).

4.4 Example D: Hierarchy of contradictory defeasible statements

Further extending Example C, suppose we turn all policy statements from Example A into defeasible ones (i.e., all actions are normally permitted) and add another policy statement:

m1(A,D):  normally  permitted(move_through(A,D))
m2(A,D):  normally  ¬permitted(move_through(A,D))  if  in_room(A,R1),
                                                       door_connects(D,R2),
                                                       private_office(R2),
                                                       R1 != R2

and the static private_office(R), with private_office(r4) as a fact. Since we have two contradictory defeasible statements, we need to add a preference between the two (without a preference, our agent can non-deterministically choose which of the two rules to apply). If we add:

prefer(m2(A,D), m1(A,D))

then, when Alice observes that Bob is not busy working at the beginning of the scenario, an agent running in (paranoid, subordinate) mode will immediately consider the goal to be futile. Unlike in Example C, our agent knows this immediately because private_office is a static, not an unexpected observation. If our agent is running in (best effort, best effort) mode, it creates an activity like activity 1, except that it contains ignore_not_permitted(greet(alice,bob)) and ignore_neg_permitted(greet(alice,bob)). Our utilitarian agent, like always, completely ignores our policy and executes activity 1.
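As a side note on how such a preference could take effect in the ASP translation: one possible encoding (our assumption, not necessarily the one used in [13] or in our implementation) defeats the less preferred rule through the ab predicate of Section 3.1 whenever the preferred rule is applicable:

    step(0).
    prefer(m2(alice,d34), m1(alice,d34)).
    holds(in_room(alice, r3), 0).
    door_connects(d34, r3).  door_connects(d34, r4).
    private_office(r4).

    % If the body of the preferred rule m2 holds at step I, rule m1 is
    % considered defeated (abnormal) at that step.
    ab(m1(A,D), I) :- prefer(m2(A,D), m1(A,D)),
                      holds(in_room(A,R1), I), door_connects(D,R2),
                      private_office(R2), R1 != R2, step(I).

    % Here ab(m1(alice,d34),0) is derived, so the defeasible permission from
    % m1 is blocked and the prohibition from m2 can prevail.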

5 Implementation

In this section, we discuss two important implementation aspects: the refactoring of the AIA implementation, including its Theory of Intentions and control loop, and the implementation of the APIA control loop using clingo's Python API.

5.1 AIA Theory of Intentions and Control Loop

Since APIA takes AIA as a basis, we first update Blount et al.'s [7] AIA implementation so that it runs on a state-of-the-art solver: clingo (version 5.4.1). (Our updated AIA implementation is available at https://gitlab.com/0x326/miami-university-cse-700-aia.git and is released under the MIT open-source license.) For this purpose, we re-implement the AIA logic program in ASP using only the description of the architecture presented by Blount et al. [6, 7]. During this process, we make minor modifications to AIA as a whole. First, we refactor the arrangement of ASP rules into multiple files according to their purpose in the AIA architecture (e.g., whether they are part of the Theory of Intentions, AIA's rules for computing models of history, or the AIA intended action rules). Second, we refactor the names of mental fluents in the Theory of Intentions so that they are more descriptive and self-documenting. Third, we extensively add inline comments to each ASP rule with reference quotations and page numbers from Blount et al.'s work. Lastly, we make minor corrections to ASP rules to match the translation of particular scenarios (i.e., histories) with the mathematical definitions proposed by Blount et al.

In addition to upgrading the AIA logic program, we also refactor the implementation of the AIA control loop. In his dissertation, Blount [6] introduced the AIA Agent Manager, an interactive Java program that allows an end-user to assign values to agent observations in a graphical interface for each control loop iteration. Since this requires manual input, it does not easily lend itself to automation and reproducibility of execution, which are required for performance benchmarking. Furthermore, the AIA Agent Manager is structured around interacting with an underlying solver using subprocesses and process pipes. While the AIA Agent Manager could conceivably invoke clingo as a subprocess, clingo 5 provides a unique opportunity for more advanced integration through its Python API.

Because of these two points, we replace the AIA Agent Manager with a new implementation of the AIA control loop written in Python 3.9.0. This new implementation uses a command-line interface and allows for reproducible execution through ASP input files. Since this control loop is also the basis for our APIA implementation, we will discuss it more in the next subsection.

5.2 Python Component

We provide an implementation of the AIA control loop for the APIA architecture. (Our APIA implementation is available at https://gitlab.com/0x326/miami-university-cse-700-apia.git and is released under the MIT open-source license.) The APIA control loop is implemented in Python 3.9.0 and clingo 5.4.1, using clingo's Python API. We provide two modes: an automatic mode and a manual mode. The automatic mode is intended for normal execution, while the manual mode is intended to aid in debugging unexpected output in answer sets. The automatic mode uses a command-line interface to specify the ASP files of the input domain, the observations of the agent, and the APIA policy compliance mode the agent should use. The control loop then provides human-readable output as to what happens at each control loop step (see Figure 1 in the appendix).

In the case of unexpected output, the manual mode allows one to examine the answer set at each step of the control loop. It also provides scripts to highlight differences between answer sets of different timesteps in a visual manner and to step through the control loop like one would do in a traditional debugger. Additionally, manual mode flags certain violations of the underlying assumptions of AIA and AOPL. For example, it generates an invalid predicate when there exists an action that is neither a physical, mental, nor policy action, and likewise when an action is neither an agent action nor an exogenous action. In addition, it generates an invalid predicate when an AOPL policy statement describes an object that is not declared as an action. These rules have been very useful in debugging the implementation of the APIA architecture and they will aid future end-users who encode and execute scenarios using this architecture. Since these rules are intended for debugging, they are not executed in the automatic mode.
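Such sanity-check rules could look roughly as follows (a sketch with assumed sort-predicate names such as action/1 and mentioned_in_policy/1, which are not necessarily the identifiers used in the released code):

    % Tiny demo: 'walk' is declared but given no kind, and 'fly' appears in a
    % policy without being declared as an action, so both are flagged.
    action(walk).
    mentioned_in_policy(fly).

    % An action must be physical, mental, or a policy action.
    invalid(action_without_kind(A)) :-
        action(A), not physical_action(A), not mental_action(A),
        not policy_action(A).

    % An action must be either an agent action or an exogenous action.
    invalid(action_without_owner(A)) :-
        action(A), not agent_action(A), not exogenous_action(A).

    % Policy statements may only mention declared actions.
    invalid(policy_over_non_action(X)) :-
        mentioned_in_policy(X), not action(X).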

6 Conclusions and Future Work

In this paper, we created an architecture for a policy-aware intentional agent by bridging together previous work on intentional agents [6, 7] and reasoning algorithms for authorization and obligation policies [13]. A main difficulty was adapting the work on policy compliance so that it would be relevant for an agent deciding which course of action to take. While Gelfond and Lobo's work could determine whether a trajectory (i.e., a sequence of actions) was strongly compliant, weakly compliant, or non-compliant, we introduced a wider range of agent behavior modes, which additionally explore the interactions between authorization and obligation policies.

This work can be further expanded by refining the decision making process in the planning phase of APIA by introducing a relative ranking system between activities that would achieve the same goal, based on the number of actions that are strongly, weakly, or non-compliant. Moreover, it would be interesting to allow the agent's controller to switch behavior modes while the agent is active, in the middle of executing an activity.

References

  • [2] Sandra Alves & Maribel Fernandez (2017): A graph-based framework for the analysis of access control policies. Theoretical Computer Science 685, pp. 3–22, 10.1016/j.tcs.2016.10.018.
  • [3] Marcello Balduccini & Michael Gelfond (2008): The AAA Architecture: An Overview. In: Architectures for Intelligent Theory-Based Agents, Papers from the 2008 AAAI Spring Symposium, 2008, AAAI Press, pp. 1–6. Available at https://www.aaai.org/Library/Symposia/Spring/2008/ss08-02-001.php.
  • [4] Steve Barker (2012): Logical Approaches to Authorization Policies. In Alexander Artikis, Robert Craven, Nihan Kesim Cicekli, Babak Sadighi & Kostas Stathis, editors: Logic Programs, Norms and Action - Essays in Honor of Marek J. Sergot on the Occasion of His 60th Birthday, Lecture Notes in Computer Science 7360, Springer, pp. 349–373, 10.1007/978-3-642-29414-3_19.
  • [5] Steve Barker, Guido Boella, Dov Gabbay & Valerio Genovese (2014): Reasoning about delegation and revocation schemes in answer set programming. Journal of Logic and Computation 24(1), pp. 89–116, 10.1093/logcom/exs014. Publisher: Oxford Academic.
  • [6] Justin Blount (2013): An architecture for intentional agents. Ph.D. thesis, Texas Tech University.
  • [7] Justin Lane Blount, Michael Gelfond & Marcello Balduccini (2014): Towards a Theory of Intentional Agents. In: 2014 AAAI Spring Symposium Series, pp. 10–17. Available at https://www.aaai.org/ocs/index.php/SSS/SSS14/paper/viewFile/7681/7708.
  • [8] David Ferraiolo, Janet Cugini & D. Richard Kuhn (1995): Role-based access control (RBAC): Features and motivations. In: Proceedings of the 11th Annual Computer Security Applications Conference, pp. 241–48. Available at https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=916537.
  • [9] Ji Gao, Cheng-Xiang Yuan & Jing Wang (2005): SASA5: A Method System for Supporting Agent Social Activities. Chinese Journal of Computers 28(5), pp. 838–848. Available at http://cjc.ict.ac.cn/eng/qwjse/view.asp?id=1764.
  • [10] Michael Gelfond & Yulia Kahl (2014): Knowledge Representation, Reasoning, and the Design of Intelligent Agents. Cambridge University Press, 10.1017/CBO9781139342124.
  • [11] Michael Gelfond & Vladimir Lifschitz (1991): Classical Negation in Logic Programs and Disjunctive Databases. New Generation Computing 9(3/4), pp. 365–386, 10.1007/BF03037169.
  • [12] Michael Gelfond & Vladimir Lifschitz (1998): Action languages. Electronic Transactions on AI 3(16), pp. 193–210. Available at http://www.ep.liu.se/ej/etai/1998/007/.
  • [13] Michael Gelfond & Jorge Lobo (2008): Authorization and Obligation Policies in Dynamic Systems. In Maria Garcia de la Banda & Enrico Pontelli, editors: Logic Programming, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, pp. 22–36, 10.1007/978-3-540-89982-2_7.
  • [14] Bei-shui Liao, Hua-xin Huang & Ji Gao (2006): An Extended BDI Agent with Policies and Contracts. In Zhong-Zhi Shi & Ramakoti Sadananda, editors: Agent Computing and Multi-Agent Systems, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, pp. 94–104, 10.1007/11802372_12.
  • [15] Victor W. Marek & Miroslaw Truszczynski (1999): Stable Models and an Alternative Logic Programming Paradigm. In Krzysztof R. Apt, Victor W. Marek, Mirek Truszczynski & David Scott Warren, editors: The Logic Programming Paradigm - A 25-Year Perspective, Artificial Intelligence, Springer, pp. 375–398, 10.1007/978-3-642-60085-2_17.
  • [16] Yan-Bin Peng, Ji Gao, Jie-Qin Ai, Cun-Hao Wang & Hang Guo (2008): An Extended Agent BDI Model with Norms, Policies and Contracts. In: 2008 4th International Conference on Wireless Communications, Networking and Mobile Computing, pp. 1–4, 10.1109/WiCom.2008.1197. ISSN: 2161-9654.
  • [17] Anand S. Rao & Michael P. Georgeff (1991): Modeling Rational Agents within a BDI-Architecture. In: Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR'91), Cambridge, MA, USA, April 22-25, 1991, pp. 473–484. Available at https://dl.acm.org/doi/10.5555/3087158.3087205.
  • [18] KhairΒ Eddin Sabri & Nadim Obeid (2016): A temporal defeasible logic for handling access control policies. Applied Intelligence 44(1), pp. 30–42, 10.1007/s10489-015-0692-8.

Appendix A: APIA Output

Figure 1: Automatic execution of Example A using configuration (paranoid, subordinate)