Knowing How to Plan
Abstract
Various planning-based know-how logics have been studied in the recent literature. In this paper, we use such a logic to do know-how-based planning via model checking. In particular, we can handle the higher-order epistemic planning involving know-how formulas as the goal, e.g., find a plan to make sure $p$ such that the adversary does not know how to make $p$ false in the future. We give a PTIME algorithm for the model checking problem over finite epistemic transition systems and axiomatize the logic under the assumption of perfect recall.
1 Introduction
Standard Epistemic Logic (EL) mainly studies reasoning patterns of knowing that $\varphi$, despite early contributions by Hintikka on formulating other know-wh expressions such as knowing who and knowing why using first-order and higher-order modal logic. In recent years, there has been a resurgence of interest in epistemic logics of know-wh, powered by new techniques for fragments of first-order modal logic based on the so-called bundle modalities packing a quantifier and a normal epistemic modality together [27, 25, 22]. Within the varieties of logics of know-wh, the logics of know-how have received the most attention in AI (cf. e.g., [26, 9, 21, 17]).
Besides the inspirations from philosophy and linguistics (e.g., [24]), the idea of automated planning in AI also plays an important role in the development of various logics of know-how. The core idea is (roughly) to interpret knowing how to achieve $\varphi$ as a de re statement of knowledge: “there is a plan such that you know that this plan will ensure $\varphi$”. Here, depending on the exact notion of plans and how much we want to “ensure” that the plan works, there can be different semantics based on ideas from conformant planning and contingent planning in AI. However, as shown by Li and Wang [17], there is a logical core which is independent of the exact notions of plans underlying all these notions of know-how. We can actually unify different planning-based approaches in a powerful framework.
In this paper, we show that the connection between planning and know-how is not merely one-way in terms of planning-based know-how; it also makes perfect sense to do know-how-based planning, which generalizes the notion of planning to incorporate know-how-based goals, to be explained in the next subsection. As observed in [17], the typical epistemic planning problem given explicit models can be viewed as a model checking problem in our framework with the know-how modality in the language. For example, $\mathit{Kh}_1(K_2 p \wedge \neg K_3 p)$ captures the epistemic planning problem for agent 1 to ensure that agent 2 knows that $p$ while keeping it a secret from agent 3. Such epistemic planning can also be done in other epistemic approaches, for example, by using dynamic epistemic logic (cf. e.g., [4, 6, 5, 8]). However, we can do much more with the $\mathit{Kh}_i$ modality in hand, as explained below.
1.1 Higher-order epistemic planning
A distinct feature of modal logic is that we can bring some notions of the meta-language into the object-language. This is not merely formalizing the existing meta-language concepts more precisely, since the object language can open new possibilities. Consider the following multi-agent epistemic language of know-that and know-how:

$\varphi ::= p \mid \neg\varphi \mid (\varphi\wedge\varphi) \mid K_i\varphi \mid \mathit{Kh}_i\varphi$

where $K_i\varphi$ reads “agent $i$ knows that $\varphi$” and $\mathit{Kh}_i\varphi$ reads “agent $i$ knows how to achieve $\varphi$”.
Since we have the know-how operators $\mathit{Kh}_i$ in the fully compositional logical language, the goal of planning can involve (Boolean combinations and nestings of) know-how formulas as well, and we call such planning problems higher-order epistemic planning, i.e., planning about planning. By higher-order we do not mean higher-order logic or higher-order epistemic goals in terms of nested know-that formulas.
Although in the single-agent scenario, $\mathit{Kh}\mathit{Kh}\varphi$ is equivalent to $\mathit{Kh}\varphi$ under reasonable conditions as shown in [9, 17], it still makes sense to do higher-order planning. For example, the planning problem to achieve $p$ with the future control to turn it off is expressed by $\mathit{Kh}(p\wedge\mathit{Kh}\neg p)$, which is not reducible to a formula without the nesting of $\mathit{Kh}$. (In contrast to normal modal logic, $\mathit{Kh}\varphi\wedge\mathit{Kh}\psi\to\mathit{Kh}(\varphi\wedge\psi)$ is invalid and thus cannot be applied, cf. e.g., [17].)
In a multi-agent setting, given that different agents may have different abilities, the importance of higher-order planning is self-evident. Actually, it is a characteristic human instinct to plan in such a higher-order way. A one-year-old baby girl may not know how to open a bottle, but she knows how to use her parents’ knowledge-how to achieve her goal. (As new fathers, both authors know this well, and sometimes it can be tiring to be a fully cooperative parent.) At the same time, the parents are eager to know how to let their children acquire relevant knowledge-how as early as possible. As a more concrete example about academic collaborations, it often happens that neither of two researchers knows how to prove a theorem independently, but one knows how to show a critical lemma which can simplify the original problem and thus enable the other to prove the final theorem using her expertise about the simplified statement. In our language, the situation can be expressed as $\mathit{Kh}_1\mathit{Kh}_2\varphi$. Of course, it depends on the cooperation of agent 2 to finally ensure the goal of agent 1. The nesting of know-how can go arbitrarily deep and can also be interactive, such as $\mathit{Kh}_1\mathit{Kh}_2\mathit{Kh}_1\varphi$. Such planning based on others’ knowledge-how is at the core of the arts of leadership and management in general.
Higher-order planning also makes perfect sense in non-cooperative settings. For example, it is sometimes important to ensure not only the goal but also that the adversary cannot spoil it in the future. This is expressed by $\mathit{Kh}_1(\varphi\wedge\neg\mathit{Kh}_2\neg\varphi)$ in our language of knowing how. It is also interesting to have mixed goals with both $K_i$ and $\mathit{Kh}_i$, such as showing a commitment by achieving $\varphi$ while letting the other know you will not be able to change it afterwards: $\mathit{Kh}_1(\varphi\wedge K_2\neg\mathit{Kh}_1\neg\varphi)$. In the more interactive game-like scenarios, formulas such as $\mathit{Kh}_1\neg\mathit{Kh}_2\neg\mathit{Kh}_1\varphi$ describe a winning-strategy-like plan. We will come back to the related work using Alternating-time Temporal Logic (ATL)-like logics at the end of this introduction. Let us first see a concrete example.
Example 1
Suppose a patient is experiencing some rare symptom $r$. To know the cause ($p$ or $q$), Doctor 1 in the local hospital suggests the patient first take a special fMRI scan (action $a$), which is only available in the local hospital at the moment, and bring the result to Doctor 2 in a hospital in another city, who is more experienced in examining the results from the scanner. Doctor 2 will then know the cause, depending on which a different treatment ($b$ or $c$) should be performed by Doctor 2 to cure the patient. Intuitively, Doctor 1 knows how to let Doctor 2 know how to cure the patient, although neither Doctor 1 nor Doctor 2 knows how to cure it without the help of the other. The situation is depicted below as a model with both epistemic relations (dotted lines labelled by agent names) and action relations (solid lines labelled by actions). Note that only agent 1 can execute action $a$ and only agent 2 can perform $b$ or $c$. Reflexive epistemic arrows are omitted.
We would like $\mathit{Kh}_1\mathit{Kh}_2\varphi$ to hold at the initial state according to the semantics to be introduced, where $\varphi$ says that the patient is cured.
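Since the original figure cannot be reproduced here, the following sketch shows one plausible encoding of the scenario as plain data; all state, action, and proposition names (s1, t1, cured, etc.) are our own illustrative assumptions rather than the paper's.

```python
# One plausible encoding of Example 1 (names are illustrative assumptions).
states = {"s1", "s2", "t1", "t2", "u1", "u2"}

# Valuation: r = symptom, p/q = the two causes, cured = the goal.
V = {"s1": {"r", "p"}, "s2": {"r", "q"},   # before the scan
     "t1": {"r", "p"}, "t2": {"r", "q"},   # after the scan a
     "u1": {"cured"},  "u2": {"cured"}}    # after treatment b or c

# Only Doctor 1 can do a (the scan); only Doctor 2 can do b, c (treatments).
A = {1: {"a"}, 2: {"b", "c"}}
R = {"a": {("s1", "t1"), ("s2", "t2")},
     "b": {("t1", "u1")},                  # b cures cause p
     "c": {("t2", "u2")}}                  # c cures cause q

# Epistemic partitions: Doctor 1 never learns the cause; Doctor 2 can
# tell t1 from t2 after examining the scan result.
partitions = {1: [{"s1", "s2"}, {"t1", "t2"}, {"u1"}, {"u2"}],
              2: [{"s1", "s2"}, {"t1"}, {"t2"}, {"u1"}, {"u2"}]}
```

Under the semantics of Section 4, Doctor 1's witness plan is simply $a$: in both resulting states, Doctor 2 has a knowledge-based plan (do $b$ if the cause is known to be $p$, else $c$) that is guaranteed to end in a cured state.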
In this paper, to facilitate such higher-order planning using model checking, we extend the single-agent framework in [17] to a genuine multi-agent setting and study the model checking complexity in detail. Technical and computational complications arise due to the introduction of multiple agents with different abilities and knowledge on which to base their own plans. To stay as close as possible to the intended application of automated planning, we will restrict ourselves to finite models with perfect recall (cf. [17] for the discussions on the implicit assumption of perfect recall in epistemic planning and the technical difficulty of capturing it by axioms). The model checking algorithm is based on a correspondence between execution trees in the models and the syntactically specified knowledge-based plans. It turns out that this correspondence result also enables us to significantly simplify the proofs of the soundness and completeness of the proof system compared with the method in [9, 17].
We summarize our contributions below.
• A multi-agent framework of the logic of knowing how, which can handle the diversity of agents’ abilities and formally specified plans based on each agent’s own knowledge.
• A PTIME model checking algorithm for the full language.
• A complete axiomatization over finite models with perfect recall to ensure the semantics works as desired. (Axiomatizations can also help us to do abstract syntactic reasoning about know-how without fixing a model.)
1.2 Related work
Doing automated planning using model checking techniques is not a new idea (cf. e.g., [11]). The most distinctive feature of our framework, compared to other approaches to (epistemic) planning such as [6], is that we can express the planning problem in the object language and incorporate higher-order epistemic planning. Following [10], we base our framework on explicit-state models to lay out the theoretical foundation, instead of using more practical compact representations of models and actions which can generate the explicit-state models with certain properties (cf. [16, 17] for the discussions on the technical connections of the two). In contrast to the informal, model-dependent notions of plans such as policies, we use a plan specification language to formalize knowledge-based plans inspired by [14, 15, 28], but with more expressive branching conditions and more general constructors. Instead of the general structural operational semantics in [17], we give a more intuitive semantics for our knowledge-based plans in this paper.
Compared to the coalition-based know-how approaches such as [21], we do not have any group know-how modality, but we use a much more general notion of plans than the single-step joint action there. Besides the axiomatizations, which are the main focus in the previous work on the logic of know-how, we study the model checking complexity that is crucial for the application of our work to planning. A second-order know-how modality was proposed in [20], which says the coalition $C$ knows how coalition $D$ can ensure $\varphi$ by a single-step joint action, though $D$ may not know how themselves. Note that this is neither $K_i\mathit{Kh}_j\varphi$ nor $\mathit{Kh}_i\mathit{Kh}_j\varphi$ in our setting, where agent $i$ may not know the plan that others may use to reach the goal.
The $\mathit{Kh}_i$ operator is clearly also related to the strategy-based de re variants of the (single-agent) ATL operator under imperfect information (cf. e.g., [23, 13]). In the setting of concurrent games of ATL, the strategies are functions from finite histories (or epistemic equivalence classes of them) to available actions, which induce the resulting plays of the given game; these plays are usually infinite histories, on which temporal properties can be verified. This “global” notion of strategies leads to the problem of revocability of the strategies when nesting the operators: can you change the strategy when evaluating a formula in the scope of another such operator [12, 2]? In contrast, it is crucial that our witness plans for know-how are not global and always terminate, and there is no controversy in the nested form of $\mathit{Kh}_i$ operators in our setting. It is also crucial that the plans are executed sequentially in our setting, e.g., $\mathit{Kh}_1\mathit{Kh}_2\varphi$ simply means agent 1 knows how to make sure agent 2 knows how to make sure $\varphi$ after agent 1 finishes executing the witness plan for its know-how. Moreover, while the de re variants of ATL are useful in reasoning about (imperfect information) concurrent games (cf. e.g., [19, 18]), our framework is closer to the standard automated planning in AI over transition systems. We leave a detailed technical comparison to the full version of the paper.
Structure of the paper
In the rest of the paper, we first briefly introduce an epistemic language in Section 2, which is used to specify the branching conditions of our plan specification language in Section 3. In Section 4 we introduce the semantics of our know-how language and give a PTIME model checking algorithm. Finally, we obtain a complete axiomatization in Section 5 and conclude with future directions in Section 6.
Due to the strict space limitation, we have to omit some proofs.
2 Preliminaries
To specify the knowledge-based plans, we need the following formal language EAL to express the knowledge conditions.
Definition 2 (EAL Language)
Let $\mathbf{P}$ be a set of propositional letters, $\mathbf{I}$ be a set of agents, and $\mathbf{A}$ be a set of atomic actions. The language of Epistemic Action Logic ($\mathsf{EAL}$) is defined as follows:

$\varphi ::= \top \mid p \mid \neg\varphi \mid (\varphi\wedge\varphi) \mid K_i\varphi \mid [a]\varphi$

where $p\in\mathbf{P}$, $i\in\mathbf{I}$, and $a\in\mathbf{A}$.
The auxiliary connectives and modalities are defined as abbreviations as usual.
The semantics of $\mathsf{EAL}$ formulas is defined on finite models with both epistemic relations and transitions labelled by atomic actions, which will also be used for the know-how logic. In contrast with the single-agent models of [17], for each agent $i$ we have a special set $\mathbf{A}_i$ of actions that are executable by $i$. Note that an action may be executable by multiple agents. Moreover, we implicitly assume each agent is aware of all the actions, even those not executable by itself.
Definition 3 (Model)
An Epistemic Transition System (i.e., a model) is a tuple

$\mathcal{M} = (W, \{\mathbf{A}_i\}_{i\in\mathbf{I}}, \{\sim_i\}_{i\in\mathbf{I}}, \{\to_a\}_{a\in\mathbf{A}}, V)$

where: $W$ is a non-empty set of states; $\mathbf{A}_i\subseteq\mathbf{A}$ is a set of actions for each $i\in\mathbf{I}$; $\sim_i$ is an equivalence relation on $W$ for each $i\in\mathbf{I}$; $\to_a$ is a binary relation on $W$ for each $a\in\mathbf{A}$; $V: W\to 2^{\mathbf{P}}$ is a valuation. A frame is a model without the valuation function.
Notations
In the rest of the paper, we use $s\to_a t$ to denote $(s,t)\in\to_a$ for notational simplicity. We use $\to_a(s)$ to denote the set $\{t\in W\mid s\to_a t\}$ and $[s]_i$ to denote the set $\{t\in W\mid s\sim_i t\}$. We sometimes call the equivalence class $[s]_i$ an $i$-belief-state. A model captures the agents’ abilities to act and their uncertainty about the states.
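To make the definitions concrete, here is a minimal Python sketch of an epistemic transition system together with the notational helpers just introduced; all class and method names are our own choices, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class ETS:
    """A finite epistemic transition system (Definition 3)."""
    states: set           # W
    agent_actions: dict   # i -> set of actions (the set A_i)
    sim: dict             # i -> set of pairs, an equivalence relation ~i
    trans: dict           # a -> set of pairs, the relation ->a
    val: dict             # state -> set of true propositional letters

    def belief_state(self, i, s):
        """[s]_i: the i-equivalence class of s, an i-belief-state."""
        return frozenset(t for t in self.states if (s, t) in self.sim[i])

    def succ(self, a, s):
        """->a(s): the set of a-successors of s."""
        return {t for (u, t) in self.trans.get(a, set()) if u == s}
```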
$\mathsf{EAL}$ formulas are given truth values on pointed epistemic transition systems.
Definition 4 (Semantics)
Given a pointed model $(\mathcal{M},s)$ where $\mathcal{M}=(W,\{\mathbf{A}_i\}_{i\in\mathbf{I}},\{\sim_i\}_{i\in\mathbf{I}},\{\to_a\}_{a\in\mathbf{A}},V)$ and a formula $\varphi\in\mathsf{EAL}$, the semantics is given below:

$\mathcal{M},s\vDash \top$ always;
$\mathcal{M},s\vDash p$ iff $p\in V(s)$;
$\mathcal{M},s\vDash \neg\varphi$ iff $\mathcal{M},s\nvDash\varphi$;
$\mathcal{M},s\vDash \varphi\wedge\psi$ iff $\mathcal{M},s\vDash\varphi$ and $\mathcal{M},s\vDash\psi$;
$\mathcal{M},s\vDash K_i\varphi$ iff $\mathcal{M},t\vDash\varphi$ for all $t$ such that $s\sim_i t$;
$\mathcal{M},s\vDash [a]\varphi$ iff $\mathcal{M},t\vDash\varphi$ for all $t$ such that $s\to_a t$.
A formula is valid on a frame if it is true on all the pointed models based on that frame.
Let $S$ be a set of states. We sometimes use $\mathcal{M},S\vDash\varphi$ to denote that $\mathcal{M},s\vDash\varphi$ for all $s\in S$. Thus $\mathcal{M},S\nvDash\varphi$ denotes that $\mathcal{M},s\nvDash\varphi$ for some $s\in S$. E.g., we may write $\mathcal{M},[s]_i\vDash\varphi$ or $\mathcal{M},\to_a(s)\vDash\varphi$.
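Continuing the sketch, the truth conditions above can be turned into a direct recursive evaluator; the tuple-based formula encoding is our own assumption.

```python
# Formulas as nested tuples, e.g. ("K", 1, ("box", "a", ("p", "q"))).
def holds(m: ETS, s, phi) -> bool:
    op = phi[0]
    if op == "p":                       # atomic proposition ("p", name)
        return phi[1] in m.val[s]
    if op == "not":
        return not holds(m, s, phi[1])
    if op == "and":
        return holds(m, s, phi[1]) and holds(m, s, phi[2])
    if op == "K":                       # K_i: true in every ~i-alternative
        _, i, psi = phi
        return all(holds(m, t, psi) for t in m.belief_state(i, s))
    if op == "box":                     # [a]: true after every a-transition
        _, a, psi = phi
        return all(holds(m, t, psi) for t in m.succ(a, s))
    raise ValueError(f"unknown operator: {op!r}")
```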
Here are some standard results about bisimulation that will be useful in the later technical discussion (cf. e.g., [3]).
Definition 5 (Bisimulation)
Given a model $\mathcal{M}$, a non-empty symmetric binary relation $Z$ on the domain of $\mathcal{M}$ is called a bisimulation iff whenever $sZt$ the following conditions are satisfied:
(1) $p\in V(s)$ iff $p\in V(t)$ for any $p\in\mathbf{P}$.
(2) for any $R\in\{\sim_i\mid i\in\mathbf{I}\}\cup\{\to_a\mid a\in\mathbf{A}\}$, if $sRs'$ for some $s'$ then there exists a $t'$ such that $tRt'$ and $s'Zt'$.
(3) for any $R\in\{\sim_i\mid i\in\mathbf{I}\}\cup\{\to_a\mid a\in\mathbf{A}\}$, if $tRt'$ for some $t'$ then there exists an $s'$ such that $sRs'$ and $s'Zt'$.
The pointed models $(\mathcal{M},s)$ and $(\mathcal{N},t)$ are said to be bisimilar ($(\mathcal{M},s)\leftrightarrow(\mathcal{N},t)$) if there is a bisimulation $Z$ such that $sZt$ (two models can be viewed as one by taking their disjoint union).
We use $[s]_i\leftrightarrow[t]_i$ to denote that for each $s'\in[s]_i$, there exists $t'\in[t]_i$ such that $(\mathcal{M},s')\leftrightarrow(\mathcal{M},t')$, and vice versa. Given two pointed models $(\mathcal{M},s)$ and $(\mathcal{N},t)$, we use $(\mathcal{M},s)\equiv(\mathcal{N},t)$ to denote that for each $\varphi\in\mathsf{EAL}$, $\mathcal{M},s\vDash\varphi$ if and only if $\mathcal{N},t\vDash\varphi$. Similarly, $(\mathcal{M},s)\equiv_i(\mathcal{N},t)$ denotes that for each formula of the form $K_i\varphi$ in $\mathsf{EAL}$, $\mathcal{M},s\vDash K_i\varphi$ if and only if $\mathcal{N},t\vDash K_i\varphi$.
Please note that $(\mathcal{M},s)\equiv(\mathcal{N},t)$ implies $(\mathcal{M},s)\equiv_i(\mathcal{N},t)$, but the other way around does not hold in general.
As an adaptation of the Hennessy-Milner theorem for modal logic (cf. e.g., [3]), we have:
Proposition 6
Given a finite model $\mathcal{M}$, for all states $s,t$ in $\mathcal{M}$, we have that $(\mathcal{M},s)\leftrightarrow(\mathcal{M},t)$ iff $(\mathcal{M},s)\equiv(\mathcal{M},t)$.
Moreover, if we only look at the $K_i$-formulas, we have:
Proposition 7
Given a finite model $\mathcal{M}$, for all states $s,t$ in $\mathcal{M}$, we have that $[s]_i\leftrightarrow[t]_i$ iff $(\mathcal{M},s)\equiv_i(\mathcal{M},t)$.
Most approaches to epistemic planning implicitly assume the property of perfect recall (cf. [17, Sec. 6.5]); here we will also consider this extra property, which will play a role in model checking.
Definition 8 (Perfect recall)
Given a model $\mathcal{M}$, we say $\mathcal{M}$ has perfect recall if for each $i\in\mathbf{I}$ and each $a\in\mathbf{A}_i$: if $s\to_a s'$ and $s'\sim_i t'$, then there must exist a state $t$ such that $s\sim_i t$ and $t\to_a t'$.
In words, if the agent cannot distinguish two states after doing $a$, then it could not distinguish the states they came from before doing $a$. We refer to this property as perfect recall (PR).
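For finite models, Definition 8 can be checked directly by brute force; the sketch below quantifies over each agent's own actions, which is one reading of the definition.

```python
# Check perfect recall: whenever s ->a s' and s' ~i t', some t with
# s ~i t must satisfy t ->a t'. Brute force over a finite model.
def has_perfect_recall(m: ETS) -> bool:
    for i, actions in m.agent_actions.items():
        for a in actions:
            for (s, s1) in m.trans.get(a, set()):
                for t1 in m.belief_state(i, s1):
                    if not any((t, t1) in m.trans.get(a, set())
                               for t in m.belief_state(i, s)):
                        return False
    return True
```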
3 A specification language for knowledge-based plans
Inspired by [14], we only consider knowledge-based plans for each agent. We introduce the following specification language which is a multi-agent variant of a fragment of the programming language introduced in [17].
Definition 9 (Knowledge-based specification language )
Based on $\mathsf{EAL}$, the language $\mathsf{PL}_i$ is defined as follows:

$\pi ::= \epsilon \mid a \mid \pi;\pi \mid \text{if } K_i\varphi \text{ then } \pi \text{ else } \pi$

where $\epsilon$ is the empty plan, $a\in\mathbf{A}_i$, and $\varphi\in\mathsf{EAL}$.
Note that the atomic action $a$ in the above definition must belong to $\mathbf{A}_i$, but the condition $K_i\varphi$ can contain epistemic modalities and actions of other agents, e.g., $K_i[a]p$ is a legitimate condition for a knowledge-based plan for $i$ even when $a\notin\mathbf{A}_i$.
We will abbreviate “if $K_i\varphi$ then $\pi$ else $\epsilon$” as “if $K_i\varphi$ then $\pi$”. We use $\pi^{n}$ to denote the program $\pi;\dots;\pi$ ($n$ copies of $\pi$), and $\pi^{0}$ is the empty program $\epsilon$.
Let $\mathcal{M}$ be an epistemic transition system, $s$ be a state in $\mathcal{M}$, and $\pi$ be a program in $\mathsf{PL}_i$. The state set $\mathcal{E}(\pi,s)$, which is the set of states on which executing $\pi$ on $s$ will terminate, is defined by induction on $\pi$ as follows:

$\mathcal{E}(\epsilon,s)=\{s\}$;
$\mathcal{E}(a,s)=\to_a(s)$;
$\mathcal{E}(\pi_1;\pi_2,s)=\bigcup\{\mathcal{E}(\pi_2,t)\mid t\in\mathcal{E}(\pi_1,s)\}$;
$\mathcal{E}(\text{if }K_i\varphi\text{ then }\pi_1\text{ else }\pi_2,\ s)=\mathcal{E}(\pi_1,s)$ if $\mathcal{M},s\vDash K_i\varphi$, and $\mathcal{E}(\pi_2,s)$ otherwise.
We use $\mathcal{E}(\pi,S)$ to denote the set $\bigcup_{s\in S}\mathcal{E}(\pi,s)$. Given the property of perfect recall, the resulting states of a knowledge-based plan have a nice property.
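The inductive clauses for $\mathcal{E}(\pi,s)$ translate directly into a recursive function; the plan encoding (tuples tagged "eps", "act", "seq", "if") is our own assumption, and the branching condition is an $\mathsf{EAL}$ formula evaluated with the `holds` sketch above.

```python
# E(pi, s): the states on which executing pi from s terminates.
def exec_set(m: ETS, s, pi) -> set:
    op = pi[0]
    if op == "eps":                            # E(eps, s) = {s}
        return {s}
    if op == "act":                            # E(a, s) = ->a(s)
        return m.succ(pi[1], s)
    if op == "seq":                            # union over intermediate states
        return set().union(*[exec_set(m, t, pi[2])
                             for t in exec_set(m, s, pi[1])])
    if op == "if":                             # branch on a knowledge condition
        _, cond, p1, p2 = pi
        return exec_set(m, s, p1 if holds(m, s, cond) else p2)
    raise ValueError(f"unknown plan constructor: {op!r}")
```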
Proposition 10
If $\mathcal{M}$ has perfect recall and $\pi\in\mathsf{PL}_i$, then $\mathcal{E}(\pi,[s]_i)$ is closed under $\sim_i$.
Next we will define a notion of strong executability, which is a generalized version of the one in [26].
Definition 11 (Strong executability)
We define a program $\pi$ to be strongly executable on a pointed model $(\mathcal{M},s)$ by induction on $\pi$ as follows: (1) $\epsilon$ is always strongly executable on $(\mathcal{M},s)$; (2) $a$ is strongly executable on $(\mathcal{M},s)$ if $\to_a(s)\neq\emptyset$; (3) $\pi_1;\pi_2$ is strongly executable on $(\mathcal{M},s)$ if $\pi_1$ is strongly executable on $(\mathcal{M},s)$ and $\pi_2$ is strongly executable on each $t\in\mathcal{E}(\pi_1,s)$; (4) “if $K_i\varphi$ then $\pi_1$ else $\pi_2$” is strongly executable on $(\mathcal{M},s)$ if either $\mathcal{M},s\vDash K_i\varphi$ and $\pi_1$ is strongly executable on $(\mathcal{M},s)$, or $\mathcal{M},s\nvDash K_i\varphi$ and $\pi_2$ is strongly executable on $(\mathcal{M},s)$. We say that $\pi$ is strongly executable on a set $S$ of states if it is strongly executable on $(\mathcal{M},s)$ for each $s\in S$.
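Strong executability follows the same recursion, clause by clause (a sketch under the same encoding assumptions as above):

```python
def strongly_executable(m: ETS, s, pi) -> bool:
    op = pi[0]
    if op == "eps":                            # (1) always executable
        return True
    if op == "act":                            # (2) some a-successor exists
        return len(m.succ(pi[1], s)) > 0
    if op == "seq":                            # (3) first part, then second
        return (strongly_executable(m, s, pi[1]) and
                all(strongly_executable(m, t, pi[2])
                    for t in exec_set(m, s, pi[1])))
    if op == "if":                             # (4) the chosen branch
        _, cond, p1, p2 = pi
        return strongly_executable(m, s, p1 if holds(m, s, cond) else p2)
    raise ValueError(f"unknown plan constructor: {op!r}")
```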
To define the model checking algorithm later, we need to construct the tree of possible executions of a plan where each node is essentially an epistemic equivalence class (usually called a belief state in automated planning).
Definition 12 (Execution tree)
Given a model $\mathcal{M}$ and an agent $i$, an execution tree of $\mathcal{M}$ for $i$ is a labeled tree $\mathcal{T}=\langle N,E,l\rangle$, where $\langle N,E\rangle$ is a tree consisting of a nonempty set $N$ of nodes and a set $E\subseteq N\times N$ of edges, and $l$ is a label function that labels each node in $N$ with an $i$-belief-state and each edge in $E$ with an action in $\mathbf{A}_i$, such that, for all $(n,n')\in E$ with $l((n,n'))=a$: (1) $a$ is strongly executable on $l(n)$; (2) $l((n,n''))=a$ if $(n,n'')\in E$, i.e., all outgoing edges of a node carry the same action; (3) there exists $t\in l(n')$ such that $t\in\to_a(l(n))$; (4) for each $t\in\to_a(l(n))$, there exists $n''$ such that $(n,n'')\in E$ and $t\in l(n'')$.
We use $r_{\mathcal{T}}$ (or simply $r$) to refer to the root node of the execution tree $\mathcal{T}$. The following two propositions show the relations between execution trees and knowledge-based plans.
Proposition 13
Given a model $\mathcal{M}$ and a program $\pi\in\mathsf{PL}_i$, if $\pi$ is strongly executable on $[s]_i$, then there is a finite execution tree $\mathcal{T}$ of $\mathcal{M}$ for $i$ such that $l(r)=[s]_i$ and that for each leaf node $n$, $l(n)\subseteq\mathcal{E}(\pi,[s]_i)$.
Proposition 14
Let $\mathcal{T}$ be a finite execution tree of $\mathcal{M}$ for $i$. There exists $\pi\in\mathsf{PL}_i$ such that $\pi$ is strongly executable on $l(r)$ and that for each $t\in\mathcal{E}(\pi,l(r))$, there exists a leaf node $n$ such that $t\in l(n)$.
4 Model checking knowledge-how
Extending the work of [17], we introduce the multi-agent epistemic language of know-how and its semantics and study the model checking complexity in detail.
Definition 15 (ELKh Language)
Let $\mathbf{P}$ be a set of propositional letters and $\mathbf{I}$ be a set of agents. The language of epistemic logic with the know-how operator ($\mathsf{ELKh}$) is defined as follows:

$\varphi ::= p \mid \neg\varphi \mid (\varphi\wedge\varphi) \mid K_i\varphi \mid \mathit{Kh}_i\varphi$

where $p\in\mathbf{P}$ and $i\in\mathbf{I}$.
Definition 16 (Semantics)
The truth conditions for basic propositions, Boolean connectives and the epistemic operator $K_i$ are as before in the case of $\mathsf{EAL}$. For the know-how operator:

$\mathcal{M},s\vDash \mathit{Kh}_i\varphi$ iff there exists $\pi\in\mathsf{PL}_i$ such that (1) $\pi$ is strongly executable on $[s]_i$, and (2) $\mathcal{M},t\vDash\varphi$ for each $t\in\mathcal{E}(\pi,[s]_i)$.
The readers can go back to Example 1 to verify the truth of the formula mentioned there.
We can show that $\mathsf{ELKh}$ is invariant under the bisimulation defined in Definition 5, which also matches labelled transitions (cf. [17]).
Proposition 17
If $(\mathcal{M},s)\leftrightarrow(\mathcal{N},t)$ then $\mathcal{M},s\vDash\varphi$ iff $\mathcal{N},t\vDash\varphi$ for all $\varphi\in\mathsf{ELKh}$.
In the remainder of this section, we will propose a model checking algorithm for $\mathsf{ELKh}$. The key part of the model checking algorithm is to check the $\mathit{Kh}_i$-formulas, and the problem of whether $\mathcal{M},s\vDash\mathit{Kh}_i\varphi$ depends on whether there is a good plan for $i$. Our strategy is to reduce the problem of whether there is a good plan for $i$ to the nondeterministic planning problem in [7], which we will briefly introduce below.
Definition 18 (Planning problem [7])
Let $\mathbb{S}=(S,A,\{\to_a\}_{a\in A})$ be a labeled transition system, where $S$ is a finite set, $A$ is a set of actions, and $\to_a$ is a binary relation on $S$ for each $a\in A$. A fully observable nondeterministic planning problem is a tuple $(\mathbb{S},s_0,G)$ where $s_0\in S$ is the initial state, and $G\subseteq S$ is the goal set.
Given a labeled transition system (note that there is no epistemic relation), we often write $(s,t)\in\to_a$ as $s\stackrel{a}{\to}t$. A sequence $s_0a_1s_1\cdots a_ns_n$ with $s_k\stackrel{a_{k+1}}{\to}s_{k+1}$ for all $0\le k<n$ is called an execution trace of $\mathbb{S}$ from $s_0$. A partial function $\sigma: S\rightharpoonup A$ is called a policy. An execution trace is induced by $\sigma$ if $a_{k+1}=\sigma(s_k)$ for all $0\le k<n$. It is complete w.r.t. $\sigma$ if $s_n$ is not in the domain of $\sigma$.
Definition 19 (Strong plan)
A policy $\sigma$ is a strong plan for a planning problem $(\mathbb{S},s_0,G)$ if each complete execution trace induced by $\sigma$ from $s_0$ is finite and terminates in $G$.
Proposition 20
Let $|\mathbb{S}|$ denote the size of $\mathbb{S}$, i.e., $|S|+\sum_{a\in A}|\to_a|$. The existence of strong plans for a planning problem is decidable in PTIME in $|\mathbb{S}|$.
Proof 4.21.
It follows from the results in [7]: the set of states from which a strong plan to $G$ exists can be computed by iterating the strong preimage of the current winning set, which stabilizes after at most $|S|$ rounds, each computable in time polynomial in $|\mathbb{S}|$.
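The backward fixpoint from [7] can be sketched as follows; the triple-based encoding of transitions and all helper names are our own.

```python
# Strong-plan existence: grow the winning set from the goal by adding any
# state with an action whose (non-empty) successor set lies inside the
# current winning set. At most |S| rounds, so polynomial overall.
def strong_plan_exists(S, A, R, s0, goal) -> bool:
    succ = {}
    for (s, a, t) in R:                        # R as a set of (s, a, t) triples
        succ.setdefault((s, a), set()).add(t)
    win = set(goal)
    changed = True
    while changed:
        changed = False
        for s in S - win:
            for a in A:
                ts = succ.get((s, a), set())
                if ts and ts <= win:           # some successor, all winning
                    win.add(s)
                    changed = True
                    break
    return s0 in win
```

The strong plan itself can be read off by recording, for each state added to `win`, the action that put it there.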
Now we define the nondeterministic planning problem that corresponds to $\mathcal{M},s\vDash\mathit{Kh}_i\varphi$. Since the language of $\mathsf{ELKh}$ is a multi-agent one where different agents have different abilities, the corresponding nondeterministic planning problem must take the agent into account. This makes the reduction more complex than that for single-agent know-how logics.
Given a model $\mathcal{M}$ and an agent $i$, a planning problem for $i$ is $(\mathbb{S},s_0,G)$ where $s_0$ is an $i$-belief-state, $G$ is a set of $i$-belief-states, and $\mathbb{S}$ is a labeled transition system whose states are the $i$-belief-states of $\mathcal{M}$, whose actions are those in $\mathbf{A}_i$, and where $[s]_i\stackrel{a}{\to}[t]_i$ iff $a$ is strongly executable on $[s]_i$ and there exists $s'\in[s]_i$ such that $s'\to_a t'$ for some $t'\in[t]_i$.
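Concretely, the belief-state transition system for agent $i$ can be built as follows (a sketch reusing the helpers above; the exact representation is our own choice):

```python
# Nodes are i-belief-states; [s]_i -a-> [t]_i when a (in A_i) is strongly
# executable on [s]_i and some member of [s]_i reaches some member of [t]_i.
def belief_planning_lts(m: ETS, i):
    cells = {m.belief_state(i, s) for s in m.states}
    R = set()
    for B in cells:
        for a in m.agent_actions[i]:
            if all(m.succ(a, s) for s in B):   # a strongly executable on B
                for s in B:
                    for t in m.succ(a, s):
                        R.add((B, a, m.belief_state(i, t)))
    return cells, set(m.agent_actions[i]), R
```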
We will show that the problem of whether $\mathcal{M},s\vDash\mathit{Kh}_i\varphi$ indeed can be reduced to a nondeterministic planning problem for $i$ (Lemma 4.24). Before that, we need two auxiliary propositions.
Proposition 4.22.
Given a model $\mathcal{M}$ and an agent $i$, if a program $\pi\in\mathsf{PL}_i$ is strongly executable on $[s]_i$, then the planning problem $(\mathbb{S},[s]_i,G)$ where $G=\{[t]_i\mid t\in\mathcal{E}(\pi,s')$ for some $s'\in[s]_i\}$ has a strong plan.
Proposition 4.23.
Given $\mathcal{M}$ and $i$, if a planning problem for $i$, $(\mathbb{S},[s]_i,G)$, has a strong plan, then there exists $\pi\in\mathsf{PL}_i$ such that $\pi$ is strongly executable on $[s]_i$ and that for each $t\in\mathcal{E}(\pi,[s]_i)$, there exists $B\in G$ such that $t\in B$.
Lemma 4.24.
Given a model $\mathcal{M}$, we have that $\mathcal{M},s\vDash\mathit{Kh}_i\varphi$ iff the planning problem $(\mathbb{S},[s]_i,G)$ where $G=\{[t]_i\mid\mathcal{M},[t]_i\vDash\varphi\}$ has a strong plan.
Proof 4.25.
If $\mathcal{M},s\vDash\mathit{Kh}_i\varphi$, it follows that there exists $\pi\in\mathsf{PL}_i$ such that $\pi$ is strongly executable on $[s]_i$ and that $\mathcal{M},t\vDash\varphi$ for each $t\in\mathcal{E}(\pi,[s]_i)$. By Proposition 10, it follows that $[t]_i\subseteq\mathcal{E}(\pi,[s]_i)$ for each $t\in\mathcal{E}(\pi,[s]_i)$. Let $G'$ be the set $\{[t]_i\mid t\in\mathcal{E}(\pi,s')$ for some $s'\in[s]_i\}$. By Proposition 17, we have that $\mathcal{M},[t]_i\vDash\varphi$ for each $[t]_i\in G'$. It follows that $G'\subseteq G$. Let $P'$ be the planning problem $(\mathbb{S},[s]_i,G')$. By Proposition 4.22, the planning problem $P'$ has a strong plan. Since $G'\subseteq G$, it follows that the planning problem $(\mathbb{S},[s]_i,G)$ also has a strong plan. For the converse direction, suppose the planning problem has a strong plan; then by Proposition 4.23 there exists $\pi\in\mathsf{PL}_i$ that is strongly executable on $[s]_i$ such that every $t\in\mathcal{E}(\pi,[s]_i)$ falls in some $[u]_i\in G$, thus $\mathcal{M},t\vDash\varphi$, and hence $\mathcal{M},s\vDash\mathit{Kh}_i\varphi$.
Finally, we are ready to show the upper bound of model checking ELKh.
Theorem 4.26.
The model checking problem of ELKh is in PTIME in the size of models.
Proof 4.27.
Algorithm 2 presents the model checking algorithm MC for $\mathsf{ELKh}$ on epistemic transition systems with perfect recall. By Lemma 4.24, we call the procedure StrongPlanExistence presented in Algorithm 1 to check whether $\mathcal{M},s\vDash\mathit{Kh}_i\varphi$. Proposition 20 shows that the procedure StrongPlanExistence can be computed in PTIME in the size of $\mathbb{S}$. Since the size of $\mathbb{S}$ is bounded by the size of $\mathcal{M}$, it follows that Algorithm 1 is in PTIME in the size of $\mathcal{M}$.
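Putting the pieces together, the following sketch mirrors the structure of MC, with the $\mathit{Kh}_i$ case delegated to strong-plan existence via Lemma 4.24; memoizing the set of states satisfying each subformula (omitted here for brevity) gives the PTIME bound. Function names are ours, not the paper's.

```python
def mc(m: ETS, s, phi) -> bool:
    op = phi[0]
    if op == "p":
        return phi[1] in m.val[s]
    if op == "not":
        return not mc(m, s, phi[1])
    if op == "and":
        return mc(m, s, phi[1]) and mc(m, s, phi[2])
    if op == "K":                        # K_i over the i-belief-state
        _, i, psi = phi
        return all(mc(m, t, psi) for t in m.belief_state(i, s))
    if op == "Kh":                       # Kh_i via Lemma 4.24
        _, i, psi = phi
        cells, acts, R = belief_planning_lts(m, i)
        goal = {B for B in cells if all(mc(m, t, psi) for t in B)}
        return strong_plan_exists(cells, acts, R,
                                  m.belief_state(i, s), goal)
    raise ValueError(f"unknown operator: {op!r}")
```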
5 Axiomatization
In this section, we axiomatize our logic of know-how over finite models with perfect recall. Note that we cannot apply the generic completeness result in [17], since our setting is not only multi-agent but also comes with the extra condition of perfect recall. It turns out the axiomatization is the multi-agent variant of the system in [9], which has one more axiom than the proof system in [17]. Note that although the proof system is similar, the semantics of the $\mathit{Kh}$ operator in [9] is based on model-dependent functions from equivalence classes to actions instead of the syntactically specified plans in our work.
The proof system is presented in Table 1.
| Axioms | |
|---|---|
| TAUT | all axioms of propositional logic |
| DISTK | $K_i(\varphi\to\psi)\to(K_i\varphi\to K_i\psi)$ |
| T | $K_i\varphi\to\varphi$ |
| 4 | $K_i\varphi\to K_iK_i\varphi$ |
| 5 | $\neg K_i\varphi\to K_i\neg K_i\varphi$ |
| AxKtoKh | $K_i\varphi\to\mathit{Kh}_i\varphi$ |
| AxKhtoKhK | $\mathit{Kh}_i\varphi\to\mathit{Kh}_iK_i\varphi$ |
| AxKhtoKKh | $\mathit{Kh}_i\varphi\to K_i\mathit{Kh}_i\varphi$ |
| AxKhbot | $\neg\mathit{Kh}_i\bot$ |
| COMPKh | $\mathit{Kh}_iK_i\mathit{Kh}_i\varphi\to\mathit{Kh}_i\varphi$ |

| Rules | |
|---|---|
| MP | from $\varphi$ and $\varphi\to\psi$ infer $\psi$ |
| NECK | from $\varphi$ infer $K_i\varphi$ |
| MONOKh | from $\varphi\to\psi$ infer $\mathit{Kh}_i\varphi\to\mathit{Kh}_i\psi$ |
The property of perfect recall is sufficient to guarantee the validity of AxKhtoKhK, but the axiom cannot characterize the property, as is shown in [17]. Moreover, the composition axiom COMPKh is a weaker version of the composition axiom $\mathit{Kh}_i\mathit{Kh}_i\varphi\to\mathit{Kh}_i\varphi$ in [17].
Due to the axiom AxKhtoKKh, the stronger composition axiom in [17] can be derived in our system based on COMPKh.
Proposition 5.28.
$\vdash \mathit{Kh}_i\mathit{Kh}_i\varphi\to\mathit{Kh}_i\varphi$.
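Assuming the axiom names in Table 1, the derivation can be spelled out as the following sketch:

```latex
\begin{align*}
&1.\ \mathit{Kh}_i\varphi \to K_i\mathit{Kh}_i\varphi
   && \text{AxKhtoKKh}\\
&2.\ \mathit{Kh}_i\mathit{Kh}_i\varphi \to \mathit{Kh}_iK_i\mathit{Kh}_i\varphi
   && \text{MONOKh applied to 1}\\
&3.\ \mathit{Kh}_iK_i\mathit{Kh}_i\varphi \to \mathit{Kh}_i\varphi
   && \text{COMPKh}\\
&4.\ \mathit{Kh}_i\mathit{Kh}_i\varphi \to \mathit{Kh}_i\varphi
   && \text{from 2 and 3 by propositional reasoning}
\end{align*}
```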
Next, we will show that the proof system is sound and complete over finite models with perfect recall. By Propositions 13 and 14, it follows that there is a correspondence between execution trees and plans. With this correspondence result, we can significantly simplify the proofs of the soundness and completeness of the proof system compared with those in [9] and [17]. More specifically, the validity of Axiom COMPKh, which was highly non-trivial to establish in [9], follows from the fact that two subsequent execution trees can be combined into one execution tree. (The detailed proof can be found in the following theorem of soundness.) For the completeness, the corresponding execution tree plays an important role in the proof of the truth lemma (Lemma 5.40).
Theorem 5.29 (Soundness).
The proof system is sound over finite models with perfect recall.
Proof 5.30.
The validity of the axioms DISTK, T, 4 and 5 is due to the standard semantics for $K_i$.
The axiom AxKtoKh says that if $\varphi$ is known then you know how to achieve $\varphi$. Its validity is guaranteed by the empty program $\epsilon$. The axiom AxKhtoKhK is valid due to Proposition 10. The axiom AxKhtoKKh is the positive introspection axiom for $\mathit{Kh}_i$, whose validity is due to the semantics for $\mathit{Kh}_i$. The axiom AxKhbot says we cannot guarantee a contradiction, which indirectly requires that the plan should terminate somewhere and is guaranteed by condition (1) of the semantics for $\mathit{Kh}_i$.
Finally, we will show the validity of the axiom COMPKh, that is, $\mathcal{M},s\vDash\mathit{Kh}_i\varphi$ if $\mathcal{M},s\vDash\mathit{Kh}_iK_i\mathit{Kh}_i\varphi$. If $\mathcal{M},s\vDash\mathit{Kh}_iK_i\mathit{Kh}_i\varphi$, it follows that there is a program $\pi\in\mathsf{PL}_i$ such that $\pi$ is strongly executable on $[s]_i$ and that $\mathcal{M},t\vDash K_i\mathit{Kh}_i\varphi$ for each $t\in\mathcal{E}(\pi,[s]_i)$. By Proposition 13, there is a finite execution tree $\mathcal{T}$ of $\mathcal{M}$ for $i$ such that $l(r)=[s]_i$ and that for each leaf node $n$, $l(n)\subseteq\mathcal{E}(\pi,[s]_i)$. It follows that $\mathcal{M},l(n)\vDash\mathit{Kh}_i\varphi$ for each leaf node $n$. We then have that for each leaf node $n$, there is a program $\pi_n\in\mathsf{PL}_i$ such that $\pi_n$ is strongly executable on $l(n)$ and that $\mathcal{M},t'\vDash\varphi$ for all $t'\in\mathcal{E}(\pi_n,l(n))$. By Proposition 13 again, for each leaf $n$ of $\mathcal{T}$, there is a finite execution tree $\mathcal{T}_n$ of $\mathcal{M}$ for $i$ such that the root of $\mathcal{T}_n$ is labeled with $l(n)$ and that for each leaf node $n'$ of $\mathcal{T}_n$, $l(n')\subseteq\mathcal{E}(\pi_n,l(n))$. Moreover, since $\mathcal{M},t'\vDash\varphi$ for all $t'\in\mathcal{E}(\pi_n,l(n))$, it follows that $\mathcal{M},l(n')\vDash\varphi$ for each leaf node $n'$ of $\mathcal{T}_n$. We then construct an execution tree $\mathcal{T}^*$ of $\mathcal{M}$ for $i$ by extending each leaf node $n$ of $\mathcal{T}$ with $\mathcal{T}_n$. Since $\mathcal{T}$ and all $\mathcal{T}_n$ are finite, it follows that $\mathcal{T}^*$ is finite. Moreover, we have that the root of $\mathcal{T}^*$ is labeled with $[s]_i$ and $\mathcal{M},l(n')\vDash\varphi$ for each leaf node $n'$ of $\mathcal{T}^*$. By Proposition 14, there is a program $\pi^*\in\mathsf{PL}_i$ such that $\pi^*$ is strongly executable on $[s]_i$ and that for each $t\in\mathcal{E}(\pi^*,[s]_i)$, there exists a leaf node $n'$ of $\mathcal{T}^*$ such that $t\in l(n')$. Since $\mathcal{M},l(n')\vDash\varphi$ for each leaf node $n'$ of $\mathcal{T}^*$, by Proposition 17, we have that $\mathcal{M},t\vDash\varphi$ for all $t\in\mathcal{E}(\pi^*,[s]_i)$. Thus, we have that $\mathcal{M},s\vDash\mathit{Kh}_i\varphi$.
Next, we will show the completeness of the proof system. The key is to show that every consistent formula is satisfiable. We will construct a canonical model that consists of all atoms, which are maximal consistent sets with respect to a closure set of formulas, and show that every consistent formula is satisfied in the canonical model.
Given a set of formulas $\Delta$, let: $\Delta|_{K_i}=\{K_i\varphi\mid K_i\varphi\in\Delta\}$, $\Delta|_{\neg K_i}=\{\neg K_i\varphi\mid \neg K_i\varphi\in\Delta\}$, $\Delta|_{\mathit{Kh}_i}=\{\mathit{Kh}_i\varphi\mid \mathit{Kh}_i\varphi\in\Delta\}$, $\Delta|_{\neg\mathit{Kh}_i}=\{\neg\mathit{Kh}_i\varphi\mid \neg\mathit{Kh}_i\varphi\in\Delta\}$. Let $\Gamma$ be a finite subformula-closed set of $\mathsf{ELKh}$-formulas.
Definition 5.31.
The closure $cl(\Gamma)$ is defined as:

$cl(\Gamma)=\Gamma\cup\{K_i\varphi,\ K_i\mathit{Kh}_i\varphi \mid \mathit{Kh}_i\varphi$ occurs in $\Gamma\}$.
If $\Gamma$ is finite, so is $cl(\Gamma)$. Next, we will use it to build a canonical model with respect to $cl(\Gamma)$.
Definition 5.32 (Atom).
We enumerate the formulas in $cl(\Gamma)$ by $\varphi_1,\dots,\varphi_k$ where $k=|cl(\Gamma)|$. The formula set $\Delta=\{\psi_1,\dots,\psi_k\}$ is an atom of $cl(\Gamma)$ if
• $\psi_j=\varphi_j$ or $\psi_j=\neg\varphi_j$ for each $1\le j\le k$;
• $\Delta$ is consistent in the proof system.
The following two propositions are similar to their single-agent versions in [9].
Proposition 5.33.
Let $\Delta$ be an atom of $cl(\Gamma)$ and $K_i\varphi\in cl(\Gamma)$. If $\neg K_i\varphi\in\Delta$ then there exists an atom $\Delta'$ of $cl(\Gamma)$ such that $\Delta|_{K_i}\subseteq\Delta'$ and $\neg\varphi\in\Delta'$.
Proposition 5.34.
Let $\Delta$ and $\Delta'$ be two atoms of $cl(\Gamma)$ such that $\Delta|_{K_i}\subseteq\Delta'$. We have $\Delta|_{K_i}=\Delta'|_{K_i}$.
Definition 5.35 (Canonical model).
Given a subformula-closed set $\Gamma$, the canonical model $\mathcal{M}^c=(W^c,\{\mathbf{A}^c_i\}_{i\in\mathbf{I}},\{\sim^c_i\}_{i\in\mathbf{I}},\{\to^c_a\}_{a\in\mathbf{A}^c},V^c)$ is defined as:
• $W^c=\{\Delta\mid\Delta$ is an atom of $cl(\Gamma)\}$;
• for each $i\in\mathbf{I}$, $\mathbf{A}^c_i=\{a^i_\varphi\mid \mathit{Kh}_i\varphi\in cl(\Gamma)\}$;
• for each $i\in\mathbf{I}$, $\Delta\sim^c_i\Delta'$ iff $\Delta|_{K_i}=\Delta'|_{K_i}$;
• for each $a^i_\varphi\in\mathbf{A}^c$, we have that $\Delta\to^c_{a^i_\varphi}\Delta'$ iff $\mathit{Kh}_i\varphi\in\Delta$ and $K_i\varphi\in\Delta'$;
• for each $\Delta\in W^c$, $V^c(\Delta)=\{p\in\mathbf{P}\mid p\in\Delta\}$.
Proposition 5.36.
If $\Delta\to^c_{a^i_\varphi}\Delta'$ and $\Delta'\sim^c_i\Delta''$, then $\Delta\to^c_{a^i_\varphi}\Delta''$. In other words, the model $\mathcal{M}^c$ has perfect recall.
Proof 5.37.
We only need to show that $\Delta\to^c_{a^i_\varphi}\Delta''$. Since $\Delta\to^c_{a^i_\varphi}\Delta'$, it follows that $\mathit{Kh}_i\varphi\in\Delta$ and $K_i\varphi\in\Delta'$. Since $\Delta'\sim^c_i\Delta''$, it follows that $K_i\varphi\in\Delta''$. Thus, we have that $\Delta\to^c_{a^i_\varphi}\Delta''$.
Proposition 5.38.
Let $\Delta$ be a state in $\mathcal{M}^c$ and $a^i_\psi$ be executable at $\Delta$. If $\mathit{Kh}_i\varphi\in\Delta'$ for all $\Delta'\in\to^c_{a^i_\psi}(\Delta)$ then $\mathit{Kh}_i\varphi\in\Delta$.
Proof 5.39.
First, we show that $K_i\psi$ is not consistent with $\neg\mathit{Kh}_i\varphi$. Since $a^i_\psi$ is executable at $\Delta$, it follows that $\mathit{Kh}_i\psi\in\Delta$ and that there is some state in $W^c$ that contains $K_i\psi$. Moreover, it is obvious that $K_i\psi$ is consistent. Assume that $K_i\psi$ is consistent with $\neg\mathit{Kh}_i\varphi$. By a Lindenbaum argument, there exists an atom $\Delta''$ of $cl(\Gamma)$ such that $\{K_i\psi,\neg\mathit{Kh}_i\varphi\}\subseteq\Delta''$. By the definition of $\to^c_{a^i_\psi}$, it follows that $\Delta\to^c_{a^i_\psi}\Delta''$. Since we know that $\mathit{Kh}_i\varphi\in\Delta'$ for all $\Delta'\in\to^c_{a^i_\psi}(\Delta)$, it follows that $\mathit{Kh}_i\varphi\in\Delta''$. This contradicts the fact that $\Delta''$ is consistent. Thus, $K_i\psi$ is not consistent with $\neg\mathit{Kh}_i\varphi$. Hence, we have that $\vdash K_i\psi\to\mathit{Kh}_i\varphi$.
Since $\vdash K_i\psi\to\mathit{Kh}_i\varphi$, it follows by Rule NECK and Axiom DISTK (together with Axiom 4) that $\vdash K_i\psi\to K_i\mathit{Kh}_i\varphi$, and then by Rule MONOKh and Axiom AxKhtoKhK that $\vdash\mathit{Kh}_i\psi\to\mathit{Kh}_iK_i\mathit{Kh}_i\varphi$. Moreover, it follows by Axiom COMPKh that $\vdash\mathit{Kh}_i\psi\to\mathit{Kh}_i\varphi$. Since we have shown that $\mathit{Kh}_i\psi\in\Delta$, we have that $\mathit{Kh}_i\varphi\in\Delta$.
Lemma 5.40 (Truth Lemma).
For each $\varphi\in cl(\Gamma)$ and each $\Delta\in W^c$, $\mathcal{M}^c,\Delta\vDash\varphi$ iff $\varphi\in\Delta$.
Proof 5.41.
We prove it by induction on $\varphi$. The atomic and Boolean cases are straightforward. The case of $K_i\varphi$ can be proved using Proposition 5.33. Next, we only focus on the case of $\mathit{Kh}_i\varphi$.
Right to Left: If $\mathit{Kh}_i\varphi\in\Delta$, we will show $\mathcal{M}^c,\Delta\vDash\mathit{Kh}_i\varphi$. We first show that $K_i\varphi$ is consistent. If not, namely $\vdash\neg K_i\varphi$, it follows by Rule MONOKh that $\vdash\mathit{Kh}_iK_i\varphi\to\mathit{Kh}_i\bot$. It follows by Axiom AxKhbot that $\vdash\neg\mathit{Kh}_iK_i\varphi$. Since $\mathit{Kh}_i\varphi\in\Delta$, it follows by Axiom AxKhtoKhK that $\Delta\vdash\mathit{Kh}_iK_i\varphi$, which is in contradiction with the fact that $\Delta$ is consistent. Therefore, $K_i\varphi$ is consistent.
By Lindenbaum’s Lemma, there exists an atom $\Delta'$ such that $K_i\varphi\in\Delta'$. By the definition of $\to^c_{a^i_\varphi}$, it follows that $\Theta\to^c_{a^i_\varphi}\Delta'$ for each $\Theta\in[\Delta]_i$ (note that $\mathit{Kh}_i\varphi\in\Theta$ for each $\Theta\in[\Delta]_i$ by Axiom AxKhtoKKh and the definition of $\sim^c_i$). It means $a^i_\varphi$ is strongly executable on $[\Delta]_i$. For each $\Theta'\in\to^c_{a^i_\varphi}([\Delta]_i)$, by the definition of $\to^c_{a^i_\varphi}$, it follows that $K_i\varphi\in\Theta'$. By Axiom T, it follows that $\varphi\in\Theta'$. By IH, we then have that $\mathcal{M}^c,\Theta'\vDash\varphi$ for each $\Theta'\in\mathcal{E}(a^i_\varphi,[\Delta]_i)$. Thus, we have that $\mathcal{M}^c,\Delta\vDash\mathit{Kh}_i\varphi$.
Left to Right: Suppose $\mathcal{M}^c,\Delta\vDash\mathit{Kh}_i\varphi$; we will show $\mathit{Kh}_i\varphi\in\Delta$. Since $\mathcal{M}^c,\Delta\vDash\mathit{Kh}_i\varphi$, it follows that there exists $\pi\in\mathsf{PL}_i$ such that $\pi$ is strongly executable on $[\Delta]_i$ and that $\mathcal{M}^c,\Theta\vDash\varphi$ for all $\Theta\in\mathcal{E}(\pi,[\Delta]_i)$. By IH, we have that $\varphi\in\Theta$ for all $\Theta\in\mathcal{E}(\pi,[\Delta]_i)$. Moreover, for each $\Theta\in\mathcal{E}(\pi,[\Delta]_i)$, if $\Theta\sim^c_i\Theta'$, by Proposition 5.36 (together with Proposition 10), it follows that $\Theta'\in\mathcal{E}(\pi,[\Delta]_i)$, and we then have that $\varphi\in\Theta'$. Moreover, with Proposition 5.33, it is easy to check that $K_i\varphi\in\Theta$ for all $\Theta\in\mathcal{E}(\pi,[\Delta]_i)$.
Since $\pi$ is strongly executable on $[\Delta]_i$, by Proposition 13, there is a finite execution tree $\mathcal{T}$ of $\mathcal{M}^c$ for $i$ such that $l(r)=[\Delta]_i$ and that for each leaf node $n$, $l(n)\subseteq\mathcal{E}(\pi,[\Delta]_i)$. Since we have shown that $K_i\varphi\in\Theta$ for all $\Theta\in\mathcal{E}(\pi,[\Delta]_i)$, it follows that if $n$ is a leaf node then $K_i\varphi$ is in all states in $l(n)$. Next, we firstly show the following claim:

Claim: for each node $n$ of $\mathcal{T}$, $\mathit{Kh}_i\varphi\in\Theta$ for all $\Theta\in l(n)$.
Please note that the execution tree $\mathcal{T}$ is finite. We prove the claim above by induction on the height of nodes. If the height of $n$ is $0$, it means that $n$ is a leaf node. It follows that $K_i\varphi\in\Theta$ for each $\Theta\in l(n)$. By Axiom AxKtoKh, it follows that $\mathit{Kh}_i\varphi\in\Theta$ for each $\Theta\in l(n)$. With the IH that the claim holds for each node whose height is less than $k$, we will show that the claim holds for nodes whose height is $k$. Given a node $n$ whose height is $k$, since $k>0$, it follows that there is a node $n'$ such that $(n,n')$ is an edge in $\mathcal{T}$ and the height of $n'$ is less than $k$; in other words, $n'$ is a child node of $n$. Let $l((n,n'))=a$ and $\Theta\in l(n)$. Since $\mathcal{T}$ is an execution tree, it follows that $a$ is strongly executable on $l(n)$, and then $a$ is executable on $\Theta$. For each $\Theta'\in\to^c_a(\Theta)$, by the definition of execution trees, it follows that there is an $a$-child $n''$ of $n$ such that $\Theta'\in l(n'')$. Since $n''$ is a child of $n$, by IH, it follows that $\mathit{Kh}_i\varphi\in\Theta'$. Thus, we have shown that $\mathit{Kh}_i\varphi\in\Theta'$ for each $\Theta'\in\to^c_a(\Theta)$. By Proposition 5.38, it follows that $\mathit{Kh}_i\varphi\in\Theta$. Thus, we have shown the claim. Since the root is labeled with $[\Delta]_i$, we then have that $\mathit{Kh}_i\varphi\in\Delta$.
Let $\varphi$ be a consistent formula. By a Lindenbaum argument, it follows that there is an atom $\Delta$ of $cl(\Gamma)$ such that $\varphi\in\Delta$, where $\Gamma$ is the set of subformulas of $\varphi$. By Proposition 5.36 and Lemma 5.40, it follows that $\varphi$ is satisfiable over finite models with perfect recall. The completeness then follows:
Theorem 5.42 (Completeness).
The proof system is complete over finite models with perfect recall.
From the construction of the canonical model in Definition 5.35, we can see that both the state set and the action set are bounded in the size of the input formula. This implies that the logic has a small model property. Moreover, by Theorem 4.26, the decidability then follows:
Theorem 5.43.
The logic ELKh is decidable.
6 Conclusion
In this paper, we propose the notion of higher-order epistemic planning by using an epistemic logic of knowing how. The planning problems can be encoded as model checking problems in the framework, which can be computed in PTIME in the size of the model. We also axiomatize the logic over finite models with perfect recall.
As for future work, besides many theoretical questions about the knowing how logic as it is, we may extend its expressive power to capture conditional knowledge-how, which is very useful in reasoning about planning problems. For example, one may say “I know how to achieve $\varphi$ given $\psi$”. Note that it is very different from $K_i\psi\to\mathit{Kh}_i\varphi$ or $\mathit{Kh}_i(\psi\to\varphi)$. The important difference is that such conditional knowledge-how is global, compared to the current local semantics of $\mathit{Kh}_i$. This is similar to the binary global know-how operator introduced in [26]. It would be interesting to combine the two notions of know-how in the same framework. On the practical side, we can consider the model checking problems over compact representations of the actions and states using the techniques in [16, 17] connecting the explicit-state models with the compact representations.
References
- [2] Thomas Ågotnes, Valentin Goranko & Wojciech Jamroga (2007): Alternating-time temporal logics with irrevocable strategies. In Dov Samet, editor: Proceedings of (TARK 2007), pp. 15–24, 10.1145/1324249.1324256.
- [3] Patrick Blackburn, Maarten de Rijke & Yde Venema (2002): Modal Logic. Cambridge University Press, 10.1017/CBO9781107050884.
- [4] Thomas Bolander & Mikkel Birkegaard Andersen (2011): Epistemic planning for single and multi-agent systems. Journal of Applied Non-Classical Logics 21(1), pp. 9–34, 10.3166/jancl.21.9-34.
- [5] Thomas Bolander, Tristan Charrier, Sophie Pinchinat & François Schwarzentruber (2020): DEL-based epistemic planning: Decidability and complexity. Artificial Intelligence 287, p. 103304, 10.1016/j.artint.2020.103304.
- [6] Thomas Bolander, Martin Holm Jensen & François Schwarzentruber (2015): Complexity Results in Epistemic Planning. In: Proceedings of IJCAI ’15, pp. 2791–2797. Available at http://ijcai.org/Abstract/15/395.
- [7] A. Cimatti, M. Pistore, M. Roveri & P. Traverso (2003): Weak, Strong, and Strong Cyclic Planning via Symbolic Model Checking. Artificial Intelligence 147(1–2), p. 35–84, 10.1016/S0004-3702(02)00374-0.
- [8] Thorsten Engesser, Robert Mattmüller, Bernhard Nebel & Michael Thielscher (2021): Game description language and dynamic epistemic logic compared. Artificial Intelligence 292, p. 103433, 10.1016/j.artint.2020.103433.
- [9] Raul Fervari, Andreas Herzig, Yanjun Li & Yanjing Wang (2017): Strategically knowing how. In: Proceedings of IJCAI ’17, pp. 1031–1038, 10.24963/ijcai.2017/143.
- [10] Malik Ghallab, Dana Nau & Paolo Traverso (2004): Automated planning: theory and practice. Elsevier.
- [11] Fausto Giunchiglia & Paolo Traverso (2000): Planning as Model Checking, 10.1007/10720246_1.
- [12] Andreas Herzig (2015): Logics of knowledge and action: critical analysis and challenges. Autonomous Agents and Multi-Agent Systems 29(5), pp. 719–753, 10.1007/s10458-014-9267-z.
- [13] Wojciech Jamroga & Thomas Ågotnes (2007): Constructive knowledge: what agents can achieve under imperfect information. Journal of Applied Non-Classical Logics 17(4), pp. 423–475, 10.3166/jancl.17.423-475.
- [14] Jérôme Lang & Bruno Zanuttini (2012): Knowledge-Based Programs as Plans - The Complexity of Plan Verification. In: ECAI ’12, pp. 504–509, 10.3233/978-1-61499-098-7-504.
- [15] Jérôme Lang & Bruno Zanuttini (2013): Knowledge-Based Programs as Plans : Succinctness and the Complexity of Plan Existence (Extended Abstract). In: TARK ’13. Available at https://arxiv.org/abs/1310.6429.
- [16] Yanjun Li & Yanjing Wang (2019): Multi-agent Knowing How via Multi-step Plans: A Dynamic Epistemic Planning Based Approach. In: Proceedings of LORI VII, pp. 126–139, 10.1007/978-3-662-60292-8_10.
- [17] Yanjun Li & Yanjing Wang (2021): Planning-Based Knowing How: A Unified Approach. Artificial Intelligence 296, p. 103487, 10.1016/j.artint.2021.103487.
- [18] Bastien Maubert, Aniello Murano, Sophie Pinchinat, François Schwarzentruber & Silvia Stranieri (2020): Dynamic Epistemic Logic Games with Epistemic Temporal Goals. In: Proceedings of ECAI 2020, IOS Press, pp. 155–162, 10.3233/FAIA200088.
- [19] Bastien Maubert, Sophie Pinchinat, François Schwarzentruber & Silvia Stranieri (2020): Concurrent Games in Dynamic Epistemic Logic. In Christian Bessiere, editor: Proceedings of IJCAI 2020, ijcai.org, pp. 1877–1883, 10.24963/ijcai.2020/260.
- [20] Pavel Naumov & Jia Tao (2018): Second-Order Know-How Strategies. In: Proceedings of AAMAS ’18, pp. 390–398, 10.5555/3237383.3237444.
- [21] Pavel Naumov & Jia Tao (2018): Together we know how to achieve: An epistemic logic of know-how. Artificial Intelligence 262, pp. 279–300, 10.1016/j.artint.2018.06.007.
- [22] Anantha Padmanabha, R. Ramanujam & Yanjing Wang (2018): Bundled fragments of first-order modal logic: (un)decidability. In: Proceedings of FSTTCS ’18, 10.4230/LIPIcs.FSTTCS.2018.43.
- [23] Pierre-Yves Schobbens (2004): Alternating-time logic with imperfect recall. Electron. Notes Theor. Comput. Sci. 85(2), pp. 82–93, 10.1016/S1571-0661(05)82604-0.
- [24] Jason Stanley & Timothy Williamson (2001): Knowing how. The Journal of Philosophy, pp. 411–444, 10.2307/2678403.
- [25] Yanjing Wang (2017): A New Modal Framework for Epistemic Logic. In Jérôme Lang, editor: Proceedings of TARK ’17, EPTCS 251, pp. 515–534, 10.4204/EPTCS.251.38.
- [26] Yanjing Wang (2018): A logic of goal-directed knowing how. Synthese 10, pp. 4419–4439, 10.1007/s11229-016-1272-0.
- [27] Yanjing Wang (2018): Beyond knowing that: a new generation of epistemic logics. In: Jaakko Hintikka on knowledge and game theoretical semantics, Springer, Cham, pp. 499–533, 10.1007/978-3-319-62864-6_21.
- [28] Bruno Zanuttini, Jérôme Lang, Abdallah Saffidine & François Schwarzentruber (2020): Knowledge-based programs as succinct policies for partially observable domains. Artificial Intelligence 288, p. 103365, 10.1016/j.artint.2020.103365.