Asymmetric Norms to Approximate the Minimum Action Distance
Abstract
This paper presents a state representation for reward-free Markov decision processes. The idea is to learn, in a self-supervised manner, an embedding space where distances between pairs of embedded states correspond to the minimum number of actions needed to transition between them. Unlike previous methods, our approach incorporates an asymmetric norm parametrization, enabling accurate approximations of minimum action distances in environments with inherent asymmetry. We show how this representation can be leveraged to learn goal-conditioned policies, providing a notion of similarity between states and goals and a useful heuristic distance to guide planning.
To validate our approach, we conduct empirical experiments on both symmetric and asymmetric environments. Our results show that our asymmetric norm parametrization performs comparably to symmetric norms in symmetric environments and surpasses symmetric norms in asymmetric environments.
1 Minimum Action Distance
In this section, we describe the notion of Minimum Action Distance, and we derive useful ways of computing this measure in finite MDPs as well as in continuous or large MDPs.
We start by introducing some notation.
Definition 1
The Minimum Action Distance (MAD) $d_{MAD}(s, s')$ is defined as the minimum number of decision steps needed to transition from state $s$ to state $s'$, for any pair of states $s, s' \in \mathcal{S}$.
We can observe that the MAD is an asymmetric distance function [Mennucci, 2013] and must satisfy the following properties:
• $d_{MAD}(s, s') \ge 0$ and $d_{MAD}(s, s) = 0$, $\forall s, s' \in \mathcal{S}$.
• $d_{MAD}(s, s') = 0$ implies $s = s'$.
• $d_{MAD}(s, s'') \le d_{MAD}(s, s') + d_{MAD}(s', s'')$, $\forall s, s', s'' \in \mathcal{S}$.
1.1 Learning the Minimum Action Distance from the Adjacency Matrix
In discrete and finite MDPs we can construct the state-transition graph of the MDP. In this section, we review how to learn the minimum action distance from the graph adjacency matrix.
A state-transition graph $G = (V, E)$ of an MDP is a graph with nodes representing the states of the MDP and edges representing state adjacency. More precisely, $(s, s') \in E$ iff there exists an action $a \in \mathcal{A}$ such that $P(s' \mid s, a) > 0$. An adjacency matrix $A$ represents the graph with a square matrix of size $|\mathcal{S}| \times |\mathcal{S}|$ whose entry $A_{s s'}$ is 1 if $(s, s') \in E$ and 0 otherwise:
$$
A_{s s'} = \begin{cases} 1 & \text{if } (s, s') \in E, \\ 0 & \text{otherwise.} \end{cases}
\tag{1}
$$
Having access to the adjacency matrix, we can simply compute the minimum action distance by using the Floyd-Warshall algorithm [Floyd, 1962, Roy, 1959, Warshall, 1962].
The Floyd-Warshall algorithm compares all possible paths through the graph between each pair of vertices. It is able to do this with $\Theta(|V|^3)$ comparisons, even though there may be up to $|V|^2$ edges in the graph and every combination of edges is tested. It does so by incrementally improving an estimate of the shortest path between two vertices until the estimate is optimal.
This dynamic programming procedure relies on having access to the edge weights, which in the case of the MAD reduces to having access to the adjacency matrix of $G$, indicating when two states are connected by an edge.
Thanks to this, we can compute the shortest paths on $G$ by first computing the base cases:
$$
d_{MAD}(s, s') = \begin{cases} 0 & \text{if } s = s', \\ 1 & \text{if } A_{s s'} = 1, \\ \infty & \text{otherwise,} \end{cases}
\tag{2}
$$
and subsequently computing the recursive case leveraging the triangle inequality property:
$$
d_{MAD}(s, s') \leftarrow \min\big( d_{MAD}(s, s'),\ d_{MAD}(s, k) + d_{MAD}(k, s') \big) \quad \forall k \in \mathcal{S}.
\tag{3}
$$
Note that in case we do not have access to the adjacency matrix, it can be recovered by interacting with the environment and visiting all one-step transitions $(s, s')$.
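As an illustration, the following is a minimal Python sketch of equations (2)-(3), computing the MAD from a 0/1 adjacency matrix with Floyd-Warshall; the function name and the NumPy-based implementation are illustrative and not taken from the released code.

```python
import numpy as np

def mad_from_adjacency(adjacency: np.ndarray) -> np.ndarray:
    """Minimum Action Distance matrix from a 0/1 adjacency matrix (Floyd-Warshall)."""
    n = adjacency.shape[0]
    # Base cases (eq. 2): 1 for one-step transitions, 0 on the diagonal, inf otherwise.
    dist = np.full((n, n), np.inf)
    dist[adjacency == 1] = 1.0
    np.fill_diagonal(dist, 0.0)
    # Recursive case (eq. 3): relax every pair through every intermediate state k.
    for k in range(n):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return dist

# Example: a 3-state chain s0 -> s1 -> s2 with no edges back (asymmetric MAD).
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
print(mad_from_adjacency(A))  # d(s0, s2) = 2 while d(s2, s0) = inf
```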
1.2 Symmetric embeddings
The Minimum Action Distance between states is a priori unknown and is not directly observable in continuous and/or noisy state spaces where we cannot simply enumerate the states and construct the adjacency matrix of the MDP. Instead, we will approximate it using the distances between states observed on trajectories. We introduce the notion of Trajectory Distance (TD) as follows:
Definition 2
(Trajectory Distance) Given any trajectory $\tau = (s_0, s_1, \ldots, s_n)$ collected in an MDP and any pair of states $(s_i, s_j)$ along the trajectory such that $i \le j$, we define the Trajectory Distance $d_{TD}(s_i, s_j \mid \tau)$ as
$$
d_{TD}(s_i, s_j \mid \tau) = j - i,
\tag{4}
$$
i.e. the number of decision steps required to reach $s_j$ from $s_i$ on trajectory $\tau$.
We can observe that, given any state trajectory $\tau$ and any pair of states $(s_i, s_j)$ with $i \le j$, their distance along the trajectory is an upper bound on the MAD:
$$
d_{MAD}(s_i, s_j) \le d_{TD}(s_i, s_j \mid \tau) \quad \forall \tau,\ \forall\, i \le j.
\tag{5}
$$
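To make the upper bounds in (4)-(5) concrete, the short sketch below enumerates every ordered state pair of a trajectory together with its trajectory distance; the helper name and data format are illustrative assumptions, not the paper's code.

```python
from itertools import combinations_with_replacement

def trajectory_distance_pairs(trajectory):
    """Yield (s_i, s_j, d_TD = j - i) for every ordered pair of states with i <= j."""
    for i, j in combinations_with_replacement(range(len(trajectory)), 2):
        yield trajectory[i], trajectory[j], j - i

# Example: a 4-state trajectory yields 10 ordered pairs, e.g. (s0, s3, 3) and (s2, s2, 0).
tau = ["s0", "s1", "s2", "s3"]
print(list(trajectory_distance_pairs(tau))[:3])
```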
Given a dataset of trajectories $\mathcal{D}$ collected by any unknown behavior policy, we can retrieve the MAD by solving the following constrained optimization problem:
$$
\min_{d} \ \sum_{\tau \in \mathcal{D}} \sum_{(s_i, s_j) \in \tau} \big( d(s_i, s_j) - d_{TD}(s_i, s_j \mid \tau) \big)^2
\quad \text{s.t.} \quad d(s_i, s_j) \le d_{TD}(s_i, s_j \mid \tau) \ \ \forall \tau \in \mathcal{D},\ \forall (s_i, s_j) \in \tau.
\tag{6}
$$
As a first step, we can leverage the triangle inequality to simplify the constraints in (6) and reduce the dependency on the quality of the trajectories in the dataset $\mathcal{D}$:
$$
\begin{aligned}
\min_{d} \quad & \sum_{\tau \in \mathcal{D}} \sum_{(s_i, s_j) \in \tau} \big( d(s_i, s_j) - d_{TD}(s_i, s_j \mid \tau) \big)^2 \\
\text{s.t.} \quad & d(s_t, s_{t+1}) \le 1 \quad \forall \tau \in \mathcal{D},\ \forall (s_t, s_{t+1}) \in \tau, \\
& d(s_i, s_j) \le d(s_i, s_k) + d(s_k, s_j) \quad \forall s_i, s_j, s_k \in \mathcal{D},
\end{aligned}
\tag{7}
$$
where $(s_t, s_{t+1})$ refers to a one-step transition in a trajectory (i.e. $d_{TD}(s_t, s_{t+1} \mid \tau) = 1$), while $s_i, s_j, s_k$ range over all combinations of states contained in the trajectory dataset $\mathcal{D}$.
Note that the first constraint in (7) imposes an upper bound on one-step transitions, i.e. it says that two states at distance one along a trajectory are either the same state or must satisfy $d(s_t, s_{t+1}) \le 1$. This allows us to approximate the MAD without having to identify whether two states along a trajectory are the same state or not.
Note that the second constraint in (7) corresponds to the triangle inequality, which is thus a property that holds for the MAD. Moreover, this second constraint has to be evaluated for all combinations of states contained in $\mathcal{D}$, which can become intractable for large state spaces.
To address this issue, we proposed an alternative formulation based on embedding the MAD in a parametric embedding space, where a chosen distance metric that respects the triangle inequality (e.g. any norm $\|\cdot\|_p$) enforces the triangle inequality constraint by construction.
The goal is to learn a parametric state embedding $\phi_\theta : \mathcal{S} \to \mathbb{R}^d$ such that the distance between any pair of embedded states $\phi_\theta(s)$ and $\phi_\theta(s')$ approximates the Minimum Action Distance $d_{MAD}(s, s')$.
Steccanella and Jonsson [2022] used this formulation to favour symmetric embeddings, using norms as distance functions, e.g. the L1 norm $\|\phi_\theta(s) - \phi_\theta(s')\|_1$. If we use symmetric embeddings, we have that, for any pair of states $(s, s')$,
$$
\|\phi_\theta(s) - \phi_\theta(s')\| = \|\phi_\theta(s') - \phi_\theta(s)\|.
\tag{8}
$$
The problem of learning this embedding can then be formulated as a constrained optimization problem:
$$
\begin{aligned}
\min_{\theta} \quad & \sum_{\tau \in \mathcal{D}} \sum_{(s_i, s_j) \in \tau} \big( \|\phi_\theta(s_i) - \phi_\theta(s_j)\| - d_{TD}(s_i, s_j \mid \tau) \big)^2 \\
\text{s.t.} \quad & \|\phi_\theta(s_i) - \phi_\theta(s_j)\| \le d_{TD}(s_i, s_j \mid \tau) \quad \forall \tau \in \mathcal{D},\ \forall (s_i, s_j) \in \tau.
\end{aligned}
\tag{9}
$$
Intuitively, the objective is to make the embedded distance between pairs of states as close as possible to the observed trajectory distance while respecting the upper-bound constraints. Without constraints, the objective is minimized when the embedding matches the expected Trajectory Distance between all pairs of states observed on trajectories in the dataset $\mathcal{D}$. In contrast, constraining the solution to match the minimum TD through the upper-bound constraints allows us to approximate the MAD. The precision of this approximation depends on the quality of the given trajectories.
To make the constrained optimization problem tractable, we relax the hard constraints in (9) and convert them into a penalty term, obtaining a simple unconstrained formulation. Moreover, we rely on sampling state pairs $(s_i, s_j)$ and their trajectory distances $d_{TD}(s_i, s_j \mid \tau)$ from the dataset of trajectories $\mathcal{D}$, making this formulation amenable to gradient descent and suitable for the optimization scheme of neural networks:
$$
\min_{\theta} \ \mathbb{E}_{(s_i, s_j, d_{TD}) \sim \mathcal{D}} \Big[ \big( \|\phi_\theta(s_i) - \phi_\theta(s_j)\| - d_{TD} \big)^2 + C(s_i, s_j, d_{TD}) \Big],
\tag{10}
$$
where $C$ is our penalty term, defined as
$$
C(s_i, s_j, d_{TD}) = \max\!\big(0,\ \|\phi_\theta(s_i) - \phi_\theta(s_j)\| - d_{TD}\big)^2.
\tag{11}
$$
The penalty term introduces a quadratic penalization of the objective for violating the upper-bound constraints $\|\phi_\theta(s_i) - \phi_\theta(s_j)\| \le d_{TD}(s_i, s_j \mid \tau)$.
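As an illustration, the following PyTorch sketch implements the relaxed objective (10) with the penalty (11) for a symmetric L1 embedding; the encoder architecture, the `penalty_coef` weighting, and the batch format are assumptions, not the exact released implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps raw states to the embedding space (illustrative architecture)."""
    def __init__(self, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def mad_loss_symmetric(phi: Encoder, s_i, s_j, d_td, penalty_coef: float = 1.0):
    """Squared error to the trajectory distance (eq. 10) plus the quadratic
    penalty (eq. 11) for violating the upper bound dist <= d_td."""
    dist = torch.norm(phi(s_i) - phi(s_j), p=1, dim=-1)      # symmetric L1 latent distance
    objective = (dist - d_td).pow(2)
    penalty = penalty_coef * torch.relu(dist - d_td).pow(2)
    return (objective + penalty).mean()

# Usage with random tensors standing in for (s_i, s_j, d_TD) samples from D.
phi = Encoder(state_dim=4)
optimizer = torch.optim.Adam(phi.parameters(), lr=1e-3)
s_i, s_j = torch.randn(128, 4), torch.randn(128, 4)
d_td = torch.randint(0, 20, (128,)).float()
loss = mad_loss_symmetric(phi, s_i, s_j, d_td)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```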
1.3 Asymmetric embeddings
In the previous section, we have seen how to define the MAD embedding problem with the use of norms $\|\cdot\|_p$. While this formulation is useful to understand how to remove the triangle inequality constraint in (7), the Minimum Action Distance is naturally asymmetric and we would like an embedding that preserves this asymmetry.
A norm is a function $\|\cdot\| : \mathbb{R}^d \to \mathbb{R}$ satisfying, $\forall x, y \in \mathbb{R}^d$:
• (Pos. def.) $\|x\| > 0$, unless $x = 0$.
• (Pos. homo.) $\|\alpha x\| = \alpha \|x\|$, for $\alpha \ge 0$.
• (Subadditivity) $\|x + y\| \le \|x\| + \|y\|$.
• (Symmetry) $\|{-x}\| = \|x\|$.
An asymmetric semi-norm satisfies positive homogeneity and subadditivity, but not necessarily $\|{-x}\| = \|x\|$, $\forall x \in \mathbb{R}^d$.
A convex function is a function $f$ satisfying $f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y)$, $\forall x, y$ and $\lambda \in [0, 1]$. The commonly used ReLU activation, $\mathrm{ReLU}(x) = \max(0, x)$, is convex.
It is easy to observe that any positively homogeneous and subadditive function is convex, and thus that any asymmetric semi-norm is convex.
Motivated by this relationship between convex functions and norms, Pitis et al. [2020] introduced Wide Norms, a parametric distance that models symmetric and asymmetric norms.
A Wide Norm is any combination of symmetric/asymmetric semi-norms. They are based on the Mahalanobis norm of $x \in \mathbb{R}^d$, parametrized by a matrix $W \in \mathbb{R}^{k \times d}$ and defined as $\|x\|_W = \|W x\|_2$.
Asymmetric Wide Norms are obtained by applying an elementwise ReLU after the linear map, which breaks symmetry while preserving positive homogeneity and subadditivity, e.g.
$$
\mathrm{WN}_{\mathrm{asym}}(x) = \sum_{i=1}^{k} \big\| \mathrm{ReLU}(W_i x) \big\|_2.
$$
We can then use the parametrized Wide Norm as the distance function on the embedding space, so that the triangle inequality is satisfied by construction:
$$
\min_{\theta, W} \ \mathbb{E}_{(s_i, s_j, d_{TD}) \sim \mathcal{D}} \Big[ \big( \mathrm{WN}_{\mathrm{asym}}\big(\phi_\theta(s_j) - \phi_\theta(s_i)\big) - d_{TD} \big)^2 + C(s_i, s_j, d_{TD}) \Big],
\tag{12}
$$
where $C$ is our penalty term, defined as
$$
C(s_i, s_j, d_{TD}) = \max\!\big(0,\ \mathrm{WN}_{\mathrm{asym}}\big(\phi_\theta(s_j) - \phi_\theta(s_i)\big) - d_{TD}\big)^2.
\tag{13}
$$
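A possible PyTorch sketch of an asymmetric Wide Norm used as the latent distance, together with the penalized objective (12)-(13), is given below. The exact parametrization (number of Mahalanobis components, placement of the ReLU) is an assumption inspired by Pitis et al. [2020] rather than their reference implementation.

```python
import torch
import torch.nn as nn

class AsymmetricWideNorm(nn.Module):
    """Sum of Mahalanobis-style components with an elementwise ReLU after the
    linear map, which breaks symmetry while keeping subadditivity, so the
    induced latent distance satisfies the triangle inequality."""
    def __init__(self, embed_dim: int, n_components: int = 4, component_dim: int = 16):
        super().__init__()
        # One matrix W_i per component, shape (component_dim, embed_dim).
        self.weights = nn.Parameter(0.1 * torch.randn(n_components, component_dim, embed_dim))

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        """Asymmetric latent distance d(x, y) = sum_i ||ReLU(W_i (y - x))||_2."""
        diff = y - x                                             # (batch, embed_dim)
        projected = torch.einsum("kod,bd->bko", self.weights, diff)
        return torch.relu(projected).norm(dim=-1).sum(dim=-1)    # (batch,)

def mad_loss_asymmetric(phi, wide_norm, s_i, s_j, d_td, penalty_coef: float = 1.0):
    """Objective (12) with the quadratic penalty (13), where s_j is reached
    after s_i along the trajectory."""
    dist = wide_norm(phi(s_i), phi(s_j))
    objective = (dist - d_td).pow(2)
    penalty = penalty_coef * torch.relu(dist - d_td).pow(2)
    return (objective + penalty).mean()
```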
2 Learning the Transition Model
In the previous section, we showed how to learn a state representation that encodes a distance metric between states. This distance allows us to measure how far we are from a goal state $g$, i.e. $d\big(\phi_\theta(s), \phi_\theta(g)\big)$. However, on its own, the distance metric does not directly give us a policy for reaching the desired goal state.
In this section we propose a method to learn a transition model of actions that, combined with our state representation, allows us to plan directly in the embedded space and derive policies to reach any given goal state. Given a dataset of trajectories $\mathcal{D}$ and a state embedding $\phi_\theta$, we seek a parametric transition model $T_\psi$ such that, for any transition triple $(s, a, s')$, $T_\psi\big(\phi_\theta(s), a\big) \approx \phi_\theta(s')$.
We propose to learn this model simply by minimizing the squared prediction error:
$$
\min_{\psi} \ \sum_{\tau \in \mathcal{D}} \sum_{(s, a, s') \in \tau} \big\| T_\psi\big(\phi_\theta(s), a\big) - \phi_\theta(s') \big\|_2^2.
\tag{14}
$$
Once the model is learned, given a goal state $g$ we can select actions that minimize the distance to the goal in the latent space, $a^* = \arg\min_{a \in \mathcal{A}} d\big(T_\psi(\phi_\theta(s), a),\, \phi_\theta(g)\big)$.
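The sketch below illustrates the transition model of equation (14) and greedy goal-conditioned action selection in the latent space for a discrete action set; the architecture and names (`LatentTransitionModel`, `greedy_action`) are illustrative assumptions, and `distance_fn` stands for the learned latent distance (e.g. the L1 distance or an asymmetric Wide Norm).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentTransitionModel(nn.Module):
    """Predicts the next latent state phi(s') from phi(s) and a discrete action."""
    def __init__(self, embed_dim: int, n_actions: int):
        super().__init__()
        self.n_actions = n_actions
        self.net = nn.Sequential(
            nn.Linear(embed_dim + n_actions, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, z: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        a_onehot = F.one_hot(a, self.n_actions).float()
        return self.net(torch.cat([z, a_onehot], dim=-1))

def transition_loss(model, z, a, z_next):
    """Squared prediction error of equation (14) on embedded transitions."""
    return (model(z, a) - z_next).pow(2).sum(dim=-1).mean()

def greedy_action(model, distance_fn, z, z_goal, n_actions: int) -> int:
    """Select the action whose predicted next latent state is closest to the goal."""
    with torch.no_grad():
        actions = torch.arange(n_actions)
        z_pred = model(z.expand(n_actions, -1), actions)           # one prediction per action
        dists = distance_fn(z_pred, z_goal.expand(n_actions, -1))  # d(prediction -> goal)
        return int(dists.argmin().item())
```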
3 Empirical Evaluation
In this section we compare the performance of symmetric norms and Wide Norms on symmetric and asymmetric environments.
We refer to L1 as the agent that learns the MAD using an L1 norm, and to WideNorm as the agent that uses Wide Norms to approximate the MAD.
In Figure 1 we can observe that the L1 agent is unable to approximate the MAD in asymmetric environments (the Asymmetric PointMass environment), converging instead to a symmetric approximation, while the WideNorm agent approximates the asymmetric MAD correctly.
In Figure 2 we report the performance of planning in the embedded space using the learned transition model. We can observe that in symmetric environments (Planning Symmetric PointMass and Planning Reach) both the L1 and the WideNorm agents have similar performance. However, when we move to asymmetric environments (Planning Asymmetric PointMass), the L1 agent is unable to learn the true MAD and, as a consequence, its planning performance degrades. On the contrary, the WideNorm agent is able to learn the true MAD and outperforms the L1 agent in planning.
The empirical evaluation demonstrates that symmetric norms, such as the L1 norm used in this experiment, are unable to approximate the MAD in asymmetric environments, converging instead to a symmetric approximation, while Wide Norms approximate the asymmetric MAD correctly.
We release the software to reproduce the results at the following repository: https://github.com/lorenzosteccanella/SRL under the branch "NIPS-GCRL-Workshop" and some notebooks that ease the understanding of the code under the branch "main".
[Figure 1: Approximation of the MAD by the L1 and WideNorm agents in symmetric and asymmetric environments.]
[Figure 2: Planning performance in the embedded space with the learned transition model for the L1 and WideNorm agents.]
References
- Mennucci [2013] Andrea CG Mennucci. On asymmetric distances. Analysis and Geometry in Metric Spaces, 1(2013):200–231, 2013.
- Floyd [1962] Robert W. Floyd. Algorithm 97: Shortest path. Commun. ACM, 5(6):345, June 1962. ISSN 0001-0782. doi: 10.1145/367766.368168. URL http://doi.acm.org/10.1145/367766.368168.
- Roy [1959] Bernard Roy. Transitivité et connexité. In Extrait des comptes rendus des séances de l’Académie des Sciences, pages 216–218. Gauthier-Villars, July 1959. http://gallica.bnf.fr/ark:/12148/bpt6k3201c/f222.image.langFR.
- Warshall [1962] Stephen Warshall. A theorem on boolean matrices. J. ACM, 9(1):11–12, January 1962. ISSN 0004-5411. doi: 10.1145/321105.321107. URL http://doi.acm.org/10.1145/321105.321107.
- Steccanella and Jonsson [2022] Lorenzo Steccanella and Anders Jonsson. State representation learning for goal-conditioned reinforcement learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 84–99. Springer, 2022.
- Pitis et al. [2020] Silviu Pitis, Harris Chan, Kiarash Jamali, and Jimmy Ba. An inductive bias for distances: Neural nets that respect the triangle inequality. arXiv preprint arXiv:2002.05825, 2020.