
Hypergraph-based Motion Generation with Multi-modal Interaction Relational Reasoning

Keshu Wu Yang Zhou Haotian Shi Dominique Lord Bin Ran Xinyue Ye
Abstract

The intricate nature of real-world driving environments, characterized by dynamic and diverse interactions among multiple vehicles and their possible future states, presents considerable challenges in accurately predicting the motion states of vehicles and handling the uncertainty inherent in the predictions. Addressing these challenges requires comprehensive modeling and reasoning to capture the implicit relations among vehicles and the corresponding diverse behaviors. This research introduces an integrated framework for autonomous vehicle (AV) motion prediction to address these complexities, utilizing a novel Relational Hypergraph Interaction-informed Neural mOtion generator (RHINO). RHINO leverages hypergraph-based relational reasoning by integrating a multi-scale hypergraph neural network to model group-wise interactions among multiple vehicles and their multi-modal driving behaviors, thereby enhancing motion prediction accuracy and reliability. Experimental validation using real-world datasets demonstrates the superior performance of this framework in improving predictive accuracy and fostering socially aware automated driving in dynamic traffic scenarios.

keywords:
Interaction representation, hypergraph, relational reasoning, multi-modal prediction, motion prediction, motion generation
Affiliations:

1. Center for Geospatial Sciences, Applications, and Technology and Department of Landscape Architecture and Urban Planning, Texas A&M University, 788 Ross St, College Station, TX 77840, United States
2. Zachry Department of Civil and Environmental Engineering, Texas A&M University, 201 Dwight Look Engineering Building, College Station, TX 77843, United States
3. Department of Civil and Environmental Engineering, University of Wisconsin-Madison, 1415 Engineering Dr, Madison, WI 53706, United States

1 Introduction

Understanding traffic interactions and the way they affect future vehicle trajectories is inherently complex [1]. In mixed traffic environments, where human-driven and automated vehicles coexist, this complexity is amplified, requiring precise interaction representation and behavior modeling for reliable motion prediction [2, 3, 4]. These scenarios often present dynamic interaction topologies with underlying relations, with interaction patterns as well as topologies continuously evolving depending on the surrounding context, as exemplified by lane change maneuvers [5, 6]. These relationships play a crucial role in guiding each vehicle’s decision-making processes. Additionally, each vehicle can display multiple possible modalities in driving intentions and behaviors, including both longitudinal (e.g., acceleration and braking) and lateral (e.g., lane-changing and lane-keeping) maneuvers [7, 8]. Furthermore, collective behaviors, arising from interactions within a group of vehicles and encompassing both cooperative and competitive behaviors [9, 10, 11, 12], further complicate the understanding of these interactions [13, 14, 15]. Figure 1 illustrates the interactions among multiple vehicles and the corresponding multi-modal driving behaviors. Therefore, it is necessary to model both the interactions and the multi-modality of behaviors, and to reason about the underlying interaction relations, in order to accurately capture interactions and forecast future behaviors [16, 17, 18].

Refer to caption
Figure 1: Major challenges: (a) vehicle interaction, (b) behavior multi-modality, and (c) interaction relational reasoning.

Efforts have been made to address the challenges of vehicle interactions and driving behavior multi-modality. Three primary approaches have been developed: social operation methods, attention-driven methods, and graph-based techniques. Social operation methods use pooling mechanisms to generate socially acceptable trajectories by capturing the influence of surrounding agents [20, 21]. Attention-driven approaches use attention mechanisms to dynamically weigh neighboring agents’ information [22, 23, 24]. Graph-based methods leverage graph structures to model non-Euclidean spatial dependencies, effectively handling varying interaction topologies and predicting dynamic interactions [25, 26, 27, 28]. These complex interactions create uncertainty, complicating the accurate forecasting of a single future trajectory with high confidence, since drivers can behave differently in identical situations [29, 30] depending on individual driver characteristics and psychological factors. Addressing the multi-modality of driving behaviors often involves introducing latent variables, categorized into those with explicit semantics and those without. Models with explicit semantics use latent variables to clearly represent driving intentions, identifying specific maneuvers and behaviors for multi-modal trajectory predictions [30, 31, 32]. Conversely, models without explicit semantics employ generative deep learning techniques, such as Variational Autoencoders (VAEs) [33] and Generative Adversarial Networks (GANs) [21, 34], to produce diverse trajectories by adding noise to encoded features. While these models generate a wide range of possible trajectories, they often struggle with interpretability and with identifying effective strategies for sampling from the generated trajectories.

Despite the advances in modeling vehicle interactions and probabilistically forecasting multi-modal future trajectories, the inherent complexity of social dynamics in traffic systems continues to present significant challenges, with limitations arising from the complex nature of agent interactions in two key aspects. First, prevailing methods primarily focus on pair-wise interactions rather than group-wise interactions. In multi-vehicle systems, dynamic interactions among vehicles often exhibit cooperative and competitive behaviors [9, 10, 35, 36], which have rarely been explored. Effectively capturing the collective influence of vehicle groups is vital for understanding complex social dynamics, especially in scenarios where vehicles engage in multi-modal behaviors such as lane changes, acceleration, and deceleration. In such contexts, vehicles need to make decisions based on the actions and intentions of several surrounding agents simultaneously, further complicating the task of accurate behavior prediction. For example, competitive behaviors, such as overtaking and lane-changing, can lead to conflicting objectives between vehicles. In the lane-changing scenario depicted in Figure 2, vehicle 1 attempts a right lane change (R) while vehicle 3 maintains lane-keeping (K) and vehicle 4 performs a left lane change (L). Conversely, cooperative interactions, such as car-following, are illustrated by vehicles 1, 3, and 4 forming a platoon, all maintaining lane-keeping while potentially responding to vehicle 2’s left lane change. These examples highlight the need for models that can effectively capture the complexities of both individual and group dynamics in multi-modal traffic scenarios.

Refer to caption
Figure 2: Pair-wise interaction and group-wise interaction in different scenarios.

Second, while traditional graph-based methods such as [25, 26, 27, 37] excel at capturing pair-wise relationships, they are limited in representing more intricate group-wise interactions. This limitation arises because graph-based approaches model interactions between pairs of agents with fixed topologies, making it difficult to capture the simultaneous influence of multiple agents on each other’s behaviors. Unlike standard graphs, where each edge connects only two nodes, hypergraphs are capable of connecting an entire group of nodes within a shared context through a single hyperedge [38, 39]. As a generalization of a graph in which hyperedges connect multiple nodes to represent higher-order relationships, a hypergraph therefore reflects group-wise interactions more faithfully and allows a more accurate representation of the collective influence of multiple agents in varying traffic conditions. This adaptability is essential for addressing the stochastic nature of human behavior, as hypergraph models can more effectively manage the uncertainty and variability inherent in interactions resulting from collective behaviors, thereby facilitating more socially inspired automated driving [40, 41, 42].

To address the aforementioned challenges, this paper presents a novel hypergraph-based method for multi-modal trajectory prediction with relational reasoning. The proposed framework contains two parts: the Graph-based Interaction-awaRe Anticipative Feasible Future Estimator (GIRAFFE) and the Relational Hypergraph Interaction-informed Neural mOtion generator (RHINO). GIRAFFE performs multi-agent, multi-modal motion prediction to produce preliminary multi-vehicle trajectories, on which the RHINO framework builds, utilizing an innovative Agent-Behavior Hypergraph to capture group-wise interactions among various behavior modalities and motion states. Leveraging GroupNet [41] as its backbone, RHINO learns a multi-scale hypergraph topology in a data-driven manner to model group-wise interactions. Through neural message passing across the hypergraph, this approach integrates interaction representation learning and relational reasoning, enhancing the social awareness of automated driving. Furthermore, a Conditional Variational Autoencoder (CVAE) framework is employed to generate diverse trajectory predictions by sampling from hidden feature distributions, effectively addressing the stochastic nature of agent behaviors.

To summarize, the key contributions of this work are as follows:

  1. The framework adopts multi-scale hypergraphs to represent, in a flexible manner, group-wise interactions among different modalities of driving behavior and the corresponding motion states of multiple agents.

  2. The framework incorporates interaction representation learning and relational reasoning to generate plausible motions in a probabilistic manner, characterized by the learned posterior distribution.

The remainder of this paper is structured as follows: Section 2 outlines the problem statement of this research. Section 3 introduces the methodology. Section 4 details the experimental setup and analysis of the results obtained. Section 5 concludes this paper.

2 Problem Statement

2.1 Problem Definition

Vehicle trajectory prediction in the dynamic, multi-vehicle interaction context of multi-lane highways involves determining the future movements of a target vehicle based on historical data and multi-modal predictions of its own state and the states of surrounding vehicles. This domain addresses two primary challenges: (i) multi-agent multi-modal trajectory prediction and (ii) prediction-guided motion generation after reasoning.

The objective of trajectory prediction is to estimate the future trajectories of the target vehicle and its surrounding vehicles, given their historical states. The historical states, spanning a time horizon $[1,\dots,T]$, are represented as $\mathbf{X}_{1:T}=\{\mathbf{X}_{1},\mathbf{X}_{2},\dots,\mathbf{X}_{T}\}\in\mathbb{R}^{T\times N\times C_{1}}$, where $N$ denotes the number of vehicles and $C_{1}$ denotes the number of features, including longitudinal and lateral positions and velocities. Each historical state $\mathbf{X}_{t}=\{\mathbf{x}_{t}^{i}\mid i\in[1,N]\}\in\mathbb{R}^{N\times C_{1}}$ at time step $t$ captures these details for each vehicle $i$. Notably, the superscript refers to vehicle indices, with $i=1$ representing the target vehicle, and the subscript to time steps, with $C_{1}=4$ for the input data.

The prediction model, $\mathbf{H}^{Pred}(\cdot)$, provides preliminary predictions of multi-modal trajectory candidates $\hat{\mathbf{X}}_{T+1:T+F}^{M}\in\mathbb{R}^{F\times N\times M\times C_{2}}$ for all $N$ vehicles over the future time horizon $[T+1,\dots,T+F]$ with $M$ modes of driving behaviors. This model takes the historical data $\mathbf{X}_{1:T}$ as input and outputs future longitudinal and lateral positions, where $C_{2}=2$. The forecasted states $\hat{\mathbf{X}}_{f}^{M}=\{\mathbf{x}_{f}^{m,i}\mid\forall m\in[1,M],\forall i\in[1,N],\forall f\in[1,F]\}$ aim to estimate each vehicle’s future trajectory for each behavior mode $m$ at time step $T+f$. This is mathematically formulated as:

$\hat{\mathbf{X}}_{T+1:T+F}^{M}=\mathbf{H}^{Pred}(\mathbf{X}_{1:T})$ (1)

Based on that, the motion generation model $\mathbf{H}^{Gen}(\cdot)$ is further developed to generate plausible trajectories considering the implicit group-wise interactions, using both the historical states $\mathbf{X}_{1:T}$ and the preliminary multi-modal future trajectory candidates $\hat{\mathbf{X}}_{T+1:T+F}^{M}$ as input. The generation model provides $K$ plausible trajectories $\hat{\mathbf{Y}}_{T+1:T+F}^{K}=\{\hat{\mathbf{Y}}_{T+1}^{K},\dots,\hat{\mathbf{Y}}_{T+F}^{K}\}\in\mathbb{R}^{F\times N\times K\times C_{2}}$ for all $N$ vehicles over the next $F$ time steps. Each generated state $\hat{Y}_{T+f}^{K}=\{\hat{\mathbf{y}}_{T+f}^{k,i}\mid\forall k\in[1,K],\forall i\in[1,N],\forall f\in[1,F]\}\in\mathbb{R}^{C_{2}}$ represents the $k$-th generated longitudinal and lateral positions of the $i$-th vehicle at time step $T+f$. The formulation of this generation problem is:

$\hat{\mathbf{Y}}_{T+1:T+F}^{K}=\mathbf{H}^{Gen}(\mathbf{X}_{1:T},\hat{\mathbf{X}}_{T+1:T+F}^{M})$ (2)
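
To make the two interfaces concrete, the following minimal sketch wires Eqs. (1) and (2) together with the tensor shapes defined above. The function names and zero-filled bodies are illustrative stand-ins of ours, not the authors’ implementation.

```python
import torch

# Dimensions from the problem statement (values are illustrative)
N, T, F, M, K = 8, 30, 50, 3, 20   # vehicles, history, horizon, modes, samples
C1, C2 = 4, 2                      # input features; output (lon, lat) positions

def h_pred(x_hist: torch.Tensor) -> torch.Tensor:
    """Stand-in for H^Pred: history (T, N, C1) -> candidates (F, N, M, C2)."""
    return torch.zeros(F, N, M, C2)

def h_gen(x_hist: torch.Tensor, x_cand: torch.Tensor) -> torch.Tensor:
    """Stand-in for H^Gen: (history, candidates) -> K samples (F, N, K, C2)."""
    return torch.zeros(F, N, K, C2)

x_hist = torch.randn(T, N, C1)   # X_{1:T}
x_cand = h_pred(x_hist)          # \hat{X}^M_{T+1:T+F}, Eq. (1)
y_gen = h_gen(x_hist, x_cand)    # \hat{Y}^K_{T+1:T+F}, Eq. (2)
```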

3 Methodology

Given the aforementioned problem, we first present the overall framework architecture and then elaborate on its vital components.

3.1 Framework Architecture

The proposed framework adopts an integrated architecture, as shown in Figure 3, which involves two major components:

  1. GIRAFFE: Graph-based Interaction-awaRe Anticipative Feasible Future Estimator, which leverages graph representations to capture pair-wise interactions during both the historical and future time horizons, providing preliminary multi-modal trajectory prediction candidates for vehicles.

  2. RHINO: Relational Hypergraph Interaction-informed Neural mOtion generator, which utilizes multi-scale hypergraph representations to model group-wise interactions and reason about the interaction relations among multi-modal behaviors. Built upon the preliminary multi-modal trajectories predicted by GIRAFFE, RHINO further generates plausible future trajectories for all vehicles in a probabilistic manner.

Refer to caption
Figure 3: Framework architecture.

The subsequent sections will provide an in-depth explanation of the two principal frameworks.

3.2 GIRAFFE: Graph-based Motion Predictor

In our context, a graph representation $\mathcal{G}$ is adopted by modeling $N$ vehicles as nodes $\mathcal{V}\in\mathbb{R}^{N}$ and the pair-wise interactions as edges $\mathcal{E}\in\mathbb{R}^{|N\times N|}$. The feature matrix $X\in\mathbb{R}^{N\times C}$, containing vehicle states (i.e., longitudinal and lateral position and speed), and the adjacency matrix $A\in\mathbb{R}^{N\times N}$, describing the interactions among nodes, are further utilized to describe the graph. With that, we can define an Agent Graph as:

Definition 1 (Agent Graph)

Let $\mathcal{G}^{a}$ be a graph representing the motion states and interactions of $N$ agents, with each agent represented as a node. $\mathcal{G}^{a}$ is expressed as

$\mathcal{G}^{a}=(\mathcal{V}^{a},\mathcal{E}^{a};X^{a},A^{a})$

where $\mathcal{V}^{a}\in\mathbb{R}^{N}$ denotes the node set, $\mathcal{E}^{a}\in\mathbb{R}^{|N\times N|}$ denotes the edge set, $X^{a}\in\mathbb{R}^{N\times C}$ represents the feature tensor, and $A^{a}\in\mathbb{R}^{N\times N}$ indicates the adjacency matrix.

To better represent the interactions and relations of the predicted multi-agent multi-modal trajectory candidates with graphs, we expand each agent node into multiple nodes, one per behavior mode, based on our previous work [25], which further renders an Agent-Behavior Graph.

Definition 2 (Agent-Behavior Graph)

Let $\mathcal{G}^{b}$ be a graph representation of the multi-modal motion states of $N$ agents, with each of $M$ behavior modes for each agent represented as a node. $\mathcal{G}^{b}$ is expressed as

$\mathcal{G}^{b}=(\mathcal{V}^{b},\mathcal{E}^{b};X^{b},A^{b})$

where $\mathcal{V}^{b}\in\mathbb{R}^{|MN|}$ denotes the node set, $\mathcal{E}^{b}\in\mathbb{R}^{|MN\times MN|}$ denotes the edge set, $X^{b}\in\mathbb{R}^{|MN|\times C}$ represents the feature tensor, and $A^{b}\in\mathbb{R}^{|MN|\times|MN|}$ indicates the adjacency matrix.

The transition from an Agent Graph $\mathcal{G}^{a}$ to an Agent-Behavior Graph $\mathcal{G}^{b}$ is defined by an expansion function $\mathbf{F}^{expand}(\cdot)$ as:

$\mathcal{G}^{b}=\mathbf{F}^{expand}\left(\mathcal{G}^{a}\right)\Leftrightarrow\left\{\begin{aligned} \mathcal{V}^{b}&=\bigcup_{i=1}^{N}\bigcup_{m=1}^{M}\left\{v_{i,m}^{b}\right\}\\ \mathcal{E}^{b}&=\bigcup_{i=1}^{N}\bigcup_{j=1}^{N}\left\{e_{ij,mn}^{b}\mid A^{a}_{ij}\neq 0\text{ and }\Lambda_{mn}\neq 0,\ \forall m,n\in\left\{1,\dots,M\right\}\right\}\\ X^{b}&=\bigcup_{i=1}^{N}\bigcup_{m=1}^{M}X_{i,m}^{a}\\ A^{b}&=\bigcup_{i=1}^{N}\bigcup_{j=1}^{N}\left\{A^{a}_{ij}\otimes\Lambda_{mn}\mid\forall m,n\in\left\{1,\dots,M\right\}\right\}\end{aligned}\right.$ (3)

As shown in Figure 4, in this process, each agent node $v_{i}^{a}\in\mathcal{V}^{a}$ in the Agent Graph is expanded into $M$ behavior-specific nodes $v_{i,m}^{b}\in\mathcal{V}^{b}$, corresponding to the $M$ potential behavioral modes of the vehicle. These newly generated behavior nodes, which may exhibit significant interdependencies, are interconnected through edges $e_{ij,mn}^{b}$, forming a more complex interaction structure. Consequently, the adjacency matrix $A^{b}$ is extended to accommodate the expanded node set, resulting in increased dimensionality. Here, $\Lambda$ is a behavior correlation matrix, and each element $\Lambda_{mn}$ represents the correlation between behavior mode $m$ of one agent and behavior mode $n$ of another agent. The feature tensor $X^{b}$ of the behavior nodes encodes the possible motion states under each behavioral mode, capturing the multi-modal nature of vehicle behavior.
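
Because the behavior correlation enters Eq. (3) multiplicatively, the expanded adjacency can be realized as a Kronecker product of $A^{a}$ and $\Lambda$. The sketch below is a minimal illustration of ours, assuming per-mode feature rows are already available:

```python
import numpy as np

def expand_agent_graph(A_a: np.ndarray, X_a: np.ndarray, Lam: np.ndarray):
    """Expand an Agent Graph into an Agent-Behavior Graph (sketch of Eq. 3).

    A_a: (N, N) agent adjacency; Lam: (M, M) behavior correlation matrix;
    X_a: (N, M, C) per-mode motion states (one feature row per behavior node).
    """
    N, M = A_a.shape[0], Lam.shape[0]
    # Kronecker product links behavior m of agent i with behavior n of agent j
    # exactly when A_a[i, j] != 0 and Lam[m, n] != 0.
    A_b = np.kron(A_a, Lam)          # (N*M, N*M) expanded adjacency
    X_b = X_a.reshape(N * M, -1)     # behavior node (i, m) -> row i*M + m
    return A_b, X_b
```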

Refer to caption
Figure 4: Definitions of graphs.

Based on that, a deep neural network is designed to capture interactions between the target vehicle and surrounding vehicles via $\mathcal{G}^{a}$, and to represent the output, which consists of predicted multi-agent multi-modal trajectories, via $\mathcal{G}^{b}$, as illustrated in Figure 5. Three key modules of GIRAFFE $\mathbf{H}^{Pred}(\cdot)$ are introduced below:

Refer to caption
Figure 5: GIRAFFE Framework.
Interaction Encoder

The Interaction Encoder utilizes a Diffusion Graph Convolution Network (DGCN) architecture to encode the dynamic graph embeddings, as described in Appendix B. The DGCN captures the bidirectional dependencies among vehicles by applying diffusion convolutions, which consider both forward and reverse processes to model the influence of surrounding vehicles and the target vehicle’s impact on them [43, 44]. This encoder adopts a $DGCN_{H}(\cdot)$ to generate graph embeddings for the historical horizon and a $DGCN_{F}(\cdot)$ to generate future embeddings, merging them into a comprehensive representation that spans the entire time window of interest.

$\tilde{H}_{T}=DGCN_{H}(X_{T})$ (4)
$\tilde{H}_{F}=DGCN_{F}(X_{T})$ (5)
$\tilde{H}=[\tilde{H}_{T},\tilde{H}_{F}]$ (6)
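
A bidirectional diffusion convolution in the spirit of [43, 44] can be sketched as follows; the two diffusion steps, row normalization, and layer sizes are illustrative assumptions rather than the exact configuration of the paper’s DGCN:

```python
import torch
import torch.nn as nn

class DiffusionGraphConv(nn.Module):
    """Sketch of a K-step bidirectional diffusion convolution (cf. [43, 44])."""
    def __init__(self, in_dim: int, out_dim: int, k_steps: int = 2):
        super().__init__()
        self.k_steps = k_steps
        # One weight per diffusion step and direction (forward / reverse)
        self.weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(2 * k_steps)
        )

    def forward(self, X: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # Row-normalized transition matrices for both diffusion directions
        P_fwd = A / A.sum(dim=1, keepdim=True).clamp(min=1e-6)
        P_rev = A.T / A.T.sum(dim=1, keepdim=True).clamp(min=1e-6)
        out, h_f, h_r = 0, X, X
        for k in range(self.k_steps):
            h_f, h_r = P_fwd @ h_f, P_rev @ h_r
            out = out + self.weights[2 * k](h_f) + self.weights[2 * k + 1](h_r)
        return torch.relu(out)
```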
Intention Predictor

The Intention Predictor addresses the classification of future driving intentions, both laterally and longitudinally. Using the encoded graph representation, two MLP layers reduce the dimensions and encode the features into a latent space. The LatMLP layers with softmax activation then classify the lateral intentions over the future time horizon. These predictions help in understanding the potential maneuvers the vehicle might take, such as lane changes or speed adjustments.

$\tilde{H}^{IP}=MLP(MLP(\tilde{H}))$ (7)
$\hat{m}^{lat}=\text{softmax}(LatMLP(\tilde{H}^{IP}))$ (8)
Multi-modal Decoder

Finally, the Multi-modal Decoder fuses the predicted intentions of multiple agents with the latent space to produce multiple future trajectory distributions for each agent. This decoder uses a trainable weight matrix to combine features from distinct historical and future time steps, emphasizing the importance of sequential motion patterns. The GRU-based decoder ensures temporal continuity in the predicted trajectories, mapping the fused features to a bivariate Gaussian distribution representing the future vehicle positions. This approach allows the model to generate probabilistic predictions for multiple agents.

$W_{hid}=\text{softmax}(W_{map}\otimes\hat{M})$ (9)
$H_{dec}=W_{hid}\cdot\tilde{H}$ (10)
$\hat{\mathbf{X}}_{T+1:T+F}^{M}=MLP(GRU(MLP(H_{dec},\hat{M})))$ (11)
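
A minimal sketch of how Eqs. (9)-(11) might be realized is given below; the fusion scheme, the horizon roll-out, and all layer sizes are assumptions of ours, and the bivariate Gaussian output described above is simplified to point positions:

```python
import torch
import torch.nn as nn

class MultiModalGRUDecoder(nn.Module):
    """Illustrative decoder: fuse intentions with encoded features (Eqs. 9-10)
    and roll out per-mode trajectories with a GRU (Eq. 11)."""
    def __init__(self, d_in: int = 128, d_hid: int = 128,
                 F: int = 50, M: int = 3, C2: int = 2):
        super().__init__()
        self.F, self.M, self.C2 = F, M, C2
        self.fuse = nn.Linear(d_in + M, d_hid)   # inner MLP over (H_dec, M_hat)
        self.gru = nn.GRU(d_hid, d_hid, batch_first=True)
        self.head = nn.Linear(d_hid, M * C2)     # output MLP head

    def forward(self, H_dec: torch.Tensor, M_hat: torch.Tensor) -> torch.Tensor:
        # H_dec: (N, d_in) fused embeddings; M_hat: (N, M) intention probabilities
        h = torch.relu(self.fuse(torch.cat([H_dec, M_hat], dim=-1)))
        seq = h.unsqueeze(1).expand(-1, self.F, -1).contiguous()  # repeat over F
        out, _ = self.gru(seq)                                    # (N, F, d_hid)
        return self.head(out).view(-1, self.F, self.M, self.C2)
```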

3.3 RHINO: Hypergraph-based Motion Generator

Unlike traditional graph representations, which are confined to pair-wise relationships, hypergraphs offer a more sophisticated and comprehensive framework for representing group-wise interactions. By connecting multiple vehicles that exhibit strong correlations through hyperedges, hypergraphs enable a more robust analysis and optimization of the complex network of interactions. The concept of an Agent Hypergraph to represent agents and their group-wise interactions is introduced as follows:

Definition 3 (Agent Hypergraph)

Let $\mathcal{H}^{a}$ be a hypergraph representation of the motion states of $N$ agents, with each agent represented as a node. The hypergraph $\mathcal{H}^{a}$ is expressed as

$\mathcal{H}^{a}=(\mathcal{V}^{a},\mathcal{U}^{a};X^{a},H^{a})$

where $\mathcal{V}^{a}\in\mathbb{R}^{N}$ denotes the node set, $\mathcal{U}^{a}\in\mathbb{R}^{L}$ denotes the hyperedge set, $X^{a}\in\mathbb{R}^{N\times C}$ represents the feature tensor, and $H^{a}\in\mathbb{R}^{N\times L}$ indicates the incidence matrix, where $H^{a}_{ij}$ indicates whether node $v_{i}$ is part of hyperedge $u_{j}$.

Refer to caption
Figure 6: Definitions of graphs and hypergraphs.

To convert an Agent Graph $\mathcal{G}^{a}$ into an Agent Hypergraph $\mathcal{H}^{a}$, we introduce a transformation function $\mathbf{F}^{transform}(\cdot)$. This function enables the shift from a pairwise interaction framework to a higher-order interaction model represented by the hypergraph. Formally, the transformation is expressed as:

$\mathcal{H}^{a}=\mathbf{F}^{transform}\left(\mathcal{G}^{a}\right)\Leftrightarrow\left\{\begin{aligned} \mathcal{V}^{a}&=\bigcup_{i=1}^{N}\{v_{i}^{a}\}\\ \mathcal{U}^{a}&=\bigcup_{j=1}^{K}\left\{u_{j}^{a}\mid u_{j}^{a}\subseteq\mathcal{V}^{a},\Lambda_{u_{j}}\neq 0\right\}\\ X^{a}&=\bigcup_{i=1}^{N}X_{i}^{a}\\ H^{a}&=\bigcup_{i=1}^{N}\bigcup_{j=1}^{K}\left\{H^{a}_{ij}\mid v_{i}^{a}\in u_{j}^{a}\right\}\end{aligned}\right.$ (12)

In this transformation, the node set $\mathcal{V}^{a}$ and the feature tensor $X^{a}$ remain consistent between the graph and hypergraph representations. However, the primary modification occurs in the edge formulation. The transformation replaces the pairwise edges of the original Agent Graph with hyperedges that can connect multiple nodes simultaneously. This redefinition of edges as hyperedges within the hypergraph $\mathcal{H}^{a}$ allows for the modeling of group-wise interactions, where a single hyperedge $u\in\mathcal{U}^{a}$ can link more than two nodes, capturing higher-order relationships among agents. The incidence matrix $H^{a}$ is updated to reflect this change, where each entry $H^{a}_{ij}$ indicates whether agent $v_{i}$ participates in hyperedge $u_{j}$. As a result, $\mathcal{H}^{a}$ can represent interactions that involve multiple agents simultaneously, providing a richer and more flexible structure for modeling the dynamics of the system.

To enhance the understanding of complex group-wise interactions in multi-agent systems, it is essential to extend the traditional Agent Hypergraph model to account for the diverse behavioral modes of each agent. This is achieved by decomposing each agent node in the Agent Hypergraph $\mathcal{H}^{a}$ into multiple behavior-specific nodes, which correspond to the different modes of behavior each agent can exhibit. The result of this decomposition is the Agent-Behavior Hypergraph, denoted as $\mathcal{H}^{b}$.

Definition 4 (Agent-Behavior Hypergraph)

Let $\mathcal{H}^{b}$ be a hypergraph representation of the multi-modal motion states of $N$ agents, with each of $M$ behavior modes for each agent represented as a node. The hypergraph $\mathcal{H}^{b}$ is expressed as

$\mathcal{H}^{b}=(\mathcal{V}^{b},\mathcal{U}^{b};X^{b},H^{b})$

where $\mathcal{V}^{b}\in\mathbb{R}^{|MN|}$ denotes the node set, $\mathcal{U}^{b}\in\mathbb{R}^{L}$ denotes the hyperedge set, $X^{b}\in\mathbb{R}^{|MN|\times C}$ represents the feature tensor, and $H^{b}\in\mathbb{R}^{|MN|\times L}$ indicates the incidence matrix, where $H_{ij}^{b}$ indicates whether node $v_{i}$ is part of hyperedge $u_{j}$.

To formally describe the process of transitioning from an Agent Hypergraph $\mathcal{H}^{a}$ to an Agent-Behavior Hypergraph $\mathcal{H}^{b}$, the expansion function $\mathbf{F}^{expand}(\cdot)$ is applied. This function decomposes each agent node into multiple behavior-specific nodes and updates the hyperedge structure accordingly. The behavior-specific nodes correspond to the different behavior modes, while the hyperedges represent the higher-order interactions among the behavior modes of different agents.

$\mathcal{H}^{b}=\mathbf{F}^{expand}\left(\mathcal{H}^{a}\right)\Leftrightarrow\left\{\begin{aligned} \mathcal{V}^{b}&=\bigcup_{i=1}^{N}\bigcup_{m=1}^{M}\left\{v_{i,m}^{b}\right\}\\ \mathcal{U}^{b}&=\bigcup_{j=1}^{L}\bigcup_{m=1}^{M}\bigcup_{n=1}^{M}\left\{u_{j,mn}^{b}\mid u_{j}^{a}\neq 0\text{ and }\Lambda_{mn}\neq 0\right\}\\ X^{b}&=\bigcup_{i=1}^{N}\bigcup_{m=1}^{M}X_{i,m}^{a}\\ H^{b}&=\bigcup_{i=1}^{MN}\bigcup_{j=1}^{L}\left\{H^{b}_{ij}\mid v_{i,m}^{b}\in u_{j,mn}^{b}\right\}\end{aligned}\right.$ (13)

The node set $\mathcal{V}^{b}$ expands each agent $v_{i}^{a}$ into multiple behavior-specific nodes $v_{i,m}^{b}$, where each $m$ represents a different behavioral mode of the agent. The hyperedges $\mathcal{U}^{b}$ are formed between the behavior-specific nodes based on the group-wise interactions present in the original hypergraph, with the correlation between behavior modes captured by $\Lambda_{mn}$. The feature tensor $X^{b}$ captures the state of each behavior node, inheriting the feature data from the original hypergraph. The incidence matrix $H^{b}$ records whether a behavior-specific node $v_{i,m}^{b}$ is part of a hyperedge $u_{j,mn}^{b}$. In this extended framework, hyperedges can represent the aforementioned complex group-wise interactions by connecting behaviors of multiple vehicles that are influenced simultaneously by a shared context, as illustrated in Figure 6. Therefore, an Agent-Behavior Hypergraph is defined to model the multi-agent, multi-modal system for reasoning about group-wise interaction relations.

In addition to the expansion process, the transformation function $\mathbf{F}^{transform}(\cdot)$ converts an Agent-Behavior Graph $\mathcal{G}^{b}$ into an Agent-Behavior Hypergraph $\mathcal{H}^{b}$. This transformation replaces the pairwise edges of the graph with hyperedges that capture higher-order interactions between behavior modes across multiple agents. The transformation is guided by the adjacency matrix $A^{b}$ of the original graph and the behavior-mode correlation matrix $\Lambda_{mn}$. The transformation function $\mathbf{F}^{transform}(\cdot)$ is expressed as:

$\mathcal{H}^{b}=\mathbf{F}^{transform}\left(\mathcal{G}^{b}\right)\Leftrightarrow\left\{\begin{aligned} \mathcal{V}^{b}&=\bigcup_{i=1}^{N}\bigcup_{m=1}^{M}\left\{v_{i,m}^{b}\right\}\\ \mathcal{U}^{b}&=\bigcup_{j=1}^{L}\bigcup_{m=1}^{M}\bigcup_{n=1}^{M}\left\{u_{j,mn}^{b}\mid A^{b}_{ij}\neq 0\text{ and }\Lambda_{mn}\neq 0\right\}\\ X^{b}&=\bigcup_{i=1}^{N}X_{i}^{b}\\ H^{b}&=\bigcup_{i=1}^{MN}\bigcup_{j=1}^{L}\left\{H^{b}_{ij}\mid v_{i,m}^{b}\in u_{j,mn}^{b}\right\}\end{aligned}\right.$ (14)

This structure allows for the connection of behavior nodes across different agents, enabling the representation of interactions among diverse behaviors of multiple agents in a shared context. Hence, the Agent-Behavior Hypergraph not only captures the individual behavior of each agent but also models how these behaviors interact and influence one another within a multi-agent system.

3.3.1 RHINO Framework Architecture

The core of RHINO is to learn a multi-scale Agent-Behavior Hypergraph, where nodes represent the behaviors of agents and hyperedges capture their group-wise interactions. This hypergraph is then used to learn agent and interaction embeddings to better understand the underlying interaction relations. We also incorporate a basic multi-agent trajectory generation system based on the CVAE framework to handle the stochasticity of each agent’s potential behaviors and motion states, generating plausible trajectories for each vehicle.

Refer to caption
Figure 7: RHINO Framework.

Thus, as illustrated in Figure 7, RHINO comprises the following modules:

  1. Hypergraph Relational Encoder, which transforms both the original historical states and the predicted multi-agent multi-modal trajectories into hypergraphs, modeling and reasoning about the underlying relations between vehicles.

  2. Posterior Distribution Learner, which captures the posterior distribution of the future trajectories given the historical states and the predicted multi-modal future motion states of all vehicles in the vehicle group.

  3. Motion Generator, which decodes the embeddings by concurrently reconstructing the historical states and generating the future trajectories.

3.3.2 Hypergraph Relational Encoder

We employ two Hypergraph Relational Encoder modules: a Historical Hypergraph Relational Encoder for handling historical states and a Future Hypergraph Relational Encoder for the predicted multi-agent multi-modal trajectories from GIRAFFE. For the Historical Hypergraph Relational Encoder, the input historical states $X_{T}$ form an Agent Hypergraph $\mathcal{H}_{T}^{a}$. For the Future Hypergraph Relational Encoder, the predicted multi-agent multi-modal trajectories $\hat{X}_{T+1:T+F}$ form an Agent-Behavior Hypergraph $\mathcal{H}_{F}^{b}$, where each agent node is expanded into three lateral behavior nodes with corresponding predicted future states. Both modules share the same structure regardless of the input hypergraph types.

Multi-scale Hypergraph Topology Inference

To comprehensively model group-wise interactions in the hypergraphs at multiple scales, we infer a multi-scale hypergraph to reflect interactions in groups of various sizes. Let $\mathcal{H}=\{\mathcal{H}^{(0)},\mathcal{H}^{(1)},\cdots,\mathcal{H}^{(S)}\}$ be a multi-scale hypergraph, and $\mathcal{V}=\{v_{1},v_{2},\cdots,v_{N}\}$ be a set of nodes. As shown in Figure 8, at any scale $s$, $\mathcal{H}^{(s)}=(\mathcal{V},\mathcal{U}^{(s)})$ has a hyperedge set $\mathcal{U}^{(s)}=\{u_{1}^{(s)},u_{2}^{(s)},\cdots,u_{J}^{(s)}\}$ representing group-wise relations with $J$ hyperedges. A larger $s$ indicates a larger scale of agent groups, while $\mathcal{H}^{(0)}=(\mathcal{V},\mathcal{U}^{(0)})$ models the finest pair-wise agent connections. The topology of each $\mathcal{H}^{(s)}$ is represented as an incidence matrix $H^{(s)}$.

Refer to caption
Figure 8: Hypergraph encoder.

To understand and quantify the dynamic interactions between agents within a given system, we adopt trajectory embedding to distill the motion states of agents into a compact and informative representation. To infer a multi-scale hypergraph, we construct hyperedges by grouping agents that have highly correlated trajectories, whose correlations can be measured by mapping the trajectories to high-dimensional feature vectors. For the $i$-th agent in the system, the trajectory embedding is denoted as $q_{i}$. This embedding is a function of the agent’s motion states, defined over a temporal window extending from time 1 to time $T$. The embedding function $f_{q}$, which is a trainable MLP, is responsible for transforming the motion states $X^{i}$ into a vector $q_{i}\in\mathbb{R}^{d}$, where $d$ is the dimensionality of the embedded space. Mathematically, the trajectory embedding is represented as:

$q_{i}=f_{q}(X^{i})$ (15)

The affinity between agents is represented by an affinity matrix $\mathcal{A}\in\mathbb{R}^{N\times N}$, which contains the pairwise relational weights between all agents. The affinity matrix is defined as:

$\mathcal{A}=\{\mathcal{A}_{ij}\mid\forall i,j=1,\dots,N\}$ (16)

Each element $\mathcal{A}_{ij}$ is computed as the correlation between the trajectory embeddings of the $i$-th and $j$-th agents. The correlation is the normalized dot product of the two trajectory embeddings, expressed as:

$\mathcal{A}_{ij}=\frac{q_{i}^{\top}q_{j}}{\|q_{i}\|_{2}\|q_{j}\|_{2}}$ (17)

Here, $\|\cdot\|_{2}$ denotes the L2 norm. The relational weight $\mathcal{A}_{ij}$ measures the strength of association between the trajectories of the $i$-th and $j$-th agents, capturing the degree to which their behaviors are correlated. This enables the assessment of interaction patterns and can uncover underlying social or physical laws governing agent dynamics.
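
Eqs. (15)-(17) amount to embedding each agent’s history with a trainable MLP and taking pairwise cosine similarities; a compact sketch (all names are ours) is:

```python
import torch
import torch.nn as nn

def affinity_matrix(X_hist: torch.Tensor, f_q: nn.Module) -> torch.Tensor:
    """Eqs. (15)-(17): embed each agent's history, then compute pairwise
    cosine-similarity affinities. f_q is the trainable embedding MLP."""
    N = X_hist.shape[0]
    q = f_q(X_hist.reshape(N, -1))                        # (N, d), Eq. (15)
    q = q / q.norm(dim=-1, keepdim=True).clamp(min=1e-8)  # unit-normalize
    return q @ q.T                                        # A_ij, Eq. (17)
```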

The formulation of a hypergraph necessitates the strategic formation of hyperedges that reflect the complex interactions between the nodes in the system. At the outset, the 0-th scale hypergraph $\mathcal{H}^{(0)}$ is considered, where the construction is based on pair-wise connections. Each node establishes a link with the node that has the highest affinity score with it.

As the complexity of the system is scaled up, beginning at scale $s\geq 1$, the methodology shifts towards group-wise connections. This shift is based on the intuition that agents within a particular group should display strong mutual correlations, suggesting a propensity for concerted action. To implement this, a sequence of increasing group sizes $\{J^{(s)}\}_{s=1}^{S}$ is established. For every node, denoted by $v_{i}$, the objective is to discern a group of agents that are highly correlated, ultimately leading to $J^{(s)}$ groups or hyperedges at each scale $s$. The hyperedge associated with a node $v_{i}$ at a given scale $s$ is indicated by $u_{i}^{(s)}$. The determination of the most correlated agents is framed as an optimization problem, aiming to link these agents into a hyperedge that accounts for group dynamics:

$u_{i}^{(s)}=\underset{\Gamma\subseteq\mathcal{V}}{\text{arg max}}\left\|\mathcal{A}_{\Gamma,\Gamma}\right\|_{1}$ (18)
$\text{s.t. }|\Gamma|=J^{(s)};\;v_{i}\in\Gamma;\;i=1,\ldots,N$ (19)

The culmination of this hierarchical structuring is a multi-scale hypergraph, encapsulated by the set $\{\mathcal{H}^{(s)}\in\mathbb{R}^{N\times N}\}_{s=1}^{S}$, where each scale $s$ embodies a distinct layer of abstraction in the representation of node relationships within the hypergraph.
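
Since Eqs. (18)-(19) pose a combinatorial subset selection, implementations typically relax it; the sketch below assumes a greedy rule of ours in which each node anchors one hyperedge containing its $J^{(s)}-1$ highest-affinity neighbors:

```python
import torch

def infer_hyperedges(A: torch.Tensor, group_sizes=(2, 4, 8)) -> list:
    """Greedy relaxation of Eqs. (18)-(19). Assumes N >= max(group_sizes).
    Returns one (N, N) node-by-hyperedge incidence matrix H^{(s)} per scale."""
    N = A.shape[0]
    A_off = A.clone().fill_diagonal_(float("-inf"))  # exclude self from top-k
    incidences = []
    for J in group_sizes:
        H = torch.zeros(N, N)
        neighbors = A_off.topk(J - 1, dim=1).indices  # (N, J-1) most correlated
        for i in range(N):
            H[i, i] = 1.0               # anchor node belongs to hyperedge u_i
            H[neighbors[i], i] = 1.0    # plus its highest-affinity agents
        incidences.append(H)
    return incidences
```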

Hypergraph Neural Message Passing

To discern the patterns of agent motion states from the inferred multi-scale hypergraph, we tailor a multi-scale hypergraph neural message passing technique over the constructed hypergraph topology. This method iteratively acquires the embeddings of vehicles and the corresponding interactions through node-to-hyperedge and hyperedge-to-node processes, as depicted in Figure 9.

Refer to caption
Figure 9: Hypergraph encoder.

The node-to-hyperedge mapping aggregates agent embeddings to generate interaction embeddings. Initially, for any given scale, the initial embedding for the $i$-th agent is $v_{i}=q_{i}\in\mathbb{R}^{d}$. Each node $v_{j}$ is associated with a hyperedge $u_{i}$, given that $v_{j}$ is an element of $u_{i}$. This mapping facilitates the definition of the hyperedge interaction embedding. The hyperedge interaction embedding for a hyperedge $u_{i}$ is defined as a function of the embeddings of the nodes contained within it, modulated by the neural interaction strength $r_{i}$ and categorized through coefficients $c_{i,l}$. The per-category function $\mathcal{F}_{l}$ models the interaction process for each category, which is crucial for capturing the nuances of different interaction types. Each $\mathcal{F}_{l}$ is a trainable MLP, responsible for processing the aggregated node embeddings within the context of a specific interaction category. The mathematical formulation is:

$u_{i}=r_{i}\sum_{l=1}^{L}c_{i,l}\mathcal{F}_{l}\left(\sum_{v_{j}\in u_{i}}v_{j}\right)$ (20)

The neural interaction strength $r_{i}$ encapsulates the intensity of the interaction within the hyperedge and is obtained through a trainable model $\mathcal{F}_{r}$, applied to a collective embedding $z_{i}$ with a sigmoid function $\sigma$ as in Eq. (21). This collective embedding $z_{i}$ is the weighted sum of the individual node embeddings within the hyperedge, signifying the aggregated information of agents in a group, as in Eq. (22). The weight $w_{j}$ for each node is determined by a trainable MLP $\mathcal{F}_{w}$, as in Eq. (23).

$r_{i}=\sigma(\mathcal{F}_{r}(z_{i}))$ (21)
$z_{i}=\sum_{v_{j}\in u_{i}}w_{j}v_{j}$ (22)
$w_{j}=\mathcal{F}_{w}\left(v_{j},\sum_{v_{m}\in u_{i}}v_{m}\right)$ (23)

The neural interaction category coefficients $c_{i}$ represent the model’s reasoning about which type of interaction is likely for hyperedge $u_{i}$, where $c_{i,l}$ denotes the probability of the $l$-th neural interaction category among $L$ possible categories. These coefficients are computed using a softmax function applied to the output of another trainable MLP $\mathcal{F}_{c}$, adjusted by an i.i.d. sample $g$ from a Gumbel distribution, as described in Appendix C, which adds random noise, and by a temperature parameter $\tau$ that controls the smoothness of the probability distribution [45]:

$c_{i}=\text{softmax}\left(\frac{\mathcal{F}_{c}(z_{i})+g}{\tau}\right)$ (24)

These components, including neural interaction strength, interaction category coefficients, and per-category functions, provide a comprehensive mechanism for reasoning over complex, higher-order relationships, allowing the model to adapt its understanding of how agents collectively behave in diverse scenarios.

The process of hyperedge-to-node mapping is a pivotal step that allows for the update and refinement of agent embeddings within the hypergraph framework. Each hyperedge $u_{j}$ is mapped back onto its constituent nodes $v_{i}$, assuming every $v_{i}$ is included in $u_{j}$. The primary objective of this phase is to update the embedding of an agent. This is achieved through the function $\mathcal{F}_{v}$, which is a trainable MLP. The updated agent embedding $\tilde{v}_{i}$ is the result of the function applied to the concatenation of the agent’s current embedding and the sum of the embeddings of all hyperedges that the agent is part of. Formally, the update rule for the agent embedding is represented as:

$\tilde{v}_{i}\leftarrow\mathcal{F}_{v}\left(\left[\,v_{i},\sum_{u_{j}\in\mathcal{U}_{i}}u_{j}\,\right]\right)$ (25)

where $\mathcal{U}_{i}=\{u_{j}\mid v_{i}\in u_{j}\}$ denotes the set of hyperedges associated with the $i$-th node $v_{i}$, and $[\,\cdot,\cdot\,]$ symbolizes the operation of embedding concatenation. This operation fuses the individual node embedding with the collective information conveyed by the associated hyperedges. This amalgamation is crucial as it encapsulates the influence exerted by the interactions within the hyperedges onto the individual agent.
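
One round of Eqs. (20)-(25) can be sketched as a pair of incidence-matrix multiplications; the module names and sizes, and the use of PyTorch’s built-in Gumbel-softmax for Eq. (24), are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypergraphMessagePassing(nn.Module):
    """Sketch of one node-to-hyperedge / hyperedge-to-node round, Eqs. (20)-(25)."""
    def __init__(self, d: int = 64, n_categories: int = 3, tau: float = 1.0):
        super().__init__()
        self.tau = tau
        self.F_l = nn.ModuleList(nn.Linear(d, d) for _ in range(n_categories))
        self.F_r = nn.Linear(d, 1)              # interaction strength head
        self.F_w = nn.Linear(2 * d, 1)          # per-node weight head
        self.F_c = nn.Linear(d, n_categories)   # category logits
        self.F_v = nn.Linear(2 * d, d)          # node update

    def forward(self, V: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        # V: (N, d) node embeddings; H: (N, E) incidence matrix
        N, E = H.shape
        agg = H.T @ V                                        # member sums, (E, d)
        # Eq. (23): per-node weights within each hyperedge, masked by membership
        pair = torch.cat([V.unsqueeze(1).expand(-1, E, -1),
                          agg.unsqueeze(0).expand(N, -1, -1)], dim=-1)
        w = self.F_w(pair).squeeze(-1) * H                   # (N, E)
        z = w.T @ V                                          # Eq. (22), (E, d)
        r = torch.sigmoid(self.F_r(z))                       # Eq. (21), (E, 1)
        c = F.gumbel_softmax(self.F_c(z), tau=self.tau)      # Eq. (24), (E, L)
        # Eq. (20): strength- and category-weighted hyperedge embeddings
        u = r * sum(c[:, l:l + 1] * self.F_l[l](agg)
                    for l in range(len(self.F_l)))           # (E, d)
        # Eq. (25): update each node with the sum of its incident hyperedges
        return self.F_v(torch.cat([V, H @ u], dim=-1))       # (N, d)
```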

The Hypergraph Relational Encoder applies the node-to-hyperedge and hyperedge-to-node phases iteratively, allowing agent embeddings to be refined and enriched as relationships within hyperedges evolve. Upon the completion of these iterations, the output is constructed as the concatenation of the agent embeddings across all scales. The final agent embedding matrix $\mathbf{\tilde{V}}^{a}$ is composed of the embeddings of all agents, where each agent embedding $\tilde{v}_{i}$ is a concatenation of the embeddings from all scales, expressed as:

$\mathbf{\tilde{V}}^{a}=[\tilde{v}_{i}]\in\mathbb{R}^{N\times|d(S+1)|},\quad\forall i\in[1,\ldots,N]$ (26)

where

$\tilde{v}_{i}=[\tilde{v}_{i}^{(0)},\tilde{v}_{i}^{(1)},\ldots,\tilde{v}_{i}^{(S)}]$ (27)

3.3.3 Posterior Distribution Learner

Refer to caption
Figure 10: Posterior Distribution Learner.

In our study, we incorporate multi-scale hypergraph embeddings into a multi-agent trajectory generation system using the CVAE framework [46] to address the stochastic nature of each agent’s behavior, as shown in Figure 10. Here, we denote the historical trajectories $\mathbf{X}_{1:T}$ as $\mathbf{X}_{T}$, and the predicted future trajectories $\mathbf{X}_{T+1:T+F}$ as $\mathbf{X}_{F}$. Let $\log p(\mathbf{X}_{F}|\mathbf{X}_{T})$ denote the log-likelihood of predicted future trajectories $\mathbf{X}_{F}$ given historical trajectories $\mathbf{X}_{T}$. The corresponding Evidence Lower Bound (ELBO) is defined as follows:

$\log p(\mathbf{X}_{F}|\mathbf{X}_{T})\geq\mathbb{E}_{q(\mathbf{Z}|\mathbf{X}_{F},\mathbf{X}_{T})}\log p(\mathbf{X}_{F}|\mathbf{Z},\mathbf{X}_{T})-\text{KL}(q(\mathbf{Z}|\mathbf{X}_{F},\mathbf{X}_{T})\parallel p(\mathbf{Z}|\mathbf{X}_{T}))$ (28)

where 𝐙N×dz\mathbf{Z}\in\mathbb{R}^{N\times d_{z}} represents the latent codes corresponding to all agents; p(𝐙|𝐗T)p(\mathbf{Z}|\mathbf{X}_{T}) is the conditional prior of 𝐙\mathbf{Z}, modeled as a Gaussian distribution. KL represents the Kullback–Leibler divergence function. In this framework, q(𝐙|𝐗F,𝐗T)q(\mathbf{Z}|\mathbf{X}_{F},\mathbf{X}_{T}) is implemented through an encoding process for embedding learning, and p(𝐗F|𝐙,𝐗T)p(\mathbf{X}_{F}|\mathbf{Z},\mathbf{X}_{T}) is realized via a decoding process that forecasts the future trajectories 𝐗F\mathbf{X}_{F}.

Thus, the goal of the Posterior Distribution Learner is to derive the Gaussian parameters for the approximate posterior distribution. This involves computing the mean $\mu_{q}$ and the variance $\sigma_{q}$ based on the final output embeddings $\mathbf{\tilde{V}}_{F}^{a}$ and the target embeddings $\mathbf{\tilde{V}}_{T}^{a}$. These parameters are generated through two separate trainable MLPs, $\mathcal{F}_{\mu}$ and $\mathcal{F}_{\sigma}$, respectively. The latent code $\mathbf{Z}$, representing possible trajectories, is then sampled from a Gaussian distribution parameterized by these means and variances. The final output embeddings $\mathbf{\tilde{V}}^{p}$ are a concatenation of the latent code $\mathbf{Z}$ and the target embeddings $\mathbf{\tilde{V}}_{T}^{a}$, as in Eq. (32). The equations governing these processes are as follows:

$\mu_{q}=\mathcal{F}_{\mu}(\mathbf{\tilde{V}}_{F}^{a},\mathbf{\tilde{V}}_{T}^{a})$ (29)
$\sigma_{q}=\mathcal{F}_{\sigma}(\mathbf{\tilde{V}}_{F}^{a},\mathbf{\tilde{V}}_{T}^{a})$ (30)
$\mathbf{Z}\sim\mathcal{N}(\mu_{q},\text{Diag}(\sigma_{q}^{2}))$ (31)
$\mathbf{\tilde{V}}^{p}=[\mathbf{Z},\mathbf{\tilde{V}}_{T}^{a}]$ (32)

In these notations, $\mu_{q}$ and $\sigma_{q}$ represent the mean and variance of the approximated posterior distribution; $\mathcal{F}_{\mu}$ and $\mathcal{F}_{\sigma}$ are the trainable MLPs that produce these parameters; $\mathbf{Z}$ denotes the latent code of possible trajectories; and $\mathbf{\tilde{V}}^{p}$ stands for the output embeddings, which fuse the latent code and the historical embeddings.
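
A minimal sketch of Eqs. (29)-(32) with reparameterized sampling follows; parameterizing $\log\sigma_{q}$ rather than $\sigma_{q}$ is our implementation choice for numerical stability, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

class PosteriorLearner(nn.Module):
    """Sketch of the Posterior Distribution Learner, Eqs. (29)-(32)."""
    def __init__(self, d_emb: int = 64, d_z: int = 32):
        super().__init__()
        self.F_mu = nn.Linear(2 * d_emb, d_z)         # stand-in for F_mu
        self.F_log_sigma = nn.Linear(2 * d_emb, d_z)  # stand-in for F_sigma

    def forward(self, V_F: torch.Tensor, V_T: torch.Tensor):
        h = torch.cat([V_F, V_T], dim=-1)   # fuse future and target embeddings
        mu, log_sigma = self.F_mu(h), self.F_log_sigma(h)
        # Reparameterization trick: Z ~ N(mu, diag(sigma^2)), Eq. (31)
        Z = mu + log_sigma.exp() * torch.randn_like(mu)
        V_p = torch.cat([Z, V_T], dim=-1)   # Eq. (32): V^p = [Z, V_T^a]
        return V_p, mu, log_sigma
```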

3.3.4 Motion Generator

Refer to caption
Figure 11: Motion Generator.

The Motion Generator’s objective is dual: to predict future trajectories and to reconstruct past trajectories from the given embeddings. The decoder accomplishes this by applying successive processing blocks, each contributing a residual that refines the trajectory estimates, as shown in Figure 11. The first processing block, $\mathcal{F}_{Res1}$, takes the output embeddings $\mathbf{\tilde{V}}^{p}$ and the target past trajectory $\mathbf{X}_{T}$ to generate initial estimates of the future and reconstructed past trajectories, $\hat{\mathbf{X}}_{F,1}$ and $\hat{\mathbf{X}}_{T,1}$, respectively.

$\hat{\mathbf{X}}_{F,1},\hat{\mathbf{X}}_{T,1}=\mathcal{F}_{Res1}(\mathbf{\tilde{V}}^{p},\mathbf{X}_{T})$ (33)

Subsequently, the second block, $\mathcal{F}_{Res2}$, refines these estimates by considering the output embeddings and the residual of the past trajectory, which is the difference between the target past trajectory and the initial reconstructed past trajectory, $\mathbf{X}_{T}-\hat{\mathbf{X}}_{T,1}$. This results in the second set of residuals, $\hat{\mathbf{X}}_{F,2}$ and $\hat{\mathbf{X}}_{T,2}$:

$\hat{\mathbf{X}}_{F,2},\hat{\mathbf{X}}_{T,2}=\mathcal{F}_{Res2}(\mathbf{\tilde{V}}^{p},\mathbf{X}_{T}-\hat{\mathbf{X}}_{T,1})$ (34)

Both $\mathcal{F}_{Res1}$ and $\mathcal{F}_{Res2}$ are composed of a GRU encoder for sequence encoding and two MLPs serving as the output heads. The final predicted future trajectory $\hat{\mathbf{Y}}_{F}$ and the reconstructed past trajectory $\hat{\mathbf{X}}_{T}$ are obtained by summing the respective residuals from both processing blocks:

$\hat{\mathbf{Y}}_{F}=\hat{\mathbf{X}}_{F,1}+\hat{\mathbf{X}}_{F,2}$ (35)
$\hat{\mathbf{X}}_{T}=\hat{\mathbf{X}}_{T,1}+\hat{\mathbf{X}}_{T,2}$ (36)

This approach enables the model to iteratively refine its predictions and reconstructions, leveraging the capability of deep learning models to capture complex patterns in the data through a series of non-linear transformations.
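
The two-block residual decoding of Eqs. (33)-(36) can be sketched as follows; each $\mathcal{F}_{Res}$ block contains one GRU encoder and two MLP heads as described above, while all dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """One F_Res block: GRU sequence encoder plus two MLP output heads."""
    def __init__(self, d_in: int, d_hid: int, T: int, F: int, C2: int = 2):
        super().__init__()
        self.T, self.F, self.C2 = T, F, C2
        self.gru = nn.GRU(C2, d_hid, batch_first=True)
        self.head_fut = nn.Linear(d_hid + d_in, F * C2)
        self.head_past = nn.Linear(d_hid + d_in, T * C2)

    def forward(self, V_p: torch.Tensor, X_T: torch.Tensor):
        # V_p: (N, d_in) embeddings; X_T: (N, T, C2) past trajectory (or residual)
        _, h = self.gru(X_T)
        h = torch.cat([h.squeeze(0), V_p], dim=-1)
        x_fut = self.head_fut(h).view(-1, self.F, self.C2)
        x_past = self.head_past(h).view(-1, self.T, self.C2)
        return x_fut, x_past

def decode(block1: ResBlock, block2: ResBlock, V_p, X_T):
    """Eqs. (33)-(36): two-stage residual decoding."""
    f1, t1 = block1(V_p, X_T)        # Eq. (33)
    f2, t2 = block2(V_p, X_T - t1)   # Eq. (34): refine on the residual
    return f1 + f2, t1 + t2          # Eqs. (35)-(36): Y_hat_F, X_hat_T
```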

4 Experiments and Results

4.1 Data Preparations

This research leverages two open-source datasets for model training and validation: the Next Generation Simulation (NGSIM) dataset [47, 48] and the HighD dataset [49]. The NGSIM dataset provides a comprehensive collection of vehicle trajectory data, capturing activity from eastbound I-80 in the San Francisco Bay Area and southbound US 101 in Los Angeles. This dataset encapsulates real-world highway scenarios through overhead camera recordings at a sampling rate of 10 Hz. The HighD dataset originates from aerial drone recordings executed at a 25 Hz frequency between 2017 and 2018 in the vicinity of Cologne, Germany. Spanning approximately 420 meters of bidirectional roadways, it records the movements of approximately 110,000 vehicles, encompassing both cars and trucks, traversing an aggregate distance of 45,000 km. After data pre-processing, the NGSIM dataset encompasses 662 thousand rows of data, capturing 1,380 individual trajectories, while the HighD dataset comprises 1.09 million data entries, including 3,913 individual trajectories. For training and evaluation, the data are partitioned into 70% for the training set and 30% for the test set. For the temporal parameters of the model, we adopt $T=30$ frames to represent the historical horizon and $F=50$ frames for the prediction horizon.

4.2 Training and Evaluation Metrics

Training loss of GIRAFFE

The training loss function for GIRAFFE is a summation of three terms:

$\mathcal{L}^{PRED}=\mathcal{L}_{pred}+\mathcal{L}_{int}+\mathcal{L}_{fut}$ (37)

The first component, $\mathcal{L}_{pred}$, is the mean squared error (MSE) between the fused predicted trajectory and the ground truth future trajectory:

$\mathcal{L}_{pred}=\|\hat{\mathbf{Y}}_{F}-\mathbf{Y}_{F}\|_{2}^{2}$ (38)

The second component, $\mathcal{L}_{int}$, represents the negative log-likelihood (NLL) of the predicted driving intentions, treating intention prediction as a classification task:

$\mathcal{L}_{int}=NLL(\hat{M};M)=-\sum_{m\in M}m\log P(\hat{m}|\mathbf{X}_{T})$ (39)

The third component, $\mathcal{L}_{fut}$, is the loss associated with the inference of the future-guided graph feature matrix, which is an intermediate output:

$\mathcal{L}_{fut}=\|\hat{H}_{F}-H_{F}\|_{2}^{2}$ (40)
Training loss of RHINO

The training loss function for RHINO is also a summation of three components:

$\mathcal{L}^{GEN}=\mathcal{L}_{elbo}+\mathcal{L}_{recon}+\mathcal{L}_{var}$ (41)

The first component, $\mathcal{L}_{elbo}$, corresponds to the ELBO loss [46] commonly used in variational autoencoders. It consists of a reconstruction loss term and a regularization term based on the Kullback-Leibler divergence between the learned distribution and a prior distribution:

$\mathcal{L}_{elbo}=\alpha\|\hat{\mathbf{Y}}_{F}-\mathbf{Y}_{F}\|_{2}^{2}+\beta\,\text{KL}\left(\mathcal{N}\left(\mu_{q},\text{Diag}(\sigma_{q}^{2})\right)\,\big\|\,\mathcal{N}\left(0,\lambda I\right)\right)$ (42)

The second component, $\mathcal{L}_{recon}$, represents the historical trajectory reconstruction loss, which measures how accurately the reconstructed historical trajectories match the true historical data:

$\mathcal{L}_{recon}=\lambda\|\hat{\mathbf{X}}_{T}-\mathbf{X}_{T}\|_{2}^{2}$ (43)

The final component, $\mathcal{L}_{var}$, is the variety loss, inspired by Social-GAN [21]. This loss encourages diversity in the predicted future trajectories by minimizing the error across multiple sampled future trajectories:

$\mathcal{L}_{var}=\min_{k}\|\hat{\mathbf{Y}}_{F}^{(k)}-\mathbf{Y}_{F}\|_{2}^{2}$ (44)
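
For concreteness, the variety loss of Eq. (44) keeps only the best of $K$ sampled futures; a short sketch, with array shapes as our assumption, is:

```python
import torch

def variety_loss(Y_samples: torch.Tensor, Y_true: torch.Tensor) -> torch.Tensor:
    """Eq. (44): Y_samples is (K, N, F, 2); Y_true is (N, F, 2)."""
    errs = ((Y_samples - Y_true.unsqueeze(0)) ** 2).sum(dim=(1, 2, 3))  # (K,)
    return errs.min()   # back-propagate only through the closest sample
```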

Table 1 presents the hyperparameter configurations used for the network architecture and the training process in the proposed framework.

Table 1: Hyperparameter Settings

| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| $T$ | 30 | decaying factor | 0.6 |
| $F$ | 50 | $\alpha$ | 1 |
| neuron # of MLPs | 128 | $\beta$ | 0.8 |
| learning rate | 0.001 | $\lambda$ | 0.5 |

Evaluation metrics. To ascertain the predictive accuracy of the model, we employ the Root Mean Square Error (RMSE) as the evaluative criterion. This metric quantifies the deviation between the predicted position, expressed as $(\hat{y}_{f,lat}^{l},\hat{y}_{f,lon}^{l})$, and the ground truth position, indicated by $(y_{f,lat}^{l},y_{f,lon}^{l})$, for all time steps $f$ within the prediction horizon $[T+1,T+F]$.

$RMSE=\sqrt{\frac{1}{LF}\sum_{l=1}^{L}\sum_{f=T+1}^{T+F}\left((\hat{y}_{f,lat}^{l}-y_{f,lat}^{l})^{2}+(\hat{y}_{f,lon}^{l}-y_{f,lon}^{l})^{2}\right)}$ (45)

where the superscript $l$ denotes the $l$-th test sample from the aggregate test sample set of length $L$.
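
Eq. (45) can be computed directly from stacked predictions and ground truth; a short sketch, assuming arrays of shape (L, F, 2) holding (lat, lon) positions in meters, is:

```python
import numpy as np

def rmse(Y_hat: np.ndarray, Y_true: np.ndarray) -> float:
    """Eq. (45): RMSE over L test samples and F predicted frames."""
    sq_err = ((Y_hat - Y_true) ** 2).sum(axis=-1)   # lat^2 + lon^2 per frame
    return float(np.sqrt(sq_err.mean()))            # average over L*F, then sqrt
```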

4.3 Results of Trajectory Generation

The experimental results for trajectory generation of the $K$ trajectories using the HighD dataset are presented in Figure 12. As can be seen, RHINO demonstrates strong generative capabilities, effectively producing plausible motion in a dynamic interactive traffic environment. To provide a more quantitative analysis, trajectory generation inaccuracies are illustrated in Figure 13, which displays the generated longitudinal and lateral trajectories, along with error box plots and heatmaps. The box plots reveal that errors in both axes increase with the prediction time step, owing to the nature of error propagation. However, the errors remain within an acceptable range, indicating decent model performance and high precision in trajectory generation. Notably, the model maintains a lower error margin for shorter prediction horizons, which is critical for short-term planning and reactive maneuvers in dynamic traffic environments.

Refer to caption
Figure 12: Trajectory generation results in highway scenarios.
Refer to caption
Figure 13: Longitudinal and lateral trajectory generation error analysis.
Refer to caption
Figure 14: Trajectory generation with hypergraph inference.

The experiment focused on predicting vehicle trajectories within mixed traffic environments on highways by employing hypergraph inference to model group-based interactions. Through the application of hypergraph models at varying scales $s=2,3,5$, the experiment captured the evolution of multi-vehicle interactions across both historical and future horizons. The figures depict these dynamics through three distinct columns: the first column presents vehicle trajectories and the corresponding hyperedges, visualized as polygons that encapsulate groups of interacting vehicles. The second column illustrates the affinity matrix, where both rows and columns represent vehicles, and the strength of their relationships is indicated by the matrix values. The third column shows the incidence matrix, detailing the relationship between nodes and hyperedges, with each column representing a hyperedge and each vehicle’s involvement in that hyperedge marked by a 1 in the corresponding row.

The hypergraph-based approach is particularly effective in modeling complex, higher-order interactions that are beyond the scope of traditional pairwise models. By forming hyperedges that encompass multiple vehicles, the model captures the collective influence that a group’s behavior exerts on an individual vehicle. For instance, in the scenario shown in Figure 14, at scale $s=5$, when the target vehicle TAR initiates a lane change, the hypergraph reflects the interaction not only with a single neighboring vehicle but also with multiple surrounding vehicles, such as the following vehicle F, the preceding vehicle P, and the preceding vehicle RP in the right lane. More examples are illustrated in Appendix A. This capability to model group-wise interactions across different scales is essential for accurately predicting vehicle trajectories in congested highway environments, where the actions of one vehicle can trigger ripple effects that influence an entire group. The hypergraph’s dynamic formation of hyperedges ensures that predicted trajectories remain adaptable and responsive to broader traffic conditions.

4.4 Comparisons and Ablation Study

To evaluate the advantages of our proposed method, state-of-the-art methods (i.e., Social-LSTM (S-LSTM) [20], Convolutional Social-LSTM (CS-LSTM) [31], Planning-informed prediction (PiP) [50], Graph-based Interaction-aware Trajectory Prediction (GRIP) [26], and the Spatial-temporal dynamic attention network (STDAN) [22]) are compared.

The comparison results are presented in Table 2 and Figure 15. The proposed framework demonstrates good performance with respect to RMSE across a prediction horizon of 50 frames when compared with existing baseline models, exhibiting a reduced loss in comparison to S-LSTM, CS-LSTM, PiP, and GRIP. These outcomes suggest that the proposed model effectively captures salient features pertinent to long-term predictions. In summary, the proposed framework outperforms baseline models on the HighD dataset and delivers commendable performance on the NGSIM dataset.

Since the RHINO adopts the GIRAFFE, we further compare the trajectory generation capability of RHINO with our previous work [25] and its enhanced version GIRAFFE. Both RHINO model and the enhanced GIRAFFE model consistently outperform the baseline models, demonstrating superior performance in various metrics. This suggests that our proposed approaches effectively address the limitations present in prevailing models by robustly capturing complex interactions.

Table 2: Prediction Error Obtained by Different Models in RMSE (m)
Dataset Horizon (Frame) S-LSTM CS-LSTM PiP GRIP STDAN GIRAFFE RHINO
NGSIM 10 0.65 0.61 0.55 0.37 0.42 0.38 0.32
20 1.31 1.27 1.18 0.86 1.01 0.89 0.78
30 2.16 2.08 1.94 1.45 1.69 1.45 1.34
40 3.25 3.10 2.88 2.21 2.56 2.46 2.17
50 4.55 4.37 4.04 3.16 3.67 3.24 2.97
HighD 10 0.22 0.22 0.17 0.29 0.19 0.19 0.19
20 0.62 0.61 0.52 0.68 0.27 0.42 0.26
30 1.27 1.24 1.05 1.17 0.48 0.81 0.42
40 2.15 2.10 1.76 1.88 0.91 1.13 0.65
50 3.41 3.27 2.63 2.76 1.66 1.56 0.89
Figure 15: Prediction error obtained by different models in RMSE on NGSIM dataset (left) and HighD dataset (right).

An ablation study is conducted to provide more insight into the performance of our RHINO model, in particular the impact of individual components on prediction performance, by disabling the corresponding component of the full RHINO. We consider the following three variants:

  • 1. RHINO w/o HG (hypergraph): this variant does not use the multi-scale hypergraph representation and only adopts pair-wise connected graph representations in the Hypergraph Relational Encoder.

  • 2. RHINO w/o MM (multi-modal): this variant does not adopt the multi-agent multi-modal trajectory prediction results and only uses a single predicted future state for each agent as the input to RHINO.

  • 3. RHINO w/o PDL (posterior distribution learner): this variant skips the Posterior Distribution Learner and directly inputs the graph embedding into the Motion Generator.

Table 3: Ablation Test Results of RHINO in RMSE (m)
Horizon (Frame) RHINO w/o HG RHINO w/o MM RHINO w/o PDL RHINO
10 0.21 0.22 0.24 0.19
20 0.31 0.37 0.42 0.26
30 0.68 0.73 0.80 0.42
40 0.97 1.06 1.18 0.65
50 1.25 1.34 1.57 0.89
Figure 16: Ablation Study of RHINO.

The effects of these model design variations are presented in Table 3 and Figure 16. Removing any component from RHINO degrades performance to a varying degree. Compared to the full RHINO, omitting the multi-scale hypergraphs yields an evident increase in prediction error across the prediction horizon, indicating that modeling and reasoning about group-wise interactions with hypergraphs, rather than solely pair-wise interactions, enhances prediction accuracy, underscoring the necessity of hypergraphs. Excluding the multi-agent multi-modal trajectory prediction input leads to a more substantial degradation, highlighting the importance of incorporating multi-modal motion states and reasoning about the corresponding group-wise interactions among the multiple driving behaviors of multiple agents. Lastly, the performance drop in the absence of the Posterior Distribution Learner emphasizes its critical role in handling the stochasticity of each agent's behavior. Together, these experiments justify the effectiveness of the full model.

5 Conclusions

In this study, we proposed a hypergraph-enabled multi-modal probabilistic motion prediction framework with relational reasoning. The framework consists of two main components: GIRAFFE and RHINO. GIRAFFE predicts interactive vehicular trajectories while accounting for behavioral modalities. Building on that, RHINO leverages the flexibility and strength of hypergraphs in modeling group-wise interactions to perform relational reasoning among vehicles and modalities and render plausible vehicle trajectories. The framework extends traditional interaction models by introducing an agent-behavior hypergraph, an approach that better aligns with traffic physics while being grounded in the mathematical rigor of hypergraph theory. Furthermore, the approach employs representation learning to enable explicit relational reasoning over interactions, considering future relations and interactions and learning a posterior distribution to handle the stochasticity of each vehicle's behavior. As a result, the framework excels at capturing high-dimensional, group-wise interactions across various behavioral modalities.

The framework is tested using the NGSIM and HighD datasets. The results show that the proposed framework effectively models the interactions among groups of vehicles and their corresponding multi-modal behaviors. Comparative studies demonstrate that the framework outperforms prevailing algorithms in prediction accuracy. To further validate the effectiveness of each component, ablation studies were conducted, revealing that the full model performs best.

Several potential extensions of the framework include incorporating road geometries, vehicle types, and real-time weather data to improve trajectory prediction. By integrating weather information from sources like the OpenWeather API, the system could adjust predictions based on conditions such as temperature, wind, and precipitation, enhancing safety and route optimization [51]. Additional enhancements, like traffic signal integration, V2V and V2I communication, and human driver intent, could further improve accuracy and reliability in dynamic urban environments, minimizing disruptions and fostering safer, more informed autonomous driving.

Acknowledgement:

This research is funded by the Federal Highway Administration (FHWA) Exploratory Advanced Research program (693JJ323C000010). The results do not reflect FHWA's opinions.

References

  • [1] Gao, J., Sun, C., Zhao, H., Shen, Y., Anguelov, D., Li, C. & Schmid, C. Vectornet: Encoding hd maps and agent dynamics from vectorized representation. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition. pp. 11525-11533 (2020)
  • [2] Li, S., Wang, Z., Zheng, Y., Sun, Q., Gao, J., Ma, F. & Li, K. Synchronous and asynchronous parallel computation for large-scale optimal control of connected vehicles. Transportation Research Part C: Emerging Technologies. 121 pp. 102842 (2020)
  • [3] Shi, H., Zhou, Y., Wu, K., Wang, X., Lin, Y. & Ran, B. Connected automated vehicle cooperative control with a deep reinforcement learning approach in a mixed traffic environment. Transportation Research Part C: Emerging Technologies. 133 pp. 103421 (2021), https://www.sciencedirect.com/science/article/pii/S0968090X21004150
  • [4] Shi, H., Zhou, Y., Wu, K., Chen, S., Ran, B. & Nie, Q. Physics-informed deep reinforcement learning-based integrated two-dimensional car-following control strategy for connected automated vehicles. Knowledge-Based Systems. 269 pp. 110485 (2023)
  • [5] Zhou, D., Hang, P. & Sun, J. Reasoning graph-based reinforcement learning to cooperate mixed connected and autonomous traffic at unsignalized intersections. Transportation Research Part C: Emerging Technologies. 167 pp. 104807 (2024)
  • [6] Liu, H., Wu, K., Fu, S., Shi, H. & Xu, H. Predictive analysis of vehicular lane changes: An integrated lstm approach. Applied Sciences. 13, 10157 (2023)
  • [7] Suo, S., Regalado, S., Casas, S. & Urtasun, R. Trafficsim: Learning to simulate realistic multi-agent behaviors. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition. pp. 10400-10409 (2021)
  • [8] Deo, N. & Trivedi, M. Multi-modal trajectory prediction of surrounding vehicles with maneuver based lstms. 2018 IEEE Intelligent Vehicles Symposium (IV). pp. 1179-1184 (2018)
  • [9] Wang, H., Meng, Q., Chen, S. & Zhang, X. Competitive and cooperative behaviour analysis of connected and autonomous vehicles across unsignalised intersections: A game-theoretic approach. Transportation Research Part B: Methodological. 149 pp. 322-346 (2021)
  • [10] Trentin, V., Artuñedo, A., Godoy, J. & Villagra, J. Multi-modal interaction-aware motion prediction at unsignalized intersections. IEEE Transactions On Intelligent Vehicles. (2023)
  • [11] Ali, Y., Bliemer, M., Zheng, Z. & Haque, M. Cooperate or not? Exploring drivers’ interactions and response times to a lane-changing request in a connected environment. Transportation Research Part C: Emerging Technologies. 120 pp. 102816 (2020)
  • [12] Karle, P., Geisslinger, M., Betz, J. & Lienkamp, M. Scenario understanding and motion prediction for autonomous vehicles—review and comparison. IEEE Transactions On Intelligent Transportation Systems. 23, 16962-16982 (2022)
  • [13] Li, G., Jiang, B., Zhu, H., Che, Z. & Liu, Y. Generative attention networks for multi-agent behavioral modeling. Proceedings Of The AAAI Conference On Artificial Intelligence. 34, 7195-7202 (2020)
  • [14] Han, Z., Fink, O. & Kammer, D. Collective relational inference for learning heterogeneous interactions. Nature Communications. 15, 3191 (2024)
  • [15] Zhong, X., Zhou, Y., Ahn, S. & Chen, D. Understanding heterogeneity of automated vehicles and its traffic-level impact: A stochastic behavioral perspective. Transportation Research Part C: Emerging Technologies. 164 pp. 104667 (2024)
  • [16] Kipf, T., Fetaya, E., Wang, K., Welling, M. & Zemel, R. Neural relational inference for interacting systems. International Conference On Machine Learning. pp. 2688-2697 (2018)
  • [17] Li, J., Yang, F., Tomizuka, M. & Choi, C. Evolvegraph: Multi-agent trajectory prediction with dynamic relational reasoning. Advances In Neural Information Processing Systems. 33 pp. 19783-19794 (2020)
  • [18] Xu, C., Wei, Y., Tang, B., Yin, S., Zhang, Y., Chen, S. & Wang, Y. Dynamic-group-aware networks for multi-agent trajectory prediction with relational reasoning. Neural Networks. 170 pp. 564-577 (2024)
  • [19] Park, D., Ryu, H., Yang, Y., Cho, J., Kim, J. & Yoon, K. Leveraging future relationship reasoning for vehicle trajectory prediction. ArXiv Preprint ArXiv:2305.14715. (2023)
  • [20] Alahi, A., Goel, K., Ramanathan, V., Robicquet, A., Fei-Fei, L. & Savarese, S. Social lstm: Human trajectory prediction in crowded spaces. Proceedings Of The IEEE Conference On Computer Vision And Pattern Recognition. pp. 961-971 (2016)
  • [21] Gupta, A., Johnson, J., Fei-Fei, L., Savarese, S. & Alahi, A. Social gan: Socially acceptable trajectories with generative adversarial networks. Proceedings Of The IEEE Conference On Computer Vision And Pattern Recognition. pp. 2255-2264 (2018)
  • [22] Chen, X., Zhang, H., Zhao, F., Hu, Y., Tan, C. & Yang, J. Intention-aware vehicle trajectory prediction based on spatial-temporal dynamic attention network for internet of vehicles. IEEE Transactions On Intelligent Transportation Systems. 23, 19471-19483 (2022)
  • [23] Zhang, K., Zhao, L., Dong, C., Wu, L. & Zheng, L. AI-TP: Attention-based interaction-aware trajectory prediction for autonomous driving. IEEE Transactions On Intelligent Vehicles. 8, 73-83 (2022)
  • [24] Kim, H., Kim, D., Kim, G., Cho, J. & Huh, K. Multi-head attention based probabilistic vehicle trajectory prediction. 2020 IEEE Intelligent Vehicles Symposium (IV). pp. 1720-1725 (2020)
  • [25] Wu, K., Zhou, Y., Shi, H., Li, X. & Ran, B. Graph-Based Interaction-Aware Multimodal 2D Vehicle Trajectory Prediction Using Diffusion Graph Convolutional Networks. IEEE Transactions On Intelligent Vehicles. 9, 3630-3643 (2024)
  • [26] Li, X., Ying, X. & Chuah, M. Grip: Graph-based interaction-aware trajectory prediction. 2019 IEEE Intelligent Transportation Systems Conference (ITSC). pp. 3960-3966 (2019)
  • [27] Zhou, H., Ren, D., Xia, H., Fan, M., Yang, X. & Huang, H. Ast-gnn: An attention-based spatio-temporal graph neural network for interaction-aware pedestrian trajectory prediction. Neurocomputing. 445 pp. 298-308 (2021)
  • [28] Huang, Y., Bi, H., Li, Z., Mao, T. & Wang, Z. Stgat: Modeling spatial-temporal interactions for human trajectory prediction. Proceedings Of The IEEE/CVF International Conference On Computer Vision. pp. 6272-6281 (2019)
  • [29] Amirian, J., Hayet, J. & Pettré, J. Social ways: Learning multi-modal distributions of pedestrian trajectories with gans. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition Workshops. pp. 0-0 (2019)
  • [30] Khandelwal, S., Qi, W., Singh, J., Hartnett, A. & Ramanan, D. What-if motion prediction for autonomous driving. ArXiv Preprint ArXiv:2008.10587. (2020)
  • [31] Deo, N. & Trivedi, M. Convolutional social pooling for vehicle trajectory prediction. Proceedings Of The IEEE Conference On Computer Vision And Pattern Recognition Workshops. pp. 1468-1476 (2018)
  • [32] Messaoud, K., Yahiaoui, I., Verroust-Blondet, A. & Nashashibi, F. Attention based vehicle trajectory prediction. IEEE Transactions On Intelligent Vehicles. 6, 175-185 (2020)
  • [33] Feng, X., Cen, Z., Hu, J. & Zhang, Y. Vehicle trajectory prediction using intention-based conditional variational autoencoder. 2019 IEEE Intelligent Transportation Systems Conference (ITSC). pp. 3514-3519 (2019)
  • [34] Wang, Y., Zhao, S., Zhang, R., Cheng, X. & Yang, L. Multi-vehicle collaborative learning for trajectory prediction with spatio-temporal tensor fusion. IEEE Transactions On Intelligent Transportation Systems. 23, 236-248 (2020)
  • [35] Zhou, Y., Zhong, X., Chen, Q., Ahn, S., Jiang, J. & Jafarsalehi, G. Data-driven analysis for disturbance amplification in car-following behavior of automated vehicles. Transportation Research Part B: Methodological. 174 pp. 102768 (2023)
  • [36] Huang, Z., Wu, J. & Lv, C. Driving behavior modeling using naturalistic human driving data with inverse reinforcement learning. IEEE Transactions On Intelligent Transportation Systems. 23, 10239-10251 (2021)
  • [37] Sheng, Z., Xu, Y., Xue, S. & Li, D. Graph-based spatial-temporal convolutional network for vehicle trajectory prediction in autonomous driving. IEEE Transactions On Intelligent Transportation Systems. 23, 17654-17665 (2022)
  • [38] Gao, Y., Zhang, Z., Lin, H., Zhao, X., Du, S. & Zou, C. Hypergraph learning: Methods and practices. IEEE Transactions On Pattern Analysis And Machine Intelligence. 44, 2548-2566 (2020)
  • [39] Feng, Y., You, H., Zhang, Z., Ji, R. & Gao, Y. Hypergraph neural networks. Proceedings Of The AAAI Conference On Artificial Intelligence. 33, 3558-3565 (2019)
  • [40] Zhou, R., Zhou, H., Gao, H., Tomizuka, M., Li, J. & Xu, Z. Grouptron: Dynamic multi-scale graph convolutional networks for group-aware dense crowd trajectory forecasting. 2022 International Conference On Robotics And Automation (ICRA). pp. 805-811 (2022)
  • [41] Xu, C., Li, M., Ni, Z., Zhang, Y. & Chen, S. Groupnet: Multiscale hypergraph neural networks for trajectory prediction with relational reasoning. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition. pp. 6498-6507 (2022)
  • [42] Li, J., Hua, C., Ma, H., Park, J., Dax, V. & Kochenderfer, M. Multi-Agent Dynamic Relational Reasoning for Social Robot Navigation. ArXiv Preprint ArXiv:2401.12275. (2024)
  • [43] Wu, Y., Zhuang, D., Labbe, A. & Sun, L. Inductive graph neural networks for spatiotemporal kriging. Proceedings Of The AAAI Conference On Artificial Intelligence. 35, 4478-4485 (2021)
  • [44] Li, Y., Yu, R., Shahabi, C. & Liu, Y. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. ArXiv Preprint ArXiv:1707.01926. (2017)
  • [45] Jang, E., Gu, S. & Poole, B. Categorical reparameterization with gumbel-softmax. ArXiv Preprint ArXiv:1611.01144. (2016)
  • [46] Pagnoni, A., Liu, K. & Li, S. Conditional variational autoencoder for neural machine translation. ArXiv Preprint ArXiv:1812.04405. (2018)
  • [47] Colyar, J. & Halkias, J. U.S. Highway 101 dataset. (2007)
  • [48] Colyar, J. & Halkias, J. U.S. Highway 80 dataset. (2006)
  • [49] Krajewski, R., Bock, J., Kloeker, L. & Eckstein, L. The highD dataset: A drone dataset of naturalistic vehicle trajectories on German highways for validation of highly automated driving systems. 2018 IEEE Conference On Intelligent Transportation Systems (ITSC). (2018), doi: 10.1109/ITSC.2018.8569552
  • [50] Song, H., Ding, W., Chen, Y., Shen, S., Wang, M. & Chen, Q. Pip: Planning-informed trajectory prediction for autonomous driving. Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16. pp. 598-614 (2020)
  • [51] Ye, X., Li, S., Das, S. & Du, J. Enhancing routes selection with real-time weather data integration in spatial decision support systems. Spatial Information Research. 32, 373-381 (2024)

Appendix A Experiment results of trajectory generation with hypergraph inference

Figure 17: Scenario 2 of Trajectory generation with hypergraph inference.

In Scenario 2, at scale s = 5 (Figure 17), the target vehicle TAR forms a strong historical interaction with multiple vehicles, including the preceding vehicle P, the preceding vehicle LP in the left lane, the following vehicle LF in the left lane, and the following vehicle F. However, in the future states, vehicle LF exits the interaction group, while the preceding vehicle RP in the right lane joins the group as the target vehicle accelerates toward RF.

Figure 18: Scenario 3 of Trajectory generation with hypergraph inference.

In Scenario 3, at scale s = 3 (Figure 18), the target vehicle TAR forms a weak historical interaction with the preceding vehicle P and the preceding vehicle LP in the left lane. As traffic conditions change over time, these interactions strengthen in the future, highlighting the need to account for dynamic shifts when predicting future states, since historical data alone may not capture the evolving complexity of vehicle interactions. These findings emphasize the importance of adaptive models in predicting multi-agent traffic dynamics.

Appendix B Diffusion Graph Convolutional Networks (DGCN)

The Diffusion Graph Convolutional Network (DGCN) module models bidirectional dependencies between nodes in the graph embedding, following [43, 44]. The DGCN layer, denoted $\mathrm{DGCN}(\cdot)$, applies diffusion convolution to the graph signal in both the forward and reverse directions:

H_{l+1} = \mathrm{DGCN}(H_l) = \sum_{k=1}^{K}\left(T_k(\bar{A}_f)\cdot\tilde{H}_l\cdot\Theta_{f,l}^{k} + T_k(\bar{A}_b)\cdot\tilde{H}_l\cdot\Theta_{b,l}^{k}\right)    (46)

In this equation, $\tilde{H}_l$ denotes the input graph signal at the $l$-th layer, with the input to the first layer being the masked feature matrix $X$, and $H_{l+1}$ denotes the layer's output. The forward transition matrix $\bar{A}_f = A/\mathrm{rowsum}(A)$ captures downstream node dependencies, while the backward transition matrix $\bar{A}_b = A^{T}/\mathrm{rowsum}(A^{T})$ captures upstream dependencies. The function $T_k(\cdot)$ is the Chebyshev polynomial used to approximate convolution with the $k$-th order neighbors of each node, defined by the recursion $T_k(X) = 2X\cdot T_{k-1}(X) - T_{k-2}(X)$. The learnable parameters $\Theta_{f,l}^{k}$ and $\Theta_{b,l}^{k}$ weight the input data at the $l$-th layer. The forward diffusion process captures the influence of surrounding vehicles on a given vehicle, while the reverse diffusion process models the influence that the vehicle transmits to its surroundings.
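As an illustration of Eq. (46), the following PyTorch sketch implements one diffusion graph convolution layer, assuming a dense adjacency matrix A of shape (N, N) and node features H of shape (N, d); the class and variable names are ours, and this is a minimal sketch rather than the reference implementation of [43, 44].

```python
import torch
import torch.nn as nn

class DGCNLayer(nn.Module):
    """Sketch of the bidirectional diffusion convolution in Eq. (46)."""

    def __init__(self, in_dim: int, out_dim: int, K: int):
        super().__init__()
        self.K = K
        # Theta_{f,l}^k and Theta_{b,l}^k: separate weights per direction and order k.
        self.theta_f = nn.ModuleList([nn.Linear(in_dim, out_dim, bias=False) for _ in range(K)])
        self.theta_b = nn.ModuleList([nn.Linear(in_dim, out_dim, bias=False) for _ in range(K)])

    @staticmethod
    def _cheb_terms(A, H, K):
        # Chebyshev recursion applied to the signal:
        # Z_0 = H, Z_1 = A H, Z_k = 2 A Z_{k-1} - Z_{k-2}.
        terms = [H, A @ H]
        for _ in range(2, K + 1):
            terms.append(2 * (A @ terms[-1]) - terms[-2])
        return terms[1:K + 1]  # keep T_1 .. T_K, matching the sum over k = 1..K

    def forward(self, H, A):
        # Forward/backward transition matrices via row normalization.
        A_f = A / A.sum(dim=1, keepdim=True).clamp(min=1e-8)
        A_b = A.T / A.T.sum(dim=1, keepdim=True).clamp(min=1e-8)
        Tf = self._cheb_terms(A_f, H, self.K)
        Tb = self._cheb_terms(A_b, H, self.K)
        out = 0.0
        for k in range(self.K):
            out = out + self.theta_f[k](Tf[k]) + self.theta_b[k](Tb[k])
        return out

# Hypothetical usage: 8 nodes with 16-dim features, random dense affinities.
layer = DGCNLayer(in_dim=16, out_dim=32, K=3)
H = torch.randn(8, 16)
A = torch.rand(8, 8)
out = layer(H, A)  # shape (8, 32)
```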

Appendix C Gumbel Softmax Distribution

The Gumbel-Softmax distribution [45] provides a differentiable approximation of the categorical distribution, which is crucial for neural networks that rely on gradient-based optimization. Traditional categorical sampling involves non-differentiable operations such as argmax, which pose challenges for backpropagation. Gumbel-Softmax addresses this by replacing argmax with a differentiable softmax, allowing gradients to flow during training. Building on the Gumbel-Max trick, which adds Gumbel noise to the log-probabilities and then applies argmax, Gumbel-Softmax substitutes a softmax to maintain differentiability. The sample vector $d$ is given by:

d_i = \frac{\exp\left(\left(\log(\pi_i) + g_i\right)/\tau\right)}{\sum_{j=1}^{k}\exp\left(\left(\log(\pi_j) + g_j\right)/\tau\right)}, \text{ for } i = 1,\dots,k    (47)

Here, $\pi_i$ are the categorical probabilities, $g_i$ is Gumbel noise, and $\tau$ controls the distribution's sharpness. As $\tau$ approaches zero, the Gumbel-Softmax sample approaches a one-hot vector; larger $\tau$ values yield a smoother distribution, useful for exploration early in training. This property makes Gumbel-Softmax valuable for tasks such as variational autoencoders and reinforcement learning, where differentiable discrete sampling is needed.
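A minimal PyTorch sketch of Eq. (47) is given below for exposition; note that PyTorch also ships an equivalent built-in, torch.nn.functional.gumbel_softmax.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Differentiable categorical sample d following Eq. (47)."""
    # Gumbel(0, 1) noise via inverse transform sampling: g = -log(-log(u)).
    u = torch.rand_like(logits).clamp(min=1e-20, max=1.0 - 1e-7)
    g = -torch.log(-torch.log(u))
    # Softmax over (log pi + g) / tau replaces the hard argmax of Gumbel-Max.
    return F.softmax((logits + g) / tau, dim=-1)

# Illustrative usage: lower temperatures push samples toward one-hot vectors.
logits = torch.log(torch.tensor([0.2, 0.5, 0.3]))
for tau in (5.0, 1.0, 0.1):
    print(tau, gumbel_softmax_sample(logits, tau))
```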