GDTS: Goal-Guided Diffusion Model with Tree Sampling for Multi-Modal Pedestrian Trajectory Prediction
Abstract
Accurate prediction of pedestrian trajectories is crucial for improving the safety of autonomous driving. However, this task is generally nontrivial due to the inherent stochasticity of human motion, which naturally requires the predictor to generate multi-modal predictions. Previous works leverage various generative methods, such as GANs and VAEs, for pedestrian trajectory prediction. Nevertheless, these methods may suffer from mode collapse and relatively low-quality results. The denoising diffusion probabilistic model (DDPM) has recently been applied to trajectory prediction thanks to its simple training process and powerful reconstruction ability. However, current diffusion-based methods do not fully utilize the input information and usually require many denoising iterations, leading to long inference times or an additional network for initialization. To address these challenges and facilitate the use of diffusion models in multi-modal trajectory prediction, we propose GDTS, a novel Goal-Guided Diffusion Model with Tree Sampling for multi-modal trajectory prediction. Exploiting the "goal-driven" characteristic of human motion, GDTS leverages goal estimation to guide the generation of the diffusion network. A two-stage tree sampling algorithm is presented, which leverages a common feature to reduce the inference time and improve the accuracy of multi-modal predictions. Experimental results demonstrate that our proposed framework achieves performance comparable to the state of the art with real-time inference speed on public datasets.
I INTRODUCTION
Time-series forecasting has been widely applied in various fields, including finance [1], climate [2], and healthcare [3]. With the rapid development of autonomous driving techniques, trajectory prediction, which can be formulated as a time-series forecasting problem, has also gained great attention [4, 5, 6]. Accurate trajectory prediction of surrounding agents can enhance the ability of autonomous systems to handle interactions, thereby improving the safety and effectiveness of the entire system. Many trajectory prediction datasets categorize traffic agents into multiple classes based on their properties [7, 8], with pedestrians treated as a distinct class. Since pedestrians are relatively more vulnerable when interacting with other classes of traffic agents, they require special attention. Thus, in this paper, we focus on pedestrian trajectory prediction.
Early works on pedestrian trajectory prediction mainly focused on single-modal prediction [9, 10, 11]. However, these deterministic methods did not consider the inherent multi-modality of pedestrian movements. To address this problem, researchers have applied various generative methods to pedestrian trajectory prediction. For instance, Gupta et al. [4] utilized a Generative Adversarial Network (GAN) for multi-modal future generation, while Trajectron++ [12] used a Conditional Variational Autoencoder (CVAE) decoder. Nevertheless, GANs are hard to train and may suffer from mode collapse, while VAE-based methods can produce unrealistic results.

Recently, denoising diffusion probabilistic models [13] have achieved remarkable success in the computer vision field [14, 15]. Many researchers have also adopted this generative method for robotics and autonomous driving applications. For example, Gu et al. [16] proposed MID, a diffusion-based model for pedestrian trajectory prediction. Following MID, LED [17] trained a denoising module with the same standard training schedule and incorporated a leapfrog initializer to skip many denoising steps and reduce inference time. Jiang et al. [18] proposed MotionDiffuser for multi-agent motion prediction and controllable trajectory synthesis.
Incorporating goal information into trajectory prediction models has been shown to improve the precision of predictions [19, 20, 21, 22]. Motivated by this, we combine goal estimation with diffusion models to leverage the "goal-driven" characteristic of human trajectories. The standard diffusion sampling algorithm used for multi-modal pedestrian trajectory prediction [16] simply repeats the sampling procedure to generate multiple possible future trajectories, which can be rather time-consuming, as depicted in Fig. 1(a).
To address the aforementioned challenges, we propose a novel framework for multi-modal trajectory prediction, as shown in Fig. 2. In our proposed framework, the goal estimation module first estimates multiple possible goals to ensure the diversity of the predictions. These goals, along with the history motion information of the target agent, are then fed into the diffusion-based trajectory prediction module. To accelerate the diffusion sampling process, we design a two-stage tree sampling algorithm, as Fig. 1(b) illustrates. In the trunk stage, a common feature is first leveraged to generate a roughly denoised future trajectory; this stage needs to run only once for an arbitrary number of predictions, just as a tree has only one trunk. The branch stage then further denoises the trunk-stage trajectory conditioned on diverse features and generates a corresponding trajectory prediction for each goal estimation. We leverage the powerful reconstruction ability of the deterministic diffusion model to further improve the accuracy of the prediction results. Experimental results show that our GDTS framework achieves high prediction accuracy and fast inference speed. In summary, the main contributions of this paper are as follows:
• We propose a novel framework for multi-modal pedestrian trajectory prediction named Goal-Guided Diffusion Model with Tree Sampling (GDTS), which integrates goal estimation into a conditional denoising diffusion model to improve prediction performance.
• We propose a two-stage tree sampling algorithm that increases prediction accuracy and accelerates inference without requiring any additional network.
• We conduct a series of experiments on various datasets, including ETH/UCY, the Stanford Drone Dataset, and the intersection Drone Dataset; the results demonstrate the state-of-the-art performance of our proposed framework on large-scale datasets.
II RELATED WORKS
II-A Goal-Guided Pedestrian Trajectory Prediction
In general, pedestrian trajectories are goal-conditioned, as goals seldom change rapidly within the prediction horizon. Precise goal prediction can reduce uncertainty and increase the accuracy of subsequent trajectory prediction [20]. Several methods have been proposed to incorporate goal information into trajectory prediction. For example, Dendorfer et al. [23] trained multiple generators, each capturing a different distribution associated with a specific mode. Mangalam et al. [24] proposed a method that estimates goal points and generates multi-modal predictions using a VAE conditioned on these estimated goals. In Y-Net [20], a U-Net architecture is employed for goal and trajectory prediction, where the predicted goal information is also fed into the trajectory prediction decoder. Chiara et al. [21] proposed a recurrent network, Goal-SAR, which predicts each future position recurrently based on the goal and all history information up to that time step. NSP-SFM [25], proposed by Yue et al., also takes advantage of a goal estimation network with a similar structure; by predicting the parameters of goal attraction, inter-agent repulsion, and environment repulsion at each time step and generating futures with a CVAE, NSP-SFM reached SOTA performance. However, recurrent prediction can be time-consuming. Moreover, NSP-SFM tunes hyperparameters separately for each scene in ETH/UCY, whereas other methods (including ours) use a single set of hyperparameters, making direct comparison unfair. In this paper, we likewise divide the pedestrian trajectory prediction task into goal estimation and trajectory prediction.
II-B Diffusion Models and Application
The diffusion probabilistic model was first proposed by Sohl-Dickstein et al. [26] and has developed rapidly since DDPM [13] was proposed. It has achieved great success in various fields, such as computer vision [14, 15] and seq-to-seq models [27, 28]. Diffusion models have also been adopted in robotics applications. For instance, Diffuser [29] generates trajectories for robot planning and control using diffusion models. In Trace and Pace [30], the diffusion model is used to produce realistic human trajectories. In the context of pedestrian trajectory prediction, MID [16] was the first work to adopt the diffusion model for this task. Following the standard DDPM algorithm, a transformer acts as the diffusion network to generate multiple predictions conditioned on the history trajectory.
Despite its effectiveness in trajectory prediction, the diffusion model suffers from the large number of denoising steps, which hampers real-time performance. To address this issue, various improvements have been proposed to accelerate the inference of diffusion models [31]. DDIM [32] generalizes the forward process of the diffusion model to a non-Markovian process to speed up sampling. Salimans et al. [33] repeatedly applied knowledge distillation to a deterministic diffusion sampler to distill a new diffusion model with fewer sampling steps. Specific to multi-modal trajectory prediction, Mao et al. [17] propose LED, which trains an additional leapfrog initializer to accelerate sampling. Similar to our sampling algorithm, LED divides the diffusion sampling procedure into two stages and uses the extra leapfrog initializer to estimate the roughly denoised distribution and skip the first stage. Instead of using an additional neural network, we use the same diffusion network to learn this distribution.

III PROPOSED METHOD
III-A Problem Formulation
The pedestrian trajectory prediction problem can be formulated as follows. Given the current frame $t_0$, the semantic map $S$ of the scene, and the history positions $X = \{x_{t_0 - T_h + 1}, \dots, x_{t_0}\}$ of the target agent over the past $T_h$ frames, the objective is to obtain $N$ possible future trajectory predictions $\tilde{Y} = \{\tilde{y}_{t_0 + 1}, \dots, \tilde{y}_{t_0 + T_f}\}$ of the agent over the next $T_f$ frames. Notation with a tilde denotes predicted results.
Our proposed framework divides the overall trajectory prediction task into goal estimation and trajectory prediction, as illustrated in Fig. 2. First, a network is applied to predict a probability distribution heat-map of the goal position. Then, a set of possible goals of the target agent is sampled from this heat-map. Using features extracted from these goal estimations and the history information as guidance, a denoising diffusion network with the two-stage tree sampling algorithm produces the multi-modal future trajectory prediction of the target agent. In the remainder of this section, we introduce the proposed framework in detail, followed by the loss function and training scheme.
III-B Goal Estimation Module
Considering the state-of-the-art performance of Y-Net [20] and Goal-SAR [21] on pedestrian trajectory prediction, we adapt their goal predictor for goal estimation. The goal predictor takes the semantic map and the history trajectory as input. Since our work focuses on trajectory prediction, we assume that the semantic network is pre-trained to obtain the semantic map $S \in \mathbb{R}^{H \times W \times C}$, where $H$ and $W$ are the height and width of the input image and $C$ is the number of semantic classes. Since we use a probability heat-map to describe the goal distribution, pre-processing is needed to keep the input and output consistent: the inputted history positions of the past $T_h$ frames are converted into a 2D Gaussian probability distribution heat-map for each frame, with the highest probability assigned to the ground truth position $x_t$. These per-frame heat-maps are then stacked to form a trajectory heat-map $\mathcal{H}_{past} \in \mathbb{R}^{H \times W \times T_h}$. Finally, $S$ and $\mathcal{H}_{past}$ are concatenated and fed into a U-Net architecture to predict the future trajectory heat-map $\tilde{\mathcal{H}} \in \mathbb{R}^{H \times W \times T_f}$, whose last channel represents the position distribution at the final frame $t_0 + T_f$, i.e., the goal distribution. By sampling from this goal probability heat-map, $N$ estimated goals are obtained, referred to as diverse goals. Meanwhile, the position with the highest probability is selected as the common goal $g_c$. Therefore, a total of $N+1$ estimated goals are generated.
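To make this pre-processing step concrete, the following is a minimal sketch (in PyTorch-style Python) of converting past positions into stacked per-frame Gaussian heat-maps; the kernel width `sigma` and the function name are illustrative assumptions, not taken from the paper.

```python
import torch

def positions_to_heatmaps(positions, height, width, sigma=4.0):
    """Convert T_h past positions (pixel coords) into a (T_h, H, W) stack of
    2D Gaussian heat-maps, one per frame, peaked at the observed position.
    `sigma` (in pixels) is an assumed hyperparameter, not from the paper.
    """
    T_h = positions.shape[0]
    ys = torch.arange(height).view(1, height, 1).float()
    xs = torch.arange(width).view(1, 1, width).float()
    px = positions[:, 0].view(T_h, 1, 1)  # x-coordinate per frame
    py = positions[:, 1].view(T_h, 1, 1)  # y-coordinate per frame
    # Squared distance of every pixel to the agent position in each frame.
    d2 = (xs - px) ** 2 + (ys - py) ** 2
    heatmaps = torch.exp(-d2 / (2.0 * sigma ** 2))
    # Normalize each frame so it reads as a probability distribution.
    heatmaps = heatmaps / heatmaps.flatten(1).sum(dim=1).view(T_h, 1, 1)
    return heatmaps  # (T_h, H, W); concatenated with S before the U-Net

# Example: 8 past frames in a 64x64 grid.
hm = positions_to_heatmaps(torch.rand(8, 2) * 64, height=64, width=64)
```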
III-C Trajectory Prediction Module
The trajectory prediction module receives the estimated goals and the history trajectory $X$ as input. Following previous works [12, 16], we augment the history state with velocity and acceleration. Additionally, we integrate the goal information into the state by concatenating the vector from the position at each frame to the goal $g$, as shown in lines 3-6 of Algorithm 1. After obtaining the augmented state for each goal estimation, an LSTM encoder is trained to extract the feature $\mathbf{f}$ of the state. We refer to the feature obtained using a diverse goal as a diverse feature $\mathbf{f}_d$, and the feature obtained using the common goal $g_c$ as the common feature $\mathbf{f}_c$. The feature $\mathbf{f}$ then serves as the guidance of the diffusion process.

During the forward process of diffusion, noise is gradually added to the ground truth trajectory $Y_0$ over $K$ steps to obtain a series of noisy future trajectories $Y_1, \dots, Y_K$. In step $k$ of the reverse denoising process, the diffusion network predicts the noise $\epsilon_\theta(Y_k, k, \mathbf{f})$ conditioned on $Y_k$ and $\mathbf{f}$; this predicted noise is then used to denoise $Y_k$ to $Y_{k-1}$. Through this iterative denoising process, the future trajectory prediction can be gradually reconstructed from a Gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$.

Standard sampling algorithms (DDPM [13] and DDIM [32]) repeat the whole sampling procedure to generate multiple modalities, which can be time-consuming. To alleviate this drawback, we propose tree sampling, shown in Algorithm 1 and designed especially for multi-modal prediction. It is divided into two stages: the trunk stage and the branch stage. The total number of DDPM diffusion steps is $K$; the trunk stage consists of the first $\gamma$ of these steps, and the branch stage covers the remaining $K - \gamma$ steps using $K'$ sub-sampled DDIM steps. In the trunk stage (lines 10-12), we utilize the common feature $\mathbf{f}_c$ to denoise the trajectory. By applying a variant of DDPM that removes the noise term (we call this variant deterministic-DDPM, or d-DDPM for short), the deterministic denoised result of the trunk stage is obtained, which serves as a general initialization for further denoising conditioned on the different diverse features $\mathbf{f}_d$, i.e., the trunk that links to the different branches of the tree. In the subsequent branch stage (lines 15-22), the different $\mathbf{f}_d$ are used to refine the trunk-stage result into the final multi-modal predictions $\tilde{Y}$. Since the branch stage must run multiple times for different modalities, we apply DDIM in this stage to increase the inference speed. Experiments demonstrate that this combination scheme generates more accurate results on real-world datasets.
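The following is a minimal sketch of the two-stage tree sampling procedure described above, under assumed interfaces: `eps_model(y, k, f)` denotes the noise-prediction network, `alpha_bar` the cumulative product of the DDPM noise schedule, and the default step counts match Sec. IV. It is an illustration, not the exact Algorithm 1.

```python
import torch

def tree_sampling(eps_model, f_common, f_diverse_list, alpha_bar,
                  T_f=12, K=100, gamma=30, ddim_steps=14):
    """Two-stage tree sampling sketch (shapes and interfaces are assumptions).

    eps_model(y, k, f): noise-prediction network conditioned on feature f.
    alpha_bar: cumulative noise schedule, shape (K + 1,), alpha_bar[0] = 1.
    """
    y = torch.randn(1, T_f, 2)  # start from a Gaussian prior

    # --- Trunk stage: run once, conditioned on the common feature. ---
    # d-DDPM: the DDPM update with its stochastic noise term removed,
    # so the trunk output is deterministic and shared by all branches.
    for k in range(K, K - gamma, -1):
        eps = eps_model(y, k, f_common)
        a_k = alpha_bar[k] / alpha_bar[k - 1]  # per-step alpha_k
        y = (y - (1 - a_k) / torch.sqrt(1 - alpha_bar[k]) * eps) / torch.sqrt(a_k)
    y_trunk = y

    # --- Branch stage: refine the shared trunk result once per goal. ---
    preds = []
    ks = torch.linspace(K - gamma, 0, ddim_steps + 1).long()  # sub-sampled steps
    for f in f_diverse_list:
        y = y_trunk.clone()
        for k, k_prev in zip(ks[:-1], ks[1:]):
            eps = eps_model(y, int(k), f)
            # Deterministic DDIM (eta = 0) step from k to k_prev.
            y0_hat = (y - torch.sqrt(1 - alpha_bar[k]) * eps) / torch.sqrt(alpha_bar[k])
            y = torch.sqrt(alpha_bar[k_prev]) * y0_hat \
                + torch.sqrt(1 - alpha_bar[k_prev]) * eps
        preds.append(y)
    return torch.cat(preds, dim=0)  # one trajectory per estimated goal
```

Note that the trunk loop runs once regardless of the number of goals, while the branch loop runs once per diverse feature, which is where the speed-up over repeated full sampling comes from.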
III-D Training and Loss Function
Our training scheme consists of two stages that train the two modules separately. First, the goal estimation module is trained alone using a Binary Cross-Entropy (BCE) loss, which measures the dissimilarity between the ground truth future trajectory heat-map $\mathcal{H}$ and the predicted heat-map $\tilde{\mathcal{H}}$:
$\ell_{\mathrm{BCE}}(h, \tilde{h}) = -\big[\, h \log \tilde{h} + (1 - h) \log (1 - \tilde{h}) \,\big],$ (1)

$\mathcal{L}_{goal} = \frac{1}{T_f H W} \sum_{t=1}^{T_f} \sum_{u=1}^{H} \sum_{v=1}^{W} \ell_{\mathrm{BCE}}\big( \mathcal{H}_{t,u,v},\, \tilde{\mathcal{H}}_{t,u,v} \big).$ (2)
In the second stage, we train the goal estimation module jointly with the trajectory prediction module. The training objective of the trajectory prediction module is to learn the distribution of each reverse diffusion step, following the standard setting of diffusion models [13, 32]:
$p_\theta(Y_{k-1} \mid Y_k, \mathbf{f}) = \mathcal{N}\big( Y_{k-1};\; \mu_\theta(Y_k, k, \mathbf{f}),\; \sigma_k^2 \mathbf{I} \big),$ (3)

$\mathcal{L}_{diff} = \mathbb{E}_{Y_0, \epsilon, k}\big[\, \| \epsilon - \epsilon_\theta(Y_k, k, \mathbf{f}) \|^2 \,\big].$ (4)
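As an illustration of this two-stage objective, here is a minimal sketch of one joint training step, assuming the standard DDPM noise-matching loss (Eq. 4) plus the BCE goal loss weighted by an assumed coefficient `lam`; the module interfaces and batch keys are hypothetical.

```python
import torch
import torch.nn.functional as F

def joint_training_step(goal_net, encoder, eps_model, batch, alpha_bar,
                        K=100, lam=1.0):
    """One joint training step (a sketch; `lam` and interfaces are assumed).

    goal_net:  U-Net mapping (semantic map, past heat-maps) -> future heat-maps
               with a sigmoid output in [0, 1].
    encoder:   LSTM encoder producing the guidance feature f.
    eps_model: transformer noise-prediction network.
    """
    # Goal loss: BCE between predicted and ground-truth heat-maps (Eqs. 1-2).
    pred_hm = goal_net(batch["semantic_map"], batch["past_heatmaps"])
    loss_goal = F.binary_cross_entropy(pred_hm, batch["future_heatmaps"])

    # Diffusion loss: predict the injected noise at a random step k (Eq. 4).
    y0 = batch["future_traj"]                  # (B, T_f, 2) ground truth
    f = encoder(batch["augmented_state"])      # guidance feature per sample
    k = torch.randint(1, K + 1, (y0.shape[0],))
    eps = torch.randn_like(y0)
    ab = alpha_bar[k].view(-1, 1, 1)
    y_k = torch.sqrt(ab) * y0 + torch.sqrt(1 - ab) * eps  # forward process
    loss_diff = F.mse_loss(eps_model(y_k, k, f), eps)

    return loss_diff + lam * loss_goal
```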
IV RESULTS
IV-A Experiments and Datasets
To evaluate the performance of our model, we conducted experiments on three public real-world pedestrian datasets: the ETH/UCY dataset, the Stanford Drone Dataset (SDD), and the intersection Drone Dataset (inD). These datasets capture the movement of pedestrians from a bird's-eye view, and positions are manually annotated. All datasets are sampled at 2.5 FPS. Given the pedestrian positions in the last 8 frames (3.2 s), the task is to predict the trajectory over the following 12 frames (4.8 s).
TABLE I: ADE20 / FDE20 on the ETH/UCY dataset (lower is better).

| Method | ETH | HOTEL | UNIV | ZARA1 | ZARA2 | AVG |
|---|---|---|---|---|---|---|
| Goal-GAN [34] | 0.59 / 1.18 | 0.19 / 0.35 | 0.60 / 1.19 | 0.43 / 0.87 | 0.32 / 0.65 | 0.43 / 0.85 |
| MG-GAN [23] | 0.47 / 0.91 | 0.14 / 0.24 | 0.54 / 1.07 | 0.36 / 0.73 | 0.29 / 0.60 | 0.36 / 0.71 |
| PECNet [24] | 0.54 / 0.87 | 0.18 / 0.24 | 0.35 / 0.60 | 0.22 / 0.39 | 0.17 / 0.30 | 0.29 / 0.48 |
| Y-net [20] | 0.28 / 0.33 | 0.10 / 0.14 | 0.24 / 0.41 | 0.17 / 0.27 | 0.13 / 0.22 | 0.18 / 0.27 |
| Goal-SAR [21] | 0.28 / 0.39 | 0.12 / 0.17 | 0.25 / 0.43 | 0.17 / 0.26 | 0.15 / 0.22 | 0.19 / 0.29 |
| MID† [16] | 0.57 / 0.93 | 0.21 / 0.33 | 0.29 / 0.55 | 0.28 / 0.50 | 0.20 / 0.37 | 0.31 / 0.54 |
| LED [17] | 0.39 / 0.58 | 0.11 / 0.17 | 0.26 / 0.43 | 0.18 / 0.26 | 0.13 / 0.22 | 0.21 / 0.33 |
| SingularTrajectory [35] | 0.35 / 0.42 | 0.13 / 0.19 | 0.25 / 0.44 | 0.19 / 0.32 | 0.15 / 0.25 | 0.21 / 0.32 |
| GDTS (Ours) | 0.31 / 0.48 | 0.13 / 0.18 | 0.27 / 0.49 | 0.19 / 0.29 | 0.15 / 0.24 | 0.21 / 0.33 |
Datasets The ETH/UCY dataset [36, 37] consists of 5 scenes: ETH, Hotel, Univ, Zara1, and Zara2. Following the common leave-one-scene-out strategy of previous work [4, 20], we use four scenes for training and the remaining one for testing. SDD [38] is a large-scale dataset containing 11,216 unique pedestrians on a university campus. We use the same split as several recent works [21, 24]: 30 scenes for training and the remaining 17 scenes for testing. inD [39] includes 4 different intersection scenes in Germany, and we follow the same evaluation protocol as [21, 40, 41], where only pedestrian trajectories are retained and split into train, validation, and test sets in a 70%/10%/20% ratio.
Evaluation Metrics Average Displacement Error (ADE) measures the average Euclidean distance between the ground truth and the predicted positions over the entire future trajectory, while Final Displacement Error (FDE) considers only the Euclidean distance between the final positions. Since our model generates multi-modal predictions for a stochastic future, we report the best-of-$N$ ADE and FDE over the $N$ predictions, denoted ADE$_N$ and FDE$_N$.
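For clarity, a minimal sketch of these best-of-$N$ metrics (function and variable names are illustrative):

```python
import torch

def best_of_n_metrics(preds, gt):
    """Best-of-N ADE/FDE: preds (N, T_f, 2), gt (T_f, 2).

    ADE averages the per-step Euclidean error over the whole horizon,
    FDE keeps only the last step; both take the minimum over the N samples.
    """
    err = torch.linalg.norm(preds - gt.unsqueeze(0), dim=-1)  # (N, T_f)
    ade = err.mean(dim=1).min()   # best average error over N samples
    fde = err[:, -1].min()        # best final-step error over N samples
    return ade.item(), fde.item()

# Example with N = 20 predictions over a 12-step horizon.
ade20, fde20 = best_of_n_metrics(torch.rand(20, 12, 2), torch.rand(12, 2))
```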
Implementation Details All experiments were conducted on an NVIDIA RTX 3080 Ti GPU with a PyTorch implementation. The goal predictor is implemented with a 5-layer U-Net backbone. The architecture of the diffusion network is a three-layer transformer encoder similar to MID [16]; readers can refer to the original paper for details. An Adam optimizer with exponential learning-rate annealing is employed. The goal estimation module is trained alone first and then trained jointly with the trajectory prediction module, with the goal estimation loss weighted by a coefficient $\lambda$. The number of DDPM diffusion steps is $K = 100$, the number of DDIM diffusion steps is $K' = 20$, and the trunk step is $\gamma = 30$. Additionally, we utilize the Test-Time-Sampling-Trick (TTST) [20] to improve the accuracy of goal estimation. The final number of predictions is $N = 20$.

IV-B Quantitative Results
Baselines We compare GDTS against two groups of methods. Goal-based methods include Goal-GAN [34], MG-GAN [23], PECNet [24], Y-net [20], and Goal-SAR [21]; we do not include NSP-SFM [25] here due to its per-scene hyperparameter tuning on ETH/UCY. MID [16], LED [17], and SingularTrajectory [35] are methods that leverage diffusion models for pedestrian trajectory prediction. Specifically for inD, besides goal-based methods, we also compare our model with several SOTA methods: Social-GAN [4], STGAT [42], AC-VRNN [40], and GSGFormer [41], which are all trained with the same dataset split as our model. Note that recent works with a different evaluation protocol [43] are not included, for fair comparison.
TABLE IV: Ablation on the sampling algorithm and trunk step $\gamma$ on SDD.

| Sampling Algorithm | $\gamma$ | ADE20 | FDE20 | Inference (ms) |
|---|---|---|---|---|
| MID (DDPM) | - | 7.61 | 14.30 | 139 |
| DDPM | - | 7.76 | 11.91 | 65 |
| DDIM | - | 7.91 | 12.01 | 24 |
| TS | 5 | 7.61 | 11.88 | 24 |
| TS | 50 | 7.56 | 12.08 | 21 |
| TS (Ours) | 30 | 7.42 | 11.57 | 24 |
Discussion Results in Table I show that our model achieves comparable accuracy on the ETH/UCY dataset. Furthermore, results on SDD are reported in Table II: with the combination of explicit goal guidance and diffusion denoising, our model achieves the best ADE20 and FDE20 among all compared methods. Results on inD, shown in Table III, are in line with the previous table, further confirming the effectiveness of GDTS. Since SDD and inD are substantially larger than the ETH/UCY dataset, we argue that our method is particularly suitable for larger datasets.
TABLE V: Ablation on the combination scheme of tree sampling on SDD.

| Trunk Stage | Branch Stage | Trunk Steps | Branch Steps | ADE20 | FDE20 | Inference (ms) |
|---|---|---|---|---|---|---|
| d-DDPM | DDPM | 30 | 70 | 7.74 | 11.83 | 65 |
| d-DDPM | d-DDPM | 30 | 70 | 7.36 | 11.63 | 65 |
| DDIM | DDIM | 6 | 14 | 8.02 | 12.62 | 21 |
| d-DDPM | DDIM | 30 | 14 | 7.42 | 11.57 | 24 |
IV-C Ablation Study
Sampling Algorithm of Diffusion To investigate the influence of different sampling algorithms, we replace tree sampling (TS) with DDPM and DDIM and conduct the experiment on SDD. We report the average inference time for one agent in addition to ADE20 and FDE20. Since LED [17] and SingularTrajectory [35] provide neither an official implementation nor a reported inference time on SDD, we only include MID, which uses DDPM, for comparison. The results in Table IV verify that our proposed tree sampling algorithm performs best and has a shorter inference time than DDPM and DDIM, indicating the superiority of our sampling algorithm. Compared with MID, GDTS reduces the inference time from 139 ms to 24 ms.
Trunk Step of Tree Sampling We also explore the effect of the trunk step $\gamma$. As shown in Table IV, $\gamma$ does not significantly influence the inference time, since the branch stage uses DDIM, which already samples quickly. However, the accuracy of our method decreases when $\gamma$ is either too large or too small. When $\gamma$ is large, the few remaining branch steps make it challenging for the branch stage to drive the roughly denoised trajectory toward various modalities, limiting the diversity of the results. When $\gamma$ is small, the DDIM-based branch stage undertakes most of the denoising task without obtaining enough guidance from the trunk stage, which degrades performance.
Combination Scheme of Tree Sampling We compare different combinations of DDPM, d-DDPM, and DDIM in the tree sampling algorithm to verify the effectiveness of our scheme. Note that the result of the trunk stage must be deterministic; hence only d-DDPM and DDIM (with $\eta = 0$) can be used in the trunk stage. Results in Table V demonstrate that using d-DDPM in both stages yields the lowest ADE20 and FDE20, probably because d-DDPM focuses on reconstruction rather than generation, i.e., accuracy rather than diversity. However, its inference time is much longer than that of our scheme, which achieves a good balance between prediction accuracy and inference speed.
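For reference, the per-step updates of the three samplers can be written as follows; this is the standard DDPM/DDIM notation from [13, 32], with the d-DDPM form obtained by dropping the stochastic noise term from the DDPM update as described in Sec. III-C:

DDPM (stochastic, $z \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$):
$Y_{k-1} = \frac{1}{\sqrt{\alpha_k}} \left( Y_k - \frac{1-\alpha_k}{\sqrt{1-\bar{\alpha}_k}} \, \epsilon_\theta(Y_k, k, \mathbf{f}) \right) + \sigma_k z$

d-DDPM (the same mean with the noise term removed, hence deterministic):
$Y_{k-1} = \frac{1}{\sqrt{\alpha_k}} \left( Y_k - \frac{1-\alpha_k}{\sqrt{1-\bar{\alpha}_k}} \, \epsilon_\theta(Y_k, k, \mathbf{f}) \right)$

DDIM with $\eta = 0$ (deterministic, allowing sub-sampled steps $k' < k$):
$Y_{k'} = \sqrt{\bar{\alpha}_{k'}} \, \hat{Y}_0 + \sqrt{1-\bar{\alpha}_{k'}} \, \epsilon_\theta(Y_k, k, \mathbf{f}), \quad \hat{Y}_0 = \frac{Y_k - \sqrt{1-\bar{\alpha}_k}\,\epsilon_\theta(Y_k, k, \mathbf{f})}{\sqrt{\bar{\alpha}_k}}$

where $\alpha_k$, $\bar{\alpha}_k$, and $\sigma_k$ follow the standard DDPM noise schedule.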
IV-D Qualitative Results
Fig. 3 presents the qualitative results of our GDTS method on the ETH/UCY and SDD datasets. The target pedestrian turns right in scene (a) and moves straight in scenes (b) and (c). From the left column, it can be observed that the goals of the final predictions (in cyan) are closer to the ground truth (in yellow) than the goal estimations from the goal estimation module (in blue), indicating that the diffusion model can improve FDE. The middle column shows the common goal and the roughly denoised trajectory generated by the trunk stage, demonstrating that the common goal can guide the denoised trajectory toward the correct direction. The right column shows the final prediction results, demonstrating the ability of GDTS to generate future trajectory predictions with different modalities.
V CONCLUSIONS
In conclusion, this work introduced GDTS, a framework for multi-modal pedestrian trajectory prediction that integrates goal estimation with a denoising diffusion probabilistic model. Our novel tree sampling algorithm leverages a common feature shared across modalities to accelerate inference. Experiments demonstrate that GDTS predicts multiple scene-compliant trajectories and achieves state-of-the-art performance with real-time inference speed on real-world datasets. Nevertheless, inter-agent interaction, which is crucial for human motion prediction, is not yet considered in this work, and predicting an additional probability score for each modality could help downstream decision-making and planning. We will therefore explore integrating multi-agent interactions into our framework.
References
- [1] Omer Berat Sezer, Mehmet Ugur Gudelek and Ahmet Murat Ozbayoglu “Financial time series forecasting with deep learning: A systematic literature review: 2005–2019” In Applied Soft Computing 90, 2020, pp. 106181
- [2] Pradeep Hewage et al. “Temporal convolutional neural (TCN) network for an effective weather forecasting using time-series data from the local weather station” In Soft Computing 24, 2020, pp. 16453–16482
- [3] Soumyendu Banerjee and Girish Kumar Singh “A new approach of ECG steganography and prediction using deep learning” In Biomedical Signal Processing and Control 64 Elsevier, 2021, pp. 102151
- [4] Agrim Gupta et al. “Social GAN: Socially acceptable trajectories with generative adversarial networks” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2255–2264
- [5] Yuying Chen et al. “HGCN-GJS: Hierarchical Graph Convolutional Network with Groupwise Joint Sampling for Trajectory Prediction” In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022, pp. 13400–13405
- [6] Sheng Wang et al. “Improving Autonomous Driving Safety with POP: A Framework for Accurate Partially Observed Trajectory Predictions” In IEEE International Conference on Robotics and Automation, 2024, pp. 14450–14456 IEEE
- [7] Holger Caesar et al. “nuScenes: A multimodal dataset for autonomous driving” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11621–11631
- [8] Scott Ettinger et al. “Large scale interactive motion forecasting for autonomous driving: The waymo open motion dataset” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9710–9719
- [9] Dirk Helbing and Peter Molnar “Social force model for pedestrian dynamics” In Physical Review E 51.5 APS, 1995, pp. 4282
- [10] Alexandre Alahi et al. “Social LSTM: Human trajectory prediction in crowded spaces” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 961–971
- [11] Anirudh Vemula, Katharina Muelling and Jean Oh “Social Attention: Modeling attention in human crowds” In IEEE International Conference on Robotics and Automation, 2018, pp. 4601–4607 IEEE
- [12] Tim Salzmann, Boris Ivanovic, Punarjay Chakravarty and Marco Pavone “Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data” In European Conference on Computer Vision, 2020, pp. 683–700
- [13] Jonathan Ho, Ajay Jain and Pieter Abbeel “Denoising diffusion probabilistic models” In Advances in Neural Information Processing Systems 33, 2020, pp. 6840–6851
- [14] Prafulla Dhariwal and Alexander Nichol “Diffusion models beat GANs on image synthesis” In Advances in Neural Information Processing Systems 34, 2021, pp. 8780–8794
- [15] Dmitry Baranchuk et al. “Label-Efficient Semantic Segmentation with Diffusion Models” In International Conference on Learning Representations, 2022
- [16] Tianpei Gu et al. “Stochastic trajectory prediction via motion indeterminacy diffusion” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 17113–17122
- [17] Weibo Mao et al. “Leapfrog Diffusion Model for Stochastic Trajectory Prediction” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 5517–5526
- [18] Chiyu Jiang et al. “MotionDiffuser: Controllable multi-agent motion prediction using diffusion” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 9644–9653
- [19] Junru Gu, Chen Sun and Hang Zhao “DenseTNT: End-to-end trajectory prediction from dense goal sets” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 15303–15312
- [20] Karttikeya Mangalam, Yang An, Harshayu Girase and Jitendra Malik “From goals, waypoints & paths to long term human trajectory forecasting” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 15233–15242
- [21] Luigi Filippo Chiara et al. “Goal-driven self-attentive recurrent networks for trajectory prediction” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 2518–2527
- [22] Görkay Aydemir, Adil Kaan Akan and Fatma Güney “ADAPT: Efficient Multi-Agent Trajectory Prediction with Adaptation” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023
- [23] Patrick Dendorfer, Sven Elflein and Laura Leal-Taixé “MG-GAN: A Multi-Generator Model Preventing Out-of-Distribution Samples in Pedestrian Trajectory Prediction” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021
- [24] Karttikeya Mangalam et al. “It is not the journey but the destination: Endpoint conditioned trajectory prediction” In European Conference on Computer Vision, 2020, pp. 759–776
- [25] Jiangbei Yue, Dinesh Manocha and He Wang “Human trajectory prediction via neural social physics” In European Conference on Computer Vision, 2022, pp. 376–394 Springer
- [26] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan and Surya Ganguli “Deep unsupervised learning using nonequilibrium thermodynamics” In International Conference on Machine Learning, 2015, pp. 2256–2265
- [27] Kashif Rasul, Calvin Seward, Ingmar Schuster and Roland Vollgraf “Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting” In International Conference on Machine Learning, 2021, pp. 8857–8868
- [28] Shansan Gong et al. “DiffuSeq: Sequence to sequence text generation with diffusion models” In arXiv preprint arXiv:2210.08933, 2022
- [29] Michael Janner, Yilun Du, Joshua Tenenbaum and Sergey Levine “Planning with Diffusion for Flexible Behavior Synthesis” In International Conference on Machine Learning, 2022
- [30] Davis Rempe et al. “Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 13756–13766
- [31] Hanqun Cao et al. “A survey on generative diffusion model” In arXiv preprint arXiv:2209.02646, 2022
- [32] Jiaming Song, Chenlin Meng and Stefano Ermon “Denoising Diffusion Implicit Models” In International Conference on Learning Representations, 2021
- [33] Tim Salimans and Jonathan Ho “Progressive Distillation for Fast Sampling of Diffusion Models” In International Conference on Learning Representations, 2022
- [34] Patrick Dendorfer, Aljoša Ošep and Laura Leal-Taixé “Goal-GAN: Multimodal Trajectory Prediction Based on Goal Position Estimation” In Asian Conference on Computer Vision, 2020
- [35] Inhwan Bae, Young-Jae Park and Hae-Gon Jeon “SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 17890–17901
- [36] Stefano Pellegrini, Andreas Ess, Konrad Schindler and Luc Van Gool “You’ll never walk alone: Modeling social behavior for multi-target tracking” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2009, pp. 261–268 IEEE
- [37] Alon Lerner, Yiorgos Chrysanthou and Dani Lischinski “Crowds by example” In Computer Graphics Forum 26.3, 2007, pp. 655–664
- [38] Alexandre Robicquet, Amir Sadeghian, Alexandre Alahi and Silvio Savarese “Learning social etiquette: Human trajectory understanding in crowded scenes” In European Conference on Computer Vision, 2016, pp. 549–565
- [39] Julian Bock et al. “The inD dataset: A drone dataset of naturalistic road user trajectories at German intersections” In 2020 IEEE Intelligent Vehicles Symposium (IV), 2020, pp. 1929–1934 IEEE
- [40] Alessia Bertugli et al. “AC-VRNN: Attentive Conditional-VRNN for multi-future trajectory prediction” In Computer Vision and Image Understanding 210 Elsevier, 2021, pp. 103245
- [41] Zhongchang Luo, Marion Robin and Pavan Vasishta “GSGFormer: Generative Social Graph Transformer for Multimodal Pedestrian Trajectory Prediction” In arXiv preprint arXiv:2312.04479, 2023
- [42] Yingfan Huang et al. “STGAT: Modeling spatial-temporal interactions for human trajectory prediction” In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6272–6281
- [43] Mozhgan Nasr Azadani and Azzedine Boukerche “Hierarchical Transformers for Motion Forecasting based on Inverse Reinforcement Learning” In IEEE Transactions on Vehicular Technology IEEE, 2024