Answering from Sure to Uncertain: Uncertainty-Aware Curriculum Learning for Video Question Answering
Abstract
While significant advancements have been made in video question answering (VideoQA), the potential benefits of enhancing model generalization through tailored difficulty scheduling have been largely overlooked in existing research. This paper seeks to bridge that gap by incorporating VideoQA into a curriculum learning (CL) framework that progressively trains models from simpler to more complex data. Recognizing that conventional self-paced CL methods rely on training loss for difficulty measurement, which might not accurately reflect the intricacies of video-question pairs, we introduce the concept of uncertainty-aware CL. Here, uncertainty serves as the guiding principle for dynamically adjusting the difficulty. Furthermore, we address the challenge posed by uncertainty by presenting a probabilistic modeling approach for VideoQA. Specifically, we conceptualize VideoQA as a stochastic computation graph, where the hidden representations are treated as stochastic variables. This yields two distinct types of uncertainty: one related to the inherent uncertainty in the data and another pertaining to the model’s confidence. In practice, we seamlessly integrate the VideoQA model into our framework and conduct comprehensive experiments. The findings affirm that our approach not only achieves enhanced performance but also effectively quantifies uncertainty in the context of VideoQA.
Index Terms:
Video question answering, curriculum learning, uncertainty, stochastic computation graph.

Video question answering (VideoQA) has garnered increasing attention from researchers in recent years [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. Significant efforts have been dedicated to enhancing various aspects of this task, including video encoding [13, 4], interaction between video and questions [14, 6], and feature fusion [2, 5]. Nevertheless, existing works train VideoQA models in a random order, overlooking the fact that optimizing a VideoQA model essentially involves a teaching process, during which the model learns to answer questions of varying difficulties. It has been shown that presenting training examples in a meaningful order, as opposed to a random order, can enhance the generalization capacity of models across a wide range of tasks [15, 16, 17]. Such strategies are referred to as curriculum learning (CL) [18], wherein the model is gradually exposed to basic knowledge before advancing to more complex concepts, mimicking human learning. In this work, our goal is to incorporate VideoQA into CL, aiming to enhance the performance of the models.
A major challenge in integrating CL with VideoQA is quantifying data difficulty. Many existing self-paced curriculum learning (SPL) approaches utilize the training loss as a measure of difficulty quantification [19, 20, 21, 22]. However, this approach has its shortcomings. Firstly, the training loss generally measures the discrepancy between predictions and ground truth but cannot precisely reflect the inherent difficulty of data. For instance, a math problem could be difficult even if it has a simple answer. Secondly, the training loss varies significantly during training and across different tasks, necessitating a meticulous design of the training scheduler for stable optimization and improved performance [23, 20, 24]. In response to these challenges, we propose an enhancement to CL for VideoQA by incorporating the principle of uncertainty into the dynamic scheduling of difficulty. We term this approach uncertainty-aware curriculum learning (UCL) for VideoQA. Compared to the training loss, uncertainty offers the advantage of being independent of the ground truth and better reflecting the inherent difficulty of the data. Intuitively, a high-uncertainty video-question pair indicates the presence of potential noise or the model’s lack of confidence in its prediction, making it more challenging to handle.
To quantify data uncertainty and alleviate its negative impact, we propose the utilization of probabilistic modeling for VideoQA. Specifically, we treat VideoQA as a stochastic computation graph [25], wherein a video and a question serve as inputs, subsequently undergoing encoding into stochastic representations by a visual encoder and a text encoder. The final predictive distribution of the answer is derived through a combination of the video-question interaction module and the answer prediction module, employing variational inference [26, 27]. Within the framework of probabilistic modeling, we define two forms of uncertainty: feature uncertainty, which gauges the intrinsic uncertainty in the data, and predictive uncertainty, which quantifies the model’s confidence in its predictions. Notably, our approach to probabilistic modeling and uncertainty quantification remains applicable to both classification and regression tasks. Our contributions can be summarized as follows:
• We develop a self-paced CL framework for VideoQA, where the difficulty of data is measured by the uncertainty that reflects the inherent characteristics of data.
• We propose probabilistic modeling for VideoQA by considering VideoQA as a stochastic computation graph to capture the data uncertainty and mitigate its impact.
• We integrate VideoQA into our uncertainty-aware curriculum learning framework and conduct extensive experiments. The results show that our method achieves better performance and valid uncertainty quantification.
I Related Work
I-A Video Question Answering
Video question answering (VideoQA) is the generalization of visual question answering [28, 29, 30, 31, 32] from the image domain to the video domain. It requires temporal reasoning over a sequence of events in videos, and various techniques have been exploited, such as the attention mechanism [33, 34], graph neural networks [2, 35, 36], memory networks [5, 4], and hierarchical structures [37, 13]. For example, a dual-LSTM-based approach with both spatial and temporal attention is proposed in [38]. MASN [2] models each object as a graph node and captures the spatial and temporal dependencies of all objects with graph neural networks. HQGA [13] models the video as a conditional graph hierarchy to align with the multi-granular nature of questions, achieving strong results on MSVD and MSRVTT [39]. Besides the efforts in improving model structure, many works develop new frameworks or methodologies for this task [3, 40]. For example, invariant grounding is exploited in [3] for VideoQA, which aims to find the question-critical scenes whose causal relations with answers are invariant. The atemporal probe (ATP) [40] reduces the video-language task to image-level understanding and provides a stronger image-level baseline in the video-language setting than randomly selected frames. Recently, large-scale video-text pretraining has shown great power in promoting the performance of multimodal video understanding [41, 42, 43]. For instance, MERLOT [41], VIOLET [42], and All-in-One [43] attain state-of-the-art performance on several VideoQA benchmarks. However, they require large-scale data and computational resources for training. In this work, we follow [2, 13, 40, 3] and compare only to methods without pretraining, so as to isolate the effectiveness of the modeling approach from that of additional training data. Despite the progress made in VideoQA, existing works do not consider the impact of the order of training samples or the uncertainty in the data. In this work, we propose a new training framework for VideoQA concerning appropriate difficulty scheduling based on uncertainty.
I-B Curriculum Learning
Curriculum learning (CL) [18] emulates the human learning process by starting with easier tasks and gradually progressing to more challenging ones. Two central components of CL are the difficulty measure and the training scheduler. In the case of self-paced CL (SPL, where difficulty is measured during training), the loss function is often used as the difficulty measure [19, 20, 21, 22]. Initially, during training, samples with higher losses are excluded from optimization. As training advances, the threshold is gradually increased to incorporate more complex data into the optimization process. However, relying solely on loss might not accurately represent the inherent difficulty of data, as difficulty is an intrinsic attribute of samples and should be independent of ground truth labels. To overcome this limitation, we propose employing uncertainty as the difficulty measure for SPL. To the best of our knowledge, [44] is the only work that also uses uncertainty for CL. However, our method is essentially different from it: 1) We derive uncertainty by probabilistic modeling, while it obtains data uncertainty by a pretrained language model (predefined CL); 2) We perform CL by re-weighting the data, while it adopts baby step [18] to arrange data; 3) We focus on VideoQA, while it addresses neural machine translation. The pretrained model and the training scheduler based on baby step make [44] more complex to implement than ours.
I-C Uncertainty Modeling
In Bayesian modeling, there exist two main types of uncertainty: model (epistemic) uncertainty and data (aleatoric) uncertainty [45, 46]. Specifically, model uncertainty accounts for uncertainty in the model parameters and comes from our ignorance about which model generated the data; it can be reduced by providing more training data. Data uncertainty, in contrast, captures the inherent noise in observations, such as blur in images and videos; it is an inherent characteristic of the data and cannot be alleviated by collecting more data. A great number of methods exploit data uncertainty and achieve considerable improvements in various tasks [47, 48, 49, 50]. For example, DeNet [51] is proposed to resolve query uncertainty and label uncertainty in temporal grounding, where a decoupling module and a de-bias mechanism are designed for probabilistic language encoding and diverse temporal regression. UGPT [52] is presented for complex action recognition, where the attention scores in the Transformer are modeled as probabilistic variables to capture the complex and long-term interactions of actions. Besides, uncertainty has also been applied to CL [53, 44]. For instance, the authors of [44] incorporated data and model uncertainty into the CL process for neural machine translation, relying on pre-computed data uncertainty to drive a baby-step-based CL approach, which introduces complexity due to the need for prior data uncertainty computation. In contrast, our method dynamically learns uncertainty during training and uses it to adjust sample weights, leading to a more straightforward and practical implementation. Moreover, the work in [53] utilized snippet-level uncertainty to assign varying weights to different snippets for weakly-supervised temporal action localization, accomplished through evidential deep learning [54, 55] with a focus on addressing intra-action variation. Our approach, however, distinguishes itself by introducing probabilistic modeling for uncertainty quantification, with the primary objective of enhancing the generalization capacity of VideoQA models. More importantly, our framework is designed as a versatile tool applicable to diverse VideoQA models to improve their performance.
II Uncertainty-Aware Curriculum Learning for Video Question Answering
II-A Uncertainty-Based Curriculum Learning
II-A1 Self-Paced Curriculum Learning Revisit
Curriculum learning (CL) is a training strategy in which networks are trained from easy data to hard data [19, 20, 21, 22]. It mimics the organization of curricula for humans, i.e., starting from basic knowledge and moving to complex concepts. We first revisit self-paced curriculum learning (SPL) as follows. Given the training set $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is the observation, $y_i$ is the ground truth, and $N$ is the number of training data, SPL aims to minimize the following loss in epoch $t$,
$$\min_{\theta,\,\mathbf{v}^t} \; \sum_{i=1}^{N} v_i^t\,\mathcal{L}\big(f(x_i;\theta),\,y_i\big) + g(\mathbf{v}^t;\lambda_t), \qquad (1)$$
where $f$ is a network parameterized by $\theta$, $\mathcal{L}$ is a loss function (such as MSE or cross-entropy), $\mathbf{v}^t=[v_1^t,\dots,v_N^t]$ is the set of weights ranging from 0 to 1 for the training data at epoch $t$, and $g(\mathbf{v}^t;\lambda_t)$ is a regularization term preventing $\mathbf{v}^t$ from dropping to 0. In the original SPL [19], the negative $\ell_1$ norm is exploited as the regularization, i.e.,
$$g(\mathbf{v}^t;\lambda_t) = -\lambda_t \sum_{i=1}^{N} v_i^t, \qquad (2)$$
where $\lambda_t$ is a scheduler (an increasing function of $t$) determining how the difficulty changes during training. The optimal $v_i^t$ can be solved analytically as follows,
$$v_i^{t*} = \begin{cases} 1, & \mathcal{L}\big(f(x_i;\theta),\,y_i\big) < \lambda_t,\\ 0, & \text{otherwise}. \end{cases} \qquad (3)$$
An intuitive explanation of the original SPL is that, in epoch $t$, only the training data whose losses are less than $\lambda_t$ are used for optimization, where the loss can be regarded as a difficulty measure and $\lambda_t$ is the threshold. As training proceeds, $\lambda_t$ is gradually increased to include more difficult data.
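For reference, the hard weighting rule of Eq. 3 reduces to a one-line thresholding operation. The PyTorch fragment below is a minimal sketch of our reading of this rule, with illustrative names, rather than code from [19].

```python
import torch

def spl_binary_weights(per_sample_loss: torch.Tensor, lambda_t: float) -> torch.Tensor:
    """Eq. (3): keep only samples whose loss falls below the threshold lambda_t."""
    return (per_sample_loss.detach() < lambda_t).float()
```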
Recognizing that hard/binary weighting of samples might restrict the flexibility of SPL, soft regularizers have been developed [20, 22]. Nonetheless, these SPL methods still rely on the training loss to determine difficulty [19, 20, 21, 22], which has limitations in two aspects: 1) the training loss typically measures the distance between predictions and ground truth, failing to precisely reflect the inherent data difficulty, which should remain independent of the ground truth; 2) the training loss varies significantly throughout training and across different tasks, necessitating a more meticulous design of the scheduler $\lambda_t$ for stable optimization and improved performance.
II-A2 Uncertainty-Based Curriculum Learning
To tackle the limitations of conventional SPL approaches, we introduce an approach called uncertainty-based curriculum learning. In this method, we leverage uncertainty to quantify the level of difficulty within the curriculum learning framework. In contrast to relying on the training loss, the utilization of uncertainty offers a measure that remains independent of the ground truth and effectively captures the intricacies of the data. Intuitively, a sample with high uncertainty signifies the potential presence of noise or indicates that the model lacks confidence in its prediction, thereby classifying the sample as more challenging to handle.
Concretely, in our SPL, the loss function in epoch $t$ is defined as follows,
$$\min_{\theta} \; \sum_{i=1}^{N} w_i^t\,\mathcal{L}\big(f(x_i;\theta),\,y_i\big), \qquad (4)$$
where $w_i^t$ is computed as $w_i^t = \sigma(\lambda_t - u_i)$, in which $u_i$ represents the (normalized) uncertainty of the sample $x_i$ (explained in more detail later), $\sigma(\cdot)$ denotes the Sigmoid function, and $\lambda_t$ is a monotonically increasing function of the epoch $t$. At the outset of training, samples with low uncertainty (easier data) are assigned higher weights than samples with high uncertainty (more challenging examples). As training progresses (with increasing $\lambda_t$), the weights of low-uncertainty and high-uncertainty samples converge towards equality, ultimately resulting in the involvement of all data.
Essentially, the uncertainty (and the weight based on it) proposed in this work is a function of the network parameters. Consequently, it would also receive gradients with respect to these parameters during backward propagation. However, this must be prevented: without any constraint on the weight $w_i^t$, the model would eventually predict results with no uncertainty at all. Previous SPL methods prevent this by applying a regularization to the weight, as shown in Eq. 1. Nevertheless, we lack knowledge of the uncertainty prior across the training set, making it challenging to determine an appropriate weight regularization. To overcome this hurdle, we take an alternative approach: we detach the weight from the computation graph and nullify its gradients with respect to the parameters, i.e.,
$$w_i^t \leftarrow \mathrm{sg}\big(\sigma(\lambda_t - u_i)\big), \qquad \frac{\partial w_i^t}{\partial \theta} = 0, \qquad (5)$$

where $\mathrm{sg}(\cdot)$ denotes the stop-gradient (detach) operation.
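To make the weighting and the stop-gradient concrete, the following PyTorch sketch implements Eqs. 4-5 under the sigmoid weighting form reconstructed above; the function and argument names are illustrative rather than taken from the authors' implementation.

```python
import torch

def ucl_weighted_loss(per_sample_loss: torch.Tensor,
                      uncertainty: torch.Tensor,
                      lambda_t: float) -> torch.Tensor:
    """Uncertainty-aware curriculum loss (sketch of Eqs. 4-5).

    per_sample_loss: (B,) unreduced task loss for each sample in the batch.
    uncertainty:     (B,) normalized uncertainty of each sample.
    lambda_t:        scheduler value for the current epoch (increases over time).
    """
    # Low-uncertainty (easy) samples receive weights close to 1 early on;
    # as lambda_t grows, all weights approach 1 and every sample contributes.
    weights = torch.sigmoid(lambda_t - uncertainty)
    # Detach the weights (Eq. 5) so the model cannot reduce the loss
    # by simply driving its predicted uncertainty to zero.
    weights = weights.detach()
    return (weights * per_sample_loss).mean()
```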
II-B Probabilistic Modeling for VideoQA
A remaining challenge for our UCL is uncertainty quantification. To tackle this challenge, we propose probabilistic modeling for VideoQA. Specifically, we consider VideoQA as a stochastic computation graph [25], where the input nodes are the video $V$ and the question $Q$. Following video encoding (parameterized by $\phi_v$) and question encoding (parameterized by $\phi_q$), we obtain the stochastic video representation $z_v \sim q_{\phi_v}(z_v \mid V)$ and the stochastic question representation $z_q \sim q_{\phi_q}(z_q \mid Q)$. Denoting the video-question interaction and answer prediction modules jointly by $p_\theta(a \mid z_v, z_q)$, the distribution of the answer $a$ can be deduced by variational inference [26, 27]:

$$p(a \mid V, Q) = \iint p_\theta(a \mid z_v, z_q)\, q_{\phi_v}(z_v \mid V)\, q_{\phi_q}(z_q \mid Q)\, \mathrm{d}z_v\, \mathrm{d}z_q.$$
Since the integral over $z_v$ and $z_q$ is intractable, we sample $z_v$ and $z_q$ from $q_{\phi_v}(z_v \mid V)$ and $q_{\phi_q}(z_q \mid Q)$, respectively, $K$ times to approximate the predictive distribution of $a$, i.e.,
$$p(a \mid V, Q) \approx \frac{1}{K}\sum_{k=1}^{K} p_\theta\big(a \mid z_v^{(k)},\, z_q^{(k)}\big), \qquad (6)$$
where $z_v^{(k)} \sim q_{\phi_v}(z_v \mid V)$ and $z_q^{(k)} \sim q_{\phi_q}(z_q \mid Q)$. In practice, we use the reparameterization trick [56] to make the sampling differentiable. $p_\theta(a \mid z_v, z_q)$ can be specified for classification and regression as follows,
$$p_\theta(a \mid z_v, z_q) = \mathrm{Softmax}(\boldsymbol{l}), \qquad (7)$$
$$p_\theta(a \mid z_v, z_q) = \mathcal{N}\big(a;\, \mu,\, \sigma^2\big), \qquad (8)$$
where $\boldsymbol{l} \in \mathbb{R}^{C}$ denotes the predicted logits ($C$ is the number of classes), and $\mu$ and $\sigma^2$ are the predicted expectation and variance.
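The sketch below illustrates how the stochastic representations and the Monte Carlo approximation of Eqs. 6-7 can be realized with the reparameterization trick. The module and function names (e.g., GaussianHead) and tensor shapes are our own illustrative choices, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianHead(nn.Module):
    """Maps a deterministic feature to a Gaussian latent and draws K samples."""
    def __init__(self, dim: int):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, k: int):
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        # Reparameterization trick [56]: z = mu + std * eps, eps ~ N(0, I).
        eps = torch.randn((k, *std.shape), device=h.device)
        z = mu.unsqueeze(0) + std.unsqueeze(0) * eps        # (K, B, dim)
        return z, mu, logvar

def mc_class_probs(logits: torch.Tensor) -> torch.Tensor:
    """Eqs. (6)-(7): average Softmax predictions over the K sampled features.
    logits: (K, B, C) -> averaged probabilities (B, C)."""
    return F.softmax(logits, dim=-1).mean(dim=0)
```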
The proposed stochastic computation graph for VideoQA is optimized by maximizing the evidence lower bound (ELBO), which is derived as follows (the detailed derivation can be found in the Appendix):

$$\log p(a \mid V, Q) \geq \mathbb{E}_{q_{\phi_v}(z_v \mid V)\, q_{\phi_q}(z_q \mid Q)}\big[\log p_\theta(a \mid z_v, z_q)\big] - \mathrm{KL}\big(q_{\phi_v}(z_v \mid V)\,\|\,p(z_v)\big) - \mathrm{KL}\big(q_{\phi_q}(z_q \mid Q)\,\|\,p(z_q)\big),$$

where $\mathrm{KL}(\cdot\,\|\,\cdot)$ is the Kullback–Leibler (KL) divergence. The KL divergence can also be regarded as regularization on $q_{\phi_v}(z_v \mid V)$ and $q_{\phi_q}(z_q \mid Q)$, preventing them from degrading to deterministic representations. Since the terms that do not involve the network parameters are irrelevant to optimization, once we apply a standard Gaussian prior to $z_v$ and $z_q$ ($p(z_v)=p(z_q)=\mathcal{N}(\mathbf{0}, \mathbf{I})$), the objective (the negative ELBO, approximated by Monte Carlo sampling) can be re-written as follows,

$$\mathcal{L} = \mathcal{L}_{\mathrm{task}} + \beta\Big[\mathrm{KL}\big(q_{\phi_v}(z_v \mid V)\,\|\,\mathcal{N}(\mathbf{0},\mathbf{I})\big) + \mathrm{KL}\big(q_{\phi_q}(z_q \mid Q)\,\|\,\mathcal{N}(\mathbf{0},\mathbf{I})\big)\Big],$$

where $\beta$ is a hyper-parameter. For classification and regression, $\mathcal{L}_{\mathrm{task}}$ is specified as

$$\mathcal{L}_{\mathrm{task}} = -\log p_c \qquad \text{and} \qquad \mathcal{L}_{\mathrm{task}} = \frac{(y-\mu)^2}{2\sigma^2} + \frac{1}{2}\log \sigma^2,$$

respectively, where $p_c$ is the predictive probability for the correct class $c$, and $y$ is the ground truth value.
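A minimal sketch of the resulting per-sample training terms (task loss plus the KL regularizer weighted by β) is given below. It assumes diagonal Gaussians parameterized by mean and log-variance and mirrors the re-written objective above, rather than reproducing the authors' implementation.

```python
import torch
import torch.nn.functional as F

def kl_to_standard_normal(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over feature dimensions."""
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1)

def classification_task_loss(probs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Per-sample cross-entropy on the MC-averaged probabilities of Eq. (6)."""
    return F.nll_loss(torch.log(probs.clamp_min(1e-8)), labels, reduction="none")

def regression_task_loss(mu: torch.Tensor, logvar: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Per-sample Gaussian negative log-likelihood (predicted mean / log-variance)."""
    return 0.5 * ((y - mu) ** 2 / logvar.exp() + logvar)
```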
We then define two types of uncertainty for difficulty quantification based on our probabilistic modeling: feature uncertainty and predictive uncertainty. Feature uncertainty measures the inherent uncertainty in data, such as noise, blur, and occlusion in videos. This type of uncertainty is computed based on the variance of the outputs across different sampling results. Concretely, the feature uncertainty for classification is defined as the variance of the predicted logits, while the feature uncertainty for regression is computed as the variance of the predicted expectations, i.e.,
$$u_f = \frac{1}{K}\sum_{k=1}^{K} \mathrm{mean}\Big(\big(\boldsymbol{l}^{(k)} - \bar{\boldsymbol{l}}\big)^{2}\Big), \qquad (9)$$
$$u_f = \frac{1}{K}\sum_{k=1}^{K} \big(\mu^{(k)} - \bar{\mu}\big)^{2}, \qquad (10)$$
where $\bar{\boldsymbol{l}} = \frac{1}{K}\sum_{k=1}^{K}\boldsymbol{l}^{(k)}$ and $\bar{\mu} = \frac{1}{K}\sum_{k=1}^{K}\mu^{(k)}$ are the average of the predicted logits and that of the predicted expectations, respectively, and the power in Eq. 9 is performed element-wise. Feature uncertainty essentially measures the difference among the predictions from different sampled features. That is to say, if the variances of $z_v$ and $z_q$ are zero (i.e., there is no uncertainty in them), the computed feature uncertainty would be zero because the random sampling degrades to a deterministic process. As for the predictive uncertainty, it measures the confidence of the model in its final outputs. Concretely, this type of uncertainty for classification is defined as the entropy of the output distribution, while for regression, we use the predicted variance as the predictive uncertainty, i.e.,
$$u_p = -\sum_{c=1}^{C} p_c \log p_c, \qquad (11)$$
$$u_p = \frac{1}{K}\sum_{k=1}^{K} \big(\sigma^{(k)}\big)^{2}, \qquad (12)$$
where $p_c$ is the predicted probability of class $c$. A lower predictive uncertainty means the model has more confidence in its prediction. The proposed feature uncertainty and predictive uncertainty depict inherent characteristics of the data and are agnostic to the ground truth, making them more reasonable difficulty measures for SPL.
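The two uncertainties of Eqs. 9-12 can be computed directly from the K sampled outputs, as sketched below; the tensor shapes stated in the comments are assumptions made for illustration.

```python
import torch

def feature_uncertainty_cls(logits: torch.Tensor) -> torch.Tensor:
    """Eq. (9): variance of the K predicted logits. logits: (K, B, C) -> (B,)."""
    return logits.var(dim=0, unbiased=False).mean(dim=-1)

def feature_uncertainty_reg(mu: torch.Tensor) -> torch.Tensor:
    """Eq. (10): variance of the K predicted expectations. mu: (K, B) -> (B,)."""
    return mu.var(dim=0, unbiased=False)

def predictive_uncertainty_cls(probs: torch.Tensor) -> torch.Tensor:
    """Eq. (11): entropy of the averaged output distribution. probs: (B, C) -> (B,)."""
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)

def predictive_uncertainty_reg(var: torch.Tensor) -> torch.Tensor:
    """Eq. (12): average predicted variance over the K samples. var: (K, B) -> (B,)."""
    return var.mean(dim=0)
```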
Distinguishing itself from aleatoric and epistemic uncertainty, our uncertainty serves a distinct purpose: our feature uncertainty assesses the uncertainty linked to high-level video features, while predictive uncertainty conveys the model's confidence in its predictions. In contrast, aleatoric uncertainty quantifies uncertainty within observations, while epistemic uncertainty pertains to uncertainty in model parameters. Notably, aleatoric and epistemic uncertainty can also be incorporated into our UCL framework. We substantiate this further by presenting a comparison of various uncertainties in the supplementary material.
Although the above two types of uncertainty can measure the difficulty of data, they cannot be directly applied to SPL because their ranges are intractable, which makes it difficult to determine suitable schedulers (similar to the design of $\lambda_t$ in the original SPL). To address this issue, we assume they are Gaussian distributed and normalize them to the standard Gaussian distribution, i.e.,
$$\hat{u} = \frac{u - m_u}{\sqrt{s_u}}, \qquad (13)$$
where $u$ represents $u_f$ or $u_p$, and $m_u$ and $s_u$ are the mean and the variance of the uncertainty $u$, respectively. Consequently, $\hat{u} \sim \mathcal{N}(0, 1)$ and the normalized uncertainties are statistically bounded. Since the mean and variance over the whole dataset are intractable, we use batch normalization [57] as a practical means to estimate the normalized uncertainty. In this work, we exploit the two types of normalized uncertainty for SPL. To maintain simplicity, we still use $u_f$ and $u_p$ to denote the normalized uncertainties. Fig. 1 demonstrates the overview of our framework.
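In practice, the standardization of Eq. 13 can be approximated with a 1-D batch-normalization layer without affine parameters, as in the sketch below; the exact layer configuration is our assumption, consistent with the use of batch normalization [57] described above.

```python
import torch
import torch.nn as nn

# Running statistics of the uncertainty are tracked by BatchNorm1d;
# affine=False keeps the output a pure standardization (Eq. 13).
uncertainty_norm = nn.BatchNorm1d(1, affine=False)

def normalize_uncertainty(u: torch.Tensor) -> torch.Tensor:
    """u: (B,) raw uncertainty -> (B,) approximately N(0, 1) normalized."""
    return uncertainty_norm(u.unsqueeze(1)).squeeze(1)
```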
Our uncertainty-aware curriculum learning framework is designed to be versatile and adaptable, fitting seamlessly into various model structures. Alg. 1 outlines the pseudo code for our uncertainty-aware curriculum learning applied to VideoQA. Note that the algorithm encompasses both classification-based (CB) VideoQA and regression-based (RB) VideoQA: for CB VideoQA, retain the blue lines (the tops of Lines 11, 13, 14, and 16) and disregard the red ones (the bottoms of Lines 11, 13, 14, and 16); for RB VideoQA, do the opposite. Furthermore, we have chosen predictive uncertainty for CB VideoQA and feature uncertainty for RB VideoQA in the examples provided, but these two uncertainties are interchangeable. The KL divergence regularization for the feature distributions within the loss is omitted here for the sake of simplicity.
II-C Uncertainty-Aware Curriculum Learning for Video Question Answering
The proposed framework is agnostic to the essential modules in VideoQA models (i.e., the video encoder, question encoder, video-question interaction module, and answer decoder). Therefore, it can be applied to existing VideoQA methods in a plug-and-play manner. In this section, we show how a VideoQA model can be adapted into our framework. Specifically, we choose MASN [2] because 1) it is a typical VideoQA model with clearly separated VideoQA modules, and 2) it uses various types of visual features that may contain more uncertainty in the hidden space.
MASN exploits four types of visual features, i.e., global/local appearance/motion features. In this paper, we extract these features as described in [2]. Concretely, for a video of $T$ frames, we obtain global appearance features, global motion features, local appearance features, and local motion features, where the global features contain one feature per frame, the local features contain one feature per detected object, and $n$ is the number of objects in each frame.
There are two parallel streams in the video encoder of MASN for appearance and motion feature encoding, respectively. Since the two streams are identical in structure, we describe them without specifying appearance or motion. The visual encoder is essentially a graph convolutional network (GCN) [58] in which the object features are modeled as a graph. Specifically, the graph node $o_{t,i} \in \mathbb{R}^{d}$ ($d$ is the dimension of the node representations) is computed as follows,
$$f_{t,i} = \big[\,F^{l}_{t,i}\,;\, b_{t,i}\,\big], \qquad (14)$$
$$e_t = \mathrm{PE}(t), \qquad (15)$$
$$o_{t,i} = W_o\big[\,f_{t,i}\,;\, e_t\,\big] + c_o, \qquad (16)$$

where $F^{l}_{t,i}$ is the local (appearance or motion) feature of the $i$-th object in the $t$-th frame, $b_{t,i}$ is the coordinate of its bounding box, $\mathrm{PE}(\cdot)$ is the sinusoidal function for time-step encoding [59], $[\,\cdot\,;\,\cdot\,]$ represents concatenation, and $W_o$ and $c_o$ are parameters. For simplicity, we flatten the two indexes of $o_{t,i}$ and denote the nodes as $o_j$ ($j = 1, \dots, M$), where $M = Tn$ is the number of all objects.
We combine all the nodes to form an object matrix $O \in \mathbb{R}^{M \times d}$. The adjacency matrix $A$ for the graph is computed based on the dot-product of the projected nodes, i.e.,
$$A = \mathrm{Softmax}\big((O W_1)(O W_2)^{\top}\big), \qquad (17)$$
where $W_1$ and $W_2$ are parameters. The visual features are then encoded by a GCN, i.e., $\tilde{O} = \mathrm{GCN}(O, A)$, where $\tilde{O}$ contains interaction information among objects.
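As an illustration of this step, the fragment below builds the adjacency matrix from dot products of projected nodes and applies a single graph-convolution layer. The Softmax normalization and single-layer depth are simplifying assumptions for the sketch, not necessarily the exact MASN configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectGCN(nn.Module):
    """One graph-convolution layer over the M object nodes (sketch)."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj_q = nn.Linear(dim, dim, bias=False)
        self.proj_k = nn.Linear(dim, dim, bias=False)
        self.gcn_w = nn.Linear(dim, dim)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (B, M, d) object matrix O.
        # Eq. (17): adjacency from dot products of projected nodes,
        # row-normalized here with Softmax (an assumption).
        adj = torch.softmax(self.proj_q(nodes) @ self.proj_k(nodes).transpose(1, 2),
                            dim=-1)                     # (B, M, M)
        # GCN propagation [58]: aggregate neighbors, then transform.
        return F.relu(self.gcn_w(adj @ nodes))          # (B, M, d)
```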
In this work, we encode the video into a probabilistic representation, which is defined as the combination of all encoded objects $\{z_j\}_{j=1}^{M}$. $z_j$ is the encoded $j$-th object in the video, which is modeled as a multivariate Gaussian distribution, i.e., $z_j \sim \mathcal{N}(\mu_j, \sigma_j^2 I_d)$, where $\mu_j$ is the expectation, $\sigma_j^2$ is the variance, and $I_d$ is the $d$-order identity matrix (we assume the dimensions of $z_j$ are independent). The expectation and variance are obtained from $\tilde{O}$ with a two-layer perceptron (MLP) as follows,
$$\big(\mu_j,\, \sigma_j^2\big) = \mathrm{MLP}\big(\tilde{o}_j\big), \qquad (18)$$
where $\tilde{o}_j$ is the $j$-th row in $\tilde{O}$. In practice, we use the reparameterization trick to sample $K$ sets of visual representations $Z^{(k)} = \{z_j^{(k)}\}_{j=1}^{M}$, where $z_j^{(k)} = \mu_j + \sigma_j \odot \epsilon^{(k)}$ and $\epsilon^{(k)} \sim \mathcal{N}(\mathbf{0}, I_d)$ ($k = 1, \dots, K$, and $\odot$ represents element-wise multiplication). Note that the process above is applied in parallel for both appearance and motion encoding, and thus two types of visual representations are obtained. $Z^{(k)}$ is then exploited for video-question interaction.
The question encoder comprises word embedding and LSTM [60]. Specifically, the words are transformed into 300D vectors using GloVe [61]. Subsequently, a linear projection and LSTM are utilized to process the word sequence. To summarize, the LSTM encodes each question into a sequence of hidden states. We do not apply probabilistic modeling to question encoding as our prior experiments show it brings no improvement for MASN.
After obtaining the appearance/motion representation and the question encoding, appearance/motion-question interaction is applied, by which the appearance/motion-question interacted representation is obtained as follows. For simplicity, we do not explicitly denote the sampling index $k$, since the processing is identical across the $K$ sets of visual features. Upon obtaining the encoded objects (applicable to either appearance or motion features) and the question encoding ($L$ represents the number of words in the question), we combine each of these sets into a matrix, denoted as $Z \in \mathbb{R}^{M \times d}$ and $Q \in \mathbb{R}^{L \times d}$, respectively. The video-question interaction is performed using the bilinear attention network (BAN) [14] as follows,
$$R = \mathrm{BAN}\big(Z,\, Q;\, \mathcal{A}\big), \qquad (19)$$
where $\mathcal{A} \in \mathbb{R}^{M \times L}$ is the attention map computed within BAN. Four blocks of BAN are applied, and the final output is the video-question interacted representation, which is denoted as $R \in \mathbb{R}^{M \times d}$.
The above stream is applied to appearance features and motion features in parallel, producing two types of video-question representation: $R_a$ for appearance and $R_m$ for motion. $R_a$ and $R_m$ are fused with the motion-appearance-centered attention. Specifically, three types of attention are computed as follows,
$$H_a = \mathrm{Att}\big(R_a,\, R_m,\, R_m\big), \qquad (20)$$
$$H_m = \mathrm{Att}\big(R_m,\, R_a,\, R_a\big), \qquad (21)$$
$$H_x = \mathrm{Att}\big(R_x,\, R_x,\, R_x\big), \quad R_x = [R_a;\, R_m], \qquad (22)$$
where $[\cdot;\cdot]$ denotes concatenation, and the function $\mathrm{Att}(\cdot,\cdot,\cdot)$ refers to scaled dot-product attention [59], taking as inputs the query, key, and value. Then, a residual connection [62] and a LayerNorm [63] are further applied as follows,
$$\tilde{H}_{\ast} = \mathrm{LayerNorm}\big(H_{\ast} + R_{\ast}\big), \qquad (23)$$
where $\ast \in \{a, m, x\}$, and $H_a$/$H_m$/$H_x$ is the appearance-centered/motion-centered/mix attention. We then fuse the results with the guidance of the question as follows,
$$[\gamma_a,\, \gamma_m,\, \gamma_x] = \mathrm{Softmax}\big(W_f\, \bar{q}\big), \qquad (24)$$
$$H = \gamma_a \tilde{H}_a + \gamma_m \tilde{H}_m + \gamma_x \tilde{H}_x, \qquad (25)$$
where $\bar{q}$ is the sum of $Q$ along the first dimension (i.e., over words) and $W_f$ is a parameter. Finally, the video-question feature $h$ is computed by aggregating $H$ along the first dimension as follows,
$$h = \sum_{j=1}^{M} \alpha_j H_j, \quad \text{with } \alpha = \mathrm{Softmax}(H w), \qquad (26)$$
where $H_j$ is the $j$-th row of $H$, and $w$ projects $H_j$ to a scalar. $h$ is the fused feature for answer prediction. Given that we have $K$ sets of appearance/motion representations, we compute $K$ features $\{h^{(k)}\}_{k=1}^{K}$.
The answer decoder varies according to the task at hand. For the regression task, the answer is predicted by projecting $h$ to two scalars (the predicted expectation and variance). For the classification task, the answer distribution is computed by projecting $h$ to logits and applying Softmax to the logits. For the multi-choice task, we first concatenate the question and each candidate answer, and then model VideoQA as binary classification; the prediction is the answer with the highest estimated probability of being correct. Importantly, the $K$ features $\{h^{(k)}\}_{k=1}^{K}$ yield $K$ predictions, and the final outcome is their average. The optimization follows our uncertainty-aware CL.
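As an illustration of the classification/multi-choice decoding with K sampled features, the sketch below averages the K per-sample predictions before the answer is selected. The class and variable names, and the shapes, are assumptions made for the purpose of illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifierDecoder(nn.Module):
    """Projects each of the K fused features to logits and averages the
    resulting probability distributions (sketch)."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.cls = nn.Linear(dim, num_classes)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        # fused: (K, B, dim) -> averaged probabilities (B, num_classes).
        return F.softmax(self.cls(fused), dim=-1).mean(dim=0)

# For the multi-choice setting, each question-answer pair can be scored as a
# binary classification (num_classes=2) and the candidate with the highest
# probability of being correct is chosen, e.g.:
#   scores = torch.stack([decoder(f)[..., 1] for f in per_choice_features])
#   prediction = scores.argmax(dim=0)
```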
III Experiments
III-A Implementation Details
In the probabilistic modeling, the visual representations are sampled five times during training and ten times during testing. As for the CL scheduler $\lambda_t$, we employ a linear function with values increasing over the epochs, ranging from 3 to 7 (concave/convex functions yield similar results to the linear one after tuning hyper-parameters). The model is trained with a batch size of 32, utilizing the Adam optimizer [64]. All experiments are conducted using PyTorch with NVIDIA A100 GPUs.
We evaluate our methods on four datasets: TGIF-QA [38], NExT-QA [65], MSVD-QA [39], and MSRVTT-QA [39]. Specifically, TGIF-QA consists of four sub-tasks: Count, FrameQA, Action, and Transition. Count is formulated as regression, FrameQA is an open-ended task (multi-class classification), while Action and Transition are multi-choice tasks (binary classification for each option). NExT-QA is a multi-choice dataset covering description, chronology, and causality. MSVD-QA and MSRVTT-QA are open-ended VideoQA datasets. Regarding evaluation metrics, we employ mean square error (MSE) for Count and accuracy for the other sub-tasks and datasets.
III-B Comparisons with Existing Methods
Table I: Comparison with existing methods on TGIF-QA (Count: MSE, lower is better; others: accuracy in %).

Models | Count | FrameQA | Action | Trans. |
---|---|---|---|---|
B2A [36] | 3.71 | 57.5 | 75.9 | 82.6 |
HAIR [66] | 3.88 | 60.2 | 77.8 | 82.3 |
HOSTER [67] | 4.13 | 58.2 | 75.6 | 82.1 |
MASN [2] | 3.64∗ | 58.5∗ | 82.1∗ | 85.7∗ |
HQGA [13] | 3.97∗ | 61.3 | 76.9 | 85.6 |
IGV [3] | 3.67∗ | 52.8∗ | 78.5∗ | 85.7∗ |
UCLQAF | 3.22 | 60.5 | 84.0 | 87.7 |
UCLQAP | 3.22 | 60.4 | 84.0 | 87.8 |
In this section, we compare our model with existing methods. As discussed in Related Work, we follow HQGA [13], IGV [3], and ATP [40], and compare only with methods without large-scale video-text pretraining for fair comparisons. We selected the compared methods based on two criteria: 1) state-of-the-art methods from recent years, such as MASN [2], IGV [3], and HQGA [13]; 2) methods that report results on the corresponding datasets in their original papers. In summary, we compare against the strongest methods with available results on each dataset.
III-B1 Results on TGIF-QA
We first compare our methods with existing ones on TGIF-QA. The results are shown in Table I. Note that UCLQAF/UCLQAP in the table denotes our model trained with feature/predictive-uncertainty-aware CL. As shown, both UCLQAF and UCLQAP achieve state-of-the-art performance on all sub-tasks except FrameQA, with significant improvements over previous methods on the other three sub-tasks. On FrameQA, HQGA [13] achieves the best results owing to its hierarchical structure, which is effective in capturing the global dependencies that are crucial to FrameQA. Note that the state-of-the-art method IGV [3] is not effective on FrameQA and achieves the worst performance. Besides, UCLQAF and UCLQAP have similar performance on all sub-tasks of TGIF-QA.
III-B2 Results on NExT-QA
We compare our methods with previous ones on NExT-QA. The results (accuracy on the validation/testing sets) are shown in Table II. As we can see, our UCLQAF achieves better performance than the state-of-the-art methods, i.e., IGV [3], ATP [40], and HQGA [13]. Specifically, the improvement in accuracy on descriptive questions is significant compared to previous results. UCLQAP further promotes the performance to a noticeable extent compared to HQGA and UCLQAF, and its improvement on causal questions is noteworthy. Nevertheless, our methods obtain inferior results on temporal questions. We assume the reason is that MASN models the relations among objects across frames, which diminishes the impact of temporal information on answer prediction.
III-B3 Results on MSVD-QA and MSRVTT-QA
We have conducted a comparative analysis of our methodologies against previous approaches using the MSVD-QA [39] and MSRVTT-QA [39] datasets, with the results presented in Table III. Notably, UCLQAP and IGV [3] jointly achieve the highest performance on MSVD-QA, whereas UCLQAF exhibits slightly lower results. Conversely, when considering MSRVTT-QA, both UCLQAP and UCLQAF trail behind IGV. This discrepancy may be attributed to the datasets’ inherent properties. MSRVTT-QA features clear causal relations, which IGV capitalizes on by effectively leveraging invariant grounding to identify pivotal scenes for causal relation reasoning.
Table II: Accuracy (%) on NExT-QA. Val. is the overall validation accuracy; Causal, Temp., Descrip., and All are testing accuracies.

Models | Val. | Causal | Temp. | Descrip. | All |
---|---|---|---|---|---|
CoMem [5] | 44.2 | 45.9 | 50.0 | 54.4 | 48.5 |
HCRN [37] | 48.2 | 47.1 | 49.3 | 54.0 | 48.8 |
HME [4] | 48.1 | 46.8 | 48.9 | 57.4 | 49.2 |
MASN∗ [2] | 50.8 | 47.7 | 49.4 | 57.8 | 49.8 |
HGA [6] | 49.7 | 48.1 | 49.1 | 57.8 | 50.0 |
IGV [3] | — | 48.6 | 51.7 | 59.6 | 51.3 |
ATP [40] | — | 48.6 | 49.3 | 65.0 | 51.5 |
HQGA [13] | 51.4 | 49.0 | 52.3 | 59.4 | 51.8 |
UCLQAF | 51.6 | 49.4 | 50.6 | 61.9 | 51.8 |
UCLQAP | 52.3 | 50.3 | 50.5 | 61.8 | 52.2 |
III-C More Analysis
III-C1 Ablation Study
We conduct ablation studies to show the impact of the proposed probabilistic modeling and uncertainty-aware CL. The baseline model is constructed by discarding probabilistic modeling, so that the visual representations (including appearance and motion) are deterministic instead of random variables, and CL is not applied during training. Besides, we also evaluate the model with probabilistic modeling but trained without CL. The results on TGIF-QA are shown in Table V. As we can see from the results, the baseline model achieves considerable results. Probabilistic modeling improves the performance on all sub-tasks of TGIF-QA, which means modeling the uncertainty in the visual feature space is beneficial to capturing robust spatial-temporal dependencies in videos for question answering. Furthermore, feature-uncertainty-aware CL (CLF) and predictive-uncertainty-aware CL (CLP) both improve the performance on all sub-tasks, and the improvements on Count, FrameQA, and Action are significant. We assume the reason for the less significant improvement on Transition is that the variance of the uncertainty distribution on this sub-task is small, which diminishes the impact of CL.
Table IV: Comparison of different CL strategies (applied on top of probabilistic modeling). The first four columns are TGIF-QA sub-tasks (Count: MSE; others: accuracy in %); MSVD and MSRVTT report accuracy (%). "/" denotes training without CL.

CL | Count | FrameQA | Action | Trans. | MSVD | MSRVTT |
---|---|---|---|---|---|---|
/ | 3.33 | 59.7 | 82.8 | 87.2 | 38.9 | 36.1 |
CLH | 3.59 (+0.26) | 59.9 (+0.2) | 83.0 (+0.2) | 86.6 (-0.6) | 38.8 (-0.1) | 36.4 (+0.3) |
CLL | 3.46 (+0.13) | 60.0 (+0.3) | 82.8 (+0.0) | 87.1 (-0.1) | 39.4 (+0.5) | 36.4 (+0.3) |
CLF | 3.22 (-0.11) | 60.5 (+0.8) | 84.0 (+1.2) | 87.7 (+0.5) | 40.6 (+1.7) | 37.3 (+1.2) |
CLP | 3.22 (-0.11) | 60.4 (+0.7) | 84.0 (+1.2) | 87.8 (+0.6) | 40.8 (+1.9) | 37.3 (+1.2) |
Table V: Ablation study on TGIF-QA. PM: probabilistic modeling; CLF/CLP: feature/predictive-uncertainty-aware CL.

PM | CLF | CLP | Count | FrameQA | Action | Trans. |
---|---|---|---|---|---|---|
 | | | 3.64 | 58.5 | 82.1 | 85.7 |
✓ | | | 3.33 (-0.31) | 59.7 (+1.2) | 82.8 (+0.7) | 87.2 (+1.5) |
✓ | ✓ | | 3.22 (-0.42) | 60.5 (+2.0) | 84.0 (+1.9) | 87.7 (+2.0) |
✓ | | ✓ | 3.22 (-0.42) | 60.4 (+1.9) | 84.0 (+1.9) | 87.8 (+2.1) |
III-C2 Curriculum Learning
We compare our uncertainty-aware curriculum learning with the original SPL (CLH) [19] and the SPL with linear regularizer (CLL) [20]. The results on three datasets are shown in Table IV. The baseline model is our model with only probabilistic modeling (the second row in Table V). For fair comparisons, we also apply probabilistic modeling to the model trained with CLH and CLL. As we can see from the results, CLH makes little difference to the performance on MSVD-QA, and it has a negative impact on Count and Transition. Besides, the improvements on FrameQA, Action, and MSRVTT-QA are marginal. As for CLL, it brings obvious improvement only on MSVD-QA, while the impact on other datasets is either little or negative. In contrast, the proposed CLP and CLF both achieve obvious improvements on all sub-tasks and datasets, which shows the superiority of our methods over traditional SPL.
III-C3 Uncertainty Quantification
We compare our uncertainty-aware curriculum learning under distinct forms of uncertainty quantification. Specifically, we juxtapose the uncertainty quantification technique proposed in [46] with our two variations of uncertainty ($u_f$ and $u_p$) on the regression task (Count) and the classification task (FrameQA), respectively. The results, depicted in Table VI, are contrasted against a baseline model trained without curriculum learning. From the outcomes, it is evident that curriculum learning with the uncertainty of [46] yields improvements for both the regression task (Count) and the classification task (FrameQA). This underscores the efficacy of our uncertainty-aware curriculum learning framework and its adaptability to different uncertainty quantification methodologies. Furthermore, our proposed types of uncertainty provide even more significant performance gains than the uncertainty of [46], largely attributed to the added benefit of probabilistic modeling.
Table VI: Comparison of uncertainty quantification methods for CL on Count (MSE) and FrameQA (accuracy in %).

Uncertainty | Count | FrameQA |
---|---|---|
Baseline | 3.64 | 58.5 |
Uncertainty from [46] | 3.50 (-0.14) | 59.7 (+1.2) |
Feature uncertainty (ours) | 3.22 (-0.42) | 60.5 (+2.0) |
Predictive uncertainty (ours) | 3.22 (-0.42) | 60.4 (+1.9) |
Table VII: Applying our UCL framework to different VideoQA models (accuracy in %).

Models | TGIF-Action w/o UCL | TGIF-Action w/ UCL | MSVD-QA w/o UCL | MSVD-QA w/ UCL |
---|---|---|---|---|
HGA | 76.0 | 78.5 (+2.5) | 33.1 | 34.7 (+1.6) |
HQGA | 76.9 | 78.8 (+1.9) | 39.7 | 41.5 (+1.8) |
IGV∗ | 78.5 | 79.6 (+1.1) | 35.6 | 37.0 (+1.4) |
MASN | 82.1 | 84.0 (+1.9) | 38.0 | 40.8 (+2.8) |
III-C4 Generalization Ability
To demonstrate the generalization ability of our approach across various VideoQA models, we also apply it to HGA, HQGA, and IGV. The outcomes, presented in Table VII, consistently show substantial enhancements across these models on both TGIF-Action (multi-choice) and MSVD-QA (open-ended). Notably, a straightforward modification results in accuracy improvements exceeding 1.5% in the majority of cases. These findings underscore the versatility of our UCL framework, showcasing its applicability across a diverse range of VideoQA models to achieve superior performance.
III-C5 Quality of Uncertainty
To quantitatively analyze the quality of our uncertainty, we discretize the predictive uncertainty (normalized to [0,1]) into ten levels and compute the accuracy within each level. The uncertainty-accuracy curves on TGIF-Action and TGIF-Transition are shown in Fig. 2. We observe a negative correlation between uncertainty and accuracy, which supports the validity of our uncertainty estimation. Therefore, a promising application of our model is to quantify the uncertainty in VideoQA data and assess the difficulty of videos and QA pairs, which can greatly accelerate the collection of challenging data.
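The uncertainty-accuracy analysis can be reproduced with a simple binning procedure, sketched below. The ten-level discretization follows the description above, while the function and variable names are our own illustrative choices.

```python
import numpy as np

def uncertainty_accuracy_curve(uncertainty: np.ndarray,
                               correct: np.ndarray,
                               num_bins: int = 10):
    """Bin samples by (min-max normalized) uncertainty and report per-bin accuracy.

    uncertainty: (N,) predictive uncertainty of each test sample.
    correct:     (N,) 1 if the prediction is correct, else 0.
    """
    u = (uncertainty - uncertainty.min()) / (uncertainty.max() - uncertainty.min() + 1e-8)
    bins = np.minimum((u * num_bins).astype(int), num_bins - 1)
    accs = [correct[bins == b].mean() if np.any(bins == b) else np.nan
            for b in range(num_bins)]
    return accs  # expected to decrease as uncertainty grows
```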
III-C6 Uncertainty Visualization
Fig. 3(a) shows some examples of high predictive uncertainty in TGIF-Transition, where the uncertainty of each option is provided. As the figure shows, the wrong predictions have high predictive uncertainty, which means our model has less confidence in these predictions and thus makes incorrect choices. Furthermore, there exists obvious ambiguity in the high-uncertainty videos; e.g., in the right video of Fig. 3(a), while the man is dancing, he also performs a very noticeable pointing action, so it makes sense that our model has low confidence in both "Dance" and "Point". Fig. 3(b) illustrates some examples of high feature uncertainty in TGIF-Transition. As we can see from the figure, the videos with high feature uncertainty are generally of poor visual quality. For example, the question for the left video in Fig. 3(b) asks about the actions shown on a TV screen, which is not clear enough to tell what actually happens. Besides, in the right video, the man moves very quickly, and it is hard to tell whether he is laughing or smiling. These examples demonstrate that poor visual quality results in high feature uncertainty, which can have a negative impact on the accuracy of the predictions.
III-D Hyper-parameter Analysis
III-D1 KL Divergence Weight
In our probabilistic modeling, we apply KL divergence regularization to the stochastic representations to prevent their degradation into deterministic ones, with the regularization strength balanced by a weight denoted as $\beta$. The impact of varying $\beta$ on the model trained with predictive-uncertainty-aware CL for NExT-QA is presented in Table VIII. The reported results include both validation and testing accuracy. As observed from the table, performance peaks at a moderate value of $\beta$, while larger values of $\beta$ lead to decreased accuracy. This pattern is likely attributed to the high-quality videos in the NExT-QA dataset, in which only minor uncertainty exists in the feature space; introducing strong regularization to the features might inadvertently compromise information integrity. Similar analyses were conducted on TGIF-QA, MSVD, and MSRVTT to select the optimal $\beta$ for each dataset or sub-task within TGIF-QA. The resulting trend suggests that larger $\beta$ values are advantageous for datasets with visually unsatisfactory videos (such as TGIF-QA), facilitating the learning of more robust representations.
Table VIII: Impact of the KL divergence weight $\beta$ on NExT-QA (accuracy in %). Val. is validation accuracy; Causal, Temp., Descrip., and All are testing accuracies. Each row corresponds to a different value of $\beta$.

Val. | Causal | Temp. | Descrip. | All |
---|---|---|---|---|
52.2 | 48.4 | 49.9 | 62.6 | 51.2 | |
52.2 | 49.7 | 50.7 | 61.7 | 52.0 | |
52.3 | 50.3 | 50.5 | 61.8 | 52.2 | |
51.8 | 49.7 | 49.4 | 61.9 | 51.6 | |
51.1 | 48.8 | 49.5 | 60.9 | 51.0 |
Table IX: MSE on the Count validation set under different scheduler hyper-parameters $\lambda_0$ (rows) and $\lambda_T$ (columns).

$\lambda_0$ \ $\lambda_T$ | 5 | 6 | 7 |
---|---|---|---|
1 | 3.24 / 3.22 | 3.33 / 3.35 | 3.26 / 3.30 |
2 | 3.29 / 3.30 | 3.28 / 3.30 | 3.25 / 3.29 |
3 | 3.26 / 3.25 | 3.31 / 3.30 | 3.32 / 3.33 |
III-D2 Training Scheduler
The training scheduler plays a pivotal role in controlling the rate at which the difficulty level changes during curriculum learning. In this study, a linear increasing function with respect to the epoch is adopted, given by:

$$\lambda_t = \lambda_0 + (\lambda_T - \lambda_0)\,\frac{t}{T}, \qquad (27)$$
where $T$ corresponds to the total number of training epochs, and $\lambda_0$ and $\lambda_T$ ($\lambda_0 < \lambda_T$) serve as hyper-parameters that govern the starting difficulty and the rate of difficulty adaptation. The results for different settings of $\lambda_0$ (1, 2, 3) and $\lambda_T$ (5, 6, 7) on the Count task within TGIF-QA are displayed in Table IX, featuring the reported mean squared error (MSE) on the validation set. The outcomes reveal that the optimal performance is achieved at the smallest setting, $\lambda_0 = 1$ and $\lambda_T = 5$. In our experiments, the optimal $\lambda_0$ and $\lambda_T$ values were selected based on validation performance for each dataset (or sub-task within TGIF-QA). Generally, for smaller-scale datasets (or sub-tasks) such as Count and FrameQA, smaller and more rapidly increasing scheduler values tend to yield better results. Conversely, for larger-scale datasets such as Transition and NExT-QA, a larger, more slowly increasing scheduler is preferred. This distinction arises from the observation that larger-scale data need a longer duration to adapt to the evolving difficulty level.
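Under the linear form reconstructed in Eq. 27, the scheduler is a one-line function of the epoch index. The sketch below uses the hypothetical names lambda_start and lambda_end for the two hyper-parameters.

```python
def linear_scheduler(epoch: int, total_epochs: int,
                     lambda_start: float, lambda_end: float) -> float:
    """Eq. (27): linearly increase the difficulty threshold over training."""
    return lambda_start + (lambda_end - lambda_start) * epoch / max(total_epochs, 1)
```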
Table X: Effect of the number of sampling times during inference on TGIF-QA (C: Count, F: FrameQA, A: Action, T: Trans.), together with model sizes. UCLQA1/3/7/10 denote our model with 1/3/7/10 samples at inference.

Model | C | F | A | T | #Param. (M) |
---|---|---|---|---|---|
MASN [2] | 3.64∗ | 58.5∗ | 82.1∗ | 85.7∗ | 25.7 |
HQGA [13] | 3.97∗ | 61.3 | 76.9 | 85.6 | 11.0 |
IGV [3] | 3.67∗ | 52.8∗ | 78.5∗ | 85.7∗ | 34.5 |
UCLQA1 | 3.22 | 60.4 | 83.4 | 87.5 | 27.3 |
UCLQA3 | 3.24 | 60.4 | 83.6 | 87.6 | |
UCLQA7 | 3.21 | 60.4 | 83.9 | 87.6 | |
UCLQA10 | 3.22 | 60.4 | 84.0 | 87.8 |
III-D3 Sampling Times
Table X presents the outcomes of our model with varied sampling times during inference on TGIF-QA (trained with predictive-uncertainty-aware CL). Throughout training, a consistent sampling time of 5 is employed across all sub-tasks. The results indicate that, during testing, different sampling times have minimal impact on predictions across all sub-tasks except Action. This observation likely stems from the enhanced robustness of our model due to probabilistic modeling; consequently, minor perturbations in the encoded visual representations exert limited influence on video-question interaction and answer prediction. For Action, performance improves with more sampling times, although the enhancement becomes marginal when the number of samples is large. Table X also provides insights into model parameters: HQGA features the fewest parameters, IGV the most, while our model and MASN exhibit similar parameter counts. Importantly, the sampling is performed after video encoding, and the subsequent repeated computation can be implemented in parallel rather than serially, so the inference speed remains largely unaffected. For instance, the inference time (in seconds) of UCLQA1 (MASN)/UCLQA3/UCLQA7/UCLQA10 on Count is 201/208/215/221 on an NVIDIA A100 GPU.
IV Conclusion
In this paper, we propose a novel uncertainty-aware CL framework for VideoQA, in which data difficulty is gauged by uncertainty. To capture data uncertainty and mitigate its negative impact, we present probabilistic modeling for VideoQA: VideoQA is reformulated as a stochastic computation graph, wherein the hidden representations of videos and questions become stochastic variables. Within this probabilistic modeling framework, we define feature uncertainty and predictive uncertainty to guide curriculum learning. In practice, we seamlessly integrate a VideoQA model into our framework and conduct experiments to demonstrate the superiority of our methods. The results illustrate that our approach achieves state-of-the-art performance across multiple datasets while also providing meaningful uncertainty quantification for VideoQA.
V Future Work
Current VideoQA methodologies primarily leverage deep neural networks for deterministic video and text encoding, yielding deterministic features [2, 13, 3]. However, inherent uncertainty is present in the data due to factors like noise, blur, and occlusion within videos. To mitigate this challenge, we propose probabilistic modeling, where representations are treated as random variables; this approach aims to diminish the influence of uncertainty and quantify it effectively. On the other hand, a model employing feature-level probabilistic modeling might be less sensitive to subtle changes in videos. In other words, a model trained within our framework might have a diminished ability to differentiate visually similar concepts, potentially impacting fine-grained video comprehension. This concern will be a focal point of our future work.
Appendix: Derivation of ELBO
The detailed derivation of the evidence lower bound (ELBO) is given in Eq. 28:

$$\begin{aligned} \log p(a \mid V, Q) &= \log \iint p(a, z_v, z_q \mid V, Q)\, \mathrm{d}z_v\, \mathrm{d}z_q \\ &= \log \iint q_{\phi_v}(z_v \mid V)\, q_{\phi_q}(z_q \mid Q)\, \frac{p_\theta(a \mid z_v, z_q)\, p(z_v)\, p(z_q)}{q_{\phi_v}(z_v \mid V)\, q_{\phi_q}(z_q \mid Q)}\, \mathrm{d}z_v\, \mathrm{d}z_q \\ &\geq \mathbb{E}_{q_{\phi_v}(z_v \mid V)\, q_{\phi_q}(z_q \mid Q)}\big[\log p_\theta(a \mid z_v, z_q)\big] - \mathrm{KL}\big(q_{\phi_v}(z_v \mid V)\,\|\,p(z_v)\big) - \mathrm{KL}\big(q_{\phi_q}(z_q \mid Q)\,\|\,p(z_q)\big), \end{aligned} \qquad (28)$$

where the joint distribution factorizes as $p(a, z_v, z_q \mid V, Q) = p_\theta(a \mid z_v, z_q)\, p(z_v)\, p(z_q)$ under the standard Gaussian priors on $z_v$ and $z_q$, and the inequality follows from Jensen's inequality.
References
- [1] D. Huang, P. Chen, R. Zeng, Q. Du, M. Tan, and C. Gan, “Location-aware graph convolutional networks for video question answering,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 11 021–11 028.
- [2] A. Seo, G.-C. Kang, J. Park, and B.-T. Zhang, “Attend what you need: Motion-appearance synergistic networks for video question answering,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021, pp. 6167–6177.
- [3] Y. Li, X. Wang, J. Xiao, W. Ji, and T.-S. Chua, “Invariant grounding for video question answering,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 2928–2937.
- [4] C. Fan, X. Zhang, S. Zhang, W. Wang, C. Zhang, and H. Huang, “Heterogeneous memory enhanced multimodal attention model for video question answering,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 1999–2007.
- [5] J. Gao, R. Ge, K. Chen, and R. Nevatia, “Motion-appearance co-memory networks for video question answering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6576–6585.
- [6] P. Jiang and Y. Han, “Reasoning with heterogeneous graph alignment for video question answering,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 11 109–11 116.
- [7] J. Jiang, Z. Liu, and N. Zheng, “Livlr: A lightweight visual-linguistic reasoning framework for video question answering,” IEEE Transactions on Multimedia, vol. 25, pp. 5002–5013, 2023.
- [8] F. Zhang, R. Wang, F. Zhou, Y. Luo, and J. Li, “Psam: Parameter-free spatiotemporal attention mechanism for video question answering,” IEEE Transactions on Multimedia, pp. 1–16, 2023.
- [9] W. Zhang, S. Tang, Y. Cao, S. Pu, F. Wu, and Y. Zhuang, “Frame augmented alternating attention network for video question answering,” IEEE Transactions on Multimedia, vol. 22, no. 4, pp. 1032–1041, 2020.
- [10] Z. Guo, J. Zhao, L. Jiao, X. Liu, and F. Liu, “A universal quaternion hypergraph network for multimodal video question answering,” IEEE Transactions on Multimedia, vol. 25, pp. 38–49, 2023.
- [11] J. Wang, B.-K. Bao, and C. Xu, “Dualvgr: A dual-visual graph reasoning unit for video question answering,” IEEE Transactions on Multimedia, vol. 24, pp. 3369–3380, 2022.
- [12] T. Qian, R. Cui, J. Chen, P. Peng, X. Guo, and Y.-G. Jiang, “Locate before answering: Answer guided question localization for video question answering,” IEEE Transactions on Multimedia, pp. 1–10, 2023.
- [13] J. Xiao, A. Yao, Z. Liu, Y. Li, W. Ji, and T.-S. Chua, “Video as conditional graph hierarchy for multi-granular question answering.” AAAI, 2022.
- [14] J.-H. Kim, J. Jun, and B.-T. Zhang, “Bilinear attention networks,” Advances in neural information processing systems, vol. 31, 2018.
- [15] L. Yang, Y. Shen, Y. Mao, and L. Cai, “Hybrid curriculum learning for emotion recognition in conversation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 10, 2022, pp. 11 595–11 603.
- [16] Z. Zhou, X. Ning, Y. Cai, J. Han, Y. Deng, Y. Dong, H. Yang, and Y. Wang, “Close: Curriculum learning on the sharing extent towards better one-shot nas,” in Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XX. Springer, 2022, pp. 578–594.
- [17] P. Soviany, R. T. Ionescu, P. Rota, and N. Sebe, “Curriculum learning: A survey,” International Journal of Computer Vision, vol. 130, no. 6, pp. 1526–1565, 2022.
- [18] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, “Curriculum learning,” in Proceedings of the 26th annual international conference on machine learning, 2009, pp. 41–48.
- [19] M. Kumar, B. Packer, and D. Koller, “Self-paced learning for latent variable models,” Advances in neural information processing systems, vol. 23, 2010.
- [20] L. Jiang, D. Meng, T. Mitamura, and A. G. Hauptmann, “Easy samples first: Self-paced reranking for zero-example multimedia search,” in Proceedings of the 22nd ACM international conference on Multimedia, 2014, pp. 547–556.
- [21] Q. Zhao, D. Meng, L. Jiang, Q. Xie, Z. Xu, and A. G. Hauptmann, “Self-paced learning for matrix factorization,” in Twenty-ninth AAAI conference on artificial intelligence, 2015.
- [22] M. Gong, H. Li, D. Meng, Q. Miao, and J. Liu, “Decomposition-based evolutionary multiobjective optimization to self-paced learning,” IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 288–302, 2018.
- [23] X. Wang, Y. Chen, and W. Zhu, “A survey on curriculum learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 9, pp. 4555–4576, 2021.
- [24] H. Li and M. Gong, “Self-paced convolutional neural networks.” in IJCAI, 2017, pp. 2110–2116.
- [25] J. Schulman, N. Heess, T. Weber, and P. Abbeel, “Gradient estimation using stochastic computation graphs,” Advances in Neural Information Processing Systems, vol. 28, 2015.
- [26] M. J. Wainwright, M. I. Jordan et al., “Graphical models, exponential families, and variational inference,” Foundations and Trends® in Machine Learning, vol. 1, no. 1–2, pp. 1–305, 2008.
- [27] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, “An introduction to variational methods for graphical models,” Machine learning, vol. 37, no. 2, pp. 183–233, 1999.
- [28] B. Qin, H. Hu, and Y. Zhuang, “Deep residual weight-sharing attention network with low-rank attention for visual question answering,” IEEE Transactions on Multimedia, vol. 25, pp. 4282–4295, 2023.
- [29] H. Zhong, J. Chen, C. Shen, H. Zhang, J. Huang, and X.-S. Hua, “Self-adaptive neural module transformer for visual question answering,” IEEE Transactions on Multimedia, vol. 23, pp. 1264–1273, 2021.
- [30] T. Qian, J. Chen, S. Chen, B. Wu, and Y.-G. Jiang, “Scene graph refinement network for visual question answering,” IEEE Transactions on Multimedia, vol. 25, pp. 3950–3961, 2023.
- [31] J. Yu, W. Zhang, Y. Lu, Z. Qin, Y. Hu, J. Tan, and Q. Wu, “Reasoning on the relation: Enhancing visual representation for visual question answering and cross-modal retrieval,” IEEE Transactions on Multimedia, vol. 22, no. 12, pp. 3196–3209, 2020.
- [32] Y. Liu, W. Wei, D. Peng, X.-L. Mao, Z. He, and P. Zhou, “Depth-aware and semantic guided relational attention network for visual question answering,” IEEE Transactions on Multimedia, vol. 25, pp. 5344–5357, 2023.
- [33] J. Jiang, Z. Chen, H. Lin, X. Zhao, and Y. Gao, “Divide and conquer: Question-guided spatio-temporal contextual attention for video question answering,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 11 101–11 108.
- [34] Y. Jang, Y. Song, C. D. Kim, Y. Yu, Y. Kim, and G. Kim, “Video question answering with spatio-temporal reasoning,” International Journal of Computer Vision, vol. 127, no. 10, pp. 1385–1412, 2019.
- [35] J. Wang, B.-K. Bao, and C. Xu, “Dualvgr: A dual-visual graph reasoning unit for video question answering,” IEEE Transactions on Multimedia, vol. 24, pp. 3369–3380, 2021.
- [36] J. Park, J. Lee, and K. Sohn, “Bridge to answer: Structure-aware graph interaction network for video question answering,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 15 526–15 535.
- [37] T. M. Le, V. Le, S. Venkatesh, and T. Tran, “Hierarchical conditional relation networks for video question answering,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 9972–9981.
- [38] Y. Jang, Y. Song, Y. Yu, Y. Kim, and G. Kim, “Tgif-qa: Toward spatio-temporal reasoning in visual question answering,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2758–2766.
- [39] D. Xu, Z. Zhao, J. Xiao, F. Wu, H. Zhang, X. He, and Y. Zhuang, “Video question answering via gradually refined attention over appearance and motion,” in Proceedings of the 25th ACM international conference on Multimedia, 2017, pp. 1645–1653.
- [40] S. Buch, C. Eyzaguirre, A. Gaidon, J. Wu, L. Fei-Fei, and J. C. Niebles, "Revisiting the "video" in video-language understanding," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 2917–2927.
- [41] R. Zellers, X. Lu, J. Hessel, Y. Yu, J. S. Park, J. Cao, A. Farhadi, and Y. Choi, “Merlot: Multimodal neural script knowledge models,” Advances in Neural Information Processing Systems, vol. 34, pp. 23 634–23 651, 2021.
- [42] T.-J. Fu, L. Li, Z. Gan, K. Lin, W. Y. Wang, L. Wang, and Z. Liu, “Violet: End-to-end video-language transformers with masked visual-token modeling,” arXiv preprint arXiv:2111.12681, 2021.
- [43] Y. Zeng, X. Zhang, H. Li, J. Wang, J. Zhang, and W. Zhou, “X2-vlm: All-in-one pre-trained model for vision-language tasks,” arXiv preprint arXiv:2211.12402, 2022.
- [44] Y. Zhou, B. Yang, D. F. Wong, Y. Wan, and L. S. Chao, “Uncertainty-aware curriculum learning for neural machine translation,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 6934–6944.
- [45] A. Der Kiureghian and O. Ditlevsen, “Aleatory or epistemic? does it matter?” Structural safety, vol. 31, no. 2, pp. 105–112, 2009.
- [46] A. Kendall and Y. Gal, “What uncertainties do we need in bayesian deep learning for computer vision?” Advances in neural information processing systems, vol. 30, 2017.
- [47] J. Chang, Z. Lan, C. Cheng, and Y. Wei, “Data uncertainty learning in face recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 5710–5719.
- [48] Y. He, C. Zhu, J. Wang, M. Savvides, and X. Zhang, “Bounding box regression with uncertainty for accurate object detection,” in Proceedings of the ieee/cvf conference on computer vision and pattern recognition, 2019, pp. 2888–2897.
- [49] W. Yang, T. Zhang, X. Yu, T. Qi, Y. Zhang, and F. Wu, “Uncertainty guided collaborative training for weakly supervised temporal action detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 53–63.
- [50] Y. Shi and A. K. Jain, “Probabilistic face embeddings,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6902–6911.
- [51] H. Zhou, C. Zhang, Y. Luo, Y. Chen, and C. Hu, “Embracing uncertainty: Decoupling and de-bias for robust temporal grounding,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 8445–8454.
- [52] H. Guo, H. Wang, and Q. Ji, “Uncertainty-guided probabilistic transformer for complex action recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 20 052–20 061.
- [53] M. Chen, J. Gao, S. Yang, and C. Xu, “Dual-evidential learning for weakly-supervised temporal action localization,” in European Conference on Computer Vision. Springer, 2022, pp. 192–208.
- [54] A. Amini, W. Schwarting, A. Soleimany, and D. Rus, “Deep evidential regression,” Advances in Neural Information Processing Systems, vol. 33, pp. 14 927–14 937, 2020.
- [55] M. Sensoy, L. Kaplan, and M. Kandemir, “Evidential deep learning to quantify classification uncertainty,” Advances in neural information processing systems, vol. 31, 2018.
- [56] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint arXiv:1312.6114, 2013.
- [57] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International conference on machine learning. PMLR, 2015, pp. 448–456.
- [58] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," in International Conference on Learning Representations (ICLR), 2017.
- [59] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in neural information processing systems, vol. 30, 2017.
- [60] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
- [61] J. Pennington, R. Socher, and C. D. Manning, “Glove: Global vectors for word representation,” in Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 2014, pp. 1532–1543.
- [62] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
- [63] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv preprint arXiv:1607.06450, 2016.
- [64] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
- [65] J. Xiao, X. Shang, A. Yao, and T.-S. Chua, “Next-qa: Next phase of question-answering to explaining temporal actions,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9777–9786.
- [66] F. Liu, J. Liu, W. Wang, and H. Lu, “Hair: Hierarchical visual-semantic relational reasoning for video question answering,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 1698–1707.
- [67] L. H. Dang, T. M. Le, V. Le, and T. Tran, “Hierarchical object-oriented spatio-temporal reasoning for video question answering,” arXiv preprint arXiv:2106.13432, 2021.
Haopeng Li is a Ph.D. student in the School of Computing and Information Systems, University of Melbourne. He received his B.S. degree from the School of Science, Northwestern Polytechnical University, and his Master's degree from the School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University. His research interests include computer vision, video understanding, and artificial intelligence.
Qiuhong Ke received her PhD degree from The University of Western Australia in 2018. She is a Lecturer (Assistant Professor) at Monash University. Before that, she was a Postdoctoral Researcher at the Max Planck Institute for Informatics and a Lecturer at the University of Melbourne. Her thesis "Deep Learning for Action Recognition and Prediction" was awarded a "Dean's List-Honourable Mention" by The University of Western Australia in 2018. She was awarded the "1962 Medal" for her work in video recognition technology by the Australian Computer Society in 2019, and the Early Career Researcher Award by the Australian Pattern Recognition Society in 2020. Her research interests include computer vision and machine learning.
Mingming Gong is a Lecturer (Assistant Professor) in data science with the School of Mathematics and Statistics, The University of Melbourne. His research interests include causal reasoning, machine learning, and computer vision. He has authored and co-authored 40+ research papers at top venues such as ICML, NeurIPS, UAI, AISTATS, IJCAI, AAAI, CVPR, ICCV, and ECCV, with 10+ oral/spotlight presentations and a best paper finalist at CVPR 2019. He has served as an area chair of NeurIPS'21 and ICLR'21, a senior program committee member of AAAI'19-20 and IJCAI'20-21, a program committee member of ICML, NeurIPS, UAI, CVPR, and ICCV, and a reviewer for TPAMI, AIJ, MLJ, TIP, TNNLS, etc. He received the Discovery Early Career Researcher Award from the Australian Research Council in 2021.
Tom Drummond is the Melbourne Connect Chair of Digital Innovation for Society at the University of Melbourne. Prior to July 2021, he was Head of the Department of Electrical and Computer Systems Engineering at Monash University, where he was also a Chief Investigator and Monash Node Leader for the ARC Centre of Excellence in Robotic Vision; prior to September 2010, he was a University Senior Lecturer at the University of Cambridge. His research interests include high performance computing, machine learning, and computer vision, with a particular emphasis on real-time systems for augmented reality, robotics, and assistive technologies. He has been awarded the Koenderink Prize and the ISMAR 10-year impact award. He has been awarded ARC and EU Framework research grants totalling in excess of $35M AUD, as well as numerous funded industry collaborations.