LADA: Look-Ahead Data Acquisition via Augmentation for Active Learning
Abstract
Active learning effectively collects data instances for training deep learning models when the labeled dataset is limited and the annotation cost is high. Besides active learning, data augmentation is also an effective technique to enlarge the limited amount of labeled instances. However, the potential gain from the virtual instances generated by data augmentation has not yet been considered in the acquisition process of active learning. Looking ahead to the effect of data augmentation during acquisition makes it possible to select and generate the data instances that are informative for training the model. Hence, this paper proposes Look-Ahead Data Acquisition via augmentation, or LADA, to integrate data acquisition and data augmentation. LADA considers both 1) the unlabeled data instance to be selected and 2) the virtual data instance to be generated by data augmentation, in advance of the acquisition process. Moreover, to enhance the informativeness of the virtual data instances, LADA optimizes the data augmentation policy to maximize the predictive acquisition score, resulting in the proposal of InfoMixup and InfoSTN. As LADA is a generalizable framework, we experiment with various combinations of acquisition and augmentation methods. The performance of LADA shows a significant improvement over recent augmentation and acquisition baselines that were independently applied to the benchmark datasets.
Introduction
Large-scale datasets in the big data era have fueled the rapid progress of artificial intelligence, but data labeling requires significant effort from human annotators. Therefore, adaptive sampling, i.e. Active Learning, has been developed to select the data instances most informative for learning the decision boundary (Cohn, Ghahramani, and Jordan 1996; Tong 2001; Settles 2009). This selection is difficult because it is influenced by the learner and the dataset at the same time. Hence, understanding the relation between the learner and the dataset has become a core component of active learning, which queries the next training example by its informativeness for learning the decision boundary.
[Figure 1: Data augmentation applied after Max Entropy acquisition versus the learnable augmentation integrated into the acquisition of LADA.]
Besides active learning, data augmentation is another source of virtual data instances to train models. The labeled data may not cover the full variation of the generalized data instances, so data augmentation has been widely used, particularly in the vision community (Liu and Ferrari 2017; Perez and Wang 2017; Cubuk et al. 2019). Conventional data augmentation has been a simple transformation of labeled data instances, i.e. flipping, rotating, etc. Recently, data augmentation has expanded to deep generative models, such as Generative Adversarial Networks (GAN) (Goodfellow et al. 2014) and the Variational Autoencoder (VAE) (Kingma and Welling 2014), that generate virtual examples. Since both the conventional augmentations and the generative model-based augmentations perform Vicinal Risk Minimization (VRM) (Chapelle et al. 2001), they assume that the virtual data instances in the vicinity share the same label, which limits the feasible vicinity. To overcome the limited vicinity of VRM, Mixup and its variants have been proposed by interpolating multiple data instances (Zhang et al. 2017). The pair of interpolated features and labels, or the Mixup instance, becomes a virtual instance that enlarges the support of the training distribution.
Data augmentation and active learning intend to overcome the scarcity of labeled data from different directions. First, active learning emphasizes the optimized selection of the unlabeled real-world instances for the oracle query, so it does not consider the benefit of virtual data generation. Second, data augmentation focuses on generating informative virtual data instances without intervening in the data selection stage, and without the potential assistance of the oracle. These differences motivate us to propose the Look-Ahead Data Acquisition via augmentation, or LADA, framework.
LADA looks ahead to the effect of data augmentation in advance of the acquisition process, and it selects data instances by considering both the unlabeled data instances and the virtual data instances generated by data augmentation at the same time. Whereas the conventional acquisition function does not consider the potential gain of the data augmentation, LADA contemplates the informativeness of the virtual data instances by integrating data augmentation into the acquisition process. Figure 1 describes the different behaviors of LADA and conventional acquisition functions when applying data augmentation to active learning.
Here are our contributions from the methodological and the experimental perspectives. First, we propose a generalized framework, named LADA, that looks ahead the acquisition score of the virtual data instance to be augmented, in advance of the acquisition. Second, we train the data augmentation policy to maximize the acquisition score to generate informative virtual instances. Particularly, we propose two data augmentation methods, InfoMixup and InfoSTN, which are trained by the feedback of acquisition scores. Third, we substantiate the proposed framework by implementing the variations of acquisition-augmentation frameworks with known acquisitions and augmentation methods.
Preliminaries
Problem Formulation
This paper trains a classifier network, $f_\theta$, with a dataset $\mathcal{D}$, while our scenario is differentiated by assuming $\mathcal{D} = \mathcal{D}_U \cup \mathcal{D}_L$ and $|\mathcal{D}_L| \ll |\mathcal{D}_U|$. Here, $\mathcal{D}_U$ is a set of unlabeled data instances, and $\mathcal{D}_L$ is a labeled dataset. Given these notations, a data augmentation function, $\phi_\tau$, transforms a data instance, $x$, into a modified instance, $\tilde{x} \in v(x)$; where $\tau$ is a parameter describing the policy of transformation, and $v(x)$ is the vicinity set of $x$ (Chapelle et al. 2001). On the other hand, a data acquisition function, $\alpha(x; f_\theta)$, calculates a score of each data instance, $x \in \mathcal{D}_U$, based on the current classifier, $f_\theta$; and $\alpha$ represents the instance selection strategy in the learning procedure of $f_\theta$ with the instance, $x$.
Data Augmentation
In the conventional data augmentations, $\tau$ in $\phi_\tau$ indicates the predefined degree of rotating, flipping, cropping, etc., and $\tau$ is manually chosen by the modeler to describe the vicinity of each data instance.
Another approach to modeling $\tau$ utilizes the feedback from the current classifier network, $f_\theta$. The Spatial Transformer Network (STN) is a transformer that generates a virtual example by training $\phi_\tau$ to minimize the cross-entropy (CE) loss of the transformed data (Jaderberg et al. 2015):
$\tau^* = \arg\min_{\tau} \mathrm{CE}\big(f_\theta(\phi_\tau(x)),\, y\big)$   (1)
where $y$ is the ground-truth label of the data instance, $x$.
Recently, Mixup-based data augmentations generate a virtual data instance from the vicinity of a pair of data instances. In Mixup, $\lambda$ becomes the mixing policy of two data instances, $x_i$ and $x_j$ (Zhang et al. 2017):
$\tilde{x}_{i,j} = \lambda x_i + (1-\lambda) x_j, \qquad \tilde{y}_{i,j} = \lambda y_i + (1-\lambda) y_j$   (2)
where the labels are also mixed by the proportion $\lambda$. While Eq.(2) corresponds to the input feature mixture, Manifold Mixup mixes the hidden feature maps from the middle of neural networks to learn smoother decision boundaries at multiple levels of representation (Verma et al. 2018). Whereas $\lambda$ is drawn from a fixed distribution without any learning process, AdaMixup learns $\lambda$ by adopting a discriminator, $D$ (Guo, Mao, and Zhang 2019):
$\lambda^* = \arg\min_{\lambda}\ \mathrm{CE}\big(f_\theta(\tilde{x}_{i,j}),\, \tilde{y}_{i,j}\big) - \log D(\tilde{x}_{i,j})$   (3)

where $D(\tilde{x}_{i,j})$ estimates whether the mixed instance stays on the data manifold, i.e. avoids manifold intrusion.
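For concreteness, below is a minimal PyTorch sketch of the Mixup operation in Eq.(2); the fixed Beta hyperparameter `alpha`, the tensor shapes, and the function name are illustrative choices of this sketch rather than values taken from the cited papers.

```python
import torch
from torch.distributions import Beta

def mixup(x_i, y_i, x_j, y_j, alpha=0.75):
    """Create a Mixup virtual instance (Eq. 2).

    x_i, x_j: input tensors with identical shapes.
    y_i, y_j: one-hot (or soft) label tensors with identical shapes.
    alpha:    in vanilla Mixup, a fixed Beta concentration rather than
              a learned policy.
    """
    lam = Beta(alpha, alpha).sample()          # mixing proportion lambda
    x_tilde = lam * x_i + (1.0 - lam) * x_j    # mixed input feature
    y_tilde = lam * y_i + (1.0 - lam) * y_j    # mixed (soft) label
    return x_tilde, y_tilde

# minimal usage with random data
x_i, x_j = torch.randn(1, 3, 32, 32), torch.randn(1, 3, 32, 32)
y_i, y_j = torch.eye(10)[[3]], torch.eye(10)[[7]]
x_tilde, y_tilde = mixup(x_i, y_i, x_j, y_j)
```

Manifold Mixup applies the same interpolation to hidden feature maps instead of raw inputs, and AdaMixup replaces the fixed policy with a learned one.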
Active Learning
We focus on pool-based active learning with an uncertainty score (Settles 2009). Given this scope of active learning, the data acquisition function measures the utility score of the unlabeled data instances, i.e. $\alpha(x; f_\theta)$ for $x \in \mathcal{D}_U$.
The traditional acquisition functions measure the predictive entropy, $\mathbb{H}[y|x, f_\theta]$ (Shannon 1948); or the variation ratio, $1 - \max_y p(y|x, f_\theta)$ (Freeman 1965). The recent acquisition function, BALD, calculates the hypothetical disagreement between stochastic model predictions on a data instance, $x$ (Houlsby et al. 2011).
Besides the classifier network, $f_\theta$, additional modules are applied to measure the acquisition score. To find the most dissimilar instance in $\mathcal{D}_U$ compared to $\mathcal{D}_L$, a discriminator, $D$, is introduced to estimate the probability of $x$ belonging to $\mathcal{D}_U$ (Sinha, Ebrahimi, and Darrell 2019):
$x^* = \arg\max_{x \in \mathcal{D}_U} D\big(E(x)\big)$   (4)

where $E$ is the VAE encoder of VAAL.
To diversely select uncertain data instances, the gradient embedding from the pseudo label, $\hat{y}$, is used in the k-MEANS++ seeding algorithm (Ash et al. 2020):
$g_x = \nabla_{\theta_{\text{out}}} \mathrm{CE}\big(f_\theta(x),\, \hat{y}\big), \qquad \hat{y} = \arg\max_y f_\theta(x)_y$   (5)

where $\theta_{\text{out}}$ denotes the parameters of the output layer.
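For reference, the two classical uncertainty scores above can be computed directly from classifier logits as in the sketch below; the helper names are ours, and the same helpers are reused in later sketches.

```python
import torch
import torch.nn.functional as F

def predictive_entropy(logits):
    """Max Entropy score: H[y|x, f_theta] from logits of shape (batch, classes)."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1)

def variation_ratio(logits):
    """Var Ratio score: 1 - max_y p(y|x, f_theta)."""
    p = F.softmax(logits, dim=-1)
    return 1.0 - p.max(dim=-1).values

logits = torch.randn(4, 10)            # e.g. 4 unlabeled instances, 10 classes
scores = predictive_entropy(logits)    # higher score = more informative instance
```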
Active Learning with Data Augmentation
There are a few prior works on effectively integrating active learning and data augmentation. Bayesian Generative Active Deep Learning (BGADL) integrates acquisition and augmentation by first selecting data instances via a Bayesian acquisition function, then augmenting the selected instances via a generative model, i.e. a VAE, afterward (Tran et al. 2019). However, BGADL limits the vicinity to preserve the labels, and BGADL demands a large number of labeled instances to train the generative model. More importantly, BGADL does not consider the potential gain of data augmentation in the process of acquisition.
[Figure 2: (a) the overall LADA framework; (b) the policy generator network in LADA with Max Entropy and Manifold Mixup.]
Methodology
A contribution of this paper is the proposal of an integrated framework of data augmentation and data acquisition, so we start by formulating such a framework. Afterward, we propose an integrated function for acquisition and augmentation as an example of the implemented framework.
Look-Ahead Data Acquisition via Augmentation
Since we look ahead to the acquisition score of augmented data instances, it is natural to integrate the functionalities of acquisition and augmentation. This paper proposes the Look-Ahead Data Acquisition via augmentation, or LADA, framework. Figure 2(a) depicts the LADA framework, which consists of a data augmentation component and an acquisition component. The goal of LADA is to enhance the informativeness of both 1) the real-world data instance, which is currently unlabeled but will be labeled by the oracle; and 2) the virtual data instance, which will be generated from the selected unlabeled data instances. This goal is achieved by looking ahead to their acquisition scores before the actual selections for the oracle annotations.
Specifically, LADA trains the data augmentation function, $\phi_\tau$, to maximize the acquisition score of the transformed data instance of $x \in \mathcal{D}_U$ before the oracle annotations. Eq.(6) specifies the learning objective of the augmentation policy via the feedback from the acquisition.
$\tau^* = \arg\max_\tau \alpha\big(\phi_\tau(x);\, f_\theta\big)$   (6)
With the optimal $\tau^*$ corresponding to $x$, LADA calculates the acquisition score of $x$ (see Eq.(7)), and the score also considers the utility of the augmented instance, $\phi_{\tau^*}(x)$:
$\alpha_{\text{LADA}}(x; f_\theta) = \alpha(x; f_\theta) + \alpha\big(\phi_{\tau^*}(x);\, f_\theta\big)$   (7)
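Eqs.(6)–(7) can be read as a generic look-ahead scoring routine. The sketch below is a minimal illustration under the assumptions that the augmentation is differentiable with respect to its policy `tau` and that `acquisition` returns a per-instance score; the names `look_ahead_score`, `augment`, and `tau_init` are ours, not from the paper or any library.

```python
import torch

def look_ahead_score(x, classifier, acquisition, augment, tau_init,
                     steps=10, lr=0.1):
    """Generic LADA sketch: optimize the augmentation policy tau so that the
    augmented instance maximizes the acquisition score (Eq. 6), then return
    the combined look-ahead score of Eq. 7."""
    tau = tau_init.clone().requires_grad_(True)
    opt = torch.optim.SGD([tau], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = acquisition(classifier(augment(x, tau))).sum()
        (-score).backward()                 # gradient ascent on the score
        opt.step()
    with torch.no_grad():
        return acquisition(classifier(x)) \
             + acquisition(classifier(augment(x, tau)))
```

In InfoMixup below, this per-instance search over the policy is replaced by amortized inference with a policy generator network over pairs of instances.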
Whereas the proposed LADA framework is a generalized framework that can adopt various types of acquisition and augmentation functions, this section mainly adopts Mixup for $\phi_\tau$, i.e. $\phi_\lambda$; and Max Entropy for $\alpha$, i.e. the predictive entropy $\mathbb{H}[y|x, f_\theta]$. To begin with, we introduce an integrated single function, $\alpha \circ \phi_\tau$, that substitutes the composition of the acquisition and augmentation functions for generality.
Integrated Augmentation and Acquisition: InfoMixup
As we introduce LADA with $\alpha \circ \phi_\tau$ to look ahead to the acquisition score of the virtual data instances, $\alpha \circ \phi_\tau$ can be a simple composition of well-known acquisition functions and augmentation functions where the policy of augmentation is fixed. However, this does not enhance the informativeness of the virtual data instances. Hence, we propose an integration where the policy of data augmentation is optimized to maximize the acquisition score, within a single function. Here, we introduce InfoMixup as a learnable data augmentation.
Data Augmentation
First, we propose InfoMixup, which is an adaptive version of Mixup that integrates the data augmentation into active learning. InfoMixup learns its mixing policy, $\lambda$, by the objective function of Eq.(8), which maximizes the acquisition score of the virtual data instance resulting from mixing two randomly paired data instances, $x_i$ and $x_j$:
$\lambda^* = \arg\max_\lambda \alpha\big(\lambda x_i + (1-\lambda) x_j;\, f_\theta\big) = \arg\max_\lambda \mathbb{H}\big[f_\theta\big(\lambda x_i + (1-\lambda) x_j\big)\big]$   (8)
InfoMixup is the starting point where we correlate the data augmentation with the data acquisition through the predictive entropy of the classifier.
We adopt Manifold Mixup as the data augmentation at the hidden layer. Specifically, the pair $(x_i, x_j)$ is processed through the current classifier network, $f_\theta$, until the propagation reaches the randomly selected $k$-th layer. (Throughout this paper, we denote the forward path from the $a$-th layer to the $b$-th layer of the classifier network as $f_{a:b}$, where the $0$-th is the input layer and the $L$-th is the output layer; hence, $f_\theta = f_{0:L}$.) Afterwards, the $k$-th feature maps are concatenated and processed by the policy generator network, $g_\pi$, to predict the mixing policy $\lambda$ that maximizes the acquisition score.
Data Augmentation Policy Learning
As we formulate the Mixup-based augmentation, we propose a policy generator network, $g_\pi$, to perform amortized inference on the Beta distribution of InfoMixup. While we provide the details of the policy network in Appendix A.2 and Figure 2(b), we formulate this inference process as Eq.(9) and Eq.(10).
$(a_{i,j}, b_{i,j}) = g_\pi\big(\big[f_{0:k}(x_i),\, f_{0:k}(x_j)\big]\big)$   (9)
$\lambda_{i,j} \sim \mathrm{Beta}(a_{i,j}, b_{i,j})$   (10)
To train the parameters, $\pi$, of the policy generator network, $g_\pi$, the paired features are mixed up with sampled $\lambda$'s. Using the $m$-th sample $\lambda^{(m)}$ from the Beta distribution inferred by $g_\pi$, the feature maps $h^k_i$ and $h^k_j$ are mixed to produce $\tilde{h}^k_{i,j,m}$ as below:
$h^k_i = f_{0:k}(x_i), \qquad h^k_j = f_{0:k}(x_j)$   (11)
$\tilde{h}^k_{i,j,m} = \lambda^{(m)} h^k_i + (1-\lambda^{(m)}) h^k_j$   (12)
By processing $\tilde{h}^k_{i,j,m}$ through the remaining layers of the classifier network, the predictive class probability of the mixed features is obtained as $f_{k:L}(\tilde{h}^k_{i,j,m})$. In order to generate a useful virtual instance through InfoMixup, the policy generator network has a loss function that minimizes the negative value of the predictive entropy as in Eq.(13); this predictive entropy is a component of $\alpha_{\text{LADA}}$, which provides the incentive for the integration of acquisition and augmentation. The gradient of this loss function is calculated by averaging the entropy values of the replicated mixed features. It should be noted that Eq.(13) embeds $g_\pi$ in the generation of $\lambda$, so the gradient can be estimated via Monte-Carlo sampling (Hastings 1970). Figure 2(b) illustrates the forward and the backward paths for the training process of the policy generator network.
$\mathcal{L}_{g_\pi} = -\frac{1}{M} \sum_{m=1}^{M} \mathbb{H}\Big[f_{k:L}\big(\tilde{h}^k_{i,j,m}\big)\Big], \qquad \lambda^{(m)} \sim \mathrm{Beta}\big(g_\pi\big([h^k_i, h^k_j]\big)\big)$   (13)
In the backpropagation, we have a process of sampling $\lambda$'s from the Beta distribution parameterized by $g_\pi$. To enable the backpropagation signals to pass through this sampling, we follow the reparameterization technique of the optimal mass transport (OMT) gradient estimator, which utilizes implicit differentiation (Jankowiak and Obermeyer 2018; Jankowiak and Karaletsos 2019). Appendix B provides the details of our OMT gradient estimator in the backpropagation process.
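The following is a minimal sketch of Eqs.(9)–(13), assuming the classifier has been split into a hypothetical `f_front` ($f_{0:k}$) and `f_rest` ($f_{k:L}$) and that the $k$-th feature maps are flattened vectors; `PolicyNet` is our illustrative stand-in for $g_\pi$, and PyTorch's built-in `Beta.rsample` pathwise gradient is used here in place of the OMT estimator described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Beta

class PolicyNet(nn.Module):
    """g_pi: infer Beta(a, b) from a pair of k-th layer feature maps (Eq. 9)."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2), nn.Softplus())

    def forward(self, h_i, h_j):
        a, b = self.net(torch.cat([h_i, h_j], dim=-1)).unbind(dim=-1)
        return a + 1e-3, b + 1e-3            # keep concentrations positive

def policy_loss(policy, f_front, f_rest, x_i, x_j, n_samples=8):
    """Eqs.(11)-(13): mix the k-th feature maps with sampled lambdas and
    minimize the negative predictive entropy of the mixed features."""
    h_i, h_j = f_front(x_i), f_front(x_j)                   # Eq. 11
    a, b = policy(h_i, h_j)                                 # Eq. 9
    lam = Beta(a, b).rsample((n_samples,)).unsqueeze(-1)    # Eq. 10, reparameterized
    h_mix = lam * h_i + (1.0 - lam) * h_j                   # Eq. 12
    log_p = F.log_softmax(f_rest(h_mix), dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)            # H[f_{k:L}(h_mix)]
    return -entropy.mean()                                  # Eq. 13
```

Minimizing `policy_loss` with respect to the parameters of `PolicyNet` implements the amortized inference of the mixing policy.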
Data Acquisition by Learned Policy
After optimizing the mixing policy, $\lambda^*_{i,j}$, for the $(i,j)$-th pair of unlabeled data instances, $(x_i, x_j)$, we calculate the joint acquisition score of the data pair by aggregating the individual acquisition scores of 1) $x_i$, 2) $x_j$, and 3) their mixed feature maps, as below:
$\tilde{p}_{i,j} = f_{k:L}\big(\lambda^*_{i,j} f_{0:k}(x_i) + (1-\lambda^*_{i,j}) f_{0:k}(x_j)\big)$   (14)
$\alpha_{\text{LADA}}(x_i, x_j; f_\theta) = \mathbb{H}\big[f_\theta(x_i)\big] + \mathbb{H}\big[f_\theta(x_j)\big] + \mathbb{H}\big[\tilde{p}_{i,j}\big]$   (15)
As we calculate the acquisition score by including the predictive entropy of the InfoMixup feature map, the acquisition is influenced by the data augmentation. More importantly, this integration is reciprocal because the optimal augmentation policy of InfoMixup comes from the acquisition score. This reciprocal relation is an example of motivating the LADA framework by overcoming the separation between the augmentation and the acquisition.
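Under the same assumptions, the joint score of Eqs.(14)–(15) for one candidate pair can be sketched as below; it reuses the `predictive_entropy` helper and the `f_front`/`f_rest` split from the earlier sketches, and it takes the mean of the inferred Beta distribution as a simple point estimate of $\lambda^*$, which is a simplification of ours rather than the paper's exact procedure.

```python
import torch
from torch.distributions import Beta

def pair_acquisition_score(policy, f_front, f_rest, x_i, x_j):
    """Eqs.(14)-(15): entropy of x_i, of x_j, and of their mixed features."""
    with torch.no_grad():
        h_i, h_j = f_front(x_i), f_front(x_j)
        a, b = policy(h_i, h_j)
        lam = Beta(a, b).mean.unsqueeze(-1)             # point estimate of lambda*
        p_mix = f_rest(lam * h_i + (1.0 - lam) * h_j)   # Eq. 14
        return (predictive_entropy(f_rest(h_i))         # H[f_theta(x_i)]
                + predictive_entropy(f_rest(h_j))       # H[f_theta(x_j)]
                + predictive_entropy(p_mix))            # H[p_mix], Eq. 15
```

The top-scoring pairs under this score are then sent to the oracle, as described in the Training Set Expansion subsection.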
Taking InfoMixup as an example of LADA, InfoMixup generates a virtual sample with a high predictive entropy in the class estimation, which corresponds to a decision boundary region that has not been clearly explored yet. This unexplored region is identified by the optimal policy of $\lambda$ in the acquisition.
Here, we introduce a pipelined variant to emphasize the worth of the integration. One possible variation incorporates Mixup-based data augmentation and the acquisition function as a two-step model, where 1) the acquisition function selects the data instances whose individual scores are the highest, and 2) Mixup is afterward applied to the selected instances. However, this method may increase the criteria on individual data instances in the first and the second terms of Eq.(15), but it may not optimize the criterion on their mixing process in the last term of Eq.(15), since it does not consider the effect of Mixup in the selection process. Hence, it may not enhance the informativeness of the virtual data instances. We compare this variation with LADA in the Experiments section.
Training Set Expansion through Acquisition
We assume that we start each active learning iteration with an already acquired labeled dataset, $\mathcal{D}_L$. With the allowed budget per acquisition denoted as $b$, we acquire the top-$b/2$ pairs, i.e. $\mathcal{X}^* = \{(x_i, x_j)\}$, among the randomly paired candidate set, $\mathcal{S} \subset \mathcal{D}_U \times \mathcal{D}_U$, with the joint acquisition score of Eq.(16).
$\mathcal{X}^* = \underset{(x_i, x_j) \in \mathcal{S}}{\operatorname{arg\,top}\text{-}b/2}\ \alpha_{\text{LADA}}(x_i, x_j; f_\theta)$   (16)
At this moment, the oracle annotates the true labels on $\mathcal{X}^*$. Also, we have a virtual instance dataset, $\mathcal{D}_V$, generated by InfoMixup with the optimal mixing policy, $\lambda^*$:
$\mathcal{D}_V = \big\{(\tilde{x}_{i,j}, \tilde{y}_{i,j})\ \big|\ \tilde{x}_{i,j} = \lambda^*_{i,j} x_i + (1-\lambda^*_{i,j}) x_j,\ \tilde{y}_{i,j} = \lambda^*_{i,j} y_i + (1-\lambda^*_{i,j}) y_j,\ (x_i, x_j) \in \mathcal{X}^*\big\}$   (17)
where $\lambda^*_{i,j} \sim \mathrm{Beta}(a^*_{i,j}, b^*_{i,j})$. Here, the Beta distribution is dynamically inferred by the policy generator network, $g_\pi$, for each pair.
Up to this phase, our training dataset becomes $\mathcal{D}_L \cup \mathcal{X}^*$ and $\mathcal{D}_V$. Our proposed algorithm, described in Algorithm 1, utilizes $\mathcal{D}_V$ for this active learning iteration only, with various $\lambda$'s sampled at each training epoch. The classifier network's parameter, $\theta$, is learned via the gradient of the cross-entropy losses,
$\mathcal{L}_{\mathcal{D}_L \cup \mathcal{X}^*} = \mathbb{E}_{(x, y) \in \mathcal{D}_L \cup \mathcal{X}^*}\big[\mathrm{CE}\big(f_\theta(x),\, y\big)\big]$   (18)
$\mathcal{L}_{\mathcal{D}_V} = \mathbb{E}_{(\tilde{x}, \tilde{y}) \in \mathcal{D}_V}\big[\mathrm{CE}\big(f_\theta(\tilde{x}),\, \tilde{y}\big)\big]$   (19)
where $y$ is the corresponding ground-truth label annotated by the oracle for Eq.(18), and $\tilde{y}$ is the mixed label according to the mixing policy for Eq.(19).
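As a rough illustration of Eqs.(18)–(19), a single training step could combine the cross-entropy on the oracle-labeled data with the soft-label cross-entropy on the virtual instances; the equal weighting of the two losses and the batch structure are assumptions of this sketch, not prescriptions from the paper.

```python
import torch.nn.functional as F

def train_step(classifier, optimizer, labeled_batch, virtual_batch):
    """One gradient step on D_L (plus newly labeled X*) and on D_V."""
    (x, y), (x_v, y_v) = labeled_batch, virtual_batch  # y: class indices, y_v: soft labels
    optimizer.zero_grad()
    loss_real = F.cross_entropy(classifier(x), y)              # Eq. 18
    log_p = F.log_softmax(classifier(x_v), dim=-1)
    loss_virtual = -(y_v * log_p).sum(dim=-1).mean()           # Eq. 19, mixed labels
    (loss_real + loss_virtual).backward()
    optimizer.step()
```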
LADA with Various Augmentation-Acquisition
Since we propose an integrated framework of the acquisition function and the data augmentation to look ahead to the informativeness of the data, we can use various acquisition functions and data augmentations in our LADA framework. For example, we may substitute Max Entropy, which is the feedback of the acquisition function to the data augmentation in InfoMixup, with another simple feedback, Var Ratio. Also, if we apply the VAAL acquisition function, LADA with VAAL trains the policy generator network, $g_\pi$, to maximize the discriminator's indication of belonging to the unlabeled dataset, $\mathcal{D}_U$, for the generated instances.
Similarly, we may substitute the data augmentation of InfoMixup with the Spatial Transformer Network (STN) (Jaderberg et al. 2015), which we call InfoSTN. The STN may be trained with a subset of unlabeled data as input to maximize their predictive entropy when propagated to the current classifier network. The score to pick the most informative data is formulated as $\alpha(x; f_\theta) = \mathbb{H}\big[f_\theta(x)\big] + \mathbb{H}\big[f_\theta(\phi_{\tau^*}(x))\big]$, where $\phi_{\tau^*}(x)$ is the spatially transformed output of the data $x$, and $f_\theta(\phi_{\tau^*}(x))$ is the corresponding prediction by the current classifier network. We provide more details in Appendix C.
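Below is a minimal sketch of the InfoSTN idea, assuming a small affine localization head; the `TinySTN` architecture, its dimensions, and the loss name are illustrative and differ from the exact networks specified in Appendix C.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySTN(nn.Module):
    """Predict an affine transform and warp the input (Jaderberg et al. 2015)."""
    def __init__(self, channels=3):
        super().__init__()
        self.loc = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                 nn.Linear(channels * 64, 32), nn.ReLU(),
                                 nn.Linear(32, 6))
        self.loc[-1].weight.data.zero_()     # start from the identity transform
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

def info_stn_loss(stn, classifier, x_unlabeled):
    """Train the STN to maximize the predictive entropy of the transformed data."""
    log_p = F.log_softmax(classifier(stn(x_unlabeled)), dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)
    return -entropy.mean()                   # minimize the negative entropy
```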
Experiments
Baselines and Datasets
This section denotes the proposed framework as LADA, and we specify the instantiated data augmentation and acquisition by its subscript, e.g. the proposed InfoMixup as LADA_EntMix, which adopts Max Entropy as the data acquisition and Mixup as the data augmentation to select and generate informative samples. If we change the entropy measure to the Var Ratio or the discriminator logits of VAAL, it results in the subscript VarMix or VaalMix, respectively. Also, if we change the augmentation to the STN network, the subscript becomes EntSTN.
We compare our models to 1) Coreset (Sener and Savarese 2018); 2) BADGE (Ash et al. 2020); and 3) VAAL (Sinha, Ebrahimi, and Darrell 2019) as the baselines for active learning. We also include data-augmented active learning baselines: 1) BGADL, 2) Manifold Mixup, and 3) AdaMixup. Here, BGADL is an integrated data augmentation and acquisition method, but it should be noted that BGADL has no learning mechanism in the augmentation from the feedback of the acquisition. We also add ablated baselines to see the effect of learning the mixing policy, so we introduce the fixed-policy case, denoted as (fixed). The classifier network, $f_\theta$, adopts ResNet-18 (He et al. 2016), and the policy generator network, $g_\pi$, consists of a much smaller neural network. Appendix A.2 provides more details on the networks and their training.
We experiment with the above-mentioned models on three benchmark datasets: FashionMNIST (Fashion) (Xiao, Rasul, and Vollgraf 2017), SVHN (Netzer et al. 2011), and CIFAR-10 (Krizhevsky, Hinton et al. 2009). Throughout our experiments, we repeat each experiment five times to validate the statistical significance, and the maximum acquisition iteration is limited to 100. More details about the treatment of each dataset are in Appendix A.1.
We evaluate the models under the pool-based active learning scenario. We assume that the model initially has 20 labeled training instances, which are randomly chosen and class-balanced. As the active learning iteration progresses, we acquire 10 additional training instances at each iteration, and we use the same number of oracle queries for all models, which results in selecting the top-5 pairs when adopting Mixup as the data augmentation in the LADA framework.
Table 3: Average test accuracy (%, ± standard deviation) on each dataset, with relative time and additional parameters.

| Group | Method | Fashion | SVHN | CIFAR-10 | Time | Param. |
|---|---|---|---|---|---|---|
| Baselines | Random | 80.96±0.62 | 73.92±2.80 | 35.27±1.36 | 1 | - |
| | BALD | 80.99±0.59 | 75.66±2.07 | 34.71±2.28 | 1.36 | - |
| | Coreset | 78.47±0.30 | 68.57±3.13 | 28.25±0.89 | 1.54 | - |
| | BADGE | 80.94±0.98 | 70.89±1.91 | 28.60±1.17 | 1.31 | - |
| | BGADL | 78.42±1.05 | 63.50±1.56 | 35.08±2.20 | 4.69 | 13M |
| Entropy-based | Max Entropy | 80.93±1.85 | 72.57±0.76 | 34.97±0.71 | 1.01 | - |
| | Ent w. ManifoldMixup | 82.31±0.38 | 72.69±1.29 | 35.88±0.85 | 1.03 | - |
| | Ent w. AdaMixup | 81.30±0.83 | 73.00±0.39 | 35.67±1.75 | 1.03 | 5K |
| | LADA_EntMix (fixed) | 83.08±1.34 | 75.73±1.48 | 36.34±0.88 | 1.06 | - |
| | LADA_EntMix | 83.67±0.29 | 76.55±0.31 | 37.04±1.34 | 1.32 | 77K |
| | LADA_EntSTN (fixed) | 82.37±0.58 | 72.08±1.67 | 35.55±1.34 | 1.02 | 5K |
| | LADA_EntSTN | 81.83±0.55 | 73.80±0.81 | 36.18±0.69 | 1.20 | 5K |
| VAAL-based | VAAL | 82.67±0.29 | 75.01±0.66 | 39.82±0.86 | 3.55 | 301K |
| | LADA_VaalMix (fixed) | 82.63±0.29 | 76.83±1.05 | 44.42±2.12 | 3.56 | 301K |
| | LADA_VaalMix | 82.60±0.49 | 77.92±0.51 | 44.56±1.40 | 3.60 | 378K |
| VarRatio-based | Var Ratio | 81.05±0.18 | 74.07±1.87 | 34.99±0.73 | 1.01 | - |
| | LADA_VarMix (fixed) | 83.11±0.66 | 76.01±2.64 | 35.98±1.68 | 1.06 | - |
| | LADA_VarMix | 84.47±0.89 | 76.09±0.94 | 36.84±0.51 | 1.33 | 77K |
Quantitative Performance Evaluations
Table 3 shows the average test accuracy, where the accuracy of each replication is the best accuracy over the acquisition iterations. Since we introduce a generalizable framework, Table 3 separates the performances by the instantiated acquisition functions. The group of baselines does not have any learning mechanism on the acquisition metric, and this group shows the worst performances. We suggest three acquisition functions to be adopted by our LADA framework: 1) the predictive entropy of the classifier, 2) the discriminator logits in VAAL, and 3) the classifier variation ratio. Given that VAAL uses a discriminator and a generator, the VAAL-based models have more parameters to optimize, which provides an advantage on a complex dataset, such as CIFAR-10.
When we examine the general performance gains across datasets, we find the best performers to be LADA_VarMix on Fashion, and LADA_VaalMix on SVHN and CIFAR-10. In terms of the data augmentation, Mixup-based augmentation outperforms STN augmentation. As the dataset becomes more complex, a greater performance gain is achieved by LADA on SVHN and CIFAR-10, compared to Fashion. Across the combinations of baselines and datasets, the integrations of augmentation and acquisition, i.e. the LADA variations, show the best performance in most cases. In terms of the ablation study, the learning of the data augmentation policy, $\lambda$, is meaningful because the learned-policy case of LADA is better than the fixed case in 10 out of the 12 variations of LADA. Figure 3 shows the convergence speed to the best test accuracy for each model. As the dataset becomes more complex, the performance gain by LADA becomes more apparent.
Additionally, we compare the integrated framework to the pipelined approach. Max Entropy does not have an augmentation part, so it is the simplest model. Then, Ent w. Manifold Mixup adds the Manifold Mixup augmentation, but it does not have a learning process on the mixing policy. Finally, Ent w. AdaMixup has a learning process on the mixing policy, but the learning is separated from the acquisition. These pipelined approaches show lower performances than the integrated cases of LADA.
Finally, as LADA is a generalizable framework to work with the various acquisition and augmentation functions, Figure 4(a) and Figure 4(b) show the ablation study on the instantiated LADA with the VAAL acquisition function and the augmentation function, respectively. The figures confirm the effects of both integration and learnable augmentation policy with feedback from the acquisition.
Qualitative Analysis on Acquired Data Instances
Besides the quantitative comparison, we need to reason about the behavior of LADA. Therefore, we selected LADA_EntMix to contrast with the pipelined approach. We investigate 1) the informativeness of the data instances achieved by the acquisition, 2) the validity of the optimal $\lambda$ in the augmentation learned from the policy generator network $g_\pi$, and 3) the coverage of the explored space.
To check the informativeness of data instances, Figure 5 shows the different acquisition processes of Max Entropy and LADA_EntMix. Max Entropy selects a data instance with the highest predictive entropy value. Compared to Max Entropy, LADA_EntMix selects a pair of two data instances with the highest value of the aggregated predictive entropy, which is the summation of the predictive entropy from the two data instances and one InfoMixup instance. By mixing two unlabeled data instances with the corresponding optimal mixing policy $\lambda^*$, the virtual data instance, generated along the vicinal space, results in a high entropy value, which can be higher than that of the instance selected by Max Entropy. The virtual data instance helps the current classifier model to clarify the decision boundary between two classes along the interpolation line of the two mixed real instances.
To confirm the validity of the optimal $\lambda$, we compare three cases: 1) the inferred $\lambda$ (LADA_EntMix); 2) the fixed $\lambda$ (LADA_EntMix (fixed)); and 3) the pipelined model's $\lambda$ (Ent w. Manifold Mixup). Figure 6(a) shows the entropy of the virtual data instances over the acquisition process. As expected, the optimal $\lambda$ learned by LADA_EntMix produces the highest entropy over the acquisition process, but it should be noted that the differentiation becomes significant only after some acquisition iterations, which comes from the requirement of training the classifier. Figure 6(b) shows the distribution of the entropy of the virtual instances, with the median value of each interval on the x-axis. This also shows that the optimal $\lambda$ has the highest density beyond the interval with median 2.2.
[Figure 6: (a) entropy of the virtual data instances over the acquisition process; (b) distribution of the virtual instances with entropy values.]
To examine the coverage of the explored latent space, Figure 7 illustrates the latent space of the acquired data instances and the augmented data instances. Ent w. AdaMixup has the potential capability of interpolating distantly paired data instances, but its learned policy limits a sample of $\lambda$ to be placed near either one of the paired instances because of its aversion to manifold intrusion. Therefore, in the experiments, Ent w. AdaMixup ends up exploring only the space near the acquired instances. The virtual data instances generated by LADA_EntMix show further exploration than those of Ent w. AdaMixup. The latent space makes the linear interpolation of InfoMixup curve along the manifold, but the virtual instances stay on the interpolation line of the curved manifold. The extent of the interpolation is broader than that of AdaMixup because the optimal $\lambda$ is guided by the entropy maximization, which is adversarial in a sense. This adversarial approach is different from the aversion to manifold intrusion because the latter is more conservative with respect to the currently learned parameters.
Conclusions
In the real world, where gathering a large-scale labeled dataset is difficult because of constrained human or computational resources, learning a deep neural network requires effective utilization of the limited resources. This limitation motivated the integration of data augmentation and active learning. This paper proposes a generalized framework for such integration, named LADA, which adaptively selects the informative data instances by looking ahead to the acquisition score of both 1) the unlabeled data instances and 2) the virtual data instances to be generated by data augmentation, in advance of the acquisition process. To enhance the effect of the data augmentation, LADA learns the augmentation policy to maximize the acquisition score. With repeated experiments on various datasets and comparison models, LADA shows considerable performance by selecting and augmenting informative data instances. The qualitative analysis shows the distinct behavior of LADA, which finds the vicinal space of high acquisition score by learning the optimal policy.
Ethics Statement
In the real world, the limited amount of labeled data makes it hard to train deep neural networks, and the high annotation cost becomes problematic. This leads to the decision of what to select and annotate first, which calls upon active learning. Besides active learning, effectively enlarging the limited amount of labeled data is also worth considering. With these motivations, we propose a framework that can adopt various types of acquisitions and augmentations that exist in the machine learning field. By looking ahead to the effect of data augmentation in the process of acquisition, we can select data instances that are informative not only when selected and labeled but also when augmented. Moreover, by learning the augmentation policy in advance of the actual acquisition process, we enhance the informativeness of the generated virtual data instances. We believe that the proposed LADA framework can improve the performance of deep learning models, especially when the annotation by human experts is expensive.
References
- Ash et al. (2020) Ash, J. T.; Zhang, C.; Krishnamurthy, A.; Langford, J.; and Agarwal, A. 2020. Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds. In ICLR.
- Chapelle et al. (2001) Chapelle, O.; Weston, J.; Bottou, L.; and Vapnik, V. 2001. Vicinal risk minimization. In Advances in neural information processing systems, 416–422.
- Cohn, Ghahramani, and Jordan (1996) Cohn, D. A.; Ghahramani, Z.; and Jordan, M. I. 1996. Active learning with statistical models. Journal of artificial intelligence research 4: 129–145.
- Cubuk et al. (2019) Cubuk, E. D.; Zoph, B.; Mane, D.; Vasudevan, V.; and Le, Q. V. 2019. Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE conference on computer vision and pattern recognition, 113–123.
- Freeman (1965) Freeman, L. 1965. Elementary applied statistics: for students in behavioral science. Wiley. URL https://books.google.co.kr/books?id=r4VRAAAAMAAJ.
- Goodfellow et al. (2014) Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative Adversarial Nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, 2672–2680. Cambridge, MA, USA: MIT Press.
- Guo, Mao, and Zhang (2019) Guo, H.; Mao, Y.; and Zhang, R. 2019. Mixup as locally linear out-of-manifold regularization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 3714–3722.
- Hastings (1970) Hastings, W. K. 1970. Monte Carlo sampling methods using Markov chains and their applications.
- He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
- Houlsby et al. (2011) Houlsby, N.; Huszar, F.; Ghahramani, Z.; and Lengyel, M. 2011. Bayesian Active Learning for Classification and Preference Learning. CoRR abs/1112.5745.
- Jaderberg et al. (2015) Jaderberg, M.; Simonyan, K.; Zisserman, A.; et al. 2015. Spatial transformer networks. In Advances in neural information processing systems, 2017–2025.
- Jankowiak and Karaletsos (2019) Jankowiak, M.; and Karaletsos, T. 2019. Pathwise Derivatives for Multivariate Distributions. In Chaudhuri, K.; and Sugiyama, M., eds., The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, volume 89 of Proceedings of Machine Learning Research, 333–342. PMLR. URL http://proceedings.mlr.press/v89/jankowiak19a.html.
- Jankowiak and Obermeyer (2018) Jankowiak, M.; and Obermeyer, F. 2018. Pathwise Derivatives Beyond the Reparameterization Trick. In Dy, J. G.; and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, 2240–2249. PMLR. URL http://proceedings.mlr.press/v80/jankowiak18a.html.
- Kingma and Welling (2014) Kingma, D. P.; and Welling, M. 2014. Auto-Encoding Variational Bayes. In Bengio, Y.; and LeCun, Y., eds., 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. URL http://arxiv.org/abs/1312.6114.
- Krizhevsky, Hinton et al. (2009) Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
- Liu and Ferrari (2017) Liu, B.; and Ferrari, V. 2017. Active learning for human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, 4363–4372.
- Maaten and Hinton (2008) Maaten, L. v. d.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of machine learning research 9(Nov): 2579–2605.
- Netzer et al. (2011) Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; and Ng, A. Y. 2011. Reading digits in natural images with unsupervised feature learning.
- Perez and Wang (2017) Perez, L.; and Wang, J. 2017. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621 .
- Sener and Savarese (2018) Sener, O.; and Savarese, S. 2018. Active Learning for Convolutional Neural Networks: A Core-Set Approach. In International Conference on Learning Representations.
- Settles (2009) Settles, B. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.
- Shannon (1948) Shannon, C. E. 1948. A mathematical theory of communication. Bell Syst. Tech. J. 27(3): 379–423.
- Sinha, Ebrahimi, and Darrell (2019) Sinha, S.; Ebrahimi, S.; and Darrell, T. 2019. Variational adversarial active learning. In Proceedings of the IEEE International Conference on Computer Vision, 5972–5981.
- Tong (2001) Tong, S. 2001. Active learning: theory and applications, volume 1. Stanford University USA.
- Tran et al. (2019) Tran, T.; Do, T.; Reid, I. D.; and Carneiro, G. 2019. Bayesian Generative Active Deep Learning. In Chaudhuri, K.; and Salakhutdinov, R., eds., Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, 6295–6304. PMLR. URL http://proceedings.mlr.press/v97/tran19a.html.
- Verma et al. (2018) Verma, V.; Lamb, A.; Beckham, C.; Najafi, A.; Courville, A.; Mitliagkas, I.; and Bengio, Y. 2018. Manifold mixup: Learning better representations by interpolating hidden states.
- Xiao, Rasul, and Vollgraf (2017) Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 .
- Zhang et al. (2017) Zhang, H.; Cisse, M.; Dauphin, Y. N.; and Lopez-Paz, D. 2017. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412 .