Active Learning for Deep Neural Networks on Edge Devices
Abstract
When dealing with deep neural network (DNN) applications on edge devices, continuously updating the model is important. Although updating a model with real incoming data is ideal, using all of it is not always feasible due to constraints such as labeling and communication costs. Thus, it is necessary to filter and select the data to use for training (i.e., active learning) on the device. In this paper, we formalize a practical active learning problem for DNNs on edge devices and propose a general task-agnostic framework to tackle this problem, which reduces it to a stream submodular maximization. This framework is light enough to run with low computational resources, yet provides solutions whose quality is theoretically guaranteed thanks to the submodular property. Through this framework, we can configure data selection criteria flexibly, including using methods proposed in previous active learning studies. We evaluate our approach on both classification and object detection tasks in a practical setting that simulates a real-life scenario. The results show that the proposed framework outperforms all other methods in both tasks, while running at a practical speed on real devices.
1 Introduction
With advances in both hardware and software, running inference of deep neural networks (DNNs) on edge devices is now practical, even for relatively cheap devices [34, 15, 41]. Although these devices have lower computational resources and storage than servers, inference on these devices has many advantages over inference on servers: since it does not require sending input data (e.g., images) over the network, latency is low, communication cost is reduced, and privacy is protected.
As DNNs are large and require heavy computation for training [42], a model for the edge is usually trained on a server and subsequently deployed to the device. In this setting, it is unlikely that the model constructed at the beginning is “perfect”, for two reasons. The first is a data problem. While deep learning is often very greedy for data [39], gathering and labeling data are time-consuming and expensive tasks. Moreover, the amount and quality of data required to reach the desired accuracy are unknown. Thus, preparing sufficient data before training is challenging. The second reason is the problem known as domain shift. When the model is deployed after training on a server, depending on its deployment site, the distribution of input data may differ from that of the training data due to several factors, such as temporal or environmental ones. Ultimately, constructing a model robust to these variations is ideal. For edge devices, however, this is unrealistic, since such an adaptive model would require considerable memory or computational resources.

For these reasons, continuous updating of the model is preferable to one-shot training. In this scenario, the model also benefits from being trained with real input data (i.e., the training data distribution matches the target test data). Meanwhile, it is generally unrealistic to send all incoming data to a server. The amount of data rapidly becomes too large, especially if processing is performed at a high frequency. In addition, the communication cost to the remote server can be prohibitive, especially if the connection fee is on a pay-as-you-go basis. Furthermore, the incoming data will contain a considerable number of similar samples, since chronologically consecutive samples are highly correlated. For these reasons, we need to perform data selection on the device.
Selecting data and training with it is known as active learning, which is a well-studied field [36]. Thus, in this paper we consider the problem we face as active learning on edge devices. Unfortunately, however, many approaches proposed in the literature cannot be trivially extended to the scenario we consider: as discussed in Section 3, it is natural to assume that there are more constraints on edge devices.
In this paper, we first discuss the properties that reflect real-world constraints and formalize a novel problem: active learning on edge devices. The constraints of this problem are manifold, making it difficult to apply existing active learning methods. We therefore propose a task-agnostic framework for solving this problem. In this framework, we reduce the problem to a submodular maximization satisfying those constraints. As a result, the framework solves the problem with theoretical guarantees on solution optimality. We show an overview in Figure 1. In the empirical evaluations, we show concrete constructions for different tasks and demonstrate that our approach outperforms all other methods. In addition, we show through evaluations on a real device that our framework can be used in practice.
2 Related work
Active Learning
Active learning aims to train a model with few labeled samples by selecting data to label from a (large) pool of unlabeled data [36]. This field has been studied for a long time, and its importance has further increased as larger amounts of data have become available. In a standard active learning setup, data is selected from a set of candidates and then fed to an oracle (e.g., a human annotator) that returns annotations. The model is trained with the labeled data, and this process is typically iterated multiple times (we call each iteration a round). It is assumed that labeling is costly; this is the primary motivation for reducing the amount of data to label.
While it is common to choose data from candidates, two settings exist depending on the accessibility of candidate data: the pool-based and the stream-based setting [36]. The former allows access to all candidate data and is the most popular setting. In the latter, we can only look at each sample once in a predetermined order and keep only a (small) subset of the data. Therefore, in the pool-based setting, we can consider the relationship between arbitrary data in the ground set, whereas this is limited in the stream setting. Moreover, we distinguish two policies according to the amount of data sent to an oracle at a time: one-by-one and batch-mode. For the one-by-one strategy, we only need to consider how to evaluate each sample individually, for example via uncertainty [16, 9, 44], expected model output change [8, 37], or expected error reduction [33]. While Amin et al. [2] showed that these methods can simply be extended to batch-mode, they still evaluate each sample independently (see Section 3 for a further discussion of batch-mode). However, it is known that these kinds of approaches are ineffective for DNNs [35]. A batch-mode method, which takes both informativeness and diversity into account, is thus required.
Various approaches have been proposed to select a diversified batch of data. Sener and Savarese [35] formalize the active learning problem as a core-set selection in the image space. Ash et al. [3] propose to create diversity through probabilistic sampling with probabilities based on the distance between samples. Kirsch et al. [21] consider the mutual information between a batch and model parameters, although their target is Bayesian neural networks. Since submodular functions, which are described later, naturally model diversity, some works reduce active learning to submodular optimization [14, 45, 18]. However, these methods are pool-based and cannot easily be applied to the stream setting. As described above, the stream setting restricts the relationship information available between samples, making batch-mode active learning in this setting challenging.
Submodular Maximization
If a set function $f: 2^{V} \to \mathbb{R}$ has the following diminishing returns property, it is called a submodular function: $f(A \cup \{x\}) - f(A) \ge f(B \cup \{x\}) - f(B)$, where $A \subseteq B \subseteq V$ and $x \in V \setminus B$. Since this property is analogous to concavity, various problems can be reduced to finding the set maximizing $f$, i.e., a submodular maximization problem [22]. There are many variations of this problem, depending on the constraints or the nature of $f$. One of the most popular settings is when $f$ is a non-negative monotone submodular function and the size of the set is bounded. Here, a submodular function is monotone if $f(A) \le f(B)$ for all $A \subseteq B$. Although this problem is NP-hard, it is proven that the simple greedy algorithm guarantees a $(1 - 1/e)$ approximation ratio, i.e., the worst-case solution is greater than $(1 - 1/e)$ times the optimal solution [30]. This is a promising approach when we can access all samples; however, recent applications, including data mining and machine learning, require treating large amounts of data and/or sequentially obtained data. To solve the submodular maximization problem in this stream setting, several algorithms have been proposed. Badanidiyuru et al. [4] propose Sieve-Streaming, the first one-pass streaming algorithm. It has a $(1/2 - \varepsilon)$ approximation ratio with $O((K \log K)/\varepsilon)$ memory. After this work, the improved algorithms Sieve-Streaming++ [19] and ThreeSieves [5] were proposed. The former improves memory usage to $O(K/\varepsilon)$ while keeping the same approximation ratio, and the latter decreases it to $O(K)$ but makes the lower bound guarantee probabilistic. We provide an overview of the algorithms in Appendix A. In the experiments (Section 5), we use Sieve-Streaming++ to solve the problem our framework formalizes.
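For illustration, the following minimal Python sketch shows the classic greedy algorithm for monotone submodular maximization under a cardinality constraint, which underlies the $(1-1/e)$ guarantee above; the objective `f` and the toy coverage example are illustrative placeholders, not part of our framework.

```python
# A minimal sketch of the (1 - 1/e) greedy for monotone submodular maximization
# under a cardinality constraint; `f` is any set function over frozensets.
def greedy_max(f, ground_set, k):
    selected = set()
    for _ in range(k):
        best_gain, best_x = 0.0, None
        for x in ground_set - selected:
            gain = f(frozenset(selected | {x})) - f(frozenset(selected))
            if gain > best_gain:
                best_gain, best_x = gain, x
        if best_x is None:          # no element improves the objective
            break
        selected.add(best_x)
    return selected

# Toy usage: maximum coverage, a classic monotone submodular function.
if __name__ == "__main__":
    sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 2, 3, 4}}
    cover = lambda S: float(len(set().union(*(sets[i] for i in S)))) if S else 0.0
    print(greedy_max(cover, set(sets), k=2))
```

Note that this greedy procedure requires access to the whole ground set, which is exactly what the stream setting forbids; the sieve-based algorithms cited above replace it with one-pass thresholding.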
3 Active learning for DNNs on edge devices
3.1 Setting
In this paper, we consider a situation where a DNN model is running on a device and continuously performing inference on input data (e.g., from a camera or a microphone). The model is trained on a server and deployed to the device. Our aim is to select data useful for training from the incoming data flow, without stopping inference. Since prediction and selection run simultaneously, we assume the inference and the screening of one sample must be completed before the next sample is processed.
To target common devices such as the Raspberry Pi and NVIDIA Jetson, we assume that the available storage space is much smaller than the total incoming data size. Since one of the advantages of these devices is their relatively low cost, we do not consider the use of extended storage space. In addition, we also assume that the amount of data that can be sent is limited. As described in Section 1, it is natural to assume that the communication cost cannot be ignored. In this setting, we cannot take the pool-based approach: it would require storing all the data or sending it to the server. Since this is not possible for the above-mentioned reasons, we have to consider the stream setting, where data arrive continuously and only a part of them can be stored.
In the deep learning context, we also have to consider the batch setting, as prior works mention [35, 3]. In this setting, multiple data samples are selected in one round, in contrast to the traditional approach of selecting one per round. As a result, one may have to consider correlations among multiple samples rather than treating each sample individually.
Consequently, we can regard the problem as stream-based batch-mode active learning. There are mainly two types of batch-mode method regarding the terminating condition: 1) greedily select and send to the oracle as soon as the batch reaches a given size [2], or 2) send the batch of data to the oracle after seeing a given number of samples. While the former has the potential to reduce unnecessary selections, the amount of data selected per hour is unpredictable at a given sampling frequency, which would not be preferable in many practical applications. Thus, we consider the latter setting. Under these assumptions, we need to consider the nature of the data, which differs from the usual active learning setting. Moreover, we consider two scenarios that reflect problems faced in practical applications. These are closely related to how methods for this problem are evaluated.
Peculiarity of data
To the best of our knowledge, most active learning studies consider data that is in some sense filtered, i.e., it is (implicitly) assumed that every candidate is a useful data sample for training. For example, when selecting data from an existing dataset, almost all samples are presumably relevant (except for mislabeling, etc.). However, when we consider data selection on a device, it is natural to assume that the data will be in various conditions: unclear, blurred, redundant, and so on. We generally classify these into three statuses: “imbalanced”, “duplicated”, and “unuseful”. The first indicates class imbalance, i.e., some classes of data rarely appear, while instances of other classes appear frequently. The second describes the case where almost identical samples exist among the candidates: if the model runs at high speed, the frequency of data capture is also high, so a data sample will often be similar to its neighboring samples. The last status describes the presence of data such as “non-object” or “background” samples. For example, in a face detection task, images without a face are not useful for increasing accuracy. We have to take these peculiarities into account in the evaluation. We show examples from real data in the Appendix (Figure 4).
Scenarios
With regard to active learning on edge devices, we consider two main scenarios: cold-start and domain adaptation. In both, prediction and data gathering run in parallel, but the former scenario considers training the model from a small initial dataset. This allows earlier deployment; consequently, the total operating time increases. The latter is a scenario where the model is initially trained with a dataset that is usually not small, but whose distribution differs from the target environment, and is later updated with the collected data. This is intended to treat the domain shift mentioned above. There is a similar problem setting called unsupervised domain adaptation [10, 29], but it differs in that no labels of target-domain data are available there. In both scenarios, the goal is to adapt the model to its deployment environment.
3.2 Problem Definition
Let $\mathcal{X}$ be the instance space and $V \subseteq \mathcal{X}$ the whole incoming data set (i.e., the ground set). Let $\{V_t\}_{t = 1, 2, \dots}$ be a partition of $V$ by round $t$. We aim to select at each round $t$ a subset $S_t \subseteq V_t$, $|S_t| \le K$, of data to train a model, where $K$ is a given constant. Since we consider the stream setting, we cannot access all of $V_t$ at the same time. Explicitly, at step $i$ in round $t$, we have to decide whether to store or discard a sample $x_i \in V_t$ as soon as it arrives. At this time, we can evaluate only the stored samples and $x_i$. Moreover, we make no assumption on the independence between samples, i.e., samples are not independent and identically distributed (iid). In a typical stream (e.g., a video stream), there is indeed a temporal correlation between consecutive samples, and therefore samples are not independent.
As an active learning problem, we assume we have an initial dataset $D_0$ at the beginning. For each round $t$, after selecting $S_t$, we label all samples in $S_t$, add them to the dataset (i.e., $D_t = D_{t-1} \cup S_t$), and retrain the model with $D_t$. Let the model parameter be $\theta$ and its loss function be $\ell(\cdot\,; \theta)$. Depending on the distributions from which the samples in $D_0$ and $V$ are drawn, we define the following two scenarios. Note that in both scenarios, there is no constraint on the nature of the loss function $\ell$, i.e., we can consider any supervised task.
Definition 3.1 (Cold-start Scenario).
Given $D_0$, whose samples are drawn from the same distribution as $V$. Let this distribution be $\mathcal{D}$ and let $\theta_t$ be the model parameter obtained after training in round $t$. In this scenario, the goal is minimizing $\mathbb{E}_{x \sim \mathcal{D}}\left[\ell(x; \theta_t)\right]$.
Definition 3.2 (Domain Adaptation Scenario).
Given $D_0$, whose samples are drawn from a different distribution than those of $V$. Let these distributions be $\mathcal{D}_{\mathrm{source}}$ and $\mathcal{D}_{\mathrm{target}}$, respectively. In this scenario, the goal is minimizing $\mathbb{E}_{x \sim \mathcal{D}_{\mathrm{target}}}\left[\ell(x; \theta_t)\right]$.
While we assume the existence of an initial dataset $D_0$ in this paper, its construction would be a good research topic. Especially in the cold-start scenario, one may consider collecting data randomly, or only with a model-agnostic method using the device, and creating the dataset from scratch. Such a collection method could be inspired by the approach we propose.
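To make the round-based protocol of this section concrete, the following sketch summarizes one possible implementation of the loop; `run stream`, `selector_factory`, `request_labels`, and `train` are hypothetical placeholders for device- and task-specific components, not parts defined by this paper.

```python
# A sketch of the per-round protocol of Section 3.2, assuming a streaming
# selector object with `observe(sample, prediction)` and `batch()` methods.
def active_learning_on_device(model, rounds, stream_of_round, selector_factory,
                              request_labels, train, K):
    dataset = []                                  # D_0 would be the initial labeled data
    for t in range(1, rounds + 1):
        selector = selector_factory(K)            # fresh selection state for round t
        for sample in stream_of_round(t):         # V_t arrives one sample at a time
            prediction = model(sample)            # inference always keeps running
            selector.observe(sample, prediction)  # store-or-discard decision on the fly
        batch = selector.batch()                  # S_t, with |S_t| <= K
        dataset += request_labels(batch)          # oracle labeling of S_t
        model = train(model, dataset)             # retrain with D_t = D_{t-1} ∪ S_t
    return model
```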
4 Approach
4.1 Active Learning Framework
Let $F: 2^{V} \to \mathbb{R}_{\ge 0}$ be a set function; $F(S)$ represents the value of a subset $S$. The goal is to select the subset $S_t \subseteq V_t$ which maximizes $F$. To meet the conditions of the stream setting, the computation of $F(S)$ must depend only on the elements of the set $S$. Here, we divide the value of $S$ into informativeness and diversity: the informativeness represents the total contribution of the usefulness of each sample of the batch taken individually. Generally, the sum of the samples' individual informativeness is an upper bound on the informativeness of the set, and the set's value is smaller when there are dependencies between samples. To compensate for this, we introduce the diversity, which measures the value of the batch as a whole by considering the relations between samples, e.g., representing how little overlap a batch contains. Any value of a sample due to the presence of another sample in the batch, i.e., relationships between samples, is accounted for by the diversity criterion. These criteria are expected to deal with the peculiarities of data described in Section 3: informativeness will be high for rare-class and useful data, and diversity will be high if samples are less duplicated.
To consider these criteria simultaneously, we define the objective function as a linear combination of the two:

$$F(S) = \lambda_I F_I(S) + \lambda_D F_D(S), \qquad (1)$$

where $F_I$ and $F_D$ are respectively the functions for informativeness and diversity, and $\lambda_I, \lambda_D \ge 0$ are hyperparameters for balancing them. To obtain solutions with worst-case guarantees, we aim to maximize $F$ in a submodular optimization manner. This formalization is then convenient: if both $F_I$ and $F_D$ are monotone submodular, $F$ is also monotone submodular. Thus the policy is to construct each function as a monotone submodular function. In the following, we describe how we design each function.
Informativeness
Quantifying how informative a sample is is a task-dependent problem. For example, softmax entropy [36] may be a good candidate for classification tasks, but it is insufficient for object detection tasks [11, 28] since it does not reflect the results of predicted bounding boxes. However, regardless of the task, the computation of each sample's informativeness can be done independently. Thus we define $F_I$ using a per-sample function $f: \mathcal{X} \to \mathbb{R}_{\ge 0}$ and denote it as

$$F_I(S) = \sum_{x \in S} f(x). \qquad (2)$$

This is a linear modular function and, since $f$ is non-negative, a monotone submodular function, thus fulfilling the condition. The benefit of this formalization is that we can utilize various functions proposed in previous active learning studies [36, 17] as $f$, since many of them focus on the same problem, i.e., how to quantify sample informativeness. This means there is room to choose the best function for the target task. We show concrete examples of constructed functions in Section 5.
Diversity
In contrast to informativeness, we have to consider the relationships between samples in a batch, i.e., diversity cannot be represented as a sum of per-sample scores. Usually, this relation is defined through similarity, i.e., high diversity means low similarity between samples. While there are many ways to represent the similarity within a set, designing it under the submodularity constraint is not trivial. Therefore, constructing the diversity function of a set comes down to two points: how we evaluate the (dis)similarity between two samples and how we translate it into diversity.
As described in Section 2, expressing diversity has been studied in various contexts. However, in most works, the proposed measure does not satisfy the constraints we consider; e.g., BADGE [3] requires access to the ground set for representing the diversity of one subset. Utilizing the log-determinant [25, 24] allows for a flexible design while satisfying the constraints. Consider the similarity matrix $M$ with entries $M_{ij} = m(x_i, x_j)$ for all $x_i, x_j \in V$, where $m$ is a similarity function. Note that $M$ is always a symmetric matrix, and all the diagonal values are equal to its largest value by construction. Taking a combinatorial optimization perspective, let $M_S$ be the sub-matrix of $M$ indexed by $S$, i.e., $(M_S)_{ij} = m(x_i, x_j)$ for $x_i, x_j \in S$. Then, the diversity is described as

$$F_D(S) = \log\det\left(\mathbf{I} + \gamma M_S\right), \qquad (3)$$

where $\mathbf{I}$ is the identity matrix and $\gamma > 0$ is a scaling parameter. Here, what we are interested in is what kind of functions we can use for $m$. Since similarity metrics have task- and context-dependent aspects (e.g., pictures of a bird and an airplane may be similar in image space but are not in terms of meaning), we want to allow various functions. Fortunately, we can use any positive semidefinite kernel [13] as the similarity function. It is known that if a matrix is symmetric and its smallest eigenvalue is not smaller than 1, the log-determinant function over its principal submatrices is monotone submodular [38]. Thus, if $M$ is positive semidefinite (so that $\mathbf{I} + \gamma M_S$ is positive semidefinite with eigenvalues at least 1), $F_D$ in Eq. (3) is monotone submodular.
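As an illustration, the sketch below evaluates Eq. (1) for a candidate batch with NumPy; the per-sample score function `f`, the feature extractor `phi`, and the kernel parameters are placeholders standing in for whatever task-specific choices are made (Section 5 gives concrete ones), not a definitive implementation.

```python
import numpy as np

def poly_kernel(u, v, c=1.0, p=2):
    """Polynomial kernel on feature vectors (positive semidefinite)."""
    return (float(np.dot(u, v)) + c) ** p

def objective(batch, f, phi, lam_info=1.0, lam_div=1.0, gamma=1.0):
    """Eq. (1): lam_info * sum_x f(x) + lam_div * logdet(I + gamma * M_S)."""
    if not batch:
        return 0.0
    info = sum(f(x) for x in batch)                          # Eq. (2), modular part
    feats = [phi(x) for x in batch]
    n = len(feats)
    M = np.array([[poly_kernel(feats[i], feats[j]) for j in range(n)]
                  for i in range(n)])                        # similarity sub-matrix M_S
    sign, logdet = np.linalg.slogdet(np.eye(n) + gamma * M)  # Eq. (3)
    div = logdet if sign > 0 else 0.0
    return lam_info * info + lam_div * div

# Toy usage with random "samples" and dummy score / feature extractors.
rng = np.random.default_rng(0)
batch = [rng.normal(size=8) for _ in range(4)]
print(objective(batch, f=lambda x: float(np.abs(x).mean()), phi=lambda x: x))
```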
4.2 Computational Cost Reduction
Depending on the computational resources of the device on which we want to run our framework, reducing the computation may be necessary, even at the cost of some performance. We describe the possibilities for cost reduction below.
Determinant Calculation
One of the heaviest computations in our framework is the determinant evaluation in Eq. (3), which is in general $O(K^3)$ for a batch of size $K$. However, if we exploit the fact that $M_S$ is constructed incrementally, i.e., the matrix for the current set always contains the matrix for the previous set as its leading principal submatrix, we can perform this calculation efficiently with elementary linear algebra (details in Appendix B). Also, there exist approximation algorithms for log-determinants [7]. When we can tolerate approximation errors, this can be a reasonable choice.
Dividing
As described above, one of the heaviest computations is the determinant calculation, and it depends on the maximum batch size $K$. Thus, one possible approach to decrease computation is to decompose each round's stream $V_t$ into $d$ sub-streams and select a batch of size $K/d$ from each of them (assuming, for simplicity, that $K$ is divisible by $d$). Then, the determinant cost for each sub-stream is $O((K/d)^3)$, and the total computation becomes $O(K^3/d^2)$, compared to the initial $O(K^3)$. To take advantage of this relaxation, we should make an assumption such as “no similar data appear in different sub-streams”. Note that this is different from initially defining a small $K$: if $V_t$ is divided, the model is trained after multiple batch selections (i.e., after the selection of $d$ batches of size $K/d$), whereas it is trained after a single batch selection of $K$ samples in the default setting.
Memoization
When we can use extra memory, memoization is a promising approach to decrease computation. The most effective quantity to memoize is the per-sample informativeness $f(x)$ (together with the feature vector used by the similarity function). Although this is a standard technique, it is especially effective when we use Sieve-Streaming++, because the algorithm often evaluates the same per-sample quantities on different sieves.
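A minimal sketch of the kind of memoization meant here is shown below, assuming each incoming sample carries an identifier; `compute_score` and `compute_features` are placeholders for the model-dependent computations, and the cache simply ensures they run at most once per sample, however many sieves touch it.

```python
# A sketch of memoizing per-sample quantities shared across sieves.
class SampleCache:
    def __init__(self, compute_score, compute_features):
        self._score_fn = compute_score
        self._feat_fn = compute_features
        self._scores, self._feats = {}, {}

    def score(self, sample_id, sample):
        if sample_id not in self._scores:                  # computed at most once,
            self._scores[sample_id] = self._score_fn(sample)   # reused by every sieve
        return self._scores[sample_id]

    def features(self, sample_id, sample):
        if sample_id not in self._feats:
            self._feats[sample_id] = self._feat_fn(sample)
        return self._feats[sample_id]
```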
5 Experiments
In this section, we evaluate our framework's effectiveness and show the construction of concrete objective functions for specific tasks. To show how broadly our approach can be applied, we perform experiments with various datasets, tasks, and settings. We first evaluate an image classification task using toy datasets. We then evaluate a more complex task, object detection, with a large practical dataset. We also evaluate our framework on a real device (Raspberry Pi 4B). Unless specified otherwise, for all experiments with our method, only memoization is used among the computational cost reduction options discussed in Section 4.2. Our server experiments were run on a machine with an Intel Xeon E5-2623 CPU and an NVIDIA Tesla V100 GPU.
5.1 Image Classification




For the image classification task, we perform evaluations on both the cold-start and domain adaptation scenarios described in Section 3. In the experiments in this section, we are given a stream of images for each round and select up to $K$ of them. The model is trained with all data selected in previous rounds.
Simulating data peculiarity
As discussed in Section 3, we have to account for the peculiarity of the data. To simulate class imbalance, we divide the number of samples by a constant factor for half of the categories, similar to the experiments in [1]. In addition, all images are replicated four times and blurred with Gaussian noise of a fixed standard deviation (i.e., each original yields 5 similar images), and these appear consecutively in the stream to simulate high-frequency camera capture, as discussed in Section 3. Also, to simulate the existence of many unuseful images, we fill each stream with a large number of “non-object” images. Consequently, an additional class is added to the task, i.e., the original $C$-class classification becomes a $(C+1)$-class classification, but instances of this added class are not included in the evaluation dataset.
Methods
To solve the stream submodular optimization problem, we use the known approximation algorithm Sieve-Streaming++ [19] (which we denote as SSPP) with a fixed value of $\varepsilon$. As a comparison to our method, we consider an extension of a classic active learning method: to the best of our knowledge, there are no existing methods for the problem we consider, but some standard active learning methods can be adapted with trivial modifications. For example, uncertainty sampling with entropy [36], which is a one-by-one pool-based method, can be extended so that it maximizes the sum of sample entropies in the batch (see the detailed description in Appendix A). To evaluate the importance of considering the problem-specific properties discussed in Section 3, we compare this method with ours. Moreover, we also compare with random selection as a baseline. We denote these methods as Entropy and Random, respectively.
Objective Function
As a measure of the informativeness of each sample (i.e., $f$ in Eq. (2)), we use the entropy of the softmax output [36]: let $p_i(x)$ be the $i$-th value of the softmax output for the input $x$; then

$$f(x) = -\sum_{i} p_i(x) \log p_i(x).$$

To evaluate the diversity of a set with Eq. (3), we define $m$ as a polynomial kernel, which is positive definite [13]: for a given pair of images $(x, x')$, their similarity is evaluated as $m(x, x') = \left(\phi(x)^{\top}\phi(x') + c\right)^{p}$, where $\phi(\cdot)$ is a middle-layer output of the model and $c, p$ are kernel parameters.
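A small sketch of these two components is given below, assuming a classifier that exposes logits and a flattened middle-layer feature vector; the choice of layer and the kernel parameters are illustrative assumptions, not the exact values used in the experiments.

```python
import torch
import torch.nn.functional as F

def softmax_entropy(logits):
    """Informativeness f(x): entropy of the softmax output (summand of Eq. (2))."""
    p = F.softmax(logits, dim=-1)
    return float(-(p * torch.log(p.clamp_min(1e-12))).sum())

def poly_similarity(feat_a, feat_b, c=1.0, p=2):
    """Similarity m(x, x'): polynomial kernel on middle-layer features (for Eq. (3))."""
    return float((torch.dot(feat_a.flatten(), feat_b.flatten()) + c) ** p)

# Toy usage with random tensors standing in for model outputs.
logits = torch.randn(10)                           # logits of a 10-class classifier
feat_a, feat_b = torch.randn(64), torch.randn(64)  # middle-layer activations
print(softmax_entropy(logits), poly_similarity(feat_a, feat_b))
```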
5.1.1 Cold-start Scenario
In this experiment, we use the popular datasets MNIST [26], Street View House Numbers (SVHN) [31], and CIFAR-10 [23], the latter consisting of color images of objects from 10 classes. As a cold-start setting, we start with a small initial dataset $D_0$ and train from scratch in each round, i.e., after selecting a batch of up to $K$ images from the stream. After training, the model is evaluated on the test data of each dataset.
The results are shown in Figure 2 (the SVHN result is shown in Appendix E). First, we can see that our proposed framework with SSPP achieves the best performance on all datasets. Interestingly, only in the MNIST experiments, the easiest task here, does Entropy also seem to work well. In the other experiments, Entropy is as good as or even worse than Random, even though it uses the same informativeness criterion as SSPP. These results show the importance of considering diversity and suggest that its influence is stronger on more complex tasks. The effect of considering diversity is also reflected in the number of unique images selected. As the images in a stream are duplicated in this experiment, the number of unique images is a good indicator of how diverse the selected batches were. Across all experiments, this number follows SSPP > Random > Entropy, and the highest is about three times the lowest. We report the precise numbers in Appendix E (Table 3).
5.1.2 Domain Adaptation Scenario
For the domain adaptation scenario, we perform our evaluation on dataset pairs built from MNIST ↔ SVHN and STL-10 [6] → CIFAR-10 (since STL-10 is too small to construct a stream from, we do not consider CIFAR-10 → STL-10). We removed one class from both datasets in the latter setting, since there are only 9 classes in common. As a domain adaptation scenario, the source dataset is available from the beginning. At each round, we train a model after appending images selected from the other dataset (i.e., the target dataset) to the initial (source) dataset. The evaluation is done on the test split of the target dataset.
The results are shown in Figure 2 (the SVHN → MNIST result is shown in Appendix E). Similar to the cold-start experiments, we observe that our approach with SSPP outperforms all the others. The tendency is the same as in the other setting, which is a good indication that our framework performs well regardless of the scenario it is applied to. The numbers of selected images are reported in Appendix E (Table 3).
5.2 Object Detection
Detecting objects in an image is a complex task requiring relatively heavy computation. However, several models designed with a good balance between accuracy and latency exist and cope with an edge-computing setup [41, 47, 34]. In this domain adaptation experiment, we use the Single-Shot Multibox Detector (SSD) [28] approach with the popular MobileNetV2 backbone [34].
We use two datasets in the experiments: the popular COCO (2017) dataset [27] as the source dataset, which includes images with bounding box annotations for many categories, and BDD100K [46] as the target dataset, which is a large-scale dataset of driving videos. Most frames of the BDD100K videos are not annotated with bounding box information, so we use the state-of-the-art detector EfficientDet [43] as an oracle for annotating selected image samples. The videos of BDD100K are captured at 30 FPS, creating a stream with many redundant images. In this setting, selecting a diversified batch of samples is very challenging, making it well suited for evaluating the performance of our method. We compare the same algorithms as in the classification experiments for solving the stream submodular optimization problem, with a fixed parameter $\varepsilon$ for SSPP unless specified otherwise.
Objective function
The detection task involves both regression of bounding box parameters and classification of object labels, so ideally we want a measure of informativeness that copes with both aspects. In the context of active learning for detection tasks, several uncertainty functions designed for object detection have been proposed [17]. We use a combination of localization stability and classification uncertainty as the informativeness function, because these two do not rely on the architecture of the model. Keeping the notation of that paper for the localization stability and the classification uncertainty, we combine them linearly with a tradeoff parameter $\lambda$ between classification uncertainty and localization stability (fixed in our experiments). For evaluating the diversity of a set, we use the same kernel function as in the image classification experiments, which is task-independent. The middle-layer output used in this experiment is the concatenation of the model's feature maps, just before feeding the SSD extractor layers.
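The sketch below illustrates one way such a combination could be computed, assuming hypothetical helper functions `detect` (returning boxes with class probabilities) and `box_iou`; the stability and uncertainty definitions follow [17] only loosely, and the combination rule is an assumption for illustration rather than the exact formula used in our experiments.

```python
import numpy as np

def classification_uncertainty(detections):
    """1 - max class probability, averaged over predicted boxes (assumed form)."""
    if not detections:
        return 0.0
    return float(np.mean([1.0 - d["probs"].max() for d in detections]))

def localization_stability(model, image, detect, box_iou, noise_levels=(4, 8, 16)):
    """Average IoU between reference boxes and boxes predicted on noisy copies
    of the image (a loose adaptation of the stability measure of [17])."""
    ref = detect(model, image)
    if not ref:
        return 1.0
    ious = []
    for sigma in noise_levels:
        noisy = image + np.random.normal(0.0, sigma, size=image.shape)
        noisy_dets = detect(model, noisy)
        for d in ref:
            best = max((box_iou(d["box"], nd["box"]) for nd in noisy_dets), default=0.0)
            ious.append(best)
    return float(np.mean(ious)) if ious else 1.0

def detection_informativeness(model, image, detect, box_iou, lam=1.0):
    """f(x): high when predictions are both uncertain and unstable (assumed combination)."""
    dets = detect(model, image)
    stability = localization_stability(model, image, detect, box_iou)
    return classification_uncertainty(dets) + lam * (1.0 - stability)
```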
5.2.1 Domain Adaptation Scenario

In this experiment, we initially train the detector on $D_0$, the train split of the COCO dataset. Then, for each round, we select a batch of up to $K$ images from a stream composed of concatenated videos of the BDD100K dataset (the stream used in each round starts where the previous one ended), with each round's stream consisting of a fixed number of frames. Since the class of objects in the detection task must exist in both the source and target datasets, we used the class bus; instances of it are not omnipresent in street footage, so selecting samples containing instances is non-trivial. The selected data is annotated by the oracle and appended to the previous dataset $D_{t-1}$. After training, the new model's performance is evaluated on the validation split of BDD100K.
The results are shown in Figure 3. As in the classification tasks, SSPP outperforms the other methods, showing the potential of this approach for complex tasks. The Entropy curve is, however, below the Random baseline. The streams are composed of videos whose frames are captured at high frequency, so it is not surprising that the Entropy approach performs poorly, as it lacks a mechanism taking diversity into account. We provide in Appendix E both a comparison of SSPP with different values of $\varepsilon$ and examples of batches selected by each algorithm.
5.3 Real World Evaluation
While our experiments above were conducted on a server for evaluation and reproducibility purposes, our framework is designed to run on edge devices. The two main issues for programs running on edge devices are the memory footprint and the execution speed, so we benchmark the methods we studied for both aspects.
5.3.1 Execution Speed
We compare the execution speed of the methods used in the previous experiments. As the complexity of the informativeness part of the objective function depends on the model prediction, for a fair comparison we ensure that the input images are the same for all measurements. Thus, instead of capturing images from a camera, we load images stored beforehand on the device. The source code for our framework used on the device is the same as the one used in the server experiments above running on CPU, but as model inference is the bottleneck of the computation, we optimized the inference computation using GPGPU (we used the implementation publicly available at https://github.com/Idein/py-videocore6, GPL).
For each method, we measure the elapsed time between the processing of two successive inputs and average it over multiple iterations. Results are compiled in Table 1. The first line reports the inference speed when no selection is performed, for reference. Firstly, we observe that the speeds of all selection techniques are of the same order of magnitude, around 1 FPS. Entropy uses a greedy approach, so besides evaluating the objective function, there is close to no computation overhead. The best-performing SSPP is also the most computation-intensive, and because of the high number of sieves it uses, both decreasing $\varepsilon$ and increasing the batch size $K$ have a significant impact on the speed. The parameter $\varepsilon$ directly influences the number of sieves considered during the computation in SSPP and is a tradeoff parameter between speed and accuracy. At fixed $\varepsilon$, each sieve's size depends on $K$, and so does the size of the matrix whose determinant must be computed for the diversity evaluation. This confirms the potential of the Dividing option discussed in Section 4. To prove the soundness of our approach, we also show in the second line the elapsed time when inference is performed and the data is sent over HTTP for each sample. It corresponds to the case discussed in Section 1, where the selection of samples is done on a server. The computation overhead due to sending data is too heavy to allow a practical speed, with an average elapsed time an order of magnitude higher compared to the others. This confirms the necessity of screening samples on the device and only sending the selected ones.
Table 1: Average elapsed time per processed sample (in seconds) for two batch sizes; $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$ denote the three values of $\varepsilon$ used for SSPP.

| Method | Batch size $K_1$ | Batch size $K_2$ |
|---|---|---|
| (inference only) | (0.19±0.00) | (0.19±0.00) |
| (Sending all data) | 3.11±0.03 | 3.11±0.03 |
| Entropy | 0.55±0.02 | 0.55±0.02 |
| SSPP ($\varepsilon_1$) | 0.57±0.02 | 0.61±0.02 |
| SSPP ($\varepsilon_2$) | 0.59±0.02 | 0.66±0.02 |
| SSPP ($\varepsilon_3$) | 0.78±0.03 | 1.23±0.05 |
5.3.2 Memory Usage
While memory swapping virtually increases the available memory, performing too many reads and writes on the storage device (e.g., an SD card) can harm its longevity and/or degrade execution speed. Among the studied methods, SSPP has the highest space complexity, so we measured its memory footprint during the experiments, with and without the memoization discussed in Section 4. While memoization is mainly used to reduce execution time at the cost of more memory, when enabling memoization we also introduced a look-up table that stores only once the images present in multiple sieves, which reduces the memory footprint of the studied methods. We measured the average memory usage with and without this optimization. The first observation is that efficient caching of samples significantly reduces the memory footprint. Secondly, even for the most memory-heavy setting of Table 1, the mean consumption stayed well within the memory of typical edge devices, meaning that all settings allow operation without relying on memory swapping on most such devices.
6 Conclusion
We formalized active learning for DNNs on edge devices. Based on considerations about the nature of the problem, a solution method is required to be 1) stream-based and 2) batch-mode. We proposed a general framework that solves this problem as a stream submodular maximization problem, taking both the informativeness of individual samples and the diversity of the selected set into account. We evaluated our approach on classification and object detection tasks with several datasets, on servers and on a real-world edge device. Empirical results showed that our approach outperforms the other methods and can run even on devices with low computational resources.
References
- [1] Rahaf Aljundi, Nikolay Chumerin, and Daniel Olmeda Reino. Identifying Wrongly Predicted Samples: A Method for Active Learning. arXiv:2010.06890, 2020.
- [2] Kareem Amin, Corinna Cortes, Giulia DeSalvo, and Afshin Rostamizadeh. Understanding the Effects of Batching in Online Active Learning. In International Conference on Artificial Intelligence and Statistics, pages 3482–3492. PMLR, 2020.
- [3] Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds. In International Conference on Learning Representations, 2020.
- [4] Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Streaming submodular maximization: Massive data summarization on the fly. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’14, pages 671–680. ACM Press, 2014.
- [5] Sebastian Buschjäger, Philipp-Jan Honysz, and Katharina Morik. Very Fast Streaming Submodular Function Maximization. arXiv:2010.10059, 2020.
- [6] Adam Coates, Honglak Lee, and Andrew Ng. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 215–223. JMLR Workshop and Conference Proceedings, 2011.
- [7] Kun Dong, David Eriksson, Hannes Nickisch, David Bindel, and Andrew G. Wilson. Scalable Log Determinants for Gaussian Process Kernel Learning. Advances in Neural Information Processing Systems, 30:6327–6337, 2017.
- [8] Alexander Freytag, Erik Rodner, and Joachim Denzler. Selecting Influential Examples: Active Learning with Expected Model Output Changes. In European Conference on Computer Vision, pages 562–577, 2014.
- [9] Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep Bayesian Active Learning with Image Data. In International Conference on Machine Learning, pages 1183–1192. PMLR, 2017.
- [10] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
- [11] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
- [12] Bernard Haasdonk and Claus Bahlmann. Learning with Distance Substitution Kernels. In Pattern Recognition, Lecture Notes in Computer Science, pages 220–227. Springer, 2004.
- [13] Thomas Hofmann, Bernhard Schölkopf, and Alexander J. Smola. Kernel methods in machine learning. Annals of Statistics, 36(3):1171–1220, 2008.
- [14] Steven C. H. Hoi, Rong Jin, Jianke Zhu, and Michael R. Lyu. Batch mode active learning and its application to medical image classification. In Proceedings of the 23rd International Conference on Machine Learning, ICML ’06, pages 417–424. Association for Computing Machinery, 2006.
- [15] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, and Hartwig Adam. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1314–1324, 2019.
- [16] A. J. Joshi, F. Porikli, and N. Papanikolopoulos. Multi-class active learning for image classification. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 2372–2379, 2009.
- [17] Chieh-Chi Kao, Teng-Yok Lee, Pradeep Sen, and Ming-Yu Liu. Localization-Aware Active Learning for Object Detection. In Computer Vision – ACCV 2018, Lecture Notes in Computer Science, pages 506–522. Springer International Publishing, 2019.
- [18] V. Kaushal, R. Iyer, S. Kothawade, R. Mahadev, K. Doctor, and G. Ramakrishnan. Learning From Less Data: A Unified Data Subset Selection and Active Learning Framework for Computer Vision. In 2019 IEEE Winter Conference on Applications of Computer Vision, pages 1289–1299, 2019.
- [19] Ehsan Kazemi, Marko Mitrovic, Morteza Zadimoghaddam, Silvio Lattanzi, and Amin Karbasi. Submodular Streaming in All Its Glory: Tight Approximation, Minimum Memory and Low Adaptive Complexity. In International Conference on Machine Learning, pages 3311–3320. PMLR, 2019.
- [20] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
- [21] Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning. In Advances in Neural Information Processing Systems, pages 7026–7037, 2019.
- [22] Andreas Krause and Daniel Golovin. Submodular Function Maximization. In Tractability, pages 71–104. Cambridge University Press, 2012.
- [23] Alex Krizhevsky and Geoffrey Hinton. Learning Multiple Layers of Features from Tiny Images. Technical report, 2009.
- [24] Alex Kulesza and Ben Taskar. Determinantal point processes for machine learning. Foundations and Trends® in Machine Learning, 5(2-3):123–286, 2012.
- [25] Neil Lawrence, Matthias Seeger, and Ralf Herbrich. Fast sparse gaussian process methods: The informative vector machine. In Advances in Neural Information Processing Systems, volume 15, pages 625–632. MIT Press, 2003.
- [26] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
- [27] Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollár. Microsoft coco: Common objects in context, 2015.
- [28] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single Shot MultiBox Detector. In European Conference on Computer Vision, pages 21–37, 2016.
- [29] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning, pages 97–105. PMLR, 2015.
- [30] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14(1):265–294, 1978.
- [31] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Ng. Reading Digits in Natural Images with Unsupervised Feature Learning. NIPS, 2011.
- [32] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. arXiv:1912.01703, 2019.
- [33] Nicholas Roy and Andrew McCallum. Toward Optimal Active Learning through Sampling Estimation of Error Reduction. In International Conference on Machine Learning, ICML ’01, pages 441–448. Morgan Kaufmann Publishers Inc., 2001.
- [34] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018.
- [35] Ozan Sener and Silvio Savarese. Active Learning for Convolutional Neural Networks: A Core-Set Approach. In International Conference on Learning Representations, 2018.
- [36] Burr Settles. Active Learning Literature Survey. Technical Report, University of Wisconsin-Madison Department of Computer Sciences, 2009.
- [37] Burr Settles, Mark Craven, and Soumya Ray. Multiple-instance active learning. In Advances in Neural Information Processing Systems, volume 20, pages 1289–1296. Curran Associates, Inc., 2008.
- [38] Dravyansh Sharma, Ashish Kapoor, and Amit Deshpande. On Greedy Maximization of Entropy. In International Conference on Machine Learning, pages 1330–1338. PMLR, 2015.
- [39] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era. In Proceedings of the IEEE International Conference on Computer Vision, pages 843–852, 2017.
- [40] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, pages 1139–1147. PMLR, 2013.
- [41] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V. Le. MnasNet: Platform-Aware Neural Architecture Search for Mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2820–2828, 2019.
- [42] Mingxing Tan and Quoc Le. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In International Conference on Machine Learning, pages 6105–6114. PMLR, 2019.
- [43] Mingxing Tan, Ruoming Pang, and Quoc V. Le. Efficientdet: Scalable and efficient object detection, 2020.
- [44] Zhilei Wang, Pranjal Awasthi, Christoph Dann, Ayush Sekhari, and Claudio Gentile. Neural Active Learning with Performance Guarantees. In Advances in Neural Information Processing Systems, volume 34, pages 7510–7521. Curran Associates, Inc., 2021.
- [45] Kai Wei, Rishabh Iyer, and Jeff Bilmes. Submodularity in Data Subset Selection and Active Learning. In International Conference on Machine Learning, pages 1954–1963. PMLR, 2015.
- [46] Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2636–2645, 2020.
- [47] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices, 2017.
Appendix A About Algorithms
A.1 Stream submodular maximization algorithms
As discussed in Section 2, three algorithms have been proposed to solve submodular maximization problems in a stream setting: Sieve-Streaming, Sieve-Streaming++, and ThreeSieves. The basic idea behind these three algorithms is to estimate the marginal gain required for the current set to achieve the target approximation ratio. In Sieve-Streaming and Sieve-Streaming++, the algorithm simultaneously holds several sets (called sieves), each with its corresponding required marginal gain as a threshold. When a new sample arrives, each sieve evaluates whether the marginal gain of adding the new sample is higher or lower than its threshold, and respectively appends or discards it. At the end of the data stream, the algorithm returns the sieve with the highest set value as the solution. In ThreeSieves, instead of holding multiple thresholds, the algorithm keeps only the highest threshold and gradually decreases it based on a statistical analysis.
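The following is a simplified, single-pass sketch in the spirit of Sieve-Streaming (thresholded sieves over a geometric grid of guesses of the optimum), not a faithful reimplementation of Sieve-Streaming++; it assumes the maximum singleton value is known up front, whereas the real algorithms estimate it on the fly and instantiate sieves lazily.

```python
import math

def sieve_streaming(stream, f, k, eps=0.1):
    """Simplified Sieve-Streaming-style selection (one pass, O(log(k)/eps) sieves).

    `f` is a monotone submodular set function over lists, `k` the batch size.
    """
    samples = list(stream)
    if not samples:
        return []
    m = max(f([x]) for x in samples)                   # max singleton value
    # Geometric grid of guesses v of OPT, with OPT in [m, 2*k*m].
    n_guesses = int(math.ceil(math.log(2 * k, 1 + eps))) + 1
    thresholds = [m * (1 + eps) ** i for i in range(n_guesses)]
    sieves = {v: [] for v in thresholds}
    for x in samples:
        for v, S in sieves.items():
            if len(S) >= k:
                continue
            gain = f(S + [x]) - f(S)
            # Keep x if its marginal gain is large enough for this guess of OPT.
            if gain >= (v / 2.0 - f(S)) / (k - len(S)):
                S.append(x)
    return max(sieves.values(), key=f)                 # best sieve wins
```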
A.2 Extended simple AL algorithms
Active learning methods from previous work designed for the non-batch setting often use an uncertainty function that evaluates the informativeness of a single data sample. Such a method can be extended to the batch setting by taking the sum of the contributions of the samples in a set. This is a special case of our framework without diversity, i.e., $\lambda_D = 0$ in Eq. (1). Note that in this case, Eq. (1) is modular (linear), and we can obtain the optimal solution with a greedy approach.
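In a stream, this extended greedy reduces to keeping the $k$ highest-scoring samples seen so far; a minimal heap-based sketch, with a hypothetical per-sample `score` function, is shown below.

```python
import heapq

def stream_top_k(stream, score, k):
    """Keep the k samples with the highest individual scores (sum-of-scores batch)."""
    heap = []                                   # min-heap of (score, index, sample)
    for i, x in enumerate(stream):
        item = (score(x), i, x)
        if len(heap) < k:
            heapq.heappush(heap, item)
        elif item[0] > heap[0][0]:              # better than the current worst kept
            heapq.heapreplace(heap, item)
    return [x for _, _, x in heap]
```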
A.3 Related methods
We show a comparison of related methods in Table 2. It emphasizes that existing methods cannot be applied to the problem we consider.
Appendix B Efficient Determinant Calculation
By construction, the similarity matrix for $S \cup \{x\}$ is described as follows:

$$M_{S \cup \{x\}} = \begin{pmatrix} M_S & b \\ b^{\top} & c \end{pmatrix},$$

where $b_i = m(x_i, x)$ for $x_i \in S$ and $c = m(x, x)$. Then, we can calculate its determinant using $\det(M_S)$ as follows:

$$\det\left(M_{S \cup \{x\}}\right) = \det(M_S)\left(c - b^{\top} M_S^{-1} b\right). \qquad (4)$$

This is $O(|S|^2)$ when $\det(M_S)$ and $M_S^{-1}$ are given. While calculating an inverse matrix is generally computationally heavy, it can be efficiently updated when the matrix has the block structure above:

$$M_{S \cup \{x\}}^{-1} = \begin{pmatrix} M_S^{-1} + \frac{1}{d}\, M_S^{-1} b\, b^{\top} M_S^{-1} & -\frac{1}{d}\, M_S^{-1} b \\ -\frac{1}{d}\, b^{\top} M_S^{-1} & \frac{1}{d} \end{pmatrix}, \qquad (5)$$

where $d = c - b^{\top} M_S^{-1} b$. Note that a similarity matrix is always symmetric, thus $b^{\top} M_S^{-1} = \left(M_S^{-1} b\right)^{\top}$ always holds. Using this property, Eq. (5) is also efficiently calculated.
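A NumPy sketch of these update rules is given below; it maintains $\det(M_S)$ and $M_S^{-1}$ as samples are appended and can be checked against direct recomputation. It is a minimal illustration of Eqs. (4)–(5), not our device implementation.

```python
import numpy as np

class IncrementalLogDet:
    """Maintains det(M_S) and M_S^{-1} under Eqs. (4)-(5) as samples are added."""

    def __init__(self):
        self.M_inv = None      # inverse of the current similarity sub-matrix
        self.det = 1.0         # determinant of the current sub-matrix (empty set: 1)

    def det_if_added(self, b, c):
        """det(M_{S ∪ {x}}) given the new column b (similarities to S) and c = m(x, x)."""
        if self.M_inv is None:
            return c
        return self.det * (c - float(b @ self.M_inv @ b))      # Eq. (4)

    def add(self, b, c):
        if self.M_inv is None:
            self.M_inv, self.det = np.array([[1.0 / c]]), c
            return
        v = self.M_inv @ b
        d = c - float(b @ v)                                    # Schur complement
        top_left = self.M_inv + np.outer(v, v) / d              # Eq. (5), using symmetry
        self.M_inv = np.block([[top_left, -v[:, None] / d],
                               [-v[None, :] / d, np.array([[1.0 / d]])]])
        self.det *= d

# Sanity check against direct computation on a random positive definite matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
M = X @ X.T + np.eye(5)
inc = IncrementalLogDet()
for i in range(5):
    inc.add(M[i, :i].copy(), M[i, i])
assert np.isclose(inc.det, np.linalg.det(M))
```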



Appendix C Diversity Construction Details
As discussed in Section 4, we have to define a kernel function $m$ to evaluate the diversity of a set. There are many possible choices for $m$ to measure the similarity between a pair of images $(x, x')$. For example,

$$m_1(x, x') = \left(\phi(x)^{\top}\phi(x') + c\right)^{p}, \quad m_2(x, x') = \exp\left(-\gamma \lVert x - x' \rVert^2\right), \quad m_3(x, x') = \exp\left(-\gamma \lVert \phi(x) - \phi(x') \rVert^2\right), \quad m_4(x, x') = \exp\left(-\gamma\, \mathrm{JSD}\left(p(x), p(x')\right)\right),$$

where $\phi(\cdot)$ is a middle-layer output, $\gamma$ is a scaling parameter, $p(\cdot)$ is the softmax output, and JSD is shorthand for the Jensen–Shannon divergence (used here as a substituted distance [12]). Here, $m_1$ is known as a polynomial kernel, $m_2$ and $m_3$ are known as RBF kernels, and both kernel types are positive definite [13]. This selection can be interpreted as choosing which space we focus on, i.e., considering the similarity in the image space, the feature vector space, etc.
Appendix D More Implementation Details
Throughout all our experiments, we used the machine learning framework PyTorch [32].
D.1 Image Classification
For the classification task, we use a two-layer convolutional neural network as the classifier. We trained the model with the Adam [20] optimizer for a fixed number of iterations and a fixed mini-batch size. In the beginning, the model is randomly initialized and trained with the initial dataset $D_0$. Then, for each round $t$, we are given images sequentially (i.e., as a stream) and select up to $K$ of them. These images are appended to the current dataset (i.e., $D_t = D_{t-1} \cup S_t$ at round $t$). In each round, we trained the model from scratch. As a simulation of manual annotation, the labels for each image are given only after selection, so that the labels are unknown during selection. In the domain adaptation experiments, we sample equally from source-domain images and target-domain images to form training mini-batches, since they are highly imbalanced, especially in early rounds. For the “non-object” image, we used a monochrome image obtained by taking the mean color (i.e., the mean of each RGB channel) of the next real image in the stream. The elapsed time per round was a few minutes in each experiment.
D.2 Object Detection
For the detection task, we based our model on a PyTorch implementation of SSD (https://github.com/lufficc/SSD, MIT license). The training setup for each round of our experiments is the following: a fixed input image size, a fixed number of iterations and mini-batch size, and a momentum SGD [40] optimizer whose learning rate is divided by a constant factor at two milestone iterations. For the oracle, we used the pretrained EfficientDet D7 model (https://github.com/rwightman/efficientdet-pytorch, Apache License 2.0) with a fixed score threshold. To save computation, round 0 (i.e., the first training before selection) was skipped in all experiments and replaced by loading a model checkpoint trained with the same hyperparameters. The annotator has close to state-of-the-art precision but is not perfect, and we observed cases of false positives in a small fraction of the annotated images (e.g., a cropped truck that was wrongly identified as a bus), which strongly impacted the trained model's accuracy. Dealing with noisily labeled data is beyond the scope of this work, so we manually removed the experiments containing this kind of annotation.
As images with no bounding box annotation are ignored in our detector's loss function, we filtered out images from $S_t$ for which the oracle detects no object. Also, if no image remains after filtering (i.e., $S_t = \emptyset$), then $D_t = D_{t-1}$, so we skipped the training of round $t$.
In this setting, the elapsed time during selection was negligible compared to the training time, which took several hours per round and dominated the duration of a single active learning experiment.
Appendix E More Experimental Results
E.1 More classification results


E.2 Number of Selected Images
Table 3: Number of unique and total selected images per method (top: cold-start scenario; bottom: domain adaptation scenario).

| | MNIST | | SVHN | | CIFAR-10 | |
|---|---|---|---|---|---|---|
| | #unique | #selected | #unique | #selected | #unique | #selected |
| Random | 10748 | 38400 | 10748 | 38400 | 10748 | 38400 |
| Entropy | 7532 | 38400 | 8316 | 38400 | 8383 | 38400 |
| SSPP | 263823 | 37963 | 278330 | 38122 | 265725 | 38163 |

| | SVHN → MNIST | | MNIST → SVHN | | STL-10 → CIFAR-10 | |
|---|---|---|---|---|---|---|
| | #unique | #selected | #unique | #selected | #unique | #selected |
| Random | 10748 | 38400 | 10748 | 38400 | 10748 | 38400 |
| Entropy | 5517 | 38400 | 84310 | 38400 | 8826 | 38400 |
| SSPP | 33158 | 38291 | 28519 | 38222 | 335316 | 38291 |
We report the number of selected images in the classification experiments (Section 5) in Table 3. In these experiments, the images in a stream are duplicated, so the number of unique images is a good indicator of how diverse the selected batches were. Moreover, we show the total number of selected images, since SSPP may select fewer images than allowed (the per-round maximum $K$ in these experiments). While Random and Entropy always select the maximum number of images, we can see that SSPP selects fewer. This could have some impact on performance.
E.3 Study of the Parameter ε in Sieve-Streaming++

As discussed in Section 2, the parameter $\varepsilon$ of Sieve-Streaming++ allows a tradeoff between the complexity of the algorithm and the optimality of the solution. We thus report in Figure 6 the performance of the method depending on the value of $\varepsilon$. In line with our expectations, the higher the $\varepsilon$, the lower the performance. This is consistent with the approximation ratio of the algorithm ($1/2 - \varepsilon$). However, the performance gain between the two smallest values of $\varepsilon$ is small, showing signs of plateauing for smaller values of $\varepsilon$ in this setting, while the difference in computation cost is still large. Based on this result, and on the fact that the user has a minimum requirement in terms of execution speed, the procedure for determining the value of $\varepsilon$ in practice could be as follows: since there is a value of $\varepsilon$ at which the improvement in accuracy reaches a ceiling, if one can afford to search for this value through preliminary experiments (i.e., by performing labeling and training), the final $\varepsilon$ is chosen by taking it into account, within the range of values that allows a practical execution speed. Otherwise, measuring the execution speed with several values of $\varepsilon$ can be done at almost no cost, so one may simply use the smallest $\varepsilon$ fulfilling the speed requirements.
E.4 Examples of Selected Samples
We show the batches of images selected by the different methods in the same round in Figures 7, 8, and 9. The batch selected by Entropy in Figure 9 contains many informative samples, as we can see instances that are hard to detect (e.g., a bus located far from the camera) or close to decision boundaries (e.g., trucks looking like buses). The redundancy of samples in the batch is the consequence of the absence of a diversity component. The batch selected by SSPP in Figure 7 shows a good tradeoff between informative samples and diversity.


