Multi-Domain Multi-Task Rehearsal for Lifelong Learning
Abstract
Rehearsal, which reminds the model of old knowledge by storing and replaying it in lifelong learning, is one of the most effective ways to mitigate catastrophic forgetting, i.e., the biased forgetting of previous knowledge when moving to new tasks. However, in most previous rehearsal-based methods, the old tasks suffer from an unpredictable domain shift while the new task is being trained. This is because these methods always ignore two significant factors. First, the data imbalance between the new task and the old tasks makes the domains of the old tasks prone to shift. Second, the task isolation among all tasks drives the domain shift in unpredictable directions. To address the unpredictable domain shift, in this paper we propose Multi-Domain Multi-Task (MDMT) rehearsal, which trains the old tasks and the new task parallelly and equally to break the isolation among tasks. Specifically, a two-level angular margin loss is proposed to encourage the intra-class/task compactness and inter-class/task discrepancy, which keeps the model from domain chaos. In addition, to further address the domain shift of the old tasks, we propose an optional episodic distillation loss on the memory to anchor the knowledge of each old task. Experiments on benchmark datasets validate that the proposed approach can effectively mitigate the unpredictable domain shift.
Introduction
Lifelong learning, also known as continual learning and incremental learning, aims to continually learn new knowledge from a sequence of tasks over a lifelong time. In contrast to traditional supervised learning, the lifelong setting brings machine learning closer to realistic human learning, which acquires a new skill quickly from new training data. All the while, catastrophic forgetting (French 1999; Kirkpatrick et al. 2017) is the main challenge for lifelong learning, which happens when the learner forgets the knowledge of old tasks while learning a new task. To seek a balance between the old tasks and the new task, many methods have been proposed in recent years to handle catastrophic forgetting. Following (De Lange et al. 2019), these methods can be categorized into Rehearsal (Lopez-Paz and Ranzato 2017; Chaudhry et al. 2018b; Guo et al. 2019), Regularization (Li and Hoiem 2016; Chaudhry et al. 2018a; Dhar et al. 2019) and Parameter Isolation (Mallya, Davis, and Lazebnik 2018; Yoon et al. 2017). Regularization-based and parameter isolation-based methods store no data from old tasks and rely heavily on extra regularizers or architectures, resulting in lower performance than rehearsal-based methods. Rehearsal-based methods store a small number of samples from the training set, and the model retrains the saved data when training the new task to avoid forgetting.


At each step of lifelong learning (see Fig. 1(a)), most existing rehearsal-based methods (Rebuffi, Kolesnikov, and Lampert 2016; Lopez-Paz and Ranzato 2017; Chaudhry et al. 2018b; Guo et al. 2019) focus on training the new task while treating the stored data from old tasks as constraints to preserve their performance. However, the old tasks in these methods may suffer from an unpredictable domain shift that arises from two significant factors in the lifelong learning process: 1) the Data Imbalance between old and new tasks: the shrinkage of the training data of old tasks makes their domains prone to shift, which manifests as catastrophic forgetting; 2) the Task Isolation among all tasks (old and new), which drives the domain shift in unpredictable directions, so the boundary between any two tasks may become weak.
To address the unpredictable domain shift, in this paper we propose a Multi-Domain Multi-Task (MDMT) Rehearsal method inspired by multi-domain multi-task learning (Yang and Hospedales 2014), which considers multiple tasks w.r.t. multiple domains and trains them equally. Specifically, as shown in Fig. 1(b), we first retrain the old tasks in parallel with the training of the new task, rather than setting them as constraints. We separate all these tasks by a Cross-Domain Softmax, which extends the softmax of each isolated task by combining the logits of all other seen tasks and thereby separates the tasks from each other. Then, to further alleviate the unpredictable domain shift, we leverage a Two-level Angular Margin (TAM) loss to encourage the intra-class/task compactness and the inter-class/task discrepancy on the basis of the Cross-Domain Softmax. In addition, we present an optional Episodic Distillation (ED) loss on all memory buffers of the old tasks, which suppresses the domain shift by storing the latent representation of each sample in the memories. We evaluate our MDMT rehearsal on four popular lifelong learning datasets for image classification and achieve new state-of-the-art performance. The experimental results show that the proposed MDMT rehearsal can significantly mitigate the unpredictable domain shift. Our contributions are three-fold: (1) we propose a Multi-Domain Multi-Task Rehearsal method for lifelong learning, which parallelly and equally trains the old and new tasks and separates them by a Cross-Domain Softmax function; (2) we propose a Two-level Angular Margin (TAM) loss for lifelong learning to further boost the Cross-Domain Softmax for the sake of intra-class/task compactness and inter-class/task discrepancy; (3) we build an optional Episodic Distillation loss to reduce the domain shift in the lifelong process.
Related Work
Lifelong learning. In contrast to static machine learning (He et al. 2016; Deng et al. 2018; Lyu et al. 2019; Lyu, Feng, and Wang 2020), lifelong learning (Ring 1997; Thrun 1998) is proposed to improve the self-learning ability of a machine that continually acquires new knowledge. Solutions to catastrophic forgetting (French 1999; Kirkpatrick et al. 2017) proposed in recent years can be categorized into regularization-based, parameter isolation-based and rehearsal-based methods (De Lange et al. 2019). Regularization-based methods (Li and Hoiem 2016; Chaudhry et al. 2018a; Dhar et al. 2019) store no data but use extra regularization terms in the loss function to consolidate previous knowledge. Parameter isolation-based methods (Mallya, Davis, and Lazebnik 2018; Yoon et al. 2017) freeze the task-specific parameters and grow new branches for new tasks to bring in new knowledge. Although much progress has been made on regularization-based and parameter isolation-based methods, their performance still has a big gap to rehearsal-based methods. Rehearsal-based methods store some knowledge of old tasks to remind the model. According to the saved form, existing methods can be categorized into three groups: (1) by saving the raw data (rehearsal, e.g., images) (Rebuffi, Kolesnikov, and Lampert 2016; Lopez-Paz and Ranzato 2017; Chaudhry et al. 2018b; Guo et al. 2019), the model can retrain the saved data together with the training of the new task; these methods often construct a single objective for the new task and set the saved data of the old tasks as constraints; (2) by saving the latent features of selected samples (latent rehearsal) (Pellegrini et al. 2019), the model slows down learning at the layers below the rehearsal layer and leaves the layers above free to learn at full pace; (3) by building a generative model to synthesize data (pseudo-rehearsal) (Shen et al. 2020; van de Ven and Tolias 2018; Lesort et al. 2019), the knowledge can be saved as parameters rather than data. In this paper, we only consider the native rehearsal that stores raw data for image classification.
Multi-domain multi-task learning. Multi-domain learning (Nam and Han 2016; Tang and Jia 2020) and multi-task learning (Lin et al. 2019; Sener and Koltun 2018) have been studied for years. Multi-domain learning refers to sharing information about the same problem across different contextual domains, while multi-task learning addresses sharing information about different problems in the same domain. By considering both multiple domains and multiple tasks, multi-domain multi-task (MDMT) learning was first proposed in (Yang and Hospedales 2014) and has been applied to classification (Peng and Dredze 2016) and semantic segmentation (Fourure et al. 2017), etc. The common approach to MDMT learning is to construct parallel data streams and build correlations among tasks. Here, we explain why we formulate the lifelong learning problem as an MDMT learning problem. (1) By storing some samples of each task in a memory, MDMT learning can train all tasks together, which helps mitigate the task isolation in traditional rehearsal-based lifelong learning. (2) MDMT learning can help suppress the domain shift to some extent by making the classifiers perceive each other.
Margin loss and distillation loss. Margin-based softmax explicitly adds a margin to each logit to improve feature discrimination. L-Softmax (Liu et al. 2016) and SphereFace (Liu et al. 2017) add a multiplicative angular margin to squeeze each class. CosFace (Wang et al. 2018b, a) and ArcFace (Deng et al. 2019) add an additive cosine margin and angular margin, respectively, for easier optimization. Based on ArcFace, we propose a Two-level Angular Margin loss to guarantee both intra-class/task compactness and inter-class/task discrepancy. Knowledge distillation (Hinton, Vinyals, and Dean 2015) transfers the smoothed probability distribution of the output layer of a teacher network to a student network. Inspired by this, we build a distillation loss between the old and new models on the old tasks by storing the latent representations of the stored data.

Methodology
Multi-domain multi-task rehearsal
Suppose there are $T$ different tasks with respect to datasets $\{D_1,\dots,D_T\}$. For the $t$-th dataset (task), $D_t=\{(x_i^t,y_i^t)\}_{i=1}^{N_t}$, where $x_i^t$ is the $i$-th input data, $y_i^t$ is the corresponding label and $N_t$ is the number of samples. $D_t$ can be split into a training set $D_t^{tr}$ and a testing set $D_t^{te}$, and we denote $D_t^{tr}$ as $D_t$ in our presentation for simplicity. Lifelong learning aims at learning a predictor $f:(x,t)\mapsto y$, which can predict any task that has been learned at any time. Rehearsal-based lifelong learning (Rebuffi, Kolesnikov, and Lampert 2016; Lopez-Paz and Ranzato 2017; Riemer et al. 2018; Chaudhry et al. 2018b; Guo et al. 2019) builds a small memory buffer $\mathcal{M}_t$ for each previous task $t$, i.e., $\mathcal{M}_t\subset D_t$. Following (Lopez-Paz and Ranzato 2017), when training a task $T$, for all $t$ such that $t<T$, rehearsal-based lifelong learning can be modeled as a single-objective optimization problem:
\[
\min_{\theta,\,\psi_T}\ \ell\big(f_T(x;\theta,\psi_T),\,y\big),\quad (x,y)\in D_T \tag{1}
\]
\[
\text{s.t.}\quad \ell\big(f_t(\mathcal{M}_t;\theta,\psi_t)\big)\ \le\ \ell\big(f_t(\mathcal{M}_t;\theta^{T-1},\psi_t^{T-1})\big),\quad \forall\, t<T,
\]
where $\ell$ is the empirical loss, $\theta$ is the shared parameter across all tasks, and $\psi_t$ and $\psi_T$ are the task-specific parameters ($\theta^{T-1}$ and $\psi_t^{T-1}$ denote the parameters after training task $T-1$). The constraints above are designed to prevent the performance degradation of previous tasks. The problem can then be reduced to finding an optimal gradient that benefits all tasks. To inspect the increase in the old tasks' losses, (Lopez-Paz and Ranzato 2017; Chaudhry et al. 2018b; Guo et al. 2019) compute the angle between the gradient of each old task and the proposed gradient update on the current task, as in the sketch below.
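For concreteness, the following is a minimal NumPy sketch of this angle check in the A-GEM style; `agem_update`, `g_new` and `g_ref` are illustrative names, not the original implementation.

```python
# A minimal NumPy sketch (not the authors' code) of the gradient check used
# by GEM/A-GEM-style rehearsal: the angle between the proposed update and an
# old-task memory gradient decides whether the update is kept or projected.
import numpy as np

def agem_update(g_new: np.ndarray, g_ref: np.ndarray) -> np.ndarray:
    """g_new: flattened gradient on the current task;
    g_ref: (average) flattened gradient on the memory of old tasks."""
    dot = float(np.dot(g_new, g_ref))
    if dot >= 0.0:
        # angle <= 90 degrees: the update does not increase the old-task loss
        return g_new
    # otherwise remove the conflicting component (A-GEM projection)
    return g_new - (dot / float(np.dot(g_ref, g_ref))) * g_ref
```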
However, such single-objective optimization on the current task over-emphasizes the new task while ignoring the differences among tasks. In other words, the old tasks only play the role of source domains to be transferred into the current training model. The domains of the old tasks will shift significantly under the rectified gradient, because the gradient norm of the new task is much larger than that of the old tasks, which may induce domain overlap.
In contrast, this paper treats the problem as a Multi-Domain Multi-Task (MDMT) learning problem to jointly and equally improve the current task as well as the old tasks:
\[
\min_{\theta,\,\{\psi_t\}_{t=1}^{T}}\ \Big\{\ell\big(f_t(X_t;\theta,\psi_t),\,Y_t\big)\Big\}_{t=1}^{T} \tag{2}
\]
\[
\text{s.t.}\quad \mathrm{dist}(D_i,D_j)\ \ge\ \mathrm{dist}^{T-1}(D_i,D_j),\quad \forall\, i\ne j,
\]
where $(X_t,Y_t)=\mathcal{M}_t$ if $t<T$ and $(X_t,Y_t)=D_T$ if $t=T$, and $\mathrm{dist}(\cdot,\cdot)$ denotes the distance between two domains. For the $T$ tasks w.r.t. datasets $\{D_1,\dots,D_T\}$, an MDMT rehearsal model trains all $T$ tasks parallelly and equally. The constraints above mean that the domain distance between any two tasks should not become smaller than it was for the model trained on the last task. Note that we only consider the situation where the tasks are irrelevant, as in common lifelong learning.
We take two key steps to solve Eq. (2) efficiently. First, we transform the multi-objective optimization into a single-objective optimization problem by ensembling all the objectives, following the traditional solution to multi-task learning (Lin et al. 2019; Sener and Koltun 2018):
\[
\min_{\theta,\,\{\psi_t\}_{t=1}^{T}}\ \sum_{t=1}^{T}\ell\big(f_t(X_t;\theta,\psi_t),\,Y_t\big). \tag{3}
\]
Second, calculating the distance between any two domains and storing the old predictors would incur a high memory cost, but we can realize the constraints in a simple yet effective way by extending the softmax function for each task as
\[
\mathcal{L}_{CDS}^{t}=-\frac{1}{n_t}\sum_{i=1}^{n_t}\log\frac{e^{z_{y_i}^{t}}}{\sum_{j=1}^{c_t}e^{z_{j}^{t}}+\sum_{s\ne t}\sum_{k=1}^{c_s}e^{z_{k}^{s}}}, \tag{4}
\]
where
\[
z_{j}^{t}={w_{j}^{t}}^{\top}\phi(x_i;\theta). \tag{5}
\]
$n_t$ is the batch size for task $t$, $w_j^t$ denotes the $j$-th column of the weight $W^t$ in the last fully-connected layer for task $t$, $\phi(\cdot;\theta)$ is the shared feature extractor, and $c_t$ is the class number. We name this extension Cross-Domain Softmax (CDS), which combines the logits from all other classifiers and resembles a native softmax for a classification problem with $\sum_t c_t$ classes in total. Here, we discuss the difference. In MDMT rehearsal, different tasks never share the same classifier as in common classification, i.e., the classifiers for different tasks lack mutual perception. By combining the logits from other tasks, the tasks can perceive and separate from each other. The previous methods update the model by an optimal gradient that relies heavily on the angle between the gradients of old and new tasks. In contrast, we directly obtain the hybrid gradient for the shared layers by ensembling, i.e., summing, the gradients from the new task and the old tasks.
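A minimal NumPy sketch of CDS follows, assuming each task head produces its own logits for the same feature batch; `cds_loss`, `logits_per_task`, `task_id` and `labels` are illustrative names.

```python
# A minimal NumPy sketch of the Cross-Domain Softmax in Eq. (4): the
# denominator pools the logits of all seen task heads so that the tasks
# "perceive" each other, instead of running one isolated softmax per task.
import numpy as np

def cds_loss(logits_per_task, task_id, labels):
    """logits_per_task: list of (batch, c_t) arrays, one per seen task.
    task_id: index of the task this batch belongs to.
    labels: (batch,) int class indices within that task."""
    z = np.concatenate(logits_per_task, axis=1)        # combine all heads
    offset = sum(l.shape[1] for l in logits_per_task[:task_id])
    z = z - z.max(axis=1, keepdims=True)               # numerical stability
    log_prob = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), offset + labels].mean()
```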
We compare our MDMT rehearsal with several representative rehearsal-based lifelong learning works:

iCaRL (Rebuffi, Kolesnikov, and Lampert 2016) saves a small number of samples so that the model does not forget old classes, but it classifies samples by the nearest prototype, which is not suitable for task-incremental lifelong learning because the task-specific parameters are ignored.
GEM/A-GEM (Lopez-Paz and Ranzato 2017; Chaudhry et al. 2018b) propose to overcome forgetting by finding an optimal gradient that keeps the old tasks from being corrupted, but they focus on training the new task with single-objective optimization while ignoring the domain shift of the old tasks.
ER (Chaudhry et al. 2019a) extends Experience Replay (Rolnick et al. 2019), originally proposed for reinforcement lifelong learning, and has been proven better than A-GEM. However, it never considers the relations among all tasks, so the domains of the old tasks may still shift significantly.
PRD (Hou et al. 2018) treats lifelong learning as a multi-task learning problem and builds a distillation module with one saved CNN expert as the teacher for each old task. Differently, our MDMT rehearsal leverages the expanded softmax without saving many extra models.
Two-level angular margin loss
The proposed MDMT rehearsal jointly and equally trains the new task and retrains the old tasks, making all tasks perceive each other. Nonetheless, the softmax loss is not efficient enough because it does not explicitly encourage intra-class compactness and inter-class discrepancy; to cope with this, large-margin softmax is widely used in recent discriminative problems (Deng et al. 2019; Liu et al. 2016). However, these methods cannot be directly applied to MDMT rehearsal based lifelong learning, because they place the large margin only within a single task and cannot handle the multiple-task scenario.
In this paper, we propose a two-level margin, i.e., class level and task level, on the softmax for each task (Eq. (4)). Our work is based on the popular large-margin softmax method ArcFace (Deng et al. 2019), where the margin is added to the angle between weight and feature, which has been proven effective and efficient. Specifically, ArcFace removes the bias and transforms the logit fed into the softmax as $w_j^{\top}x=\|w_j\|\|x\|\cos\theta_j$, where $\theta_j$ is the angle between the weight $w_j$ and the feature $x$; then an angular margin $m$ is placed between different classes:
\[
\mathcal{L}_{Arc}=-\frac{1}{n}\sum_{i=1}^{n}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\ne y_i}e^{s\cos\theta_{j}}}, \tag{6}
\]
where the individual weight norm $\|w_j\|$ is fixed to $1$ by $\ell_2$ normalization, and the embedding feature norm $\|x_i\|$ is fixed to $s$ by $\ell_2$ normalization and rescaling. The normalization on features and weights makes the predictions depend only on the angle between them. Such a geodesic distance margin between the sample and the centers makes the prediction gain more intra-class compactness and inter-class discrepancy.
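For concreteness, below is a minimal NumPy sketch of this normalized, margin-adjusted logit; `arcface_logits` and its default margins are illustrative, not the original ArcFace implementation.

```python
# A minimal NumPy sketch of the ArcFace-style margin in Eq. (6): with
# l2-normalized weights and features, the logit reduces to s*cos(theta),
# and the angular margin m is added only to the target-class angle.
import numpy as np

def arcface_logits(feats, weights, labels, m=0.5, s=64.0):
    """feats: (batch, d); weights: (d, classes); labels: (batch,) ints."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                                        # cos(theta_j)
    rows = np.arange(len(labels))
    theta = np.arccos(np.clip(cos[rows, labels], -1.0, 1.0))
    cos[rows, labels] = np.cos(theta + m)              # angular margin
    return s * cos                                     # rescaled logits
```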
Based on Eq. (6), we propose our Two-level Angular Margin (TAM) loss for task $t$:
\[
\mathcal{L}_{TAM}^{t}=-\frac{1}{n_t}\sum_{i=1}^{n_t}\log\frac{e^{s\,\hat{z}_{y_i}^{t}}}{e^{s\,\hat{z}_{y_i}^{t}}+\sum_{j\ne y_i}e^{s\cos\theta_{j}^{t}}+\sum_{s'\ne t}\sum_{k=1}^{c_{s'}}e^{s\,\hat{z}_{k}^{s'}}}, \tag{7}
\]
where
\[
\hat{z}_{y_i}^{t}=\cos\big(\theta_{y_i}^{t}+m_1\big),\qquad \hat{z}_{k}^{s'}=\cos\big(\theta_{k}^{s'}-m_2\big). \tag{8}
\]
In Eq. (7), we add the class-level margin $m_1$ and the task-level margin $m_2$ on the angle. $m_1$ is similar to $m$ in Eq. (6); it controls the intra-task class compactness and discrepancy and has been proven effective for discriminative learning (Deng et al. 2019). $m_2$ controls the task compactness and discrepancy, which ensures the knowledge of each task does not mix up with that of the others.
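The sketch below puts the two margins together, following the reconstruction of Eqs. (7)-(8); `tam_loss`, `cos_per_task` and the default hyperparameter values are illustrative assumptions, not the original code.

```python
# A hedged NumPy sketch of the TAM loss as reconstructed in Eqs. (7)-(8):
# m1 tightens each class inside its own task, while m2 inflates the other
# tasks' logits (harder negatives) so tasks are pushed apart. cos_per_task
# holds per-head cos(theta) values from l2-normalized features/weights.
import numpy as np

def tam_loss(cos_per_task, task_id, labels, m1=0.1, m2=0.01, s=24.0):
    """cos_per_task: list of (batch, c_t) arrays of cos(theta) per task."""
    rows = np.arange(len(labels))
    adjusted = []
    for t, c in enumerate(cos_per_task):
        theta = np.arccos(np.clip(c, -1.0, 1.0))
        if t == task_id:
            c = c.copy()
            c[rows, labels] = np.cos(theta[rows, labels] + m1)  # class margin
        else:
            c = np.cos(np.maximum(theta - m2, 0.0))             # task margin
        adjusted.append(c)
    z = s * np.concatenate(adjusted, axis=1)
    offset = sum(c.shape[1] for c in cos_per_task[:task_id])
    z = z - z.max(axis=1, keepdims=True)                # numerical stability
    log_prob = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_prob[rows, offset + labels].mean()
```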
As shown in Fig. 3, the proposed TAM loss brings two advantages to MDMT rehearsal based lifelong learning. First, TAM helps the model better discriminate within a task. Although CDS already yields a good angle between a feature and its target weight, the TAM loss reduces this angle even further, which expresses the effect of $m_1$. Second, the TAM loss mitigates the domain overlap caused by the domain shift by forcing the tasks to separate. For the angles among the weight centers, the TAM loss significantly separates old and new tasks, which expresses the effect of $m_2$. However, it is still difficult to eliminate the domain shift because of the extreme data imbalance between the old tasks and the new task. Thus, we construct an optional Episodic Distillation loss for the MDMT rehearsal based lifelong process.
Table 1: Comparison with the state-of-the-art methods in terms of $A_T$, $F_T$, $LCA_{10}$ and $LTR$, on Permuted MNIST and Split CIFAR (top: Joint, VAN, EWC, MAS, RWalk, MER, GEM, A-GEM, ER, MEGA, MDMT-R) and on Split CUB and Split AWA (bottom: Joint, VAN, EWC, MAS, RWalk, PI, A-GEM, ER, MEGA, MDMT-R).
Episodic distillation
In this paper, we propose a simple yet effective solution, named Episodic Distillation (ED) loss, to further mitigate the domain shift of the old tasks. The main role of the ED loss is to reduce the change of the feature distribution along the lifelong process as far as possible. First, apart from the sampled training data stored in memory, i.e., $\mathcal{M}_t$, we also store the corresponding latent representations $\mathcal{R}_t=\{r_i^t\}$ when the samples are first trained. Then, we train the model with an updated objective:
\[
\min_{\theta,\,\{\psi_t\}_{t=1}^{T}}\ \sum_{t=1}^{T}\mathcal{L}_{TAM}^{t}+\mathcal{L}_{ED}, \tag{9}
\]
where
\[
\mathcal{L}_{ED}=\sum_{t<T}\frac{1}{|\mathcal{M}_t|}\sum_{x_i^t\in\mathcal{M}_t}\big\|\phi(x_i^t;\theta)-r_i^t\big\|_2^2 \tag{10}
\]
is the ED loss, which can take many forms; we choose the Mean Squared Error (MSE). By training with Eq. (9) at each step, we can ease the shift effectively.
The ED loss is an optional loss function and builds extra memory buffers to save the latent representation of each sample in the memories. The extra buffers do increase the memory cost to some extent, but the cost is still very small compared with the whole training set. In our implementation, we save the representation from the fc layer before the last one, which is a vector of length 256 to 2048 depending on the network. That means the cost of the representation memory is even smaller than that of the data memory.
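A minimal sketch of the ED anchor in Eq. (10) is given below; `ed_loss` and its argument names are illustrative.

```python
# A minimal sketch of the ED loss in Eq. (10): an MSE anchor between the
# current features of memory samples and the representations stored when
# those samples were first trained.
import numpy as np

def ed_loss(current_feats, stored_feats):
    """current_feats, stored_feats: (n_mem, d) arrays for memory samples."""
    return float(np.mean((current_feats - stored_feats) ** 2))
```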
Total algorithm
We follow A-GEM (Chaudhry et al. 2018b) in uniting the memories of all old tasks for efficient training. Let $\mathcal{M}$ and $\mathcal{R}$ be the united data and representation memories of the old tasks. At each step, we sample a batch of data from the united memory. In this way, the previous tasks are optimized by an average gradient instead of one gradient per previous task, which speeds up the training.
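The following is a hedged sketch of this united-memory sampling; `sample_united_memory` and the buffer layout are illustrative assumptions.

```python
# A hedged sketch of the united episodic memory: the data, label, task-id
# and representation buffers of all old tasks are stacked once, and each
# step samples a single batch from the union, yielding one averaged
# old-task gradient instead of one gradient per previous task.
import numpy as np

def sample_united_memory(mem_x, mem_y, mem_task, mem_r, batch_size, rng=None):
    """mem_x/mem_y/mem_task/mem_r: arrays stacked over all old tasks."""
    if rng is None:
        rng = np.random.default_rng(1234)
    idx = rng.choice(len(mem_x), size=min(batch_size, len(mem_x)),
                     replace=False)
    return mem_x[idx], mem_y[idx], mem_task[idx], mem_r[idx]
```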
We show the detailed process, including the training and evaluation procedures, in Algorithm 1. First, the memory features stored by StoreMem serve as anchors for the old tasks during current-task training. Second, the gradient to be applied depends not only on the gradients of the old and current tasks under the TAM loss, but also on the gradient of the feature difference from the ED loss. The evaluation procedure is similar to that of previous works.
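Putting the pieces together, a hedged sketch of the per-step objective of Eq. (9) follows, reusing `tam_loss` and `ed_loss` from the sketches above; for readability the memory batch is assumed to come from a single old task, and the `lam` balance weight is an illustrative knob (Eq. (9) adds the terms directly).

```python
# A hedged sketch of the total objective of Eq. (9) for one step; a real
# implementation would compute this inside the framework's autograd graph
# and backpropagate through the shared feature extractor.
def mdmt_objective(new_cos, new_labels, new_task,
                   mem_cos, mem_labels, mem_task,
                   mem_feats, stored_feats, lam=1.0):
    loss_new = tam_loss(new_cos, new_task, new_labels)   # current task
    loss_old = tam_loss(mem_cos, mem_task, mem_labels)   # united memory batch
    return loss_new + loss_old + lam * ed_loss(mem_feats, stored_feats)
```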
Table 2: Ablation of the class-level margin $m_1$, the task-level margin $m_2$ and the ED loss on Split CIFAR, with $m_1\in\{0.0, 0.1, 0.4\}$ and $m_2\in\{0.0, 0.01, 0.05, 0.1\}$, with and without the ED loss; the last row corresponds to $m_1=0.1$, $m_2=0.01$ with ED.

Experiments
Experimental Settings
We evaluate the proposed method on four image recognition datasets. (1) Permuted MNIST (Kirkpatrick et al. 2017): a variant of the standard MNIST dataset of handwritten digits with 20 tasks, where each task applies a fixed random permutation of the input pixels to all the images of that task. (2) Split CIFAR (Zenke, Poole, and Ganguli 2017): 20 disjoint subsets of the CIFAR-100 dataset (Krizhevsky, Hinton et al. 2009), where each subset is formed by randomly sampling 5 classes without replacement from the original 100 classes. (3) Split CUB (Chaudhry et al. 2018b): the CUB dataset (Wah et al. 2011) split into 20 disjoint subsets by randomly sampling 10 classes without replacement from the original 200 classes. (4) Split AWA (Chaudhry et al. 2018b): 20 subsets of the AWA dataset (Lampert, Nickisch, and Harmeling 2009), where each subset is constructed by sampling 5 classes with replacement from a total of 50 classes, so the same class can appear in different subsets.
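As an illustration, a hedged sketch of how such a class-split task sequence can be built is given below; `make_class_splits` and the seed are illustrative, not the benchmark-defining code.

```python
# A hedged sketch of a Split-style task construction: sample disjoint
# groups of classes without replacement (here 20 tasks x 5 classes, as
# in Split CIFAR) from a random permutation of all class labels.
import numpy as np

def make_class_splits(num_classes=100, num_tasks=20, seed=1234):
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_classes)
    return np.array_split(perm, num_tasks)   # 20 disjoint 5-class groups
```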
We leverage four existing metrics to evaluate the performance and catastrophic forgetting. (1) Average Accuracy ($A_T$): the average accuracy over all tasks after the model has been trained continually up to task $T$; in particular, $A_T$ after the last task is the average accuracy on all the tasks after the last task has been learned. (2) Forgetting Measure (Chaudhry et al. 2018a) ($F_T$): the average forgetting after the model has been trained continually with all the mini-batches up to task $T$. (3) Learning Curve Area (Chaudhry et al. 2018a) ($LCA_\beta$): the area under the convergence curve of the average $b$-shot performance after the model has been trained on all the tasks, where $b\le\beta$. (4) Long-Term Remembering (Guo et al. 2019) ($LTR$): quantifies the accuracy drop on each task relative to the accuracy right after that task was learned. The detailed descriptions and formulas are given in the appendix.


Following the previous works (Lopez-Paz and Ranzato 2017; Chaudhry et al. 2018b; Guo et al. 2019), for Permuted MNIST we adopt a standard fully-connected network with two hidden layers, where each layer has 256 units with ReLU activation. For Split CIFAR we use a reduced ResNet18 (He et al. 2016). For Split CUB and Split AWA, we use a standard ResNet18.
Comparison with the state-of-the-arts
We compare the proposed method with the state-of-the-art methods including EWC (Kirkpatrick et al. 2017), MAS (Aljundi et al. 2018), RWalk (Chaudhry et al. 2018a), PI (Zenke, Poole, and Ganguli 2017), GEM (Lopez-Paz and Ranzato 2017), MER (Riemer et al. 2018), ER (Chaudhry et al. 2019b), A-GEM (Chaudhry et al. 2018b) and MEGA (Guo et al. 2019). Specifically, EWC, MAS, RWalk and PI are regularization-based methods that prevent the important weights from changing too much. GEM, MER, ER, A-GEM and MEGA are rehearsal-based methods that rectify the gradient guided by the stored data. VAN is a single supervised model trained continually on the sequence of tasks. We also compare with the Joint baseline that trains all datasets with different classifiers together.
First, as shown in Tab. 1, the quantitative results of the proposed method outperform the other state-of-the-art methods. For $A_T$, our method shows superiority on all four datasets, indicating less forgetting on old tasks and better learning of new tasks throughout lifelong training by reducing the unpredictable domain shift. $F_T$ evaluates the fine-grained batch-level forgetting on all tasks regardless of the accuracy value. We obtain a good $F_T$ everywhere except on Split CIFAR, where it is slightly worse than MER's. MER has a better $F_T$ but a poor $A_T$ because it adopts a complex meta-learning strategy. $LCA_{10}$ evaluates the training speed on the first 10 training batches of each task; our method has the best $LCA_{10}$ only on Split CUB. This is because the TAM and ED losses may slow the early training to mitigate domain overlap, while the subsequent training improves significantly. $LTR$ focuses on long-term remembering, and our method outperforms the other methods on all datasets except Split CUB. We think this is because CUB contains similar classes of birds, so the TAM and ED losses have less impact due to the similar representations. In Fig. 4, we show the average accuracy trends over the continual process (from the first task to the last), which also indicate the better performance of MDMT-R.
In Tab. 2, we then analyze the importance of the main components, the TAM and ED losses, on Split CIFAR. The first row gives the results with only the vanilla softmax. Adding the ED loss brings a small improvement in average accuracy. Adding the TAM loss yields larger gains, and we select the best $m_1$ and $m_2$ as the hyperparameters, where $m_1=0$ and $m_2=0$ corresponds to the Cross-Domain Softmax. Adding both the TAM and ED losses yields a dramatic improvement over the vanilla softmax and the state-of-the-art methods, which means the TAM and ED losses can significantly reduce catastrophic forgetting.
Domain shift observation
In this section, we show some observations of the domain shift using t-distributed Stochastic Neighbor Embedding (t-SNE) (Maaten and Hinton 2008) on Permuted MNIST. First, to intuitively reflect the task relations of the proposed method during training, we visualize in Fig. 5 the final feature distributions, i.e., after training on task 17, of tasks 1, 9 and 17. A-GEM and MEGA cannot guarantee the task boundaries, which generates mixed areas and makes the tasks easy to misclassify. The proposed MDMT rehearsal separates each class within the three tasks while obtaining explicit task boundaries, which means the proposed method encourages the intra-class/task compactness and inter-class/task discrepancy. In Fig. 6, we also show the domain shift of task 1 after the model is trained on tasks 1, 9 and 17, respectively. The previous methods A-GEM and MEGA cannot reduce the domain shift at all, which makes them susceptible to forgetting. The proposed MDMT rehearsal method can significantly mitigate the unpredictable domain shift. Without the ED loss, our MDMT rehearsal still exhibits some unpredictable domain shift (e.g., task 1 after tasks 1 and 9) because of the shrinkage of the training data.
Conclusion
In this paper, we addressed catastrophic forgetting, a major drawback in state-of-the-art lifelong learning, by considering the unpredictable domain shift of old tasks in the training sequence. To this end, we proposed a Multi-Domain Multi-Task rehearsal method, which effectively makes all tasks perceive each other. We then proposed a Two-level Angular Margin loss to further encourage the intra-class/task compactness and inter-class/task discrepancy. Finally, an optional Episodic Distillation loss was proposed to mitigate the domain shift. We tested the proposed approach on four image classification benchmark datasets; extensive experiments show the superiority of our approach over state-of-the-art methods.
Acknowledgment
This work was supported by the Natural Science Foundation of China (Nos. 62072334, 61671325, 61876121, 61672376 and U1803264) and the Jiangsu Provincial Key Research and Development Program (No. BE2017663). The authors would like to thank the experienced reviewers and AE for their constructive and valuable suggestions.
References
- Aljundi et al. (2018) Aljundi, R.; Babiloni, F.; Elhoseiny, M.; Rohrbach, M.; and Tuytelaars, T. 2018. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV).
- Chaudhry et al. (2018a) Chaudhry, A.; Dokania, P. K.; Ajanthan, T.; and Torr, P. H. 2018a. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In Proceedings of the European Conference on Computer Vision (ECCV).
- Chaudhry et al. (2018b) Chaudhry, A.; Ranzato, M.; Rohrbach, M.; and Elhoseiny, M. 2018b. Efficient Lifelong Learning with A-GEM. In International Conference on Learning Representations.
- Chaudhry et al. (2019a) Chaudhry, A.; Rohrbach, M.; Elhoseiny, M.; Ajanthan, T.; Dokania, P. K.; Torr, P. H.; and Ranzato, M. 2019a. On tiny episodic memories in continual learning. arXiv preprint arXiv:1902.10486.
- Chaudhry et al. (2019b) Chaudhry, A.; Rohrbach, M.; Elhoseiny, M.; Ajanthan, T.; Dokania, P. K.; Torr, P. H. S.; and Ranzato, M. 2019b. On Tiny Episodic Memories in Continual Learning. arXiv preprint arXiv:1902.10486.
- De Lange et al. (2019) De Lange, M.; Aljundi, R.; Masana, M.; Parisot, S.; Jia, X.; Leonardis, A.; Slabaugh, G.; and Tuytelaars, T. 2019. Continual learning: A comparative study on how to defy forgetting in classification tasks. arXiv preprint arXiv:1909.08383.
- Deng et al. (2018) Deng, C.; Wu, Q.; Wu, Q.; Hu, F.; Lyu, F.; and Tan, M. 2018. Visual grounding via accumulated attention. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7746–7755.
- Deng et al. (2019) Deng, J.; Guo, J.; Xue, N.; and Zafeiriou, S. 2019. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- Dhar et al. (2019) Dhar, P.; Singh, R. V.; Peng, K.-C.; Wu, Z.; and Chellappa, R. 2019. Learning without memorizing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- Fourure et al. (2017) Fourure, D.; Emonet, R.; Fromont, É.; Muselet, D.; Neverova, N.; Trémeau, A.; and Wolf, C. 2017. Multi-task, multi-domain learning: Application to semantic segmentation and pose regression. Neurocomputing.
- French (1999) French, R. M. 1999. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences.
- Guo et al. (2019) Guo, Y.; Liu, M.; Yang, T.; and Rosing, T. 2019. Learning with Long-term Remembering: Following the Lead of Mixed Stochastic Gradient. arXiv preprint arXiv:1909.11763.
- He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition.
- Hinton, Vinyals, and Dean (2015) Hinton, G. E.; Vinyals, O.; and Dean, J. 2015. Distilling the Knowledge in a Neural Network. CoRR.
- Hou et al. (2018) Hou, S.; Pan, X.; Change Loy, C.; Wang, Z.; and Lin, D. 2018. Lifelong learning via progressive distillation and retrospection. In Proceedings of the European Conference on Computer Vision (ECCV).
- Kirkpatrick et al. (2017) Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences.
- Krizhevsky, Hinton et al. (2009) Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
- Lampert, Nickisch, and Harmeling (2009) Lampert, C. H.; Nickisch, H.; and Harmeling, S. 2009. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Conference on Computer Vision and Pattern Recognition.
- Lesort et al. (2019) Lesort, T.; Gepperth, A.; Stoian, A.; and Filliat, D. 2019. Marginal replay vs conditional replay for continual learning. In International Conference on Artificial Neural Networks.
- Li and Hoiem (2016) Li, Z.; and Hoiem, D. 2016. Learning Without Forgetting. In European Conference on Computer Vision.
- Lin et al. (2019) Lin, X.; Zhen, H.-L.; Li, Z.; Zhang, Q.-F.; and Kwong, S. 2019. Pareto Multi-Task Learning. In Advances in Neural Information Processing Systems.
- Liu et al. (2017) Liu, W.; Wen, Y.; Yu, Z.; Li, M.; Raj, B.; and Song, L. 2017. SphereFace: Deep Hypersphere Embedding for Face Recognition. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017.
- Liu et al. (2016) Liu, W.; Wen, Y.; Yu, Z.; and Yang, M. 2016. Large-margin softmax loss for convolutional neural networks. In ICML.
- Lopez-Paz and Ranzato (2017) Lopez-Paz, D.; and Ranzato, M. 2017. Gradient episodic memory for continual learning. In Advances in neural information processing systems.
- Lyu, Feng, and Wang (2020) Lyu, F.; Feng, W.; and Wang, S. 2020. vtGraphNet: Learning weakly-supervised scene graph for complex visual grounding. Neurocomputing 51–60.
- Lyu et al. (2019) Lyu, F.; Wu, Q.; Hu, F.; Wu, Q.; and Tan, M. 2019. Attend and imagine: Multi-label image classification with visual attention and recurrent neural networks. IEEE Transactions on Multimedia 1971–1981.
- Maaten and Hinton (2008) Maaten, L. v. d.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of machine learning research.
- Mallya, Davis, and Lazebnik (2018) Mallya, A.; Davis, D.; and Lazebnik, S. 2018. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In Proceedings of the European Conference on Computer Vision (ECCV).
- Nam and Han (2016) Nam, H.; and Han, B. 2016. Learning multi-domain convolutional neural networks for visual tracking. In Proceedings of the IEEE conference on computer vision and pattern recognition.
- Pellegrini et al. (2019) Pellegrini, L.; Graffieti, G.; Lomonaco, V.; and Maltoni, D. 2019. Latent replay for real-time continual learning. arXiv preprint arXiv:1912.01100.
- Peng and Dredze (2016) Peng, N.; and Dredze, M. 2016. Multi-task Multi-domain Representation Learning for Sequence Tagging. CoRR.
- Rebuffi, Kolesnikov, and Lampert (2016) Rebuffi, S.; Kolesnikov, A.; and Lampert, C. H. 2016. iCaRL: Incremental Classifier and Representation Learning. CoRR.
- Riemer et al. (2018) Riemer, M.; Cases, I.; Ajemian, R.; Liu, M.; Rish, I.; Tu, Y.; and Tesauro, G. 2018. Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint arXiv:1810.11910.
- Ring (1997) Ring, M. B. 1997. CHILD: A first step towards continual learning. Machine Learning.
- Rolnick et al. (2019) Rolnick, D.; Ahuja, A.; Schwarz, J.; Lillicrap, T.; and Wayne, G. 2019. Experience replay for continual learning. In Advances in Neural Information Processing Systems.
- Sener and Koltun (2018) Sener, O.; and Koltun, V. 2018. Multi-task learning as multi-objective optimization. In Advances in Neural Information Processing Systems.
- Shen et al. (2020) Shen, G.; Zhang, S.; Chen, X.; and Deng, Z.-H. 2020. Generative Feature Replay with Orthogonal Weight Modification for Continual Learning. arXiv preprint arXiv:2005.03490.
- Tang and Jia (2020) Tang, H.; and Jia, K. 2020. Discriminative Adversarial Domain Adaptation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020.
- Thrun (1998) Thrun, S. 1998. Lifelong learning algorithms. In Learning to learn. Springer.
- van de Ven and Tolias (2018) van de Ven, G. M.; and Tolias, A. S. 2018. Generative replay with feedback connections as a general strategy for continual learning. arXiv preprint arXiv:1809.10635.
- Wah et al. (2011) Wah, C.; Branson, S.; Welinder, P.; Perona, P.; and Belongie, S. 2011. The Caltech-UCSD Birds-200-2011 Dataset.
- Wang et al. (2018a) Wang, F.; Cheng, J.; Liu, W.; and Liu, H. 2018a. Additive Margin Softmax for Face Verification. IEEE Signal Processing Letters.
- Wang et al. (2018b) Wang, H.; Wang, Y.; Zhou, Z.; Ji, X.; Gong, D.; Zhou, J.; Li, Z.; and Liu, W. 2018b. CosFace: Large Margin Cosine Loss for Deep Face Recognition. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018.
- Yang and Hospedales (2014) Yang, Y.; and Hospedales, T. M. 2014. A Unified Perspective on Multi-Domain and Multi-Task Learning. In International Conference on Learning Representations.
- Yoon et al. (2017) Yoon, J.; Yang, E.; Lee, J.; and Hwang, S. J. 2017. Lifelong learning with dynamically expandable networks. arXiv preprint arXiv:1708.01547.
- Zenke, Poole, and Ganguli (2017) Zenke, F.; Poole, B.; and Ganguli, S. 2017. Continual learning through synaptic intelligence. Proceedings of machine learning research.
Appendix
Evaluation Metrics
Average Accuracy ($A_k$). The average accuracy after the model has been trained continually with all the mini-batches up till task $k$ is defined as
\[
A_k=\frac{1}{k}\sum_{j=1}^{k}a_{k,j}, \tag{11}
\]
where $a_{k,j}$ is the accuracy on task $j$ after the model has been trained on all the mini-batches up till task $k$. In particular, $A_T$ is the average accuracy on all the tasks after the last task has been learned.
Forgetting Measure (Chaudhry et al. 2018a) ($F_k$). The average forgetting after the model has been trained continually with all the mini-batches up till task $k$ is defined as
\[
F_k=\frac{1}{k-1}\sum_{j=1}^{k-1}f_j^k, \tag{12}
\]
where $f_j^k$ is the forgetting on task $j$ after the model is trained with all the mini-batches up till task $k$, computed as
\[
f_j^k=\max_{l\in\{1,\dots,k-1\}}a_{l,j}-a_{k,j}. \tag{13}
\]
Learning Curve Area (Chaudhry et al. 2018a) ($LCA_\beta$). $LCA_\beta$ is the area under the convergence curve of the average $b$-shot performance after the model has been trained on all the tasks, where $b\le\beta$:
\[
LCA_\beta=\frac{1}{\beta+1}\sum_{b=0}^{\beta}Z_b, \tag{14}
\]
where
\[
Z_b=\frac{1}{T}\sum_{k=1}^{T}a_{k,b,k} \tag{15}
\]
and $a_{k,b,k}$ is the accuracy on task $k$ after the model is trained on the $b$-th mini-batch of task $k$.
Intuitively, LCA measures the learning speed of different lifelong learning algorithms. A higher value of LCA indicates that the model learns quickly.
Long-Term Remembering (Guo et al. 2019) ($LTR$). $LTR$ quantifies the accuracy drop on each task relative to the accuracy right after that task has been learned, defined as
\[
LTR=\sum_{j=1}^{T}\big|\min\big(0,\ a_{T,j}-a_{j,j}\big)\big|. \tag{16}
\]
After the model is trained on all the tasks, $LTR$ quantifies the accuracy drop on task $j$ relative to $a_{j,j}$.
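As a worked illustration, the helper below computes $A_T$, $F_T$ and $LTR$ from an accuracy matrix with the same layout as the raw records reported at the end of this appendix; `lifelong_metrics` is an illustrative name, not the authors' evaluation code.

```python
# A small NumPy helper matching Eqs. (11), (12)-(13) and (16): R[i, j] is
# the test accuracy on task j after training on task i, exactly the layout
# of the raw records reported below.
import numpy as np

def lifelong_metrics(R):
    T = R.shape[0]
    A_T = R[T - 1].mean()                                   # Eq. (11)
    F_T = np.mean([R[:T - 1, j].max() - R[T - 1, j]         # Eqs. (12)-(13)
                   for j in range(T - 1)])
    LTR = sum(abs(min(0.0, R[T - 1, j] - R[j, j]))          # Eq. (16)
              for j in range(T))
    return A_T, F_T, LTR
```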
Implementation details
Seed initialization. We implement our method with Tensorflow. The results are the average of 5 runs, where the seeds of Numpy and Tensorflow are both from 1234 to 1238 for all compared methods.
Selected classes. For Split CUB and AWA, A-GEM has provided fixed class selections. For Permuted MNIST, each task has a fixed random permutation obtained by shuffling the pixels in the images; the seeds are the same as in training. Split CIFAR has 20 disjoint subsets obtained by extracting 5 classes in sequence from class 1 to 100. Train/test splits: for Permuted MNIST and Split CIFAR, the data splits are the same as the vanilla MNIST and CIFAR; for Split CUB and AWA, A-GEM has provided fixed train/test lists.
Hyper-parameter selection. We report the hyper-parameters considered for the different experiments, i.e., $m_1$, $m_2$ and $s$. The search spaces are: $m_1$: [0.01, 0.05, 0.1 (Permuted MNIST, Split CIFAR), 0.2 (Split AWA), 0.3, 0.4 (Split CUB), 0.5]; $m_2$: [0.01 (Permuted MNIST, Split CIFAR), 0.02 (Split AWA), 0.03, 0.04, 0.05 (Split CUB), 0.06, …, 0.1]; $s$: [12, 16, 20 (Split CUB), 24 (Split CIFAR), 28, 32 (Permuted MNIST, Split AWA), 36, …, 64]. Note that we do not conduct an exhaustive hyper-parameter search on the learning rate and regularization factors, and reuse the same values as A-GEM (Chaudhry et al. 2018b); the learning rates for each dataset are likewise reused from A-GEM.
Table 3: Inference time (ms) on the four datasets and memory cost of the training and testing procedures.

| Method | MNIST | CIFAR | CUB | AWA | Training | Testing |
|---|---|---|---|---|---|---|
| VAN | - | - | - | - | | |
| EWC | - | - | - | - | | |
| GEM | - | - | - | - | | |
| A-GEM | 32.58 | 28.74 | 81.83 | 97.47 | | |
| MEGA | 37.63 | 26.38 | 84.96 | 109.67 | | |
| MDMT-R | 35.16 | 30.09 | 90.47 | 97.51 | | |
Evolution of LCA
We show the evolution of $Z_b$ during the first ten mini-batches on the four datasets, where $Z_b$ represents the learning speed of the new task over the first $b$ mini-batches. As shown in Fig. 7, the proposed MDMT rehearsal based lifelong learning outperforms the other methods on Split CUB, but is slightly worse than MEGA on the other datasets. In our opinion, this is because the ED loss tries to keep the model from falling into catastrophic forgetting at the beginning of training. Thus, the proposed MDMT rehearsal based lifelong learning does not achieve fast improvements at the beginning of each task.
Computational cost and memory complexity
We show the computational cost and memory complexity in Table 3, evaluating the training time on a single RTX 2080Ti GPU card. It is easy to see that the inference time of the proposed method is comparable with A-GEM and MEGA on the four datasets. We can also see the memory cost of the training and testing procedures on the right of Table 3. Compared to the previous baselines based on episodic memory, the cost of the proposed method has a slight increase, which arises from saving the latent representations of the old tasks. However, such an increase is very small compared to the total training cost, because in most situations the latent representation (a vector) is smaller than an image (a 3-channel tensor).
Raw Record of Learning Process
In this section, we report the raw records of the learning process for the proposed method, MEGA and A-GEM. The $(i, j)$ entry of the matrix for each method is the test accuracy on the $j$-th task after the model is trained on the $i$-th task.

PERMUTED MNIST
MDMT-R:
0.9658 0.1207 0.1267 0.0846 0.1393 0.0835 0.1005 0.0883 0.1218 0.1004 0.0952 0.1267 0.1263 0.1041 0.0929 0.0910 0.0995
0.9638 0.9676 0.1059 0.0995 0.1204 0.0740 0.0849 0.0955 0.1126 0.1292 0.1146 0.1180 0.1302 0.0945 0.0867 0.1021 0.0981
0.9574 0.9638 0.9612 0.0842 0.0935 0.0925 0.0793 0.1035 0.1002 0.1226 0.1073 0.1043 0.1086 0.1024 0.0962 0.0706 0.0822
0.9548 0.9596 0.9652 0.9554 0.0954 0.0790 0.1026 0.1120 0.0935 0.1198 0.0961 0.1186 0.1247 0.0833 0.1099 0.0753 0.0821
0.9498 0.9578 0.9607 0.9660 0.9679 0.0854 0.0889 0.1347 0.1134 0.1258 0.1014 0.1410 0.1396 0.1091 0.0994 0.0822 0.1033
0.9477 0.9544 0.9595 0.9627 0.9655 0.9662 0.0998 0.1316 0.1121 0.1262 0.1203 0.1498 0.1303 0.0968 0.1044 0.0817 0.0903
0.9451 0.9532 0.9568 0.9607 0.9630 0.9636 0.9669 0.1310 0.1004 0.1341 0.1004 0.1286 0.1438 0.1127 0.1124 0.0834 0.0990
0.9415 0.9503 0.9550 0.9559 0.9610 0.9588 0.9644 0.9612 0.0977 0.1129 0.0831 0.1223 0.1353 0.1027 0.1154 0.0748 0.0992
0.9359 0.9466 0.9483 0.9543 0.9574 0.9557 0.9600 0.9609 0.9625 0.1141 0.1028 0.1318 0.1331 0.1005 0.1128 0.0909 0.1003
0.9326 0.9424 0.9466 0.9529 0.9536 0.9563 0.9572 0.9590 0.9632 0.9621 0.0958 0.1082 0.1196 0.1095 0.1116 0.0762 0.1005
0.9313 0.9426 0.9465 0.9499 0.9512 0.9528 0.9556 0.9557 0.9598 0.9627 0.9660 0.1218 0.1369 0.1405 0.1088 0.0821 0.1195
0.9273 0.9397 0.9438 0.9450 0.9468 0.9513 0.9506 0.9540 0.9571 0.9605 0.9640 0.9630 0.1645 0.1249 0.0974 0.0684 0.1202
0.9224 0.9363 0.9376 0.9446 0.9469 0.9466 0.9498 0.9496 0.9541 0.9588 0.9600 0.9619 0.9630 0.1373 0.1011 0.0633 0.1328
0.9178 0.9326 0.9377 0.9418 0.9419 0.9453 0.9481 0.9484 0.9518 0.9562 0.9586 0.9594 0.9618 0.9649 0.1074 0.0798 0.1247
0.9167 0.9296 0.9341 0.9349 0.9356 0.9439 0.9417 0.9460 0.9495 0.9535 0.9553 0.9558 0.9596 0.9623 0.9629 0.0775 0.1267
0.9132 0.9269 0.9314 0.9338 0.9346 0.9418 0.9414 0.9443 0.9478 0.9511 0.9533 0.9544 0.9559 0.9597 0.9628 0.9663 0.1293
0.9111 0.9221 0.9277 0.9303 0.9341 0.9387 0.9398 0.9409 0.9451 0.9491 0.9495 0.9520 0.9555 0.9566 0.9595 0.9625 0.9621
MEGA:
0.9613 0.1091 0.1229 0.0832 0.1374 0.0708 0.0907 0.1017 0.1165 0.1286 0.0979 0.1182 0.1188 0.0886 0.0968 0.0854 0.0928
0.9535 0.9645 0.0895 0.0997 0.1191 0.0685 0.0803 0.1022 0.1165 0.1472 0.1054 0.1112 0.1264 0.1027 0.0872 0.0979 0.0993
0.9391 0.9556 0.9596 0.0996 0.1020 0.0900 0.0807 0.1083 0.0959 0.1400 0.1001 0.1012 0.1096 0.1085 0.0977 0.0716 0.0768
0.9295 0.9473 0.9527 0.9477 0.1113 0.0725 0.0856 0.1033 0.0884 0.1209 0.0847 0.1149 0.1285 0.0939 0.1193 0.0867 0.0824
0.9206 0.9405 0.9437 0.9569 0.9611 0.0785 0.0884 0.1113 0.0926 0.1189 0.0936 0.1337 0.1544 0.1154 0.1282 0.1010 0.0994
0.9119 0.9347 0.9378 0.9481 0.9547 0.9594 0.0916 0.1200 0.1013 0.1042 0.0908 0.1380 0.1415 0.1199 0.1210 0.0908 0.0847
0.9083 0.9261 0.9348 0.9419 0.9478 0.9556 0.9575 0.1092 0.1040 0.1083 0.0863 0.1202 0.1286 0.1177 0.1250 0.0801 0.0931
0.9100 0.9192 0.9291 0.9332 0.9419 0.9462 0.9527 0.9598 0.1152 0.1132 0.0945 0.1054 0.1248 0.1228 0.1187 0.0945 0.0934
0.9022 0.9133 0.9215 0.9271 0.9344 0.9381 0.9430 0.9476 0.9551 0.1187 0.1162 0.1119 0.1364 0.1249 0.1108 0.1012 0.1059
0.8974 0.9074 0.9147 0.9242 0.9289 0.9348 0.9367 0.9404 0.9519 0.9571 0.1102 0.1032 0.1536 0.1261 0.1122 0.1036 0.1093
0.8957 0.9042 0.9146 0.9193 0.9229 0.9313 0.9321 0.9318 0.9409 0.9545 0.9591 0.1144 0.1368 0.1373 0.1143 0.1092 0.1169
0.8863 0.8981 0.9056 0.9127 0.9148 0.9220 0.9270 0.9249 0.9335 0.9444 0.9528 0.9564 0.1517 0.1103 0.1051 0.1002 0.1390
0.8840 0.8992 0.9054 0.9085 0.9149 0.9179 0.9238 0.9198 0.9304 0.9419 0.9441 0.9523 0.9570 0.1292 0.1083 0.0926 0.1301
0.8808 0.8901 0.8994 0.8986 0.9084 0.9113 0.9168 0.9185 0.9239 0.9360 0.9382 0.9453 0.9522 0.9589 0.1017 0.0941 0.1277
0.8770 0.8850 0.8957 0.8926 0.9012 0.9101 0.9090 0.9132 0.9188 0.9301 0.9358 0.9368 0.9430 0.9508 0.9521 0.0946 0.1334
0.8752 0.8806 0.8911 0.8854 0.8965 0.9070 0.9062 0.9059 0.9145 0.9265 0.9286 0.9338 0.9374 0.9434 0.9462 0.9601 0.1291
0.8732 0.8765 0.8824 0.8809 0.8945 0.9024 0.9016 0.9007 0.9088 0.9202 0.9228 0.9276 0.9279 0.9376 0.9408 0.9521 0.9556
A-GEM:
0.9613 0.1091 0.1229 0.0832 0.1374 0.0708 0.0907 0.1017 0.1165 0.1286 0.0979 0.1182 0.1188 0.0886 0.0968 0.0854 0.0928
0.9509 0.9645 0.0956 0.0991 0.1304 0.0696 0.0840 0.1033 0.1219 0.1454 0.1064 0.1133 0.1314 0.1043 0.0883 0.0979 0.0973
0.9410 0.9545 0.9615 0.0995 0.0964 0.0921 0.0710 0.1126 0.1176 0.1402 0.1112 0.1026 0.1185 0.1204 0.1077 0.0779 0.0761
0.9299 0.9450 0.9540 0.9546 0.1046 0.0788 0.0959 0.1033 0.1096 0.1266 0.1015 0.1152 0.1476 0.0885 0.1375 0.0984 0.0831
0.9151 0.9361 0.9425 0.9551 0.9588 0.0803 0.0809 0.1143 0.1063 0.1227 0.1066 0.1253 0.1436 0.1154 0.1131 0.1079 0.0915
0.9068 0.9312 0.9401 0.9450 0.9566 0.9590 0.0892 0.1189 0.1285 0.1086 0.1007 0.1433 0.1279 0.1179 0.1097 0.0892 0.0865
0.9015 0.9228 0.9339 0.9385 0.9473 0.9548 0.9586 0.1063 0.1073 0.1102 0.1048 0.1164 0.1291 0.1284 0.1341 0.0854 0.1024
0.8980 0.9155 0.9248 0.9280 0.9356 0.9403 0.9539 0.9580 0.1015 0.1231 0.1129 0.1125 0.1267 0.1133 0.1220 0.0921 0.0985
0.8952 0.9055 0.9201 0.9182 0.9273 0.9310 0.9447 0.9435 0.9512 0.1098 0.1374 0.1166 0.1264 0.1064 0.1183 0.0986 0.1048
0.8846 0.8996 0.9083 0.9154 0.9189 0.9267 0.9363 0.9339 0.9513 0.9558 0.1243 0.1095 0.1179 0.1137 0.1126 0.0945 0.0979
0.8764 0.8977 0.9011 0.9073 0.9086 0.9167 0.9292 0.9274 0.9386 0.9481 0.9631 0.1116 0.1099 0.1417 0.1100 0.0975 0.1166
0.8710 0.8882 0.8937 0.8922 0.9043 0.9077 0.9151 0.9174 0.9279 0.9324 0.9518 0.9572 0.1346 0.1240 0.0964 0.0930 0.1283
0.8625 0.8822 0.8847 0.8855 0.9013 0.8990 0.9093 0.9088 0.9189 0.9295 0.9411 0.9458 0.9533 0.1309 0.1059 0.0987 0.1139
0.8581 0.8784 0.8774 0.8817 0.8954 0.8938 0.8986 0.9003 0.9082 0.9225 0.9307 0.9350 0.9435 0.9603 0.1048 0.1023 0.1048
0.8492 0.8674 0.8732 0.8697 0.8828 0.8826 0.8930 0.8898 0.8962 0.9098 0.9184 0.9267 0.9290 0.9425 0.9542 0.1012 0.1070
0.8322 0.8700 0.8644 0.8493 0.8765 0.8787 0.8904 0.8848 0.8883 0.8979 0.9110 0.9158 0.9177 0.9299 0.9403 0.9609 0.1179
0.8438 0.8603 0.8555 0.8488 0.8864 0.8785 0.8798 0.8702 0.8916 0.8968 0.9076 0.9094 0.9092 0.9228 0.9228 0.9463 0.9551
SPLIT CIFAR
MDMT-R:
0.6828 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6704 0.6292 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6912 0.6136 0.6596 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6732 0.5828 0.6316 0.7064 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6604 0.5944 0.6288 0.7012 0.7120 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6736 0.6140 0.5988 0.6792 0.7064 0.7584 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6680 0.5944 0.6068 0.6752 0.6976 0.7800 0.6996 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6824 0.6036 0.6096 0.6780 0.6780 0.7236 0.6952 0.7236 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6716 0.6028 0.6084 0.6812 0.6840 0.7396 0.6772 0.7120 0.7744 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6872 0.6136 0.6140 0.6848 0.6864 0.7432 0.6728 0.6916 0.7492 0.7052 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6604 0.6040 0.6104 0.6908 0.6788 0.7368 0.6764 0.6980 0.7628 0.6700 0.7592 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6548 0.5888 0.6136 0.6784 0.6832 0.7136 0.6836 0.6744 0.7440 0.6472 0.7460 0.6820 0.0000 0.0000 0.0000 0.0000 0.0000
0.6484 0.5964 0.6100 0.6640 0.6744 0.7224 0.6540 0.6804 0.7224 0.6504 0.7264 0.6728 0.7424 0.0000 0.0000 0.0000 0.0000
0.6628 0.5936 0.6152 0.7008 0.6940 0.7272 0.6624 0.6788 0.7372 0.6504 0.7232 0.6712 0.7256 0.7772 0.0000 0.0000 0.0000
0.6648 0.5928 0.6204 0.6936 0.6848 0.7172 0.6624 0.6900 0.7308 0.6484 0.7168 0.6424 0.7336 0.7672 0.6684 0.0000 0.0000
0.6704 0.5920 0.6152 0.6984 0.6988 0.7260 0.6732 0.6764 0.7260 0.6464 0.7064 0.6292 0.7216 0.7520 0.6780 0.7612 0.0000
0.6736 0.6140 0.6168 0.6928 0.6996 0.7228 0.6792 0.6908 0.7284 0.6736 0.7136 0.6452 0.7208 0.7496 0.6492 0.7416 0.7920
MEGA:
0.6472 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6260 0.5824 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6324 0.5700 0.6300 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6236 0.5496 0.5624 0.6452 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6132 0.5612 0.6048 0.6736 0.6960 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6140 0.5628 0.5692 0.6632 0.6792 0.7688 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5780 0.5500 0.5864 0.6364 0.6792 0.7420 0.6868 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5756 0.5308 0.5764 0.6292 0.6500 0.6820 0.6580 0.6580 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6212 0.5704 0.5876 0.6376 0.6600 0.7056 0.6636 0.6916 0.7376 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5992 0.5580 0.5828 0.6212 0.6528 0.6512 0.6496 0.6700 0.7276 0.6732 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6104 0.5552 0.5804 0.6396 0.6700 0.6960 0.6568 0.6752 0.7412 0.6752 0.7432 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5880 0.5516 0.5816 0.6248 0.6552 0.6568 0.6412 0.6284 0.7044 0.6304 0.7196 0.6660 0.0000 0.0000 0.0000 0.0000 0.0000
0.6020 0.5628 0.5792 0.6164 0.6444 0.6636 0.6356 0.6536 0.7008 0.6132 0.7000 0.6336 0.7108 0.0000 0.0000 0.0000 0.0000
0.6124 0.5692 0.5924 0.6420 0.6516 0.6912 0.6352 0.6492 0.6848 0.6400 0.6872 0.6312 0.7280 0.7596 0.0000 0.0000 0.0000
0.6012 0.5468 0.5908 0.6128 0.6552 0.6852 0.6288 0.6428 0.6704 0.6176 0.6988 0.6268 0.7088 0.7348 0.6324 0.0000 0.0000
0.6244 0.5588 0.5960 0.6432 0.6448 0.6868 0.6388 0.6540 0.6896 0.6056 0.6920 0.6192 0.7124 0.7344 0.6560 0.7604 0.0000
0.6088 0.5896 0.5840 0.6552 0.6716 0.6904 0.6584 0.6372 0.7032 0.6300 0.6900 0.5864 0.6832 0.7140 0.6348 0.7264 0.7780
A-GEM:
0.6772 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5948 0.5764 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6324 0.5828 0.6432 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5980 0.5384 0.5396 0.6456 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5864 0.5404 0.5576 0.6436 0.7004 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5728 0.5392 0.5068 0.5940 0.6344 0.7180 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5572 0.5404 0.5308 0.6224 0.6116 0.6520 0.6688 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6064 0.5356 0.5492 0.5872 0.6164 0.6532 0.6296 0.6724 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6060 0.5472 0.5528 0.6236 0.5920 0.6348 0.6076 0.6348 0.6972 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6004 0.5080 0.4960 0.6128 0.5656 0.6356 0.5752 0.6140 0.6580 0.6792 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5992 0.5408 0.5332 0.5964 0.5928 0.6520 0.5928 0.6304 0.6764 0.5916 0.7364 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5748 0.5020 0.5104 0.5936 0.6016 0.6184 0.5772 0.6164 0.6408 0.5688 0.6776 0.6436 0.0000 0.0000 0.0000 0.0000 0.0000
0.6056 0.5100 0.5200 0.5916 0.6012 0.6056 0.5816 0.6060 0.6236 0.5808 0.6288 0.5768 0.7332 0.0000 0.0000 0.0000 0.0000
0.6184 0.5344 0.5308 0.5888 0.6116 0.6188 0.6012 0.6248 0.6136 0.5836 0.6428 0.5688 0.6524 0.7392 0.0000 0.0000 0.0000
0.6012 0.5220 0.5488 0.6008 0.5828 0.6048 0.5728 0.5884 0.6356 0.5740 0.6476 0.5540 0.6520 0.6804 0.6460 0.0000 0.0000
0.5984 0.5360 0.5520 0.5808 0.5704 0.6184 0.6068 0.6108 0.6452 0.5404 0.6520 0.5256 0.6624 0.6512 0.5864 0.7388 0.0000
0.6232 0.5356 0.5412 0.6104 0.6080 0.6248 0.5944 0.5900 0.6492 0.5872 0.6468 0.5352 0.6336 0.6520 0.5908 0.6444 0.7508
SPLIT CUB
MDMT-R:
0.4895 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6745 0.5760 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7401 0.7400 0.6662 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7442 0.7663 0.7934 0.7369 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7583 0.7702 0.7941 0.8119 0.6626 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7710 0.7837 0.7941 0.7424 0.7669 0.7327 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7811 0.7697 0.8204 0.8044 0.7630 0.8067 0.7581 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7971 0.7810 0.8303 0.8311 0.8019 0.8095 0.8337 0.7485 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7703 0.8008 0.8249 0.8081 0.7980 0.8369 0.8662 0.7686 0.7511 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.8047 0.8215 0.8316 0.8165 0.8089 0.8049 0.8463 0.7734 0.7968 0.7640 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7924 0.8011 0.8303 0.8435 0.8126 0.8358 0.8531 0.7990 0.7823 0.8107 0.8219 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7886 0.7963 0.8114 0.8250 0.7875 0.8222 0.8249 0.8164 0.7612 0.8091 0.8305 0.7917 0.0000 0.0000 0.0000 0.0000 0.0000
0.8000 0.8034 0.8516 0.8224 0.8052 0.8355 0.8185 0.7993 0.7778 0.8177 0.8461 0.8107 0.8405 0.0000 0.0000 0.0000 0.0000
0.8276 0.8085 0.8549 0.8120 0.8289 0.8487 0.8130 0.8041 0.7675 0.8183 0.8329 0.8060 0.8332 0.7764 0.0000 0.0000 0.0000
0.7938 0.8199 0.8154 0.8069 0.8141 0.8358 0.8395 0.7959 0.8041 0.8286 0.8365 0.8043 0.8527 0.7462 0.8142 0.0000 0.0000
0.8215 0.7984 0.8517 0.8471 0.8237 0.8352 0.8538 0.8115 0.7959 0.8302 0.8366 0.8304 0.8494 0.8036 0.8071 0.8064 0.0000
0.8342 0.8331 0.8544 0.8695 0.8449 0.8628 0.8789 0.8328 0.8235 0.8601 0.8938 0.8434 0.8538 0.8096 0.8195 0.8166 0.7844
MEGA:
0.4050 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6876 0.5088 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6844 0.7244 0.6263 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7071 0.7445 0.7753 0.6553 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7294 0.7483 0.7941 0.7787 0.6857 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7049 0.7266 0.7712 0.7538 0.7528 0.6366 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7271 0.7488 0.7877 0.7893 0.7861 0.7905 0.6830 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7316 0.7737 0.7920 0.7867 0.7851 0.7956 0.8179 0.6816 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7503 0.7525 0.7860 0.7947 0.7838 0.7818 0.8148 0.7912 0.6934 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7445 0.7475 0.7819 0.7599 0.7680 0.7834 0.7901 0.7858 0.7883 0.7085 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7506 0.7732 0.8053 0.8129 0.8049 0.8099 0.8275 0.7938 0.8041 0.8146 0.7211 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.7475 0.7670 0.7860 0.7982 0.7818 0.7892 0.8133 0.7881 0.8014 0.8145 0.7958 0.7144 0.0000 0.0000 0.0000 0.0000 0.0000
0.7393 0.7572 0.7907 0.8006 0.7821 0.8043 0.8123 0.7629 0.7905 0.7902 0.8161 0.7622 0.7262 0.0000 0.0000 0.0000 0.0000
0.7492 0.7501 0.8031 0.8098 0.7956 0.8020 0.8053 0.7811 0.7873 0.8081 0.8159 0.7672 0.7981 0.6794 0.0000 0.0000 0.0000
0.7488 0.7661 0.7958 0.7830 0.7845 0.7873 0.8007 0.7759 0.7902 0.8139 0.8142 0.7746 0.7984 0.7651 0.6831 0.0000 0.0000
0.7538 0.7857 0.8116 0.8170 0.8050 0.8052 0.8277 0.8024 0.8070 0.8220 0.8301 0.7767 0.8210 0.7839 0.7777 0.7505 0.0000
0.7728 0.7846 0.8203 0.7995 0.8141 0.8219 0.8201 0.7973 0.8109 0.8366 0.8390 0.7880 0.8300 0.7948 0.8078 0.8112 0.7418
A-GEM:
0.4263 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4383 0.5243 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4642 0.5220 0.6064 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4850 0.5420 0.6057 0.6765 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4935 0.5378 0.6328 0.6042 0.6621 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4678 0.5071 0.6369 0.5585 0.5966 0.6137 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4909 0.5630 0.6435 0.6369 0.6140 0.6278 0.6825 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4928 0.5607 0.6157 0.5992 0.6025 0.6037 0.6572 0.6617 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4850 0.5401 0.6090 0.5830 0.6004 0.5980 0.6384 0.6520 0.6850 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4901 0.5542 0.6086 0.5858 0.6080 0.5985 0.6206 0.6533 0.6607 0.6964 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4909 0.5455 0.6465 0.6220 0.6241 0.6186 0.6094 0.6197 0.6181 0.6569 0.7128 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5075 0.5675 0.6266 0.6126 0.5999 0.6037 0.6073 0.6214 0.6004 0.6424 0.6547 0.6673 0.0000 0.0000 0.0000 0.0000 0.0000
0.4909 0.5588 0.5943 0.5852 0.5898 0.6018 0.6188 0.5974 0.6266 0.6228 0.6792 0.6281 0.7035 0.0000 0.0000 0.0000 0.0000
0.5297 0.5535 0.6220 0.6281 0.6383 0.6037 0.6234 0.5967 0.6277 0.6051 0.6393 0.5950 0.6965 0.6630 0.0000 0.0000 0.0000
0.5104 0.5677 0.6446 0.6198 0.6135 0.6093 0.6100 0.5710 0.5912 0.6314 0.6312 0.5951 0.6924 0.6422 0.6584 0.0000 0.0000
0.5196 0.5527 0.6350 0.6043 0.6384 0.6009 0.6226 0.5971 0.6028 0.6210 0.6507 0.6318 0.6757 0.6003 0.5899 0.7272 0.0000
0.5345 0.5519 0.6395 0.6064 0.6335 0.6000 0.5879 0.5899 0.5869 0.6509 0.6486 0.6332 0.6914 0.6130 0.5983 0.6806 0.7216
SPLIT AWA
MDMT-R:
0.3792 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5119 0.4335 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5147 0.4985 0.4556 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5382 0.5230 0.5662 0.3475 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5292 0.5307 0.5874 0.4767 0.4884 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5725 0.5134 0.5924 0.5043 0.5974 0.4749 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5690 0.5398 0.6020 0.4712 0.5798 0.6075 0.4768 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5645 0.5055 0.6122 0.4804 0.5792 0.5996 0.5433 0.5065 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5561 0.5574 0.6216 0.5301 0.6216 0.6023 0.5638 0.5725 0.4626 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5941 0.5437 0.6306 0.5239 0.6218 0.6206 0.5343 0.5575 0.5019 0.5462 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.6117 0.5522 0.6058 0.5104 0.6036 0.6073 0.5348 0.5694 0.5112 0.6453 0.5159 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5784 0.5168 0.5892 0.5313 0.6071 0.5785 0.5446 0.5509 0.5224 0.6316 0.5863 0.5257 0.0000 0.0000 0.0000 0.0000 0.0000
0.5837 0.5408 0.6202 0.5282 0.6289 0.6234 0.5625 0.5494 0.5276 0.6285 0.6178 0.6038 0.4910 0.0000 0.0000 0.0000 0.0000
0.5962 0.5716 0.6666 0.5407 0.6461 0.6104 0.5786 0.5918 0.5276 0.6605 0.6201 0.6079 0.5582 0.5378 0.0000 0.0000 0.0000
0.6177 0.5444 0.6474 0.5467 0.6466 0.6326 0.5556 0.6075 0.5505 0.6552 0.6383 0.6123 0.6076 0.6551 0.5770 0.0000 0.0000
0.5900 0.5438 0.6428 0.5373 0.6405 0.5971 0.5792 0.5991 0.5428 0.6470 0.6332 0.5935 0.5679 0.6485 0.6678 0.5730 0.0000
0.6184 0.5875 0.6696 0.5620 0.6434 0.6194 0.5766 0.6226 0.5566 0.6739 0.6475 0.6349 0.5714 0.6531 0.6640 0.6051 0.5604
MEGA:
0.4101 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4791 0.4377 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5061 0.5174 0.4116 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5184 0.5219 0.4962 0.4741 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4931 0.5219 0.4828 0.5142 0.4026 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5122 0.5357 0.4894 0.5293 0.5053 0.4680 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5242 0.5226 0.5038 0.5219 0.5215 0.5605 0.4820 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5383 0.5359 0.5216 0.5323 0.5204 0.5551 0.5774 0.4682 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5291 0.5179 0.4850 0.5367 0.5043 0.5380 0.5565 0.5111 0.4698 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5386 0.5106 0.4995 0.5245 0.5150 0.5470 0.5457 0.4862 0.5380 0.4619 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5612 0.5569 0.5260 0.5566 0.5387 0.5765 0.5847 0.5483 0.5444 0.5622 0.5542 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.5593 0.5593 0.5416 0.5611 0.5345 0.5348 0.5539 0.5358 0.5418 0.5363 0.5720 0.4725 0.0000 0.0000 0.0000 0.0000 0.0000
0.5471 0.5522 0.5333 0.5383 0.5159 0.5410 0.5638 0.5234 0.5234 0.5342 0.5614 0.5611 0.5026 0.0000 0.0000 0.0000 0.0000
0.5557 0.5561 0.5391 0.5368 0.5093 0.5442 0.5743 0.5026 0.5484 0.5126 0.5788 0.5548 0.5641 0.5386 0.0000 0.0000 0.0000
0.5240 0.5563 0.5366 0.5465 0.5102 0.5659 0.5688 0.5071 0.5320 0.5183 0.5865 0.5705 0.5295 0.6206 0.5483 0.0000 0.0000
0.5541 0.5575 0.5219 0.5559 0.5310 0.5490 0.5983 0.5121 0.5466 0.5204 0.5929 0.5626 0.5466 0.6064 0.6291 0.4661 0.0000
0.5542 0.5469 0.5093 0.5677 0.5136 0.5471 0.5581 0.4954 0.5326 0.5169 0.5857 0.5771 0.5345 0.5964 0.6081 0.5187 0.4655
A-GEM:
0.4127 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4256 0.4422 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4436 0.4445 0.4058 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4371 0.4784 0.4334 0.4463 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4339 0.4795 0.4236 0.4258 0.3963 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4226 0.4674 0.4311 0.4505 0.3864 0.4495 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4279 0.4462 0.4268 0.4217 0.3854 0.4254 0.4239 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4621 0.4733 0.4421 0.4563 0.4239 0.4356 0.4489 0.4299 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4672 0.4774 0.4363 0.4402 0.4265 0.4387 0.4503 0.4129 0.4431 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4659 0.4417 0.4385 0.4386 0.4188 0.4419 0.4655 0.4075 0.3971 0.4286 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4555 0.4657 0.4495 0.4645 0.4133 0.4374 0.4717 0.4083 0.4289 0.4144 0.5037 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.4425 0.4463 0.4277 0.4532 0.4301 0.4319 0.4865 0.4288 0.4043 0.4021 0.4172 0.4478 0.0000 0.0000 0.0000 0.0000 0.0000
0.4427 0.4582 0.4438 0.4527 0.4345 0.4638 0.4895 0.4327 0.4181 0.4295 0.4745 0.4303 0.4889 0.0000 0.0000 0.0000 0.0000
0.4538 0.4519 0.4066 0.4696 0.4099 0.4459 0.4859 0.4080 0.3931 0.3827 0.4382 0.3854 0.4055 0.4736 0.0000 0.0000 0.0000
0.4476 0.4786 0.4148 0.4879 0.4269 0.4576 0.5014 0.4532 0.4264 0.4137 0.4535 0.4178 0.3955 0.4809 0.4941 0.0000 0.0000
0.4482 0.4635 0.4114 0.4720 0.4231 0.4527 0.5082 0.4076 0.4231 0.4262 0.4436 0.4122 0.3915 0.4809 0.4762 0.4286 0.0000
0.4530 0.4605 0.4294 0.4519 0.4388 0.4649 0.4885 0.4514 0.4395 0.4236 0.4760 0.4495 0.4105 0.4621 0.4730 0.4039 0.4646
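For completeness, the per-task forgetting measure of Chaudhry et al. (2018a) can be read off the same matrices: for each task it is the gap between the best accuracy the model ever reached on that task and its accuracy after the final task. A minimal sketch in the same illustrative style (Python/NumPy; the helper name is ours, not from the paper):

```python
import numpy as np

def average_forgetting(R: np.ndarray) -> float:
    """Average forgetting (Chaudhry et al. 2018a): for each task j seen
    before the last one, the drop from the best accuracy ever reached on
    task j to its accuracy in the final row; larger values mean more
    catastrophic forgetting."""
    T = R.shape[0]
    return float(np.mean([R[:T - 1, j].max() - R[T - 1, j]
                          for j in range(T - 1)]))
```

Applied to any of the blocks above, a smaller value indicates that old-task domains shifted less over the course of training.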