Deep Multi-Task Augmented Feature Learning via Hierarchical Graph Neural Network
Abstract
Deep multi-task learning has attracted much attention in recent years as it achieves good performance in many applications. Feature learning is important to deep multi-task learning for sharing common information among tasks. In this paper, we propose a Hierarchical Graph Neural Network (HGNN) to learn augmented features for deep multi-task learning. The HGNN consists of two-level graph neural networks. In the first level, an intra-task graph neural network is responsible for learning a powerful representation for each data point in a task by aggregating its neighbors. Based on the learned representation, a task embedding can be generated for each task in a way similar to max pooling. In the second level, an inter-task graph neural network updates the task embeddings of all the tasks based on the attention mechanism to model task relations. The task embedding of a task is then used to augment the feature representation of data points in that task. Moreover, for classification tasks, an inter-class graph neural network is introduced to conduct similar operations at a finer granularity, i.e., the class level, to generate class embeddings for each class in all the tasks, and the class embeddings are also used to augment the feature representation. The proposed feature augmentation strategy can be used in many deep multi-task learning models. We analyze the HGNN in terms of training and generalization losses. Experiments on real-world datasets show significant performance improvements when using this strategy.
1 Introduction
Multi-task learning (Caruana, 1997; Zhang & Yang, 2017) aims to leverage useful information contained in multiple learning tasks to improve their performance simultaneously. During the past decades, many multi-task learning models have been proposed to identify the shared information, which can take the form of instances, features, or models, leading to three categories: instance-based multi-task learning (Bickel et al., 2008), feature-based multi-task learning (Argyriou et al., 2006; Obozinski et al., 2006; Liu et al., 2009; Zhang et al., 2010; Lozano & Swirszcz, 2012; Shinohara, 2016; Misra et al., 2016; Liu et al., 2019), and model-based multi-task learning (Ando & Zhang, 2005; Jacob et al., 2008; Bonilla et al., 2007; Zhang & Yeung, 2010; Jalali et al., 2010; Kumar & III, 2012; Han & Zhang, 2015, 2016; Zhang et al., 2018).
For multi-task learning, it is important to decide how to represent a task and how to leverage the relationships between tasks, based on the task representation, to improve the performance of all the tasks. For the first issue, multi-task learning can use the instance, feature, or parameter as a medium to construct the task representation. Based on the solution to the first issue, the task relationships can be learned via the chosen medium, leading to a classification of multi-task learning into instance-based, feature-based, and parameter-based multi-task learning.
Different from existing studies, we argue that the training dataset of a task contains important information for determining the task representation (the first issue) and that we can extract this information from the training dataset to serve as the task representation. However, it is not easy to derive a representation from a dataset using convolutional neural networks or recurrent neural networks. Our solution is to represent a dataset as a graph, where nodes represent data points and edges denote similarities between data points. For the second issue, the task relationships can also be represented as a graph. Thus both issues can be addressed with graphs.
Inspired by this idea, in this paper we propose a Hierarchical Graph Neural Network (HGNN) to further improve the performance of multi-task learning models by learning augmented features. The HGNN consists of two-level graph neural networks. In the first level, an intra-task graph neural network learns a powerful representation for each data point in a task by aggregating its neighboring data points in this task. Based on the representation learned in the first level, we can generate the task embedding, which is a representation of this task, in a way similar to max pooling. For classification tasks, we can also generate a class embedding for each class in this task based on max pooling. Based on the task embeddings of all the tasks generated in the first level, an inter-task graph neural network in the second level updates all the task embeddings based on the attention mechanism. For classification tasks, an inter-class graph neural network is introduced in the second level to update all the class embeddings based on neighboring class embeddings. Finally, each of the learned task embeddings, as well as the class embeddings for classification tasks, is used to augment the feature representation of all the data points in the corresponding task. The proposed HGNN can be used in many multi-task learning models. We analyze the use of the HGNN in terms of both the training loss and the generalization loss. Extensive experiments show the effectiveness of the proposed HGNN.
2 Related Work
Liu et al. (2018) explore the problem of learning the relationships between multiple tasks dynamically and formulate this problem as a message passing process over a graph neural network. Meng et al. (2018) solve relative attribute learning via a message passing scheme on a graph, and the main idea is that relative attribute learning naturally benefits from exploiting the dependency graph among different relative attributes of images. The multi-task attention network proposed in (Liu et al., 2019) consists of a single shared network containing a global feature pool and a soft-attention module for each task that allows learning of task-specific feature-level attention. The soft-attention module can learn both task-specific features from global features and shared features across different tasks. Lu et al. (2019) present a graph star net which utilizes the message-passing and attention mechanisms for multiple prediction tasks, including node classification, graph classification, and link prediction. Even though the aforementioned works employ GNNs or the attention mechanism for multi-task learning, none of them use a hierarchical version of GNN together with the attention mechanism to learn augmented features for multi-task learning, which is the focus of this paper.

Kim et al. (2019) propose a hierarchical attention network for stock prediction which can selectively aggregate information on different relation types and add this information to the representation of each company for stock market prediction. By considering the market index as an entire graph and constituent companies as individual nodes, this method is used to predict not only individual stock prices but also market index movements, which is similar to the graph classification task. However, when aggregating the additional information for each representation, it only uses the neighboring nodes rather than all the nodes in the graph, hence it cannot capture the whole information of the graph. Moreover, this method adds the additional information to the original feature representation, which is different from the concatenation used in this paper. Ryu et al. (2019) propose a Hierarchical graph Attention-based Multi-Agent actor-critic (HAMA) method, which employs a hierarchical graph neural network to effectively model the inter-agent relationships in each group of agents and the inter-group relationships among groups, and additionally employs inter-agent and inter-group attention to adaptively extract state-dependent relationships among agents. However, similar to (Kim et al., 2019), the HAMA method, a network stacking multiple Graph Attention Networks (GAT) hierarchically, only processes local observations of each agent rather than all the information in the graph, and hence cannot capture enough information. Different from these two methods, we first use an intra-task graph neural network to generate a task embedding for each task by using all the data in this task. By using all the data in a task, we can obtain a more accurate and robust task embedding. Then, an inter-task graph neural network is used to update the task embeddings of all the tasks based on the inter-task structure, which makes each task embedding contain useful information from the embeddings of other tasks. The class embeddings can be obtained in a similar way. So the task embeddings can represent the relationships of all the tasks and the class embeddings can represent the relationships of all the classes in all the tasks. To the best of our knowledge, we are the first to use this feature augmentation strategy in multi-task learning.
3 Hierarchical Graph Neural Network
In this section, we introduce the proposed architecture, the Hierarchical Graph Neural Network (HGNN), for deep multi-task learning. Whilst the architecture can be incorporated into any multi-task learning network, in the following sections we show how to build the HGNN upon a multi-task network.
3.1 Overview of The Architecture
The HGNN consists of two-level GNNs. The first-level GNN is an intra-task GNN that aggregates all the information contained in the data of a task to generate a task representation, which is called the task embedding. In the second level, based on the task embeddings generated in the first level, an inter-task GNN is used to update all the task embeddings by sharing information among all the tasks. Finally, the task embeddings are used to augment the feature representation of the data to improve the learning performance. For classification tasks, we can learn augmented features at a finer granularity, i.e., the class level. That is, another intra-task GNN is used to aggregate all the information in a class of a task to generate a class embedding. Then, based on the class embeddings of all the tasks, an inter-class GNN is used to update them. Finally, both task embeddings and class embeddings are used to augment the feature representation. The whole architecture of the HGNN is shown in Figure 1.
3.2 The Model
Suppose that there are $m$ multi-class classification tasks, where each task has $C$ classes. The training dataset of the $i$th task consists of $n_i$ pairs of data samples and corresponding labels, i.e., $\mathcal{D}_i=\{(\mathbf{x}^i_j, y^i_j)\}_{j=1}^{n_i}$, where $y^i_j\in\{1,\ldots,C\}$.
For $\mathbf{x}^i_j$, we first define its hidden representation as
$$\mathbf{h}^i_j=\sigma\big(\mathbf{W}_0\mathbf{x}^i_j+\mathbf{b}_0\big),\qquad(1)$$
where $\sigma(\cdot)$ can be any activation function such as the ReLU function, and $\mathbf{W}_0$ and $\mathbf{b}_0$ are parameters shared among all the tasks.
For the intra-task GNN, we first construct an adjacency matrix $\mathbf{A}^i$ for the $i$th task based on the hidden representations and the label information. Specifically, the $(j,k)$th entry of $\mathbf{A}^i$, denoted by $A^i_{jk}$, is defined based on the distance $\|\mathbf{h}^i_j-\mathbf{h}^i_k\|$ between the hidden representations of $\mathbf{x}^i_j$ and $\mathbf{x}^i_k$ and on whether the two data points share the same label, where $\|\cdot\|$ denotes the norm of a vector. Then the intra-task GNN can be defined as
$$\mathbf{U}^i=\sigma\big(\mathbf{W}\mathbf{H}^i\mathbf{A}^i+\mathbf{b}\mathbf{1}^\top\big),\qquad(2)$$
where $\sigma(\cdot)$ can be any activation function such as the ReLU function, $\mathbf{H}^i=[\mathbf{h}^i_1,\ldots,\mathbf{h}^i_{n_i}]$, $\mathbf{1}$ denotes a vector of all ones with an appropriate size, and $\mathbf{W}$ and $\mathbf{b}$ are the parameters in the GNN. The $j$th column in $\mathbf{U}^i$, denoted by $\mathbf{u}^i_j$, is a new hidden representation for $\mathbf{x}^i_j$. $\mathbf{A}^i$ in Eq. (2) makes similar data points in the same class have similar representations in $\mathbf{U}^i$ and dissimilar data points from different classes have dissimilar representations. The intra-task GNN can have two or more layers, each of which is defined as in Eq. (2).
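The following is a minimal PyTorch sketch of one intra-task GNN layer of the form in Eq. (2), written with data points stored as rows (the transposed view of the column convention above). The signed, distance-based adjacency in `build_adjacency` is an illustrative assumption, since the exact formula is only described qualitatively here; all names are ours.

```python
# A sketch of one intra-task GNN layer (Eq. (2)); the adjacency construction is a
# hypothetical instantiation of "similarity with sign given by label agreement".
import torch
import torch.nn as nn
import torch.nn.functional as F


class IntraTaskGNNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)  # plays the role of W and b in Eq. (2)

    @staticmethod
    def build_adjacency(h: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Similarity decays with the distance between hidden representations; the sign
        # encodes whether two data points share the same class label (assumed form).
        sim = torch.exp(-torch.cdist(h, h))                    # (n, n)
        same_class = (y.unsqueeze(0) == y.unsqueeze(1)).float()
        return sim * (2.0 * same_class - 1.0)                  # +sim if same class, -sim otherwise

    def forward(self, h: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        a = self.build_adjacency(h, y)                         # (n, n)
        return F.relu(self.linear(a @ h))                      # aggregate neighbors, then transform


# Usage: 32 data points with 128-dim hidden representations and labels in {0,...,9}.
layer = IntraTaskGNNLayer(128, 64)
u = layer(torch.randn(32, 128), torch.randint(0, 10, (32,)))   # (32, 64)
```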
Based on the intra-task GNN, the task embedding $\mathbf{t}_i$ of the $i$th task is defined as
$$\mathbf{t}_i=\max\big(\mathbf{u}^i_1,\ldots,\mathbf{u}^i_{n_i}\big),$$
where the max operation is conducted elementwise. So the task embedding is obtained via max pooling over all the data points in the $i$th task based on the hidden representations learned by the intra-task GNN. Similarly, the class embedding $\mathbf{c}_{i,c}$ of the $c$th class in the $i$th task is defined as
$$\mathbf{c}_{i,c}=\max\big(\{\mathbf{u}^i_j\,:\,y^i_j=c\}\big),$$
which means that the class embedding of the $c$th class in the $i$th task is obtained via max pooling over all the data points in the $c$th class of the $i$th task based on the intra-task GNN. We have tried other pooling methods such as mean pooling, but the performance is inferior to that of max pooling. One reason is that max pooling brings some nonlinearity, whereas mean pooling is a linear operation.
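A short sketch of the two pooling steps is given below, assuming the per-point representations produced by the intra-task GNN are stacked as rows of a tensor `u`; the handling of classes absent from `u` is our own choice.

```python
# A sketch of the elementwise max pooling used to build task and class embeddings.
import torch


def task_embedding(u: torch.Tensor) -> torch.Tensor:
    # Elementwise max over all data points of the task.
    return u.max(dim=0).values


def class_embeddings(u: torch.Tensor, y: torch.Tensor, num_classes: int) -> torch.Tensor:
    # Elementwise max over the data points of each class; classes without data in u
    # are filled with zeros in this sketch.
    out = torch.zeros(num_classes, u.size(1))
    for c in range(num_classes):
        mask = (y == c)
        if mask.any():
            out[c] = u[mask].max(dim=0).values
    return out
```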
The task embeddings of the $m$ tasks form a graph. The inter-task GNN is responsible for learning on the graph constructed by the task embeddings to generate new task embeddings by exchanging information among tasks. Here we use the Graph Attention Network (GAT) as an implementation of the inter-task GNN.
In order to learn powerful task embeddings, each task embedding is first transformed by a weight matrix $\mathbf{W}_a$. Then we perform self-attention on the task embeddings. That is, an attention mechanism computes attention coefficients as
$$e_{ij}=\cos\big(\mathbf{W}_a\mathbf{t}_i,\mathbf{W}_a\mathbf{t}_j\big),$$
where the attention mechanism we use is the cosine function, which is different from the original GAT. To make the coefficients comparable across different task embeddings, we normalize them over $j$ by using the softmax function as
$$\alpha_{ij}=\frac{\exp(e_{ij})}{\sum_{k}\exp(e_{ik})}.$$
The attention values can be viewed as a measure of the task relation between each pair of tasks. Once obtained, the normalized attention coefficients are used as combination coefficients to compute a linear combination of the transformed task embeddings, before potentially applying a nonlinear activation function, as
$$\tilde{\mathbf{t}}_i=\sigma\Big(\sum_{j}\alpha_{ij}\,\mathbf{W}_a\mathbf{t}_j\Big).$$
Based on this equation, we can see that $\tilde{\mathbf{t}}_i$ contains useful information from the embeddings of other tasks. In experiments, the inter-task GNN adopts two such layers to generate the new task embeddings.
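The sketch below implements one such inter-task attention layer with cosine-similarity scores, softmax normalization over all tasks, and a weighted combination of the transformed embeddings. It follows the description above rather than the original GAT scoring function; the nonlinearity and all names are illustrative choices.

```python
# A sketch of one inter-task attention layer with cosine-based attention coefficients.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InterTaskAttentionLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)   # shared weight matrix W_a

    def forward(self, task_emb: torch.Tensor) -> torch.Tensor:
        # task_emb: (m, in_dim), one row per task.
        z = self.proj(task_emb)                               # transformed embeddings, (m, out_dim)
        z_norm = F.normalize(z, dim=1)
        scores = z_norm @ z_norm.t()                          # cosine similarities, (m, m)
        alpha = F.softmax(scores, dim=1)                      # attention over all tasks
        return F.elu(alpha @ z)                               # combine transformed embeddings


# Two such layers, as used in the experiments; 4 tasks with 8-dimensional embeddings.
m, d = 4, 8
layer1, layer2 = InterTaskAttentionLayer(d, d), InterTaskAttentionLayer(d, d)
new_task_emb = layer2(layer1(torch.randn(m, d)))
```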
Similarly, the class embeddings can also form a graph. We use another inter-class GNN to generate new class embeddings in a way similar to that for the task embeddings.
The learned task embeddings and class embeddings can be used to augment the data feature representation to form a more expressive one as $[\mathbf{h}^i_j;\tilde{\mathbf{t}}_i;\tilde{\mathbf{c}}_{i,y^i_j}]$, where $[\cdot\,;\cdot]$ denotes the concatenation operation. Then data in such an augmented representation can be fed into a deep multi-task learning model to learn class labels.
3.3 Testing Process
At test time, we do not know the true label, hence we cannot directly concatenate the corresponding class embedding to the hidden representation. We use the following method to solve this problem. For each test sample, we concatenate the class embedding of each class $c$ to the hidden representation as its new hidden representation and then compute the prediction probability that the test sample belongs to class $c$ via the softmax function used in the multi-task neural network. Finally, we choose the class with the largest prediction probability as the predicted label. Mathematically, we predict the class label of a test sample from task $i$ as
$$\hat{y}=\operatorname*{arg\,max}_{c\in[C]}\ \Big[f_i\big([\mathbf{h};\tilde{\mathbf{t}}_i;\tilde{\mathbf{c}}_{i,c}]\big)\Big]_c,\qquad(3)$$
where $[C]$ denotes the set of positive integers no larger than $C$, $\mathbf{h}$ denotes the transformed test sample before feeding into the HGNN as shown in Eq. (1), and $f_i$ denotes the (softmax output of the) multi-task neural network for task $i$. Note that in the prediction rule (3), the concatenated class embedding changes with $c$.
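Below is a minimal sketch of this prediction rule. It assumes a multi-task network `mtl_net` whose forward pass takes the augmented representation and a task index and returns the task-specific logits; this interface, like all names here, is an illustrative assumption.

```python
# A sketch of the prediction rule (3): try each candidate class embedding, score the
# augmented representation, and keep the class with the largest predicted probability.
import torch
import torch.nn.functional as F


def predict(h, task_emb, class_emb, mtl_net, task_id):
    # h: (d,) transformed test sample; task_emb: (d_t,); class_emb: (C, d_c).
    probs = []
    for c in range(class_emb.size(0)):
        aug = torch.cat([h, task_emb, class_emb[c]])      # augmented representation
        logits = mtl_net(aug.unsqueeze(0), task_id)       # task-specific output head
        probs.append(F.softmax(logits, dim=1)[0, c])      # probability of class c
    return int(torch.stack(probs).argmax())
```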
3.4 Extension to Regression Tasks
For regression problems, there are only continuous labels and we cannot define class embeddings, so we only use task embeddings as the augmented features. Furthermore, the adjacency matrix $\mathbf{A}^i$ for the $i$th task differs from that of classification tasks. Specifically, the $(j,k)$th entry of $\mathbf{A}^i$, $A^i_{jk}$, for a regression task is defined based only on the distance between the hidden representations $\mathbf{h}^i_j$ and $\mathbf{h}^i_k$, without using label information. Since there is no class embedding, we do not need the prediction rule in Eq. (3). The rest is identical to classification tasks.
4 Analysis
The proposed approach to augment the feature representation in HGNN is novel and here we provide some analyses to give insights into this model. For simplicity, we consider a linear single-task learning model by utilizing the task embedding only, which can provide insights for understanding deep multi-task learning models with task embeddings as well as class embeddings learned in HGNN.
The input space, which is a subset of a vector space, is denoted by $\mathcal{X}\subseteq\mathbb{R}^d$ and the output space is denoted by $\mathcal{Y}$. Training samples $\{(\mathbf{x}_i,y_i)\}_{i=1}^n$ are distributed according to some unknown distribution $\mu$. Let $\ell:\mathbb{R}^{d_y}\times\mathcal{Y}\to\mathbb{R}_+$ be the loss function, where $d_y$ denotes the dimension of the label space. The learning function is defined as $f(\mathbf{x})=\mathbf{W}^\intercal\mathbf{x}$, where the superscript $\intercal$ denotes the transpose and $\mathbf{W}$ is abused to denote the parameter in this linear learner. The expected loss is defined as $\mathcal{L}(\mathbf{W})=\mathbb{E}_{(\mathbf{x},y)\sim\mu}[\ell(f(\mathbf{x}),y)]$. The empirical loss is defined as $\hat{\mathcal{L}}(\mathbf{W})=\frac{1}{n}\sum_{i=1}^n\ell(f(\mathbf{x}_i),y_i)$. The data matrix is defined as $\mathbf{X}=[\mathbf{x}_1,\ldots,\mathbf{x}_n]$ and the label matrix is defined as $\mathbf{Y}=[\mathbf{y}_1,\ldots,\mathbf{y}_n]$. $\mathbf{t}$ denotes the task embedding and $\mathbf{T}=\mathbf{t}\mathbf{1}^\intercal$ is the task embedding matrix for all the training data, where $\mathbf{1}$ denotes a column vector of all ones with an appropriate size.
Let us consider two models. The objective function of model 1 is formulated as
$$\min_{\mathbf{W}}\ \|\mathbf{Y}-\mathbf{W}^\intercal\mathbf{X}\|_F^2+\lambda\|\mathbf{W}\|_F^2,\qquad(4)$$
and that of model 2 is
$$\min_{\mathbf{W},\mathbf{V}}\ \|\mathbf{Y}-\mathbf{W}^\intercal\mathbf{X}-\mathbf{V}^\intercal\mathbf{T}\|_F^2+\lambda\|\mathbf{W}\|_F^2+\lambda\|\mathbf{V}\|_F^2,\qquad(5)$$
where $\lambda>0$ is a regularization parameter, $\|\cdot\|_F$ denotes the Frobenius norm, $\mathbf{V}$ is the parameter associated with the task embedding, and $\mathbf{I}$ denotes an identity matrix. So model 1 is a ridge regression model, which can be applied to both classification and regression tasks, and model 2 is a variant of model 1 with the task embedding incorporated. For the training losses of these two models, we have the following result.¹ (¹ All the proofs can be found in the appendix.)
Theorem 1
If $\lambda$ and the task embedding satisfy $\mathbf{M}\succeq\mathbf{0}$, where $\mathbf{M}$ is a matrix determined by $\lambda$, $\mathbf{X}$, and $\mathbf{T}$ (given explicitly in the appendix) and $\mathbf{M}\succeq\mathbf{0}$ means that $\mathbf{M}$ is positive semidefinite, then the training loss of model 2 with the task embedding is always lower than that of model 1 without the task embedding. That is, we have
$$\hat{\mathcal{L}}_2(\mathbf{W}_2^*,\mathbf{V}_2^*)\le\hat{\mathcal{L}}_1(\mathbf{W}_1^*),\qquad(6)$$
where $\mathbf{W}_1^*$ and $(\mathbf{W}_2^*,\mathbf{V}_2^*)$ denote the optimal solutions of problems (4) and (5), respectively.
Remark 1
Theorem 1 implies that incorporating the task embedding to augment the feature representation yields a lower training loss than not incorporating it. From the perspective of model capacity, model 1 is a reduced version of model 2 obtained by setting the task embedding to zero, and hence model 2 has a larger capacity than model 1, giving model 2 a larger chance to attain a lower training loss. The condition in Theorem 1 is very easy to check, and we can adjust $\lambda$ to ensure the positive semidefiniteness required by the condition.
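A quick numerical illustration of this remark is sketched below in NumPy, assuming the reconstructed forms of problems (4) and (5) above: model 2 is equivalent to ridge regression on features augmented with a constant task-embedding block, and its training loss is expected to be no larger than model 1's when the condition of Theorem 1 holds. The data, embedding, and regularization value are synthetic and purely illustrative.

```python
# A sketch comparing the training losses of model 1 (ridge regression) and model 2
# (ridge regression on task-embedding-augmented features), per the reconstructed (4)-(5).
import numpy as np

rng = np.random.default_rng(0)
n, d, d_t, lam = 100, 10, 4, 1.0
X = rng.standard_normal((n, d))              # rows are data points (transposed convention)
Y = rng.standard_normal((n, 1))
t = rng.standard_normal(d_t)                 # task embedding, shared by all data points
X_aug = np.hstack([X, np.tile(t, (n, 1))])   # augmented features


def ridge_train_loss(X, Y, lam):
    # Closed-form ridge solution, then the (unregularized) training loss.
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return float(np.sum((Y - X @ W) ** 2))


print(ridge_train_loss(X, Y, lam))       # model 1
print(ridge_train_loss(X_aug, Y, lam))   # model 2: no larger when the condition of Theorem 1 holds
```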
We also analyze the generalization bounds of the two models. We first rewrite problems (4) and (5) into equivalent constrained formulations in which the regularization terms are replaced by norm constraints on the parameters.
For the above two problems, we have the following result.
Theorem 2
Suppose that the data are bounded in norm and the task embedding satisfies the condition in Theorem 1. Then for any $\delta\in(0,1)$, with probability at least $1-\delta$, the expected loss of each model is upper-bounded by its training loss plus a complexity term of the same order for both models.
Remark 2
According to Theorem 2, the generalization upper-bound of model 2 with the use of the task embedding is lower than that without the task embedding because of the lower training loss of model 2 which has been proved in Theorem 1. This may imply that there is a large chance that the expected loss of model 2 is lower than that of model 1, which can be verified in empirical studies in the next section.
5 Experiments
In this section, we conduct empirical studies to test the performance of the proposed HGNN.
5.1 Experimental Settings
We conduct experiments on several benchmark datasets, including ImageCLEF, Office-Caltech-10, Office-Home and SARCOS.
The ImageCLEF dataset is the benchmark for the ImageCLEF domain adaptation challenge, which contains about 2,400 images from 12 common categories shared by four tasks: Caltech-256 (C), ImageNet ILSVRC (I), Pascal VOC 2012 (P), and Bing (B). There are 50 images in each category and 600 images in each task.
The Office-Caltech-10 dataset includes 10 common categories shared by the Office-31 and Caltech-256 datasets. It contains four domains: Caltech (C) that is sampled from Caltech-256 dataset, Amazon (A) that contains images collected from the amazon website, Webcam (W) and DSLR (D) that are taken by the web camera and DSLR camera under the office environment. In our experiment, we regard each domain as a task.
The Office-Home dataset has 15,500 images across 65 classes in the office and home settings from four domains with a large domain discrepancy: Artistic images (Ar), Clip art (Cl), Product images (Pr), and Real-world images (Rw). In our experiment, we regard each domain as a task.
The SARCOS dataset studies a multi-output problem of learning the inverse dynamics of 7 SARCOS anthropomorphic robot arms, each of which corresponds to a task, based on 21 features, including seven joint positions, seven joint velocities, and seven joint accelerations. By following (Zhang & Yeung, 2010), we randomly sample 2000 data points from each output to construct the dataset.
Since the proposed HGNN can be combined with many deep multi-task learning models as discussed before, we incorporate the HGNN into the Deep Multi-Task Learning (DMTL) model, which shares the first several layers as the common hidden feature representation for all the tasks as done in (Caruana, 1997; Zhang et al., 2014; Liu et al., 2015; Zhang et al., 2015; Mrksic et al., 2015; Li et al., 2015), Deep Multi-Task Representation Learning (DMTRL) (Yang & Hospedales, 2017a), and Trace Norm Regularised Deep Multi-Task Learning (TNRMTL) (Yang & Hospedales, 2017b), respectively, to show the benefit of the learned augmented features.
In experiments, we leverage the VGG-19 network pretrained on the ImageNet dataset as the backbone of the feature generator. After that, all the multi-task learning models adopt a two-layer fully-connected architecture (#data_dim → 600 → #classes) with the ReLU activation function. The first layer is shared by all the tasks to learn a common representation and the second layer produces task-specific outputs. The HGNN learns 8-dimensional task embeddings and 8-dimensional class embeddings.
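For concreteness, the sketch below shows the shared-bottom multi-task architecture described above operating on HGNN-augmented inputs; the VGG-19 feature extraction and the HGNN itself are omitted, and the feature dimension, class names, and all other names are illustrative assumptions rather than the exact configuration used in the paper.

```python
# A sketch of the two-layer multi-task architecture (#data_dim -> 600 -> #classes)
# with a shared first layer and one task-specific output head per task.
import torch
import torch.nn as nn


class DeepMTLNet(nn.Module):
    def __init__(self, aug_dim: int, num_tasks: int, num_classes: int):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(aug_dim, 600), nn.ReLU())   # shared layer
        self.heads = nn.ModuleList(
            nn.Linear(600, num_classes) for _ in range(num_tasks)         # task-specific layers
        )

    def forward(self, x_aug: torch.Tensor, task_id: int) -> torch.Tensor:
        return self.heads[task_id](self.shared(x_aug))                    # logits for one task


# Usage (hypothetical dimensions): 4096-dim backbone features augmented with an
# 8-dim task embedding and an 8-dim class embedding, 4 tasks, 12 classes.
net = DeepMTLNet(aug_dim=4096 + 8 + 8, num_tasks=4, num_classes=12)
logits = net(torch.randn(16, 4096 + 8 + 8), task_id=0)
```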
We use the Adam optimizer with a learning rate that decays with the number of iterations. Following GAT, we fix its hyperparameter settings in experiments. We adopt mini-batch training. Each experiment is repeated 5 times and we report the average performance as well as the standard deviation.
5.2 Experimental Results
5.2.1 Results on Classification Tasks
For classification tasks, the performance measure is the classification accuracy. To investigate the effect of the size of the training dataset on the performance, we vary the proportion of training data from 50% to 70% at an interval of 10% and plot the average test accuracy of different methods in Figures 2-4. According to the results reported in these figures, we can see that incorporating the HGNN into baseline models improves the classification accuracy of all baseline models, especially when the training proportion is small. As reported in Figures 3 and 4, incorporating the HGNN boosts the performance of all the baselines on the Office-Caltech-10 and Office-Home datasets. For the DMTRL and TNRMTL models, the improvement brought by the HGNN is significant. Moreover, when using augmented features learned by the HGNN, the standard deviation becomes smaller than that of the corresponding baseline model without the HGNN under every experimental setting, which implies that the HGNN can improve the stability of baseline models to some extent.
[Figures 2-4: average test accuracy versus the training proportion on the ImageCLEF, Office-Caltech-10, and Office-Home datasets, respectively.]
5.2.2 Results on Regression Tasks
For regression tasks, the performance measure is the mean square error. The test errors on the SARCOS dataset are shown in Figure 5, where the training proportion is 70%. As shown in Figure 5, after using the HGNN, the test error of each baseline model decreases significantly, which demonstrates the effectiveness of the augmented features learned by the HGNN. With other training proportions, we observed a similar phenomenon, namely that the use of the HGNN improves the performance of baseline models; due to the page limit, we omit those results.
[Figure 5: test errors on the SARCOS dataset with a training proportion of 70%.]
5.3 Ablation Study

To study the effectiveness of task embeddings and class embeddings in the HGNN model, we study two variants of the HGNN: HGNN(T), which only augments with the task embedding, and HGNN(C), which only augments with the class embedding. The comparison among the baseline models, HGNN, and the variants of HGNN on the Office-Caltech-10 dataset is shown in Figure 6. According to the results, we can see that using only the class embedding in HGNN(C) or only the task embedding in HGNN(T) still improves the performance of baseline models, which shows that augmented features learned in both ways are effective. HGNN(C) seems better than HGNN(T) in this experiment. One reason is that class embeddings may contain more discriminative features for the classification task. Figure 6 also indicates that using both task embeddings and class embeddings achieves the best performance, which again verifies the usefulness of the HGNN.
[Figure 6: comparison among baseline models, HGNN, HGNN(T), and HGNN(C) on the Office-Caltech-10 dataset.]
Table 1: Classification accuracy (%) on the ImageCLEF dataset with different dimensions of the task embedding.

| | 4 | 8 | 16 | 32 | 64 |
|---|---|---|---|---|---|
| DMTL_HGNN | 81.22±1.27 | 82.03±1.98 | 81.18±1.43 | 80.41±0.88 | 79.89±0.65 |
| DMTRL_HGNN | 81.21±0.80 | 82.07±1.47 | 81.64±0.77 | 81.15±2.35 | 81.48±1.78 |
| TNRMTL_HGNN | 82.35±1.66 | 81.11±0.94 | 82.27±1.19 | 81.74±0.91 | 81.20±0.40 |

Table 2: Classification accuracy (%) on the ImageCLEF dataset with different dimensions of the class embedding.

| | 4 | 8 | 16 | 32 | 64 |
|---|---|---|---|---|---|
| DMTL_HGNN | 82.37±1.13 | 82.03±1.98 | 81.75±0.56 | 80.71±1.30 | 81.65±2.84 |
| DMTRL_HGNN | 81.73±1.39 | 82.07±1.47 | 80.40±1.57 | 81.94±2.03 | 80.74±2.16 |
| TNRMTL_HGNN | 80.07±3.36 | 81.11±0.94 | 80.64±1.11 | 80.17±0.98 | 80.47±0.80 |
5.4 Visualization
To dive deeper into the learned features, we plot in Figures 7 and 8 the t-SNE embeddings of the feature representations learned for the four tasks on the Office-Caltech-10 dataset by TNRMTL and TNRMTL_HGNN, respectively, during training and testing. We observe that the data represented by the HGNN model are more separable among classes in each task during both the training and testing processes. This phenomenon verifies the effectiveness of the augmented features learned by the HGNN in helping discriminate data points in different classes of all the tasks.
5.5 Sensitivity Analysis
We conduct a sensitivity analysis of the performance with respect to the dimension of the task embedding and the dimension of the class embedding on the ImageCLEF dataset. The results are shown in Tables 1 and 2. According to the results, we can see that a dimension of 8 is a good choice for both embeddings in most cases, even though in some cases a lower value of 4 performs better. When the dimension is not too large (e.g., no larger than 32), the performance changes little, making the choice of the dimension insensitive. However, when using a larger dimension (e.g., 64), the classification accuracy drops significantly, implying that the HGNN prefers a small dimension.
6 Conclusion
In this paper, we propose a hierarchical graph neural network (HGNN) to learn augmented features for deep multi-task learning. The proposed HGNN has two levels. In the first level, the intra-task graph neural network is used to learn a powerful representation for each data point in a task by aggregating information from its neighbors in this task. Based on the learned representation, we can learn the task embedding for each task as well as the class embeddings, if any. The inter-task graph neural network, as well as the inter-class graph neural network, is used to update each task embedding and each class embedding. Finally, the learned task embeddings and class embeddings are used to augment the data representation. Extensive experiments show the effectiveness of the proposed HGNN. In future work, we are interested in applying the HGNN to other multi-task learning models.
References
- Ando & Zhang (2005) Ando, R. K. and Zhang, T. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817–1853, 2005.
- Argyriou et al. (2006) Argyriou, A., Evgeniou, T., and Pontil, M. Multi-task feature learning. In Advances in Neural Information Processing Systems 19, pp. 41–48, 2006.
- Bickel et al. (2008) Bickel, S., Bogojeska, J., Lengauer, T., and Scheffer, T. Multi-task learning for HIV therapy screening. In Proceedings of the Twenty-Fifth International Conference on Machine Learning, pp. 56–63, 2008.
- Bonilla et al. (2007) Bonilla, E., Chai, K. M. A., and Williams, C. Multi-task Gaussian process prediction. In Advances in Neural Information Processing Systems 20, pp. 153–160, Vancouver, British Columbia, Canada, 2007.
- Caruana (1997) Caruana, R. Multitask learning. Machine Learning, 28(1):41–75, 1997.
- Han & Zhang (2015) Han, L. and Zhang, Y. Learning multi-level task groups in multi-task learning. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, 2015.
- Han & Zhang (2016) Han, L. and Zhang, Y. Multi-stage multi-task learning with reduced rank. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, 2016.
- Jacob et al. (2008) Jacob, L., Bach, F., and Vert, J.-P. Clustered multi-task learning: A convex formulation. In Advances in Neural Information Processing Systems 21, pp. 745–752, 2008.
- Jalali et al. (2010) Jalali, A., Ravikumar, P. D., Sanghavi, S., and Ruan, C. A dirty model for multi-task learning. In Advances in Neural Information Processing Systems 23, pp. 964–972, Vancouver, British Columbia, Canada, 2010.
- Kakade et al. (2009) Kakade, S. M., Sridharan, K., and Tewari, A. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Koller, D., Schuurmans, D., Bengio, Y., and Bottou, L. (eds.), Advances in Neural Information Processing Systems 21, pp. 793–800. 2009.
- Kim et al. (2019) Kim, R., So, C. H., Jeong, M., Lee, S., Kim, J., and Kang, J. HATS: A hierarchical graph attention network for stock movement prediction. CoRR, abs/1908.07999, 2019.
- Kumar & Daumé III (2012) Kumar, A. and Daumé III, H. Learning task grouping and overlap in multi-task learning. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012.
- Li et al. (2015) Li, S., Liu, Z., and Chan, A. B. Heterogeneous multi-task learning for human pose estimation with deep convolutional neural network. IJCV, 113(1):19–36, 2015.
- Liu et al. (2009) Liu, H., Palatucci, M., and Zhang, J. Blockwise coordinate descent procedures for the multi-task lasso, with applications to neural semantic basis discovery. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009.
- Liu et al. (2018) Liu, P., Fu, J., Dong, Y., Qiu, X., and Cheung, J. C. K. Multi-task learning over graph structures. CoRR, abs/1811.10211, 2018.
- Liu et al. (2019) Liu, S., Johns, E., and Davison, A. J. End-to-end multi-task learning with attention. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1871–1880, 2019.
- Liu et al. (2015) Liu, W., Mei, T., Zhang, Y., Che, C., and Luo, J. Multi-task deep visual-semantic embedding for video thumbnail selection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3707–3715, 2015.
- Lozano & Swirszcz (2012) Lozano, A. C. and Swirszcz, G. Multi-level lasso for sparse multi-task regression. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012.
- Lu et al. (2019) Lu, H., Huang, S. H., Ye, T., and Guo, X. Graph star net for generalized multi-task learning. CoRR, abs/1906.12330, 2019.
- Meng et al. (2018) Meng, Z., Adluru, N., Kim, H. J., Fung, G., and Singh, V. Efficient relative attribute learning using graph neural networks. In Proceedings of the 15th European Conference on Computer Vision, 2018.
- Misra et al. (2016) Misra, I., Shrivastava, A., Gupta, A., and Hebert, M. Cross-stitch networks for multi-task learning. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3994–4003, 2016.
- Mrksic et al. (2015) Mrksic, N., Séaghdha, D. Ó., Thomson, B., Gasic, M., Su, P., Vandyke, D., Wen, T., and Young, S. J. Multi-domain dialog state tracking using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pp. 794–799, 2015.
- Obozinski et al. (2006) Obozinski, G., Taskar, B., and Jordan, M. Multi-task feature selection. Technical report, Department of Statistics, University of California, Berkeley, June 2006.
- Ryu et al. (2019) Ryu, H., Shin, H., and Park, J. Multi-agent actor-critic with hierarchical graph attention network. CoRR, abs/1909.12557, 2019.
- Shinohara (2016) Shinohara, Y. Adversarial multi-task learning of deep neural networks for robust speech recognition. In Proceedings of the 17th Annual Conference of the International Speech Communication Association, pp. 2369–2372, 2016.
- Yang & Hospedales (2017a) Yang, Y. and Hospedales, T. M. Deep multi-task representation learning: A tensor factorisation approach. In Proceedings of the 6th International Conference on Learning Representations, 2017a.
- Yang & Hospedales (2017b) Yang, Y. and Hospedales, T. M. Trace norm regularised deep multi-task learning. In Proceedings of the 6th International Conference on Learning Representations, Workshop Track, 2017b.
- Zhang et al. (2015) Zhang, W., Li, R., Zeng, T., Sun, Q., Kumar, S., Ye, J., and Ji, S. Deep model based transfer and multi-task learning for biological image analysis. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1475–1484, 2015.
- Zhang & Yang (2017) Zhang, Y. and Yang, Q. A survey on multi-task learning. CoRR, abs/1707.08114, 2017.
- Zhang & Yeung (2010) Zhang, Y. and Yeung, D.-Y. A convex formulation for learning task relationships in multi-task learning. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, pp. 733–742, 2010.
- Zhang et al. (2010) Zhang, Y., Yeung, D., and Xu, Q. Probabilistic multi-task feature selection. In Advances in Neural Information Processing Systems 23, pp. 2559–2567, 2010.
- Zhang et al. (2018) Zhang, Y., Wei, Y., and Yang, Q. Learning to multitask. In Advances in Neural Information Processing Systems 31, pp. 5776–5787, 2018.
- Zhang et al. (2014) Zhang, Z., Luo, P., Loy, C. C., and Tang, X. Facial landmark detection by deep multi-task learning. In Proceedings of the 13th European Conference on Computer Vision, pp. 94–108, 2014.
Appendix
Proof for Theorem 1
Proof. By setting the derivatives of the objectives in problems (4) and (5) to zero, the two problems have closed-form solutions as
Then
It is easy to show
By left-multiplying by and right-multiplying by , we can get
So can be simplified as
In order to prove
We need to require
(7) |
where . Note that
Eq. (7) is equivalent to
which is also equivalent to
So we require
which is the condition.
Proof for Theorem 2
Lemma 1
Assume that the data and the linear predictors are bounded in norm. For any Lipschitz loss function with a given Lipschitz constant and any $\delta\in(0,1)$, with probability at least $1-\delta$ over the sample, for all predictors in the constrained class, the expected loss is upper-bounded by the empirical loss plus a complexity term of order $O(1/\sqrt{n})$ (Kakade et al., 2009).
Then we can give a proof of Theorem 2.
Proof. Since the loss function has a Lipschitz constant of 2, we apply Lemma 1 with the corresponding norm bounds to both models and, combining the resulting bounds with Theorem 1, we reach the conclusion.