Cover Letter
Dear editor,
We are submitting a revised manuscript entitled "Toward Enhanced Robustness in Unsupervised Graph Representation Learning: A Graph Information Bottleneck Perspective" for your consideration for publication as a communication in IEEE Transactions on Knowledge and Data Engineering (TKDE). We briefly outline below the main contributions of this manuscript and the reasons why it deserves serious consideration for publication in TKDE.
The manuscript investigates the robust Unsupervised Graph Representation Learning (UGRL) problem. Recent studies have revealed that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. Most existing robust graph learning methods measure model robustness based on label information, rendering them infeasible when label information is unavailable. To learn robust node representations, a straightforward direction is to transplant the widely used Infomax technique from typical UGRL to the robust UGRL problem. Nonetheless, directly transplanting the Infomax technique in this way may involve a biased assumption, which can bias the representations toward embedding adversarial information from the adversarial graph, resulting in undesirable performance on downstream tasks. In light of this limitation of Infomax, we propose a novel unbiased robust UGRL method called Robust Graph Information Bottleneck (RGIB), which is grounded in the Information Bottleneck (IB) principle. RGIB attempts to learn node representations that are robust against adversarial perturbations by preserving the original information in the benign graph while eliminating the adversarial information in the adversarial graph. There are two main challenges in optimizing RGIB: 1) the high complexity of adversarial attacks that jointly perturb node features and graph structure during training; 2) mutual information estimation on adversarially attacked graphs. To tackle these problems, we further propose an efficient adversarial training strategy that uses only feature perturbations and an effective mutual information estimator based on subgraph-level summaries. Moreover, we theoretically establish a connection between RGIB and the robustness of downstream classifiers, revealing that RGIB can provide a lower bound on the adversarial risk of downstream classifiers.
Extensive experiments over several benchmarks and downstream tasks demonstrate the effectiveness and superiority of our proposed method.
We deeply appreciate your consideration of our manuscript and look forward to receiving the reviewers' comments. If you have any queries, please do not hesitate to contact me.
Sincerely yours,
Jihong Wang