Asymmetric Contrastive Multimodal Learning for Advancing Chemical Understanding
Abstract
The versatility of multimodal deep learning holds tremendous promise for advancing scientific research and practical applications. As this field continues to evolve, the collective power of cross-modal analysis promises to drive transformative innovations, leading us to new frontiers in chemical understanding and discovery. Hence, we introduce Asymmetric Contrastive Multimodal Learning (ACML), a novel approach tailored for molecules, and showcase its potential to advance the field of chemistry. ACML harnesses effective asymmetric contrastive learning to seamlessly transfer information from various chemical modalities to molecular graph representations. By combining pre-trained chemical unimodal encoders with a shallow five-layer graph encoder, ACML facilitates the assimilation of coordinated chemical semantics from different modalities, leading to comprehensive representation learning with efficient training. We demonstrate the effectiveness of this framework through large-scale cross-modality retrieval and isomer discrimination tasks. Additionally, ACML enhances interpretability by revealing chemical semantics in graph representations and bolsters the expressive power of graph neural networks, as evidenced by improved performance in molecular property prediction tasks from MoleculeNet and TDC. ACML thus has the capacity to reshape chemical research and applications by providing a deeper understanding of the chemical semantics of different modalities.
Keywords: Multimodal Learning, Molecular Graph Representation Learning, Chemical Data Mining, Contrastive Learning, Interpretability, Graph Neural Network.
1 Introduction
Multimodal deep learning (MMDL) is a thriving and interdisciplinary research field dedicated to enhancing artificial intelligence (AI) capabilities in comprehending, reasoning, and deducing valuable information across various communicative modalities [1]. It serves as an effective approach for the communication and integration of diverse data sources (text, images, audio, video, sensor data, etc.), inspiring the creation of more accurate and powerful AI models. For example, DALL-E2 [2] showcases substantial progress in image generation and modification given a short text prompt. Witnessing the recent surge of research in image and video comprehension [3, 4], text-to-image generation [2, 5], and embodied autonomous agents [6, 7], multidisciplinary research, including biology, chemistry, and physics, has also started to embrace multimodal deep learning to unlock profound insights into complex systems, tackle challenging scientific problems, and push the boundaries of knowledge [8, 9, 10, 11, 12].

In the realm of chemistry, MMDL has been extensively adapted to facilitate representation learning across several chemical modalities, including chemical language, chemical notations, and molecular graphs [14, 15, 16, 17, 18, 19, 20]. It is essential to note that the potential of MMDL extends far beyond these modalities [21, 22, 23, 24, 25, 26, 27, 28]. For instance, the molecular graphical depiction (referred to as image in the rest of the paper) [29] can offer powerful visual illustrations of molecules, providing unique insights into their spatial arrangements. Nuclear magnetic resonance (NMR) spectra, including 1H NMR and 13C NMR, offer a view of functional groups and characteristic features, enabling a deeper understanding of molecular behaviors and functionalities. Additionally, mass spectra from techniques such as gas chromatography–mass spectrometry (GCMS) and liquid chromatography–mass spectrometry (LCMS) provide clues about molecular composition and fragmentation, aiding the identification and characterization of compounds. However, representing molecular information comprehensively using a single modality is a challenging task [30]. For example, string-based representations like SMILES [31] lack important topology information, whereas molecular depictions fail to include electronic information. MMDL could provide a solution by allowing the communication and integration of various heterogeneous modalities, transcending the limitations associated with each individual modality. However, there is limited work exploring such advantages in chemical systems.
Among the implementations of MMDL, contrastive learning has emerged as one of the most prevalent self-supervised approaches for information communication across multiple modalities [32]. Its objective is to align multiple views of the same instance across modalities while distancing the representations from different instances. One typical approach is to adopt a coordinated mechanism, like CLIP (Contrastive Language-Image Pretraining) framework [13], to enhance multimodal communication. The coordinated model establishes correspondences between the learned representations by first projecting these representations onto a joint hidden space and then contrasting positive pairs (similar instances) against negative pairs (dissimilar instances). In the domain of chemistry, great efforts have been directed towards using contrastive learning for tasks such as property predictions [16], unimodal pattern mining [33], and zero-shot classifications [21]. However, there remains a notable gap in research investigating the potential for integrating information across different modalities and elucidating the interpretability of learned representations within the context of multimodal coordination.
Many recent works have demonstrated strong advantages of molecular images [34], SMILES [35], GCMS [36], and molecular graphs [37] in producing molecular representations for drug discovery. But what contributes to such superiority? It is important to recognize that the drug discovery process operates on multiple layers of informational hierarchy. These layers range from atomic-level properties such as hydrogen-bonding acceptors (HBA), to motif-level characteristics like hydrogen-bonding donors (HBD) and the number of stereogenic (chiral) centers, and finally to molecular-level attributes such as molecular weight, topological polar surface area (TPSA), and the logarithmic partition coefficient (LogP). To answer the question, we need to investigate how diverse modalities impact these hierarchical properties.
In light of these opportunities, we propose a novel approach called Asymmetric Contrastive Multimodal Learning (ACML) specifically designed for molecules. It leverages contrastive learning between the molecular graph and other prevalent chemical modalities, including G-SMILES, G-Image, G-1H NMR, G-13C NMR, G-GCMS, and G-LCMS pairs (G for the abbreviation of Graph), to transfer the information from other chemical modalities into the graph representations in an asymmetric way. As graph representation can express hierarchical molecular information via carefully crafted graph neural networks (GNNs) [38, 39, 40, 41, 42, 43, 44, 45], we regard it as a valuable receptor containing basic topology information of a given molecule to assimilate information from other modalities. Unlike other initiatives in molecular properties prediction [46, 47], our emphasis is primarily on direct data mining utilizing graph latent representation. ACML enables graph representation learning to capture knowledge across various chemical modalities, promoting a more holistic understanding of the hierarchical molecular information within diverse input chemical modalities. This multimodal learning framework not only enhances the interpretability of learned representations but also holds the potential to significantly improve the expressive power of graph neural networks within the field of chemistry. Figure 1 illustrates the conceptual view of the ACML framework.
In our proposed framework, each ACML model involves the graph encoder and one chemical unimodal encoder; for example, G-SMILES denotes the multimodal learning between graph and SMILES modalities. We utilize a dedicated pre-trained unimodal encoder for each chemical modality (SMILES, Image, 1H NMR, 13C NMR, GCMS, and LCMS), keeping its parameters fixed during the ACML training stage. Concurrently, we train a simple and shallow graph encoder to effectively capture the representations of each modality. This approach allows the fixed chemical encoder to effectively transfer knowledge to graph encoders through asymmetric contrastive learning. Finally, we analyze and compare the embeddings generated by the trained graph encoders to discover their explanatory power. In summary, the advantages of the ACML framework can be viewed as follows:
1. Leverage effective asymmetric contrastive learning between graph and chemical modalities.
2. Achieve an efficient training scheme using the shallow graph encoder and pre-trained chemical encoders.
3. Empower knowledge transfer from chemical modalities into the graph encoder during cross-modal chemical semantics learning.
4. Demonstrate the effectiveness and interpretability of graph learning in ACML through four key tasks: (1) cross-modality retrieval, (2) isomer discrimination, (3) revealing chemical semantics in graph embeddings, and (4) molecular property prediction.
2 Results
2.1 ACML Framework
The proposed ACML framework is built upon the contrastive learning framework inspired by [13]. It effectively promotes correspondences between modalities of the same molecule (positive pairs) while contrasting them with different molecules (negative pairs). The whole pipeline, elaborated in Section A.1, is composed of four components: a unimodal encoder for molecular modalities, a graph encoder formalized by graph neural networks, a non-linear projection head, and the temperature-scaled cross-entropy contrastive loss.
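The temperature-scaled cross-entropy contrastive objective above can be sketched in a few lines. This is a minimal NumPy illustration of the CLIP-style InfoNCE loss over a batch of paired embeddings, not the paper's implementation; the function name, batch, and temperature value are illustrative.

```python
import numpy as np

def info_nce_loss(graph_emb, chem_emb, temperature=0.07):
    """Temperature-scaled cross-entropy (InfoNCE) over a batch of
    graph/chemical-modality embedding pairs. Row i of each matrix
    belongs to the same molecule, so the diagonal holds positives."""
    # L2-normalize so dot products are cosine similarities
    g = graph_emb / np.linalg.norm(graph_emb, axis=1, keepdims=True)
    c = chem_emb / np.linalg.norm(chem_emb, axis=1, keepdims=True)
    logits = g @ c.T / temperature  # (N, N) similarity matrix

    def xent(l):
        # cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # average over both retrieval directions (graph->chem, chem->graph)
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
# perfectly aligned pairs yield a much lower loss than random pairs
aligned = info_nce_loss(emb, emb)
random_ = info_nce_loss(emb, rng.normal(size=(8, 16)))
print(aligned < random_)
```

The asymmetry in ACML lives not in the loss itself but in the optimization: gradients only update the graph encoder while the chemical encoder stays frozen.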
The ACML framework is adaptable to diverse encoder designs, facilitating information extraction and compression. In our study, the encoder for each modality is a convolutional neural network trained to capture features for subsequent reconstruction tasks. Due to the inherent characteristics of each modality, the encoder designs vary slightly. For instance, while 13C NMR and 1H NMR spectra both pertain to nuclear magnetic resonance, 1H NMR presents a higher degree of noise than 13C NMR, along with a greater prevalence of peak overlap, whereas 13C NMR peaks are more distinct. To address these disparities, 1H NMR data may require preprocessing to extract condensed, peak-related information, allowing a lightweight encoder that strategically targets pivotal features of the peaks. For 13C NMR, the unaltered experimental data can be employed directly, and the encoder architecture has more convolutional layers, tailored to capture intricate details.
2.2 Cross-modality Retrieval
Molecules can be represented by a variety of modalities, such as depiction images, SMILES notations, and traditional chemical characterization techniques. However, each modality possesses distinct capabilities and constraints when it comes to deciphering molecular structures. In terms of structural attributes, molecular depiction images and isomeric SMILES notations offer a detailed overview of the whole molecule from the atom level to the molecule level, facilitating the elucidation of three-dimensional atomic configurations. NMR spectra provide distinct windows into the carbon and hydrogen atomic landscapes of the molecule, offering partial structural information. Mass spectra provide crucial information about molecular mass and patterns of fragmentation, encompassing both molecular-level and motif-level information. To evaluate each ACML model, we conduct a cross-modality retrieval task: determine whether the model can accurately match a chemical modality to its corresponding graph modality from a large database of molecular graphs. Higher retrieval accuracy indicates better multi-modal coordination between graph and chemical representations. Note that none of the molecules in the graph database appear in training.
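The retrieval metric can be sketched directly: embed the query modality and the graph gallery, rank gallery items by cosine similarity, and count how often the true graph lands in the top k. A minimal NumPy sketch under the assumption that paired embeddings share the same row index (function name and synthetic data are illustrative):

```python
import numpy as np

def topk_retrieval_accuracy(query_emb, gallery_emb, k=10):
    """Fraction of queries whose true graph (same row index in the
    gallery) appears among the k most cosine-similar gallery items."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T  # (N_query, N_gallery) cosine similarities
    topk = np.argsort(-sims, axis=1)[:, :k]  # k best gallery indices
    truth = np.arange(len(q))[:, None]       # ground-truth index
    return float(np.mean(np.any(topk == truth, axis=1)))

rng = np.random.default_rng(1)
gallery = rng.normal(size=(1000, 32))
# simulate a well-coordinated model: queries are noisy gallery copies
noisy_queries = gallery + 0.1 * rng.normal(size=gallery.shape)
print(topk_retrieval_accuracy(noisy_queries, gallery, k=10))
```

As in the paper's setup, enlarging the gallery makes the task harder because each query competes against more negatives.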

Figure 2 presents the retrieval accuracy results for all ACML models, including G-SMILES, G-Image, G-1H NMR, G-13C NMR, G-GCMS, and G-LCMS. These results are obtained by calculating the top-k accuracy when matching one molecule's chemical modality to its corresponding graph within a graph database of varying scales. The task becomes more challenging as the scale increases. When the scale transitions from 1,000 (1K) to 1 million (1M), G-Image gives the best results: the top-10 accuracy drops from 100% to 94.8%, a reduction of only 5.2%. G-SMILES gives the second best results, with top-10 accuracy dropping from 100% to 86.5%. For G-1H NMR, the results are not as good as G-Image and G-SMILES, as expected. Specifically, when the scale transitions from 1K to 10K, the top-10 accuracy drops from 94.9% to 79.7%, and with 1M unseen molecules, the top-10 accuracy is 43.6%. G-13C NMR exhibits similar performance: the top-10 accuracy drops from 93.9% to 81.2% and finally to 43.8% as the scale grows from 1K through 10K to 1M. Notably, G-13C NMR performs better than G-1H NMR as the search space expands. Intuitively, this aligns with expectations, since 13C NMR peaks are typically sharp and barely overlap in small molecules. In contrast, 1H NMR spectra are more prone to signal overlap, especially in complex molecular structures. Although the encoders of 13C NMR and 1H NMR have been designed to cope with such differences, the training dataset inevitably falls short of encompassing the entirety of structural variations within the zero-shot validation set. Additionally, 1H NMR might encounter sensitivity issues, especially for compounds with low hydrogen content. Nonetheless, both G-1H NMR and G-13C NMR exhibit satisfactory discrimination power in identifying the most probable graphs given the NMR sequences.
This demonstrates the flexibility of our ACML framework with features from different spaces and dimensions generated from different designs of encoders.
For the mass spectra modalities, the results do not align as favorably as with Images, SMILES, and NMR spectra, contrary to initial expectations. For G-GCMS, when the scale transitions from 1K to 10K, the top-10 accuracy drops from 75.2% to 46.3%; when the validation database extends to 1M, it drops further to 19.4%. For G-LCMS, the top-10 accuracy drops from 45.5% to 32.2% as the scale transitions from 1K to 10K, and to 16.9% at 1M. One may observe that G-GCMS performs better on molecular recognition than G-LCMS. The reason is that GCMS is typically carried out under harsh conditions, resulting in a wide array of fragmentation that unveils the molecular composition. In contrast, LCMS is generally performed under milder conditions, resulting in reduced fragmentation; in particular, LC-MS/MS primarily concentrates on localized information near the precise molecular weight. Consequently, for cross-modality retrieval, LCMS offers less comprehensive information.
2.3 Isomer Discrimination
Task Description.
Besides calculating accuracy among a large group of molecules, in which many molecules could be intrinsically different, isomer recognition stands as one of the most challenging tasks in molecular identification due to the intricacy of isomeric structures. Distinguishing between stereoisomers, which share the same connectivity but differ in spatial arrangement, poses a particular challenge and demands advanced analytical methods and meticulous data interpretation.
To thoroughly evaluate the model’s discriminative power, we selected 140 isomer pairs from the test dataset in a systematic way, including both structural isomers and stereoisomers. First, all SMILES representations were converted to molecular formulas using the RDKit package in Python. Molecules sharing the same molecular formula were grouped, and pairs within each group were identified as isomers. To ensure meaningful comparisons, we visualized the isomer pairs and excluded those with significantly different structures (e.g., ring vs. chain) while retaining only pairs with high structural similarity (differing by fewer than three bonds). From the remaining isomer pairs, we randomly selected 40 stereoisomers and 100 structural isomers to form a representative test set. The selected pairs are provided in Supplementary Material S1, with the isomer distribution shown in Table 1.
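The first selection step, grouping molecules by molecular formula and pairing within groups, can be sketched as follows. In the paper the formulas come from RDKit; here they are hardcoded for a toy example, and the function name is illustrative.

```python
from collections import defaultdict
from itertools import combinations

def candidate_isomer_pairs(molecules):
    """Group molecules by molecular formula and return all within-group
    pairs as candidate isomer pairs. `molecules` is a list of
    (smiles, formula) tuples; in practice the formula would be computed
    with RDKit rather than supplied by hand."""
    groups = defaultdict(list)
    for smiles, formula in molecules:
        groups[formula].append(smiles)
    pairs = []
    for formula, members in groups.items():
        # every unordered pair within a formula group is a candidate
        pairs.extend((a, b, formula) for a, b in combinations(members, 2))
    return pairs

# toy example: butane vs. isobutane share C4H10; ethanol stands alone
mols = [("CCCC", "C4H10"), ("CC(C)C", "C4H10"), ("CCO", "C2H6O")]
print(candidate_isomer_pairs(mols))
# → [('CCCC', 'CC(C)C', 'C4H10')]
```

The subsequent filtering by structural similarity and the structural/stereo split described above would then be applied to these candidate pairs.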
Results and Analysis.
Both human experts and the ACML model were tasked with associating NMR spectra to the most likely molecular isomers. Beyond the binary classifications, the ACML model produces confidence scores that indicate the alignment strength between the NMR spectra and potential molecular structures; higher confidence scores represent stronger alignment under our proposed framework. When comparing the expert's performance across different NMR spectroscopy techniques, the expert demonstrated better accuracy using 1H NMR than 13C NMR spectra. This is consistent with the fact that protons have the highest natural abundance (~99%), resulting in denser 1H NMR signals that provide more information for distinguishing between isomers. In contrast, the lower natural abundance of 13C (~1%) means fewer signals are available for analysis. The performance comparison between the model and the expert is presented in Table 2. Our model matches or outperforms the human expert on both 1H NMR and 13C NMR tasks, with a much more significant lead on the 13C NMR tasks. This demonstrates the effectiveness of representation learning using our proposed framework.
| | Structural Isomer | Stereoisomer |
|---|---|---|
| | Count (Avg. Similarity) | Count (Avg. Similarity) |
| 1H NMR | 47 (0.43) | 15 (1.00) |
| 13C NMR | 61 (0.47) | 17 (1.00) |
| | 1H NMR | | 13C NMR | |
|---|---|---|---|---|
| | Structural Isomer | Stereoisomer | Structural Isomer | Stereoisomer |
| Model Accuracy | 82.6% | 51.6% | 82.9% | 60.0% |
| Human Accuracy | 78.0% | 51.6% | 51.9% | 57.1% |
To visualize the model’s performance, four cases of 13C NMR are shown in Figure 3 and 1H NMR examples are shown in Figure 4. The percentage on the arrows represents the ACML model’s confidence score, with higher percentages indicating greater certainty in matching the NMR spectra to specific molecular isomers. These scores provide a probabilistic measure of the model’s confidence in structural assignments.


2.4 Visualization of Molecular Representations
Task Description.
In the ACML framework, the graph modality serves as an information receptor capable of bringing forth the information or chemical rules from another modality via coordination. Thus, it may reveal distinct dimensions of chemical significance among diverse chemical modalities. To illuminate variations of chemical semantics embedded in the hidden graph representations acquired from various ACML frameworks, we computed graph embeddings of 30,000 previously unseen molecules using the pre-trained graph encoders for analysis. Given 8 important chemical characteristics in drug discovery: molecular weight (MW), the logarithmic partition coefficient (LogP), hydrogen-bonding acceptors (HBA), hydrogen-bonding donors (HBD), topological polar surface area (PSA), the number of rotatable bonds (#R-Bonds), the number of stereogenic (chiral) centers (#C-Centers), and the quantitative estimate of drug-likeness (QED), we examined how these characteristics correlate with the graph embeddings. The scope of this inquiry extends to understanding the impact of these modalities on crucial parameters within the domain of drug discovery, highlighting the strengths of each molecular modality.
Evaluations and Baselines.
We decomposed the feature dimensions via PCA [49] and mapped the resulting embeddings onto a two-dimensional plane. The outcomes were visualized as 2D points colored by their corresponding chemical property values. Figure 5 presents the visualization results. Notably, a remarkable consistency in the distribution of points with respect to the chemical property values was observed. This observation underscores the proficiency of the pre-trained graph encoder in capturing the inherent chemical semantics. We then conducted a quantitative evaluation of the relationship between graph embeddings and the chemical properties using Pearson correlation coefficients (PCC) [50]. In our approach, we fit a linear regression model to find the linear combination of the two PCA components that maximizes the Pearson correlation with each chemical property of interest. We also computed graph embeddings from other pretrained graph encoders and performed PCA in the same fashion to compare how their latent representations differ from ACML. Among the baselines, L2P-GNN [51] and MGSSL [52] develop motif-level self-supervised tasks during pretraining, while MolCLR [30] and MoMu [53] use contrastive learning in pretraining.
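The PCC evaluation described above can be sketched in NumPy: project onto the top-2 principal components, fit the property by least squares over those components, and report the Pearson correlation of the fit. The function name and synthetic data are illustrative, not the paper's code.

```python
import numpy as np

def best_linear_pcc(embeddings, prop):
    """Project embeddings onto their top-2 PCA components, fit a linear
    combination (with intercept) of the components to the property by
    least squares, and return the Pearson correlation between the
    fitted values and the property."""
    X = embeddings - embeddings.mean(axis=0)
    # PCA via SVD: rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:2].T                             # (N, 2) PC scores
    A = np.column_stack([pcs, np.ones(len(pcs))])  # design matrix
    coef, *_ = np.linalg.lstsq(A, prop, rcond=None)
    fitted = A @ coef
    return float(np.corrcoef(fitted, prop)[0, 1])

rng = np.random.default_rng(2)
emb = rng.normal(size=(200, 16))
# sanity check: a property equal to the first PC score is fit exactly
X = emb - emb.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
prop = X @ Vt[0]
print(round(best_linear_pcc(emb, prop), 3))   # → 1.0
```

Fitting by least squares and then taking the PCC of the fit is equivalent to maximizing the correlation over linear combinations of the two components.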
Results and Analysis.
Table 3 presents the PCC scores of all ACML frameworks and baselines. All graph embedding visualizations are provided in Supplementary Material S2. The knowledge of the chemical modalities is captured and highlighted by the degree of correlation, particularly for key properties relevant to drug discovery. Among the methods analyzed, the ACML frameworks generally outperform the baseline models, achieving the highest PCCs in seven out of eight properties; the exception is #R-Bonds, where MoMu achieves the top PCC (0.617). Regarding MW, G-Image and G-SMILES are the leading performers. Molecular weight can be effectively expressed by the pixels of a molecular depiction image and by the structural motifs encoded in SMILES, which accounts for the superior performance of these two models in this context. G-GCMS and G-LCMS also demonstrate competitive performance, attributable to their direct measurement of molecular mass in spectrometry. Regarding #HBA, G-SMILES emerges as the top performer, as expected: electronegative HBA atoms, such as oxygen (O) and nitrogen (N), are directly denoted in SMILES. Regarding #HBD, G-Image is the top-performing model, as HBD groups, such as hydroxyl (-OH), amine (-NH2), and carboxylic acid (-COOH) groups, are explicitly illustrated in the images; because hydrogens are not always written in SMILES, G-SMILES performs slightly lower. Regarding LogP, G-13C NMR is the clear winner, followed by the MGSSL (BFS) baseline. This outcome is attributable to the fact that LogP measures the solubility of a solute in both water and organic solvents, making it highly reliant on electronic information within the molecule; this electronic information is well captured by 13C NMR and remains concealed in other modalities such as Image, SMILES, or mass spectrometry. Regarding PSA, G-SMILES achieves the highest PCC, followed closely by G-Image. The strong performance of G-SMILES reflects its explicit representation of electronegative atoms, such as oxygen and nitrogen, which directly contribute to PSA, while G-Image captures polar regions through pixel-based molecular depictions. Regarding #R-Bonds, MoMu demonstrates the highest PCC, while G-1H NMR follows closely as the second best and significantly outperforms all other modalities; the rotation of chemical bonds can be inferred from the magnetic equivalence of vicinal and geminal protons, which is not revealed in other modalities.
Remark 1. Note that PCC only measures linear correlation and does not account for nonlinear associations, so a low PCC score does not necessarily imply a lack of correlation. For example, the embedding visualization of G-1H NMR for the "#R-Bonds" task in Figure 5 shows a clearly continuous color distribution, despite the relatively low PCC value of 0.587.
Remark 2. These 8 chemical properties play vital roles in drug discovery. If a graph neural network is able to reveal this information, it offers the potential for deep models to grasp the basic philosophy of molecular design and even enable breakthroughs in drug discovery. Note that these properties are never seen during ACML learning and the selected molecules never appear in training; these surprising findings demonstrate that our ACML framework can transfer chemical meaning to the graph encoder, enhancing its interpretability.

| | MW | #HBA | #HBD | LogP | PSA | #R-Bonds | #C-Centers | QED |
|---|---|---|---|---|---|---|---|---|
| L2P-GNN | 0.053 | 0.026 | 0.027 | 0.029 | 0.026 | 0.024 | 0.237 | 0.056 |
| MGSSL (BFS) | 0.259 | 0.639 | 0.618 | 0.700 | 0.670 | 0.331 | 0.789 | 0.371 |
| MGSSL (DFS) | 0.255 | 0.638 | 0.504 | 0.673 | 0.649 | 0.266 | 0.759 | 0.381 |
| MoMu | 0.263 | 0.305 | 0.225 | 0.386 | 0.280 | 0.617 | 0.609 | 0.213 |
| MolCLR | 0.228 | 0.294 | 0.207 | 0.164 | 0.244 | 0.378 | 0.558 | 0.182 |
| G-Image | 0.742 | 0.671 | 0.724 | 0.355 | 0.736 | 0.325 | 0.630 | 0.597 |
| G-SMILES | 0.623 | 0.892 | 0.716 | 0.564 | 0.869 | 0.244 | 0.830 | 0.492 |
| G-1H NMR | 0.202 | 0.361 | 0.227 | 0.588 | 0.303 | 0.587 | 0.388 | 0.235 |
| G-13C NMR | 0.195 | 0.673 | 0.569 | 0.805 | 0.659 | 0.277 | 0.641 | 0.188 |
| G-GCMS | 0.553 | 0.676 | 0.623 | 0.444 | 0.670 | 0.076 | 0.848 | 0.435 |
| G-LCMS | 0.522 | 0.403 | 0.317 | 0.357 | 0.385 | 0.144 | 0.669 | 0.223 |
2.5 Molecular Property Prediction Tasks
Molecular property prediction is a critical graph-level prediction task used to assess the generalization capability of pre-trained graph encoders. In other words, it evaluates how well the encoder’s learned representations can be adapted to downstream tasks. Additionally, finetuning on a small dataset is challenging because models primarily depend on their pre-training capabilities in this scenario. By using a limited dataset, we can better assess the model’s representation learning capabilities and understand how well the pre-trained knowledge transfers to specific tasks. Thus, we intentionally choose to finetune the model using relatively small datasets to evaluate the pretrained graph encoder from the ACML framework across a range of downstream tasks.
2.5.1 Experimental Settings
Datasets.
We employ two key molecular benchmark resources. The first, MoleculeNet [54], is a commonly used benchmark specially designed for testing machine learning methods on molecular properties. The second, Therapeutics Data Commons (TDC) [55], is a recent platform providing molecular datasets and learning tasks for therapeutics. The molecular data in TDC are in SMILES format; we convert them into molecular graphs using the OGB package [56]. Our focus is on tasks involving small-molecule property prediction, with an emphasis on properties relevant to drug discovery. Therefore, in MoleculeNet, we consider 6 classification and 2 regression datasets. In TDC, we consider 12 binary classification datasets covering ADME properties (absorption, distribution, metabolism, and excretion). A small-molecule drug needs to travel from the site of administration (e.g., oral) to the site of action (e.g., a tissue) and then decompose and exit the body; to do so safely and efficaciously, the chemical must have numerous favorable ADME properties. For classification tasks, ROC-AUC is used as the evaluation metric, and for regression tasks, RMSE is used.
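The ROC-AUC metric used for the classification tasks has a simple rank-based (Mann-Whitney U) formulation: the probability that a randomly chosen positive is scored above a randomly chosen negative, with ties counted half. A small NumPy sketch (standard benchmarks would typically use a library routine such as scikit-learn's `roc_auc_score` instead):

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC-AUC via pairwise comparisons of positive vs. negative
    scores; quadratic in the number of samples, fine for small sets."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive ranked higher
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))   # → 0.75
```

A score of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why the tables below report ROC-AUC as a percentage.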
Models and Baselines.
For each task, we employ GIN [57] with 5 layers as the backbone graph encoder, initialized with weights pre-trained by the ACML framework. We compare our models with other pretraining baselines for molecular graph learning. Among the baselines, L2P-GNN [51], MGSSL [52] with different generation orders (BFS and DFS), and MotifConv [44] propose self-supervised tasks within graphs to capture substructural or motif-level information during pretraining, while MolCLR [30] and MoMu [53] develop contrastive learning using augmented graphs or other modalities during pretraining. MotifConv was excluded from the TDC experiments due to its high complexity during pretraining. For a fair comparison, all baselines also use the 5-layer GIN model as the backbone graph encoder. Experiments using GIN without pretraining are also conducted for comparison. More experimental details can be found in Section B.2.2. For each dataset, the average and standard deviation of model performance are reported over 5 independent runs.
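For readers unfamiliar with the GIN backbone, one message-passing layer computes h_v' = MLP((1 + eps) * h_v + sum of neighbor features). A minimal NumPy sketch of a single layer on a toy graph; a real GIN (e.g., the 5-layer backbone here) would use learnable eps and parameters trained end-to-end, and the weights below are illustrative.

```python
import numpy as np

def gin_layer(H, adj, W1, b1, W2, b2, eps=0.0):
    """One GIN message-passing step: sum-aggregate neighbor features,
    mix in the node's own feature scaled by (1 + eps), then apply a
    two-layer MLP with ReLU."""
    agg = (1.0 + eps) * H + adj @ H          # (N, d) aggregated features
    hidden = np.maximum(agg @ W1 + b1, 0.0)  # ReLU activation
    return hidden @ W2 + b2

# toy triangle graph with 2-d node features and identity-like MLP
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
H = np.eye(3, 2)                 # one-hot-ish initial features
d = 2
W1, b1 = np.eye(d), np.zeros(d)
W2, b2 = np.eye(d), np.zeros(d)
print(gin_layer(H, adj, W1, b1, W2, b2))
```

Stacking five such layers and pooling node features into a graph-level vector yields the representation that is fine-tuned on each property prediction task.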
2.5.2 Results and Analysis
MoleculeNet Datasets.
Table 4 shows the test performance on MoleculeNet. First, comparing against models without pretraining, we observe that the best ACML models generally outperform them by large margins. Specifically, G-Image improves on all datasets, including an average AUC-ROC increase of 10.0% on BACE, 16.4% on Clintox, and 7.9% on HIV, along with RMSE reductions of 33.3% on ESOL and 47.9% on FreeSolv. These results suggest that contrastive learning with G-Image effectively captures useful information in molecular depictions, leading to a better understanding of the overall molecular structure. Second, compared with other self-supervised pretraining strategies, the proposed ACML framework achieves the best performance on 7 of 8 datasets. These improvements indicate that ACML is a powerful self-supervised learning strategy that is easy to implement and requires minimal manual design of self-supervised tasks. Third, comparing ACML models using different chemical modalities as inputs, we observe that G-Image generally outperforms other modalities: it achieves the best performance on BACE, Sider, and ESOL and ranks second on FreeSolv. Although G-Image does not lead on BBBP, Tox21, and HIV, its performance is close to the top. This is likely due to its ability to capture comprehensive molecular structure information, which is crucial for most molecular property prediction tasks. These observations on the superiority of G-Image are consistent with the analysis in Section 2.4, where graph representations from G-Image exhibit strong correlations with various chemical properties relevant to drug discovery. In contrast, other modalities provide specific insights that can benefit certain tasks but are not universally effective. For instance, G-1H NMR achieves the highest average ROC-AUC of 83.9% on Clintox, 81.6% on Tox21, and 84.9% on HIV, but performs worse than the non-pretrained model on BBBP (-3.3%) and Sider (-3.3%).
Similar cases are observed for other tasks and other modalities.
| | BACE | BBBP | Clintox | Sider | Tox21 | HIV | ESOL | FreeSolv |
|---|---|---|---|---|---|---|---|---|
| GIN (no pretrain) | 79.7 ± 4.9 | 87.3 ± 3.6 | 53.0 ± 6.5 | 61.6 ± 2.5 | 78.3 ± 2.4 | 76.2 ± 5.8 | 1.262 ± 0.156 | 4.672 ± 1.549 |
| L2P-GNN | 81.9 ± 0.4 | 87.0 ± 0.8 | 54.0 ± 2.4 | 61.7 ± 1.8 | 77.4 ± 0.6 | 78.0 ± 2.6 | 1.772 ± 0.479 | 3.733 ± 1.323 |
| MGSSL (DFS) | 79.7 ± 0.8 | 70.5 ± 1.1 | 79.7 ± 2.2 | 60.5 ± 0.7 | 76.4 ± 0.4 | 79.5 ± 1.1 | 0.914 ± 0.095 | 3.401 ± 0.896 |
| MGSSL (BFS) | 79.1 ± 0.9 | 69.7 ± 0.1 | 80.7 ± 2.1 | 61.8 ± 0.8 | 76.5 ± 0.3 | 78.8 ± 1.2 | 0.861 ± 0.056 | 2.596 ± 0.997 |
| MotifConv | 82.0 ± 5.5 | 90.0 ± 3.1 | 65.5 ± 13.9 | 62.7 ± 2.8 | 80.2 ± 1.5 | 80.0 ± 4.3 | 1.401 ± 0.153 | 3.697 ± 0.400 |
| MoMu | 72.1 ± 0.3 | 70.2 ± 0.4 | 71.1 ± 3.0 | 56.1 ± 0.4 | 74.5 ± 0.5 | 77.2 ± 1.5 | 1.413 ± 0.104 | 3.374 ± 0.752 |
| MolCLR | 74.6 ± 1.1 | 68.0 ± 2.0 | 62.0 ± 2.7 | 56.3 ± 2.2 | 73.9 ± 0.5 | 72.1 ± 1.3 | 1.354 ± 0.213 | 2.817 ± 0.911 |
| G-Image | 89.7 ± 0.8 | 87.4 ± 1.0 | 69.4 ± 2.3 | 62.9 ± 1.6 | 80.9 ± 0.8 | 84.1 ± 1.7 | 0.842 ± 0.002 | 2.431 ± 0.002 |
| G-SMILES | 85.6 ± 0.5 | 83.7 ± 0.5 | 68.5 ± 0.4 | 58.6 ± 0.4 | 81.0 ± 0.2 | 82.8 ± 0.7 | 0.940 ± 0.002 | 2.337 ± 0.002 |
| G-1H NMR | 86.6 ± 1.3 | 84.0 ± 3.8 | 83.9 ± 7.1 | 58.3 ± 0.9 | 81.6 ± 0.1 | 84.9 ± 0.9 | 1.059 ± 0.077 | 2.664 ± 0.301 |
| G-13C NMR | 87.5 ± 1.1 | 81.7 ± 3.2 | 75.6 ± 6.0 | 54.8 ± 0.8 | 81.6 ± 0.5 | 84.4 ± 0.7 | 1.049 ± 0.037 | 3.340 ± 0.314 |
| G-GCMS | 84.7 ± 2.1 | 79.1 ± 2.1 | 67.7 ± 8.9 | 60.3 ± 2.7 | 79.9 ± 1.1 | 83.7 ± 0.9 | 1.116 ± 0.038 | 3.807 ± 0.277 |
| G-LCMS | 79.7 ± 4.7 | 86.5 ± 3.4 | 62.3 ± 9.6 | 60.9 ± 0.6 | 81.4 ± 0.8 | 82.2 ± 0.5 | 1.108 ± 0.136 | 3.886 ± 0.196 |
 | HIA | PAMPA Permeability | Bioavailability | PGP Inhibition | CYP2C19 Inhibition | CYP2D6 Inhibition |
---|---|---|---|---|---|---|
GIN (no pretrain) | 93.5 ± 1.7 | 67.1 ± 1.1 | 63.4 ± 0.9 | 89.1 ± 0.6 | 87.4 ± 0.3 | 84.8 ± 0.4 |
L2P-GNN | 82.8 ± 2.0 | 50.3 ± 2.0 | 61.7 ± 1.9 | 76.6 ± 1.9 | 83.1 ± 0.2 | 80.2 ± 0.3 |
MGSSL (BFS) | 96.6 ± 0.5 | 70.9 ± 0.9 | 62.1 ± 0.3 | 90.3 ± 0.6 | 88.0 ± 0.1 | 85.0 ± 0.1 |
MGSSL (DFS) | 91.8 ± 1.0 | 65.4 ± 4.4 | 59.9 ± 3.6 | 87.9 ± 0.4 | 87.2 ± 0.2 | 83.6 ± 0.4 |
MoMu | 91.1 ± 2.7 | 67.5 ± 2.0 | 61.6 ± 0.8 | 88.8 ± 0.4 | 86.4 ± 0.1 | 83.2 ± 0.4 |
MolCLR | 94.0 ± 1.5 | 70.1 ± 2.9 | 65.0 ± 2.0 | 88.0 ± 0.6 | 87.1 ± 0.4 | 85.1 ± 0.5 |
G-Image | 93.2 ± 1.0 | 69.4 ± 1.5 | 67.8 ± 7.0 | 90.5 ± 0.9 | 88.1 ± 0.2 | 85.2 ± 0.3 |
G-SMILES | 95.5 ± 0.9 | 71.5 ± 2.6 | 64.1 ± 8.9 | 92.3 ± 2.9 | 88.5 ± 0.2 | 86.8 ± 0.1 |
G-1H NMR | 94.9 ± 2.2 | 72.7 ± 2.6 | 72.3 ± 2.5 | 90.3 ± 2.4 | 88.3 ± 0.3 | 85.7 ± 0.6 |
G-13C NMR | 96.5 ± 4.7 | 70.9 ± 2.9 | 67.5 ± 3.1 | 90.7 ± 1.9 | 88.4 ± 0.5 | 85.6 ± 0.8 |
G-GCMS | 96.9 ± 1.6 | 67.5 ± 3.1 | 68.0 ± 2.6 | 91.3 ± 0.8 | 87.9 ± 0.2 | 84.3 ± 0.8 |
G-LCMS | 92.2 ± 0.2 | 64.7 ± 2.5 | 67.9 ± 1.8 | 90.9 ± 0.5 | 87.7 ± 0.3 | 83.7 ± 0.3 |
 | CYP3A4 Inhibition | CYP1A2 Inhibition | CYP2C9 Inhibition | CYP2C9 Substrate | CYP2D6 Substrate | CYP3A4 Substrate |
---|---|---|---|---|---|---|
GIN (no pretrain) | 86.5 ± 0.5 | 92.0 ± 0.3 | 86.5 ± 0.4 | 56.6 ± 2.6 | 77.9 ± 2.8 | 56.7 ± 2.9 |
L2P-GNN | 79.1 ± 0.3 | 88.1 ± 0.2 | 78.6 ± 0.3 | 59.9 ± 2.0 | 70.6 ± 2.2 | 51.6 ± 1.7 |
MGSSL (BFS) | 86.8 ± 0.2 | 92.0 ± 0.1 | 86.4 ± 0.2 | 58.0 ± 1.4 | 78.6 ± 1.4 | 57.4 ± 2.0 |
MGSSL (DFS) | 85.5 ± 0.2 | 91.5 ± 0.1 | 85.5 ± 0.1 | 59.5 ± 1.4 | 77.7 ± 2.5 | 60.9 ± 0.7 |
MoMu | 85.6 ± 0.4 | 91.5 ± 0.1 | 86.4 ± 0.3 | 56.1 ± 2.1 | 79.8 ± 0.6 | 57.7 ± 2.6 |
MolCLR | 86.6 ± 0.5 | 92.1 ± 0.3 | 86.5 ± 0.3 | 56.9 ± 2.3 | 79.9 ± 3.4 | 60.4 ± 1.4 |
G-Image | 87.7 ± 0.3 | 92.5 ± 0.3 | 87.4 ± 0.3 | 61.9 ± 1.8 | 82.8 ± 1.1 | 65.2 ± 3.8 |
G-SMILES | 88.0 ± 0.3 | 92.4 ± 0.1 | 88.3 ± 0.5 | 61.4 ± 2.8 | 76.8 ± 2.4 | 64.1 ± 2.9 |
G-1H NMR | 87.8 ± 0.3 | 92.2 ± 0.2 | 87.8 ± 0.2 | 63.4 ± 1.3 | 79.1 ± 4.0 | 57.3 ± 2.3 |
G-13C NMR | 87.5 ± 0.4 | 92.1 ± 0.3 | 87.6 ± 0.4 | 62.6 ± 1.6 | 76.9 ± 2.0 | 61.9 ± 4.6 |
G-GCMS | 86.6 ± 0.3 | 91.3 ± 0.2 | 87.1 ± 0.4 | 60.2 ± 1.6 | 76.9 ± 1.4 | 62.4 ± 1.3 |
G-LCMS | 87.1 ± 0.2 | 91.6 ± 0.2 | 87.0 ± 0.4 | 57.4 ± 2.2 | 73.7 ± 3.1 | 63.3 ± 1.8 |
TDC Datasets.
The results are shown in Table 5. As with the MoleculeNet datasets, the proposed ACML framework achieves the best performance on all datasets, demonstrating its superiority over other self-supervised pretraining strategies. Specifically, we found that G-SMILES consistently performs well on inhibition-related property tasks, achieving top or near-top results across six tasks: PGP Inhibition, CYP2C19 Inhibition, CYP2D6 Inhibition, CYP3A4 Inhibition, CYP1A2 Inhibition, and CYP2C9 Inhibition. In these tasks, the model predicts whether a drug inhibits a specific enzyme, potentially reducing the enzyme's ability to metabolize drugs. The strength of the SMILES modality can therefore be attributed to its capability to represent functional groups and structural motifs, which effectively indicate how a molecule interacts with enzymes through binding. G-Image performs best on the two substrate tasks (which involve drugs metabolized by specific enzymes), with an average AUC-ROC of 82.8% on CYP2D6 Substrate and 65.2% on CYP3A4 Substrate, while G-1H NMR works best on PAMPA Permeability (72.7%), Bioavailability (72.3%), and CYP2C9 Substrate (63.4%). G-GCMS achieves the highest average AUC-ROC of 96.9% on HIA, along with second-best results on Bioavailability (68.0%) and PGP Inhibition (91.3%); however, it shows only trivial gains or slight declines relative to the non-pretrained model on CYP2D6 Inhibition (-0.5%), CYP1A2 Inhibition (-0.7%), and CYP2D6 Substrate (-1.0%). These results suggest that no single chemical modality is universally optimal; rather, each modality contributes unique advantages tailored to certain tasks, which cannot be fully substituted by others.
3 Discussion and Conclusion
In this work, we introduced the ACML framework, a novel approach tailored to molecular representation learning based on multimodal contrastive learning. We demonstrated that the graph-like structure inherent to molecules allows graph representations to convey extensive information about them, and that the graph neural network, i.e., the graph encoder, can be treated as a receptor that assimilates chemical semantics through multimodal coordination, leading to effective and explainable molecular representation learning. Extensive experimental results on important chemical tasks, such as isomer discrimination, uncovering crucial chemical properties for drug discovery, and molecular property prediction on the MoleculeNet and TDC datasets, demonstrate the cross-modality transfer ability and the power of graph neural networks. In addition, our findings raise a series of interesting questions; of particular interest is how to effectively train a graph encoder tailored to molecular representation learning. Instead of designing a deep and complex GNN framework and proposing various levels of self-supervised pretraining tasks, we demonstrate that multimodal contrastive learning allows a shallow GNN (e.g., with no more than 5 layers) trained with light resources to produce an expressive graph encoder with strong interpretability. This work focused primarily on small molecules, so learning performance on polymers or biomacromolecules has not been fully explored. In addition, the chemical encoders employed pretrained weights from the existing literature, derived from large and comprehensive databases, which may include molecules overlapping with the validation or test sets of downstream tasks; the graph encoder, however, interacted with the chemical encoders solely through contrastive learning, without direct exposure to these datasets.
Appendix A Methods
A.1 The ACML framework
We propose the Asymmetric Contrastive Multimodal Learning (ACML) framework for molecules, which mainly contains three parts: (1) a unimodal encoder for the chemical modality; (2) a graph encoder; (3) projection modules. The goal is to map the hidden representations of the molecular graph and the chemical modality into a joint latent space.
First, the inputs of each ACML framework consist of a molecular graph modality and one chemical modality. For each chemical modality, we make use of effective pre-trained encoders from publicly available sources, which have been demonstrated to be effective in the corresponding downstream tasks (see Table 6). For the molecular graph representation, we employ a graph neural network (GNN) as the graph encoder; we elaborate on several GNN variants in Section A.2. These inputs are then fed into their respective encoders to produce embeddings. Mathematically, let $G$ and $C$ denote a molecule's graph representation and its chemical modality representation; their embeddings after passing through the corresponding encoders $f_G$ and $f_C$ are:
$$h_G = f_G(G), \qquad h_C = f_C(C) \qquad (1)$$
It is worth noting that $h_G$ and $h_C$ may have different dimensions, i.e., $\dim(h_G) \neq \dim(h_C)$. To ensure that the information from the two modalities can be coordinated in a latent space of the same dimension, we apply two projection modules after both encoders. These modules facilitate coordination between the two modalities, producing hidden representations of the same dimension as projection outputs. In this study, the projection modules are designed as multilayer perceptrons (MLPs), which can be formulated as:
$$z_G = g_G(h_G), \qquad z_C = g_C(h_C) \qquad (2)$$
where $z_G$ and $z_C$ have the same dimension.
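As a concrete illustration, here is a minimal NumPy sketch of the projection step; the layer sizes are illustrative, not the exact values from Table 7:

```python
import numpy as np

def mlp_projection(h, weights, biases):
    """Project an embedding into the joint latent space with a small MLP
    (ReLU between layers, linear output layer)."""
    z = h
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = z @ W + b
        if i < len(weights) - 1:  # no nonlinearity on the output layer
            z = np.maximum(z, 0.0)
    return z

rng = np.random.default_rng(0)
d_joint = 256  # shared projection dimension (illustrative)

# graph-side 2-layer MLP: 64 -> 128 -> 256
W_g = [rng.normal(size=(64, 128)) * 0.1, rng.normal(size=(128, d_joint)) * 0.1]
b_g = [np.zeros(128), np.zeros(d_joint)]
# chemical-side 2-layer MLP: 512 -> 128 -> 256 (the encoders output different dims)
W_c = [rng.normal(size=(512, 128)) * 0.1, rng.normal(size=(128, d_joint)) * 0.1]
b_c = [np.zeros(128), np.zeros(d_joint)]

z_G = mlp_projection(rng.normal(size=64), W_g, b_g)   # h_G -> z_G
z_C = mlp_projection(rng.normal(size=512), W_c, b_c)  # h_C -> z_C
assert z_G.shape == z_C.shape == (d_joint,)  # coordinated in one latent space
```

Despite their different input dimensions, both modalities end up as vectors of the same size, which is what allows them to be contrasted directly.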
Unimodal | Representation | Encoder | Pre-trained Source |
---|---|---|---|
Image | 2D image | CNN | Img2mol [29] |
SMILES | Sequence | Transformer | CReSS [58] |
1H NMR | Sequence | 1D CNN | N/Aa |
13C NMR | Sequence | 1D CNN | AutoEncoder [59] |
GCMS & LCMS | Sequence | 1D CNN | AutoEncoder [59] |
aThe 1H NMR encoder undergoes pre-training using a CNN-based network similar to what’s described in [59]. Unlike the 13C NMR and GCMS/LCMS encoders, the 1H NMR encoder focuses on reducing the embedding dimensions to compress information while preserving satisfactory reconstruction ability. The diversity in encoder choices highlights the versatility of our suggested framework. This means encoders with varying degrees of modality information can be effectively integrated into the GNN network.
A.2 Graph Encoder
Chemical molecules can be naturally represented as attributed relational graphs $G = (V, E)$, where $V$ and $E$ are the node set and the edge set. Here a node $v \in V$ represents an atom and an edge $(u, v) \in E$ represents the chemical bond connecting atoms $u$ and $v$. The corresponding atom features (atomic number, chirality tag, hybridization, etc.) and bond features (bond type, stereotype, etc.) are treated as node attributes and edge attributes of the graph. Graph neural networks (GNNs) [60, 61], which operate on graphs, have been combined with deep learning [62] to learn representations on graph data [63, 64, 65]. GNNs have also been demonstrated to be effective frameworks for molecular representation learning [66, 39].
Graph Convolution.
Many popular graph convolutional frameworks follow the message-passing scheme, an iterative way of updating node representations based on aggregations from neighboring nodes. Widely used approaches such as GCN [67], GIN [57], GAT [68], and GraphSage [69] all follow message-passing schemes. For the rest of the paper, we refer to "message-passing-based GNNs" simply as "GNNs". For simplicity, we denote by $h_v^{(k)}$ the feature vector of node $v$ at the $k$-th layer, by $e_{uv}$ the (optional) edge feature vector from node $u$ to node $v$, and by $\mathcal{N}(v)$ the set of neighbors of the central node $v$. One message-passing convolution layer includes two steps: (1) create the message between the central node $v$ and each neighbor $u$, formalized by the function $\psi^{(k)}$; (2) aggregate the messages of all neighbors of the central node using the operator $\bigoplus$, which denotes a differentiable, permutation-invariant function, e.g., the sum, mean, or max operator. Mathematically, the latent node representation is updated according to Equation 3 and fed into the next convolutional layer.
$$h_v^{(k)} = \phi^{(k)}\!\left(h_v^{(k-1)},\; \bigoplus_{u \in \mathcal{N}(v)} \psi^{(k)}\!\left(h_v^{(k-1)}, h_u^{(k-1)}, e_{uv}\right)\right) \qquad (3)$$
where $\phi^{(k)}$ and $\psi^{(k)}$ denote differentiable functions such as MLPs (multilayer perceptrons). After $K$ convolutional layers, $h_v^{(K)}$ is able to capture the structural information within the $K$-hop network neighborhood of $v$.
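One such layer can be sketched in NumPy, using a linear map as a simplified stand-in for $\psi$, sum aggregation over neighbors via the adjacency matrix, and a linear map plus ReLU as a stand-in for $\phi$; the graph and weights below are illustrative:

```python
import numpy as np

def message_passing_layer(H, adj, W_psi, W_phi):
    """One message-passing layer (Equation 3) with sum aggregation:
    psi is a linear map of neighbor features, the sum over N(v) is done
    via the adjacency matrix, and phi concatenates the previous node state
    with the aggregated message and applies a linear map + ReLU."""
    msg = adj @ (H @ W_psi)                    # sum_{u in N(v)} psi(h_u)
    H_new = np.concatenate([H, msg], axis=1) @ W_phi
    return np.maximum(H_new, 0.0)              # ReLU nonlinearity

rng = np.random.default_rng(1)
# a triangle graph: 3 atoms, all pairwise bonded
adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]], dtype=float)
H0 = rng.normal(size=(3, 4))                   # initial atom features
W_psi = rng.normal(size=(4, 4)) * 0.5
W_phi = rng.normal(size=(8, 4)) * 0.5

H1 = message_passing_layer(H0, adj, W_psi, W_phi)
assert H1.shape == (3, 4)

# relabeling the atoms just permutes the rows of the output (equivariance)
P = np.eye(3)[[2, 0, 1]]
H1_perm = message_passing_layer(P @ H0, P @ adj @ P.T, W_psi, W_phi)
assert np.allclose(H1_perm, P @ H1)
```

The permutation check at the end illustrates why the aggregation must be permutation-invariant: node ordering is arbitrary in a molecular graph.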
Graph Pooling and Readout.
The node representations at the final iteration are processed to generate a fixed-length graph-level representation $h_G$, which involves graph pooling and readout operations. A pooling layer is usually applied to obtain a coarser graph, which can be further reduced by a readout function, similar to conventional CNNs [70]. There is no strict distinction between pooling and readout operations in GNNs. Although some studies [71, 72] leveraged stacks of pooling layers to coarsen graphs, most widely used works handle pooling and readout simultaneously by applying a permutation-invariant function directly to all nodes in the graph to generate a fixed-length graph representation [73, 74, 75, 76], which is generally formalized as
$$h_G = \mathrm{READOUT}\!\left(\left\{ h_v^{(K)} : v \in V \right\}\right) \qquad (4)$$
The Choice of Aggregation Operator.
Many previous works [69, 57, 75, 77] demonstrate that the choice of aggregation function contributes significantly to the expressive power and performance of the model. In this work, we used the sum operator in both message aggregation and the graph readout function, which enables simple and effective learning of structural graph properties, as implied in [57].
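A toy illustration of why sum is preferable to mean here (the intuition behind the injective aggregation argument in [57]): with hand-picked node features, mean readout collapses two different multisets of node states, while sum keeps them apart:

```python
import numpy as np

# two graphs whose node-state multisets differ only in multiplicity
H_a = np.array([[1.0, 0.0], [1.0, 0.0]])  # two identical nodes
H_b = np.array([[1.0, 0.0]])              # a single such node

mean_a, mean_b = H_a.mean(axis=0), H_b.mean(axis=0)
sum_a, sum_b = H_a.sum(axis=0), H_b.sum(axis=0)

assert np.allclose(mean_a, mean_b)        # mean readout cannot tell them apart
assert not np.allclose(sum_a, sum_b)      # sum readout can
```

Mean readout loses the node count and hence graph size; the sum operator preserves this information, which matters for distinguishing molecules with repeated substructures.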
A.3 Multimodal Contrastive Learning
Instance discrimination is a versatile technique that can be used to contrast representations in various ways: within the same modality, across different modalities, or through a combination of both, referred to as "joint" instance discrimination [78, 79]. The goal of ACML is to learn a joint embedding between two different modalities while ensuring that similar features from the same modality stay close together in the joint embedding. Therefore, in the implementation of the contrastive loss we adopt across-modality instance discrimination [80]. This intermodal contrastive strategy is particularly beneficial in scenarios where understanding relationships and transferring knowledge across modalities is a primary objective.
Mathematically, we are given a mini-batch of $N$ samples $\{(G_i, C_i)\}_{i=1}^{N}$, where $G_i$ represents the graph modality and $C_i$ the chemical modality of the $i$-th molecule. Following Equation 2, we denote by $z_i^G$ and $z_i^C$ the hidden representation vectors after the projections. The contrastive loss for the $i$-th graph representation is:
$$\ell_i^{G} = -\log \frac{\exp\!\left(\mathrm{sim}(z_i^G, z_i^C)/\tau\right)}{\sum_{j=1}^{N} \exp\!\left(\mathrm{sim}(z_i^G, z_j^C)/\tau\right)} \qquad (5)$$
where $\mathrm{sim}(u, v) = u^{\top} v / (\lVert u \rVert\, \lVert v \rVert)$ denotes the cosine similarity and $\tau$ is the temperature parameter. Similarly, the contrastive loss for the $i$-th chemical modality representation is:
$$\ell_i^{C} = -\log \frac{\exp\!\left(\mathrm{sim}(z_i^C, z_i^G)/\tau\right)}{\sum_{j=1}^{N} \exp\!\left(\mathrm{sim}(z_i^C, z_j^G)/\tau\right)} \qquad (6)$$
This loss design guarantees that, irrespective of which modality an input originates from, the optimization procedure is oriented toward attaining consistent representations.
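A minimal NumPy sketch of the symmetric loss in Equations 5 and 6, assuming the rows of `Z_g` and `Z_c` hold the projected representations $z_i^G$ and $z_i^C$ for one mini-batch:

```python
import numpy as np

def cross_modal_nt_xent(Z_g, Z_c, tau=0.1):
    """Average of Equations 5 and 6 over a mini-batch: each graph embedding
    is pulled toward its paired chemical embedding (the diagonal) and pushed
    away from all other chemical embeddings in the batch, and vice versa."""
    Z_g = Z_g / np.linalg.norm(Z_g, axis=1, keepdims=True)
    Z_c = Z_c / np.linalg.norm(Z_c, axis=1, keepdims=True)
    logits = (Z_g @ Z_c.T) / tau  # entry [i, j] = sim(z_i^G, z_j^C) / tau
    log_p_g = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))  # Eq. 5
    log_p_c = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))  # Eq. 6
    loss_g = -np.mean(np.diag(log_p_g))  # graph -> chemical direction
    loss_c = -np.mean(np.diag(log_p_c))  # chemical -> graph direction
    return 0.5 * (loss_g + loss_c)

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 16))
# perfectly aligned pairs incur a much lower loss than mismatched pairs
assert cross_modal_nt_xent(Z, Z) < cross_modal_nt_xent(Z, Z[::-1])
```

Only the two projection MLPs and the graph encoder receive gradients from this loss in ACML; the chemical encoder stays frozen, which is what makes the contrast asymmetric.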
Appendix B Experimental Details
B.1 Datasets
B.1.1 Datasets for Training ACML framework
For the molecular depiction image dataset, the molecules were sourced from the Natural Products Magnetic Resonance Database (NP-MRD) [81] and the images were produced with RDKit [82]. Similarly, the SMILES dataset, consisting of molecules and their corresponding SMILES representations, was also sourced from NP-MRD. In total, the dataset contains approximately 270,000 samples. For the 1H NMR and 13C NMR datasets, the molecules, 1H NMR spectra, and 13C NMR spectra were obtained from NP-MRD as well; there are about 18,000 samples in each. These NMR data are represented as 1D sequential data consisting of peak locations and intensities. In detail, the raw NMR data contain the chemical shift locations and peak intensities: for 1H NMR the chemical shift ranges from 0 to 10 ppm, and for 13C NMR from 0 to 220 ppm. During 1D NMR processing, 1H NMR data are mapped onto a grid spanning 0 to 10 ppm with increments of 0.01 ppm, and 13C NMR data onto a grid spanning 0 to 220 ppm with increments of 0.1 ppm. The intensity corresponding to each chemical shift is recorded on the grid, with all other grid points set to 0. For the GCMS and LCMS datasets, molecules, LCMS spectra, and GCMS spectra were sourced from MassBank of North America (MoNA) [83]; there are about 10,000 samples in the GCMS dataset and 13,000 in the LCMS dataset. We removed molecules with a graph size of 100 or more atoms, excluding hydrogen atoms from the count. All data were accessed on July 15th, 2023, with no evidence of potential bias. To prevent data leakage into the validation dataset, duplicated spectra and molecules were removed.
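The 1H NMR gridding step described above can be sketched in a few lines; the function name and the toy peak list are illustrative:

```python
def nmr_to_grid(shifts, intensities, lo=0.0, hi=10.0, step=0.01):
    """Map a peak list onto a fixed 1D grid (here 0-10 ppm in 0.01 ppm bins,
    as used for 1H NMR above); grid points without a peak stay 0."""
    grid = [0.0] * (int(round((hi - lo) / step)) + 1)
    for ppm, inten in zip(shifts, intensities):
        idx = int(round((ppm - lo) / step))
        if 0 <= idx < len(grid):  # peaks outside the range are dropped
            grid[idx] += inten
    return grid

# a toy 1H NMR peak list: (chemical shift in ppm, intensity)
g = nmr_to_grid([1.25, 3.60, 7.26], [100.0, 40.0, 5.0])
assert len(g) == 1001
assert g[125] == 100.0 and g[360] == 40.0 and g[726] == 5.0
```

For 13C NMR the same routine applies with `hi=220.0` and `step=0.1`, giving the coarser 0-220 ppm grid.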
B.1.2 Zero-Shot Datasets
For the molecular candidate pool, 1M molecules were randomly chosen from PubChem [84] (accessed on July 15th, 2023), with no intersection with the training dataset and no potential bias identified. For the spectra, 1,000 spectra were randomly chosen from the validation dataset. To prevent data leakage, duplicated spectra and molecules were removed, and all data cleaning steps are fully documented.
B.2 Training Details
B.2.1 Pretraining in ACML
For the graph modality, each atom is embedded by its atomic number and chirality type, and each bond by its bond type and stereotype. We used GNNs with the ReLU activation function, and sum pooling is applied to each graph as the final readout operation to extract the hidden graph representation. The best model settings are provided in Table 7. Besides GIN [57], we tried multiple convolution techniques and found that GCN [67] achieved similar results to GIN, whereas GAT [68] and GraphSage [69] degraded validation performance. We noticed that the number of layers did not have a strong effect beyond a value of 5, implying that a shallow graph encoder is expressive enough. For the chemical modalities, we used pre-trained unimodal encoders according to Table 6, where each encoder remains frozen during the training phase. Outputs from the graph encoder and the chemical unimodal encoder are projected into the same-dimensional joint hidden space using a 2-layer or 3-layer MLP with the projection dimension given in Table 7. All models were optimized with AdamW [85] in its PyTorch implementation with a weight decay of 0.001. Each model is trained with a batch size of 128 for a total of 100 epochs on one NVIDIA Tesla V100 GPU using float32 precision. The entire framework trains efficiently, since only one shallow graph encoder and the two projection MLPs need to be updated during training.
Model | GNN type | # Layers | GNN hidden dim | Projection dim |
---|---|---|---|---|
G-Image | GIN | 5 | 64 | 512 |
G-SMILES | GIN | 5 | 64 | 512 |
G-1H NMR | GIN | 5 | 512 | 256 |
G-13C NMR | GIN | 3 | 512 | 256 |
G-GCMS | GIN | 5 | 128 | [128, 128]∗ |
G-LCMS | GIN | 5 | 128 | [128, 128]∗ |
∗ G-GCMS and G-LCMS use the 3-layer MLP in the projection module while others use the 2-layer MLP in projection.
B.2.2 Finetuning on Molecular Property Prediction Tasks
The following configurations were applied to all finetuning tasks from MoleculeNet [54] and TDC [55], accessed on July 15th, 2023. For each task, we utilized a 5-layer GIN [57] as the backbone graph encoder, exploring hidden dimensions of [64, 128, 300]. The GIN encoder was initialized with weights pre-trained using the ACML framework. We used a batch size of 32 and a maximum of 100 training epochs. We used scaffold splitting [86] to divide each dataset into training, validation, and test sets with a ratio of 8:1:1; scaffold splitting helps avoid data leakage in molecular datasets because it ensures that structurally similar molecules do not appear in both the training and test sets. All experiments were conducted on one Tesla V100 GPU with the Adam optimizer. Learning rates from [1e-2, 1e-3, 1e-4, 1e-5] were explored and the best results are reported. Before training, we performed data cleaning to remove molecules that failed the RDKit sanitization process or contained an atom with abnormal valence, as suggested in [87, 88, 44]. The detailed dataset statistics are summarized in Table 8.
Dataset | Description | # Graphs | # Valid graphs | # Tasks |
---|---|---|---|---|
BACE | Quantitative (IC50) and qualitative (binary label) binding results for a set of inhibitors of human β-secretase 1 (BACE-1). | 1513 | 1513 | 1 |
BBBP | Binary labels of blood-brain barrier penetration (permeability). | 2039 | 1953 | 1 |
Clintox | Qualitative data of drugs approved by the FDA and those that have failed clinical trials for toxicity reasons. | 1478 | 1469 | 2 |
Sider | Database of marketed drugs and adverse drug reactions (ADR), grouped into 27 system organ classes. | 1427 | 1295 | 27 |
Tox21 | Qualitative toxicity measurements on 12 biological targets, including nuclear receptors and stress response pathways. | 7831 | 7774 | 12 |
HIV | Experimentally measured abilities to inhibit HIV replication. | 41127 | 41125 | 1 |
Esol | Water solubility data (log solubility in mols per litre) for common organic small molecules. | 1128 | 1127 | 1 |
FreeSolv | Experimental and calculated hydration free energy of small molecules in water. | 642 | 639 | 1 |
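The scaffold splitting used above can be sketched as follows; this is a minimal illustration of the grouping logic only, and the scaffold strings are hypothetical labels standing in for Bemis-Murcko scaffolds that would in practice be computed with RDKit:

```python
from collections import defaultdict

def scaffold_split(mol_ids, scaffolds, frac_train=0.8, frac_valid=0.1):
    """Group molecules by their (precomputed) scaffold, then assign whole
    groups to train/valid/test, largest groups first, so that no scaffold
    appears in more than one split."""
    groups = defaultdict(list)
    for mid, scaf in zip(mol_ids, scaffolds):
        groups[scaf].append(mid)
    ordered = sorted(groups.values(), key=len, reverse=True)
    n_train = frac_train * len(mol_ids)
    n_valid = frac_valid * len(mol_ids)
    train, valid, test = [], [], []
    for group in ordered:
        if len(train) + len(group) <= n_train:
            train.extend(group)
        elif len(valid) + len(group) <= n_valid:
            valid.extend(group)
        else:
            test.extend(group)
    return train, valid, test

# 10 hypothetical molecules over 4 scaffolds
ids = list(range(10))
scafs = ["benzene"] * 5 + ["pyridine"] * 3 + ["furan"] + ["pyrrole"]
train, valid, test = scaffold_split(ids, scafs)
assert sorted(train + valid + test) == ids
# no scaffold straddles the train/test boundary
assert {scafs[i] for i in train}.isdisjoint({scafs[i] for i in test})
```

Because whole scaffold groups move together, structurally similar molecules never land on both sides of the train/test boundary, which is exactly the leakage the split is designed to prevent.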
B.2.3 Discussion of Training Efficiency
To validate our claim regarding training efficiency, we focus on two phases: pretraining and finetuning.
Finetuning.
This stage is standardized across models, so efficiency gains result solely from the graph architecture. We observed that on some datasets, a pretrained graph encoder with a smaller hidden dimension already suffices to achieve good performance, as implied in Table 9. We believe this is because the proposed approach acquires good chemical knowledge through ACML pretraining.
Embedding Dimension | Datasets |
---|---|
64 | |
128 | |
300 | PAMPA Permeability, Bioavailability |
Pretraining.
We found it challenging to fairly compare efficiency solely by time, given the different types of resource utilization involved, such as human effort, dataset size, and CPU/GPU resources. Overall, the proposed approach uses a relatively small number of learnable parameters (only one graph encoder and two projection MLPs) and is lightweight, with minimal preprocessing and limited human involvement. For instance, SMILES-to-graph conversion takes just five minutes per million molecules on a standard laptop. In contrast, some baseline methods demand significant CPU or human resources for preprocessing (e.g., constructing a motif vocabulary) or for multimodal pretraining, which requires generating augmented graphs and optimizing augmentation settings. Besides preprocessing efficiency, we provide a resource-efficiency comparison of pretraining. For MoMu and MotifConv, we report resource utilization as stated in their original papers. For L2P-GNN, MGSSL, MolCLR, and the proposed ACML instantiations, we reproduced pretraining on a single NVIDIA Tesla V100 32GB GPU with 100 epochs. We report both the dataset sizes and the training time per epoch in Table 10. These benchmarks indicate that our approach requires fewer resources, which is advantageous in scenarios where computational and financial costs are critical factors.
Model | GPU Requirement | Dataset Size | Time per Epoch |
---|---|---|---|
MoMu | 8 V100, 32G | 16K | – |
MotifConv | 8 RTX 2080, 11G | – | Several hours per dataset |
L2P-GNN | 1 V100, 32G | 250K | 7 min |
MGSSL | 1 V100, 32G | 250K | 40 min |
MolCLR | 1 V100, 32G | 1 Million | 80 min |
G-Image | 1 V100, 32G | 250K | 3 min |
G-SMILES | 1 V100, 32G | 250K | 7 min |
G-13C NMR | 1 V100, 32G | 18K | 12 sec |
G-1H NMR | 1 V100, 32G | 18K | 12 sec |
G-GCMS | 1 V100, 32G | 13K | 8 sec |
G-LCMS | 1 V100, 32G | 10K | 7 sec |
Appendix C Competing Interests
All authors have no competing interests to declare.
Appendix D Declaration of Generative AI and AI-assisted Technologies in The Writing Process
During the preparation of this work, the authors used ChatGPT and Grammarly to check grammar and improve the writing. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
Appendix E Data Availability
The pre-training data from NP-MRD and MoNA, and the zero-shot PubChem dataset are publicly available online, well-organized, and can be directly downloaded. RDKit can assist in parsing the dataset into individual samples.
Appendix F Code Availability
The code accompanying this work is available on GitHub: https://github.com/GainGod-Xu/ACMLProject. The supplementary materials S1 and S2 will be made available when this work is formally published.
References
- [1] Liang, P. P., A. Zadeh, L.-P. Morency. Foundations and trends in multimodal machine learning: Principles, challenges, and open questions. 2023.
- [2] Ramesh, A., P. Dhariwal, A. Nichol, et al. Hierarchical text-conditional image generation with clip latents, 2022.
- [3] Alayrac, J.-B., J. Donahue, P. Luc, et al. Flamingo: a visual language model for few-shot learning. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh, eds., Advances in Neural Information Processing Systems, vol. 35, pages 23716–23736. Curran Associates, Inc., New York, 2022.
- [4] Sun, C., A. Myers, C. Vondrick, et al. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 7464–7473. 2019.
- [5] Rombach, R., A. Blattmann, D. Lorenz, et al. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684–10695. 2022.
- [6] Brodeur, S., E. Perez, A. Anand, et al. Home: a household multimodal environment. In NIPS 2017’s Visually-Grounded Interaction and Language Workshop. 2017.
- [7] Savva, M., A. Kadian, O. Maksymets, et al. Habitat: A platform for embodied ai research. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9338–9346. 2019.
- [8] Zubatiuk, T., O. Isayev. Development of multimodal machine learning potentials: Toward a physics-aware artificial intelligence. Accounts of Chemical Research, 54(7):1575–1585, 2021.
- [9] Stahlschmidt, S. R., B. Ulfenborg, J. Synnergren. Multimodal deep learning for biomedical data fusion: a review. Briefings in Bioinformatics, 23(2):bbab569, 2022.
- [10] Kline, A., H. Wang, Y. Li, et al. Multimodal machine learning in precision health: A scoping review. npj Digital Medicine, 5(1):171, 2022.
- [11] Ektefaie, Y., G. Dasoulas, A. Noori, et al. Multimodal learning with graphs. Nature Machine Intelligence, 5(4):340–350, 2023.
- [12] Wen, J., X. Zhang, E. Rush, et al. Multimodal representation learning for predicting molecule–disease relations. Bioinformatics, 39(2):btad085, 2023.
- [13] Radford, A., J. W. Kim, C. Hallacy, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.
- [14] Kim, H., J. Lee, S. Ahn, et al. A merged molecular representation learning for molecular properties prediction with a web-based service. Scientific Reports, 11(1):11028, 2021.
- [15] Edwards, C., C. Zhai, H. Ji. Text2Mol: Cross-modal molecule retrieval with natural language queries. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 595–607. Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, 2021.
- [16] Pinheiro, G. A., J. L. F. Da Silva, M. G. Quiles. Smiclr: Contrastive learning on multiple molecular representations for semisupervised and unsupervised representation learning. Journal of Chemical Information and Modeling, 62(17):3948–3960, 2022.
- [17] Wang, Y., J. Wang, Z. Cao, et al. Molecular contrastive learning of representations via graph neural networks. Nature Machine Intelligence, 4(3):279–287, 2022.
- [18] Su, B., D. Du, Z. Yang, et al. A molecular multimodal foundation model associating molecule graphs with natural language. 2022.
- [19] Ektefaie, Y., G. Dasoulas, A. Noori, et al. Multimodal learning with graphs. Nature Machine Intelligence, 5(4):340–350, 2023.
- [20] Fang, Y., Q. Zhang, N. Zhang, et al. Knowledge graph-enhanced molecular contrastive learning with functional prompt. Nature Machine Intelligence, 5(5):542–553, 2023.
- [21] Yang, Z., J. Song, M. Yang, et al. Cross-modal retrieval between 13c nmr spectra and structures for compound identification using deep contrastive learning. Analytical Chemistry, 93(50):16947–16955, 2021.
- [22] Zhang, R., H. Xie, S. Cai, et al. Transfer-learning-based raman spectra identification. Journal of Raman Spectroscopy, 51(1):176–186, 2020.
- [23] Gao, K., D. D. Nguyen, V. Sresht, et al. Are 2d fingerprints still valuable for drug discovery? Physical chemistry chemical physics, 22(16):8373–8390, 2020.
- [24] Li, W., G. Wang, J. Ma. Deep learning for complex chemical systems. National Science Review, 10(12):nwad335, 2023.
- [25] Zou, Z., Y. Zhang, L. Liang, et al. A deep learning model for predicting selected organic molecular spectra. Nature Computational Science, pages 1–8, 2023.
- [26] Hirst, J. D., S. Boobier, J. Coughlan, et al. Ml meets mln: machine learning in ligand promoted homogeneous catalysis. Artificial Intelligence Chemistry, page 100006, 2023.
- [27] Lai, N. S., Y. S. Tew, X. Zhong, et al. Artificial intelligence (ai) workflow for catalyst design and optimization. Industrial & Engineering Chemistry Research, 62(43):17835–17848, 2023.
- [28] Tao, S., Y. Feng, W. Wang, et al. A machine learning protocol for geometric information retrieval from molecular spectra. Artificial Intelligence Chemistry, 2(1):100031, 2024.
- [29] Clevert, D.-A., T. Le, R. Winter, et al. Img2mol–accurate smiles recognition from molecular graphical depictions. Chemical science, 12(42):14174–14181, 2021.
- [30] Wang, Y., J. Wang, Z. Cao, et al. Molecular contrastive learning of representations via graph neural networks. Nature Machine Intelligence, 4(3):279–287, 2022.
- [31] Weininger, D. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. Journal of chemical information and computer sciences, 28(1):31–36, 1988.
- [32] Zong, Y., O. M. Aodha, T. Hospedales. Self-supervised multimodal learning: A survey. 2023.
- [33] Guo, H., K. Xue, H. Sun, et al. Contrastive learning-based embedder for the representation of tandem mass spectra. Analytical Chemistry, 95(20):7888–7896, 2023.
- [34] Zeng, X., H. Xiang, L. Yu, et al. Accurate prediction of molecular properties and drug targets using a self-supervised image representation learning framework. Nature Machine Intelligence, 4(11):1004–1016, 2022.
- [35] Li, C., J. Feng, S. Liu, et al. A novel molecular representation learning for molecular property prediction with a multiple smiles-based augmentation. Computational Intelligence and Neuroscience, 2022, 2022.
- [36] Guo, Z., Y. Fan, C. Yu, et al. Gcmsformer: A fully automatic method for the resolution of overlapping peaks in gas chromatography–mass spectrometry. Analytical Chemistry, 96(15):5878–5886, 2024.
- [37] Jiang, D., Z. Wu, C.-Y. Hsieh, et al. Could graph neural networks learn better molecular representation for drug discovery? a comparison study of descriptor-based and graph-based models. Journal of cheminformatics, 13(1):1–23, 2021.
- [38] David, L., A. Thakkar, R. Mercado, et al. Molecular representations in ai-driven drug discovery: A review and practical guide. Journal of Cheminformatics, 12(1):56, 2020.
- [39] Gilmer, J., S. S. Schoenholz, P. F. Riley, et al. Neural message passing for quantum chemistry. In International conference on machine learning, pages 1263–1272. PMLR, 2017.
- [40] Alsentzer, E., S. Finlayson, M. Li, et al. Subgraph neural networks. Advances in Neural Information Processing Systems, 33:8017–8029, 2020.
- [41] Sun, Q., J. Li, H. Peng, et al. Sugar: Subgraph neural network with reinforcement pooling and self-supervised mutual information mechanism. In Proceedings of the Web Conference 2021, pages 2081–2091. 2021.
- [42] Guo, Z., B. Nan, Y. Tian, et al. Graph-based molecular representation learning. arXiv preprint arXiv:2207.04869, 2022.
- [43] Sun, J., M. Wen, H. Wang, et al. Prediction of drug-likeness using graph convolutional attention network. Bioinformatics, 38(23):5262–5269, 2022.
- [44] Wang, Y., S. Chen, G. Chen, et al. Motif-based graph representation learning with application to chemical molecules. In Informatics, vol. 10, page 8. MDPI, 2023.
- [45] Mucllari, E., V. Zadorozhnyy, Q. Ye, et al. Novel molecular representations using neumann-cayley orthogonal gated recurrent unit. Journal of Chemical Information and Modeling, 63(9):2656–2666, 2023.
- [46] Wu, K., Z. Zhao, R. Wang, et al. Topp–s: Persistent homology-based multi-task deep neural networks for simultaneous predictions of partition coefficient and aqueous solubility. Journal of computational chemistry, 39(20):1444–1454, 2018.
- [47] Wu, K., G.-W. Wei. Quantitative toxicity prediction using topology based multitask deep neural networks. Journal of chemical information and modeling, 58(2):520–531, 2018.
- [48] Tanimoto, T. T. Elementary mathematical theory of classification and prediction. 1958.
- [49] Jackson, J. E. A user’s guide to principal components. John Wiley & Sons, 2005.
- [50] Pearson, K. Notes on the history of correlation. Biometrika, 13(1):25–45, 1920.
- [51] Lu, Y., X. Jiang, Y. Fang, et al. Learning to pre-train graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, vol. 35, pages 4276–4284. 2021.
- [52] Zhang, Z., Q. Liu, H. Wang, et al. Motif-based graph self-supervised learning for molecular property prediction. Advances in Neural Information Processing Systems, 34, 2021.
- [53] Su, B., D. Du, Z. Yang, et al. A molecular multimodal foundation model associating molecule graphs with natural language. arXiv preprint arXiv:2209.05481, 2022.
- [54] Wu, Z., B. Ramsundar, E. N. Feinberg, et al. MoleculeNet: a benchmark for molecular machine learning. Chemical Science, 9(2):513–530, 2018.
- [55] Huang, K., T. Fu, W. Gao, et al. Artificial intelligence foundation for therapeutic science. Nature Chemical Biology, 18(10):1033–1036, 2022.
- [56] Hu, W., M. Fey, M. Zitnik, et al. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020.
- [57] Xu, K., W. Hu, J. Leskovec, et al. How powerful are graph neural networks? In International Conference on Learning Representations. 2019.
- [58] Yang, Z., J. Song, M. Yang, et al. Cross-modal retrieval between 13C NMR spectra and structures for compound identification using deep contrastive learning. Analytical Chemistry, 93(50):16947–16955, 2021.
- [59] Costanti, F., A. Kola, F. Scarselli, et al. A deep learning approach to analyze NMR spectra of SH-SY5Y cells for Alzheimer's disease diagnosis. Mathematics, 11(12):2664, 2023.
- [60] Wu, Z., S. Pan, F. Chen, et al. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2020.
- [61] Zhou, J., G. Cui, S. Hu, et al. Graph neural networks: A review of methods and applications. AI Open, 1:57–81, 2020.
- [62] LeCun, Y., Y. Bengio, G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
- [63] Baskin, I. I., V. A. Palyulin, N. S. Zefirov. A neural device for searching direct correlations between structures and properties of chemical compounds. Journal of Chemical Information and Computer Sciences, 37(4):715–721, 1997.
- [64] Sperduti, A., A. Starita. Supervised neural networks for the classification of structures. IEEE Transactions on Neural Networks, 8(3):714–735, 1997.
- [65] Gori, M., G. Monfardini, F. Scarselli. A new model for learning in graph domains. In Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., vol. 2, pages 729–734. IEEE, 2005.
- [66] Kearnes, S., K. McCloskey, M. Berndl, et al. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer-Aided Molecular Design, 30:595–608, 2016.
- [67] Kipf, T. N., M. Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations. 2017.
- [68] Veličković, P., G. Cucurull, A. Casanova, et al. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
- [69] Hamilton, W., Z. Ying, J. Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017.
- [70] Gu, J., Z. Wang, J. Kuen, et al. Recent advances in convolutional neural networks. Pattern Recognition, 77:354–377, 2018.
- [71] Cangea, C., P. Veličković, N. Jovanović, et al. Towards sparse hierarchical graph classifiers. arXiv preprint arXiv:1811.01287, 2018.
- [72] Ying, Z., J. You, C. Morris, et al. Hierarchical graph representation learning with differentiable pooling. Advances in Neural Information Processing Systems, 31, 2018.
- [73] Vinyals, O., S. Bengio, M. Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015.
- [74] Zhang, M., Z. Cui, M. Neumann, et al. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32. 2018.
- [75] Corso, G., L. Cavalleri, D. Beaini, et al. Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems, 33:13260–13271, 2020.
- [76] Buterez, D., J. P. Janet, S. J. Kiddle, et al. Graph neural networks with adaptive readouts. Advances in Neural Information Processing Systems, 35:19746–19758, 2022.
- [77] Tailor, S. A., F. Opolka, P. Lio, et al. Do we need anisotropic graph neural networks? In International Conference on Learning Representations. 2022.
- [78] Zolfaghari, M., Y. Zhu, P. Gehler, et al. Crossclr: Cross-modal contrastive learning for multi-modal video representations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1450–1459. 2021.
- [79] Morgado, P., N. Vasconcelos, I. Misra. Audio-visual instance discrimination with cross-modal agreement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12475–12486. 2021.
- [80] Shariatnia, M. M. Simple CLIP, 2021.
- [81] Wishart, D. S., Z. Sayeeda, Z. Budinski, et al. Np-mrd: the natural products magnetic resonance database. Nucleic Acids Research, 50(D1):D665–D677, 2022.
- [82] RDKit: Open-source cheminformatics. http://www.rdkit.org.
- [83] MassBank of North America (MoNA).
- [84] Kim, S., J. Chen, T. Cheng, et al. PubChem 2023 update. Nucleic Acids Research, 51(D1):D1373–D1380, 2023.
- [85] Loshchilov, I., F. Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
- [86] Ramsundar, B., P. Eastman, P. Walters, et al. Deep learning for the life sciences: applying deep learning to genomics, microscopy, drug discovery, and more. O’Reilly Media, 2019.
- [87] Chen, J., S. Zheng, Y. Song, et al. Learning attributed graph representation with communicative message passing transformer. In Z.-H. Zhou, ed., Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 2242–2248. International Joint Conferences on Artificial Intelligence Organization, 2021. Main Track.
- [88] Lim, S., Y. O. Lee. Predicting chemical properties using self-attention multi-task learning based on smiles representation. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 3146–3153. IEEE, 2021.