
Infrared and Visible Image Fusion with Hierarchical Human Perception


Guang Yang¹, Jie Li¹*, Xin Liu¹, Zhusi Zhong¹, and Xinbo Gao¹,²
¹School of Electronic Engineering, Xidian University, Xi’an, China  ²Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
Abstract

Image fusion combines images from multiple domains into a single image that contains complementary information from the source domains. Existing methods use pixel intensity, texture, and high-level vision task information as the criteria for deciding which information to preserve, and lack explicit enhancement for human perception. We introduce an image fusion method, Hierarchical Perception Fusion (HPFusion), which leverages a Large Vision-Language Model to incorporate hierarchical human semantic priors, preserving complementary information that satisfies the human visual system. We propose multiple questions that humans focus on when viewing an image pair, and the answers are generated by the Large Vision-Language Model from the images. The texts of the answers are encoded into the fusion network, and the optimization also guides the human semantic distribution of the fused image to be more similar to that of the source images, exploring complementary information within the human perception domain. Extensive experiments demonstrate that our HPFusion achieves high-quality fusion results in terms of both information preservation and human visual enhancement.

Index Terms:
Image Fusion, Large Vision-Language Model, Human Perception

I Introduction

Image fusion is a pixel-level multi-domain data fusion task that incorporates images from different sources into a single image. A high-quality fused result should effectively preserve essential information from the input domains while compressing redundant information, and is expected to be more informative for human perception[1].

Infrared images can easily distinguish thermal targets from the background in bad weather conditions but suffer from low resolution, while visible images usually contain rich texture but are sensitive to low-light and adverse conditions[2]. Early infrared and visible image fusion (IVF) methods often treated infrared thermal information as pixel intensity and texture information as gradients to constrain the fusion network to integrate complementary information[3]. Some recent methods[4, 5] cascade fusion with high-level vision tasks to guide the fused images to retain features that improve results on tasks such as detection and segmentation. The above IVF methods pursue higher statistical metrics and high-level vision task metrics to satisfy human visual perception, but they barely consider language priors, which may result in fusion results that do not align with human subjective perception.

Recently, with the development of Large Vision-Language Models (LVMs), image generation methods have become capable of producing images that conform to human visual perception by aligning and integrating visual information with text information[6, 7, 8]. SUPIR[9] adopts LLaVA[10] to generate textual descriptions as multi-modality language guidance for image restoration.

In this paper, we propose a method named Hierarchical Perception Fusion (HPFusion) that introduces hierarchical human priors into the fusion network by utilizing the guidance of an LVM. We first propose multiple questions people tend to ask when viewing an infrared and visible image pair, such as ’What is the content of the image?’ and ’What targets are significant in this image?’. These questions exhibit a hierarchical semantic structure from the overall context to specific regions, simulating human semantic perception when trying to comprehend the image. In our work, we define four question sets, shown in Fig. 1. We then use LLaVA to answer these questions for the input image pairs; the emphasis of the answers differs between infrared and visible images due to their modality characteristics. These textual prompts are encoded by CLIP (Contrastive Language-Image Pre-training)[11] and merged into the fusion network to make the fusion process more semantically and contextually aware. Lastly, the network optimization combines the traditional fusion loss with a hierarchical semantic loss in the CLIP space, with the latter constrained by the CLIP text embeddings of the fused image and the input image pair.

II Related Work

II-A Infrared and Visible Image Fusion

Traditional methods adopt sparse representation[12] and multi-scale transforms[13] for information extraction and fusion. DenseFuse[3] is the first deep learning (DL)–based infrared and visible image fusion method. FusionGAN[14] utilizes a generative adversarial network (GAN) to integrate thermal radiation and texture in an implicit way. TarDAL[15] cascades image fusion and object detection to mine information beneficial for both human inspection and high-level tasks. DDFM[16] formulates the fusion task as a conditional generation problem under a denoising diffusion model. FILM[17] generates textual descriptions via ChatGPT to convey deep semantic understanding to the fusion network, guiding the extraction of crucial visual features.

II-B Large Visual-Language Model

CLIP[11] learns from large-scale natural language supervision during pre-training and exhibits robust transfer performance on various tasks without fine-tuning; it has been widely used in low-level vision tasks. BLIP[18] bootstraps image captions from noisy web data and achieves substantial improvements on different vision-language tasks. LLaVA[10] is a Large Multimodal Model (LMM) that connects the visual encoder of CLIP with a large language model and fine-tunes the model on generated instruction-following vision-language data, giving it multimodal chat abilities for visual and language understanding.

III Method

In this section, we introduce the paradigm of our hierarchical human perception image fusion method.


Figure 1: Questions that humans tend to ask when viewing the infrared and visible image pair and corresponding answers generated by LLaVA.

III-A Overview

Infrared images highlight thermal targets and provide high contrast against the background, where people can distinguish essential information. Visible images usually contain finer details, such as text and signs, to which people also tend to pay attention. Meanwhile, the overall content describing an image is also worth attention.

We have devised four questions ranging from the global context to specific local regions, in order to simulate human curiosity and concerns when observing image pairs that are rich in both thermal and detail information. For example, when we view the infrared and visible image pair in Fig. 1, we first attempt to comprehend the overall content, so we ask LLaVA to describe the image (Q4). Subsequently, given that infrared images usually highlight thermal targets, our objective is to identify the presence of various objects within the image (Q3). Lastly, infrared images have higher contrast than visible images, but visible images can contain more details within a specific region; therefore, we focus on specific regions with high contrast and rich information content (Q1 and Q2). The generated textual prompts, which incorporate hierarchical human priors, are fed into the fusion network to guide the retention of useful visual features.
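For illustration, a minimal sketch of how such a hierarchical question set could be posed to the vision-language model is given below; the exact wording of the local-region questions (Q1, Q2) and the vlm_answer helper are hypothetical placeholders, since only the general intent of those questions is described here.

```python
from typing import Callable, List

# Hierarchical question set, ordered from local regions (Q1, Q2) to the global
# description (Q4). The Q1/Q2 phrasing below is illustrative, not the exact text.
QUESTIONS: List[str] = [
    "Which regions of this image have the highest contrast?",    # Q1 (local, assumed wording)
    "Which regions of this image contain the richest details?",  # Q2 (local, assumed wording)
    "What targets are significant in this image?",                # Q3 (objects)
    "What is the content of the image?",                          # Q4 (global description)
]

def answer_questions(image, vlm_answer: Callable[[object, str], str]) -> List[str]:
    """Query the vision-language model (e.g., LLaVA) once per question;
    vlm_answer is a stand-in for whatever inference wrapper is used."""
    return [vlm_answer(image, q) for q in QUESTIONS]
```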

III-B Architecture

The overall architecture is shown in Fig. 2. For a given infrared and visible image pair $I_{ir}$ and $I_{vis}$, we design four questions that people tend to ask, according to the thermal information of the infrared image and the texture information of the visible image, to simulate hierarchical human perception.


Figure 2: The overall architecture of our fusion network, consisting of Human Perception Module, Cross-attention Block and Fusion Network.

The Human Perception Module (HPM) consists of LLaVA and the CLIP encoder, which generate text features $\Phi_{ir}^{T}$ and $\Phi_{vis}^{T}$ as human priors. To guide the fusion of visual features, we use a $1\times 1$ convolutional layer to reduce the text features of the four answers to a single feature, and then concatenate them to form the fused text feature $\Phi_{i-s}^{T}$.
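As a rough illustration of this reduction step, the sketch below treats the four answer embeddings of each modality as channels and collapses them with a $1\times 1$ convolution before concatenating the two modalities; the tensor shapes and the use of separate convolutions per modality are assumptions.

```python
import torch
import torch.nn as nn

class TextFeatureFusion(nn.Module):
    """Sketch of the HPM text-feature reduction: a 1x1 convolution collapses the
    four answer features of each modality to one, and the two modalities are
    concatenated into the fused text feature Phi_{i-s}^T."""

    def __init__(self):
        super().__init__()
        # The four answers act as input channels; the 1x1 conv reduces them to one.
        self.reduce_ir = nn.Conv2d(4, 1, kernel_size=1)
        self.reduce_vis = nn.Conv2d(4, 1, kernel_size=1)

    def forward(self, phi_ir_t: torch.Tensor, phi_vis_t: torch.Tensor) -> torch.Tensor:
        # phi_*_t: (B, 4, 77, 512) CLIP text features of the four answers (assumed shape).
        ir = self.reduce_ir(phi_ir_t)     # (B, 1, 77, 512)
        vis = self.reduce_vis(phi_vis_t)  # (B, 1, 77, 512)
        return torch.cat([ir, vis], dim=1)  # fused text feature, (B, 2, 77, 512)
```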

The architecture of the fusion network follows the previous work MDA[19] as the baseline. The fused text feature $\Phi_{i-s}^{T}$ is combined with the visual features $M_{ir}$ and $M_{vis}$ extracted by the convolutional block of MDA, guiding the subsequent fusion of visual features as follows:

F_{ir}, F_{vis} = CA(M_{ir}, M_{vis}, \Phi_{i-s}^{T}) \quad (1)

where $CA(\cdot)$ denotes the cross-attention mechanism: the Query (Q) is computed from $\Phi_{i-s}^{T}$, while the Key (K) and Value (V) are computed from $M_{ir}$ and $M_{vis}$. We cascade two Cross-attention Blocks to integrate the text and visual features comprehensively.
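A minimal sketch of one such block is given below, assuming the fused text feature supplies the Query and each modality's flattened visual feature supplies the Key and Value; the feature dimensions, number of heads, and projection layout are assumptions rather than the exact MDA-based design, and in the full model two of these blocks are cascaded.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Cross-attention as in Eq. (1): Query from the fused text feature,
    Key/Value from one modality's visual tokens. Dimensions are assumptions."""

    def __init__(self, vis_dim: int = 64, text_dim: int = 512, num_heads: int = 4):
        super().__init__()
        self.q_proj = nn.Linear(text_dim, vis_dim)  # project text feature to visual width
        self.attn = nn.MultiheadAttention(vis_dim, num_heads, batch_first=True)

    def forward(self, m_feat: torch.Tensor, phi_text: torch.Tensor) -> torch.Tensor:
        # m_feat: (B, N, vis_dim) flattened visual tokens; phi_text: (B, L, text_dim).
        q = self.q_proj(phi_text)              # text -> Query
        out, _ = self.attn(q, m_feat, m_feat)  # visual tokens -> Key and Value
        return out

def cross_attend(m_ir, m_vis, phi_text,
                 block_ir: CrossAttentionBlock, block_vis: CrossAttentionBlock):
    """F_ir, F_vis = CA(M_ir, M_vis, Phi_{i-s}^T), applied per modality."""
    return block_ir(m_ir, phi_text), block_vis(m_vis, phi_text)
```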

Finally, $F_{ir}$ and $F_{vis}$ are fused by the multi-scale encoder and decoder of MDA to reconstruct the fused image $I_{f}$.


Figure 3: Architecture of the Human Perception Module.

III-C Human Perception Module

The architecture of the HPM is shown in Fig. 3. For the input image pair, we ask the four questions and obtain answers from LLaVA for the infrared and visible images. After obtaining the answer texts $T_{ir}\in\mathbb{R}^{77\times 4}$ and $T_{vis}\in\mathbb{R}^{77\times 4}$ from LLaVA, we encode them with the parameter-frozen CLIP text encoder $\mathcal{E}_{text}$ to obtain the text features $\Phi_{ir}^{T}$ and $\Phi_{vis}^{T}$, which guide both the fusion of visual features and the optimization of the training process.

To guide the fusion of visual features, $\Phi_{ir}^{T}$ and $\Phi_{vis}^{T}$ are dimensionally reduced through the $1\times 1$ convolutional layer and concatenated, and the resulting $\Phi_{i-s}^{T}$ is fused with the visual features $M_{ir}$ and $M_{vis}$ by the cross-attention mechanism. To guide the optimization of the training process, the input image pair $I_{ir}$ and $I_{vis}$ is encoded by the parameter-frozen CLIP image encoder $\mathcal{E}_{img}$ to generate visual features $\Phi_{ir}^{V}$ and $\Phi_{vis}^{V}$, and the text-image similarity within the batch is calculated as the cosine similarity between $\Phi_{ir/vis}^{V}$ and $\Phi_{ir/vis}^{T}$ to represent the semantic distribution.

Similarly, the text-image similarity within the batch for the fused image $I_{f}$ is calculated in the same way. Specifically, after obtaining the fused image $I_{f}$, we ask the same four questions about $I_{f}$, use LLaVA to generate the answer texts $T_{f}$, and encode the answers into text features $\Phi_{f}^{T}$ with the CLIP text encoder $\mathcal{E}_{text}$. The cosine similarity is then calculated between $\Phi_{f}^{T}$ and $\Phi_{f}^{V}$.

III-D Loss Function

The degree of information retention is determined by the optimization objective. We therefore construct an image loss $L_{image}$ to preserve both intensity and detail information from the source images, while a hierarchical semantic loss $L_{hier}$ drives the answer texts to guide the preservation of information that humans prioritize when focusing on and comprehending a scene.

The image loss includes pixel intensity and detail terms to maintain informational fidelity, namely a maximum intensity loss, a maximum gradient loss, and a structural similarity index metric (SSIM) loss, as shown in (2) and (3).

L_{int} = \left\| I_{f} - \max(I_{ir}, I_{vis}) \right\|_{1} \quad (2)

\begin{split} L_{detail} &= (1 - SSIM(I_{f}, I_{ir})) + (1 - SSIM(I_{f}, I_{vis})) \\ &\quad + \left\| \nabla I_{f} - \max(\nabla I_{ir}, \nabla I_{vis}) \right\|_{1} \end{split} \quad (3)
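A minimal PyTorch sketch of Eqs. (2)-(3) is given below, assuming single-channel images in [0, 1]; the Sobel operator standing in for the gradient $\nabla$ and the pytorch_msssim package providing SSIM are implementation assumptions.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # assumed SSIM implementation; any differentiable SSIM works

def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """Absolute Sobel gradient magnitude, used here as the detail operator (nabla)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    return F.conv2d(img, kx, padding=1).abs() + F.conv2d(img, ky, padding=1).abs()

def image_loss(i_f, i_ir, i_vis, alpha: float = 4.0):
    """L_image = L_int + alpha * L_detail for (B, 1, H, W) images in [0, 1]."""
    # Eq. (2): match the element-wise maximum intensity of the two source images.
    l_int = F.l1_loss(i_f, torch.max(i_ir, i_vis))
    # Eq. (3): two SSIM terms plus a maximum-gradient term.
    l_detail = (1 - ssim(i_f, i_ir, data_range=1.0)) \
             + (1 - ssim(i_f, i_vis, data_range=1.0)) \
             + F.l1_loss(sobel_gradient(i_f),
                         torch.max(sobel_gradient(i_ir), sobel_gradient(i_vis)))
    return l_int + alpha * l_detail
```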

The hierarchical semantic loss constrains the salient regions that humans focus on to be consistent between the fused image and the source image pair. We expect the text-image similarity of the fused image and that of the source image pair to be close, indicating that the distribution of each image over its textual answers is similar for the fused image and the source images. First, we measure the similarity score between the text vector of an answer and the corresponding image vector by the cosine similarity in the CLIP space, as follows:

S(I_{m}) = \frac{e^{\cos(\mathcal{E}_{img}(I_{m}),\, \mathcal{E}_{text}(T_{m}^{i}))}}{\sum_{i\in\{batch\}} e^{\cos(\mathcal{E}_{img}(I_{m}),\, \mathcal{E}_{text}(T_{m}^{i}))}} \quad (4)

where $m\in\{ir,vis,fusion\}$ and $i$ indexes the samples within a batch. $\mathcal{E}_{img}$ and $\mathcal{E}_{text}$ are the parameter-frozen CLIP image and text encoders, respectively. We then constrain the similarity scores of the fused image and the source image pair to be close for each of the four answers, expressing hierarchical human concerns:

L_{hier} = \sum_{j=1}^{4} \left( \left\| S(I_{f}^{j}) - S(I_{ir}^{j}) \right\|_{1} + \left\| S(I_{f}^{j}) - S(I_{vis}^{j}) \right\|_{1} \right) \quad (5)

where $j$ corresponds to the four different question-answer sets.
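Below is a minimal sketch of Eqs. (4)-(5), assuming the CLIP image and text embeddings have already been computed with the frozen encoders; averaging the per-sample L1 distance over the batch is an implementation assumption.

```python
import torch
import torch.nn.functional as F

def semantic_distribution(img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
    """Eq. (4): for each image, a softmax over the batch of cosine similarities
    between its CLIP image embedding and every answer text embedding in the batch.
    img_emb, txt_emb: (B, D) precomputed CLIP embeddings."""
    sim = F.cosine_similarity(img_emb.unsqueeze(1), txt_emb.unsqueeze(0), dim=-1)  # (B, B)
    return sim.softmax(dim=1)

def hierarchical_loss(img_embs: dict, txt_embs: dict) -> torch.Tensor:
    """Eq. (5): img_embs maps 'f'/'ir'/'vis' to (B, D) image embeddings; txt_embs
    maps the same keys to (4, B, D) text embeddings, one slice per question-answer set."""
    loss = 0.0
    for j in range(4):  # the four question-answer sets
        s_f = semantic_distribution(img_embs["f"], txt_embs["f"][j])
        s_ir = semantic_distribution(img_embs["ir"], txt_embs["ir"][j])
        s_vis = semantic_distribution(img_embs["vis"], txt_embs["vis"][j])
        # L1 distance between the distributions, averaged over the batch.
        loss = loss + (s_f - s_ir).abs().sum(dim=1).mean() \
                    + (s_f - s_vis).abs().sum(dim=1).mean()
    return loss
```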


Figure 4: Qualitative comparison of our method with six state-of-the-art models on five infrared and visible image pairs from the $M^{3}$FD dataset. The first and second columns are the infrared and visible images, respectively; the third to ninth columns show the fused results of the comparison methods and our HPFusion.

The total loss function is composed of pixel intensity loss, detail loss and hierarchical semantic loss, which is as follows:

\begin{split} L_{total} &= L_{image} + \beta \times L_{hier} \\ &= L_{int} + \alpha \times L_{detail} + \beta \times L_{hier} \end{split} \quad (6)

We set $\alpha$ and $\beta$ to 4 and 1, respectively.

IV Experiments

In this section, we demonstrate the effectiveness of our HPFusion in preserving thermal and detail information. We train and test our model on the $M^{3}$FD dataset[15], with 4200 image pairs in the training set and 300 image pairs in the test set. We train the network for 100 epochs with the Adam optimizer and a batch size of 8. All images are resized to $224\times 224$ during training. Two NVIDIA GeForce RTX 3090 GPUs are used: one for the inference of LLaVA and the other for training the fusion network.
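For reference, a minimal training-loop sketch matching this setup (Adam, 100 epochs, batch size 8) is shown below; the dataset interface, the fusion network's call signature, the loss_fn implementing Eq. (6), and the learning rate are assumptions, since they are not specified here.

```python
import torch
from torch.utils.data import DataLoader

def train(fusion_net: torch.nn.Module, dataset, loss_fn,
          epochs: int = 100, batch_size: int = 8, lr: float = 1e-4) -> None:
    """dataset is assumed to yield (I_ir, I_vis, fused_text_feature) triples with
    images already resized to 224x224; loss_fn is assumed to return L_total of Eq. (6)."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(fusion_net.parameters(), lr=lr)  # lr is an assumption
    fusion_net.train()
    for _ in range(epochs):
        for i_ir, i_vis, phi_text in loader:
            i_f = fusion_net(i_ir, i_vis, phi_text)     # fused image
            loss = loss_fn(i_f, i_ir, i_vis, phi_text)  # L_image + beta * L_hier
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```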

IV-A Qualitative Experiments

We choose six state-of-the-art image fusion methods to compare performance in preserving salient targets and detail information: RFN-Nest[20], LRRNet[21], YDTR[22], GANMcC[23], EMMA[24], and our baseline MDA[19]. Qualitative results are shown in Fig. 4. The first two columns are the source infrared and visible images, and the red and blue boxes in the fusion results from the third to ninth columns enlarge details in the fused images. Some thermal targets exhibit low intensity in visible images, such as the humans in the second row and the vehicles in the last row. Compared with LRRNet and YDTR, our HPFusion highlights the saliency of these targets while keeping the contrast of the visible images. Additionally, detail information such as the sharp edges of targets and background, which is damaged in the images fused by GANMcC, is effectively preserved in our results.

IV-B Quantitative Experiments

Five quantitative metrics are employed for evaluation: mean squared error (MSE), structural similarity index metric (SSIM), correlation coefficient (CC), peak signal-to-noise ratio (PSNR), and edge retentiveness ($Q^{AB/F}$). These metrics provide a comprehensive assessment of how well pixel intensity and edge information are transferred from the source images. Specifically, a lower MSE indicates better preservation of pixel intensity, whereas higher values of the other four metrics indicate better performance. Comparison results are shown in the upper part of Table I, where the proposed method achieves the best performance on four metrics, indicating effective preservation of thermal and detail information.

IV-C Ablation Studies

We investigate the effectiveness of the HPM and the hierarchical semantic loss in our ablation studies. Quantitative results are shown in the lower part of Table I. Our HPFusion, consisting of the HPM and $L_{hier}$, achieves the best performance across four metrics, demonstrating the superiority of HPFusion in balancing thermal saliency and texture details.

V Conclusion

In this work, we propose an infrared and visible image fusion method that incorporates hierarchical human perception by leveraging a Large Vision-Language Model. We design four questions and use LLaVA to generate the answer texts, which are subsequently encoded into textual embeddings by CLIP to guide fusion and optimization. Experiments show that our fusion method effectively preserves thermal and detail information, thereby enhancing human comprehension of the context of the image pair.

TABLE I: Quantitative comparison with state-of-the-art methods and ablation studies. The best performance is shown in bold, and the second- and third-best are shown in red and blue, respectively.
Method MSE↓ SSIM↑ PSNR↑ CC↑ $Q^{AB/F}$↑
RFN-Nest 0.034 0.397 63.373 0.572 0.406
GANMcC 0.037 0.391 62.956 0.571 0.268
YDTR 0.044 0.471 62.728 0.554 0.478
LRRNet 0.039 0.388 62.952 0.541 0.498
EMMA 0.057 0.451 61.686 0.502 0.592
MDA 0.033 0.438 62.517 0.585 0.487
HPFusion 0.032 0.500 63.794 0.595 0.505
Ablation Studies
HPM $L_{hier}$ MSE↓ SSIM↑ PSNR↑ CC↑ $Q^{AB/F}$↑
× ✓ 0.033 0.489 63.677 0.574 0.523
✓ × 0.058 0.467 61.507 0.543 0.502
✓ ✓ 0.032 0.500 63.794 0.580 0.505

Acknowledgment

This work is supported by the National Natural Science Foundation of China under Grants U21A20514, 62176195, 62441601 and 62036007.

References

  • [1] S. Li, X. Kang, L. Fang, J. Hu, and H. Yin, “Pixel-level image fusion: A survey of the state of the art,” Information Fusion, vol. 33, pp. 100–112, 2017.
  • [2] J. Ma, Y. Ma, and C. Li, “Infrared and visible image fusion methods and applications: A survey,” Information fusion, vol. 45, pp. 153–178, 2019.
  • [3] H. Li and X.-J. Wu, “Densefuse: A fusion approach to infrared and visible images,” IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2614–2623, 2018.
  • [4] L. Tang, J. Yuan, and J. Ma, “Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network,” Information Fusion, vol. 82, pp. 28–42, 2022.
  • [5] J. Liu, Z. Liu, G. Wu, L. Ma, R. Liu, W. Zhong, Z. Luo, and X. Fan, “Multi-interactive feature learning and a full-time multi-modality benchmark for image fusion and segmentation,” in Proceedings of the IEEE/CVF international conference on computer vision, 2023, pp. 8115–8124.
  • [6] Z. Liang, C. Li, S. Zhou, R. Feng, and C. C. Loy, “Iterative prompt learning for unsupervised backlit image enhancement,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 8094–8103.
  • [7] S. Yang, M. Ding, Y. Wu, Z. Li, and J. Zhang, “Implicit neural representation for cooperative low-light image enhancement,” in Proceedings of the IEEE/CVF international conference on computer vision, 2023, pp. 12918–12927.
  • [8] H. Sun, W. Li, J. Liu, H. Chen, R. Pei, X. Zou, Y. Yan, and Y. Yang, “Coser: Bridging image and language for cognitive super-resolution,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 25868–25878.
  • [9] F. Yu, J. Gu, Z. Li, J. Hu, X. Kong, X. Wang, J. He, Y. Qiao, and C. Dong, “Scaling up to excellence: Practicing model scaling for photo-realistic image restoration in the wild,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 25669–25680.
  • [10] H. Liu, C. Li, Q. Wu, and Y. J. Lee, “Visual instruction tuning,” Advances in neural information processing systems, vol. 36, 2024.
  • [11] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., “Learning transferable visual models from natural language supervision,” in International conference on machine learning.   PMLR, 2021, pp. 8748–8763.
  • [12] F. G. Veshki and S. A. Vorobyov, “Coupled feature learning via structured convolutional sparse coding for multimodal image fusion,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2022, pp. 2500–2504.
  • [13] Z. Zhou, B. Wang, S. Li, and M. Dong, “Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with gaussian and bilateral filters,” Information fusion, vol. 30, pp. 15–26, 2016.
  • [14] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, “Fusiongan: A generative adversarial network for infrared and visible image fusion,” Information fusion, vol. 48, pp. 11–26, 2019.
  • [15] J. Liu, X. Fan, Z. Huang, G. Wu, R. Liu, W. Zhong, and Z. Luo, “Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2022, pp. 5802–5811.
  • [16] Z. Zhao, H. Bai, Y. Zhu, J. Zhang, S. Xu, Y. Zhang, K. Zhang, D. Meng, R. Timofte, and L. Van Gool, “Ddfm: denoising diffusion model for multi-modality image fusion,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 8082–8093.
  • [17] Z. Zhao, L. Deng, H. Bai, Y. Cui, Z. Zhang, Y. Zhang, H. Qin, D. Chen, J. Zhang, P. Wang, and L. Van Gool, “Image fusion via vision-language model,” in Forty-first International Conference on Machine Learning, 2024. [Online]. Available: https://openreview.net/forum?id=eqY64Z1rsT
  • [18] J. Li, D. Li, C. Xiong, and S. Hoi, “Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation,” in International conference on machine learning.   PMLR, 2022, pp. 12888–12900.
  • [19] G. Yang, J. Li, H. Lei, and X. Gao, “A multi-scale information integration framework for infrared and visible image fusion,” Neurocomputing, vol. 600, p. 128116, 2024.
  • [20] H. Li, X.-J. Wu, and J. Kittler, “Rfn-nest: An end-to-end residual fusion network for infrared and visible images,” Information Fusion, vol. 73, pp. 72–86, 2021.
  • [21] H. Li, T. Xu, X.-J. Wu, J. Lu, and J. Kittler, “Lrrnet: A novel representation learning guided fusion network for infrared and visible images,” IEEE transactions on pattern analysis and machine intelligence, vol. 45, no. 9, pp. 11040–11052, 2023.
  • [22] W. Tang, F. He, and Y. Liu, “Ydtr: Infrared and visible image fusion via y-shape dynamic transformer,” IEEE Transactions on Multimedia, vol. 25, pp. 5413–5428, 2022.
  • [23] J. Ma, H. Zhang, Z. Shao, P. Liang, and H. Xu, “Ganmcc: A generative adversarial network with multiclassification constraints for infrared and visible image fusion,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–14, 2020.
  • [24] Z. Zhao, H. Bai, J. Zhang, Y. Zhang, K. Zhang, S. Xu, D. Chen, R. Timofte, and L. Van Gool, “Equivariant multi-modality image fusion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 25912–25921.