

Spatial-Spectral Manifold Embedding of Hyperspectral Data

D. Hong1,2    J. Yao1,3,4    X. Wu5    J. Chanussot6    X. Zhu1,3 1 Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), 82234 Wessling, Germany
- (danfeng.hong, jing.yao, [email protected])
2 Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
3 Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), 80333 Munich, Germany
4 School of Mathematics and Statistics, Xi’an Jiaotong University, 710049 Xi’an, China
- [email protected]
5 School of Information and Electronics, Beijing Institute of Technology (BIT), 100081 Beijing, China.
- [email protected]
6 Univ. Grenoble Alpes, INRIA, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
- [email protected]
Abstract

This is a pre-print version accepted for publication in the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.

In recent years, hyperspectral imaging, also known as imaging spectroscopy, has received increasing interest in the geoscience and remote sensing community. Hyperspectral imagery is characterized by very rich spectral information, which enables us to recognize the materials of interest on the surface of the Earth more easily. However, the high spectral dimension inevitably brings drawbacks, such as expensive data storage and transmission, information redundancy, etc. Therefore, to reduce the spectral dimensionality effectively and learn a more discriminative low-dimensional spectral embedding, in this paper we propose a novel hyperspectral embedding approach that simultaneously considers spatial and spectral information, called spatial-spectral manifold embedding (SSME). Beyond pixel-wise spectral embedding approaches, SSME models the spatial and spectral information jointly in a patch-based fashion. SSME not only learns the spectral embedding by using the adjacency matrix obtained by similarity measurement between spectral signatures, but also models the spatial neighbours of a target pixel in the hyperspectral scene by sharing the same weights (or edges) in the embedding learning process. Classification is explored as a potential application to quantitatively evaluate the performance of the learned embedding representations. Extensive experiments conducted on widely-used hyperspectral datasets demonstrate the superiority and effectiveness of the proposed SSME compared to several state-of-the-art embedding methods.

keywords:
Classification, embedding, hyperspectral data, manifold learning, remote sensing, spatial-spectral.

1 Introduction

Currently operational hyperspectral missions, such as the DLR Earth Sensing Imaging Spectrometer (DESIS) [Krutz et al., 2018], Gaofen-5 [Ren et al., 2017], and the Environmental Mapping and Analysis Program (EnMAP) [Guanter et al., 2009], enable the recognition and identification of materials of interest at a more accurate level than multispectral data [Hong et al., 2015] or RGB data [Wu et al., 2018, Wu et al., 2019]. However, owing to the curse of dimensionality, the high spectral dimensionality inevitably introduces drawbacks that can degrade the usable spectral information. As a result, dimensionality reduction is a necessary step before high-level data analysis is performed.

Figure 1: An illustration for the holistic workflow of the proposed SSME model.

Over the past decades, a large number of dimensionality reduction approaches have been successfully applied in many computer vision related fields, such as low-level vision analysis [Bi et al., 2017, Kang et al., 2020], biometrics [Hong et al., 2014, Hong et al., 2016b], large-scale data classification [Hong et al., 2016c, Huang et al., 2020a, Bi et al., 2019a, Huang et al., 2020b, Bi et al., 2019b], multimodal data analysis [Zhang et al., 2019b, Zhang et al., 2019a, Hong et al., 2020a], data fusion [Hu et al., 2019a, Hu et al., 2019c], etc. Among them, spectral manifold embedding, a popular topic in hyperspectral dimensionality reduction [Hong et al., 2016a], has attracted growing attention in various hyperspectral remote sensing applications, such as hyperspectral image denoising [Cao et al., 2018b, Cao et al., 2018a], land cover and land use classification [Hang et al., 2019, Hong et al., 2019d], spectral unmixing [Hong, Zhu, 2018, Hong et al., 2019b, Yao et al., 2019], target detection and recognition [Li et al., 2018, Wu et al., 2020a], and multimodal data analysis [Liu et al., 2019, Hong et al., 2019c, Hang et al., 2020]. It is well known that hyperspectral imagery is a three-dimensional imaging product obtained by continuously scanning the region of interest (ROI) to collect hundreds or thousands of two-dimensional images, finely sampled in wavelength over the visible-to-shortwave-infrared range, e.g., 300 nm to 2500 nm. This enables the identification and detection of materials on the surface of the Earth at a more accurate level than other optical data, e.g., RGB. Meanwhile, high-dimensional spectral signatures also introduce serious drawbacks: high storage and computational cost, redundant information, and complex noise caused by atmospheric effects all reduce the spectral discrimination of hyperspectral images, further degrading the performance of high-level applications, e.g., classification and detection.

Recently, enormous efforts have been made to enhance the quality of low-dimensional hyperspectral embeddings in the spectral domain. More specifically, Hong et al. [Hong et al., 2016a] proposed to robustly select the neighbouring pixels for hyperspectral dimensionality reduction. The same investigators [Hong et al., 2017] further designed a novel low-dimensional embedding algorithm that learns a robust local manifold representation for dimensionality reduction of hyperspectral images. Inspired by the recent success of deep learning [Wu et al., 2020b], Hong et al. [Hong et al., 2018] developed a joint and progressive learning strategy that learns low-dimensional representations by using manifold regularization techniques in each layer. The proposed deep embedding model has demonstrated its superiority and effectiveness in the hyperspectral dimensionality reduction task.

Yet spatial information [Hong et al., 2020b] has been less investigated by researchers in the remote sensing community in the process of hyperspectral embedding [Rasti et al., 2020]. Spatial information has been proven effective in the hyperspectral image classification task [Cao et al., 2020], owing to an important and reasonable assumption about hyperspectral images: a target pixel and its neighboring pixels tend to share the same category to a great extent. Spatial information modeling is therefore capable of improving the discriminative ability of the learned embedding representations more effectively, as the spatial structure is one of the most important physically meaningful properties in hyperspectral imaging.

For the aforementioned reasons, in this paper we develop a novel hyperspectral embedding approach that simultaneously considers spatial and spectral information, called spatial-spectral manifold embedding (SSME). SSME not only learns the spectral embedding by using the adjacency matrix obtained by similarity measurement between spectral signatures, but also models the spatial neighbours of a target pixel in the hyperspectral scene by sharing the same weights (or edges) in the embedding learning process. Classification is explored as a potential strategy to quantitatively evaluate the performance of the learned embedding representations. Extensive experiments conducted on widely-used hyperspectral datasets demonstrate the superiority and effectiveness of the proposed SSME compared to several state-of-the-art embedding methods in terms of overall accuracy (OA), average accuracy (AA), and the kappa coefficient (\kappa). More specifically, the contributions of this paper can be highlighted as follows:

  • A novel hyperspectral dimensionality reduction approach – spatial-spectral manifold embedding (SSME) – is devised to learn the low-dimensional manifold embedding of the hyperspectral data.

  • Beyond pixel-wise spectral embedding, we propose to construct a spatial-spectral weight matrix in the spectral embedding, yielding a smoother low-dimensional hyperspectral embedding.

  • Experimental results on a widely-used hyperspectral dataset demonstrate the effectiveness and superiority of the proposed SSME approach.

The rest of this paper is organized as follows. Section 2 details the methodology of the proposed SSME approach with the necessary formulations. Extensive experiments in comparison with several competitive methods are conducted in Section 3. Finally, we draw conclusions in Section 4.

2 Methodology

Manifold embedding, also known as manifold learning, builds on the graph embedding framework [Yan et al., 2006] and attempts to capture the underlying structure of the original data and preserve it in the latent embedding space. The embedding process mainly consists of the following three steps.

  • Neighbor selection on the spectral domain by spectrally measuring the similarities between pixels;

  • Adjacency matrix computation between each target pixel and its neighbouring pixels by using regression-based methods [Hong, 2019] or Gaussian kernel functions;

  • Calculation of embedding by solving a generalized eigen-decomposition problem.
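As a concrete illustration, the three generic steps above can be sketched with a Gaussian-kernel adjacency and a Laplacian-eigenmaps-style solver. This is a minimal NumPy sketch of the generic spectral pipeline, not the SSME solver itself; the function and parameter names are our own.

```python
import numpy as np

def spectral_embedding(X, n_neighbors=10, n_dims=2, sigma=1.0):
    """Generic three-step spectral embedding: neighbor selection,
    Gaussian-kernel adjacency, and an eigen-decomposition of the
    normalized graph Laplacian (Laplacian eigenmaps style)."""
    n = X.shape[0]
    # Step 1: neighbor selection by pairwise Euclidean distance.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]  # skip self
    # Step 2: adjacency matrix with a Gaussian kernel, symmetrized.
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), n_neighbors)
    W[rows, idx.ravel()] = np.exp(-d2[rows, idx.ravel()] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)
    # Step 3: eigen-decomposition of the symmetric normalized Laplacian.
    deg = W.sum(1)
    L = np.diag(deg) - W
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    Ln = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(Ln)
    # Skip the trivial (constant) eigenvector; rescale back.
    Y = (vecs[:, 1:n_dims + 1].T * d_inv_sqrt).T
    return Y
```

A call such as `spectral_embedding(X, n_neighbors=10, n_dims=30)` would map an N-by-D pixel matrix to an N-by-30 embedding.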

Unlike previous manifold embedding techniques, such as locally linear embedding (LLE) [Roweis, Saul, 2000], Laplacian eigenmaps (LE) [Belkin, Niyogi, 2002], and their linearized counterparts, locality preserving projections (LPP) [He, Niyogi, 2004] and neighborhood preserving embedding (NPE) [He et al., 2005], which operate only in the spectral domain, the newly-proposed spatial-spectral manifold embedding (SSME) performs the low-dimensional embedding process in the spatial and spectral domains jointly. SSME likewise follows the graph embedding framework. Given a hyperspectral image \mathbf{X}\in\mathcal{R}^{D\times N} with D bands and N pixels, let \mathbf{x}_{i,j} denote the spectral signature located at position (i,j) of the image. We then have

  • Spatial-spectral neighbor selection by using spatial prior knowledge and Euclidean-distance-based similarity measurement in the spectral domain, respectively, which can be written as follows

    \phi_{i}^{spa}\leftarrow[\mathbf{x}_{i,j},\mathbf{x}_{i-1,j},\mathbf{x}_{i,j-1},\mathbf{x}_{i+1,j},\mathbf{x}_{i,j+1}], \qquad (1)
    \phi_{i}^{spe}\leftarrow\mathrm{sort}\{\lVert\mathbf{x}_{i}-\mathbf{x}_{k}\rVert_{2}\}_{k=1}^{N}.

    \phi_{i}^{spa} and \phi_{i}^{spe} denote the spatial and spectral neighbours of the target pixel \mathbf{x}_{i}, where the latter are obtained by sorting the Euclidean distances (\mathrm{sort}).

  • Spatially-induced adjacency matrix generation by computing the regression coefficients or weights between the target pixels and their spatial-spectral neighbours. The process can be formulated by

    \min_{\mathbf{w}_{i,0}}\sum_{j\in\phi_{i}^{spa}}\lVert\mathbf{x}_{i,j}-\sum_{k\in\phi_{i}^{spe}}\mathbf{x}_{i,k}w_{i,k,j}\rVert_{2}^{2} \qquad (2)
    {\rm s.t.}\;\lVert\sum_{k\in\phi_{i}^{spe}}\mathbf{x}_{i,k}(4w_{i,k,0}-\sum_{j=1}^{4}w_{i,k,j})\rVert_{2}^{2}\leq\eta,
    \mathbf{w}_{i,j}^{\operatorname{T}}\mathbf{w}_{i,j}=1,

    where \mathbf{x}_{i,k}\in\phi_{i}^{spe} represents the k nearest neighbors selected from the spectral domain, and j\in\phi_{i}^{spa} indexes the target pixel in the HSI and its spatial neighbours, respectively. Accordingly, \mathbf{w}_{i,j}=[w_{i,1,j},...,w_{i,k,j},...], j\in\phi_{i}^{spa}, where \mathbf{w}_{i,0} denotes the to-be-estimated regression coefficients of the target pixel, and \eta is the error tolerance (10^{-3} in our case).

    With the estimated \mathbf{w}, the affinity weights \mathbf{A} can be obtained as follows:

    \mathbf{A}_{i,0,k}=\begin{cases}\mathbf{w}_{i,0,k}+\mathbf{w}_{k,i,0}-\mathbf{w}_{i,0,k}\mathbf{w}_{k,i,0}, & k\in\phi_{i}^{spe},\\ 0, & {\rm otherwise}.\end{cases} \qquad (3)
  • Joint embedding guided by the aforementioned adjacency matrix, obtained by solving a generalized eigen-decomposition problem. Once the affinity matrix \mathbf{A} is given, the final hyperspectral embedding \mathbf{Y}=\{\mathbf{y}_{i}\}_{i=1}^{N} can be computed by solving the following minimization problem.

    \min_{\mathbf{Y}}\sum_{i=1}^{N}\lVert\mathbf{y}_{i}-\sum_{k\in\phi_{i}^{spe}}\mathbf{A}_{i,k}\mathbf{y}_{k}\rVert_{2}^{2}, \qquad (4)
    {\rm s.t.}\;\sum_{i=1}^{N}\mathbf{y}_{i}=0,\;\;\frac{1}{N}\sum_{i=1}^{N}\mathbf{y}_{i}\mathbf{y}_{i}^{T}=\mathbf{I}.
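Putting the three steps together, the pipeline of Eqs. (1), (3), and (4) can be sketched as follows. This is an illustrative simplification, not the exact SSME solver: the constrained regression of Eq. (2) is replaced by plain least-squares weights, the neighbour search is a naive O(N^2) scan, and all names are our own.

```python
import numpy as np

def ssme_embed(cube, k=10, n_dims=16):
    """Sketch of the SSME pipeline on an H x W x D hyperspectral cube:
    spatial neighbours from the 4-connected cross of Eq. (1), spectral
    k-nearest neighbours, per-pixel regression weights averaged over the
    spatial cross, affinity symmetrization as in Eq. (3), and an
    LLE-style eigen-solve of Eq. (4)."""
    H, W_, D = cube.shape
    X = cube.reshape(-1, D)
    N = X.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        r, c = divmod(i, W_)
        # Eq. (1): target pixel plus its 4-connected spatial cross.
        spa = [(r + dr, c + dc) for dr, dc in
               [(0, 0), (-1, 0), (0, -1), (1, 0), (0, 1)]
               if 0 <= r + dr < H and 0 <= c + dc < W_]
        # Spectral neighbours: k smallest Euclidean distances (skip self).
        dist = np.linalg.norm(X - X[i], axis=1)
        spe = np.argsort(dist)[1:k + 1]
        # Regress each spatial neighbour onto the spectral neighbours
        # (simplified stand-in for Eq. (2)) and average the coefficients.
        B = X[spe].T                                  # D x k basis
        w_avg = np.zeros(k)
        for rr, cc in spa:
            w, *_ = np.linalg.lstsq(B, cube[rr, cc], rcond=None)
            w_avg += w
        W[i, spe] = w_avg / len(spa)
    # Eq. (3): symmetric affinity A = W + W^T - W o W^T.
    A = W + W.T - W * W.T
    # Eq. (4): bottom eigenvectors of M = (I - A)^T (I - A), skipping the
    # (near-)constant one; scaled so that (1/N) Y^T Y = I.
    M = (np.eye(N) - A).T @ (np.eye(N) - A)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_dims + 1] * np.sqrt(N)
```

On a small H x W x D cube, `ssme_embed(cube, k=10, n_dims=16)` returns an N x 16 embedding with N = H * W rows, one per pixel.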

Fig. 1 illustrates the workflow of the proposed SSME method for extracting low-dimensional hyperspectral embedding representations.

3 Experiments

3.1 Data Description

To assess the effectiveness of the proposed SSME in the hyperspectral embedding task, classification is selected as the evaluation strategy [Gao et al., 2020]. In our case, a simple but efficient classifier, the nearest neighbour (NN) classifier, is used. Moreover, the widely-used and classic Indian Pines hyperspectral dataset is chosen for our experiments. It was acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) sensor over northwestern Indiana, USA, and consists of 145\times 145 pixels with 220 spectral bands covering the wavelength range from 400 nm to 2500 nm. There are 16 classes in the studied scene. Fixed training and test sets widely used in many references [Hong et al., 2019a] are given in Table 1, and a false-color image of the hyperspectral data is shown in the first column of Fig. 2.
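The NN classifier used for evaluation simply assigns each test embedding the label of its nearest training embedding; a minimal sketch (the function name is our own):

```python
import numpy as np

def nn_classify(Y_train, labels_train, Y_test):
    """1-nearest-neighbour classification in the learned embedding
    space: each test sample receives the label of the closest
    training sample under squared Euclidean distance."""
    d2 = ((Y_test[:, None, :] - Y_train[None, :, :]) ** 2).sum(-1)
    return labels_train[np.argmin(d2, axis=1)]
```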

Table 1: Class names and the number of training and test samples for each class of the hyperspectral dataset over the Indian Pines area.
No. Class Name Training Samples Test Samples
Class 1 CornNotill 50 1384
Class 2 CornMintill 50 784
Class 3 Corn 50 184
Class 4 GrassPasture 50 447
Class 5 GrassTrees 50 697
Class 6 HayWindrowed 50 439
Class 7 SoybeanNotill 50 918
Class 8 SoybeanMintill 50 2418
Class 9 SoybeanClean 50 564
Class 10 Wheat 50 162
Class 11 Woods 50 1244
Class 12 BuildingsGrassTrees 50 330
Class 13 StoneSteelTowers 50 45
Class 14 Alfalfa 15 39
Class 15 GrassPastureMowed 15 11
Class 16 Oats 15 5
Total 695 9671

3.2 Results

Table 2: Classification performance comparison between different algorithms on the Indian Pines dataset. The best results are shown in bold. The values in parentheses after the algorithm names indicate the feature dimensions used by the compared methods.
Methods OSF (220) PCA (30) LE (60) LLE (60) SSME (16)
OA 65.89 65.76 64.45 69.05 76.46
AA 75.71 75.70 74.84 76.94 86.67
\kappa 0.6148 0.6138 0.5997 0.6500 0.7350
Class 1 51.66 51.81 50.51 56.00 71.97
Class 2 57.40 57.91 56.51 57.14 78.95
Class 3 70.65 69.57 67.39 70.11 90.22
Class 4 88.14 88.37 87.92 90.83 92.39
Class 5 81.78 81.78 80.92 90.82 71.45
Class 6 95.90 96.13 94.53 95.44 98.63
Class 7 66.56 67.97 68.30 68.52 84.31
Class 8 55.21 54.22 52.11 59.10 59.26
Class 9 53.01 52.48 50.53 59.40 84.75
Class 10 98.15 98.15 97.53 99.38 98.15
Class 11 82.88 82.48 80.95 81.11 82.07
Class 12 50.91 51.21 51.21 66.06 93.64
Class 13 97.78 97.78 97.78 93.33 97.78
Class 14 79.49 79.49 79.49 82.05 92.31
Class 15 81.82 81.82 81.82 81.82 90.91
Class 16 100.00 100.00 100.00 80.00 100.00
Figure 2: Classification maps obtained by different hyperspectral embedding approaches on the Indian Pines dataset.

A false-color image of this scene is shown in the first column of Fig. 2, and the distributions of the training and test sets are given in the subsequent column.

Several hyperspectral embedding baselines are selected to evaluate the quality of the embedding representations learned by the different methods: the original spectral features (OSF), principal component analysis (PCA) [Wold et al., 1987], LE, LLE, and ours (SSME). Quantitative classification accuracies of these methods in terms of OA, AA, and \kappa are listed in Table 2, while Fig. 2 visualizes the corresponding classification maps.

Overall, the results using PCA are basically consistent with those using OSF in all indices. LE also yields embedding results similar to PCA and OSF when assessed by means of the classification task. By linearly regressing the local neighboring relationships of a target pixel, LLE performs better than the aforementioned embedding methods, with an increase of about 5% in OA. As expected, our proposed SSME outperforms the others dramatically by jointly considering spatial and spectral information, showing its superiority in hyperspectral low-dimensional embedding tasks. In addition, the accuracies for most classes using the proposed SSME are higher than those of the other competitors, as listed in Table 2.
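For reference, the three indices reported in Table 2 can be computed from the confusion matrix as in the generic sketch below (not the authors' evaluation code; the function name is our own):

```python
import numpy as np

def oa_aa_kappa(y_true, y_pred, n_classes):
    """Compute overall accuracy (OA), average per-class accuracy (AA),
    and Cohen's kappa coefficient from integer class labels."""
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1                       # confusion matrix
    n = C.sum()
    oa = np.trace(C) / n                   # fraction correctly classified
    aa = np.mean(np.diag(C) / C.sum(axis=1))  # mean of per-class recalls
    pe = (C.sum(axis=0) @ C.sum(axis=1)) / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```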

4 Conclusion

In this paper, we propose a novel spatial-spectral hyperspectral embedding approach, called spatial-spectral manifold embedding (SSME), for hyperspectral dimensionality reduction in the remote sensing community. SSME not only utilizes the spectral information but also models the spatial information when calculating the low-dimensional embedding. Although SSME is capable of extracting hyperspectral features well, the discriminative ability of the feature representations remains limited owing to its relatively weak (linearized) data-fitting capacity. To this end, in future work we would like to introduce more powerful techniques, e.g., deep learning, to further enhance the representation ability of the extracted low-dimensional embedding, or introduce additional data sources to better guide the hyperspectral embedding, e.g., light detection and ranging (LiDAR) [Huang et al., 2019], synthetic aperture radar (SAR) [Hu et al., 2019b], and multispectral data.

References

  • [Belkin, Niyogi, 2002] Belkin, M., Niyogi, P., 2002. Laplacian eigenmaps and spectral techniques for embedding and clustering. Proceedings of Advances in Neural Information Processing Systems (NIPS), 585–591.
  • [Bi et al., 2017] Bi, H., Sun, J., Xu, Z., 2017. Unsupervised PolSAR image classification using discriminative clustering. IEEE Trans. Geosci. Remote Sens., 55(6), 3531–3544.
  • [Bi et al., 2019a] Bi, H., Sun, J., Xu, Z., 2019a. A graph-based semisupervised deep learning model for PolSAR image classification. IEEE Trans. Geosci. Remote Sens., 57(4), 2116–2132.
  • [Bi et al., 2019b] Bi, H., Xu, F., Wei, Z., Xue, Y., Xu, Z., 2019b. An active deep learning approach for minimally supervised PolSAR image classification. IEEE Trans. Geosci. Remote Sens., 57(11), 9378–9395.
  • [Cao et al., 2018a] Cao, W., Wang, K., Han, G., Yao, J., Cichocki, A., 2018a. A robust PCA approach with noise structure learning and spatial–spectral low-rank modeling for hyperspectral image restoration. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(10), 3863–3879.
  • [Cao et al., 2018b] Cao, W., Yao, J., Sun, J., Han, G., 2018b. A tensor-based nonlocal total variation model for multi-channel image recovery. Signal Processing, 153, 321–335.
  • [Cao et al., 2020] Cao, X., Yao, J., Fu, X., Bi, H., Hong, D., 2020. An Enhanced 3-Dimensional Discrete Wavelet Transform for Hyperspectral Image Classification. IEEE Geoscience and Remote Sensing Letters. 10.1109/LGRS.2020.2990407.
  • [Gao et al., 2020] Gao, L., Hong, D., Yao, J., Zhang, B., Gamba, P., Chanussot, J., 2020. Spectral Superresolution of Multispectral Imagery with Joint Sparse and Low-Rank Learning. IEEE Transactions on Geoscience and Remote Sensing. 10.1109/TGRS.2020.3000684.
  • [Guanter et al., 2009] Guanter, L., Segl, K., Kaufmann, H., 2009. Simulation of optical remote-sensing scenes with application to the EnMAP hyperspectral mission. IEEE Transactions on Geoscience and Remote Sensing, 47(7), 2340–2351.
  • [Hang et al., 2020] Hang, R., Li, Z., Ghamisi, P., Hong, D., Xia, G., Liu, Q., 2020. Classification of Hyperspectral and LiDAR Data Using Coupled CNNs. IEEE Transactions on Geoscience and Remote Sensing. DOI:10.1109/TGRS.2020.2969024.
  • [Hang et al., 2019] Hang, R., Liu, Q., Hong, D., Ghamisi, P., 2019. Cascaded recurrent neural networks for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 57(8), 5384–5394.
  • [He et al., 2005] He, X., Cai, D., Yan, S., Zhang, H.-J., 2005. Neighborhood preserving embedding. Proceedings of the IEEE International Conference on Computer Vision (ICCV),  2, 1208–1213.
  • [He, Niyogi, 2004] He, X., Niyogi, P., 2004. Locality preserving projections. Proceedings of Advances in Neural Information Processing Systems (NIPS), 153–160.
  • [Hong, 2019] Hong, D., 2019. Regression-Induced Representation Learning and Its Optimizer: A Novel Paradigm to Revisit Hyperspectral Imagery Analysis. PhD thesis, Technische Universität München.
  • [Hong et al., 2020a] Hong, D., Chanussot, J., Yokoya, N., Kang, J., Zhu, X. X., 2020a. Learning Shared Cross-Modality Representation Using Multispectral-LiDAR and Hyperspectral Data. IEEE Geoscience and Remote Sensing Letters. DOI:10.1109/LGRS.2019.2944599.
  • [Hong et al., 2016a] Hong, D. F., Yokoya, N., Zhu, X. X., 2016a. Local manifold learning with robust neighbors selection for hyperspectral dimensionality reduction. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), IEEE, 40–43.
  • [Hong et al., 2015] Hong, D., Liu, W., Su, J., Pan, Z., Wang, G., 2015. A novel hierarchical approach for multispectral palmprint recognition. Neurocomputing, 151, 511–521.
  • [Hong et al., 2016b] Hong, D., Liu, W., Wu, X., Pan, Z., Su, J., 2016b. Robust palmprint recognition based on the fast variation Vese–Osher model. Neurocomputing, 174, 999–1012.
  • [Hong et al., 2014] Hong, D., Pan, Z., Wu, X., 2014. Improved differential box counting with multi-scale and multi-direction: A new palmprint recognition method. Optik, 125(15), 4154–4160.
  • [Hong et al., 2020b] Hong, D., Wu, X., Ghamisi, P., Chanussot, J., Yokoya, N., Zhu, X. X., 2020b. Invariant Attribute Profiles: A Spatial-Frequency Joint Feature Extractor for Hyperspectral Image Classification. IEEE Transactions on Geoscience and Remote Sensing, 58(6), 3791–3808.
  • [Hong et al., 2019a] Hong, D., Yokoya, N., Chanussot, J., Xu, J., Zhu, X. X., 2019a. Learning to propagate labels on graphs: An iterative multitask regression framework for semi-supervised hyperspectral dimensionality reduction. ISPRS Journal of Photogrammetry and Remote Sensing, 158, 35–49.
  • [Hong et al., 2019b] Hong, D., Yokoya, N., Chanussot, J., Zhu, X. X., 2019b. An augmented linear mixing model to address spectral variability for hyperspectral unmixing. IEEE Transactions on Image Processing, 28(4), 1923–1938.
  • [Hong et al., 2019c] Hong, D., Yokoya, N., Chanussot, J., Zhu, X. X., 2019c. CoSpace: Common subspace learning from hyperspectral-multispectral correspondences. IEEE Transactions on Geoscience and Remote Sensing, 57(7), 4349–4359.
  • [Hong et al., 2019d] Hong, D., Yokoya, N., Ge, N., Chanussot, J., Zhu, X. X., 2019d. Learnable manifold alignment (LeMA): A semi-supervised cross-modality learning framework for land cover and land use classification. ISPRS Journal of Photogrammetry and Remote Sensing, 147, 193–205.
  • [Hong et al., 2018] Hong, D., Yokoya, N., Xu, J., Zhu, X., 2018. Joint & progressive learning from high-dimensional data for multi-label classification. Proceedings of the European Conference on Computer Vision (ECCV), 469–484.
  • [Hong et al., 2016c] Hong, D., Yokoya, N., Zhu, X. X., 2016c. The k-LLE algorithm for nonlinear dimensionality reduction of large-scale hyperspectral data. 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), IEEE, 1–5.
  • [Hong et al., 2017] Hong, D., Yokoya, N., Zhu, X. X., 2017. Learning a robust local manifold representation for hyperspectral dimensionality reduction. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(6), 2960–2975.
  • [Hong, Zhu, 2018] Hong, D., Zhu, X. X., 2018. SULoRA: Subspace unmixing with low-rank attribute embedding for hyperspectral data analysis. IEEE Journal of Selected Topics in Signal Processing, 12(6), 1351–1363.
  • [Hu et al., 2019a] Hu, J., Hong, D., Wang, Y., Zhu, X. X., 2019a. A comparative review of manifold learning techniques for hyperspectral and polarimetric sar image fusion. Remote Sensing, 11(6), 681.
  • [Hu et al., 2019b] Hu, J., Hong, D., Wang, Y., Zhu, X. X., 2019b. A topological data analysis guided fusion algorithm: Mapper-regularized manifold alignment. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), IEEE, 2822–2825.
  • [Hu et al., 2019c] Hu, J., Hong, D., Zhu, X. X., 2019c. MIMA: MAPPER-Induced Manifold Alignment for Semi-Supervised Fusion of Optical Image and Polarimetric SAR Data. IEEE Transactions on Geoscience and Remote Sensing, 57(11), 9025–9040.
  • [Huang et al., 2020a] Huang, R., Hong, D., Xu, Y., Yao, W., Stilla, U., 2020a. Multi-Scale Local Context Embedding for LiDAR Point Cloud Classification. IEEE Geoscience and Remote Sensing Letters, 17(4), 721–725.
  • [Huang et al., 2020b] Huang, R., Xu, Y., Hong, D., Yao, W., Ghamisi, P., Stilla, U., 2020b. Deep point embedding for urban classification using ALS point clouds: A new perspective from local to global. ISPRS Journal of Photogrammetry and Remote Sensing, 163, 62–81.
  • [Huang et al., 2019] Huang, R., Ye, Z., Hong, D., Xu, Y., Stilla, U., 2019. Semantic labeling and refinement of LiDAR point clouds using deep neural network in urban areas. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, 4.
  • [Kang et al., 2020] Kang, J., Hong, D., Liu, J., Baier, G., Yokoya, N., Demir, B., 2020. Learning Convolutional Sparse Coding on Complex Domain for Interferometric Phase Restoration. IEEE Transactions on Neural Networks and Learning Systems. DOI: 10.1109/TNNLS.2020.2979546.
  • [Krutz et al., 2018] Krutz, D., Sebastian, I., Eckardt, A., Venus, H., Walter, I., Günther, B., Säuberlich, T., Neidhardt, M., Zender, B., Lieder, M. et al., 2018. DESIS - DLR Earth Sensing Imaging Spectrometer for the International Space Station ISS. Sensors, Systems, and Next-Generation Satellites XXII, 10785, International Society for Optics and Photonics, 107850K.
  • [Li et al., 2018] Li, C., Gao, L., Wu, Y., Zhang, B., Plaza, J., Plaza, A., 2018. A real-time unsupervised background extraction-based target detection method for hyperspectral imagery. Journal of Real-Time Image Processing, 15(3), 597–615.
  • [Liu et al., 2019] Liu, X., Deng, C., Chanussot, J., Hong, D., Zhao, B., 2019. Stfnet: A two-stream convolutional neural network for spatiotemporal image fusion. IEEE Transactions on Geoscience and Remote Sensing, 57(9), 6552–6564.
  • [Rasti et al., 2020] Rasti, B., Hong, D., Hang, R., Ghamisi, P., Kang, X., Chanussot, J., Benediktsson, J. A., 2020. Feature Extraction for Hyperspectral Imagery: The Evolution from Shallow to Deep (Overview and Toolbox). IEEE Geoscience and Remote Sensing Magazine. DOI: 10.1109/MGRS.2020.2979764.
  • [Ren et al., 2017] Ren, H., Ye, X., Liu, R., Dong, J., Qin, Q., 2017. Improving land surface temperature and emissivity retrieval from the Chinese Gaofen-5 satellite using a hybrid algorithm. IEEE Transactions on Geoscience and Remote Sensing, 56(2), 1080–1090.
  • [Roweis, Saul, 2000] Roweis, S. T., Saul, L. K., 2000. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2323–2326.
  • [Wold et al., 1987] Wold, S., Esbensen, K., Geladi, P., 1987. Principal component analysis. Chemometrics and intelligent laboratory systems, 2(1-3), 37–52.
  • [Wu et al., 2020a] Wu, X., Hong, D., Chanussot, J., Xu, Y., Tao, R., Wang, Y., 2020a. Fourier-based Rotation-invariant Feature Boosting: An Efficient Framework for Geospatial Object Detection. IEEE Geoscience and Remote Sensing Letters, 17(2), 302–306.
  • [Wu et al., 2018] Wu, X., Hong, D., Ghamisi, P., Li, W., Tao, R., 2018. MsRi-CCF: Multi-scale and rotation-insensitive convolutional channel features for geospatial object detection. Remote Sensing, 10(12), 1990.
  • [Wu et al., 2019] Wu, X., Hong, D., Tian, J., Chanussot, J., Li, W., Tao, R., 2019. ORSIm Detector: A novel object detection framework in optical remote sensing imagery using spatial-frequency channel features. IEEE Transactions on Geoscience and Remote Sensing, 57(7), 5146–5158.
  • [Wu et al., 2020b] Wu, X., Tao, R., Hong, D., Wang, Y., 2020b. The FrFT convolutional face: toward robust face recognition using the fractional Fourier transform and convolutional neural networks. Science China Information Sciences, 63, 119103.
  • [Yan et al., 2006] Yan, S., Xu, D., Zhang, B., Zhang, H.-J., Yang, Q., Lin, S., 2006. Graph embedding and extensions: A general framework for dimensionality reduction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(1), 40–51.
  • [Yao et al., 2019] Yao, J., Meng, D., Zhao, Q., Cao, W., Xu, Z., 2019. Nonconvex-sparsity and nonlocal-smoothness-based blind hyperspectral unmixing. IEEE Transactions on Image Processing, 28(6), 2991–3006.
  • [Zhang et al., 2019a] Zhang, B., Zhang, M., Hong, D., 2019a. Land surface temperature retrieval from Landsat 8 OLI/TIRS images based on back-propagation neural network. Indoor and Built Environment, 1420326X19882079.
  • [Zhang et al., 2019b] Zhang, B., Zhang, M., Kang, J., Hong, D., Xu, J., Zhu, X., 2019b. Estimation of PMx concentrations from Landsat 8 OLI images based on a multilayer perceptron neural network. Remote Sensing, 11(6), 646.