Unsupervised Data Fusion With Deeper Perspective: A Novel Multisensor Deep Clustering Algorithm

The ever-growing development of technologies for capturing different types of image data [e.g., hyperspectral imaging and light detection and ranging (LiDAR)-derived digital surface models (DSMs)], along with new processing techniques, has led to rising interest in imaging applications for Earth observation. However, analyzing such datasets jointly remains a challenging task. In this article, we propose a multisensor deep clustering (MDC) algorithm for the joint processing of multisource imaging data. The architecture of MDC is inspired by autoencoder (AE)-based networks. The MDC paradigm comprises three parallel networks: a spectral network using an autoencoder structure, a spatial network using a convolutional autoencoder (CAE) structure, and a fusion network that reconstructs the concatenated image data from the concatenated latent features of the spectral and spatial networks. The proposed algorithm combines the reconstruction losses of these three networks to optimize the parameters (i.e., weights and biases) of all of them simultaneously. To validate the performance of the proposed algorithm, we use two multisensor datasets from different applications (i.e., geological and rural sites) as benchmarks. The experimental results confirm the superiority of our proposed deep clustering algorithm over a number of state-of-the-art clustering algorithms. The code will be available at:

Figure 1. Scheme of our proposed multisensor deep clustering algorithm. In the figure, X1, X2, and X3 denote the original HSI, LiDAR-derived DSM, and concatenated HSI and LiDAR-derived DSM data, respectively, and R1, R2, and R3 their reconstructions. The operation ⊕ denotes the concatenation of the features extracted by the AE and CAE networks.
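To make the three-branch design concrete, the following is a minimal PyTorch sketch of the idea described above: a spectral AE, a spatial CAE, and a fusion decoder that reconstructs the concatenated input from the concatenated latent features, trained with the sum of the three reconstruction losses. This is not the authors' released code; all layer sizes, patch dimensions, and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpectralAE(nn.Module):
    """Fully connected autoencoder over per-pixel spectra (X1 -> R1)."""
    def __init__(self, n_bands, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                                 nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, n_bands))
    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

class SpatialCAE(nn.Module):
    """Convolutional autoencoder over DSM patches (X2 -> R2); assumes 8x8 patches."""
    def __init__(self, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Flatten(), nn.Linear(8 * 8 * 8, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 8 * 8 * 8), nn.ReLU(),
                                 nn.Unflatten(1, (8, 8, 8)),
                                 nn.Conv2d(8, 1, 3, padding=1))
    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

class FusionNet(nn.Module):
    """Decodes the concatenated latents back to the concatenated input (X3 -> R3)."""
    def __init__(self, n_bands, patch_dim, latent=32):
        super().__init__()
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, n_bands + patch_dim))
    def forward(self, z_fused):
        return self.dec(z_fused)

n_bands, patch = 30, 8
ae, cae = SpectralAE(n_bands), SpatialCAE()
fusion = FusionNet(n_bands, patch * patch)
mse = nn.MSELoss()

x1 = torch.randn(4, n_bands)                 # HSI spectra (X1)
x2 = torch.randn(4, 1, patch, patch)         # DSM patches (X2)
x3 = torch.cat([x1, x2.flatten(1)], dim=1)   # concatenated input (X3)

z1, r1 = ae(x1)
z2, r2 = cae(x2)
r3 = fusion(torch.cat([z1, z2], dim=1))      # the ⊕ step from Figure 1

# joint objective: sum of the three reconstruction losses, so one
# backward pass updates the weights and biases of all three networks
loss = mse(r1, x1) + mse(r2, x2) + mse(r3, x3)
loss.backward()
```

In practice, clustering (e.g., k-means) would then be applied to the fused latent features z1 ⊕ z2 after training; that step is omitted here for brevity.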
