TGF-RSAnet
Deep learning has achieved promising progress in digital elevation model (DEM) super-resolution (SR). However, existing methods rarely consider integrating multi-modal data that carries auxiliary high-frequency information. A primary challenge stems from the heterogeneous feature representations among these data sources, which complicates effective learning of the terrain feature mapping relationships. In this paper, we propose a novel framework for DEM SR that integrates optical remote sensing imagery as auxiliary data. A terrain-guided texture-edge feature fusion network is constructed to transfer the feature representation of high-resolution image textures under the guidance of informative terrain features, adapting them for DEM SR learning. By exploiting a multi-dimensional attention mechanism, the meaningful image components conforming to the terrain features provide high-frequency information for DEM SR, while noisy features related to spectral variations are excluded from modelling. The terrain-oriented textural and edge features are then fused to generate the SR result under the constraint of a terrain feature-aware loss function. Extensive experiments on both simulated and real datasets indicate that the proposed method reconstructs DEMs with higher elevation accuracy and sharper terrain details, and outperforms state-of-the-art methods.
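To make the terrain-guided selection idea concrete, the following NumPy sketch gates image-derived features by their agreement with terrain features before fusing them. This is an illustrative toy, not the paper's network: the function name, the pooling-based channel attention, and the gradient-based spatial gate are all assumptions standing in for the learned multi-dimensional attention described above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def terrain_guided_fusion(dem_feat, img_feat):
    """Toy terrain-guided fusion of DEM and image feature maps.

    Both inputs have shape (C, H, W). Image channels that agree with the
    terrain features are up-weighted; the rest are suppressed.
    All names here are hypothetical, not from the paper's code.
    """
    # Channel descriptors via global average pooling.
    terrain_desc = dem_feat.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1)
    img_desc = img_feat.mean(axis=(1, 2), keepdims=True)      # (C, 1, 1)
    # Channel attention: high where terrain and image channels align.
    chan_attn = sigmoid(terrain_desc * img_desc)              # (C, 1, 1)
    # Spatial attention from terrain gradient magnitude (an edge-like cue).
    gy, gx = np.gradient(dem_feat.mean(axis=0))
    spat_attn = sigmoid(np.hypot(gx, gy))[None]               # (1, H, W)
    # Fuse: terrain features plus attention-selected image high frequencies.
    return dem_feat + chan_attn * spat_attn * img_feat

rng = np.random.default_rng(0)
dem_feat = rng.standard_normal((8, 16, 16))
img_feat = rng.standard_normal((8, 16, 16))
fused = terrain_guided_fusion(dem_feat, img_feat)
print(fused.shape)  # (8, 16, 16)
```

In the actual framework these gates are learned end-to-end and regularized by the terrain feature-aware loss; the sketch only shows why gating lets terrain-consistent image detail through while blocking spectral noise.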