
Study on Fusion of Optical and Radar Remote Sensing Images for Surface Change Detection

Author: 蒋骁 (Jiang Xiao)
  • Student ID
    2016******
  • Degree
    Doctoral
  • Email
    jia******.cn
  • Defense date
    2021.05.26
  • Supervisor
    何友 (He You)
  • Discipline
    Information and Communication Engineering
  • Pages
    108
  • Classification level
    Public
  • Department
    023 Department of Electronic Engineering
  • Keywords
    remote sensing images, change detection, fusion, optical, radar

Abstract

Surface change detection identifies changed surface regions between multi-temporal remote sensing images of the same scene. It is widely used in disaster monitoring and assessment, land-use investigation, and military and agricultural applications. Optical and radar remote sensing images are the two most widely used image types for this task. Optical images are intuitive, spectrally rich, and of relatively high resolution; radar images can be acquired day and night in all weather, unaffected by illumination conditions or severe weather. Fusing optical and radar remote sensing images can therefore improve both the accuracy and the timeliness of surface change detection.

This thesis focuses on fusion-based surface change detection with optical and radar remote sensing images in different scenes. Its major contributions are as follows.

1) For scenes where both optical and radar images are available before and after the change event, a change detection method based on superpixel-level belief fusion is proposed. Superpixels serve as the basic units for fusing the optical and radar images; the influence of image resolution and detection noise on fusion is modeled through basic belief assignments, and the per-sensor detection results are then combined with Dempster-Shafer evidence theory. Compared with existing fusion methods, the proposed method remains effective when the optical and radar images differ significantly in resolution and noise level.

2) For scenes where only heterogeneous remote sensing images are available before and after the change event, a change detection method based on deep homogeneous feature fusion is proposed, which transforms the heterogeneous images into a homogeneous space for change detection. Deep convolutional neural networks (DCNNs) separate the semantic content features of the heterogeneous optical and radar images from their style features; the content features are preserved while the style features are replaced to simulate homogeneous images, and change detection is finally performed in the homogeneous space. Compared with existing methods, the proposed method significantly reduces the corruption of semantic content during the homogeneous transformation and improves the accuracy of the detection results.

3) For the same heterogeneous scenes, and considering that the homogeneous transformation in existing methods is time-consuming, a change detection method based on a weakly supervised Siamese network is proposed. A Siamese network is constructed and trained weakly under a transfer learning strategy; the shared branches extract the semantic content features of the heterogeneous optical and radar images, and the difference between these features is identified for change detection. Compared with existing methods, the proposed method transfers low-level features learned from natural images in a weakly supervised way, avoiding the training of massive numbers of network parameters; this greatly reduces computational complexity and significantly improves detection efficiency.
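The evidence fusion in contribution 1) can be sketched as follows: each sensor's detector yields a basic belief assignment (BBA) over {changed, unchanged} for a superpixel, with mass on the full frame expressing ignorance, and the two BBAs are combined by Dempster's rule. The BBA values and the helper name `dempster_combine` are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch of superpixel-level Dempster-Shafer fusion, assuming
# each sensor outputs a BBA over the frame {changed 'c', unchanged 'u'}
# plus mass on the full set 'cu' (ignorance). Toy values, not thesis data.

def dempster_combine(m1, m2):
    """Combine two BBAs over {'c', 'u', 'cu'} with Dempster's rule."""
    frames = ['c', 'u', 'cu']
    combined = {f: 0.0 for f in frames}
    conflict = 0.0
    for a in frames:
        for b in frames:
            inter = set(a) & set(b)
            mass = m1[a] * m2[b]
            if not inter:                      # empty intersection: conflict
                conflict += mass
            else:
                key = 'cu' if inter == {'c', 'u'} else inter.pop()
                combined[key] += mass
    k = 1.0 - conflict                         # normalize by non-conflicting mass
    return {f: v / k for f, v in combined.items()}

# Optical detector: fairly confident the superpixel changed;
# radar detector: noisier, so more mass assigned to ignorance ('cu').
m_opt = {'c': 0.7, 'u': 0.2, 'cu': 0.1}
m_sar = {'c': 0.5, 'u': 0.1, 'cu': 0.4}
fused = dempster_combine(m_opt, m_sar)
print(fused['c'] > fused['u'])  # True: fusion reinforces the 'changed' hypothesis
```

Assigning more ignorance mass to the noisier sensor is how a BBA can encode the resolution and noise differences the method accounts for: the combination then leans on the more reliable sensor automatically.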
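For contribution 2), the content/style separation follows the usual neural style transfer convention: "content" is a layer's feature maps themselves, while "style" is commonly summarized by the Gram matrix of those feature maps. The sketch below shows only that style summary; the shapes and values are toy assumptions, not features from the thesis's networks.

```python
import numpy as np

def gram_matrix(features):
    """features: (C, H, W) feature maps -> (C, C) channel-correlation matrix."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)   # average over spatial positions

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))   # toy feature maps from one DCNN layer
g = gram_matrix(feat)
print(g.shape)               # (4, 4)
print(np.allclose(g, g.T))   # True: Gram matrices are symmetric
```

Simulating a homogeneous image then amounts to synthesizing an image whose content features match one image while its Gram (style) statistics match the other sensor's imagery, so the comparison happens in a single modality.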
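The Siamese idea in contribution 3) can be sketched with fixed shared filters standing in for the transferred low-level weights: both images pass through the same feature extractor, and a threshold on the per-pixel feature difference yields the change map. The filter bank, images, and threshold here are toy assumptions, not the thesis's trained network.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive valid-mode 2-D correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def extract_features(img, filters):
    """Shared (Siamese) branch: stack responses to the same fixed filters."""
    return np.stack([conv2d_valid(img, f) for f in filters])

# Two fixed low-level edge filters, standing in for transferred weights.
filters = [np.array([[0.5, -0.5], [0.5, -0.5]]),   # vertical edges
           np.array([[0.5, 0.5], [-0.5, -0.5]])]   # horizontal edges

t1 = np.zeros((8, 8))            # pre-event image: empty scene
t2 = np.zeros((8, 8))
t2[2:6, 2:6] = 1.0               # post-event image: a new object appears

f1 = extract_features(t1, filters)
f2 = extract_features(t2, filters)
diff = np.linalg.norm(f2 - f1, axis=0)   # per-pixel feature difference
change_map = diff > 0.5                  # toy decision threshold
print(change_map.any())                  # True: the appeared object is detected
```

Because the branches share weights that are transferred rather than trained from scratch, only the comparison stage remains to be tuned, which is where the method's efficiency gain comes from.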