Research on Image Matching Localization Method for UAV Based on Global Map

Author: Hao Yun
  • Student ID
    2017******
  • Degree
    Doctoral
  • Email
    hao******.cn
  • Defense Date
    2023.09.02
  • Supervisor
    Meng Ziyang
  • Discipline
    Instrument Science and Technology
  • Pages
    144
  • Confidentiality Level
    Public
  • Affiliation
    013 Department of Precision Instrument
  • Keywords
    UAV, global map, image matching, visual localization

Abstract

Unmanned Aerial Vehicles (UAVs) have a wide range of applications in military reconnaissance, environmental monitoring, disaster relief, and recreational entertainment, and accurate localization and navigation are prerequisites for UAVs to carry out their assigned tasks. However, in Global Navigation Satellite System (GNSS)-denied environments, localization methods based on Inertial Measurement Units (IMUs) or visual odometry (VO) cannot provide global position information for the UAV, and their localization errors grow over time. This thesis studies image matching localization methods for UAVs based on global maps, which endow the UAV with global localization capability in GNSS-denied environments. The main achievements and contributions are as follows:

To overcome the lack of global localization capability in classical visual localization approaches, we propose a two-stage global localization method based on pre-built maps. In the offline mapping stage, visual, IMU, and global position measurements are fused in a tightly coupled manner to construct a global map comprising keyframe poses together with their corresponding feature points and descriptors. In the global localization stage, image matching provides map-based measurements, and a global pose graph optimization is performed over continuous edges, loop closure edges, and map edges. To accurately align the local odometry frame with the global map frame, the transformation between the two frames is augmented into the state vector and estimated online within the pose graph optimization. Experiments on public datasets and in the real world demonstrate that the proposed method achieves higher accuracy and consistency than other localization methods.
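As a rough illustration of this alignment idea, the sketch below builds a minimal 2D pose graph in which the local-to-global transform is appended to the optimized state alongside the poses; the scipy-based formulation, toy data, and all names are our own assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal 2D pose-graph sketch: the state holds N local poses (x, y, yaw)
# plus one local->global transform (tx, ty, tyaw) appended at the end,
# mirroring the idea of augmenting the frame alignment into the state vector.

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def residuals(state, odom, map_obs):
    n = (len(state) - 3) // 3
    poses = state[:3 * n].reshape(n, 3)
    tx, ty, tyaw = state[3 * n:]
    res = list(poses[0])  # prior edge: anchor the first pose at the local origin
    # Continuous (odometry) edges between consecutive poses.
    for i, (dx, dy, dyaw) in enumerate(odom):
        c, s = np.cos(poses[i, 2]), np.sin(poses[i, 2])
        pred = np.array([poses[i, 0] + c * dx - s * dy,
                         poses[i, 1] + s * dx + c * dy,
                         poses[i, 2] + dyaw])
        diff = poses[i + 1] - pred
        diff[2] = wrap(diff[2])
        res.extend(diff)
    # Map edges: global position fixes applied through the estimated transform.
    for i, gx, gy in map_obs:
        c, s = np.cos(tyaw), np.sin(tyaw)
        px = tx + c * poses[int(i), 0] - s * poses[int(i), 1]
        py = ty + s * poses[int(i), 0] + c * poses[int(i), 1]
        res.extend([px - gx, py - gy])
    return np.array(res)

# Toy data: three poses moving 1 m along x, with two global position fixes.
odom = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
map_obs = [(0, 10.0, 5.0), (2, 12.0, 5.0)]
x0 = np.zeros(3 * 3 + 3)
sol = least_squares(residuals, x0, args=(odom, map_obs))
print(sol.x[-3:])  # recovered local->global transform, approx. (10, 5, 0)
```

Because two global fixes are enough to determine a 2D rigid transform, the solver recovers the local-to-global alignment jointly with the poses, which is the essence of augmenting the frame transformation into the state vector.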

To address the difficulty that satellite images and UAV-captured images differ greatly in appearance and are therefore hard to match, we propose a UAV image matching localization method based on semantic information. We first apply semantic segmentation to obtain binary building-semantic maps, and then construct semantic feature descriptors from the spatial arrangement patterns of the buildings. Using these semantic features, a global search initializes the UAV's position without prior knowledge of its initial global coordinates. Subsequently, starting from initial values provided by VO, the binary semantic maps are aligned by Iterative Closest Point (ICP) registration to obtain the UAV's global position. On a dataset built from Google Earth imagery, the localization error of the global search initialization is below 15 m, and the average localization error of the ICP-based registration is 12.5 m.
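As a minimal sketch of the registration step, the snippet below aligns two 2D point sets extracted from binary semantic maps with a plain ICP loop (nearest-neighbor correspondences from a k-d tree plus a closed-form Kabsch update); the NumPy/SciPy formulation and all names are illustrative assumptions, not the thesis code.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch) least-squares rigid transform src -> dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30, tol=1e-6):
    """Align src points to dst; returns the accumulated rotation and translation."""
    tree = cKDTree(dst)
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(cur)          # nearest-neighbor correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total

# Toy stand-in for building-pixel coordinates extracted from the two binary maps.
rng = np.random.default_rng(0)
uav_pts = rng.uniform(0, 100, size=(200, 2))
theta = np.deg2rad(3.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
sat_pts = uav_pts @ R_true.T + np.array([5.0, -3.0])
R_est, t_est = icp(uav_pts, sat_pts)
print(t_est)  # close to (5.0, -3.0); like ICP in general, this needs a decent initial guess
```

The dependence of ICP on a decent initial alignment is exactly why the VO estimate is used to seed the registration, with the semantic global search covering the case where no initial global coordinates are available.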
To address the limitations of existing satellite-map-based UAV image matching localization methods, namely the lack of continuous localization capability and the strong dependence on prior initial position information, we propose a systematic coarse-to-fine UAV image matching continuous localization method that covers the complete pipeline of image retrieval, candidate verification, and global localization. For local tracking, we propose a Range-Visual-Inertial Odometry (RVIO) that fuses a 1-D Laser Range Finder (LRF) with visual and IMU measurements, suppressing the scale drift of classical Visual-Inertial Odometry (VIO) and providing the UAV's frame-to-frame motion estimates. The UAV's global position is obtained through a coarse-to-fine image matching process: in the coarse matching stage, a database search retrieves the map tiles corresponding to the UAV-captured image; fine matching then yields the UAV's global position, whose reliability is evaluated from four factors, namely local feature matching quality, camera axis offset, pose estimation quality, and prior information. Finally, a global pose graph optimization fuses the local motion and global position estimates. Experiments in simulated scenarios and in the real world on a self-built platform demonstrate that the proposed method outperforms both local localization methods and state-of-the-art image matching localization methods in localization accuracy.
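To make the coarse-to-fine matching concrete, here is a small sketch in which map tiles are first ranked by a global descriptor and a retrieved candidate is then verified and localized by local feature matching with a RANSAC homography; the crude histogram descriptor, ORB features, thresholds, and all names are illustrative assumptions rather than the retrieval and verification models used in the thesis.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)

def global_descriptor(img):
    """Crude global descriptor: normalized grayscale intensity histogram."""
    hist = cv2.calcHist([img], [0], None, [64], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-9)

def coarse_retrieve(query, tiles, top_k=5):
    """Coarse matching: rank map tiles by global-descriptor similarity."""
    q = global_descriptor(query)
    scores = [float(np.dot(q, global_descriptor(t))) for t in tiles]
    return np.argsort(scores)[::-1][:top_k]

def fine_match(query, tile, min_inliers=30):
    """Candidate verification and fine localization via ORB + RANSAC homography."""
    kq, dq = orb.detectAndCompute(query, None)
    kt, dt = orb.detectAndCompute(tile, None)
    if dq is None or dt is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(dq, dt)
    if len(matches) < min_inliers:
        return None
    src = np.float32([kq[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kt[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or int(mask.sum()) < min_inliers:
        return None            # candidate rejected during verification
    h, w = query.shape[:2]
    center = np.float32([[[w / 2, h / 2]]])
    # Project the image center into tile coordinates as the position fix.
    return cv2.perspectiveTransform(center, H)[0, 0], int(mask.sum())
```

A caller would run coarse_retrieve over the tile database, try fine_match on each candidate, and keep the fix with the most inliers; the inlier count plays the role of the local-feature-matching-quality factor when weighting the fix in the subsequent pose graph optimization.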