
Research on Feature Tracking and Mapping Based on the Fusion of Event Cameras and Standard Cameras

Event and Standard Cameras Fusion for Feature Tracking and Mapping

Author: 董岩
  • Student ID
    2018******
  • Degree
    Master's
  • Email
    don******.cn
  • Defense date
    2021.05.25
  • Advisor
    张涛
  • Discipline
    Control Science and Engineering
  • Pages
    63
  • Confidentiality level
    Public
  • Department
    025 Department of Automation
  • Chinese keywords
    事件相机, 多传感器融合, 特征跟踪, 三维建图
  • English keywords
    Event Camera, Multi-sensor Fusion, Feature Tracking, 3D Mapping

Abstract

The event camera is a novel bio-inspired sensor. Unlike a standard camera, which captures images at a constant rate, an event camera responds only to change: it generates an "event" whenever the brightness of a pixel changes by more than a set threshold. Its output is therefore high-rate and asynchronous in time but sparse in image space, and compared with standard cameras it offers low latency, high dynamic range, and freedom from motion blur. These properties are especially important for autonomous mobile robots that navigate by vision. Because the data formats of event cameras and standard cameras are complementary, this thesis studies fusing the two sensors to perform feature tracking, a key step in robot localization, and to build 3D maps.

The main contents of the thesis are:

(1) A feature tracking algorithm fusing event data with standard images is proposed. The algorithm first detects keypoints in a standard image and extracts feature templates around them, then tracks the features using data templates built from the event stream. When a new frame arrives, the image is used to correct the feature positions and to update the feature and data templates; when features are lost, replacements are promptly drawn from the new image. Tests on datasets show that the proposed algorithm achieves low tracking error and long tracking duration.

(2) A semi-dense map filling algorithm fusing event data with standard images is proposed. After an edge map of the scene is obtained from the event camera, the algorithm segments the standard camera's image to find regions to be filled, then fills those regions with depth derived from existing map points, yielding a denser point cloud map; an evaluation criterion for the filling performance is also proposed. Tests on datasets show that the algorithm effectively increases the number of 3D map points.

(3) Experiments are designed to validate the algorithms. The characteristics of the data collected by the chosen event camera are analysed, and corresponding data pre-processing methods are designed. A camera localization algorithm based on multiple 2D-barcode tags is also proposed to provide ground-truth camera poses. The experiments verify the effectiveness of the proposed feature tracking and mapping algorithms.

By fully exploiting the different characteristics of the two sensors, the proposed fusion methods for feature tracking and mapping achieve a measurable improvement over existing methods. This work is an active exploration of methods for fusing the two sensors' data.
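The event-generation behaviour described above (a pixel fires an event when its brightness change crosses a contrast threshold) can be sketched as follows. The `Event` record, the `generate_events` helper, and the threshold value are illustrative assumptions, not code from the thesis; a real sensor performs this comparison asynchronously in hardware, per pixel, on log-brightness.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int      # pixel column
    y: int      # pixel row
    t: float    # timestamp in seconds
    p: int      # polarity: +1 if brightness rose, -1 if it fell

def generate_events(prev_log, curr_log, t, threshold=0.2):
    """Idealized model: emit one event per pixel whose log-brightness
    change since the last reference frame exceeds the contrast
    threshold. (Hypothetical helper for illustration only.)"""
    events = []
    for y, (row_prev, row_curr) in enumerate(zip(prev_log, curr_log)):
        for x, (lp, lc) in enumerate(zip(row_prev, row_curr)):
            if abs(lc - lp) >= threshold:
                events.append(Event(x, y, t, 1 if lc > lp else -1))
    return events
```

Because only changed pixels emit anything, a static scene produces no data at all, which is the source of the sparsity and low latency noted above.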

The event camera is a new type of bio-inspired sensor. Unlike standard cameras, which capture images at a fixed frequency, event cameras respond only to the changing "event": they generate data only when the brightness of a pixel changes by more than a certain threshold. As a result, the output of an event camera is high-frequency and asynchronous in time but sparse in image space. Compared with standard cameras, event cameras offer low latency, high dynamic range, and no motion blur. These characteristics are important for vision-based autonomous mobile robots. Because event and standard cameras are complementary, this thesis studies how to fuse the two sensors for feature tracking and 3D mapping, which are core building blocks for mobile robots.

The main contributions of this thesis are:

(1) A feature tracking algorithm based on event and standard camera fusion is proposed. The algorithm first selects key points from the images and extracts features; the features are then tracked through the event stream. When a new frame arrives, feature positions, feature templates, and data templates are updated from the frame, and new features are added if any are lost. Experiments on datasets show that the algorithm achieves low tracking error as well as long tracking time.

(2) A 3D mapping algorithm based on event and standard camera fusion is introduced. After obtaining a semi-dense 3D map from the event camera, the algorithm segments the frame, selects candidate regions to be filled, and fills in depth for those regions from the existing semi-dense map to obtain a denser point cloud. A score for evaluating the filling quality is also proposed. Experiments on datasets show that this method effectively increases the number of 3D map points.

(3) Experiments with a real-world event camera are designed to verify the tracking and mapping algorithms. The thesis first analyses the characteristics of the chosen event camera and designs a data pre-processing method. It then introduces a camera localization algorithm based on multiple Data Matrix codes to provide ground truth for the camera's trajectory. Experiments show that the proposed tracking and mapping algorithms work in real-world situations.

The proposed tracking and mapping algorithms make full use of both event and standard cameras, and achieve better results compared with existing works. The work in this thesis is an active attempt to fuse the two sensors.
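One minimal reading of the map-filling step in contribution (2) is sketched below: a segmented region borrows a depth value from the semi-dense map points it already contains. The median rule, the `None`-grid representation, and the function name are all assumptions for illustration; the thesis's actual filling rule and evaluation score are more involved.

```python
import statistics

def fill_region_depth(depth, region):
    """Fill unknown depths (None) inside one segmented region with the
    median depth of the region's known semi-dense map points.
    `depth` is a row-major grid; `region` lists (row, col) pixels.
    Returns the number of newly filled pixels, i.e. new 3D points.
    (Illustrative stand-in for the thesis's filling rule.)"""
    known = [depth[r][c] for r, c in region if depth[r][c] is not None]
    if not known:            # no map points in this region: leave it empty
        return 0
    fill = statistics.median(known)
    filled = 0
    for r, c in region:
        if depth[r][c] is None:
            depth[r][c] = fill
            filled += 1
    return filled
```

Applied once per segmented region, the returned counts give a direct measure of how much denser the point cloud became, which is the quantity the abstract's experiments report.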