
Research on Mobile Robot Perception Methods based on Millimeter Wave Radar

Author: 程宇威
  • Student ID
    2018******
  • Degree
    Ph.D.
  • Email
    842******com
  • Defense date
    2023.05.23
  • Supervisor
    刘一民
  • Discipline
    Information and Communication Engineering
  • Pages
    136
  • Confidentiality level
    Public
  • Department
    023 Department of Electronic Engineering
  • Keywords
    millimeter wave radar, mobile robot perception, radar detector, object detection, pose estimation

Abstract

Mobile robots, ranging from unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) to unmanned surface vehicles (USVs), play a crucial role in economic, industrial, and defense applications. Accurate and reliable environment perception is essential for achieving the intelligence and autonomy of mobile robots. Millimeter wave (mmWave) radar offers all-day, all-weather sensing and has been widely used in mobile robots. However, current mmWave radar data provide limited, low-dimensional information and are only applicable to low-level perception tasks such as velocity and range measurement and obstacle detection. The emerging 77 GHz mmWave radar point cloud achieves higher resolution, but it still suffers from clutter, glint, and sparsity in the complex scenes encountered in mobile robot applications. It is therefore significant to enhance the quality of mmWave radar point clouds and fully exploit their information to support higher-level perception tasks such as object detection and pose estimation.

This study focuses on mmWave radar-based perception for mobile robots, centered on three core issues: high-quality mmWave radar point cloud generation, radar-only (single-modal) object detection and pose estimation, and radar-vision fusion-based (multi-modal) object detection and pose estimation. The main research contributions are summarized in the following three aspects.

Firstly, for high-quality mmWave radar point cloud generation, this work theoretically demonstrates that the current bottleneck lies in the radar point cloud detector, providing a new research perspective. It proposes a data-driven mmWave radar point cloud detector for complex scenes, which moves beyond conventional radar point cloud detectors and addresses the problem of heavy clutter and sparse valid target points in complex scenarios.

Secondly, for radar-only object detection and pose estimation, through accurate feature extraction from mmWave radar point clouds, this work proposes an object detection method based on the spatiotemporal information of radar point clouds, significantly improving the detection accuracy of existing methods. It also proposes a relocalization method that combines mmWave radar point cloud retrieval with registration, achieving state-of-the-art performance.

Lastly, for radar-vision fusion-based object detection and pose estimation, through adaptive deep fusion of radar point clouds and visual images, this work achieves accurate and robust cross-modal fusion of radar and vision information. It proposes a feature-level deep fusion method for mmWave radar point clouds and images, achieving state-of-the-art performance in small object detection on water surfaces. It also proposes a tightly coupled mmWave radar-vision odometry framework that demonstrates good robustness in complex scenes.

In addition, to address the lack of multi-modal perception datasets that include mmWave radar in complex scenes, this work builds an autonomous water-surface experimental platform, establishes and releases the world's first multi-modal perception dataset for water-surface autonomous driving, and conducts practical application experiments on this platform.
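As background for the first contribution: in mmWave signal processing, the conventional radar point cloud detectors that the abstract contrasts with its data-driven approach are typically constant false alarm rate (CFAR) detectors applied to range-Doppler power maps. The abstract does not spell out this baseline, so the following is only a minimal illustrative sketch of a cell-averaging CFAR detector in Python; the function name and parameter values are hypothetical, not taken from the thesis.

```python
import numpy as np

def ca_cfar_1d(power, num_train=8, num_guard=2, p_fa=1e-3):
    """Cell-averaging CFAR over a 1-D power profile (e.g. one range line
    of a range-Doppler map). Returns a boolean detection mask; cells too
    close to the edges for a full training window stay undetected."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    num_cells = 2 * num_train
    # Threshold factor for CA-CFAR under a square-law (exponential) noise model.
    alpha = num_cells * (p_fa ** (-1.0 / num_cells) - 1.0)
    half = num_train + num_guard
    for i in range(half, n - half):
        # Average the training cells on both sides, skipping the guard
        # cells around the cell under test (CUT).
        lead = power[i - half : i - num_guard]
        lag = power[i + num_guard + 1 : i + half + 1]
        noise_level = (lead.sum() + lag.sum()) / num_cells
        detections[i] = power[i] > alpha * noise_level
    return detections

# Toy usage: exponential noise with one strong target at index 100.
rng = np.random.default_rng(0)
profile = rng.exponential(1.0, size=256)
profile[100] += 50.0
print(np.flatnonzero(ca_cfar_1d(profile)))  # should include index 100
```

A data-driven detector of the kind the thesis proposes would replace the fixed averaging-and-thresholding rule above with a learned mapping from the radar map to detections, which is what lets it cope with heavy clutter and sparse valid targets in complex scenes.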