Research on Key Technology of Vision-based Mobile Grabbing Robot (基于视觉的移动抓取机器人关键技术研究)

Author: 王培荣
  • Student ID
    2017******
  • Degree
    Master
  • Email
    100******com
  • Defense date
    2020.07.07
  • Advisor
    王晓浩
  • Discipline
    Instrument and Meter Engineering
  • Pages
    82
  • Confidentiality level
    Public
  • Training unit
    013 Department of Precision Instrument
  • Keywords
    Mobile grasping robot, factor-graph VSLAM, visual grasping with a robotic arm, ROS

Abstract

With the continuous development of computer vision, robotics, and sensor technology, indoor autonomous robots are becoming increasingly intelligent. They are widely used in manufacturing, service, and other industries, effectively improving production efficiency. However, most current schemes for autonomous robot movement rely on magnetic strips or line following, which makes changing the site or route cumbersome. Robotic arms are typically fixed at a specific position and operate by teaching: when the grasped object or the arm's base is moved, the arm must be re-taught. A vision-based mobile grasping robot can localize and navigate autonomously indoors and, after reaching the target point, operate the arm through visual measurement and feedback, avoiding these drawbacks. The main technical difficulties are the design of the mobile grasping robot platform, high-precision and robust visual localization with navigation and obstacle avoidance, and coordinated hand-eye-foot operation, in which the robot recognizes the object, estimates its pose, and grasps it after arriving at the target point.

This thesis designs a vision-based mobile grasping robot system with a wheeled mobile base at the bottom and a six-axis robotic arm on top. The mechanical structure, electrical connections, embedded microcontroller firmware, and the software algorithms on the industrial PC were all developed. At the lower level, a microcontroller handles base motion control and data exchange; at the upper level, the industrial PC performs visual localization and navigation as well as vision-guided grasping with the robotic arm. The main algorithms run on ROS, with each function implemented as a ROS node and communicating through ROS messages. Camera calibration and hand-eye calibration are studied, and the kinematics of the mobile platform and the six-axis arm are analyzed and combined with vision to grasp objects.

For visual localization, an Apriltag-based visual SLAM scheme is adopted: a camera fixed on the robot observes sparse tags on the ceiling, and a factor graph optimizes the back end, enabling fast mapping and localization; the DWA algorithm performs trajectory control. A semantic object detection node is added: a MobileNet-based detector recognizes objects in each keyframe, the detections are combined with the depth image to localize the objects in 3D, and the object coordinates are sent to the navigation module to realize semantic navigation.

The joint operation of indoor autonomous movement and vision-based grasping with the robotic arm was tested on the built platform. The robot's indoor trajectory localization error is within 10 cm, and the navigation error is within 20 cm when moving at 0.2 m/s. After reaching the destination, the robot can grasp wooden blocks, cups, and other objects.
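The functions of the system run and communicate as ROS nodes. As a minimal sketch, assuming rospy and a hypothetical topic name /target_object_pose, a detection node could publish an object's 3D position for the navigation module like this:

```python
#!/usr/bin/env python
# Minimal sketch of one ROS node in the system: it publishes a detected
# object's position for the navigation module. The topic name
# /target_object_pose and the 'map' frame are illustrative assumptions.
import rospy
from geometry_msgs.msg import PoseStamped

def publish_target(x, y, z):
    rospy.init_node('semantic_target_publisher')
    pub = rospy.Publisher('/target_object_pose', PoseStamped, queue_size=1)
    rate = rospy.Rate(10)              # publish at 10 Hz
    msg = PoseStamped()
    msg.header.frame_id = 'map'        # target expressed in the map frame
    msg.pose.position.x = x
    msg.pose.position.y = y
    msg.pose.position.z = z
    msg.pose.orientation.w = 1.0       # identity orientation
    while not rospy.is_shutdown():
        msg.header.stamp = rospy.Time.now()
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    try:
        publish_target(1.0, 2.0, 0.0)
    except rospy.ROSInterruptException:
        pass
```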
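Camera and hand-eye calibration are studied in the thesis, but the abstract does not state which solver is used. A minimal sketch of eye-in-hand calibration with OpenCV's calibrateHandEye, assuming the gripper-to-base and board-to-camera pose lists have already been collected, might look like this:

```python
# Sketch of eye-in-hand calibration with OpenCV (>= 4.1). The pose lists are
# assumed to have been collected beforehand: gripper poses from the arm's
# forward kinematics and board poses from solvePnP on a calibration pattern.
import cv2
import numpy as np

def hand_eye_calibration(R_gripper2base, t_gripper2base,
                         R_target2cam, t_target2cam):
    """Return the 4x4 transform of the camera in the gripper frame."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)    # Tsai-Lenz solver
    T = np.eye(4)
    T[:3, :3] = R_cam2gripper
    T[:3, 3] = t_cam2gripper.ravel()
    return T
```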
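The kinematics of the six-axis arm are analyzed in the thesis. Below is a generic sketch of forward kinematics with standard Denavit-Hartenberg parameters; the DH table values are placeholders, not the actual arm's parameters.

```python
# Generic forward kinematics of a six-axis arm with standard DH parameters.
# The DH table below is a placeholder, not the real arm's parameters.
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform of one link (standard DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.,       sa,       ca,      d],
                     [0.,       0.,       0.,     1.]])

def forward_kinematics(dh_table, joint_angles):
    """Chain the link transforms to get the end-effector pose in the base frame."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_table, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

# Example: six links with placeholder (a, alpha, d) values, all joints at zero.
dh_table = [(0.0, np.pi / 2, 0.1), (0.3, 0.0, 0.0), (0.05, np.pi / 2, 0.0),
            (0.0, -np.pi / 2, 0.25), (0.0, np.pi / 2, 0.0), (0.0, 0.0, 0.08)]
print(forward_kinematics(dh_table, [0.0] * 6))
```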
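The SLAM back end is optimized with a factor graph. The abstract does not name a library; the sketch below uses GTSAM purely for illustration, modelling robot poses as Pose2 variables, ceiling tags as Point2 landmarks, and tag detections as bearing-range factors.

```python
# Sketch of a factor-graph back end for tag-based SLAM. The thesis only says
# "factor graph"; GTSAM is used here for illustration only.
import numpy as np
import gtsam

X = lambda i: gtsam.symbol('x', i)     # robot pose keys
L = lambda j: gtsam.symbol('l', j)     # tag landmark keys

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.02]))
odom_noise  = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.10, 0.10, 0.05]))
obs_noise   = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.10]))  # bearing, range

# Anchor the first pose, add one odometry factor and two tag observations.
graph.add(gtsam.PriorFactorPose2(X(0), gtsam.Pose2(0, 0, 0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(X(0), X(1), gtsam.Pose2(0.5, 0.0, 0.0), odom_noise))
graph.add(gtsam.BearingRangeFactor2D(X(0), L(0), gtsam.Rot2.fromDegrees(30), 2.0, obs_noise))
graph.add(gtsam.BearingRangeFactor2D(X(1), L(0), gtsam.Rot2.fromDegrees(60), 1.8, obs_noise))

# Initial guesses, e.g. from odometry and the first tag detections.
initial = gtsam.Values()
initial.insert(X(0), gtsam.Pose2(0.0, 0.0, 0.0))
initial.insert(X(1), gtsam.Pose2(0.45, 0.05, 0.0))
initial.insert(L(0), gtsam.Point2(1.7, 1.0))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(X(1)))            # optimized second robot pose
```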
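Trajectory control uses the DWA algorithm. The following is a deliberately simplified sketch of the core idea, sampling (v, w) commands, rolling each one out, and scoring the rollouts; a full implementation such as ROS's dwa_local_planner also restricts the samples to the dynamic window given by acceleration limits, and the weights here are illustrative.

```python
# Simplified sketch of DWA-style velocity selection: sample commands,
# simulate short rollouts, reject those too close to obstacles, and pick
# the one that makes the most progress toward the goal.
import numpy as np

def rollout(x, y, yaw, v, w, dt=0.1, horizon=2.0):
    traj = []
    for _ in range(int(horizon / dt)):
        yaw += w * dt
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        traj.append((x, y))
    return traj

def dwa_choose(x, y, yaw, goal, obstacles, v_max=0.2):
    best_cmd, best_cost = (0.0, 0.0), float('inf')
    for v in np.linspace(0.0, v_max, 5):
        for w in np.linspace(-1.0, 1.0, 11):
            traj = rollout(x, y, yaw, v, w)
            clearance = min((np.hypot(ox - px, oy - py)
                             for px, py in traj for ox, oy in obstacles),
                            default=np.inf)
            if clearance < 0.2:                    # too close to an obstacle
                continue
            goal_cost = np.hypot(goal[0] - traj[-1][0], goal[1] - traj[-1][1])
            cost = goal_cost + 0.5 * (v_max - v)   # prefer forward progress
            if cost < best_cost:
                best_cmd, best_cost = (v, w), cost
    return best_cmd                                # (linear, angular) velocity

print(dwa_choose(0.0, 0.0, 0.0, goal=(2.0, 1.0), obstacles=[(1.0, 0.0)]))
```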
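The semantic detection node combines 2D detections with the depth image to localize objects in 3D. A minimal sketch of the underlying pinhole back-projection, assuming intrinsics obtained from camera calibration and a depth image in metres, is:

```python
# Sketch of the 3D localization step: take a 2D detection, read the depth at
# its centre, and back-project with the pinhole model. K is the 3x3 intrinsic
# matrix from camera calibration; the depth image is assumed to be in metres.
import numpy as np

def pixel_to_camera_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z into the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def locate_detection(bbox, depth_image, K):
    """bbox = (x_min, y_min, x_max, y_max) from the detector."""
    u = int((bbox[0] + bbox[2]) / 2)          # bounding-box centre, column
    v = int((bbox[1] + bbox[3]) / 2)          # bounding-box centre, row
    z = float(depth_image[v, u])              # depth image indexed (row, col)
    return pixel_to_camera_point(u, v, z, K[0, 0], K[1, 1], K[0, 2], K[1, 2])
```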