Current stereotactic surgical robots have weak perception capabilities and insufficient autonomy. They lack the ability to perceive and understand unstructured surgical targets, uncertain working environments, and operating states, and they also lack efficient means of perceiving and identifying their own body parameters and performance. Substantial manual assistance is still required before and during surgery; the workflow is cumbersome, easily introduces human error, and creates surgical risk. Aiming to enhance the perception capabilities of stereotactic surgical robots, this dissertation studies perception enhancement for stereotactic surgical robots in unstructured environments from four aspects: the surgical target, the working environment, the surgical operation process, and the robot's own performance parameters. The specific research content is as follows:

1) For spatial localization of the surgical target, a new hand-eye coordination scheme for stereotactic surgical robots is proposed, a registration and tracking method based on image-visible visual markers is invented, and a model-free robot positioning and orientation control algorithm is designed. This work shortens the intraoperative coordinate transfer chain and simplifies the process of establishing coordinate relationships (see the illustrative sketch below); endows the robot with active vision and a predictable "hand-eye" relationship; and achieves accurate positioning and orientation control of the stereotactic surgical robot that is robust to kinematic errors.

2) For surgical scene understanding, a scene information processing method that fuses neural networks with traditional knowledge-based algorithms is designed, giving the robot the ability to recognize, segment, and estimate the pose of the patient's head, the head frame, and the torso in neurosurgical scenes, with better segmentation performance achieved in both 2D and 3D scenes. A fast scene data annotation method is proposed to relieve the tedium of labeling target pose data, and a neurosurgical robot scene dataset of nearly 5,000 images is constructed.

3) For state awareness during robotic surgical operation, a virtual-sensing method is proposed that lets the robot observe anatomical structure images, and a drilling force prediction model based on preoperative CT images is constructed. The model comprehensively considers the structure of the drilled object and the properties of the target material, realizing force prediction for the robotic drilling procedure. A method for predicting the tissue contact state of robotic drilling from preoperative multimodal images is also proposed, which predicts the drilling contact state and drilling risk and provides prior information for intraoperative recognition of minimally invasive drilling states.

4) For awareness of the robot's own state parameters, efficient identification methods for the kinematic parameters and vision system parameters of stereotactic surgical robots are explored in both automated and human-robot collaborative scenarios, significantly improving the intrinsic performance of the stereotactic surgical robot system.

Driven by practical clinical needs, this dissertation systematically studies perception enhancement for stereotactic surgical robots. The research gives the robot the ability to automatically localize target objects, to understand uncertain surgical environments, to predict information about the active operation process, and to perceive and identify its own state parameters, providing perception support for a more accurate, easier-to-use, and smarter stereotactic robot system.
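As a concrete illustration of the shortened coordinate chain in 1), the following is a minimal sketch under assumed conditions, not the dissertation's implementation: if the robot-mounted camera directly observes an image-visible marker fixed to the patient, and the lesion target has already been registered to that marker in the preoperative image, then the target pose in the camera frame follows from composing just two transforms, without passing through the robot base frame or the error-prone forward kinematics. All frame names and numeric values below are hypothetical.

```python
# Minimal illustrative sketch (not the dissertation's implementation).
# Assumption: the camera observes an image-visible marker rigidly fixed
# to the patient, and the lesion target has been registered to that
# marker in the preoperative image.
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_cam_marker: marker pose measured by the camera (hypothetical values).
T_cam_marker = make_T(np.eye(3), [0.10, 0.02, 0.35])
# T_marker_target: lesion target expressed in the marker frame,
# obtained from preoperative image registration (hypothetical values).
T_marker_target = make_T(np.eye(3), [0.01, -0.03, 0.06])

# Shortened chain: camera -> marker -> target, with no dependence on the
# robot base frame or on the forward kinematics.
T_cam_target = T_cam_marker @ T_marker_target
print(T_cam_target[:3, 3])  # target position in the camera frame
```

Under this reading, kinematic errors affect only how the robot reaches the commanded pose, which a closed-loop, model-free controller can correct, rather than where the target is located; this is one plausible interpretation of the robustness claimed above.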
Current stereotactic surgical robots are still weak in situational awareness and lack autonomy, meeting only basic surgical needs. They lack the ability to perceive and understand unstructured clinical objects, uncertain working environments, and operating states, and they also lack the ability to efficiently perceive their own body performance. Substantial manual assistance is still needed before and during robot operation, which is cumbersome, easily introduces human error, and increases surgical risk. This dissertation aims to enhance the situational awareness of stereotactic surgical robots. We study situational awareness enhancement for stereotactic surgical robots in unstructured environments from four aspects: robotic surgery target positioning, understanding of the surrounding working environment, awareness of the surgical operation process, and identification of the robot's own body parameters. The detailed research work is as follows:

1) For the robotic surgery target positioning problem, this work proposes a new hand-eye relationship scheme for stereotactic surgical robots, invents a robot registration method based on CT-visible tracking markers, and designs a model-free robot positioning and orientation control algorithm. We endow the robot with active vision and a solvable "hand-eye" relationship; enable the robot to quickly register and track lesions in the patient's body; shorten the coordinate transfer chain during robot operation and simplify the process of establishing coordinate relationships; and realize accurate positioning and orientation control of the stereotactic surgical robot that is robust to robot kinematic errors.

2) For the robotic surgical scene understanding problem, we give the robot the ability to recognize, segment, and estimate the pose of the patient's head, head frame, and body in the neurosurgery scene. A scene information processing method based on the fusion of a neural network and a traditional knowledge-based algorithm is proposed, which achieves better segmentation performance in both 2D and 3D scenes. A fast labeling method for scene data is proposed, which greatly reduces the tedium of labeling target pose data. Moreover, we construct a neurosurgery robot scene dataset with nearly 5,000 samples.

3) For the robotic surgical operation state awareness problem, a virtual-sensor-based method is proposed that lets the robot observe preoperative anatomical images. A CT-image-based drilling force prediction model is constructed that comprehensively considers the structure of the drilled object, the target material, the drill bit geometry, and the drilling speed, realizing drilling force prediction before the real robotic drilling operation (see the illustrative sketch below). Building on this, the work further proposes a method for predicting the tissue contact state of robotic drilling from preoperative multimodal images, which predicts the contact state and drilling risk during the real drilling process and provides prior information for recognizing minimally invasive drilling states during robotic surgery.

4) For the robot self-body parameter identification problem, this work explores efficient parameter identification and performance evaluation methods for the stereotactic surgical robot's own kinematic parameters and vision parameters in both automated and human-robot collaborative settings (a minimal identification sketch appears at the end of this abstract).
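To illustrate the kind of computation behind preoperative CT-based drilling force prediction in 3), here is a minimal sketch; the HU-to-density conversion and the empirical thrust model with its coefficients are assumptions for illustration, not the model developed in the dissertation.

```python
# Minimal sketch of CT-based drilling force prediction (illustrative only;
# the coefficients and the HU-to-density mapping are assumptions, not the
# dissertation's model).
import numpy as np

def predict_thrust_profile(hu_along_path, feed_mm_rev=0.05, diameter_mm=2.5,
                           k=0.8, a=0.6, b=1.0):
    """Estimate a thrust-force profile along a planned drill trajectory.

    hu_along_path : CT Hounsfield units sampled along the trajectory.
    The HU values are converted to a rough relative bone density and then
    plugged into a generic empirical thrust model
        F = k * density * feed^a * diameter^b.
    """
    hu = np.asarray(hu_along_path, dtype=float)
    density = np.clip((hu + 1000.0) / 2000.0, 0.0, None)  # rough normalization
    return k * density * (feed_mm_rev ** a) * (diameter_mm ** b)

# Hypothetical HU samples: soft tissue -> cortical bone -> cancellous bone.
profile = predict_thrust_profile([40, 60, 1400, 1500, 700, 650])
print(np.round(profile, 2))
```

A profile of this kind, computed before the drill touches the patient, is the sort of prior information the abstract describes as supporting intraoperative state recognition.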
These identification and evaluation methods significantly improve the positioning and orientation accuracy of the stereotactic surgical robot system.

In summary, this dissertation systematically studies and explores the situational awareness enhancement problem for stereotactic surgical robots. Our work gives stereotactic surgical robots the ability to concisely and automatically locate surgical objects, to understand uncertain surgical environments, to predict information about the active drilling operation process, and to identify and evaluate their own state parameters, which provides comprehensive information support for a more accurate, easier-to-use, and smarter stereotactic robot system.
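For the self-parameter identification theme in 4), the sketch below shows a generic least-squares identification of kinematic parameter deviations from measured end-effector position errors; the linearized error model, the data, and the dimensions are all hypothetical and do not reproduce the dissertation's procedure.

```python
# Minimal sketch of least-squares kinematic parameter identification
# (illustrative; a linearized model with synthetic data, not the
# identification procedure used in the dissertation).
import numpy as np

rng = np.random.default_rng(0)

# Suppose the end-effector position error is approximately linear in the
# kinematic parameter deviations dp:  e ≈ J_p(q) @ dp.
n_params, n_poses = 6, 40
true_dp = np.array([0.002, -0.001, 0.0015, 0.0005, -0.002, 0.001])

J_stack = rng.normal(size=(3 * n_poses, n_params))                # stacked identification Jacobians
errors = J_stack @ true_dp + 1e-4 * rng.normal(size=3 * n_poses)  # measured position errors

# Least-squares estimate of the parameter deviations, applied afterwards
# as a correction to the nominal kinematic model.
dp_hat, *_ = np.linalg.lstsq(J_stack, errors, rcond=None)
print(np.round(dp_hat, 4))  # close to true_dp
```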