Supported by the National Key Research and Development Program "Online Enhancement of Defect Identification Skills and Multi-task Efficient Transfer Learning", this thesis studies defect detection and domain-adaptive detection in industrial scenarios, aiming to develop key quality-inspection technologies with autonomous learning capability. Defect detection is an essential part of industrial production: it helps guarantee stable product quality and promotes standardized manufacturing. Real industrial defects arise in diverse scenarios, with many product types, varied defect types, and low inspection efficiency. Manual inspection is unstable, inefficient, and labor-intensive, and its accuracy and efficiency depend heavily on the inspectors' experience and skill. Traditional machine vision inspection systems, in turn, cannot provide a general algorithm, so a new model must be trained for each scenario. This thesis therefore applies transfer learning to extract and identify shared knowledge, yielding a system that is more robust and generalizes better across a variety of defect detection scenarios.

Many existing algorithms use relatively simple, single-branch feature extraction and are rarely designed around the characteristics of defect images. This thesis proposes a gradient-guided parallel multi-scale feature extraction network, which uses a convolutional neural network and a Vision Transformer to extract local and global features respectively and sums them to form the final representation. In addition, the gradient map of the image is fed into the network together with the original image as supplementary information. Experiments show that the proposed network effectively learns and extracts defect image features and improves classification accuracy in defect scenarios, achieving state-of-the-art performance on several public surface defect classification datasets, which also demonstrates the strong robustness of the algorithm.

Building on the gradient-guided parallel multi-scale feature extraction network, the extractor is modified and incorporated into the U-Net segmentation framework; through feature extraction and interaction, a defect segmentation model is obtained. The segmentation network is evaluated on a magnetic tile defect segmentation dataset and achieves competitive results, demonstrating the effectiveness of the method.

In addition, existing studies mainly focus on specific scenarios and struggle to remain robust and generalizable when facing complex and changing defect detection scenarios, which motivates research on domain-adaptive detection so that detection capability can be transferred across scenarios. Moreover, since previous methods can only handle defect types that have already appeared in the source domain, open-set detection is studied so that defect categories that appear only in the target domain can also be recognized, giving the cross-domain defect detection model and system stronger capability and wider practical value. This thesis proposes a domain-adaptive open-set detection algorithm for defect scenarios and applies it successfully to existing defect datasets, realizing the transfer and generalization of defect detection capability.
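To make the first contribution concrete, the following is a minimal PyTorch-style sketch of a parallel local/global feature extractor guided by an image gradient map, with the two branch features summed before classification. All layer sizes, the Sobel-based gradient computation, and the names `ParallelDefectClassifier` and `sobel_gradient` are illustrative assumptions; the thesis's actual multi-scale design and feature-interaction details are not reproduced here.

```python
# Illustrative sketch only: a CNN branch (local features) and a Transformer
# branch (global features) over the image plus its gradient map, summed and
# classified. Layer sizes and names are hypothetical, not the thesis's network.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """Approximate gradient magnitude of a grayscale batch (B, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=img.device)
    gx = F.conv2d(img, kx.view(1, 1, 3, 3), padding=1)
    gy = F.conv2d(img, kx.t().view(1, 1, 3, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


class ParallelDefectClassifier(nn.Module):
    def __init__(self, num_classes: int = 6, dim: int = 128, patch: int = 8):
        super().__init__()
        # Local branch: small CNN over the 2-channel input (image + gradient map).
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.AdaptiveAvgPool2d(1),
        )
        # Global branch: patch embedding plus a lightweight Transformer encoder
        # (positional embeddings omitted for brevity in this sketch).
        self.patch_embed = nn.Conv2d(2, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        grad = sobel_gradient(img)                                # gradient map as guidance
        x = torch.cat([img, grad], dim=1)                         # (B, 2, H, W)
        local_feat = self.cnn(x).flatten(1)                       # (B, dim) local features
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, dim) patch tokens
        global_feat = self.transformer(tokens).mean(dim=1)        # (B, dim) global features
        return self.head(local_feat + global_feat)                # sum branches, then classify


if __name__ == "__main__":
    model = ParallelDefectClassifier(num_classes=6)
    logits = model(torch.randn(4, 1, 64, 64))   # e.g. grayscale defect crops
    print(logits.shape)                         # torch.Size([4, 6])
```

The element-by-element sum of the two branch outputs mirrors the abstract's statement that local and global features are added to form the final representation; a richer interaction (e.g. cross-attention between branches) would be a design variation beyond what the abstract specifies.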