Structural surface cracks are important indicators for assessing structural damage, and crack detection is therefore a key component of structural inspection, appraisal, and health monitoring. This dissertation applies computer vision techniques to identify and extract crack pixels from digital images of structural surfaces, and then quantifies crack features, including width and length, from the resulting binary crack images. Furthermore, several machine learning algorithms are used to establish a direct link between the measured crack features and the damage state of the actual structure or member. The main contributions are as follows.

(1) Large crack databases are established for training neural networks, one for crack identification and one for crack extraction, and two identification-and-extraction models are proposed: a two-step method and a one-step method. Two field tests were carried out to verify the effectiveness of both methods.

(2) Because long shooting distances, camera shake, and poor lighting often make it difficult to identify cracks directly in the original images, optimized super-resolution reconstruction models with magnifications of 2x, 4x, and 8x are developed on the basis of the classic SRGAN architecture. The proposed models outperform both bicubic interpolation and the original SRGAN (see the inference sketch following the abstract).

(3) An image-stitching method based on local feature point detection and matching is proposed for specimen images, with the RANSAC algorithm used to suppress mismatches, achieving high-quality stitching results (a minimal pipeline is sketched below). In addition, the crack features are systematically catalogued at two levels, the single-crack level and the all-cracks level, and for each feature a definition and a computation method on the digital image are given (see the feature-extraction sketch below). Finally, the whole pipeline of crack identification, extraction, stitching, and feature calculation is verified through a loading test on a real reinforced concrete beam.

(4) A complete crack labelling and crack matching algorithm based on shape context is proposed, which makes effective use of crack data from repeated inspections. Using the shape context of the skeleton line, the same crack can be matched at the pixel level across different stages, so that the evolution of every crack pixel can be monitored precisely (see the matching sketch below). The effectiveness of the method is verified experimentally.

(5) Sixteen reinforced concrete beam specimens were designed, with the depth-to-span ratio, stirrup ratio, reinforcement ratio, concrete strength, cover thickness, and longitudinal bar diameter as design variables. The specimens were loaded in stages under three-point loading, and crack development during the tests was recorded by multiple cameras to build a database. On this basis, machine learning models based on random forest, AdaBoost, and a BP neural network are developed to link crack features to structural damage states, so that the damage state of a structure can be assessed from its crack features (a classifier sketch is given below).

This dissertation was supported by the National Natural Science Foundation of China (52121005, 52192662).
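To illustrate how a trained super-resolution generator from contribution (2) might be applied at inference time, here is a minimal, hypothetical PyTorch sketch. The file name sr_generator_x4.pt, and the assumption that the whole generator module was serialized, are illustrative placeholders, not details from the dissertation.

```python
import torch

# Hypothetical: the trained 4x SRGAN-style generator, saved as a full module.
generator = torch.load("sr_generator_x4.pt", map_location="cpu", weights_only=False)
generator.eval()

with torch.no_grad():
    # Stand-in for a normalized low-resolution crack patch.
    lr_patch = torch.rand(1, 3, 128, 128)
    sr_patch = generator(lr_patch)  # expected output shape: (1, 3, 512, 512)
```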
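For contribution (3), the sketch below shows a minimal stitching pipeline in the spirit described: local feature detection and matching, a ratio test, and RANSAC-based homography estimation to reject mismatches. SIFT is used as a stand-in detector and the doubled canvas width is a simplification; the dissertation does not specify these choices.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b, ratio=0.75):
    """Stitch img_b onto img_a via local feature matching + RANSAC."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Lowe's ratio test discards ambiguous matches up front.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des_b, des_a, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]

    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects the remaining mismatches while fitting the homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
    canvas[0:h, 0:w] = img_a
    return canvas
```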
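The crack features of contribution (3) include width and length computed on the binary crack image. A common way to obtain them, sketched below under the assumption of a calibrated millimeters-per-pixel scale, is to skeletonize the mask for length and use the Euclidean distance transform for local width. This is a plausible reconstruction, not necessarily the exact definitions adopted in the dissertation.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def crack_length_and_width(binary_mask, mm_per_pixel):
    """Length from the skeleton, width from the distance transform."""
    crack = binary_mask > 0
    skeleton = skeletonize(crack)
    # Crack length ~ number of skeleton pixels times the pixel size.
    length_mm = skeleton.sum() * mm_per_pixel
    # Distance from a skeleton pixel to the background ~ half the local width.
    dist = ndimage.distance_transform_edt(crack)
    widths_mm = 2.0 * dist[skeleton] * mm_per_pixel
    return length_mm, widths_mm.max(), widths_mm.mean()
```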
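Contribution (4) matches skeleton-line pixels between inspections using shape context. The sketch below implements the standard shape context descriptor (log-polar histograms of relative point positions) with a chi-squared matching cost and Hungarian assignment; the binning parameters are illustrative, and the dissertation's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def shape_context(pts, n_r=5, n_theta=12):
    """One log-polar histogram per skeleton point (its 'shape context')."""
    diff = pts[None, :, :] - pts[:, None, :]        # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)
    dist = dist / (dist.mean() + 1e-9)              # scale invariance
    theta = np.arctan2(diff[..., 1], diff[..., 0])  # angle in [-pi, pi]
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    r_bin = np.digitize(dist, r_edges) - 1
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hists = np.zeros((len(pts), n_r * n_theta))
    for i in range(len(pts)):
        ok = (np.arange(len(pts)) != i) & (r_bin[i] >= 0) & (r_bin[i] < n_r)
        np.add.at(hists[i], r_bin[i][ok] * n_theta + t_bin[i][ok], 1)
    return hists / (hists.sum(axis=1, keepdims=True) + 1e-9)

def match_skeletons(pts_a, pts_b):
    """Pixel-level correspondence: chi-squared cost + Hungarian assignment."""
    ha, hb = shape_context(pts_a), shape_context(pts_b)
    cost = 0.5 * ((ha[:, None] - hb[None]) ** 2
                  / (ha[:, None] + hb[None] + 1e-9)).sum(-1)
    return linear_sum_assignment(cost)  # (indices in a, matched indices in b)
```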
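Finally, contribution (5) trains classifiers that map crack features to damage states; random forest is one of the three models evaluated, alongside AdaBoost and a BP neural network. A minimal scikit-learn sketch follows; the file names, the example feature set, and the damage-state labels are all hypothetical placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per loading stage of a beam, columns
# e.g. [max crack width, total crack length, crack count, mean crack spacing].
X = np.load("crack_features.npy")   # placeholder file names
y = np.load("damage_states.npy")    # e.g. 0 = intact ... 3 = severe damage

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```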