In industrial manufacturing, remaining useful life prediction is a technique that forecasts how long a piece of equipment will continue to operate from the current moment until the end of its life. It helps enterprises assess equipment health, provides a reference for maintenance and repair, and supports engineers in making informed decisions, thereby preventing damage to or failure of industrial equipment, further improving production efficiency and economic returns, and reducing safety risks and cost expenditures. Considering three practical difficulties in remaining useful life prediction for industrial equipment, namely limited prediction accuracy, scarce fault samples, and weak computing power on terminal devices, this paper first proposes an attention-based remaining useful life prediction algorithm to achieve high-precision prediction; on this basis, it designs a meta-learning-based prediction algorithm suited to scenarios with few fault samples; finally, it introduces a model-lightweighting-based prediction algorithm suited to scenarios with constrained computing resources. Specifically, the core research points of this paper include:

1. We propose an equipment remaining useful life prediction algorithm based on the attention mechanism. Building on attention, the method designs and constructs a prediction model tailored to the remaining useful life task. The backbone of the model adopts neither convolutional nor recurrent neural networks, so in theory the model has an unbounded receptive field and supports parallel computation, improving the speed of training and inference. The algorithm further introduces a gated convolution module to help extract fault-degradation information from the time series, further improving the prediction accuracy of the model.
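As a concrete illustration of the two components named in point 1, the following NumPy sketch implements scaled dot-product self-attention and a GLU-style gated one-dimensional convolution. The identity query/key/value projections, single-channel input, and "valid" convolution mode are simplifying assumptions for illustration, not the exact architecture used in the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention with identity projections (an
    illustrative simplification). Every time step attends to all others,
    so the receptive field spans the whole sequence and all steps are
    computed in parallel.  X: (T, d) sequence of feature vectors."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)   # (T, T) pairwise similarities
    return softmax(scores) @ X      # (T, d) attended features

def gated_conv1d(x, w_f, w_g):
    """GLU-style gated 1-D convolution: a feature path modulated by a
    sigmoid gate path, used to pull local degradation cues out of the
    series.  x: (T,) input; w_f, w_g: (k,) filter taps."""
    feat = np.convolve(x, w_f, mode="valid")
    gate = np.convolve(x, w_g, mode="valid")
    return feat * (1.0 / (1.0 + np.exp(-gate)))
```

Because attention contains no recurrence, the full (T, T) score matrix is computed in one batched matrix product, which is the source of the parallelism claimed above.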
2. We propose a few-shot equipment remaining useful life prediction algorithm based on meta-learning. The algorithm first uses time-series similarity matching to construct a pseudo-task set in the source domain, satisfying the training requirements of meta-learning; the meta-model obtained in the training phase is then fine-tuned on a small number of fault samples in the target domain to achieve high-precision prediction. We also compare and analyze how different time-series similarity matching algorithms and backbone network structures affect the accuracy of remaining useful life prediction, verifying the feasibility and extensibility of the proposed algorithm. Moreover, exploiting the similarity between fault samples, we extend the few-shot algorithm to the general setting and improve prediction accuracy there as well, broadening the algorithm's scope of application.

3. We propose an equipment remaining useful life prediction algorithm based on model lightweighting for resource-constrained scenarios. The algorithm first optimizes the model structure, designing a lightweight model that combines an attention mechanism with one-dimensional convolution: the attention mechanism extracts global features, while the convolution extracts local features. We further design a quantization scheme that applies INT8 quantization to the parameters of each layer of the network, reducing the model's parameter size and computational cost as much as possible while preserving its accuracy.
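One common choice for the time-series similarity matching mentioned in point 2 is dynamic time warping (DTW). The sketch below is a hypothetical illustration rather than the paper's actual matching procedure: it ranks source-domain degradation series by DTW distance to a query series and takes the nearest ones as one pseudo-task's support set.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between
    two 1-D series, using absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def build_pseudo_task(query, source_pool, k=2):
    """Rank source-domain degradation series by DTW similarity to the
    query and return the k nearest as one pseudo-task's support set.
    (Hypothetical helper for illustration.)"""
    ranked = sorted(source_pool, key=lambda s: dtw_distance(query, s))
    return ranked[:k]
```

Repeating this selection for many query series yields a collection of pseudo-tasks on which a meta-learner can be episodically trained.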
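The INT8 quantization in point 3 can be illustrated with symmetric per-tensor quantization, one standard way to map FP32 weights to 8-bit integers. The scale computation below is an assumed minimal variant for illustration, not necessarily the calibration scheme used in the paper.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: w ~= scale * q with
    q in [-127, 127]. Returns the int8 tensor and its scale factor."""
    scale = float(np.abs(w).max()) / 127.0
    if scale == 0.0:                      # all-zero tensor edge case
        return np.zeros_like(w, dtype=np.int8), 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale
```

Storing `q` instead of `w` cuts the per-parameter footprint from 32 bits to 8, and the per-element reconstruction error is bounded by about half the scale factor.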