Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/110261
Title: Deep learning for vision-based defect inspection
Authors: Yeung, Ching Chi
Degree: Ph.D.
Issue Date: 2024
Abstract: Vision-based defect inspection is an essential quality control task in various industries. With the development of deep learning, deep learning-based visual defect inspection methods have achieved remarkable performance. However, existing deep learning-based visual defect inspection models face three main challenges, depending on their specific application requirements: inspection efficiency, precise localization and classification, and generalization ability. Therefore, this thesis investigates deep learning-based models to address these challenges in three specific applications of vision-based defect inspection: steel surface defect detection, defect semantic segmentation, and pavement crack detection.
In this thesis, we study the efficient design of steel surface defect detection models. We propose a fused-attention network (FANet) to balance the trade-off between accuracy and speed. This model applies an attention mechanism to a single balanced feature map to improve accuracy while maintaining detection speed. Moreover, it introduces a feature fusion module and an attention module to handle defects with multiple scales and shape variations.
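The abstract does not include implementation details, so the following PyTorch-style sketch is only a generic illustration of the idea of fusing multi-level features into one balanced map and then refining it with attention; the module names, channel sizes, and squeeze-and-excitation-style attention are assumptions, not FANet's actual architecture.

```python
# Illustrative sketch only; not the authors' published code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedAttentionBlock(nn.Module):
    """Hypothetical block: fuse multi-level features into a single balanced
    map, then apply lightweight channel attention to refine it."""
    def __init__(self, channels: int = 256, reduction: int = 16):
        super().__init__()
        self.fuse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # channel attention in the squeeze-and-excitation style (assumed design)
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feats):
        # feats: list of FPN-style feature maps at different resolutions
        target = feats[0].shape[-2:]
        balanced = torch.stack(
            [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
             for f in feats]).mean(dim=0)      # single balanced feature map
        balanced = self.fuse(balanced)
        return balanced * self.att(balanced)   # attention re-weights the fused map

# toy usage
if __name__ == "__main__":
    feats = [torch.randn(1, 256, s, s) for s in (64, 32, 16)]
    print(FusedAttentionBlock()(feats).shape)  # torch.Size([1, 256, 64, 64])
```

Because the attention operates on one fused map rather than on every pyramid level, the extra cost stays small, which is consistent with the accuracy-speed trade-off the thesis describes.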
Furthermore, we investigate model designs that boost localization and classification performance for defect semantic segmentation. We propose an attentive boundary-aware transformer framework, namely ABFormer, to precisely segment different types of defects. This framework introduces a feature fusion scheme that splits and fuses boundary and context features with two different attention modules, allowing each module to focus on a distinct learning aspect. In addition, the two attention modules capture the spatial and channel interdependencies of the features, respectively, to address the intraclass difference and interclass indiscrimination problems.
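As a rough illustration of a split-and-fuse scheme with one spatial and one channel attention module, the sketch below pairs a boundary branch with spatial attention and a context branch with channel attention before fusing them; the branch structure and layer choices are assumptions and do not reproduce ABFormer.

```python
# Illustrative sketch only; ABFormer's actual layers are not given in this abstract.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weights each location using pooled channel statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class ChannelAttention(nn.Module):
    """Re-weights channels to help separate visually similar defect classes."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.mlp(x)

class BoundaryContextFusion(nn.Module):
    """Split features into boundary and context streams, attend to each with a
    different mechanism, then fuse the two streams back together."""
    def __init__(self, channels: int = 256):
        super().__init__()
        self.boundary = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                      SpatialAttention())
        self.context = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                     ChannelAttention(channels))
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([self.boundary(x), self.context(x)], dim=1))
```

The point of the split is that boundary cues benefit from location-wise (spatial) weighting, while class discrimination benefits from channel-wise weighting, mirroring the two learning aspects described above.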
Finally, we focus on improving the generalization ability of pavement crack detection models. We propose a contrastive decoupling network (CDNet) to effectively detect cracks in both seen and unseen domains. This framework separately extracts global and local features with contrastive learning to produce generalized and discriminative representations. In addition, it introduces a semantic enhancement module, a detail refinement module, and a feature aggregation scheme to tackle diverse cracks with complex backgrounds in the input images.
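The abstract does not specify CDNet's loss, so the snippet below is only a minimal InfoNCE-style sketch of how a contrastive objective could pull matched global and local embeddings of the same sample together while pushing apart embeddings of different samples; the function name, temperature, and pairing scheme are assumptions.

```python
# Illustrative sketch of a generic contrastive objective; not CDNet's actual loss.
import torch
import torch.nn.functional as F

def info_nce(global_emb: torch.Tensor, local_emb: torch.Tensor, tau: float = 0.1):
    """global_emb, local_emb: (N, D) embeddings of the same N samples.
    Each global embedding should be most similar to its own local embedding."""
    g = F.normalize(global_emb, dim=1)
    l = F.normalize(local_emb, dim=1)
    logits = g @ l.t() / tau                  # (N, N) scaled cosine similarities
    targets = torch.arange(g.size(0), device=g.device)
    return F.cross_entropy(logits, targets)   # positives lie on the diagonal

# toy usage
if __name__ == "__main__":
    g, l = torch.randn(8, 128), torch.randn(8, 128)
    print(info_nce(g, l).item())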
The vision-based defect inspection models proposed in this thesis are evaluated by comparing them with other state-of-the-art methods on different defect inspection datasets. Experimental results validate that our models can achieve promising performance. These models have great potential to advance deep learning-based methods for various applications of vision-based defect inspection.
Subjects: Deep learning (Machine learning)
Engineering inspection -- Automation
Computer vision -- Industrial applications
Steel -- Surfaces -- Defects
Image segmentation
Pavements -- Cracking
Hong Kong Polytechnic University -- Dissertations
Pages: xxvi, 124 pages : color illustrations
Appears in Collections: Thesis


