Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115228
dc.contributor: School of Fashion and Textiles
dc.creator: Lyu, Shuai
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/13828
dc.language.iso: English
dc.title: A computer vision model for industrial product surface inspection using unsupervised learning and few-shot learning
dc.type: Thesis
dcterms.abstract: The advancement of artificial intelligence (AI) has significantly accelerated computer vision (CV) research in industrial manufacturing, particularly the use of AI-based models to detect and classify surface defects during product inspection. However, existing models often underperform in practice because they neglect the characteristics of real defects. This study addresses three key issues: the underutilization of large-scale normal samples, the difficulty of learning from only a few defective samples, and the need for robust detection when products or inspection standards change. While industrial settings provide abundant normal samples, using them effectively for training remains difficult. Humans can detect and classify defects from minimal examples, but deep learning requires substantial labeled data, which is often infeasible to collect. Finally, current models struggle to generalize to new products and standards because of the vast diversity and irregularity of industrial products.
dcterms.abstract: First, an unsupervised Reducing Biases (REB) method is proposed for industrial anomaly detection. The model learns from normal images alone to detect product surface defects. A self-supervised task fine-tunes the pre-trained model via the DefectMaker strategy, which generates diverse synthetic defects. In addition, a local-density k-nearest neighbors (LDKNN) method models the patterns and distribution of normal images, reduces density bias in the feature space, and enables effective anomaly detection.
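To make the LDKNN idea concrete, below is a minimal sketch of a density-normalized k-nearest-neighbor anomaly score computed over a memory bank of normal patch features. Normalizing the raw k-NN distance by the neighbors' own local density is an illustrative assumption about how density bias might be reduced, not the thesis's exact formulation.

```python
# A minimal sketch, assuming a density-normalized k-NN anomaly score;
# the exact LDKNN formulation in the thesis may differ.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def ldknn_scores(train_feats, test_feats, k=5):
    """Score test patch features against a memory bank of normal features."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)

    # Local-density proxy: mean distance from each stored normal feature
    # to its own k nearest neighbors in the bank (self excluded).
    d_self, _ = nn.kneighbors(train_feats, n_neighbors=k + 1)
    local_density = d_self[:, 1:].mean(axis=1)           # shape (N_train,)

    # Raw k-NN distances from each test feature to the memory bank.
    d_test, idx = nn.kneighbors(test_feats)              # shape (N_test, k)

    # Normalize by the neighbors' local density so dense regions of the
    # normal feature space do not dominate the anomaly score.
    return (d_test / (local_density[idx] + 1e-8)).mean(axis=1)
```

A high score then indicates a patch that is far from the normal bank even after accounting for how tightly packed its nearest normal neighbors are.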
dcterms.abstract: Second, this research proposes a novel multi-view region context (MVREC) framework for few-shot defect multi-classification (FSDMC), focusing on generalization and contextual feature extraction. MVREC enhances defect classification by integrating the pre-trained AlphaCLIP model for general feature extraction and by applying a region-context framework with multi-view context augmentation. The framework also includes few-shot Zip-Adapter(-F) classifiers that cache support-set features for efficient few-shot classification. To validate MVREC, this study introduces MVTec-FS, a new FSDMC benchmark with instance-level mask annotations for 46 defect categories, and demonstrates the framework's performance through comprehensive experiments on multiple datasets.
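The caching idea behind such adapters can be illustrated with a training-free classifier: support features are stored as keys, their one-hot labels as values, and a query is classified by similarity-weighted label aggregation. The cosine similarity and the beta sharpening term below follow the common Tip-Adapter-style recipe and are assumptions for illustration; the thesis's Zip-Adapter(-F) design may differ (the -F variant presumably makes the cache learnable).

```python
# A hedged sketch of a cache-based few-shot classifier; the similarity
# function and beta parameter are illustrative assumptions.
import torch
import torch.nn.functional as F

def build_cache(support_feats, support_labels, num_classes):
    """Cache L2-normalized support features as keys, one-hot labels as values."""
    keys = F.normalize(support_feats, dim=-1)                    # (N, D)
    values = F.one_hot(support_labels, num_classes).float()      # (N, C)
    return keys, values

def cache_logits(query_feats, keys, values, beta=5.0):
    """Training-free few-shot logits from similarity to the cached support set."""
    q = F.normalize(query_feats, dim=-1)                         # (M, D)
    affinity = q @ keys.T                                        # cosine sims
    weights = torch.exp(-beta * (1.0 - affinity))                # sharpen
    return weights @ values                                      # (M, C) logits
```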
dcterms.abstract: Third, to address fabric inspection challenges in the dynamic clothing industry, where new fabric types and inspection criteria continually emerge, this research proposes a Continual Learning-based Fabric Inspection Model (CLFIM). Traditional models fail to adapt to unseen patterns or defect categories when fabric patterns or inspection criteria shift. CLFIM, trained sequentially on specific fabrics, learns pattern and inspection-criteria contexts so that it can adapt to new tasks. Experiments on three complex-patterned fabrics with varying criteria show that the YOLOv8-based CLFIM effectively handles evolving inspection requirements.
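As a rough illustration of sequential adaptation, the sketch below fine-tunes a single YOLOv8 detector over a series of fabric tasks using the Ultralytics training API. The dataset YAML paths and the idea of mixing a small replay buffer of earlier fabrics into each task's data are hypothetical stand-ins; CLFIM's actual mechanism for retaining pattern and criteria context is described in the thesis.

```python
# Illustrative sketch of sequential task adaptation with Ultralytics YOLOv8.
# Task list and dataset configs are hypothetical; CLFIM's continual-learning
# strategy may differ from this naive fine-tuning loop.
from ultralytics import YOLO

# Each task pairs a fabric type with a dataset config; assume each config
# already mixes in a small replay buffer of earlier fabrics to limit
# catastrophic forgetting.
tasks = ["fabric_a.yaml", "fabric_b.yaml", "fabric_c.yaml"]  # hypothetical paths

model = YOLO("yolov8n.pt")  # start from a pre-trained detector
for task_yaml in tasks:
    # Fine-tune on the new fabric/criteria; weights carry over between tasks.
    model.train(data=task_yaml, epochs=50, imgsz=640)
```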
dcterms.abstract: Overall, this research introduces three approaches, REB, MVREC, and CLFIM, to address critical challenges: the underutilization of normal and defective samples, and the difficulty of detection when products or inspection criteria change. These models improve defect detection and classification across fabric types and other industrial products, and their use of self-supervised, few-shot, and continual learning broadens the scope of defect detection research.
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: ix, 124 pages : color illustrations
dcterms.issued: 2025
dcterms.LCSH: Quality control
dcterms.LCSH: Engineering inspection
dcterms.LCSH: Artificial intelligence
dcterms.LCSH: Computer vision
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis