Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/117855
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | School of Fashion and Textiles | - |
| dc.contributor | Research Centre of Textiles for Future Fashion | - |
| dc.creator | Qu, H | - |
| dc.creator | Zhou, Y | - |
| dc.creator | Mok, PY | - |
| dc.creator | Flatz, G | - |
| dc.creator | Li, L | - |
| dc.date.accessioned | 2026-03-05T07:57:01Z | - |
| dc.date.available | 2026-03-05T07:57:01Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/117855 | - |
| dc.language.iso | en | en_US |
| dc.publisher | MDPI AG | en_US |
| dc.rights | Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US |
| dc.rights | The following publication Qu, H., Zhou, Y., Mok, P. Y., Flatz, G., & Li, L. (2025). Efficient and Effective Detection of Repeated Pattern from Fronto-Parallel Images with Unknown Visual Contents. Signals, 6(1), 4 is available at https://doi.org/10.3390/signals6010004. | en_US |
| dc.subject | Convolutional neural network (CNN) | en_US |
| dc.subject | Deep feature selection | en_US |
| dc.subject | Fronto-parallel images | en_US |
| dc.subject | Repeated pattern | en_US |
| dc.subject | Template matching | en_US |
| dc.title | Efficient and effective detection of repeated pattern from fronto-parallel images with unknown visual contents | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 6 | - |
| dc.identifier.issue | 1 | - |
| dc.identifier.doi | 10.3390/signals6010004 | - |
| dcterms.abstract | The effective detection of repeated patterns from inputs of unknown fronto-parallel images is an important computer vision task that supports many real-world applications, such as image retrieval, synthesis, and texture analysis. A repeated pattern is defined as the smallest unit capable of tiling the entire image, representing its primary structural and visual information. In this paper, a hybrid method is proposed that overcomes the drawbacks of both traditional and existing deep learning-based approaches. The new method leverages deep features from a pre-trained Convolutional Neural Network (CNN) to estimate initial repeated pattern sizes and refines them using a dynamic autocorrelation algorithm. Comprehensive experiments are conducted on a new dataset of fronto-parallel textile images as well as another set of real-world non-textile images to demonstrate the superiority of the proposed method. The accuracy of the proposed method is 67.3%, which is 20% higher than that of the baseline method, at only 11% of the baseline's time cost. The proposed method has been applied to textile design, and it can be adapted to other applications. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Signals, Mar. 2025, v. 6, no. 1, 4 | - |
| dcterms.isPartOf | Signals | - |
| dcterms.issued | 2025-03 | - |
| dc.identifier.scopus | 2-s2.0-105001256389 | - |
| dc.identifier.eissn | 2624-6120 | - |
| dc.identifier.artn | 4 | - |
| dc.description.validate | 202603 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | The work described in this paper was supported by grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Nos. 152112/19E and 15602323). The work was also supported, in part, by The Hong Kong Polytechnic University (Project Nos. P0049355/CD95 and P0051330/BDVH). | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
| Appears in Collections: | Journal/Magazine Article | |
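The abstract above mentions refining CNN-based size estimates with an autocorrelation step: the repeated pattern is the smallest unit that tiles the image, so its size shows up as the first strong peak in the image's autocorrelation along each axis. The following is a minimal illustrative sketch of that idea only, not the authors' algorithm; the function name, the 0.5 peak threshold, and the synthetic test signal are all assumptions made for the example.

```python
import numpy as np

def estimate_period(signal):
    """Estimate the repeat period of a 1-D signal via autocorrelation.

    Returns the lag of the first significant autocorrelation peak,
    i.e. a candidate repeated-pattern size along that axis.
    (Illustrative sketch; the 0.5 threshold is an arbitrary choice.)
    """
    x = signal - signal.mean()
    n = len(x)
    # Full autocorrelation; keep non-negative lags only.
    ac = np.correlate(x, x, mode="full")[n - 1:]
    ac = ac / ac[0]  # normalize so lag 0 has correlation 1
    # First local maximum after lag 0 with correlation above the threshold.
    for lag in range(1, n - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1] and ac[lag] > 0.5:
            return lag
    return None

# Synthetic "fronto-parallel" texture row that repeats every 16 pixels.
tile = np.sin(np.linspace(0, 2 * np.pi, 16, endpoint=False))
row = np.tile(tile, 8)
print(estimate_period(row))  # first autocorrelation peak at lag 16
```

In the paper's setting this search runs over a narrow range of candidate sizes seeded by the CNN feature estimate, rather than over all lags as in this sketch.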
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| signals-06-00004.pdf | | 7.79 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.