Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/112850
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Land Surveying and Geo-Informatics | - |
dc.creator | Cui, X | - |
dc.creator | Li, S | - |
dc.creator | Zhang, L | - |
dc.creator | Peng, L | - |
dc.creator | Guo, L | - |
dc.creator | Cao, X | - |
dc.creator | Chen, X | - |
dc.creator | Yin, H | - |
dc.creator | Shen, M | - |
dc.date.accessioned | 2025-05-09T06:12:41Z | - |
dc.date.available | 2025-05-09T06:12:41Z | - |
dc.identifier.uri | http://hdl.handle.net/10397/112850 | - |
dc.language.iso | en | en_US |
dc.publisher | MDPI AG | en_US |
dc.rights | Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US |
dc.rights | The following publication Cui, X., Li, S., Zhang, L., Peng, L., Guo, L., Cao, X., Chen, X., Yin, H., & Shen, M. (2025). Integrated Extraction of Root Diameter and Location in Ground-Penetrating Radar Images via CycleGAN-Guided Multi-Task Neural Network. Forests, 16(1), 110 is available at https://doi.org/10.3390/f16010110. | en_US |
dc.subject | Deep learning | en_US |
dc.subject | Ground-penetrating radar | en_US |
dc.subject | Root diameter | en_US |
dc.subject | Root location | en_US |
dc.title | Integrated extraction of root diameter and location in ground-penetrating radar images via CycleGAN-guided multi-task neural network | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.volume | 16 | - |
dc.identifier.issue | 1 | - |
dc.identifier.doi | 10.3390/f16010110 | - |
dcterms.abstract | The diameter of roots is pivotal for studying subsurface root structure geometry. Yet, directly obtaining these parameters is challenging due to their hidden nature. Ground-penetrating radar (GPR) offers a reproducible, nondestructive method for root detection, but estimating diameter from B-Scan images remains challenging. To address this, we developed the CycleGAN-guided multi-task neural network (CMT-Net). It comprises two subnetworks, YOLOv4-Hyperbolic Position and Diameter (YOLOv4-HPD) and CycleGAN. YOLOv4-HPD is obtained by adding a regression head for predicting root diameter to YOLOv4-Hyperbola, enabling it to simultaneously locate root objects and estimate root diameter accurately. CycleGAN addresses the lack of a real root diameter training dataset for the YOLOv4-HPD model by migrating the field-measured data domain to the simulated data domain without altering root diameter information. We used simulated and field data to evaluate the model, demonstrating its effectiveness in estimating root diameter. This study marks the first construction of a deep learning model for fully automatic root location and diameter extraction from GPR images, achieving an “Image Input–Parameter Output” end-to-end pattern. The model’s validation across various dataset scales opens the way for estimating other root attributes. | - |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | Forests, Jan. 2025, v. 16, no. 1, 110 | - |
dcterms.isPartOf | Forests | - |
dcterms.issued | 2025-01 | - |
dc.identifier.scopus | 2-s2.0-85216019740 | - |
dc.identifier.eissn | 1999-4907 | - |
dc.identifier.artn | 110 | - |
dc.description.validate | 202505 bcch | - |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | The National Natural Science Foundation of China, grant number 42271329 | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | CC | en_US |
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
forests-16-00110-v2.pdf | | 14.79 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.