Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/110376
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Land Surveying and Geo-Informatics | - |
| dc.creator | Fekry, R | - |
| dc.creator | Yao, W | - |
| dc.creator | Sani-Mohammed, A | - |
| dc.creator | Amr, D | - |
| dc.date.accessioned | 2024-12-03T03:34:15Z | - |
| dc.date.available | 2024-12-03T03:34:15Z | - |
| dc.identifier.issn | 2194-9042 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/110376 | - |
| dc.description | ISPRS Geospatial Week 2023, 2–7 September 2023, Cairo, Egypt | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | Copernicus Publications | en_US |
| dc.rights | © Author(s) 2023. This work is distributed under the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/deed.en). | en_US |
| dc.rights | The following publication Fekry, R., Yao, W., Sani-Mohammed, A., and Amr, D.: INDIVIDUAL TREE SEGMENTATION FROM BLS DATA BASED ON GRAPH AUTOENCODER, ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., X-1/W1-2023, 547–553 is available at https://dx.doi.org/10.5194/isprs-annals-X-1-W1-2023-547-2023. | en_US |
| dc.subject | Lidar | en_US |
| dc.subject | Individual tree segmentation | en_US |
| dc.subject | Backpack laser scanning | en_US |
| dc.subject | Graph neural network | en_US |
| dc.subject | Graph autoencoder | en_US |
| dc.title | Individual tree segmentation from BLS data based on graph autoencoder | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 547 | - |
| dc.identifier.epage | 553 | - |
| dc.identifier.volume | X-1/W1 | - |
| dc.identifier.doi | 10.5194/isprs-annals-X-1-W1-2023-547-2023 | - |
| dcterms.abstract | In the last two decades, light detection and ranging (LiDAR) has been widely employed in forestry applications. Individual tree segmentation is essential to forest management because it is a prerequisite to tree reconstruction and biomass estimation. This paper introduces a general framework to extract individual trees from the LiDAR point cloud by casting the task as a graph link prediction problem. First, an undirected graph is generated from the point cloud based on K-nearest neighbors (KNN). Then, this graph is used to train a convolutional autoencoder that extracts the node embeddings needed to reconstruct the graph. Finally, the individual trees are defined by the separate sets of connected nodes of the reconstructed graph. A key advantage of the proposed method is that no prior knowledge of tree or forest structure is required. Seven sample plots from a plantation forest with poplar and dawn redwood species were employed in the experiments. Although the precision of the experimental results reaches 95 % for poplar and 92 % for dawn redwood trees, the method still requires further investigation on natural forests with mixed tree species. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | ISPRS annals of the photogrammetry, remote sensing and spatial information sciences, 2023, v. X-1/W1, p. 547-553 | - |
| dcterms.isPartOf | ISPRS annals of the photogrammetry, remote sensing and spatial information sciences | - |
| dcterms.issued | 2023 | - |
| dc.identifier.isi | WOS:001185683800070 | - |
| dc.identifier.eissn | 2194-9050 | - |
| dc.description.validate | 202412 bcrc | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Self-funded: "The author(s) did not receive specific funding for this work." | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
Appears in Collections: Journal/Magazine Article
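Below is a minimal, hypothetical sketch of the pipeline the abstract describes: a KNN graph is built over the point cloud, a graph convolutional autoencoder (GAE) is trained to reconstruct the graph's edges through an inner-product link-prediction decoder, and each individual tree is read off as a connected component of the reconstructed graph. Everything here is an illustrative assumption rather than the authors' implementation: the layer sizes, the training loop, the random stand-in point cloud, and the 0.9 edge-score threshold.

```python
# Hypothetical sketch only: a KNN graph over the points, a two-layer GCN
# encoder, an inner-product decoder trained for link prediction, and
# connected components of the reconstructed graph as candidate trees.
import numpy as np
import torch
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree


def knn_graph(points: np.ndarray, k: int = 10) -> np.ndarray:
    """Dense symmetric adjacency matrix of the k-nearest-neighbour graph."""
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)  # column 0 is the point itself
    adj = np.zeros((n, n), dtype=np.float32)
    adj[np.repeat(np.arange(n), k), idx[:, 1:].ravel()] = 1.0
    return np.maximum(adj, adj.T)  # make the graph undirected


def gcn_norm(adj: np.ndarray) -> torch.Tensor:
    """Symmetric GCN normalisation D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + np.eye(len(adj), dtype=np.float32)
    d = 1.0 / np.sqrt(a.sum(axis=1))
    return torch.from_numpy(a * d[:, None] * d[None, :])


points = np.random.rand(200, 3).astype(np.float32)  # stand-in for BLS points (x, y, z)
adj = knn_graph(points)
a_hat = gcn_norm(adj)
x = torch.from_numpy(points)          # raw coordinates as node features
target = torch.from_numpy(adj)

w1 = torch.randn(3, 32, requires_grad=True)   # encoder weights (sizes are assumptions)
w2 = torch.randn(32, 16, requires_grad=True)
opt = torch.optim.Adam([w1, w2], lr=1e-2)
for _ in range(200):
    h = torch.relu(a_hat @ x @ w1)    # GCN layer 1
    z = a_hat @ h @ w2                # GCN layer 2 -> node embeddings
    logits = z @ z.T                  # inner-product decoder scores every edge
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Keep KNN edges whose reconstructed score clears a threshold (0.9 is an
# assumption); each connected component is then one candidate tree.
with torch.no_grad():
    kept = (torch.sigmoid(z @ z.T) > 0.9).numpy() & (adj > 0)
n_trees, labels = connected_components(csr_matrix(kept), directed=False)
print(f"{n_trees} candidate trees; point i belongs to tree labels[i]")
```

Keeping the adjacency dense makes the sketch self-contained in NumPy, SciPy, and PyTorch; a real BLS plot with millions of points would call for sparse operations or a graph library such as PyTorch Geometric.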
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| isprs-annals-X-1-W1-2023-547-2023.pdf | | 1.31 MB | Adobe PDF |