Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/90947
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | -
dc.creator | Hu, J | -
dc.creator | Deng, W | -
dc.creator | Liu, Q | -
dc.creator | Lam, KM | -
dc.creator | Lou, P | -
dc.date.accessioned | 2021-09-03T02:35:34Z | -
dc.date.available | 2021-09-03T02:35:34Z | -
dc.identifier.issn | 1751-9659 | -
dc.identifier.uri | http://hdl.handle.net/10397/90947 | -
dc.language.iso | en | en_US
dc.publisher | Institution of Engineering and Technology | en_US
dc.rights | © 2021 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology. This is an open access article under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited. | en_US
dc.rights | The following publication Hu, J., Deng, W., Liu, Q., Lam, K. M., & Lou, P. (2021). Constructing an efficient and adaptive learning model for 3D object generation. IET Image Processing, 15(8), 1745-1758 is available at https://doi.org/10.1049/ipr2.12146 | en_US
dc.title | Constructing an efficient and adaptive learning model for 3D object generation | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1745 | -
dc.identifier.epage | 1758 | -
dc.identifier.volume | 15 | -
dc.identifier.issue | 8 | -
dc.identifier.doi | 10.1049/ipr2.12146 | -
dcterms.abstract | Representation learning and generative modelling are at the core of the 3D learning domain. By leveraging generative adversarial networks and convolutional neural networks for point-cloud representations, we propose a novel framework that directly generates 3D objects represented by point clouds. The novelties of the proposed method are threefold. First, generative adversarial networks are applied to 3D object generation in the point-cloud space, where the model learns the object representation from point clouds independently. We also propose a 3D spatial transformer network and integrate it into the generation model, improving its ability to extract and reconstruct features of 3D objects. Second, a point-wise approach is developed to reduce the computational complexity of the proposed network. Third, an evaluation system is proposed to measure the performance of our model across various object categories and evaluation methods; the error, defined as the difference between synthesized objects and raw objects, is quantitatively compared and is less than 2.8%. Extensive experiments on benchmark datasets show that the method has a strong ability to generate 3D objects in the point-cloud space, and that the synthesized objects differ only slightly from man-made 3D objects. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IET image processing, June 2021, v. 15, no. 8, p. 1745-1758 | -
dcterms.isPartOf | IET image processing | -
dcterms.issued | 2021-06 | -
dc.identifier.scopus | 2-s2.0-85101902772 | -
dc.identifier.eissn | 1751-9667 | -
dc.description.validate | 202109 bcvc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.pubStatus | Published | en_US
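The abstract above mentions integrating a 3D spatial transformer network into the generation model. The paper's exact architecture is not reproduced in this record; as a rough illustration of the general idea only, the following is a minimal PyTorch sketch of a PointNet-style T-Net that predicts a 3x3 alignment transform for an input point cloud. All layer widths, the class name, and the identity-bias trick are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: a PointNet-style 3D spatial transformer (T-Net).
# NOT the authors' network; it only illustrates learning a 3x3 transform
# that aligns a point cloud before feature extraction. Layer sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class TNet3d(nn.Module):
    """Predicts a 3x3 alignment matrix for an input point cloud."""
    def __init__(self):
        super().__init__()
        # Point-wise MLPs implemented as 1x1 convolutions over (B, 3, N).
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 9),
        )

    def forward(self, x):                 # x: (B, 3, N) point cloud
        feat = self.mlp(x)                # (B, 1024, N) point-wise features
        feat = feat.max(dim=2).values     # global max pool -> (B, 1024)
        m = self.fc(feat).view(-1, 3, 3)
        # Bias toward the identity so training starts near a no-op transform.
        return m + torch.eye(3, device=x.device)

# Usage: align a batch of point clouds before further feature extraction.
pts = torch.randn(8, 3, 2048)            # 8 clouds of 2048 points each
transform = TNet3d()(pts)                # (8, 3, 3) learned transforms
aligned = torch.bmm(transform, pts)      # apply transform to each cloud
```

The point-wise 1x1 convolutions also reflect the flavour of the "point-wise approach" the abstract credits with reducing computational complexity: each point is processed independently until the global pooling step.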
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
ipr2.12146.pdf | - | 3.69 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 72 (last week: 0; as of May 19, 2024)
Downloads: 24 (as of May 19, 2024)

Scopus™ citations: 2 (as of May 16, 2024)
Web of Science™ citations: 2 (as of May 16, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.